Author: Joseph Mwaluko

  • How to Install Python on Your Windows 10/11

    How to Install Python on Your Windows 10/11

Before you can write programs with Python, it must be installed and correctly configured on your computer. This post will guide you through installing Python and writing your first program in the simple, attractive IDLE interface, using the print function and real-life examples.

Next, let’s begin with the following seven simple steps.

    The Process of downloading and installing Python from a credible source.

    Downloading Python on Windows.
A simple, easy-to-follow, step-by-step way to download Python on Windows 10/11.

Downloading and installing the latest Python release on your Windows computer securely is easy. Just follow the simple steps below.

    Step 1: Search for software setup using your browser

Open any of your favorite browsers (Google Chrome, Microsoft Edge, Opera Mini, Apple Safari, or Brave) and search “Python download for Windows.” You will see search results like those in the screenshot below.

Web search results you get in a Google Chrome browser.

    Click the first link for the Python Organization downloads, which will take you to the next web page in step 2 below. Alternatively, click this link: https://www.python.org/downloads/ to go directly to Python’s official downloads page.

    Step 2: Downloading the Python setup

    Once the next page opens after step 1 above, click the “Download Python” button below the topic “Download the latest version for Windows” circled on the screenshot below.

Python’s official downloads page: click the “Download Python 3.12.2” button.

    You will be asked to save the setup file. Please choose the location on your computer where you want to keep it. For example, I stored the Python file in the Downloads folder for this guide, as shown in the screenshot below.

Saving the Python setup file in your computer’s Downloads folder.

The download should then start, with progress shown on your browser’s download icon.

    Download in progress

Stay patient while the setup downloads. The download time depends on your internet speed: with a fast connection it finishes quickly, with a slow one it takes longer.

    Step 3: Open the Python setup to install

Next, after the setup finishes downloading, it is time to install it. Open the folder where you saved the setup file, then double-click the setup icon to start the installation process. The Python setup installation panel opens as shown below.

    If you look at position 1 on the screenshot, the version to be installed is 3.12.2, which is 64-bit. In addition, below the version topic is an installation statement guiding us on what we should choose next.

    The installation wizard instructions on your screen.

Make sure you select the two checkboxes circled in red and labeled 2 in the screenshot above. Once ticked, each checkbox is highlighted in blue with a small check mark in the middle, as in the image below.

Select the two checkboxes at the bottom of the installation wizard.

    The two options ensure you will not experience challenges using the Python environments after installation.

    Step 4: Starting the installation process

Next, click the first option on the install menu, “Install Now.”

Click the “Install Now” option to start the installation.

    Step 5: Managing the Microsoft User Account Control settings

Next, if Microsoft User Account Control is enabled on your computer, you’ll be prompted to allow changes to your hard drive. Once the prompt appears, do the following.

    On the “User Account Control” prompt menu on your screen asking “if you want to allow the app to make changes to your device”, click the “Yes” button.

Select “Yes” when prompted to allow the application to make changes to your hard disk.

    Immediately, the installation process starts.

The Python setup installation process and progress.

    Wait until the setup completes the installation.

    Step 6: Finishing the Python setup installation successfully

After that, the installation progress changes to “Setup was successful” at the top, and a “Close” button becomes active at the lower right of the window. Click it to finish the installation.

The setup completes the installation successfully.

    Step 7: Verifying if the setup was correctly installed

After closing the installation wizard, the next activity is to confirm that Python was installed successfully. The quickest way to verify is with the Windows 10/11 command-line terminal, known as cmd.

If you are not sure how to open the cmd terminal, follow the method below to access it for verification.

    First, open the cmd interface by going to the search button (on your taskbar).

Click the Windows search button on the taskbar.

Secondly, type “cmd” in the search bar (position 1), and the best match will appear in the list below it.

Then, choose either the first option (number 2) or the “Open” option on the right side of the menu (number 3). After selecting one, the command-line interface opens in a moment.

The Windows 10 command-line interface used to verify the installation.

Thirdly, once it opens, type the command “python --version” and press “Enter” on the keyboard to display the version of Python that has been installed. The results of executing the command are as follows.

The command used to check the installed Python version, and the version it reports.

    Hurray! You have Python 3.12.2 installed on your Windows operating system machine and are ready to use it.

Note: When I first wrote this guide, the latest version was Python 3.12.2. A newer version may be available by the time you read this, but don’t worry: the same steps always install the latest release, so your installation journey will be seamless.
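If you prefer to verify from inside Python itself rather than with `python --version`, the standard library exposes the same information. A small sketch (works on any Python 3):

```python
# Check the interpreter version from within Python itself.
import sys

print(sys.version.split()[0])        # e.g. "3.12.2", whichever release you installed
print(sys.version_info >= (3, 12))   # True when Python 3.12 or newer is running
```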

    Now, let’s access and use the Python IDE to write a program for the first time.

    Writing your first Python program code – “Hello World” with Python IDLE

By default, installing Python also installs IDLE, the Integrated Development and Learning Environment, which is the primary IDE for the Python programming language. (The acronym IDE stands for Integrated Development Environment.)

So, how do you launch and use Python’s IDE to write your first program? The process is simple and effective. Are you excited to start writing Python programs? I’m pretty sure you are; therefore, let’s dive in.

    How do you open the IDLE and start writing program codes?

I know you are eager to write your very first program, right? I know the feeling because I’ve been in the same situation many times. Trust me, waiting for something new, especially something you are excited about, is not easy.

However, before we begin, let me share one important tip about the Integrated Development and Learning Environment. It will help you get the most out of the Python IDLE coding environment and handle code of any size or complexity as your projects grow.

    Pro Tip

If your code is small, write and test it directly in the Python Shell window when it opens. The process is simple, and the output appears just below the code lines. For example, see the code lines and outputs in the same window below.

However, if you want to write a long piece of code, go to “File” at the top left and create a new file from the menu. The new window opens with an advanced menu, enabling you to type and run long code directly. With that said, let us move on to even more interesting stuff.

    Are you ready to start writing your first Python Programming Language program? I know you are, but before that, let’s look at how to launch the Python IDLE on our Windows computers.

    2 Ways to access and open the Python IDLE interactive shell

There are two main ways to open the Python coding shell: through the Windows Start menu, or by typing in the search bar after clicking the taskbar’s search button.

    A. Opening the IDLE shell using the Windows search button

    i. Click the search button next to the Windows icon.

    ii. Next, type IDLE in the search bar (position 1 on the screenshot below).

    Searching IDLE in the search bar.

iii. Then, in the search results, click either the best-match result containing the name (position 2) or the “Open” button on the right side of the menu (position 3).

    iv. After clicking, the IDLE will launch in a moment, and you can begin writing your first program. Therefore, you should start writing on the first line where the cursor is blinking.

Python IDLE opens immediately with the cursor blinking on the first line.

v. Finally, when you finish typing the program, press “Enter” to execute it. For example, I typed the code line print("Hello! World") on the first line.

Start typing your first line of code where the cursor is blinking.

    After that, I pressed the enter button on my keyboard, and the following output was displayed on the next line.

First program output: Hello! World.

    Note: The interactive shell only executes small code pieces and displays the results below the code lines.

Examples of simple code lines.
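To illustrate the kind of small code pieces the note refers to, here are a few one-liners you could type into the shell (the expected outputs are shown as comments):

```python
# One-liners well suited to the interactive shell:
print("Hello! World")            # prints: Hello! World
print(7 + 3)                     # prints: 10
name = "Python"
print("I am learning " + name)   # prints: I am learning Python
```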

    B. Opening the Python IDLE shell using the Windows start menu

Click the Windows Start button on your desktop to open the Start menu.

The Windows Start menu.

Next, in the Start menu, click the “All apps” button to open the full menu, which is arranged in alphabetical order.

Next, scroll through the menu until you find the newly installed Python entry under the items starting with the letter “P.”

The newly installed Python folder and menu.

    After that, click it or the extension on the right side to open all the applications under the Python folder menu.

Then, click the first item on the menu, IDLE (Python 3.13 64-bit). The IDLE Shell will open in a new window.

    Now that you know how to access and launch the IDLE shell, let us open it, write, and execute a long code.

    How to create and execute long code pieces using the IDLE shell

    Earlier in our guide, I mentioned that you can write and execute small pieces of code directly after opening the interactive shell. What if you want to write a long piece of code or write small ones in a more advanced environment? Here is how to achieve it in simple terms.

First, open the IDLE shell using either of the methods discussed above. Then click the “File” menu at the top left of the shell window and select the “New File” option from the menu that appears.

Click the “File” menu, then choose “New File.”

    Secondly, when the new file opens, start typing your code.

Finally, after creating your code, save the file before executing it: on the top menu, click “File” and choose “Save” from the drop-down menu. The program will not run if you execute the file before saving it.

    Next, choose where to save the program file on your Windows computer. In this case, I saved the file on the desktop.

Also, remember to save the file with the Python file extension (.py). This lets your computer recognize and execute the program when you run it in the shell.

    Note: Once you get used to shell programming, you can use the shortcuts displayed on each menu. For example, you only need to press the “Ctrl + S” combination to save the program.

Once the file is saved, head to the top menu and click “Run.” Then choose the first option, “Run Module,” from the drop-down menu.

    Alternatively, you can press the F5 key on the keyboard to run the program directly.
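As an example of a program worth saving to its own .py file, here is a hypothetical script (the file name greeting.py and its contents are my own illustration, not from the guide) that you could type in the editor, save, and run with F5:

```python
# greeting.py - an illustrative script to save via File > Save and run with F5.

def greet(name):
    """Return a greeting for the given name."""
    return "Hello, " + name + "!"

if __name__ == "__main__":
    # Greet a couple of people, one per line.
    for person in ["World", "Python learner"]:
        print(greet(person))
```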

    Summing up everything!

This guide has covered everything important about downloading and using Python. By following it step by step, you install a genuine copy of the programming language on your Windows 10 or 11 computer. The procedure is simple: search for the setup online, download it, then open and install it.

Therefore, you can follow this guide in your favorite browser to download and install the tool successfully. Secondly, you’re also guided on accessing the IDLE coding shell and writing your first code. You can access it through:

1. the Windows Start menu: open the “All apps” menu and scroll until you see the Python folder; or
2. the Search button next to the Start menu: type the shell name IDLE and select it from the search results.

Furthermore, the guide has shown how to verify that the installation succeeded and which version of Python is installed. We have also learned how to write both short and long pieces of code in the IDLE shell. With this guide, you’ve kickstarted your Python programming journey with us. More tutorials, guides, and courses are coming soon; stay tuned for more learning content.

  • How to Develop a Simple Web Application for Your Data Science with Python in 2024

    How to Develop a Simple Web Application for Your Data Science with Python in 2024

Are you looking for a guide on how to build a web application for your data analysis project? Or are you just looking for a data analysis and machine learning project to inspire your creativity? If your answer is “Yes,” this step-by-step guide is for you. You’ve come to the right place, where we guide, inspire, and motivate people to venture into data science and ML (short for Machine Learning). That said, let’s move on with the project.

    The application landing page:

    Upon opening the application on the web, you will see the app name and tagline. They are listed below and shown in the screenshot after the name and tagline.

    Name:

    The Ultimate Career Decision-Making Guide: – Data Jobs

    Tagline:

    Navigating the Data-Driven Career Landscape: A Deep Dive into Artificial Intelligence (AI), Machine Learning (ML), and Data Science Salaries.

    Project introduction

    This is the second part of the project. As you can remember from the introductory part of the project, I divided the project into two parts. In phase one, I worked on the End-to-End data analysis part of the project using the Jupyter Notebook. As mentioned earlier, it formed the basis for this section – web application development.

Thus, to recap: I performed the exploratory data analysis (EDA) process in step 1 and data preprocessing in step 2. Finally, I visualized the data in step 3. From those visualizations, I defined nine key analytical dimensions and built this web application to share my insights with you and the world.

    I recommend you visit the application using the link at the end of this guide.

    What to expect in this Web Application building guide.

A step-by-step guide on how to design and develop a web application for your data analysis and machine learning project. By the end of the guide, you’ll have seen how I used the Python Streamlit library and a GitHub repository to create the application, and gained the skills, confidence, and expertise to build great web apps using free, readily available tools.

    The Challenge

    Today, there are different challenges or issues that hinder people from venturing into the fields of Artificial Intelligence (AI), Machine Learning (ML), and Data Science. One primary challenge individuals face is understanding the nuanced factors that influence career progression and salary structures within these fields.

Currently, the demand for skilled professionals in AI, ML, and Data Science is high, but so is the competition. For example, 2024 statistics show that job postings in data engineering have seen a 98% increase, while AI role postings have surged 119% in two years. Additionally, machine learning jobs are expected to grow by 40%, or about 1 million new roles, over the next five years.

Therefore, these industry and market trends show increasing demand for AI, ML, and Data Science skills. That growth needs more experts and professionals, so it pays to act now and take advantage of the trends. But how do you do that? The question is answered, directly and indirectly, in the subsequent sections of this guide.

    Navigating this landscape requires a comprehensive understanding of how various elements such as years of experience, employment type, company size, and geographic location impact earning potential. That’s why I developed this solution for people like you.

    The Solution: Web Application

    To address these challenges, the project embarks on a journey to demystify the intricate web of factors contributing to success in AI, ML, and Data Science careers. By leveraging data analytics and visualization techniques, we aim to provide actionable insights that empower individuals to make informed decisions about their career trajectories. Keep it here to learn more. Next, let’s look at the project objective.

    The project’s phase 2 objective.

    The goal is to create a simple web application using the Python Streamlit library to uncover patterns and trends that can guide aspiring professionals. The application will be based on comprehensive visualizations highlighting salary distributions prepared in phase 1 of the project.

    The project’s phase 2 implementation.

Web Application Dashboard: a web application built with the Python Streamlit library and a GitHub repository.

Web Application: the Streamlit library in Python

Streamlit is a Python library that simplifies building interactive web applications for data science and machine learning projects, and I used it here for a data science project. It allowed me, as the developer, to create dynamic and intuitive user interfaces directly from Python scripts, without needing additional HTML, CSS, or JavaScript.

    With Streamlit, users can easily visualize data, create interactive charts, and incorporate machine learning models into web applications with minimal code. For example, I used several Python functions in this project to generate insights from visualizations and present them in a simple web application.

    This library is particularly beneficial for data scientists and developers who want to share their work or deploy models in a user-friendly manner, enabling them to prototype ideas and create powerful, data-driven applications quickly. This statement describes my project’s intentions from the beginning to the end.

    Creating the web application in Microsoft VS Code.

To achieve the project’s objective, I used six precise steps. Let us go through each of them in detail in the next section.

    Step 1: Importing the necessary libraries to build the web application

    The first thing to do is import the Python libraries required to load and manipulate data.

    The libraries imported for the project.

    These are the same libraries used in the end-to-end data analysis process in phase 1 of the project. It’s important to note that Streamlit has been added to this phase.

    Step 2: Setting the page layout configurations

Next, I designed the application interface using the code snippet below. I defined and set the application name, loaded the homepage image, and created the “Targeted Audience” and “About the App” buttons.
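The snippet itself appears as a screenshot in the original post. As a rough sketch of what such a layout cell can look like with the standard Streamlit API (the titles, captions, and button texts here are illustrative assumptions, not my exact code):

```python
import streamlit as st

# Illustrative layout sketch; wording is assumed, not the app's exact code.
st.set_page_config(
    page_title="The Ultimate Career Decision-Making Guide: Data Jobs",
    layout="wide",
)
st.title("The Ultimate Career Decision-Making Guide: Data Jobs")
st.caption("Navigating the Data-Driven Career Landscape")

# The two homepage buttons described above.
if st.button("Targeted Audience"):
    st.write("Aspiring AI, ML, and Data Science professionals.")
if st.button("About the App"):
    st.write("Career insights drawn from the phase 1 salary analysis.")
```

Running `streamlit run app.py` on a file like this renders the page in the browser.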

    Step 3: Loading the engineered data.

After creating the web application layout, I proceeded to load the data along with the newly engineered features. The two code snippets below show how to load data in Streamlit and ensure that it stays identical every time it is loaded.
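The snippets are shown as screenshots in the original post. The core of such a loader, sketched in plain pandas (in the app, the function would additionally be wrapped with Streamlit’s `@st.cache_data` decorator so the CSV is fetched once per session and reused across reruns):

```python
import pandas as pd

# Data source named later in this guide; the function name is my own choice.
SALARIES_CSV = "https://ai-jobs.net/salaries/download/salaries.csv"

# In the Streamlit app, decorate this with @st.cache_data so the dataset
# is downloaded once per session instead of on every page rerun.
def load_data(source=SALARIES_CSV):
    """Read the salaries dataset into a pandas DataFrame."""
    return pd.read_csv(source)
```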

Step 4: Creating the web application “Raw Data” & “Warning: User Cases” display buttons

    The “Raw Data” button was designed to ensure a user can see the engineered dataset with the new features on the Web Application. Similarly, the “Warning: User Cases” button was created to warn the user about the application’s purpose and its limitations.
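A hypothetical sketch of those two buttons with the standard Streamlit API (the placeholder DataFrame and warning text are my own illustrations, not the app’s actual data or wording):

```python
import pandas as pd
import streamlit as st

# Placeholder frame standing in for the engineered dataset.
df = pd.DataFrame({"job_title": ["Data Scientist"], "salary_in_usd": [120000]})

if st.button("Raw Data"):
    st.dataframe(df)   # let the user inspect the engineered dataset
if st.button("Warning: User Cases"):
    st.warning("Illustrative warning text; see the deployed app for the real note.")
```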

    Step 5: Insights generation and presentation in the Web Application.

    There were nine insights and recommendations. The code snippets below show how each insight was developed. Visit the application to view them by clicking the link given below.

    Step 6: Web application user appreciation note

    After exploring the 9 insights, the summary section congratulates the App user. Additionally, it tells them what they have done and gained by using my application.

    Finally, it bids them goodbye and welcomes them back to read more on the monthly updated insights.

    Key analytical dimensions in the web application

    Building a web application.

Based on the project phase 1 data visualizations, I generated the nine insights listed below. Each analytical dimension is accompanied by the code snippet developed to render its insight in the web application interface. After reading them, I recommend you click the link at the end of the post to visit the deployed application and confirm what the Streamlit library, a simple and freely available tool, can do.

    1. Employment Type: Unraveling the nuances of salaries in full-time, part-time, contract, or freelance roles.

    2. Work Years: Examining how salaries evolve over the years.

    3. Remote Ratio: Assessing the influence of remote work arrangements on salaries.

    4. Company Size: Analyzing the correlation between company size and compensation.

    5. Experience Level: Understanding the impact of skill proficiency on earning potential.

    6. Highest Paid Jobs: Exploring which job category earns the most.

    7. Employee Residence Impacts on Salary: Investigating the salary (KES) distribution based on the country where the employee resides.

    8. Company Location: Investigating geographical variations in salary structures.

    9. Salary_in_KES: Standardizing salaries to a common currency for cross-country comparisons.
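As an illustration of how such a dimension can be computed, the “Experience Level” comparison reduces to a pandas groupby aggregation. A sketch with made-up numbers (the real app uses the full salaries dataset and Streamlit charts):

```python
import pandas as pd

# Toy rows standing in for the real salaries dataset (values are invented).
df = pd.DataFrame({
    "experience_level": ["EN", "MI", "SE", "SE", "EX"],
    "salary_in_usd":    [60000, 90000, 140000, 150000, 200000],
})

# Mean salary per experience level, highest first - the shape of insight #5.
by_level = (df.groupby("experience_level")["salary_in_usd"]
              .mean()
              .sort_values(ascending=False))
print(by_level)
```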

    Conclusion

    In conclusion, by examining the critical analytical dimensions, the project seeks to provide a nuanced perspective on the diverse factors shaping salaries in the AI, ML, and Data Science sectors. Armed with these insights, individuals can navigate their career paths with a clearer understanding of the landscape, making strategic decisions that enhance their success in these dynamic and high-demand fields.

    I recommend you access the guidelines with an open mind and not forget to read the user case warning. Do not wait any longer; access “The Ultimate Career Decision-Making Guide: – Data Jobs” web application and get the best insights.

    The Streamlit Application Link: It takes you to the final application deployed using the Streamlit Sharing platform – Streamlit.io. https://career-transition-app-guide–data-jobs-mykju9yk46ziy4cagw9it8.streamlit.app/

  • End-to-End Data Analysis Project with Source Codes: Your Ultimate Guide

    End-to-End Data Analysis Project with Source Codes: Your Ultimate Guide

    End-to-End Data Analysis Project

    Are you looking for an End-to-End Data Analysis Project with source codes to inspire and guide you in your project? Look no further because you have come to the right place. This project is an excellent step-by-step tutorial to help you accomplish your data science and machine learning (ML) project. Therefore, its originality and uniqueness will greatly inspire you to innovate your own projects and solutions.

    1.0. Introduction

In the ever-evolving landscape of technology, the allure of Artificial Intelligence (AI), Machine Learning (ML), and Data Science has captivated the ambitions of professionals seeking career transitions or skill advancements.

    However, stepping into these domains has its challenges. Aspiring individuals like me often confront uncertainties surrounding market demands, skill prerequisites, and the intricacies of navigating a competitive job market. Thus, in 2023, I found myself in a data science and machine learning course as I tried to find my way into the industry.

    1.1 Scenario Summary

Generally, at the end of any learning program or course, a learner has to demonstrate that they meet the criteria for successful completion, usually by way of a test, exam, or project deliverable. The final assessment serves certification and graduation, releasing the learner to the job market. Certification then allows one to practice the acquired skills in real-life settings such as company employment or startups.

Applying that concept to myself, I certified at the end of my three-month intermediate-level course in 2023. The course, “Data Science and Machine Learning with the Python Programming Language,” is offered by the Africa Data School (ADS) in Kenya and other African countries.

At the end of the ADS course, the learner has to work on a project that meets the certification and graduation criteria. Per the college guidelines, to graduate I had to work on either:

    1. An end-to-end data analysis project, OR
    2. An end-to-end machine learning project.

    The final project product was to be presented as a Web App deployed using the Streamlit library in Python. To achieve optimum project results, I performed it in two phases: the data analysis phase and the App design and deployment phase.

    1.2. End-to-End data analysis project background: What inspired my project

Last year, 2023, I found myself at a crossroads in my career path; I didn’t have a stable income or job. For over six and a half years, I had been a hybrid freelancer focusing on communication and information technology gigs. When you cannot meet your financial needs, your mental and psychological well-being is affected, and things were not going well for me socially, economically, or financially. For a moment, I considered looking for a job to diversify my income and provide for my nuclear family.

    Since I mainly work online and look for new local gigs after my contracts end, I started looking for ways to diversify my knowledge, transition into a new career, and subsequently increase my income. From my online writing gigs and experience, I observed a specific trend over time and identified a gap in the data analysis field. Let us briefly look at how I spotted the market gap in data analysis.

1.2.1. The identified gap

I realized that data analysis gigs and tasks that required programming knowledge were highly paid, yet only a few people bid on them on the various online job platforms. The big question was why data analysis jobs, especially those requiring a programming language, stayed so long on the platforms with so few bids. The programming languages most of those jobs asked for include Python, R, SQL, Scala, MATLAB, and JavaScript; the list is not exhaustive.

Prompted by this phenomenon, I did some research and concluded that many freelancers, myself included, lacked the programming skills needed for data analysis and machine learning. Taking advantage of the gap and venturing into the new field required me to learn and gain new skills.

    However, I needed guidance to take advantage of the market gap and transition into the new data analysis field with one of the programming languages. I did not readily find one, so I decided to take a course to gain all the basic and necessary skills and learn the rest later.

Following strong intuition coupled with online research about data science, I landed at ADS for a course in Data Science and Machine Learning (ML) with the Python Programming Language. It is an instructor-led intermediate course with all the necessary learning resources and support provided.

Finally, at the end of my course, I decided to build a project that would help people like me make the right decisions. It is a hybrid project: it uses end-to-end data analysis skills and machine learning techniques to keep its insights current with financial market rates.

I worked on it in two simple, straightforward phases:

    1.2.2. Phase 1: End-to-End Data Analysis

    Dataset Acquisition, Analysis, and Visualization using the Jupyter Notebook and Anaconda.

    1.2.3. Phase 2: App Design and Deployment

Converting the Phase 1 information into a Web App using the Streamlit library.

    Next, let me take you through the first phase. In any project, it is important to start by understanding its overall objective. By comprehending the goal of the project, you can determine if it fits your needs. It is not helpful to spend time reading through a project only to realize that it is not what you wanted.

    Therefore, I’ll start with the phase objective before moving on to the other sections of the project’s phase one.

    1.3. Objective of the end-to-end data analysis project phase 1

The project analyzes and visualizes a dataset of global salaries in the AI, ML, and Data Science domains to provide helpful, data-driven insights for making the right career decisions. It delves into critical variables, including working years, experience levels, employment types, company sizes, employee residence, company locations, remote work ratios, and salaries in USD and KES. The final results were data visualizations useful for developing a web application.

    2.0. The End-to-End Data Analysis Process.

    The first phase was the data analysis stage, where I searched and obtained a suitable dataset online.

Step 1: Exploratory Data Analysis (EDA) Process

    2.1 Dataset choice, collection, description, and loading.

The project’s data is obtained from the ai-jobs.net platform; it is loaded directly from the CSV file linked on the platform’s landing page. The dataset can also be accessed through Kaggle. Since the data is updated weekly, loading from the link allows continuous weekly data fetching, keeping the Ultimate Data Jobs and Salaries Guider Application current with global payment trends.

    Dataset Source = https://ai-jobs.net/salaries/download/salaries.csv

    2.1.1 Raw dataset description

    The dataset contained 11 columns with the following characteristics:

    • work_year: The year the salary was paid.
    • experience_level: The experience level in the job during the year.
    • employment_type: The type of employment for the role.
    • job_title: The role worked during the year.
    • salary: The total gross salary amount paid.
    • salary_currency: The currency of the salary paid, as an ISO 4217 currency code.
    • salary_in_usd: The salary in USD (FX rate divided by the average USD rate for the respective year via data from fxdata.foorilla.com).
    • employee_residence: Employee’s primary country of residence as an ISO 3166 country code during the work year.
    • remote_ratio: The overall amount of work done remotely.
    • company_location: The country of the employer’s main office or contracting branch as an ISO 3166 country code.
    • company_size: The average number of people that worked for the company during the year.

    2.1.2 Data loading for the end-to-end data analysis process

    First, I imported all the necessary libraries and modules to load, manipulate, and visualize the data in Cell 1 of the Jupyter Notebook.

    Importing the required Python libraries.

    Then, I loaded the data using Pandas in Cell 2 and received the output below. The Pandas “.head()” function displayed the first five rows of the dataset.

    Using Pandas to load the dataset.
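    Since the loading step is only shown as a screenshot, here is a minimal sketch of what it might look like; the URL is the dataset source listed above, and pandas is assumed to be installed:

```python
import pandas as pd

# Dataset source given earlier in this guide; updated weekly by ai-jobs.net.
DATA_URL = "https://ai-jobs.net/salaries/download/salaries.csv"

def load_salaries(source=DATA_URL):
    """Load the salaries CSV from a URL or file path into a DataFrame."""
    return pd.read_csv(source)

# Usage: df = load_salaries(); df.head() shows the first five rows.
```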

    After loading the salaries dataset from the URL, I used the Pandas library to study it. I analyzed the dataset’s frame structure, basic statistics, numerical data, and any null values present to understand its composition before proceeding with the analysis process. The results showed that there were:                              

    i. Four columns out of the total eleven with numerical data.

    Four columns with numerical data.
    All columns with numerical data types in the dataset were obtained.

    ii. Eleven columns in total containing 14,373 entries: four with numerical data and seven with object data types.

    Data type descriptions: four columns had int64 datatypes and seven had object datatypes.

    iii. There was no missing data in any of the 11 columns of the dataset, as confirmed in the screenshot below.

    There were no null fields, since all columns contained a sum of zero null counts.
    (1) The dataset did not have any null field among the 11 columns present.
    There were no null fields, since all columns contained non-null counts.
    (2) There were 14,374 non-null entries, including int64 and object data types.
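    The structural checks reported above (numeric versus object columns and null counts) can be reproduced with a short helper; this is a sketch, not the exact notebook code from the screenshots:

```python
import pandas as pd

def summarize(df: pd.DataFrame) -> dict:
    """Collect the basic EDA facts used above: shape, dtypes, and null counts."""
    return {
        "n_rows": len(df),
        "n_columns": df.shape[1],
        "numeric_columns": df.select_dtypes(include="number").columns.tolist(),
        "object_columns": df.select_dtypes(include="object").columns.tolist(),
        "total_nulls": int(df.isnull().sum().sum()),
    }

# Interactively, df.info() and df.describe() surface the same facts.
```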

    2.2. Conclusions – EDA Process.

    Based on the above results, the dataset does not contain any missing values, and the categorical and numerical datatypes are well organized, as shown in the outputs above. The dataset has eleven columns: four with integer datatypes and seven with object datatypes. Therefore, the data is clean and organized, ready for the analysis phase of my project.

    Step 2: Data Preprocessing

    The main preprocessing activities performed were dropping the unnecessary columns, handling the categorical data columns, and feature engineering.

    2.2.1. Dropping the unnecessary columns


    The columns dropped were salary and salary_currency, for one main reason: the salary column held amounts in different currencies depending on employee residence and company location, and those amounts had already been converted into USD in the salary_in_usd column. Since I only needed the salary amount in one currency, the two columns were redundant.
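    A minimal sketch of this dropping step, using the column names from the dataset description:

```python
import pandas as pd

def drop_redundant_salary_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Drop the mixed-currency columns; salary_in_usd already holds the
    converted amount, so salary and salary_currency are redundant."""
    return df.drop(columns=["salary", "salary_currency"])
```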

    2.2.2. Handling the categorical data columns

    I developed a code snippet summarizing and displaying all the categorical columns in the salaries dataset. The first five entries were printed out and indexed from zero, as shown in the sample below.
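    The original snippet appears only as a screenshot, so the following is an assumption about its shape rather than a copy of it; it mirrors the behavior described above:

```python
import pandas as pd

def categorical_preview(df: pd.DataFrame, n: int = 5) -> pd.DataFrame:
    """Return the first n rows of every object (categorical) column,
    indexed from zero, mirroring the printed sample described above."""
    return df.select_dtypes(include="object").head(n)
```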

    2.2.3. The engineered features in the end-to-end data analysis process

    Making the data more meaningful and valuable to the project is crucial. Therefore, I engineered two new features in the dataset: full name labeling and salary conversion. Next, let us describe how each feature came about.

    2.2.3.1. Full Name Labeling:

    Initially, the categorical values in the dataset were written as abbreviations. For example, FT was the short form for full-time employment. I therefore expanded every abbreviated value into its full name and kept the initials in parentheses at the end; for example, I changed “FT” to “Full Time (FT).” This ensured proper labeling and comprehension, especially in the data visualizations.

    The Python code snippet below was used for full naming.
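    That snippet is shown as a screenshot, so here is a hedged reconstruction of the idea; the exact mapping used in the project may differ:

```python
import pandas as pd

# Abbreviation-to-full-name mapping for employment_type; the
# experience_level and company_size columns can be expanded the same way.
EMPLOYMENT_LABELS = {
    "FT": "Full Time (FT)",
    "PT": "Part Time (PT)",
    "CT": "Contract (CT)",
    "FL": "Freelance (FL)",
}

def expand_employment_labels(df: pd.DataFrame) -> pd.DataFrame:
    """Replace abbreviated employment types with full names plus initials."""
    out = df.copy()
    out["employment_type"] = out["employment_type"].map(EMPLOYMENT_LABELS)
    return out
```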

    2.2.3.2. Salary Conversion:

    The initial salary column was in US dollars. As with the previous feature, I devised a method to convert the “salary_in_usd” column into Kenyan Shillings in a new column named “Salary_in_KES.” Since the dataset is updated weekly, the conversion process was automated: a function requests the current USD-to-KES exchange rate and multiplies it by the salary values in dollars to get the salary in Kenyan money.

    The function uses an API key and a base URL to request the current exchange rate from an exchange-rate website, then prints the rate in the output.

    Then, the function multiplies the obtained exchange rate by the salary in USD to create a new column named “Salary_in_KES.” The screenshot below shows the new column, circled in red.

    Therefore, every time the data-jobs guide application is launched, the process will be repeated, and the output will be updated accordingly.
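    The conversion logic can be sketched as follows. The endpoint and parameter names in fetch_usd_to_kes are placeholders (the real provider, key, and URL are not shown in this guide), so adapt them to your exchange-rate service:

```python
import pandas as pd

def fetch_usd_to_kes(base_url: str, api_key: str) -> float:
    """Request the current USD-to-KES rate. The endpoint shape below is a
    placeholder; real providers differ in URL and response layout."""
    import requests  # imported here so the conversion below runs without it

    resp = requests.get(
        base_url,
        params={"apikey": api_key, "base": "USD", "symbols": "KES"},
        timeout=10,
    )
    resp.raise_for_status()
    return float(resp.json()["rates"]["KES"])

def add_salary_in_kes(df: pd.DataFrame, usd_to_kes: float) -> pd.DataFrame:
    """Create the Salary_in_KES column from salary_in_usd and the given rate."""
    out = df.copy()
    out["Salary_in_KES"] = out["salary_in_usd"] * usd_to_kes
    return out
```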

    Next, let us briefly prove that the automation really works for both the dataset and the exchange rate above.

    2.2.3.3. Proof that the automated processes work in the project

    This was proven during the end-to-end data analysis process and web application development. This is because the current value was printed out every time the data analysis Jupyter Notebook was opened and the cell ran.

    Exchange rate automation confirmation

    As mentioned earlier, the first output was captured in November 2023, when the exchange rate was 1 USD = 156.023619 KES.

    As I write this post in March 2024, the feature returns an exchange rate of 137.030367 KES per USD. See the screenshot below.

    Let us find the difference by subtracting the current rate from the initial one: 156.023619 − 137.030367 = 18.993252 KES. The dollar has therefore depreciated against the Shilling by approximately 19 KES.

    Guide Publication Date

    As you may have noticed, this guide was first published in October, yet I mention writing in March 2024 when calculating the difference. Both are true: I pulled the whole site down for unavoidable reasons and am now recreating the posts, keeping the original results for consistency. Later, I will update the guide again with 2024 data.

    It is important to note that the automated process itself is unchanged.

    Proof that the dataset is updated weekly

    The dataset is frequently updated as the main data source is updated. To prove this fact, the total entries in the dataset should increase with time. For example, the screenshot below shows 10456 entries in November 2023.

    Similarly, the following screenshot shows 14,354 entries in March 2024. The increase in entries confirms that the changes are automatically reflected in our project.

    Next, let us find the difference: 14,354 updated entries − 10,456 initial entries = 3,898 new entries.

    2.2.3.4. Employee Company Location Relationship:

    I created a new dataset column named “employee_company_location.”

    The code checks whether an employee resides in the company’s location country and records the result in the new column: it is true when the employee_residence and company_location country codes match. For example, in the screenshot below, the first person resided in a country different from the company location.
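    A minimal sketch of that check, assuming a True/False convention for the new column (the project’s actual labels may differ):

```python
import pandas as pd

def add_employee_company_location(df: pd.DataFrame) -> pd.DataFrame:
    """Flag whether each employee resides in the company's country:
    True when the two ISO country codes match, False otherwise."""
    out = df.copy()
    out["employee_company_location"] = (
        out["employee_residence"] == out["company_location"]
    )
    return out
```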

    Step 3: Data Visualization

    Here, we are at the last step of phase 1. I hope you have already learned something new and are getting inspired to jump-start your end-to-end data analysis project. Let me make it even more interesting, energizing, and motivating in the next graphical visualization stage. In the next section, I’m going to do some amazing work, letting the data speak for itself.

    I know you may ask yourself, how? Don’t worry, because I will take you through it step by step. We let the data speak by turning it into precise and meaningful statistical visuals, such as bar charts, pie charts, and line graphs.

    In this project, I visualized nine critical dimensions. The accompanying screenshots show the code snippets I developed to process the data and visualize each dimension.

    1. Employment Type:

    Unraveling the significant salary differences based on employment type. The types present in the data were full-time (FT), part-time (PT), contract (CT), and freelance (FL).

    Visualization number 1

    Code snippet for the visualization above.
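    Since the actual snippets appear as screenshots, here is a sketch of how a bar chart like this could be produced with pandas and matplotlib; the column names come from the dataset description, and the styling is an assumption:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so the sketch runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

def plot_mean_salary_by(df: pd.DataFrame, column: str, out_file: str) -> pd.Series:
    """Bar chart of mean USD salary per category; returns the aggregated series."""
    means = df.groupby(column)["salary_in_usd"].mean().sort_values()
    ax = means.plot(kind="bar", title=f"Mean salary (USD) by {column}")
    ax.set_ylabel("Mean salary (USD)")
    plt.tight_layout()
    plt.savefig(out_file)
    plt.close()
    return means

# Usage: plot_mean_salary_by(df, "employment_type", "employment_type.png")
```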

    2. Work Years:

    Examining how salaries evolve over the years.

    Code snippet for the visualization above.

    Work Year Code Snippet
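    In the same spirit, a year-over-year trend line could be sketched as follows; this is an assumed reconstruction, not the snippet from the screenshot:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import pandas as pd

def plot_salary_trend(df: pd.DataFrame, out_file: str) -> pd.Series:
    """Line chart of mean USD salary per work year; returns the trend series."""
    trend = df.groupby("work_year")["salary_in_usd"].mean()
    ax = trend.plot(marker="o", title="Mean salary (USD) by work year")
    ax.set_xlabel("Work year")
    ax.set_ylabel("Mean salary (USD)")
    plt.tight_layout()
    plt.savefig(out_file)
    plt.close()
    return trend
```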

    3. Remote Ratio:

    Assessing the influence of remote work arrangements on salaries.

    Code snippet for the visualization above.

    4. Company Size:

    Analyzing the correlation between company size and compensation.

    Code snippet for the visualization above.

    5. Experience Level:

    Understanding the impact of skill proficiency on earning potential.

    Code snippet for the visualization above.

    6. Company Location:

    Investigating geographical variations in salary structures.

    Code snippet for the visualization above.

    7. Employee Residence:

    Exploring the impact of residing in a specific country on earnings.

    Code snippet for the visualization above.

    8. Salary (USD) – Distribution Per Company Location:

    Investigating how earnings are distributed based on employee residence and company location.

    Code snippet for the visualization above.
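    One way to sketch such a distribution plot is a histogram over the whole dataset; for a per-location breakdown, pandas’ df.boxplot(column="salary_in_usd", by="company_location") is an alternative. This is an assumed sketch, not the project’s snippet:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import pandas as pd

def plot_salary_distribution(df: pd.DataFrame, out_file: str, bins: int = 30) -> None:
    """Histogram of salary_in_usd across all records."""
    ax = df["salary_in_usd"].plot(
        kind="hist", bins=bins, title="Salary (USD) distribution"
    )
    ax.set_xlabel("Salary (USD)")
    plt.tight_layout()
    plt.savefig(out_file)
    plt.close()
```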

    9. Salary (KES) – Distribution Based on Different Company Locations:

    Investigating how earnings in Shillings are distributed based on employee residence and company location.

    Code snippet for the visualization above.

    Salary (KES) distribution based on different company locations.

    In Summary:

    We’ve come to the end of phase 1, the data analysis phase of the end-to-end project. You have gained in-depth skills in finding, acquiring, cleaning, preprocessing, and exploring a dataset. With this phase alone, you can start and complete an end-to-end data analysis project for your own purposes, whether for your job or class work.

    Additionally, the source code snippets make it easy to visualize your own data: your dataset may be different, but they directly help you produce similar or better visualizations. For this project’s design, the phase opens the door to the second milestone, phase 2.

  • How to Solve AWS 253-[LX] Bash Shell Scripting Lab Challenge


    Are you participating in the AWS ReStart program and looking for solutions to some lab challenges? You are not alone because I participated in the same program and was in the same situation in 2023. Therefore, you have come to the right place. Specifically, in this tutorial, I will help you solve the AWS 253-[LX] Bash Shell Scripting Lab Challenge. Are you ready to be inspired and excited to find a solution? Then, let’s get into the real process of solving the lab challenge.

    The Bash Shell Scripting Lab Challenge Exercise.
    The screenshot was taken by the author, and the content was created by the AWS ReStart Program.

    Before we get our hands dirty in the AWS lab challenge, let me briefly give you the project background so that we are on the same page. First, do you know where the exercise comes from? If not, do not worry, because I will explain. The lab challenge is adapted from the AWS re/Start Program, a 2–3 month intensive learning and training program in Kenya that I participated in last year. Let me take you through a brief description of the program.

    What is the AWS re/Start Program?

    Before we dive into solving the Bash Shell Scripting Lab Challenge, do you want to become a cloud professional? If you want to start a career within the cloud computing technology sphere but feel you are not qualified for it, then the AWS re/Start program could be perfect for you. By design, it is a rich, immersive, full-time training to become an AWS Cloud Services professional in 2-3 months.

    In other words, it is an exciting, skills-rewarding 12-week learning journey that a person can experience for free. It is intended for people who are out of work or working in less desirable jobs, such as military veterans and their families, and youths who want a new beginning in the cloud services job market.

    What makes me say the program is unique?

    What makes AWS re/Start special is that it is very actionable. Through scenario-based coursework and hands-on labs, you’ll get the foundational knowledge and a stepping stone toward an initial position in cloud computing. There is a big difference between theory and practice, and AWS re/Start’s approach bridges it: its learning models, real-life scenarios, laboratory exercises, and lectures help you build the knowledge and skills required for entry-level cloud roles.

    However, technical education is not the only kind it offers. The program also gives practical advice on resumes and prepares you for interviews, so by the end you will be ready to meet employers. Therefore, if you’ve considered changing your career track to cloud computing, AWS re/Start can assist you.

    What to expect from the bash shell scripting exercise guide

    In this comprehensive lab guide, I will delve into the AWS challenge lab I solved during my journey, showcasing how AWS re/Start empowered me to embrace the challenge and emerge victorious.

    What is Bash Shell?

    Bash stands for “Bourne Again Shell.” It is the default shell on most Linux distributions. Bash gives you an effective and efficient environment for scripting and for interacting with your Linux operating system through commands. It is widely available and used by many developers and organizations.

    Lab Objectives of the bash shell scripting lab challenge exercise

    Based on the lab manual, the challenge had one main goal: to have created a directory by the end of the exercise.

    Step-by-step procedure to solve the bash shell scripting exercise

    First, note that the AWS predesigned the lab, and the necessary resources were provisioned and configured. So, according to how the exercise is set, I broke it down into two milestones. Then, the two main steps were divided into different tasks, and each was executed as part of the main project. As a result, the two stages and subtasks enabled me to do it easily. Thus, it will also enable you to achieve the same results easily.

    Step 1 of Solving the Bash Shell Scripting Exercise

    Task 1. Accessing the AWS Management Console

    From the Canvas Instructure homepage, click Modules in the left navigation menu and scroll down to the end of the Linux module. Note that it is the third module, after Cloud Foundation. The Bash Shell scripting lab challenge covered in this guide is the last item in the module.

    When you click the lab challenge URL, Canvas opens a new page that prompts you to load the tool in a new browser window.

    Next, the lab exercise loads and opens in a new browser window automatically. Note that the new window shows the lab topic at the top.

    • At the top of the instructions, choose the “Start Lab” button on the menu list at the top right.

    At this time, a Start Lab panel opens immediately to show the lab status. First, you will see that the status is “in creation,” as shown in the last statement on the Start Lab panel below.

    • Since the lab may take some time to load, wait for it until the “in creation” lab status message changes to display the message that it is ready. Until then, there is no other way to proceed.

    Next, once the status changes to “ready,” close the Start Lab panel by clicking on the X at the top right corner of the panel.

    • Afterward, return to the top of the instructions and select the “AWS” button to open the AWS Management Console. Similarly, the Management Console opens a new browser window next to the lab instructions.

    Additionally, you should notice that the system has automatically logged you into your AWS account. Before accessing the lab resources, you must be registered and recognized as a re/Start program learner.

    Task 2: Separating and pairing the two browser tabs opened in task 1

    Finally, with the two tabs open on the browser, separate them and stack them together so you can simultaneously see the AWS Management Console and the lab challenge instructions tab.

    At this point, you can follow the instructions easily and implement them on the Console.

    Step 2: Use SSH to connect to an Amazon Linux EC2 instance.

    For this lab challenge, AWS provides two main ways to connect to the Amazon Linux EC2 instance, depending on the user’s operating system: one for Windows users like me, and one for Linux/macOS users. Either way, you must download the access key first.

    Therefore, I downloaded the “.pem” key provided for this lab challenge and used my preinstalled Git Bash terminal to log in to the system. Git Bash is a command-line interface (CLI) that gives Microsoft Windows users a friendly environment for interacting with a system through commands, so I used it to run my commands against the AWS lab environment.

    Task 1 – Windows Users: Using SSH to connect to bash shell scripting lab environment

    The first step is downloading the access key provided for this lab following the procedures below.

    Select “Details” at the top of the instructions and click “Show” on the credentials window presented to you.

    Select the “Download PEM” button and choose where to save your “labsuser.pem” file on your computer. Additionally, make a note of the “PublicIP” address allocated for this lab challenge exercise.

    For this tutorial, I saved the “labsuser.pem” key file in the Downloads folder.

    • Finally, exit by clicking the X on the “Details” panel.

    Task 2: Using Git Bash for Access

    First, open the Git Bash terminal on your computer and type “pwd” to confirm the current working directory.

    Secondly, change the directory to where the key has been stored or downloaded on your hard drive, using the “cd” command followed by the folder path (for example, cd ~/Downloads).

    Afterward, change the permissions on the key to be read-only by running this command:

    chmod 400 labsuser.pem

    Then, run the following command to log in to the system.

    ssh -i labsuser.pem ec2-user@<public-ip>
    
    Remember: Replace the phrase “<public-ip>” with the exact PublicIP address noted earlier.

    Next, when asked, “Are you sure you want to continue connecting?” type “yes” and press enter.

    After pressing enter, you will receive the login successful screen immediately. See the screenshot below.

    In the EC2 instance, use the “pwd” command to confirm your current working directory. It should be “/home/ec2-user”.

    Configuring the AWS EC2 instance and environment for the lab

    Next, configure the instance for use by running the “aws configure” command.

    In the prompts that appear, enter the required lab details, pressing Enter after typing the correct value for each prompt.

    Remember, as instructed, you copied these details at the beginning of the lab session.

    After configuration, create a new directory using the “mkdir” command. For this tutorial, I made a directory named Lab253 and used the “ls” command to confirm if the directory existed in the EC2 Instance. The screenshot below shows two directories in the instance: companyA and Lab253, which I created previously.

    Next, navigate to the newly created directory using the “cd” command, as shown in the screenshot I captured below.

    After entering the new directory, use the “touch” command to create an empty file (0 KB) with an appropriate name that you can easily remember. Next, open the file using a Linux file editor.

    As you can note in the screenshot below, I created an empty file named Jose1.sh and opened it using the Vim file editor for this tutorial.

    Additionally, the “.sh” file extension means the file is a bash shell script that should be executed with the bash shell. After running the open command in Vim, the empty file opens. Next, press the “i” key on your keyboard to switch the file into insert mode so that you can write your script.

    When the file is in insert mode, create the script prepared per the lab challenge instructions.

    Note: I first prepared the script using a text file for this tutorial. Then, I copied and pasted it into the file opened in the Vim file editor.

    After copying and pasting, press the Escape key (“Esc”) to exit insert mode in the Vim file editor. Then, type the colon symbol “:” followed by “wq”, as shown in the screenshot below.

    After that, press enter to save and close the file.

    How to make the bash shell script file automatically executable

    • The next step is to make the file executable. Grant execute permission with the following command, followed by the file name:
    sudo chmod u+x Jose1.sh
    • Once the file is executable, use the following command to run it and see the final results:
    ./Jose1.sh
    
    Note: Remember to replace the name “Jose1.sh” with your file name.

    The program results after executing the bash shell scripting exercise challenge

    Finally, the lab end output should be as follows.

    Testing the file execution to ensure you meet the Bash shell scripting lab challenge requirements

    If you open and execute the file again, the list grows by 25 more files.

    This matches the lab challenge requirements, which state that every execution of the script creates 25 more empty files.

    Lab Completion

    At the end of the instructions, you will see an AWS lab complete note. At this point, go back to the top of the instructions page. To finish the lab session successfully, click the ‘End Lab’ button.

    After clicking the button mentioned above, the message menu below pops up. Click the “Yes” button in blue color.

    Next, click the “x” button on the top right corner to end the lab.

    Finally, the lab timer stops, and the provisioned resources are terminated.

    Author Recommendation

    You can modify the script to arrange the files in ascending order every time the script is executed. Feel free to use this guide to inspire your solution.

    Summary

    In conclusion, as a participant in the program, I witnessed its impact firsthand. From the fundamentals of AWS services to hands-on labs and real-world scenarios, AWS re/Start gave me a solid foundation to build upon.

  • What is Generative AI? Everything you need to know in 2024.

    There are many reasons why you should learn and use Generative AI in 2024 and henceforth.

    Today, I learned something new worth sharing with you: how artificial intelligence (AI), specifically generative AI, could revolutionize your work. I am no exception to these impacts. Undoubtedly, artificial intelligence is taking us by storm and rapidly changing our daily lives, from health, communication, and education to finances and the economy in general.

    Do you use GenAI in your daily life? It is a yes-or-no question, and I bet most of us will say “No” out of unawareness.

    However, even if your answer is “No,” it does not mean you are lagging behind; you are simply unaware. You are not alone, because I had been using it without knowing until recently.

    Since I have been there, I believe everyone using a smart device uses it.

    How I learned I was using Generative Artificial Intelligence (GenAI).

    In my case, I use it in several ways, including learning new things, answering questions, and sometimes generating ideas. Initially, I was unaware I was using generative AI when Gmail auto-completed my sentences in emails on my phone and PC. I only learned I was already using GenAI when I received a marketing email from Sololearn, a learning app.

    Previously, I used the App to learn Python during my data science course. The email asked me to be among the first to interact with their new GenAI course by subscribing to the waiting list. See the author’s screenshot below.

    As a result, when I started learning the course, I realized that I had been using it all along. The course taught me how Generative Artificial Intelligence picks the next word and suggests it to you, which explains the auto-complete phenomenon I mentioned earlier.

    Interacting with Generative Artificial Intelligence

    This comprehensive research-based tutorial will teach you how to interact with GenAI tools to transform your daily life and work. Also, you will learn and master how to use the tools to create, automate, and become more productive. Specifically, you will be introduced to GenAI, the art of prompting, Large Language Models (LLMs), and how they work.

    How we interact with conversational AI tools.

    Now that you have a solid background in the tutorial’s subject and purpose, let’s explore the details.

    Examples of real-life ways you can tell if you have used GenAI before

    Before reading this article, if you had seen any word suggestions when writing a message on your smartphone, it was likely GenAI. Also, if you have seen Gmail’s word suggestions when composing an email, that is Generative Artificial Intelligence at work.

    Example-of-Gmail-using-Generative-Artificial-Intelligence-GenAI
    Example of Gmail using Generative Artificial Intelligence (GenAI). Here, I typed the word “How,” and the underlined words were suggested.

    Additionally, if you have used the “TAB” key on your keyboard to auto-complete statements in email messages, or interacted with chatbots, that is artificial intelligence at work. For example, as shown in the screenshot below, I was prompted to use the “TAB” key to autocomplete my search prompt.

    Using the TAB button on your keyboard to add the next word in the statements.
    An example of a prompt to use the “TAB” button on your keyboard to add the next word on the search statement.

    Therefore, as GenAI assistant applications become popular each day, it’s essential to understand how their magical human-like responses are powered and presented to you as expected. As part of learning how to interact with and use GenAI tools to your advantage, I will explain how Generative AI applications that we use daily in real-life scenarios are powered by Natural Language, a Language Model, and large language models (LLMs).

    I will also explain how the LLM temperature setting controls GenAI creativity.
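    To preview the temperature idea with a concrete toy example: real LLMs work over huge vocabularies, but the mechanism below, which I wrote as an illustration rather than any model’s actual code, is the same softmax scaling. Low temperature makes the model pick the likeliest word almost every time; high temperature spreads the probability out, producing more varied (creative) output:

```python
import math
import random

def apply_temperature(logits, temperature):
    """Softmax over logits divided by temperature: low temperature sharpens
    the distribution (less creative); high temperature flattens it (more creative)."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_word(words, logits, temperature, rng=random):
    """Pick the next word according to the temperature-adjusted probabilities."""
    probs = apply_temperature(logits, temperature)
    return rng.choices(words, weights=probs, k=1)[0]
```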

    What is Generative Artificial Intelligence?

    This is considered the latest innovation in the field of Artificial Intelligence (AI) technology.

    Generative Artificial Intelligence is a subset of AI.
    Generative Artificial Intelligence (GenAI) is the latest advancement in Artificial Intelligence (AI).

    Generally, artificial intelligence refers to the science of making machines smart or intelligent like humans, or of designing and developing applications that use human-like intelligence to perform tasks. Generative AI, also known as GenAI, is therefore a subset of AI.

    5 Uses and Commonly Used Types of Generative AI Tools

    We can all agree that advancements in AI technology, notably GenAI, have revolutionized how various industries work, from health systems to banking in the financial sector. This is because GenAI can create new and unique content, including images, text, audio, and videos, based on user descriptions.

    GenAI has also changed our daily lives and work environments. For example, Grammarly laid off approximately 230 workers as it reorganized around AI for a futuristic workplace. Due to its capability to generate new, quality content, GenAI is used to:

    a. Generate New Text or Content.

    Text generating AI.
    These are the AI tools used to create new content per user instructions.

    With the use of an AI Chat, a user can create new text by simply chatting with the Chatbot using text messages written in their natural language. The AI Chat will understand human language and respond with the most appropriate human-like responses in a content format.

    For example, I used ChatGPT 3.5 and Microsoft Copilot GenAI chatbots to generate the content on the two screenshots below.

    i. New text or content generated with the ChatGPT 3.5 tool

    I asked the Chatbot: “What is GenAI? Answer in a paragraph of 8 short sentences.”

    New content generated using ChatGPT 3.5 using a simple user description.
    An example of content I generated using the ChatGPT AI tool.

    ii. New text or content generated with the Microsoft Copilot tool

    New content or text is created using Microsoft Copilot, an AI tool designed to accept user input (prompt – user descriptions) and generate new content based on the instructions contained in the prompt.

    New content or text generated using the Microsoft Copilot AI tool.
    An example of new content generated using an AI tool.

    b. Edit or Create New Images

    Image Generating AI tools.
    These AI tools are designed to generate new images or edit old ones per the user’s requirements.

    Similarly, GenAI creates new images or edits existing ones using user text descriptions. For example, a tool like DALL-E can generate new realistic photos from your text descriptions.

    The screenshot below shows the image generated using the Microsoft AI Designer powered by DALL-E 3.

    Image generated using Microsoft Designer, an AI image-generating tool powered by DALL-E 3.
    The four images were generated using one user prompt using the Designer image-generating AI tool from Microsoft.

    Furthermore, you can use the Adobe Generative Fill tool to edit existing images.

    Adobe Generative Fill AI tool
    Another image-generating tool you can use is the Adobe Generative Fill AI tool.
    Screenshot by Author.

    c. Generate Audio and Videos Based on User Text Descriptions

    Audio Generative AI
    These are the tools we use to generate audio and audiovisuals.

    Additionally, you can write specific text descriptions using the GenAI tools to generate audio or audiovisuals that meet your needs.

    Also, you can use tools to edit any of your previously created audio and videos. These audio-generating AI tools with various applications include audio generators, AI music generators, audio enhancers, text-to-speech tools, and AI video generators.

    Finally, specific tools you can use to create, generate, or edit voice include ElevenLabs, WellSaid, LANDR, Speechify, and Descript, among others. With them, you can generate audio and videos from text descriptions written in the natural language of your choice.

    d. Text Analysis

    Moreover, the GenAI tools can help you analyze and classify text. They work in conjunction with large language models (LLMs) to perform text analysis tasks for us.

    Are you wondering how they do that? Do not worry; I will explain later in this tutorial how natural languages and LLMs work together to analyze and classify text.

    Grammarly is an example of a tool we used to analyze text before generative AI arrived.

    Since integrating GenAI, it has gone beyond text analysis and can now generate content from scratch or edit existing content.

    e. Generating Program Code

    The AI tools used for generating program codes.

    On the other hand, code-generating AI makes it possible to create programs automatically using sophisticated algorithms that understand programming languages and patterns. These models learn from existing codebases how to generate new, functional code that meets predetermined requirements.

    AI systems predict and generate code snippets, functions, or entire applications by applying machine learning techniques, greatly improving development speed. This technology harnesses vast amounts of data and computational power to continuously improve its ability to write efficient and reliable programs autonomously.

    How Generative AI Works

    How Generative AI works.

    Generative artificial intelligence is designed and developed to understand our natural languages and conversationally reply to them. Due to this capability, we can communicate with it like a fellow human being.

    We give it input in different formats, such as text, image, video, or audio, and it generates an output.

    As a result, we can use GenAI to work on tasks just like we would do with a workmate.

    Therefore, based on the above types and uses of GenAI, it works on two crucial things:

    • Natural Languages.
    • Language Models.

    Let us briefly look at the two terms.

    i. What is a Natural Language?

    By definition, a natural language is a language spoken by humans. As a result, we can communicate with AI assistants or chatbots and receive human-like responses, just as we do with fellow humans, in our own language.

    Therefore, we can give GenAI tasks to work on together to improve productivity. Natural language is the secret behind its magical ability to digest millions of texts and generate real-time, human-like responses.

    ii. What is a Language Model in GenAI?

    A language model is a program that can determine which words are likely to follow each other based on context. It analyzes sequences of words in a corpus and uses those statistics to power next-word prediction.
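To make this concrete, here is a minimal next-word-probability sketch in Python (the toy corpus is an assumption; real language models learn from vastly larger collections):

```python
from collections import Counter, defaultdict

# Toy corpus: real language models learn from vastly larger text collections.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Return each candidate next word with its probability."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the model has seen: cat (2x), mat (1x), fish (1x).
print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Even this toy counts its way to the same behavior the article describes: the word seen most often in context gets the highest probability of being chosen next.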

    You might be wondering what corpus the program analyzes and where it comes from. To understand this vividly, we need to answer a simple question: what is a corpus, and where does it come from? Let us answer that in the following section.

    What is a Corpus, and where does it come from?

    Generally, in Latin, corpus means a “body” or “a body of work.” Thus, in language research, a corpus is a large body or collection of extensive related writings. In other words, a corpus can be a collection of the written works of a single author, or of works on a related subject, put together over time. Corpora are used to study natural language patterns.

    According to Bot Penguin, a corpus exists in three distinct forms: monolingual, parallel, and multilingual. If you want to learn more about the types of corpora, click the Bot Penguin link at the beginning of this paragraph.

    Definition of a Corpus in Generative AI/GenAI

    When it comes to GenAI, in simple terms, a corpus is a body of related text data on a subject. In this case, the body can be a well-organized database or a dataset. Thus, the language models we’re talking about learn from these collections of texts and files.

    For example, if a language model learns from the text data in medical records, it is a medical corpus. Similarly, if a language model learns from a collection of texts or audio files related to computer science, that is a computer technology corpus.

    How a corpus is made or where it comes from.

    A corpus is made or comes from collecting different related texts or audio recordings and grouping them to form datasets or databases. First, the texts are collected following certain criteria that are created and validated by the individual building the corpus in a particular field.

    Then, the collected materials are formatted and digitized in a manner that supports smooth processing. Once processing is done, the text is annotated by adding unique, identifiable information. After the collection of texts is analyzed, they are combined into a database. When the database is complete, learning by people or machines can begin.
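The collect, format, annotate, and combine steps can be sketched at toy scale (the texts, the subject label, and the record layout below are all assumptions for illustration):

```python
# Hypothetical raw materials gathered for a tiny medical corpus.
raw_texts = [
    "Patient presented with mild fever.",
    "Prescribed  rest and fluids.",
]

corpus = []
for i, text in enumerate(raw_texts):
    # 1. Format/digitize: normalize whitespace.
    cleaned = " ".join(text.split())
    # 2. Annotate: attach unique, identifiable information to each record.
    corpus.append({
        "id": i,
        "subject": "medical",
        "tokens": cleaned.lower().rstrip(".").split(),
    })

# 3. Combine: the list of annotated records acts as our tiny "database".
print(len(corpus), corpus[0]["tokens"])
```

A real corpus pipeline adds validation criteria, richer annotations, and continuous updates, but the shape of the work is the same.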

    Since human natural languages keep evolving, significant changes are continuously integrated into the database to reflect the advancements.

    Finally, as the corpus grows to encompass huge and diverse subjects, it can be used to train a Large Language Model (LLM). Therefore, a language model learns from a corpus.

    How do we interact with Generative AI tools?

    First, note that not all generative AI applications are conversational, and not all conversational AI applications are generative. Examples of conversational artificial intelligence assistants include Google’s Gemini, OpenAI’s ChatGPT, and Jasper AI. The conversation between us and any conversational GenAI assistant starts with a simple or complex prompt.

    How we interact with conversational AI tools.
    We interact with GenAI by prompting the AI tools.

    What is a Prompt?

    A prompt is the statement you enter into the AI Chat interface. It is also known as the input. After you input a statement, the GenAI gives you a response based on the input. As a result, your input query determines the quality of the response received.

    For example, if you write a poor prompt, the AI assistant will respond inadequately. Since the output depends heavily on the input, the same request can produce two different answers depending on how you phrase it.

    Besides, you can edit or add more details to your prompt to improve the final results. This is because the technology is flexible and thus very creative.

    How does Artificial Intelligence (AI) Choose the Next Word?

    The language model uses two methods to choose the next word. These are:

    i. Probabilities

    In this case, the language model first measures how frequently words follow each other in various contexts across the corpus. Then, it calculates the probability of each candidate word being selected.

    The probabilities are then represented as percentages for easier understanding. For example, a word with a probability of 50% is more likely to be chosen than one with 5%. After a word is chosen and added to the text, the language model repeats the same process to select the next word until the output is complete.

    As a result, the final output can be an email, a sentence, a paragraph, or several paragraphs.
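That choose-then-repeat loop can be sketched as follows (the percentage tables below are invented purely for illustration):

```python
import random

# Assumed next-word probability tables, expressed as percentages.
tables = {
    "I":    {"am": 60, "like": 40},
    "am":   {"happy": 70, "tired": 30},
    "like": {"coffee": 50, "tea": 50},
}

def generate(start, max_words=5):
    """Repeatedly pick a weighted next word until no table entry remains."""
    words = [start]
    while words[-1] in tables and len(words) < max_words:
        options = tables[words[-1]]
        # Words with higher percentages are more likely to be chosen.
        choice = random.choices(list(options), weights=options.values())[0]
        words.append(choice)
    return " ".join(words)

print(generate("I"))  # e.g. "I am happy" or "I like tea"
```

Each pass through the loop is one "choose the next word" step; chaining the steps is what turns single-word predictions into sentences and paragraphs.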

    ii. The Temperature Setting

    Secondly, the temperature feature is another crucial element of a language model. It controls and influences the randomness or creativity of the text generated.

    At the lowest temperature, the model is said to be “sleeping,” and there is no randomness. In other words, the same input always produces the same output.

    Therefore, as temperature increases, the model’s randomness increases. At very high temperatures, uncommon and unlikely words become likely choices.

    At the lowest temperature there is no randomness: the same input always yields the same output.
    At the lowest temperature, the output is deterministic.

    For example, at the lowest (“sleeping”) level, “Playing” is the next word 100% of the time. This is because all the other words have zero probability; they can never be chosen, so “Playing” is the only word that appears in the AI tool’s user interface.

    Considering the author’s screenshot below, when the temperature is adjusted to the middle, “Playing” is only 51% likely to be selected. As a result, other words start to become likely. For example, the probability of the word “eating” being suggested becomes 13%, while “sleeping” becomes 16%.

    When temperature increases (by adjusting to the middle), other words become likely suggestions.
    Adjusting the temperature to the middle lowers the probability of the same word.

    Furthermore, when the temperature is highest, all the words have the same probability of 20%.

    Equal probability for all words to be suggested as the next word.

    This means the language model is at its most random and most creative. Even though GenAI tools determine the next word in the background, understanding how it works greatly helps you interpret the outputs. For example, when you prompt a tool and receive less accurate results, you will understand that the temperature may be set high, and you can rephrase the prompt or try again.
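The behavior shown in the screenshots can be sketched numerically. Below is a hedged Python sketch (the word scores are invented) of temperature-scaled softmax, a standard way language models implement this setting:

```python
import math

def apply_temperature(scores, temperature):
    """Turn raw word scores into next-word probabilities, scaled by temperature."""
    scaled = [s / temperature for s in scores.values()]
    peak = max(scaled)  # subtract the max before exp() for numeric stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return {word: e / total for word, e in zip(scores, exps)}

# Invented raw scores for five candidate next words.
scores = {"playing": 2.0, "sleeping": 1.0, "eating": 0.7,
          "running": 0.5, "reading": 0.3}

low = apply_temperature(scores, 0.05)    # near-deterministic: "playing" dominates
high = apply_temperature(scores, 100.0)  # near-uniform: every word is about 20%
```

At a temperature near zero, “playing” takes essentially 100% of the probability; at a very high temperature, all five words flatten toward 20% each, mirroring the three screenshots above.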

    What are Large Language Models (LLMs) in Generative AI?

    These are language models whose corpus contains massive internet-based text data on various subjects, including books, public conversations, articles, and webpages, in many different languages.

    Based on this, most Large Language Models are multilingual. They are powerful natural language experts that are good at predicting the next word and completing text. They give GenAI the ability to understand and generate natural language.

    Thus, they communicate effectively in two-way chats with humans and at a high degree of accuracy.

    How Large Language Models Work.

    This is where you learn what tasks you can give to LLMs and leverage their power to be productive in your job. They are good at following human instructions.

    Since the AI assistant or chat is connected to LLMs, we can give them work and collaborate with them as a team on various tasks. Working with them saves us time because they can follow our instructions.

    In summary, since LLMs are excellent experts at following our instructions, we use them for the following tasks.

    a. Building Chatbots and Artificial Intelligence (AI) assistants.

    Relies on understanding and processing natural languages.

    b. Content Classification Using Generative AI.

    Use its analysis and categorization abilities.

    c. Use Generative AI in Writing and Reading.

    Use them to generate new content or read other texts.

    d. Automation of Daily and Repetitive Workflows.

    Create scripts to automate your workflows for efficiency.
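As a small sketch of the workflow-automation idea in (d), the snippet below turns a repetitive file-renaming chore into one reusable function (the file names and the "archived_" prefix are assumptions; an LLM could draft such a script for you):

```python
def plan_archive(filenames):
    """Build a rename plan: prefix every .txt report with 'archived_'."""
    return {name: f"archived_{name}"
            for name in filenames if name.endswith(".txt")}

# One call replaces renaming dozens of files by hand; non-.txt files are skipped.
print(plan_archive(["jan.txt", "feb.txt", "notes.md"]))
# {'jan.txt': 'archived_jan.txt', 'feb.txt': 'archived_feb.txt'}
```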

    Summing Up on Generative AI

    In conclusion, the article has extensively introduced you to Generative Artificial Intelligence (AI), popularly known as GenAI. You have learned how Generative AI applications are powered by natural language, language models, and Large Language Models (LLMs).

    Additionally, it has taught you the types and daily uses of GenAI tools. They include text or content generation, audio generation, creating videos, and editing or creating new images.

    Furthermore, the tutorial described and demonstrated how AI chooses the next word and adds it to the text. We learned that it uses probabilities represented as percentages and the temperature setting for a language model.

    Additionally, it has explained how LLMs work and how we use them in our daily lives. Such uses include building chatbots, content classification, writing and reading, and automation.

    As a result, now you know how to influence the creativity of any Generative AI application.

    Similarly, you can tell when the responses you receive from your generative AI assistants and chatbots are likely or unlikely to be accurate. This is because you know that the temperature setting of the language model shapes each next-word selection as the text is built up, word after word.

    Therefore, go out and apply the generative AI knowledge you have gained to a good cause. Finally, if you like our content, refer your friends so we can learn together, and subscribe to our newsletter.

  • 7 Top Trends in Cloud Computing Technology You Must Follow

    7 Top Trends in Cloud Computing Technology You Must Follow

    Have you ever asked yourself what cloud computing technology will look like in the next decade? Or wondered what are the key trends in cloud computing you should be aware of in 2024? Whether you have asked yourself these questions or not, buckle up for an exciting, insightful, or mind-opening experience in this subject.

    In this tutorial, we will answer the above questions clearly and simply. As cloud services and web-based computing advance, individuals and businesses should take advantage of the recent trends in the sector.

    Therefore, this tutorial is curated to quench your thirst for knowledge of recent trends in cloud computing. Before we discuss the seven top cloud computing trends, let us explore some interesting statistical facts about web-based computing in 2024. This will lay a strong background for understanding the key trends we will discuss later in the post.

    5 Interesting Statistical Facts about Cloud Computing Technology in 2024

    Cloud computing remains one of the key trends in the modern technology industry. Its high scalability, flexibility, and efficiency allow many businesses worldwide to adjust to change. In recent years, the industry has made amazing progress, helping companies boost their operations and drive innovation.

    In order to broaden our understanding, let’s take a closer look at the following facts.

    1. According to an August 2024 market survey report, the total 2023 market size was estimated at 587.78 billion US dollars. In 2024, the size grew to USD 676.29 billion. In addition, it is forecast that by 2032 the market size will increase to a whopping USD 2,291 billion. We can take advantage of this market growth to reap big.
    2. In 2023, 57% of information technology decision-makers surveyed reported that they had accelerated their migration to the cloud. From this survey, I conclude that in the near future, most organizations, if not all of them, will have adopted cloud services. Therefore, I predict employees who do not upgrade by learning cloud services risk their jobs and future professional growth.
    3. As stated by the FOUNDRY research findings in 2024, 70% of organizations and businesses are defaulting to cloud services when upgrading or purchasing any new IT technical capabilities. Additionally, the study indicates that 60% of the survey participants agreed that cloud/web-based service capabilities helped achieve high and sustainable revenues in one year. This can mean that the cloud increases productivity, and everyone should embrace it.
    4. Furthermore, the FOUNDRY report indicates that 65% of organizational technology leaders expect their IT budgets and expenditures in the cloud to increase, and 31% plan to keep them steady. Based on the figures, we deduce that there will be more job opportunities in cloud computing as organizations invest more in it than in traditional IT. These trends are expected to increase in 2024.
    5. According to “Fact 15 on the integration of Artificial Intelligence (AI) into the cloud,” 65% of all AI applications worldwide will be hosted in the cloud. This means more than half of the new technology revolutionizing our lives is cloud-hosted. Therefore, cloud adoption strategies will be inevitable in 2024 and beyond for every business.

    Next, let’s explore 7 of the most recent trends that are going to define tomorrow’s cloud generation.

    7 top trends in cloud computing technology

    1. Hybrid Cloud Solutions: Closing the gap between On-Premises and Cloud-based Environments

    Hybrid clouds are important because they enable organizations to combine their on-premises systems with cloud services. This approach takes advantage of both environments simultaneously, optimizing performance and cost-effectiveness.

    By adopting higher levels of management and orchestration tools, organizations will be more agile and scalable across their hybrid cloud platforms.

    2. Edge Computing: Boosting Speed and Lead Time

    Edge computing has grown in popularity today thanks to its ability to provide the desired response times and reduce latency in business applications. By processing data close to where it is produced, edge computing removes the need for data to travel long distances to servers in the cloud.

    It is especially beneficial for deploying IoT systems because it enables real-time visibility and action at the edge.

    3. Serverless Cloud Computing Technology: Redesigning Application Development.

    Serverless computing is one of the new and highly embraced advancements in the cloud computing industry. The concept has entirely changed the way programmers develop and deploy applications.

    Serverless Computing

    As a result, the model takes responsibility for provisioning and maintaining backend infrastructure away from programmers so they can concentrate exclusively on their coding tasks. Note that numerous serverless platforms are readily available in the marketplace. Additionally, these platforms are owned and operated by different cloud service providers (CSPs).

    Examples of the Most Popular Serverless Platforms

    Since different CSPs offer them, these platforms naturally differ in features, pricing, focus, limitations, and language support, among other aspects. Some examples of the most popular serverless platforms from the most widely recognized CSPs are:

    1. AWS Lambda. It is one of the fully managed cloud services built, provisioned, and managed by Amazon Web Services and offered to customers globally.
    2. Azure Functions. This is a readily available event-driven solution for businesses, one among the Microsoft Cloud Services.
    3. Google Cloud Functions. This is Google Cloud’s serverless execution environment. It enables and facilitates connection to cloud services for serverless computing activities, such as building and deploying applications.

    You should note that other serverless computing tools exist, such as Netlify Functions, Vercel Functions, and Cloudflare Workers, so the list of examples is not exhaustive.

    Furthermore, serverless computing tools enable individuals and organizations to save money by utilizing resources more effectively with the pay-as-you-go approach. This revolutionary path is giving rise to new product design and speeding up the launch of digital projects.
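To make the programmer’s side of this concrete, here is a minimal sketch of a Python handler in the shape AWS Lambda expects (the `event` fields and the greeting logic are assumptions for illustration):

```python
import json

def lambda_handler(event, context):
    # The serverless platform provisions and scales the servers;
    # the programmer supplies only this function.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

On a serverless platform, you would upload this function and the provider invokes it per request, billing only for execution time, which is the pay-as-you-go model described above.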

    4. Artificial Intelligence (AI) and Machine Learning (ML) Integration: Unlocking Data Insights using cloud computing technology

    Combining AI and ML with cloud services opens the doors to new heights of data analysis and knowledge discovery. Cloud providers offer various AI-driven services, such as computer vision, predictive analytics, and natural language processing (NLP), which help businesses extract useful information from vast amounts of data. The merger of cloud and AI technologies is a primary driver of innovation across industries, from healthcare to finance.

    4.1. What is Artificial Intelligence (AI)?

    In simple, layman’s terms, artificial intelligence, popularly known by its acronym AI, refers to the science of making computer systems as intelligent as humans. In AI, human intelligence is simulated in different types of machines by developing and programming them to think like humans and mimic their actions.

    In development, machines are given human cognitive abilities such as perception, learning, reasoning, and natural language understanding. Therefore, in the end, they are capable of doing tasks that previously only humans could do. For example, you can use generative AI to revolutionize your daily work.

    To gain more insights on interacting and using GenAI to improve your productivity, read our comprehensive tutorial. What is Generative AI? Everything you need to know in 2024. The tutorial expounds on how we use one of the arguably recent Artificial Intelligence (AI) innovations.

    4.2. What is Machine Learning (ML)?

    Machine Learning is a branch of Artificial Intelligence. ML focuses on creating and training algorithms on datasets and using them to build models that enable machines to perform tasks previously thought to require humans.

    In other words, the algorithms are trained to analyze and find patterns and relationships in a particular dataset and use them to make data-driven decisions about various topics. To explore machine learning more, read our extensive tutorial – What is Machine Learning? Definition, History, Uses, & More.
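As a tiny, hedged illustration of this pattern-finding idea (the hours-versus-score dataset below is made up), the sketch “trains” a least-squares line on a few data points and then makes a data-driven prediction for an unseen input:

```python
def fit_line(xs, ys):
    """Learn the slope and intercept that best explain the data (least squares)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Train" on hours-studied vs. exam-score data, then predict for unseen input.
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]
a, b = fit_line(hours, scores)
print(f"predicted score for 6 hours: {a * 6 + b:.1f}")
```

Real ML models are far more elaborate, but the workflow is the same: find the pattern in a dataset, then use it to make predictions.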

    5. Quantum Computing: Leading the way in developing unprecedented computational capabilities.

    Quantum computing is one of the many new paradigms emerging in computing power. It allows for solving problems of excessive complexity at an unprecedented rate. Cloud providers are actively involved in quantum computing research and development by providing access to quantum processors through their platforms.

    Although quantum computing is just at the beginning of its development, it has the potential to transform the future of cryptography, materials science, and optimization.

    6. Security and Compliance Enhancements: Ensuring Data Protection.

    When organizations move critical workloads to the cloud, security and compliance measures become even more crucial. Cloud providers are constantly improving their security offerings, including technologies such as encryption, threat intelligence, and identity management, to ensure data safety and counter cyber threats.

    Furthermore, compliance certifications and frameworks are crucial for businesses in regulated industries as they help build trust in cloud services.

    7. Cloud computing in disaster response and recovery plans: Staying online with backups.

    Unanticipated disasters can hit a business’s information technology systems anytime, disrupting services. According to a LogicMonitor 2023 Survey Report, 96% of all global information technology (IT) decision-makers and managers have experienced at least one outage in the past three operational years. To me, with only 4% remaining, this implies that downtime is rampant and can happen to any company at any time.

    As we all know, downtime in service delivery systems can cause huge losses and even the loss of loyal customers. To avoid this kind of mess, cloud computing helps back up all critical data. In the past, many companies and organizations using traditional IT infrastructure faced huge losses due to disasters such as on-premises server crashes and cyberattacks.

    Today, many businesses and organizations have tapped into cloud computing to back up critical data and business information for quick recovery after natural disasters, hardware failures, data losses, and power outages. According to Otava 2024, 48% of medium-sized companies, 38% of small businesses, and 26% of large companies had adopted cloud computing for data storage, backup, and disaster recovery.

    Also, according to Cody Slingerland, 80% of big organizations/businesses use more than one public and private cloud service.

    In summary, these statistics signify the future growth of cloud computing for disaster recovery worldwide. In 2024, disaster recovery adoption will keep increasing, with 94 percent of major global companies already embracing cloud computing for their operations and others set to follow suit. Let us prepare for all the opportunities coming with this trend.

    In a Nutshell!

    While these are only the top seven trends, others exist, and the list is not in any way exhaustive. I have covered the top seven overall cloud computing trends that matter most to any knowledge seeker or industry entrant. They cover the general industry trends and what is happening as the cloud computing industry progresses.

    You can also research online for content on specific trends such as market size, cloud computing expenditure, jobs, and future predictions. At Pythonic Brains, we promise to cover most of the remaining concepts mentioned above in the near future. From here, you can choose which trend to take advantage of and make the most of it.

    Additionally, the fast advancement of cloud technology keeps changing the digital field, allowing organizations to innovate, expand, and remain successful in a highly competitive business environment. From hybrid cloud to AI integration, the cloud serves as a platform for change. With continuous education and acceptance of new trends, individuals, businesses, companies, and organizations can be ready for the future of cloud computing in a dynamic world.

  • Install AWS CLI on Windows OS Computer: How to Download and Install the AWS CLI Version 2

    Install AWS CLI on Windows OS Computer: How to Download and Install the AWS CLI Version 2

    After creating your AWS account, you can access it to provision resources or manage cloud services in the different ways AWS provides. First, you can access it through the AWS Management Console, which is the user’s graphical interface to access AWS Cloud Services. Secondly, you can access it through the AWS SDKs (Software Development Kits). These developer tools allow you to develop, build, deploy, or manage your applications in your AWS account. Thirdly, an AWS account can be accessed through a Command-Line Interface – AWS CLI.

    The CLI is a non-graphical user interface where interaction with the system happens through text commands. The commands are written on a terminal interface, facilitating communication with your cloud resources and services. For this purpose, AWS provides a readily available Command Line Interface (AWS CLI) for Windows users.

    Introduction

    In the realm of cloud computing, efficiency and automation are key. The Amazon Web Services (AWS) Command Line Interface (CLI) offers a streamlined way to interact with your AWS resources and services directly from the command line. This empowers users like you and me to manage our cloud infrastructure easily.

    Whether you are a seasoned developer or just dipping your toes into the world of cloud computing, mastering the AWS CLI opens up a world of possibilities for accessing, building, and deploying resources on your AWS Cloud Services account.

    In this guide, I will help you unlock the power of cloud management. Following this tutorial, we will download and install the AWS CLI on your Windows computer through a set of simple steps curated to meet your installation needs.

    Before diving into the installation process, let’s briefly explore why you might choose to use the AWS CLI. In this tutorial, I’ve summarized four main reasons and benefits for using the AWS CLI.

    4 Top Benefits of Using the AWS CLI

    1. Efficiency: Performing tasks via the command line can often be faster and more efficient than using a graphical interface.
    2. Automation: The AWS CLI allows you to reduce or eliminate repetition by automating daily tasks. Therefore, it saves you time and reduces the risk of human error.
    3. Flexibility: The AWS CLI tool gives you access to all AWS services and features, giving you unparalleled flexibility in managing your cloud infrastructure.
    4. Scalability: The AWS CLI scales to meet your needs, whether managing a small-scale deployment or a sprawling enterprise infrastructure.
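For instance, the automation benefit means you can script against the CLI itself. The hedged Python sketch below merely checks whether the `aws` executable is available on your PATH; it makes no AWS calls and works whether or not the CLI is installed:

```python
import shutil
import subprocess

def aws_cli_available() -> bool:
    """Return True if the AWS CLI is installed and answers `aws --version`."""
    exe = shutil.which("aws")  # None if the CLI is not on PATH
    if exe is None:
        return False
    result = subprocess.run([exe, "--version"], capture_output=True, text=True)
    return result.returncode == 0

print("AWS CLI found:", aws_cli_available())
```

You can run a check like this at the end of the installation to confirm everything went well.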

    Now that we’ve highlighted the benefits, let’s begin the installation process.

    Downloading and Installing AWS CLI on Windows

    There are three simple steps to download, install, and verify the version of the AWS CLI installed. Follow them, and you will be done in a few minutes.

    Installing AWS CLI
    Simple steps to download and install the AWS CLI on a Windows 10 Machine.

    Step 1: Download the AWS CLI Installer.

    There are two ways to download the CLI installer for Windows. First, to save you time, I searched for the most reliable source and provide you with direct access to the installer. You can confidently click this verified link: https://aws.amazon.com/cli/. It takes you to the official AWS CLI documentation page, where the Windows installer is the first item in the list on the right side. You need the 64-bit version, so click the blue “64-bit” link.

    AWS CLI Download page.
    Click the 64-bit Windows installer link in blue color to download the AWS CLI.

    Saving the AWS CLI Installer Locally on Windows

    After clicking the 64-bit link, you will be prompted to save the file on your computer, in the Downloads folder by default. Keep the default file name and click Save.

    Save the Installer File in the Downloads Folder.
    Save the Installer File in the Downloads Folder.

    Immediately after the downloading process is complete, a notification will appear on the Google Chrome downloads icon.

    The downloading process is complete.
    You will be notified when the downloading process is complete.

    However, if it does not pop up or the notification window disappears before you see it, navigate to the downloads folder.

    Open the Downloads folder, and you will find the downloaded installer file there.

    How to Change the Download Storage Location Folder.

    If you wish to change the storage location on your Windows computer, you can use the “Save As” dialog. To do this, click one of the available locations in the left-side menu. For example, I selected Desktop to change the location from Downloads to Desktop.

    Changing the download storage location in your computer
    On the left side of the navigation menu, click to choose any of the storage locations to keep your downloaded file.

    Additionally, if you want to access more storage locations on your hard drive, click on the “This PC” menu. Next, scroll past the system folders to see the primary storage locations on your Windows computer hard drive. For example, the screenshot below shows three storage locations on my hard drive.

    Different hard drive storage locations.
    Click the “This PC” menu to access more storage locations on your hard disk drive.

    Pro Tips

    Double-click on any available storage to open it and store your file. If your hard drive has not been partitioned, you will only see one storage location when you open it.

    Therefore, you do not have to worry about it. Nonetheless, you can choose among the system folders such as 3D Objects, Desktop, Documents, Downloads, Music, Pictures, and Videos.

    Before we embark on step 2 of the download process, let us look at an alternative way to download the AWS CLI installer.

    Alternative Download Method: How to Find the AWS CLI Installer Using Your Browser (e.g., Mozilla Firefox or Opera Mini)

    If you are not interested in opening the provided link, or it does not work on your current device, there is another way. The second method is to use your favorite browser on your Windows machine to search for the installer online and follow the installation procedures. For example, my favorite browser is Google Chrome. I got the first search result below by typing “AWS CLI for Windows” in the search bar and pressing the Enter key on my keyboard.

    Google Chrome browser search settings.
    Open your favorite browser, search “AWS CLI for Windows,” and press the Enter key on your keyboard.

    On the same note, you will receive similar results if you perform the exact search in your browser. Then, you can download it from one of the sources and install the AWS CLI. For example, my search results from the Mozilla Firefox browser were as follows.

    Mozilla Firefox Search Results
    Search results on the Mozilla Firefox browser.

    Similarly, an internet search using the Opera Mini browser yielded the same results. See the screenshot below, captured in real time.

    Opera Mini browser search results.
    Similar internet search results that were obtained using the Opera Mini browser.

    Upon opening the first search result on any browser, you will land on this AWS Documentation page, as shown in the screenshot below. Every procedure described in it is correct. However, reading and choosing the correct download button or link will take time.

    AWS Documentation page for AWS CLI.
    Opening the first link on the search results leads to the official AWS CLI Documentation page.

    As you can see from the above, these results are less specific, so you have to choose for yourself, unlike in the first method, where I direct you to the exact installer version and download link.

    Pro Tip Recommendation

    Follow the first method. It directly opens the Official AWS downloads page, ensuring a reliable and secure file download. I tested it before writing this guide.

    Step 2: Run the Installer

    Navigate to the local storage location in your computer where you saved the installer. Next, double-click to run it. Then, follow the on-screen instructions to complete the installation process. For this tutorial, I opened the installer from the downloads storage location by clicking on it.

    Double click the installer to run it.
    Double clicking the installer opens the version 2 setup wizard.

    Then, as highlighted above, I clicked the “Next” button on the setup installation wizard. Next, I accepted the end-user agreement license for the wizard to proceed.

    Accept the end user agreement license.
    Select the checkbox below the license agreement to accept the end-user terms in the agreement.

    Select the checkbox before the agreement statement and click Next. The wizard will open the installation panel within a moment. Click Install at the bottom of the installation panel.

    Click the "Install" button to start the installation process.
    Click the install button to continue with the installation process.

    Next, when prompted by Windows User Account Control to allow changes, click Yes to proceed.

    Click “Yes” when prompted by the User Account Control dialog.
    Click “Yes” to allow the application to make changes to your Windows computer.

    After that, the installation wizard will ask you to choose the default installation location or a custom one. Leave the default and click Next. By default, the installer places the CLI in the “C:\Program Files\Amazon\AWSCLI” directory.

    Default installation path or folder.
    Leave the installation folder and path as default.

    Then click Next to install, and wait for the installation process to complete.

    Installation in progress. Wait until the setup successfully finishes the installation.
    Installation in progress. Wait until the setup successfully finishes installing.

    The process takes a short while. When the setup is done, the following panel will appear, asking you to finish the installation.

    Wizard installation completed successfully.
    Click the “Finish” button to end the installation process of the Command Line Interface.

    Step 3: Verify the Installation

    Next, use the Windows command prompt terminal to verify that the Command Line Interface has been installed correctly. To open the command prompt (cmd), go to the Windows search bar on your taskbar and type cmd. From the popup menu, click open.

    Open the Windows 10 Commandline Interface (CMD) to verify installation.
    Open the Windows CMD to verify if the CLI version 2 has been installed successfully.

    Once the cmd opens, type the command aws --version and press Enter.

    Type in the CMD the verification command.

    If the installation was successful, you should see the version of the CLI displayed in the command prompt interface as follows.

    AWS CLI version 2 has been installed successfully on the Windows 10 machine.
    Now, you have verified that the AWS CLI was installed successfully on your Windows 10 computer or PC.
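    If you prefer to check the installation programmatically, for example from a Python script, a small helper like the one below can locate the aws executable and capture its version string. This is only an illustrative sketch, not part of the official installation steps; the helper name is my own invention.

    ```python
    import shutil
    import subprocess

    def aws_cli_version():
        """Return the AWS CLI version string, or None if the CLI is not on PATH."""
        exe = shutil.which("aws")  # resolves aws.exe on Windows, aws on Linux/macOS
        if exe is None:
            return None
        # AWS CLI v2 prints a line like "aws-cli/2.x.x Python/3.x ..." when asked
        result = subprocess.run([exe, "--version"], capture_output=True, text=True)
        output = (result.stdout or result.stderr).strip()
        return output or None

    if __name__ == "__main__":
        version = aws_cli_version()
        if version is None:
            print("AWS CLI not found on PATH. Re-run the installer or restart your terminal.")
        else:
            print("Installed:", version)
    ```

    Running this right after installation is a quick sanity check that the installer also updated your PATH; if it returns None in a terminal you opened before installing, open a new terminal and try again.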

    Conclusion

    In conclusion, mastering the AWS CLI is a valuable skill for anyone working with AWS cloud services. By following the steps outlined in this guide, you can quickly download and install the AWS Command Line Interface on your Windows machine, enabling you to effectively manage your cloud services and resources using the command line.

    Whether you’re deploying new EC2 instances, managing S3 buckets, or automating complex workflows, the AWS CLI puts the power of AWS at your fingertips. Start exploring the possibilities today and unlock the full potential of cloud management with AWS CLI.

  • What is Cloud Computing? Definition, Types, Benefits, & More

    What is Cloud Computing? Definition, Types, Benefits, & More

    Cloud computing has become a disruptive force, redefining how people (you and I) and businesses engage with technology in a digital age where data is critical and connectivity is everything.

    This article explores the definition, background, uses, benefits, drawbacks, and potential future developments of cloud computing services and technology to give readers a thorough understanding.

    What is Cloud Computing? Definition and Overview

    In technology today, cloud computing is described as the provision and delivery of different web-based (cloud) services to customers over the internet, on demand. Since the computing services are provided over the internet, users do not need on-premises information technology equipment to access them.

    In other words, cloud computing is the remote, on-demand delivery of computing services such as processing power, storage, databases, analytics, and software hosted in the cloud, where users access resources and applications stored outside their premises or business location. In this case, “the cloud” refers to the internet, which is at the core of cloud computing, facilitating all the communication and interaction between users and cloud resources. Therefore, from the definitions above, one can draw two main conclusions:

    1. For individuals, businesses, or organizations using these services, outside suppliers usually manage and maintain the underlying infrastructure. These suppliers are collectively known as cloud service providers (CSPs). Major CSPs include AWS, Google Cloud, and Microsoft Azure.
    2. Users pay only for the services they use. Thus, cloud services give them cost-effectiveness, scalability, and flexibility. With “pay-as-you-go,” businesses can easily adjust their service usage to meet growing demand.
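    To make the pay-as-you-go idea concrete, here is a toy calculation comparing a fixed on-premises budget with usage-based billing. All rates and usage figures below are invented for illustration only; real cloud pricing varies by provider, region, and service.

    ```python
    # Hypothetical pay-as-you-go comparison; every number here is made up.
    HOURLY_RATE = 0.10          # assumed price per compute-hour, in dollars
    FIXED_MONTHLY_COST = 500.0  # assumed cost of an always-on, on-premises server

    def pay_as_you_go_cost(hours_used: float, rate: float = HOURLY_RATE) -> float:
        """Bill only for the hours actually consumed."""
        return hours_used * rate

    # A workload that runs 6 hours a day for 30 days
    usage_hours = 6 * 30
    cloud_cost = pay_as_you_go_cost(usage_hours)

    print(f"Pay-as-you-go: ${cloud_cost:.2f}")        # 180 hours at $0.10/hour
    print(f"Fixed on-premises: ${FIXED_MONTHLY_COST:.2f}")
    ```

    The point of the sketch is the shape of the bill, not the numbers: with usage-based billing, cost scales with actual demand, while a fixed server costs the same whether it is busy or idle.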

    What are the 3 Types of Cloud Computing?

    If you look at the definition of cloud computing, part of it says, “It is the delivery of on-demand,” meaning that users’ needs vary and not all cloud services are the same for everyone. For example, the AWS Cloud platform and services are not the same as Microsoft Azure cloud platform services, but they are similar in many ways, including the fact that both are cloud services platforms. Therefore, it means that no single cloud service fits all needs or is suitable for everyone.

    In summary, before you begin using a particular cloud service, consider the available types, models, and services to choose the one that will help you develop a solution that meets your needs and those of your clients and business activities. The three main types to choose from are private, public, and hybrid cloud.

    Three main types of clouds in web-based computing: Private, Public, and Hybrid Cloud.
    The three main types of clouds in web-based computing are Private, Public, and Hybrid Cloud.

    i. Private Cloud

    As the first word implies, these are exclusive computing resources and cloud services used by a single business, company, or organization. This type of cloud can be implemented in two main ways. First, a company can acquire the infrastructure and locally host the services in a data center found in their physical premises. Secondly, a company can hire a third-party service provider to implement and host their private cloud. In conclusion, in the private cloud setup, the cloud infrastructure and services are entirely managed on a privately secured network.

    ii. Public Cloud

    According to Microsoft Azure, one of the prominent cloud service providers, these are third-party cloud services owned, operated, and provisioned over the public internet. Thus, they are readily available to anyone who wants to use them. Also, it is essential to note that public clouds can be used for free with limited access, or paid for to unlock more advanced services and functionality. For example, after I installed Microsoft Windows 10, I was given 5 GB of free storage space on OneDrive. If I need more storage space, I must subscribe for more, up to 100 GB.

    Finally, in this type of cloud, you have to create an account and use a web browser for access and resource management. This is because everything, including software, hardware, and supportive infrastructure, is owned and maintained by the public cloud owner. For example, Microsoft Azure is a public cloud you can use for free, and Microsoft Corporation owns and manages everything.

    iii. Hybrid Cloud

    When we talk about hybrid cloud, we’re referring to a cloud system that combines private and public clouds. It is developed using technology that links the two and allows applications and business data to be shared between them. Sharing and allowing data and applications between the two types of clouds gives you flexibility and more options in infrastructure deployment. Thus, a hybrid cloud helps you and your business optimize the existing infrastructure, enhancing security and compliance with industry standards.

    Based on the three types of cloud computing, you can deploy and implement cloud services in four broad models: SaaS, IaaS, PaaS, and Serverless computing. In the next section, I’ll take you through each of these types of cloud services. Without wasting time, let’s dive into details.

    What are the 4 Main Types of Cloud Computing Services Models?

    4 types of cloud computing services models.
    The 4 types of cloud computing service models.

    Generally speaking, it is essential to understand these service models and their differences to choose the right one for your business goals. The models are sometimes called the “cloud computing stack” because they build on top of each other. The computing services fall under the following four primary service models:

    a. Infrastructure as a Service (IaaS):

    This model is the most common cloud computing service among businesses and organizations. The IaaS online platform virtually offers users all the necessary system resources (hardware and software) required for normal business operations. Some of the resources include Virtual Machines (VMs), networks, operating systems, and storage spaces. In this case, everything provisioned to do with operating systems, apps, and middleware is under user control, with the cloud provider handling the underlying infrastructure. You rent the IT resources and services you want to use and pay for what you use in the pay-as-you-go plan.

    b. Platform as a Service (PaaS):

    This is an on-demand development environment cloud service designed and supplied to businesses to enable their developers to quickly and effectively build, test, deploy, and manage software applications. Therefore, with PaaS, developers do not worry about setting up and managing the underlying infrastructure but should focus on creating and delivering mobile and web applications.

    Based on the above definition and interpretation, PaaS focuses on freeing developers from traditional IT resource management tasks and chores that consume a lot of time. Thus, to make the process of developing applications more efficient, PaaS providers deliver middleware, runtime environments, and development tools over the internet.

    As a result, and in simple terms, developers are not involved in setting up, maintaining, or managing the development environment and tools such as servers, databases, networks, and storage.

    c. Software as a Service (SaaS):

    With SaaS, a business does not need to install an on-premises application. Instead, it subscribes to the services it intends to use to meet its goals. The application software is then delivered over the internet, and users access the resources and applications using their favorite web browsers or specific Application Programming Interfaces (APIs). At the same time, the service provider manages functionality, updates, and maintenance.

    d. Serverless Computing:

    Like PaaS, serverless computing eliminates the need for developers or users to set up, configure, manage, and maintain infrastructure. In other words, in an automatically scaled and managed mechanism, the cloud provider supplies all the necessary backend resources and services for development in what is known as “Backend as a Service (BaaS).” Therefore, service providers create and manage the infrastructure developers need, including capacity planning, setup, server maintenance, and management.

    Since users are freed from IT infrastructure setup and management chores, they can focus on front-end development tasks. Thus, their primary role is to create the application’s functionality and run the code directly to deploy and launch it in the provider’s infrastructure using APIs or the internet.

    Examples of Serverless Computing Services:

    Below are the four main examples of commonly used and readily available serverless platforms you can use immediately. In a future post, I’ll describe each and recommend the best service provider. The list is not exhaustive and includes:

    i. Google Cloud Functions.

    ii. AWS Lambda.

    iii. Microsoft Azure Functions.

    iv. IBM Cloud Functions.

    What Does the Name “Serverless” Mean in Cloud Computing?

    It is important to note that being serverless does not mean the absence of servers; the servers are there, and applications run and operate on them, but on the cloud provider’s side. Finally, in Serverless Computing, architectures are event-driven, meaning they use resources when triggered and release them when no events are to be handled.
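    To illustrate the event-driven model described above, here is a minimal AWS Lambda-style handler in Python. The function runs only when an event triggers it; the event shape used here is invented for illustration, and in a real deployment you would upload code like this to a platform such as AWS Lambda rather than run it locally.

    ```python
    import json

    def lambda_handler(event, context):
        """Minimal event-driven handler: it runs only when the platform invokes it.

        `event` carries the trigger's payload; `context` provides runtime metadata
        (unused in this sketch). The return value follows a common HTTP-style shape.
        """
        name = event.get("name", "world")  # hypothetical event field, for illustration
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

    # Local simulation of one invocation; in the cloud, the provider calls the
    # handler automatically in response to an event (HTTP request, file upload,
    # timer, queue message, ...) and releases resources when no events remain.
    if __name__ == "__main__":
        print(lambda_handler({"name": "Pythonic Brains"}, None))
    ```

    Notice that the code contains no server setup at all; capacity planning, scaling, and the servers themselves stay on the provider's side, which is exactly what “serverless” means here.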

    What are the Advantages or Benefits of Using Cloud Computing?

    Have you begun embracing or planning to adopt cloud computing in your business operations?

    If not, you should consider it early, because it is the new wave, and joining the rest will revolutionize how your business operates and engages its customers. Before I list and discuss the benefits of using cloud computing, check out the following stats to understand why you should start using it.

    Interesting and Advantageous Statistical Facts

    As of 2024, according to a usage statistics summary by Rok Krivec compiled from different sources, including the Colorlib blog:

    • a. 94% of companies worldwide already use some form of cloud computing.
    • b. More than 61% of the free cloud computing plans on different provider platforms are used for personal file storage.
    • c. 40% or more of organizations and companies use cloud computing services in their daily operations to optimize time and costs.
    • d. User expenditure on cloud services was estimated to be worth more than $600 billion in 2024, and the growth is predicted to be even bigger in the coming years.
    • e. 94% of cloud storage use is for publicly sharing files.
    • f. Furthermore, research insights revealed that by 2025, more than 85% of global organizations will have embraced a cloud-first principle in their endeavors.

    In brief, these numbers indicate that many businesses have changed their traditional ways of thinking about their information technology (IT) resources. This mindset shift is because cloud computing has caused a huge shift in the way IT networks and resources are acquired, set up, secured, and maintained.

    Moving on to our main agenda, you get numerous advantages from using cloud computing services. In this article, I will cover the top 7 common benefits. They include:

    1. Global Scalability: Because cloud resources may be adjusted in size as needed (elastically), enterprises can meet seasonal demand and varying workloads without over-provisioning. From a cloud computing perspective, this means delivering IT resources and services such as storage, computing power, and bandwidth in the right amount when needed and from the right geographical location worldwide. For example, as of August 30, 2024, AWS has 34 geographical regions containing 108 availability zones in its global infrastructure to support scalability.

    2. Cost-Effectiveness: Pay-as-you-go pricing strategies minimize upfront capital investment in hardware and infrastructure by allowing businesses to pay for only the resources and services used. With cloud computing, businesses and organizations do not purchase software or hardware to set up, run, and maintain on-premises data centers. Thus, IT costs are significantly optimized.

    3. Speed: If you remember what we said when defining cloud computing, the services and resources are provisioned on-demand – only when required and automatically adjusted to cater to any changes in usage. Since most of the services are provided on demand and based on a self-service mechanism, businesses can easily provision any amount of computing resources in minutes, if not in seconds, with only a few clicks. As a result, cloud computing enables businesses to eliminate any capacity planning pressure on their shoulders and focus on other important goals.

    4. Increased Productivity: Cloud computing frees IT teams from daily tedious traditional data center tasks and enables them to focus on other important business goals. As we know, traditional onsite data centers require a lot of time-consuming IT work to set up hardware and software, as well as resource maintenance, including software patching and management chores. Thus, with cloud computing, you eliminate most of these traditional IT tasks and free teams to work on other things, enhancing productivity. Additionally, due to flexibility and accessibility, employees can work remotely, collaborate, and move around the world since cloud services are accessible from any location with an internet connection.

    5. Performance: Cloud service providers offer robust infrastructure with failover and redundancy built in, guaranteeing high availability and long-term data durability. For example, in addition to the 34 regions and 108 availability zones mentioned in benefit one (Global Scalability), AWS has 600+ CloudFront POPs and 13 edge caches in different regions to ensure superb performance.

    6. Reliability: Cloud computing uses a redundancy technique to back up data and ensure that resources and services are always available when needed. A business’s IT resources are distributed or mirrored into different redundant zones in the cloud services provider’s network, so if a disaster happens in one zone, they are accessible from another one. Therefore, they guarantee inexpensive business continuity during disaster recovery.

    7. Security: Today, due to rapid advancements in technology, data breaches and privacy threats are everywhere, making IT resource protection costly. The good news is that most cloud service providers have put security policies, controls, and technologies in place to strengthen overall cloud safety. These security mechanisms help you protect your cloud infrastructure, apps, and data around the clock from potential threats. For example, I use AWS IAM (Identity and Access Management), their main access control mechanism, to manage how I access my AWS root account daily.

    What are the Uses of Cloud Computing?

    Have you used cloud computing before? Your answer is “Yes” or “No.” What if I tell you most probably you have used it, but you’re not aware? I know you are asking yourself how. In a previous blog post on Generative Artificial Intelligence – GenAI, I told you how I found out I was using Generative AI without my knowledge. Yes, you read it right – that is what I’m telling you – I was using it without knowing. You can read “How I learned I was using Generative Artificial Intelligence (GenAI)” and tell me in the comments if you have been unknowingly using GenAI or Cloud Services, just like me.

    Moving on to our discussion of the uses of cloud computing: if you have ever used an online application to send emails, watch TV or movies, edit documents, play music or other audiovisuals, store files or pictures, or play games, you may have used cloud computing without realizing it. I say so because the owners of these products and services likely use cloud computing platforms to support their operations behind the scenes.

    Check out some specific examples below and note that the list is not exhaustive – if need be, you can use the internet to find more examples.

    i. Email Sending Services

    Gmail and Yahoo Mail run on Google Cloud Platform, and Office 365 Outlook runs on Microsoft Azure Cloud Services.

    ii. Gaming Platforms

    Epic Games uses AWS Cloud for games such as Fortnite, while Xbox Cloud Gaming is hosted online on Azure Cloud.

    iii. Streaming Services

    Movie streaming companies such as Netflix and Disney+ use AWS Cloud, while Spotify, a music-listening App, uses Google Cloud to deliver services.

    iv. Document Creation or Editing Services

    Google Cloud offers document creation and editing services, including Google Docs, Sheets, and Slides. Similarly, Microsoft Azure Cloud offers the MS Office 365 suite, which includes Excel, PowerPoint, and Word.

    v. File Storage Services

    For storage and file sharing, Azure Cloud offers Microsoft OneDrive, Google Cloud provides Google Drive, and the famous Dropbox uses a hybrid cloud for file storage and management, part of which is the AWS Cloud Platform.

    Top 8 Uses of Cloud Computing

    Because of its adaptability, cloud computing has become widely used across a variety of sectors and use cases. Typical uses include:

    1. Data Storage and Backup:

    Cloud storage services that enable data backup, preservation, and disaster recovery, such as Microsoft Azure Blob Storage, Google Cloud Storage, and Amazon S3, allow for scalable, dependable storage alternatives for companies of all sizes.

    2. Cloud-Native Applications Development and Deployment:

    Platform as a Service (PaaS) suppliers such as Google App Engine, Heroku, and Microsoft Azure App Service give developers the platforms and tools they need to create, launch, and scale apps quickly, which cuts down on infrastructure costs and time to market. The creation and deployment of these cloud-native applications are supported by cloud-native approaches and technologies such as DevOps, containers, microservices architectures, Kubernetes, and API-driven communications.

    3. Big Data and Analytics:

    Cloud systems, such as AWS Elastic MapReduce, Google BigQuery, and Azure HDInsight, provide robust tools and services for processing and analyzing massive amounts of data, enabling businesses to get insightful knowledge and make informed decisions.

    4. Enterprise Collaboration:

    In this case, collaboration is facilitated by software-as-a-service (SaaS) solutions such as Microsoft 365, Google Workspace, and Slack. These solutions provide productivity tools, messaging platforms, and file-sharing features accessible from any device.

    5. Internet of Things (IoT):

    Cloud computing makes real-time monitoring, predictive maintenance, and intelligent automation possible across a range of industries. It offers the scale and processing power needed to manage and analyze the data generated by IoT devices.

    6. Streaming—Audio and Videos:

    Due to cloud computing’s global distribution capabilities, you can use any device to connect with any audience and stream high-quality videos and audio around the world. The only condition you have to meet is internet connectivity for access.

    7. On-Demand Software Delivery:

    With the SaaS (Software as a Service) cloud service model, you can easily and effectively deliver new and trending software versions, and the latest updates for existing software, to your customers whenever they need them, wherever they are in the world.

    8. Build and Test Your Applications:

    With the cloud, you can cut app development time and costs by using readily available and easy-to-use computing infrastructures with an automated scale-up or scale-down mechanism that allows you to cope with changing usage.

    Naturally, everything, including cloud computing, has positive and negative qualities. In layman’s language, these are the good or bad sides. So far, in this article, we have already seen the positive qualities of web-based computing services. Next, let us look at some of its disadvantages.

    What are the Disadvantages of Using Cloud Computing?

    Cloud computing does, however, also come with certain drawbacks and issues. They include but are not limited to the following four. You can find more through research, but I’ll briefly take you through the listed four for this tutorial.

    a. Security and Compliance:

    Storing sensitive data in the cloud raises questions regarding data privacy, security, and compliance with industry standards and regulations. As a countermeasure, organizational data needs to be safeguarded and protected by implementing strong security measures and encryption methods.

    b. Vendor Lock-In:

    Client overreliance on a single provider, or a cloud provider monopoly, might reduce adaptability and make it more challenging to integrate with other services and platforms. Using hybrid or multi-cloud systems can help reduce this risk.

    c. Performance and Latency:

    Network latency and bandwidth restrictions can impact the performance of cloud-based applications, particularly for latency-sensitive workloads that need real-time processing and responsiveness.

    d. Data Transfer Costs:

    Transferring massive amounts of data into and out of the cloud can involve substantial expenses, especially for applications and projects requiring a lot of bandwidth and data movement.

    Top 10 Examples of Industries and Businesses Benefiting from Cloud Computing: Applications and Use Cases

    With the current rapid technological advancements, every industry has felt the impacts of revolutionizing technologies such as cloud computing, Artificial Intelligence (AI), and Generative AI, among others. Today, in our society, nearly all of us can agree that technology has changed our daily lives. We do things differently nowadays because technology, such as cloud computing, has made life convenient by impacting most sectors and perspectives, if not everything we do.

    Thus, cloud computing changes are felt right away, from how we learn in school to how we get diagnoses and treatment in our hospitals. Some of the sectors that have been revolutionized, and still face further technological transformation, are:

    1. Education
    2. Financial Systems
    3. Insurance
    4. Banking Services
    5. Healthcare
    6. The Legal Industry
    7. Real Estate
    8. Hospitality
    9. Manufacturing and Production.
    10. eCommerce

    What are the Predicted Future Trends in Cloud Computing?

    Looking ahead, the following trends will influence how cloud computing develops:

    1. Edge Computing: The demand for edge computing solutions will be driven largely by two major factors. First, the rapid development and growth of Internet of Things (IoT) devices and the data they generate. Second, the resulting increase in requirements for real-time processing of data from IoT equipment or devices. Edge computing solutions take data processing and analysis closer to the point of data generation.
    2. Serverless Computing: By abstracting infrastructure management, serverless architectures free up developers to concentrate on developing code rather than setting up or maintaining servers. For event-driven and microservices-based applications, serverless technologies like AWS Lambda and Google Cloud Functions are becoming more and more popular.
    3. Adoption of Hybrid and Multi-Cloud Strategies: In the near future, many companies and organizations are expected to take advantage of the increasing number of cloud providers to prevent vendor lock-in by adopting hybrid or multi-cloud strategies at an increasing rate. This method provides more choice, flexibility, and resilience when distributing workloads among various settings.
    4. AI and ML: With Artificial Intelligence (AI) and Machine Learning (ML) techniques producing pre-trained, smart models, APIs, and scalable computing resources, cloud providers are allowing developers to create intelligent apps by incorporating AI and ML capabilities into their platforms. This trend is expected to grow, since these two technologies are being rapidly adopted in business activities and individual life.

    Conclusion

    To sum up, cloud computing redefines how we use and provide IT services by offering businesses of all sizes unmatched flexibility, scalability, and innovation. Numerous cloud service providers exist, including the AWS, Google, and Microsoft Azure Cloud platforms. Additionally, there are three main types of clouds: private cloud, public cloud, and hybrid or multi-cloud. Thus, your role is to choose one or a combination to achieve your goals.

    With the three types mentioned above, you have four different cloud service models to choose from, adopt, and implement in your business. The models are Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Serverless Computing. In addition, it is important to note that Serverless Computing is sometimes called Function as a Service (FaaS). However, as we shall see in a future post, FaaS is one of the Serverless Computing concepts, along with backend as a service (BaaS).

    Furthermore, we have learned that we use cloud computing in different ways, both directly and indirectly. By now, you should know whether you have been using the cloud in your business activities or personal life. Also, now you know that using the cloud has both positive and negative impacts; thus, it is your mandate to ensure you use it for positive gains.

    Even though issues with security, vendor lock-in, and performance still exist, new developments and trends indicate that the use and development of cloud computing will accelerate in the years to come. Stay tuned for new advancements; I will bring them to you here in the Pythonic Brains cloud computing category.

  • How to Add the Control Panel, This PC, Network, and My Documents Icons to Windows 10 Desktop

    How to Add the Control Panel, This PC, Network, and My Documents Icons to Windows 10 Desktop

    Have you ever found yourself in a hurry to access something on your personal computer (PC), but you do not have a direct-access icon on your desktop? Or do you find the process somewhat long because you must go through File Explorer to reach it? For example, you may need to access the Control Panel, This PC, Network, or My Documents ASAP, but you do not have the icon on your desktop.

    Furthermore, you may want to quickly access crucial files, storage partitions, or program settings on your Windows 10 computer. If you have found yourself in this situation, you are not alone; most Windows users have found themselves in such a scenario.

    As a result, in this detailed tutorial, I will guide you through adding all the essential Windows 10 icons to your desktop for easy access and use. The icons are the Control Panel, This PC, Network, and My Documents (sometimes called User’s Files).

    The Step-by-Step Process to add the Control Panel, This PC, Network, and My Documents Icons to Windows 10 Desktop

    Do you love straightforward processes in life? You are not alone; I do, too. I enjoy getting things done effortlessly, which is why I prepared this tutorial for my fellow shortcut lovers. However, taking shortcuts does not mean skipping the proper channels; it means simplifying the process until it is amazingly simple. In my case, these icons and other Windows 10 shortcuts make my life a lot more enjoyable with only one click.

    Follow these steps to add all or your favorite of the four icons.

    Step 1

    On a blank desktop area, right-click and choose “Personalize” on the contextual or popup menu that appears.

    Right-click on empty desktop space and choose Personalize on the popup menu.

    Next, in the personalization settings window that opens, select and click “Themes” on the left navigation menu.

    Click Themes on the left-side navigation menu in the Settings window.

    Immediately, the Windows 10 Themes menu opens with various options to choose from.

    Next, click “Desktop icon settings” on the right-side navigation menu under the Related Settings option to open the icons panel.

    On the Themes menu, under Related Settings, click Desktop icon settings on the right.

    In the Desktop Icons tab, you will see four icons with unmarked checkboxes: Computer (This PC), User’s Files (My Documents), Network, and Control Panel.

    The Desktop Icon Settings: Control Panel, This PC, Network, and My Documents.
    Select the checkboxes for any of the four desktop icons you want to add to your desktop.

    Note: The Recycle Bin checkbox is already selected because it was placed on your desktop by default during Windows Operating System (OS) installation. Therefore, any other icon you mark will appear on the desktop alongside the Recycle Bin.

    Step 2

    Select and tick the Computer, User’s Files, Network, and Control Panel checkboxes. After that, click “Apply” to apply the changes.

    In my case, I chose all of them and clicked Apply.

    Then, click OK to finish the process.

    Next, click OK to finish the process.

    Step 3: Confirm that the Control Panel, This PC, Network, and My Documents icons were added to your desktop

    After completing the process, head to your desktop and confirm that the icons have been added. In this tutorial, they were successfully added, as shown by numbers 1–4 in the desktop screenshot below.

    Go back to your desktop, and you will find the icons there.

    To Sum It Up!

    From now on, you can easily use them to access your PC settings, features, and other resources as soon as you need to without lengthy procedures. For example, to uninstall a program, you double-click the Control Panel icon on the desktop and access the uninstall options menu.
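    For shortcut lovers who also like to script, the same change can be made without the Settings app. As far as I know, Windows 10 stores each desktop icon’s visibility as a per-user registry value under a HideDesktopIcons key, keyed by well-known shell-folder CLSIDs. Below is a minimal Python sketch of that approach; the GUIDs and key path are assumptions you should verify on your machine, and you should back up your registry before running anything like this for real.

    ```python
    """Sketch: enable the Control Panel, This PC, Network, and User's Files
    desktop icons via the registry instead of the Settings UI.
    Assumption: Windows 10 keeps a DWORD per icon (0 = shown, 1 = hidden)
    under the per-user HideDesktopIcons\NewStartPanel key."""
    import sys

    # Well-known shell-folder CLSIDs for the four icons in this tutorial
    # (treat these as assumptions to double-check on your own system).
    DESKTOP_ICONS = {
        "This PC": "{20D04FE0-3AEA-1069-A2D8-08002B30309D}",
        "User's Files": "{59031A47-3F72-44A7-89C5-5595FE6B30EE}",
        "Network": "{F02C1A0D-3AEA-1069-A2D8-08002B30309D}",
        "Control Panel": "{5399E694-6CE5-4D6C-8FFF-50BDB05AED6F}",
    }

    ICON_KEY = (r"Software\Microsoft\Windows\CurrentVersion"
                r"\Explorer\HideDesktopIcons\NewStartPanel")

    def show_desktop_icons():
        """Set each icon's hide flag to 0 (visible) for the current user."""
        import winreg  # Windows-only standard-library module
        with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, ICON_KEY) as key:
            for name, clsid in DESKTOP_ICONS.items():
                winreg.SetValueEx(key, clsid, 0, winreg.REG_DWORD, 0)
                print(f"Enabled desktop icon: {name}")

    if __name__ == "__main__" and sys.platform == "win32":
        show_desktop_icons()
    ```

    After running it, you may need to refresh the desktop (or restart Explorer) before the icons appear. The Settings route in the steps above remains the safest option, since it avoids editing the registry directly.
    
    
    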

    Share On: