List of Top Trending Deep Learning Algorithms

Hello pupils! Welcome to the following lecture on deep learning. As we move forward, we are learning about many of the latest tools and techniques, and this course keeps getting more interesting. In the previous lecture, you saw some important frameworks in deep learning, and this time, I am here to introduce you to some fantastic deep learning algorithms. These are not only important to understand before moving into the practical implementation of the deep learning frameworks, but they also make it easier to understand the applications of deep learning and related fields. So, get ready to learn the magical algorithms that make deep learning so effective and cool. Yet before going into details, let me list the questions we are trying to answer.

  • How were deep learning algorithms introduced?

  • How do deep learning algorithms work?

  • What are some types of DL algorithms?

  • How do these algorithms differ from each other?

Deep learning plays an important role in the recognition of objects, and therefore people use this feature in image, video, and voice recognition, where the objects are not only detected but can also be changed, removed, edited, or altered using different techniques. The purpose of discussing these algorithms with you is to give you the knowledge and practice to choose the right algorithm for your task and to give you a sense of the efficiency and working of each algorithm. Moreover, we will discuss the applications to give you ideas for new projects, whether by merging two or more algorithms together or by creating your own algorithm.

What is a Deep Learning Algorithm?

Throughout this course, you have been learning that, with the help of deep learning, computers are trained in such a way that they can make human-like decisions and act like humans using their own intelligence. Now it is time to learn how they do this and what the core reason is behind the success of these intelligent computers.

First of all, keep in mind that deep learning happens in different layers, and these layers are run with the help of algorithms. We introduce the deep learning algorithm as:

“Deep learning algorithms are the set of instructions that are designed dynamically to run on the several layers of neural networks (depending upon the complexity of the neural networks) and are responsible for running all the data on the pre-trained decision-making neural networks.”

One must know that, in classic machine learning, training becomes tough when working with complex datasets that have hundreds of columns or features. Classic algorithms struggle at this scale, so developers are constantly designing more powerful algorithms through experimentation and research.

How Do Deep Learning Algorithms Work?

When people use different types of neural networks with the help of deep learning, they have to learn several algorithms to understand the working of each layer of the network. Basically, these algorithms depend upon ANNs (artificial neural networks), which follow the principles of the human brain to train the network.

While the training of the neural network is carried out, these algorithms take the unknown data as input and use it for the following purposes:

  • To group the objects

  • To extract the required features

  • To find out the usage patterns of data

The basic purpose of these algorithms is to build different types of models. There are several algorithms for neural networks, and no single algorithm is perfect for all types of tasks. All of them have their own pros and cons, and to gain mastery over deep learning algorithms, you have to keep studying and test several algorithms in different ways.

Types of Deep Learning Algorithms

Do you remember that in the previous lectures we discussed the types of deep learning networks? Now you will observe that, while discussing the deep learning algorithms, you will utilize your concepts of neural networks. With the advancement of deep learning concepts, several algorithms are being introduced every year. So, have a look at the list of algorithms.

  1. Convolutional Neural Networks (CNNs)

  2. Long Short-Term Memory Networks (LSTMs)

  3. Deep Belief Networks (DBNs)

  4. Generative Adversarial Networks (GANs)

  5. Autoencoders

  6. Radial Basis Function Networks (RBFNs)

  7. Multilayer Perceptrons (MLPs)

  8. Restricted Boltzmann Machines (RBMs)

  9. Recurrent Neural Networks (RNNs)

  10. Self-Organizing Maps (SOMs)

Do not worry; we are not going to discuss all of them at once but will cover only the important ones to give you an overview of the networks.

Convolutional Neural Networks (CNNs)

Convolutional neural networks are also known as "ConvNets," and their main applications are in image processing and related fields. If we look back at their history, we find that they were first introduced in 1998. Yann LeCun initially referred to the network as LeNet. At that time, it was introduced to recognize ZIP codes and other such characters.

Layers in CNN

We know that neural networks have many layers, and similar is the case with CNN. We observe different layers in this type of network, and these are described below:


  1. Convolution layer: This layer contains many filters and is used to perform the convolution operations.

  2. Rectified linear unit (ReLU): This layer performs operations on the individual elements. It is called "rectified" because its output is a rectified feature map.

  3. Pooling layer: The results of the ReLU are fed into this layer as input. Pooling is a downsampling operation that reduces the dimensions of the feature map. Afterwards, the pooled two-dimensional feature map is flattened into a single flat, continuous vector.

  4. Fully connected layer: The flat vector from the pooling stage is finally fed into this last layer, where the image is classified and identified.



As a reminder, you must know that neural networks have many layers, and the output of one layer becomes the input for the next layer. In this way, we get refined and better results in every layer. 
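To make these four layers concrete, here is a minimal sketch of such a network using the Keras API (my choice of framework for illustration; this lecture does not prescribe one, and the input shape and filter counts are only examples):

from tensorflow.keras import layers, models

model = models.Sequential([
    # 1. Convolution layer: 32 filters perform the convolution operations
    layers.Conv2D(32, (3, 3), input_shape=(28, 28, 1)),
    # 2. Rectified linear unit: produces the rectified feature map
    layers.Activation("relu"),
    # 3. Pooling layer: downsamples the feature map...
    layers.MaxPooling2D((2, 2)),
    # ...which is then flattened into a single continuous vector
    layers.Flatten(),
    # 4. Fully connected layer: performs the final classification
    layers.Dense(10, activation="softmax"),
])
model.summary()

Each layer's output becomes the next layer's input, exactly as described above.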

Long Short-Term Memory Networks (LSTMs)

This is a type of RNN (recurrent neural network) with a good memory that experts use to capture long-term dependencies. By default, it has the ability to recall past information over long periods of time. Because of this ability, LSTMs are used in time-series prediction. It is not a single layer but a combination of four interacting layers that communicate with each other in a unique way. Some very typical uses of LSTMs are given below, followed by a small code sketch:

  • Speech recognition

  • Development in pharmaceutical operations

  • Different compositions in music 
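Here is a minimal, hypothetical sketch of an LSTM for time-series prediction, using the Keras API (an assumption on my part; the data, window length, and layer sizes are placeholders, not values from this lecture):

import numpy as np
from tensorflow.keras import layers, models

X = np.random.rand(100, 10, 1)  # 100 dummy sequences of 10 past values each
y = np.random.rand(100, 1)      # the value that follows each sequence

model = models.Sequential([
    layers.LSTM(32, input_shape=(10, 1)),  # the LSTM keeps long-term state across the 10 steps
    layers.Dense(1),                       # predict the next value in the series
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)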

Recurrent Neural Networks (RNNs)

If you are familiar with the fundamentals of programming, you will know that when we want to repeat a process, loops, or recurrent processes, are the solution. Similarly, a recurrent neural network is one that forms directed cycles. The unique thing about it is that the output of the previous time step becomes an input of the current time step. This means the steps are connected in a sequence, and in this way the network carries information forward from one phase to the next.

The main reason this combination is magical is that you can utilize the memory-storage feature of LSTMs together with the ability of RNNs to work in a cyclic way. Some uses of RNNs are given next:

  • Recognition of handwriting

  • Time series analysis

  • Translation by the machine

  • Natural language processing

  • Image captioning

Working of RNN

The output of an RNN follows a simple recurrence: the output produced at time step t-1 is fed back as an input at time step t, together with the new data for that step. Written as a formula:

h(t) = f(W·x(t) + U·h(t-1))

Here x(t) is the input at step t, h(t-1) is the output of the previous step, and W and U are weight matrices that are reused at every step. The series goes on in this way until the whole input sequence has been processed.

Moreover, an RNN can be used with inputs of any length, and the size of the model does not increase when the input size increases, because the same weights are reused at every step.
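A small NumPy sketch of this recurrence may help; the shapes and random weights below are purely illustrative:

import numpy as np

T, n_in, n_hid = 5, 3, 4                 # time steps, input size, hidden size
W = np.random.randn(n_hid, n_in) * 0.1   # input-to-hidden weights
U = np.random.randn(n_hid, n_hid) * 0.1  # hidden-to-hidden (recurrent) weights
x = np.random.randn(T, n_in)             # a dummy input sequence
h = np.zeros(n_hid)                      # initial state

for t in range(T):
    # h(t) = f(W·x(t) + U·h(t-1)): the previous output is an input to the current step
    h = np.tanh(W @ x[t] + U @ h)
    print("step", t, "->", h)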

Generative Adversarial Networks (GANs)

Next on the list is the GAN, or generative adversarial network. These are known as "adversarial networks" because they use two networks that compete with each other to generate realistic synthesized data. This is one of the major reasons we find applications of generative adversarial networks in video, image, and voice generation.

GANs were first described in a 2014 paper by Ian Goodfellow and other researchers at the University of Montreal, including Yoshua Bengio. Yann LeCun, Facebook's AI research director, referred to GANs as "the most interesting idea in ML in the last 10 years." This made GANs a popular and much-discussed type of neural network. Another reason I like this network is its fantastic ability to mimic: you can create music, voice, video, or any related application that is difficult to recognize as machine-made. The impressive results are making this network more popular every day, but its potential for misuse is just as great. As with all technologies, people can use it for negative purposes, so checks and balances are applied to such techniques. Moreover, GANs can generate realistic images and cartoons with high-quality results and render 3D objects.

Working of GAN

At first, the network learns to distinguish between generated fake data and sampled real data. Fake data is produced by the generator, and the discriminator learns to recognize whether a given sample is real or fake. After that, the GAN sends the results back to the generator so that it can learn from them and continue training.

If it seems a simple and easy task, think again: the recognition part is a tough job, and you have to feed in well-prepared data in the right way so that you get accurate results every time.
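To make the generator/discriminator pair concrete, here is a bare-bones sketch in Keras (my choice; the article names no framework, the training loop is omitted, and all layer sizes and the 16-dimensional noise vector are illustrative):

from tensorflow.keras import layers, models

# The generator maps a random noise vector to a fake sample (a flattened 28x28 image here)
generator = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(16,)),
    layers.Dense(784, activation="sigmoid"),
])

# The discriminator outputs the probability that a sample is real rather than generated
discriminator = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(784,)),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

During training, the two networks compete: the discriminator is trained to separate real samples from generated ones, and the generator is trained to fool it.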

Radial Basis Function Networks (RBFNs)

For problems in function approximation, we use an artificial intelligence technique called the radial basis function network. It is a little different from the previous ones. RBFNs are a type of feed-forward neural network, and their speed and performance make them attractive compared with other neural networks. They are highly efficient and have a better learning speed than many alternatives, but they require experience and hard work. Another point in their favour is the presence of only one hidden layer, with a radial basis function used as the activation function. Keep in mind that this activation function is highly efficient at approximating results.

Working of Radial Basis Function Network

  • It takes the data from a training set and measures similarities in the input; in this way, it classifies the data.

  • The input vector is fed into the input layer and passed on to the layer of RBF neurons.

  • After finding the weighted sum of the hidden-layer outputs, we obtain the network output. Each category or class of data has one output node.

  • The difference from other networks is that the neurons contain a Gaussian transfer function, whose output is inversely proportional to the distance between the neuron's centre and the input.

  • In the end, we get the output, which combines both the radial basis function of the input and the neuron parameters. A small sketch of this computation follows the list.
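Here is that computation as a small NumPy sketch; the centres, weights, and gamma below are illustrative values, not parameters from this lecture:

import numpy as np

centres = np.array([[0.0, 0.0], [1.0, 1.0]])  # one centre per RBF neuron
weights = np.array([0.7, 0.3])                # output weights, one per neuron
gamma = 1.0                                   # controls the width of the Gaussian

def rbf_output(x):
    # Gaussian transfer function: activation shrinks as x moves away from a centre
    dists = np.linalg.norm(centres - x, axis=1)
    activations = np.exp(-gamma * dists ** 2)
    # the network output is the weighted sum of the neuron activations
    return weights @ activations

print(rbf_output(np.array([0.5, 0.5])))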

So, it seems these networks are enough for today. There are other types of neural networks as well; as we said earlier, with the advancement of deep learning, more and more neural network algorithms are being introduced, each with its own specifications. At this level, we just wanted to give you an idea of the landscape. At the start of this article, we saw what deep learning algorithms are and how they differ from other types of algorithms. We then covered several types of neural networks, including CNNs, LSTMs, RNNs, GANs, and RBFNs.

IoT based Web Controlled Home Automation using Raspberry Pi 4

Greetings, and welcome to today's tutorial. In the last tutorial, we learned how to construct a people-counting system using a Raspberry Pi, background subtraction, and blob tracking. We counted the total number of entrances to and exits from a building, and feature computation and HOG theory were also discussed. The tests proved that a device based on the Raspberry Pi can effectively function as a people-counting station. One of the many benefits of the Pi 4 is its internet connectivity, which is especially useful for home automation projects due to its low price and ease of use. Today we're going to see if we can use a web page's buttons to manage our AC (mains-powered) appliances. With this Internet of Things (IoT) based home automation, you can command your home gadgets from the comfort of your couch. The user can access this web server from any gadget capable of loading HTML apps, such as a smartphone, tablet, or computer.

Where To Buy?
No. | Components | Distributor
1 | Breadboard | Amazon
2 | Diodes | Amazon
3 | Jumper Wires | Amazon
4 | LEDs | Amazon
5 | Resistor | Amazon
6 | Transistor | Amazon
7 | Raspberry Pi 4 | Amazon

Components

The needs of this project can be broken down into two broad classes: hardware and software.

Hardware Requirement

  • Raspberry Pi 4

  • Memory card 8 or 16GB running Raspbian Jessie

  • 5v Relays

  • 2N2222 transistors

  • Diodes

  • Jumper Wires

  • Connection Blocks

  • LEDs for testing

  • AC lamp for testing

  • Breadboard and jumper cables

  • 220 or 100 ohm resistors

Software Requirement

We'll be using the WebIOPi framework, Notepad++ on your PC, and FileZilla to transfer files (particularly the web app files) from your computer to the Raspberry Pi, along with the Raspbian operating system.

The Raspberry Pi Setup Process

As a good habit, I always update the Raspberry Pi before using it for the first time. In this phase of the project, we will handle the web-to-Raspberry-Pi connection by upgrading the Pi and setting up the WebIOPi framework. The Python Flask framework provides a potentially more straightforward alternative, but getting your hands dirty and looking at how things operate is what makes DIY appealing; that is where the fun begins. Use the commands below to update and upgrade your Raspberry Pi, then restart it.

sudo apt-get update

sudo apt-get upgrade

sudo reboot

After this is finished, we can set up the WebIOPi framework. Verify that you are in your home directory using:

cd ~

Download the WebIOPi files from SourceForge using wget:

wget http://sourceforge.net/projects/webiopi/files/WebIOPi-0.7.1.tar.gz

Then, once the download is complete, unzip the file and enter the directory;

tar xvzf WebIOPi-0.7.1.tar.gz

cd WebIOPi-0.7.1/

Unfortunately, I could not locate a version of WebIOPi that is compatible with the Pi 4; thus, we have to download a patch before proceeding with the setup. Run the instructions below from within the WebIOPi directory to apply the patch.

wget https://raw.githubusercontent.com/doublebind/raspi/master/webiopi-pi2bplus.patch

patch -p1 -i webiopi-pi2bplus.patch

Once we have those things in place, we can begin the WebIOPi installation process by running:

sudo ./setup.sh

Just click "Yes" when prompted to install more components during setup. Upon completion, restart your Pi.

sudo reboot

Verify the WebIOPi Setup

Before diving into the schematics and programs, we should power on the Raspberry Pi and ensure our WebIOPi installation is functioning as expected. Execute the command below;

sudo webiopi -d -c /etc/webiopi/config

After running the above command on the Pi, open a web browser on a computer connected to the same network and navigate to http://raspberrypi.mshome.net:8000 (or http://<the Pi's IP address>:8000). When logging in, you'll be asked for a username and password.

Username: webiopi

Password: raspberry

You may permanently disable this login if you no longer need it. Still, it's important to keep unauthorized users from taking control of your home's appliances and Internet of Things (IoT) components. After you've logged in, go to the GPIO header link.

Make GPIO 17 an output; we'll use it to power an LED in this test.

Following this, attach the led to the Pi 4 as depicted in the schematics.

When you're ready to turn the LED on or off, return to the web page and click the pin 11 button. This shows that we can use WebIOPi to manage the Raspberry Pi's GPIO pins. If the test is successful, we can return to the console and exit the program by pressing CTRL + C. Please let me know in the comments if you have any problems with this arrangement. Once this pilot test is finished, we can begin the actual project.

Developing a Web-Based Home-Control application for the Raspberry Pi

In this section, we will alter the WebIOPi service's standard setup and inject our own code to be executed on demand. The first tool we'll install on our computer is FileZilla or another FTP/SCP copy program. You'll agree that using the terminal to write code on the Pi is a stressful experience, so having access to FileZilla or another SCP program will be helpful. Before we begin writing the HTML, CSS, and JavaScript for this Internet of Things home automation web app and transferring the files to the RPi, let's make a project directory in which all our web scripts will be stored.

First, make sure you're in your home directory; next, create the webapp folder; finally, open the newly created folder and make an html folder inside it.

cd ~

mkdir webapp

cd webapp

mkdir html

Make subfolders inside the html folder for scripts, styles, and graphics.

mkdir html/styles

mkdir html/img

mkdir html/scripts

Now that we have our files prepared, we can start coding on the computer and transfer our work to the Pi using Filezilla.

The JavaScript Code

Writing the JavaScript will be our first order of business. It is an easy-to-use script for interacting with the WebIOPi server. Our web app will have four buttons, so we will control four GPIO pins for this project, although only two relays are used in the demonstration.


webiopi().ready(function() {
    // Configure the four GPIO pins we want to control as outputs
    webiopi().setFunction(17, "out");
    webiopi().setFunction(18, "out");
    webiopi().setFunction(22, "out");
    webiopi().setFunction(23, "out");

    var content, button;
    content = $("#content");

    // Create one button per relay and attach it to the #content element
    button = webiopi().createGPIOButton(17, "Relay 1");
    content.append(button);

    button = webiopi().createGPIOButton(18, "Relay 2");
    content.append(button);

    button = webiopi().createGPIOButton(22, "Relay 3");
    content.append(button);

    button = webiopi().createGPIOButton(23, "Relay 4");
    content.append(button);
});

The preceding code executes once WebIOPi is ready. To help you understand the JavaScript, each part is explained below:

  • webiopi().ready(function()

This tells the system to define this function and call it once WebIOPi is ready.

  • webiopi().setFunction(23,"out")

This instructs the WebIOPi program to use GPIO23 as an output. Four pins are configured this way here, but you may add more if necessary.

  • var content, button

With this line, we declare two variables, content and button.

  • content = $("#content")

The content variable refers to the element with id="content" that we will define in our HTML and style in our CSS. As a result, whatever the WebIOPi framework generates here is attached to #content.

  • button = webiopi().createGPIOButton(17,"Relay 1")

WebIOPi can make several distinct types of buttons. This line instructs the WebIOPi program to generate a GPIO button that operates pin 17, labelled "Relay 1". The other lines work the same way.

  • content.append(button)

This appends the newly created button to the content element. New buttons are added in exactly the same way, which is especially helpful when writing the CSS.

If you made your JS file the same way I did, you can save it as smarthome.js and then move it with FileZilla to webapp/html/scripts. Now we can move on to developing the CSS.

The CSS Code:

With the aid of CSS, our Internet of Things (IoT) Rpi 4 home automation website now looks fantastic. So that the website will look like the one in the picture below, I built a custom style sheet called smarthome.css.

I don't want to paste the entire CSS script here, so I'll use a subset for the explanation. If you want to learn CSS, all you have to do is read the code. You can skip this and use our CSS code if you want to.

The first section of the script, displayed below, represents the web application's main stylesheet.

body {
    background-color: #ffffff;
    background-image: url('/img/smart.png');
    background-repeat: no-repeat;
    background-position: center;
    background-size: cover;
    font: bold 18px/25px Arial, sans-serif;
    color: LightGray;
}

The above code, which I hope needs little explanation, begins by setting the background colour to white (#ffffff), adds a background image from the img folder we created earlier, makes sure the picture doesn't repeat by setting background-repeat to no-repeat, and tells the CSS to center the background and scale it to cover the page. Finally, we set the font, text size, and colour for the page.

After finishing the main content, we styled the buttons with CSS.

button {
    display: block;
    position: relative;
    margin: 10px;
    padding: 0 10px;
    text-align: center;
    text-decoration: none;
    width: 130px;
    height: 40px;
    font: bold 18px/25px Arial, sans-serif;
    color: black;
    text-shadow: 1px 1px 1px rgba(255, 255, 255, .22);
    -webkit-border-radius: 30px;
    -moz-border-radius: 30px;
    border-radius: 30px;
}

Everything else in the script is similarly written for readability and brevity. You can play with the rules and see what happens; this kind of learning is known as "learning by doing." CSS's strength lies in its simplicity, and its rules read like plain English. The rest of the block adds a few supplementary features, such as the button's text shadow and box shadow. To top it all off, pressing a button triggers a subtle transition effect, making it look polished and lifelike. To guarantee consistent behaviour in all browsers, these are defined separately for WebKit, Firefox, Opera, and so on.

The following snippet styles the slider (range-input) controls that WebIOPi can generate for analog-style input.

input[type="range"] {
    display: block;
    width: 160px;
    height: 45px;
}

Providing feedback when a button is pressed is the last element we want to implement, so the button hues give a quick indication of state. To accomplish this, the following rules are added to the stylesheet for each button.

#gpio17.LOW {
    background-color: Gray;
    color: Black;
}

#gpio17.HIGH {
    background-color: Red;
    color: LightGray;
}

The snippets above alter the button's colour depending on its state: the button's background is gray when it is inactive (LOW) and red when it is active (HIGH). Now that we have our CSS under control, let's save it as smarthome.css, upload it to the Pi's styles folder using FileZilla (or another SCP client of your choosing), and finish the remaining HTML code.

HTML Code

The HTML code unifies the style sheets and java scripts.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">

<html>

<head>

        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

        <meta name="mobile-web-app-capable" content="yes">

        <meta name="viewport" content = "height = device-height, width = device-width, user-scalable = no" />

        <title>Smart Home</title>

        <script type="text/javascript" src="/webiopi.js"></script>

        <script type="text/javascript" src="/scripts/smarthome.js"></script>

        <link rel="stylesheet" type="text/CSS" href="/styles/smarthome.css">

        <link rel="shortcut icon" sizes="196x196" href="/img/smart.png" />

</head>

<body>

                        <br/>

                        <br/>

                        <div id="content" align="center"></div>

                        <br/>

                        <br/>

                        <br/>

                        <p align="center">Push button; receive bacon</p>

                        <br/>

                        <br/>

</body>

</html>

The head tag contains several crucial elements.

<meta name="mobile-web-app-capable" content="yes"> 

The line above makes it possible to add the web app to a mobile device's home screen when using Chrome or Safari; you can access this function from the Chrome menu. This way, the app can be launched quickly on any mobile device or desktop computer.

The following line of code provides a measure of responsiveness for the web app. Because of this, it can take up the entire display of any gadget on which it is run.

<meta name="viewport" content = "height = device-height, width = device-width, user-scalable = no" /> 

The web page's title is defined in the following line of code.

<title>Smart Home</title>

The following four lines of code all connect the Html file to multiple resources it requires to function as intended.

        <script type="text/javascript" src="/webiopi.js"></script>

        <script type="text/javascript" src="/scripts/smarthome.js"></script>

        <link rel="stylesheet" type="text/CSS" href="/styles/smarthome.css">

        <link rel="shortcut icon" sizes="196x196" href="/img/smart.png" />

The first line above links the WebIOPi framework's JavaScript, which is served from the server's root directory. This script must be included whenever WebIOPi is used.

The second line tells the HTML document where to find our smarthome.js script, and the third tells it where to get our style sheet. The last line sets an icon for the mobile desktop, which is useful if we use the website as an app or a favicon.

To ensure that our HTML code displays whatever the JavaScript file generates, we include the following div tag in the body portion of the code. Its id="content" should remind you of the button container we defined earlier in the JavaScript code.

<div id="content" align="center"></div>

As before, save the HTML file as index.html and transfer it to the Pi's html folder via FileZilla.

Modifications to the WebIOPi Server for Use in Automated Household Tasks

Before we can begin sketching out circuit diagrams and running tests on our web app, we need to make a few adjustments to the webiopi service's configuration file, instructing it to look for configuration information in our HTML folder rather than the default location.

Edit the configuration by executing the following commands as root:

sudo nano /etc/webiopi/config

Find the [HTTP] section of the configuration file and look for the commented line that begins with #doc-root. The doc-root parameter sets the default directory where HTML and resources are stored. Remove the # comment, and if your folders are organized like mine, set doc-root to the location of your project's html folder:

doc-root = /home/pi/webapp/html

Lastly, save your work and exit. If you already have another server on the Pi using port 8000, you can easily change the port in the same file. If not, just save and close.

It's worth noting that the WebIOPi service password can be changed using the command;

sudo webiopi-passwd

You will be asked for a new login name and password. Getting rid of the login entirely is possible, but safety comes first.

Finally, issue the following command to start the WebIOPi service.

sudo /etc/init.d/webiopi start

If you want to see how the server is doing, you can check its status with:

sudo /etc/init.d/webiopi status

And there is a command to halt its execution:

sudo /etc/init.d/webiopi stop

Set up WebIOPi to start automatically with:

sudo update-rc.d webiopi defaults

To do the opposite and prevent it from starting up automatically, use the following;

sudo update-rc.d webiopi remove

Schematic and Explanation of a Circuit

Now that we have everything set up, we can begin developing the schematics for our Web-controlled home appliance.

I could not procure relay modules, which in my experience make electronics projects simpler for do-it-yourselfers, so I'm going to draw the diagrams for regular, standalone single relays powered at 5V.

Join the components as shown in the Fritzing diagram. It's important to remember that your relay's COM, NO (normally open), and NC (normally closed) contacts could be on opposite sides. Please verify this with a multimeter.

Relays Underlying Operating Principles

Relays can be found anywhere that electricity is being switched, from a simple traffic-light controller to a high-voltage switchyard. In the broadest sense, relays are equivalent to any other switch: they can connect or disconnect a circuit and are frequently employed to activate or deactivate an electrical load. That is a broad statement, though; there are many kinds of relays, and each behaves slightly differently depending on the task at hand. Since the electromechanical relay is one of the most widely used, we will devote the most space to it here. In spite of variations in design, all relays work according to the same fundamental concept, so let's dive into the nuts and bolts of relays and talk about how they function.

So, what exactly is a relay?

A relay is an electromechanical switch that may either establish or break an electrical connection. A relay is like a mechanical switch, except that it is activated and deactivated by an electronic signal rather than by physically flipping a lever. It comprises a flexible, movable mechanical part controlled electrically through an electromagnet. Once again, this operating concept applies only to electromechanical relays.

A common and widely used relay consists of an electromagnet typically employed as a switch. However, there are many kinds of relays, each with its own purpose. When a signal is received on one side of the device, it controls the switching activity on the other side, much like the dictionary definition of "relay." A relay, then, is an electromechanical switch that can open and close circuits. Its primary function is to make or break contact with the aid of a signal, turning a load ON or OFF automatically and without human intervention. Its primary use is to allow a low-power signal to exert control over a circuit with a high power consumption. Typically, the high-voltage circuit is controlled by a direct current (DC) signal.
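As a quick illustration of that principle in our project's context, here is a hypothetical Python sketch (using the RPi.GPIO library rather than the WebIOPi framework this tutorial actually uses) in which a low-power 3.3V GPIO signal drives the transistor and relay that switch the high-power load:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.OUT)    # GPIO17 drives the transistor base through a resistor

GPIO.output(17, GPIO.HIGH)  # energize the relay coil: COM connects to NO, load turns on
time.sleep(2)
GPIO.output(17, GPIO.LOW)   # de-energize: the armature springs back to NC, load turns off
GPIO.cleanup()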

How the Relay is Built and Functions

The following diagram depicts the internal structure and design of a Relay.

A coil of copper wire is wound around a core, which is then placed inside a housing. When the coil is energized, it attracts the movable armature, which is supported by a spring or stand and has a metal contact attached to one end; this assembly is positioned over the core. The movable armature is the common connection point between the relay's internal mechanism and the external wiring. At rest, the common terminal touches the normally closed (NC) pin, while the normally open (NO) pin stays disconnected. Whenever the coil is activated, the armature is pulled over to the normally open contact, so current can flow uninterrupted through the armature to the NO side. When the power is turned off, the armature returns to its starting position.

The picture below shows a schematic of the Relay's circuit in its most basic form.

Relay Teardown: An Inside Look

In the images below, you can see the main components of an electromechanical relay—an electromagnet, a flexible armature, contacts, a yoke, and a spring/frame/stand. They have been thoughtfully placed into a relay.

The workings of a Relay's mechanical components have been outlined below.

  1. Electromagnet

An electromagnet is crucial to the operation of a relay. Its core metal lacks magnetic properties on its own but behaves as a magnet when an electrical current is applied. It is common knowledge that a conductor takes on magnetic characteristics when current flows through it. Thus, a metal core wound with a conductive material and powered by an adequate source can operate as a magnet and attract magnetic objects within its range.

  2. Movable Armature

A moveable armature is just one piece of metal that can rotate or stand on its own. It facilitates connection-making and -breaking with the contacts attached to it.

  3. Contacts

Contacts are the internal conductors that run through the device and hook up to its terminals.

  4. Yoke

It's a tiny metal piece attached to a core that attracts and retains the armature whenever the coil is activated.

  5. Spring (optional)

While some relays can function without a spring, those that do have one attach it to the armature at one end to prevent any snagging or binding. One can use a metal "stand" in place of a spring.

Mechanism of Action of a Relay

Let's examine the differences between a relay's normally closed and normally open states. 

Relay's NORMALLY CLOSED state

If no current flows through the coil, there will be no magnetic field, and the core will not act as a magnet. As a result, it is unable to attract the flexible armature. So, the armature starts in its normally closed (NC) position.

Relay in NORMALLY OPENED state

When a high enough voltage is supplied to the coil, it creates a strong magnetic field around the core, allowing it to function as a magnet. Whenever the movable armature comes within this field of influence, the magnetic field attracts it and changes its position: it now rests on the normally open (NO) pin, so any external circuit wired to that pin is switched on, while the NC circuit no longer operates.

It is important to connect the relay pins correctly so that the external circuit does its job. When the coil is powered, the armature is drawn toward it, producing the switching action; when the power is cut, the coil loses its magnetism, and the armature returns to its original location. The animation provided below shows the relay in action.

Transistor functions in the circuit

There is nothing complicated about a transistor, yet there is a lot going on inside it. Okay, so first, we'll tackle the easy stuff. An electronic transistor is a small component that can switch between two functions. It's a switch that can also act as an amplifier.

An amplifier is a device that takes in a little electric current and outputs a significantly larger electric current (called an output current). It can be thought of as a current booster. One of the earliest applications for transistors, this is particularly helpful in devices like hearing aids. A hearing aid contains a microscopic microphone that converts ambient sound into electrical signals. These are then amplified by a transistor and used to power a miniature loudspeaker, which reproduces the ambient noise at a much higher volume.

A transistor can also be used as a switch: a small current through one part of the device allows a much larger current to flow through another part, meaning a relatively small current can activate a much larger one. All computer chips function in this general way. As an illustration, a memory chip may contain a billion individually controllable transistors. Because each transistor can exist in either of two states, it can store a zero or a one. With billions of transistors, a chip can hold billions of zeroes and ones, and almost as many regular numbers and letters.

Diode functions in the circuit

Diodes can range in size, like the one shown in the image up top. They feature a cylindrical body that is usually black with a stripe at one end and leads that protrude so that we may plug the diode into a circuit. The striped end is the cathode, and the opposite terminal is the anode.

A diode is an electrical component that restricts current flow in one direction.

To illustrate, picture a swing valve fitted in a water line. The water pressure inside the pipe forces open the swing gate, allowing the water to flow uninterrupted. In contrast, if the water reverses its course, the gate is forced shut and the flow stops. As a result, water can only flow in one direction.

A diode behaves in much the same way in a circuit: it lets current pass in one direction only, and by orienting it we decide whether that path is on or off.

We have animated this process using electron flow, in which electrons move from negative to positive. However, conventional flow, from positive to negative, is the norm in electronics engineering. It's usually best to start with conventional current because it's more familiar to most people, but feel free to use either one; we'll assume you're aware of the difference.

It's important to remember that a light-emitting diode will only light up if it is connected to the circuit in the correct orientation, as in the simple LED circuit shown above. Current can only travel through it in one direction. Accordingly, whether it conducts or insulates is determined by the orientation in which it is mounted.

For the diode to conduct electricity, you must join the striped end (the cathode) to the negative and the other end (the anode) to the positive. Forward bias is the condition in which current can flow. If we invert the diode, it becomes an insulator and stops the passage of electricity. The term for this is "reverse bias."

Exactly how would a diode function?

You probably know that electricity is the flow of free electrons between atoms. Because of its high number of loosely bound electrons, copper is widely used for electrical wiring. Rubber, an insulator whose electrons are held very securely and cannot flow between atoms, is used to wrap the copper wires for our protection.

In a simplified model of a metal atom, the nucleus is at the center, and the electrons are housed in a series of shells around it. Each shell holds a maximum number of electrons, and an electron must possess a specific amount of energy to occupy a given shell. The electrons furthest from the nucleus are the most energetic. Conductors have between one and three electrons in their outermost "valence" shell.

The nucleus acts as a magnet, keeping the electrons in place. However, there is yet another layer, the conduction band. If an electron gets here, it can leave its atom and travel to another. Because the valence shell and conduction band of a metal atom overlap, the electron can move quickly and easily between the two.

The insulator has a tightly packed outer layer. No free space for electrons to occupy. Because of the strong attraction between the nucleus and the electrons and the great distance between the nucleus and the conduction band, the electrons are trapped inside the nucleus and cannot leave. Because of this, electricity is unable to travel through it.

Of course, a semiconductor is also a different type of material. A semiconductor might be silicon, for instance. This material behaves as an insulator because it has one more electron than is necessary in its outermost shell to be a conductor. However, with enough external energy, a few valence electrons can generate enough momentum to hop across to the conduction band, where they can finally break free. Consequently, this substance can perform the roles of both an insulator and a conductor.

Due to the lack of free electrons in pure silicon, engineers must add a small number of materials (called "doping") to the silicon to alter its electrical properties.

This process gives rise to P-type and N-type doping, respectively. The diode itself is a combination of these doped materials.

Two leads connect the anode and cathode to thin plates inside the diode: P-type doped silicon on the anode side and N-type doped silicon on the cathode side, with an insulating, protective resin coating the entire structure.

Consider the material to be pure silicon before it has been doped. There are four silicon atoms surrounding each one. Because silicon atoms need eight electrons to fill their valence shells but only have four available, they share one with their neighbours. Covalent bonding describes this type of interaction.

Phosphorus, an N-type dopant, can be substituted for some silicon atoms in the crystal. Phosphorus has five electrons in its valence shell. Since the atoms around it share electrons to reach the magic number of 8, the fifth electron isn't needed. This means there's an extra electron in the material, free to go wherever it wants.

In P-type doping, a substance like aluminium is introduced. With only three valence electrons, the aluminium atom cannot share an electron with one of its four neighbours, so an electron-sized hole is made available.

We now have silicon with either too many or too few electrons, depending on the doping method.

Upon joining, the two substances forge a P-N junction, and a depletion region forms at the intersection. Here, some of the surplus electrons on the N-type side migrate over to fill the vacancies on the P-type side. As they move, electrons and holes accumulate on either side of the junction. Holes are treated as positively charged since they are the opposite of the negatively charged electrons. The accumulation produces two distinct regions, one slightly negatively charged and the other slightly positively charged, forming an electric field that blocks the path of any more electrons. In regular diodes, the voltage drop across this region is only about 0.7V.

By applying a voltage across the diode with the P-Type anode linked to the positive and the N-Type cathode attached to the negative, a forward bias is established, and current can flow. The electrons can't get over the 0.7V barrier unless the voltage source is higher.

We can reverse-bias the diode by connecting the positive terminal of the power supply to the N-type cathode and the negative terminal to the P-type anode. The barrier expands as holes are drawn toward the negative and electrons toward the positive, so the diode functions as an insulator and blocks current.

Resistor functions in the circuit

A resistor is a two-terminal, passive electrical component that reduces the current in electric and electronic circuits. By strategically placing a resistor in a circuit, the current can be lowered by a set amount. From the outside, most resistors look identical, but if you crack one open, you'll find an insulating ceramic rod inside, with copper wire wound around it. Those copper turns are crucial to the resistance: when the copper path is made thinner, resistance rises, because electrons have more difficulty passing through. We already know that electrons move more freely through conductors than through insulators.

George Ohm investigated how a resistor's dimensions and material affect its resistance. He showed that an object's resistance (R) grows in proportion to its length, so longer, thinner wires offer greater resistance, while greater wire thickness (cross-sectional area) lowers the resistance.
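In formula form, and as a worked example with illustrative numbers (not values from this article), the relationship can be written as:

R = \frac{\rho L}{A} = \frac{(1.68 \times 10^{-8}\,\Omega\cdot\mathrm{m})(2\,\mathrm{m})}{1 \times 10^{-6}\,\mathrm{m}^{2}} \approx 0.034\,\Omega

So a 2 m copper wire (resistivity about 1.68 x 10^-8 ohm-metres) with a 1 mm² cross-section has roughly 0.034 ohms of resistance: doubling its length doubles R, while doubling its cross-sectional area halves it.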

Once everything is hooked up, you can start your server, browse to the IP address of your RPi with the port you chose earlier (as mentioned in the previous section), enter your username and password, and see a page that looks like the one below.

All it takes is a few clicks of your mouse to operate four AC home appliances from afar. This can be controlled from a mobile device (phone, tablet, etc.) and expanded with additional switches and relays. Thank you all for reading to the end.

Conclusion

This guide showed us how to set up a web-based control system for our Raspberry Pi 4 home automation project. We learned how to utilize the WebIOPi API to manage, debug, and use the Raspberry Pi's GPIO, sensors, and adapters from an internet browser or any application, and we implemented the JavaScript, CSS, and HTML code for the web application. For those who thrive on difficulty, feel free to build upon this base and add whatever challenging module you can think of to the project. The following tutorial will teach you how to use a Raspberry Pi 4 to create a line-follower robot that can navigate obstacles and drive itself.

Estimating the Size of a Crowd with OpenCV and Raspberry Pi 4

Welcome to the next tutorial in our Raspberry Pi 4 Python programming series. In the previous article, we built a system that recognizes when two people are in physical contact, using OpenCV and a Raspberry Pi 4, with weights from the YOLO version 3 object recognition algorithm for the deep neural network part. Regarding image processing, the Raspberry Pi consistently comes out on top compared with other controllers; a facial recognition program was among the earlier attempts to use the Raspberry Pi for sophisticated picture processing. In today's world of cutting-edge technology, digital image processing has expanded rapidly to become an integral feature of many portable electronic gadgets.

Digital image processing is widely used for tasks such as object detection, facial recognition, and people counting. This guide will use a Raspberry Pi 4 and ThingSpeak to create a crowd-counting system based on OpenCV. We will utilize the Pi camera module to take pictures in a continuous loop, run the images through the Histogram of Oriented Gradients (HOG) descriptor to find the people in the photos, and compare them against OpenCV's pre-trained people-detection model. The headcount can be seen by anybody, anywhere in the world, because the ThingSpeak channel is public.

Knowing how many people show up to an event or purchase a newly released product is vital for event management and retail shop owners. Still, it's even more critical that they can use that information to improve future events. To their relief, modern crowd-counting technology has made it simpler for event planners and business owners to acquire actionable data on event attendance that can be used to improve ROI.

Where To Buy?
No. | Components | Distributor
1 | Raspberry Pi 4 | Amazon

Components

Hardware

  • Raspberry Pi 4

  • Pi Camera

Software & Online Services

  • ThingSpeak

  • Python3

  • OpenCV3

Instructions for Setting Up OpenCV on a Raspberry Pi

In this project, the OpenCV framework will do the people counting. You must first update your Raspberry Pi before you can install OpenCV.

sudo apt-get update

Then, get OpenCV ready for your Raspberry Pi by installing its prerequisites.

sudo apt-get install libhdf5-dev -y
sudo apt-get install libhdf5-serial-dev -y
sudo apt-get install libatlas-base-dev -y
sudo apt-get install libjasper-dev -y
sudo apt-get install libqtgui4 -y
sudo apt-get install libqt4-test -y

Once that is done, use the following command to install OpenCV on your Raspberry Pi.

pip3 install opencv-contrib-python==4.1.0.25

Additional Package Installation Necessary

We need to get some additional packages on the Raspberry Pi before we can begin writing the code for the Crowd Counting app.

Installing imutils: imutils is used to perform basic image processing tasks like translating, rotating, resizing, skeletonizing, and displaying Matplotlib images more easily with OpenCV. Run the following command to set up imutils:

pip3 install imutils

matplotlib: The matplotlib library should then be installed. When it comes to Python visualizations, Matplotlib is your one-stop shop for everything from static to animated to interactive.

pip3 install matplotlib

Configuring ThingSpeak for Headcounting

One of the most widely used IoT platforms, ThingSpeak lets us monitor our data from any location with an Internet connection, and the system can also be controlled remotely using the Channels and web pages ThingSpeak provides. To create a channel, you must first register for a ThingSpeak account; if you already have one, log in with your username and password.

Select Sign up and fill out the required fields.

Double-check your email address and press the "Next" button when you're done. Now that you're logged in, click the "New Channel" button to make a brand-new channel.

When you're ready to begin uploading information, select "New Channel" and give it a descriptive name and a brief explanation. Add one new field called "People"; any number of fields may be created as needed. Then click the "Save Channel" button after entering the necessary information. You'll need to pass your write API key and channel ID into the Python script whenever you want to submit data to ThingSpeak.

Hardware Configuration

For this OpenCV people-counting project, all you need is a Raspberry Pi and a Pi camera. To get started, plug the camera's ribbon connector into the Raspberry Pi's designated camera slot.

The Pi 4 Camera board is a purpose-built expansion board for the Raspberry Pi computer. The Raspberry Pi hardware is connected via a specialized CSI interface. In its native still-capture mode, the sensor's resolution is 5 megapixels. Capturing at up to 1080p and 30 frames/second in video mode is possible. Because of its portability and compact size, this camera module is fantastic for handheld applications.

Setup the Camera Board

A ribbon cable connects the camera board to the Raspberry Pi, associating the camera PCB with the Raspberry Pi hardware. The camera will only work if you join the ribbon cable correctly: its blue backing must face away from the camera PCB, and at the Raspberry Pi end, the blue backing must face the Ethernet port.

Histogram of Oriented Gradients

The HOG is an example of a feature descriptor, similar in spirit to the Canny edge detector. Object detection is a typical application of this technique in image processing and computer vision. The method counts occurrences of gradient orientations in localized regions of an image and has a lot in common with the Scale-Invariant Feature Transform. The HOG descriptor highlights object structure or form. This method of computing features is superior to other edge descriptors because it considers both the magnitude and the angle of the gradient: histograms are created for regions of the image based on the gradient's magnitude and direction.

How do we calculate the histogram of oriented gradient features?

First, load the image that will serve as the basis for the HOG feature calculation and reduce it to 128 x 64 pixels (height 128, width 64). The research authors utilized and recommended this size because improving detection outcomes for pedestrians was their primary goal. After achieving near-perfect scores on the MIT pedestrian database, the authors created a new, more difficult dataset: the 'INRIA' dataset (http://pascal.inrialpes.fr/data/human/), which includes 1805 (128x64) photographs of individuals cropped from a wide range of personal photos.

In this step, we compute the image's gradient, which combines the magnitude and angle of change at each pixel. Considering a 3x3 neighbourhood around each pixel, we first determine Gx and Gy using simple [-1, 0, 1] differences:

Gx(x, y) = I(x+1, y) - I(x-1, y)
Gy(x, y) = I(x, y+1) - I(x, y-1)

After Gx and Gy are determined, each pixel's magnitude and angle are computed using the following formulae:

magnitude(x, y) = sqrt(Gx² + Gy²)
angle(x, y) = arctan(Gy / Gx)
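Here is a small NumPy sketch of this gradient step on a dummy grayscale image (all values are illustrative):

import numpy as np

I = np.random.rand(128, 64)  # a dummy 128x64 grayscale image

Gx = np.zeros_like(I)
Gy = np.zeros_like(I)
Gx[:, 1:-1] = I[:, 2:] - I[:, :-2]  # Gx(x, y) = I(x+1, y) - I(x-1, y)
Gy[1:-1, :] = I[2:, :] - I[:-2, :]  # Gy(x, y) = I(x, y+1) - I(x, y-1)

magnitude = np.sqrt(Gx ** 2 + Gy ** 2)
angle = np.degrees(np.arctan2(Gy, Gx)) % 180  # unsigned gradient, 0 to 180 degrees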

Once the gradient for each pixel has been calculated, the gradient matrices (magnitude and angle) are each partitioned into 8x8 cells, which are later grouped into blocks. For each cell, a 9-point histogram is computed. Each bin in a 9-point histogram covers a 20-degree range, so the resulting histogram has nine bins in total. Each of these 9-point histograms can be plotted as a bar chart whose bins show the relative strength of the gradient across the corresponding angular intervals. Since a cell contains 64 pixels, the calculation below is carried out for each of the 64 combinations of magnitude and angle. Because 9-point histograms are being used, the angular step is Δθ = 180°/9 = 20°.

The boundaries of the jth bin are:

[Δθ·j, Δθ·(j+1)], for j = 0, 1, ..., 8

And the central value of each bin is:

Cj = Δθ·(j + 0.5)

Picture a histogram with nine discrete bins: for a particular 8x8 cell of 64 pixels, there is exactly one such histogram, and each of the sixty-four pixels contributes its Vj and Vj+1 values to the array at the jth and (j+1)th positions.

When determining the contribution of a pixel with magnitude m and angle θ in cell i, we first find the bin j whose centre Cj lies just below θ. The pixel's magnitude is then split between bins j and j+1 in proportion to how close θ is to each centre:

Vj = m·(Cj+1 − θ)/Δθ
Vj+1 = m·(θ − Cj)/Δθ

Each pixel's values Vj and Vj+1 are calculated and added to its cell's histogram at the jth and (j+1)th indexes. Upon completing the preceding steps for a 128x64 image, the resulting histogram matrix has dimensions 16 x 8 x 9. When the histograms for all cells have been computed, blocks are formed by joining 2x2 groups of neighbouring cells from the 16x8 grid. This grouping is carried out in an overlapping fashion with an 8-pixel (one-cell) stride. We create a 36-point feature vector by concatenating the 9-point histograms of the four cells that make up each block.

A combined feature vector fb is thus created for each block by traversing a 2x2 window of cells across the image.

The fb values of each block are then standardized using the L2 norm:

fb ← fb / k

The value of k for normalization is found by applying the following formula, where ε is a small constant that avoids division by zero:

k = sqrt(‖fb‖² + ε²)

Normalization is performed to lessen the impact of contrast variations between photographs of the same object. From each block, data is collected in the form of a 36-point feature vector. Across the image there are 7 block positions horizontally and 15 vertically, so the total length of the HOG feature vector is 7 x 15 x 36 = 3780. These are the HOG characteristics extracted from the image.

The HOG features can be visualized side by side with the original image using an image library.
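For instance, here is a short sketch of extracting exactly these 3780 features with scikit-image (an assumption on my part; the project code below uses OpenCV's built-in HOG people detector instead):

import numpy as np
from skimage.feature import hog

image = np.random.rand(128, 64)  # a dummy 128x64 grayscale image

features, hog_image = hog(
    image,
    orientations=9,          # the 9-bin histograms
    pixels_per_cell=(8, 8),  # the 8x8 cells
    cells_per_block=(2, 2),  # 2x2 cells per block
    block_norm="L2",         # L2 normalization of each 36-point block vector
    visualize=True,          # also return a visualization of the oriented gradients
)
print(features.shape)  # (3780,) for a 128x64 input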

Explanation of the People Counting Python Program

This page includes the complete Python code for an OpenCV project that counts people in a crowd. Here, we break down the code's crucial parts so you can understand them better. First, import all the necessary libraries that will be used later in the code.

import cv2
import imutils
from imutils.object_detection import non_max_suppression
import numpy as np
import requests
import time
import base64
from matplotlib import pyplot as plt
from urllib.request import urlopen

  • Imutils: 

For use with OpenCV on either version of Python, this package provides a set of helper functions for everyday image processing tasks such as scaling, cropping, skeletonizing, showing Matplotlib pictures, grouping contours, and identifying edges.

  • Numpy:

You can manipulate arrays in Python with the help of the NumPy library. Matrix operations, the Fourier transform, and linear algebra are all within their purview. Because it is freely available to the public, anyone can use it. That's why it's called "Numerical Python," or "NumPy" for short.

Python's list data structure can stand in for arrays, but it is slow. NumPy's intended benefit is an array object up to 50 times quicker than standard Python lists. To make working with NumPy's array object, ndarray, as simple as possible, the library provides several helpful utilities. Data science makes heavy use of arrays because of the importance placed on speed and efficiency.
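A tiny sketch (ours, not from the project code) of what vectorized NumPy arrays look like in practice:

import numpy as np

a = np.array([1, 2, 3, 4])
print(a * 2)       # element-wise, no Python loop: [2 4 6 8]
print(a.mean())    # 2.5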

  • Requests:

You should use the requests package if you need to send an HTTP request from Python. It hides the difficulties of making requests behind a lovely, straightforward API, freeing you to focus on your application's interactions with services and data consumption.
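A minimal sketch of the requests API (our example; the URL here is just a placeholder):

import requests

response = requests.get("https://example.com")
print(response.status_code)   # 200 on success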

  • Time:

In Python, the time module has a built-in method called localtime() that may be used to determine the current time in a given location from the number of seconds that have passed since the epoch. Its tm_isdst field will be 0 or 1 to indicate whether daylight saving time applies to the current time in the region.
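For illustration (our addition, not part of the project code), localtime() and tm_isdst can be used like this:

import time

now = time.localtime()                          # struct_time for the current local time
print(time.strftime("%Y-%m-%d %H:%M:%S", now))
print(now.tm_isdst)                             # 1 if DST is in effect, 0 if not, -1 if unknown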

  • Base64:

If you need to store or transmit binary data over a medium better suited for text, you should look into using a Base64 encoding technique. There is less risk of data corruption or loss thanks to this encoding method. Base64 is widely used for many purposes, such as MIME email attachments and storing complex data in XML and JSON.
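A short round-trip example of Base64 encoding (our addition, for illustration):

import base64

encoded = base64.b64encode(b"raspberry pi")
print(encoded)                    # b'cmFzcGJlcnJ5IHBp'
print(base64.b64decode(encoded))  # b'raspberry pi'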

  • Matplotlib:

When it comes to Python visualizations, Matplotlib is your one-stop shop for everything from static to animated to interactive plots. Matplotlib facilitates both straightforward and challenging tasks: you can design graphs worthy of publication, or create movable, updatable, and zoomable figures.

  • urllib.request:

If you need to make HTTP requests with Python, you may be directed to the brilliant requests library. Though it's a great library, you may have noticed that it is not a built-in part of Python. If you prefer, for whatever reason, to limit your dependencies and stick to standard-library Python, then you can reach for urllib.request!

Then, after the libraries have been imported, you can paste in the channel ID and API key for the ThingSpeak account you previously copied.

channel_id = 812060 # PUT CHANNEL ID HERE

WRITE_API = 'X5AQ3EGIKMBYW31H' # PUT YOUR WRITE KEY HERE

BASE_URL = "https://api.thingspeak.com/update?api_key= {}".format(WRITE_API)

Set the default values for the HOG descriptor. HOG has found many uses and is one of the most often implemented methods for object detection. OpenCV's pre-trained model for people detection is accessed through cv2.HOGDescriptor_getDefaultPeopleDetector().

hog = cv2.HOGDescriptor()

hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

Inside the detector() function, the Raspberry Pi receives a three-channel color image. It then uses imutils to scale the image down to the appropriate size. The detectMultiScale() method, backed by the SVM classifier set above, then examines the image to determine the presence or absence of a human.

def detector(image):

   image = imutils.resize(image, width=min(400, image.shape[1]))

   clone = image.copy()

   rects, weights = hog.detectMultiScale(image, winStride=(4, 4), padding=(8, 8), scale=1.05)

If you're getting false positives or detection failures due to overlapping capture boxes, try running the code below, which uses the non-max suppression capability from imutils to merge overlapping regions.

   for (x, y, w, h) in rects:

       cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)

   rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])

   result = non_max_suppression(rects, probs=None, overlapThresh=0.7)

   return result

With the help of OpenCV's VideoCapture() method, the image is retrieved from the Pi camera within the record() function, where it is resized with imutils before the count is sent to ThingSpeak.

def record(sample_time=5):

   camera = cv2.VideoCapture(0)

   while True:

       ret, frame = camera.read()

       frame = imutils.resize(frame, width=min(400, frame.shape[1]))

       result = detector(frame.copy())

       result1 = len(result)

       thingspeakHttp = BASE_URL + "&field1={}".format(result1)

OpenCV's People Counting Tool: A Quick Test

Now that everything is hooked up and ready to go, let's put it through its paces. Extract the program to a new folder and launch it. You'll need to give Python a few seconds to load all the necessary modules; after that, a window will pop up showing the camera's output. Make sure your Raspberry Pi camera is operational before running the Python script. Once the camera check is complete, activate the Python script with the following command:

At that point, a new window will appear with your live video feed inside of it. OpenCV will count the number of persons in the first frame that the Pi processes. A box will appear around each detected human:

Output

Now that you know how many people are expected to show up, you can check the crowd size from the comfort of your own home via your ThingSpeak channel.

You can now efficiently conduct crowd counts with OpenCV and a Raspberry Pi. This technology helps guarantee the safety of those attending large-scale events, which is a top priority for event planners. Knowing how people flow through a venue or store is crucial for offering effective crowd management services. It also improves efficiency and customer service, because it is helpful for event and store managers to track the number of people entering and leaving their establishments at any one time. Additionally, it is important for event planners to understand dwell time in order to ascertain which parts of the venue are popular with attendees and which are completely bypassed. This gives them information about how guests felt, which lets them make better use of the space they have.

Complete code

import cv2

import imutils

from imutils.object_detection import non_max_suppression

import numpy as np

import requests

import time

import base64

from matplotlib import pyplot as plt

from urllib.request import urlopen

channel_id = 812060 # PUT CHANNEL ID HERE

WRITE_API  = 'X5AQ3EGIKMBYW31H' # PUT YOUR WRITE KEY HERE

BASE_URL = "https://api.thingspeak.com/update?api_key={}".format(WRITE_API)

hog = cv2.HOGDescriptor()

hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detector(image):

   image = imutils.resize(image, width=min(400, image.shape[1]))

   clone = image.copy()

   rects, weights = hog.detectMultiScale(image, winStride=(4, 4), padding=(8, 8), scale=1.05)

   for (x, y, w, h) in rects:

       cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)

   rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])

   result = non_max_suppression(rects, probs=None, overlapThresh=0.7)

   return result

def record(sample_time=5):

   print("recording")

   camera = cv2.VideoCapture(0)

   init = time.time()

   # enforce the platform's minimum sampling interval

   if sample_time < 3:

       sample_time = 3

   while(True):

       print("cap frames")

       ret, frame = camera.read()

       frame = imutils.resize(frame, width=min(400, frame.shape[1]))

       result = detector(frame.copy())

       result1 = len(result)

       print (result1)

       for (xA, yA, xB, yB) in result:

           cv2.rectangle(frame, (xA, yA), (xB, yB), (0, 255, 0), 2)

       plt.imshow(frame)

       plt.show()

       # sends results

       if time.time() - init >= sample_time:

           thingspeakHttp = BASE_URL + "&field1={}".format(result1)

           print(thingspeakHttp)

           conn = urlopen(thingspeakHttp)

           print("sending result")

           init = time.time()

   camera.release()

   cv2.destroyAllWindows()

def main():

   record()

if __name__ == '__main__':

   main() 

Conclusion

Crowd dynamics can be affected by several things, such as the passage of time, the layout of the venue, the amount of information provided to visitors, and the overall enthusiasm of the gathering. Managers of large crowds need to be flexible and responsive in case of sudden changes in the environment that affect the situation's dynamics in real-time. Trampling events, mob crushes, and acts of violence can break out without proper crowd management.

The complexity and uncertainty of large-scale events emphasize the importance of providing timely, relevant information to crowd managers. Occupancy control technology helps event planners anticipate how many people will show up to their event, so they can prepare appropriately by ensuring adequate security guards, exits, etc.

Using a Raspberry Pi with HOG-based detection and non-max suppression, this article describes a system for counting individuals, showing how many people are present in the camera's view at any time. The principles of HOG and the calculation of its features have also been covered. The testing outcomes demonstrate the viability of using this Raspberry Pi-based device as an essential people-counting station. In the following tutorial, we'll learn how to assemble an intelligent energy monitor based on the Internet of Things and a Raspberry Pi 4.

Latest Deep Learning Frameworks

Hello peeps. Welcome to the next tutorial on deep learning. You have learned about neural networks, and it was interesting to compare their different types. Now, we are talking about deep learning frameworks. In previous sessions, we introduced some important frameworks to show how different entities connect, but at this level, that is not enough. Today we are covering, in detail, the frameworks that are in style because of their latest features. So before we start, have a look at the list of concepts that will be covered today:

  • Introduction to the frameworks of deep learning.

  • Why do we require frameworks in deep learning?

  • What are some important deep learning frameworks?

  • What is TensorFlow, and what is TensorBoard used for?

  • Why is Keras famous?

  • What is the relationship between Python and PyTorch?

  • How can we choose the best framework?

What Is A Deep Learning Framework?

Deep learning is a complex field of machine learning, and it is important to have command over different types of tools and tricks so that you may design, train, and understand several types of neural networks efficiently with the minimum amount of time. Frameworks are used in many different types of programming languages, and this is the software that, by combining different tools, improves and simplifies the operation of the programming language.

The best thing about frameworks is that they allow you to train your models without knowing or bothering about the algorithms running behind the scenes. Isn't it amazing that you get a helping hand to build and train your model without any worries? Once you know more about the different frameworks, it will be clear how each one handles specific tasks to make your training process easy and interesting.

Why do We Need a Framework for Deep Learning?

In the beginning, when you start the programming of the deep learning process by hand, you will see some interesting results related to your task. Yet, when you move towards complex tasks or when you are at the intermediate level, you will realize that it is strenuous and time-consuming to perform a simple task at a higher level. Moreover, the repetition of the same code can sometimes make you sick.

Usually, the need for a framework arises when you start working with advanced neural networks such as convolutional neural networks, or simply CNN, where the involvement of images and video makes the task difficult and time-consuming. These frameworks have pre-defined types of networks and also provide you with an easy way to access a great deal of information. 

Detail of Deep Learning Frameworks

With the advancement of deep learning, many organizations are working to make it more user-friendly so that more people can use it for advanced technologies. One reason behind the popularity of deep learning is that a great number of frameworks are introduced every year. We have analyzed different platforms and researched different reports, found some amazing frameworks, and had our experts check them over a long period to bring you the best ones for your learning. Here is the list of frameworks, some of which we will discuss in detail, along with the pros and cons of each.

  1. Tensorflow

  2. Keras

  3. PyTorch

  4. Theano

  5. DL4J

  6. Lasagne

  7. Caffe

  8. Chainer

We are not going to discuss all of them because it may be confusing for you to understand all the frameworks. Moreover, we believe in smart working, and therefore, we are simply discussing the most popular frameworks so that you may learn the way to compare different parameters, and after that, you will get the perfect way to make modules, train, and test the projects in different ways for smart working.

TensorFlow

The first framework to be discussed here is TensorFlow, which is undoubtedly the most popular framework for deep learning because of its easy availability and great performance. The backbone of this platform is Google's Brain team, which presented it for deep learning and provided easy access to almost all types of users. It supports Python and some other programming languages, and the good thing is that it also works with dataflow graphs. This makes it more useful because, when dealing with different types of neural networks, it is extremely helpful to understand the progress and efficiency of your model.

Another important point to notice about TensorFlow is that it creates models that are undemanding to build and robust in deployment.
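As a minimal taste (our example, assuming TensorFlow 2.x is installed) of the tensor computations that give the library its name:

import tensorflow as tf

x = tf.constant([[1.0, 2.0]])        # a 1x2 input tensor
w = tf.constant([[0.5], [0.25]])     # a 2x1 weight tensor
print(tf.matmul(x, w))               # a 1x1 tensor containing 1.0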

TensorBoard

A plus point about this framework is another large package called TensorBoard. There are several advantages to this fabulous data package, but some of them are listed below:

  • The basic job of this package is to provide data visualization to the user, which is a great convenience; unfortunately, people are less aware of this useful feature.

  • Another advantage of TensorBoard is that it makes sharing data with stakeholders easy and comfortable because of its fantastic displays.

  • You can use different packages with the help of TensorBoard.

You can get other basic information about TensorFlow by paying attention to the following table:

TensorFlow

  • Release dates: November 9, 2015 (initial release); January 21, 2021 (stable release)

  • Programming languages: Python, C++, CUDA

  • Category: Machine learning library

  • Platforms: JavaScript, Linux, Windows, Android, macOS

  • License: Apache License 2.0

  • Website: www.tensorflow.org



Keras

The next on the list is another famous and useful library for deep learning that most of you may know about. Keras is one of the favourite frameworks of deep learning developers because of its demand and open-source contributors. An estimated 35,000+ users are making this platform more and more popular.

Keras is written in the Python programming language, and it supports high-level neural networks. Keep in mind that Keras is an API, and it runs on top of highly popular libraries such as TensorFlow and Theano. You will see this in action in our coming lectures. Because of its user-friendly features, Keras is used by a large number of companies and startups, and it is a great tool for researchers and students.
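Here is a minimal sketch (ours, assuming TensorFlow 2.x with its bundled Keras) of how little code a small network takes; the layer sizes are arbitrary:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(16,)),  # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),                   # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()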

User-Friendly 

The most prominent feature of Keras is its user-friendly nature. It seems that the developers have presented this framework to all types of users, no matter if they are professionals or learners. If users encounter an error or issue, they should receive transparent and actionable feedback. 

Modular System

For me, modularity is a useful feature because it makes tasks easier and faster. Moreover, the errors are easily detectable, which is a big relief. The modularity is shown with a graphical representation or sequence of information so that the user may understand it well. 

Perfect for Advanced Research

Here's some good news for researchers and students. Keras is one of the best options for researchers because it allows them to make their own modules and test them according to their choice. Adding the modules to your project is super easy on Keras, and you can do advanced research without any issues.



Keras

  • Release dates: March 27, 2015 (initial release); June 17, 2020 (stable release)

  • Programming language: Python

  • Category: API for almost all types of neural networks

  • Platforms: Cross-platform

  • License: MIT (Massachusetts Institute of Technology)

  • Website: https://keras.io/




PyTorch

Our next topic of discussion is PyTorch. It is another open-source library for deep learning, used to build complex neural networks in an easy way. The thing that attracted me to this library is the platform that introduced it: it was developed under the umbrella of Facebook's AI Research lab. I can guess how powerful it is, because every time I open my Facebook app, I find the content I've chosen and wished for. People have been using it for deep learning, computer vision, and related purposes since 2016, as it is free and open source for AI and related fields. By using PyTorch with other powerful libraries such as NumPy, you can build, train, and test complex neural networks. Because of its easy accessibility, PyTorch is popular, and the versatility of the programming languages and libraries that work with it is another reason for its success.

Hybrid Front-end

A feature that makes it easy to use is its hybrid front end, which makes it faster and more flexible. The user-friendly nature of this library makes it a great choice even for less technical users.

Optimized Performance

With the help of its torch-distributed backend, you can have optimal performance all the time and keep an eye on the training and working of the network you are using. It has a powerful architecture, and on an advanced level, you can use it for complex neural networks.

Versatility

As you can guess, PyTorch runs on Python, one of the most popular and trending programming languages, and the plus point is that it allows many other libraries to be used alongside it when working on neural networks.
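A minimal sketch (ours, assuming PyTorch is installed) of building and running a tiny network; the shapes are arbitrary:

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))  # tiny two-layer net
x = torch.randn(1, 4)                                             # one toy input of 4 features
print(net(x))                                                     # the network's single output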


PyTorch

  • Release dates: September 2016 (initial release); December 10, 2020 (stable release)

  • Category: Machine learning and deep learning library

  • Platforms: Cross-platform

  • License: BSD (Berkeley Software Distribution)

  • Website: https://pytorch.org/


How Can You Choose the Best Framework for You?

Until now, we have been talking about the frameworks, and the basic purpose of discussing their features was to show the differences between them. A beginner may believe that all frameworks are the same, but this is incorrect, because each framework has its own specialities and the difficulty of using them varies. So, if you want to work effectively in your field, you must first learn how to choose the best framework for your task. Keep in mind that these are not the only points you need to know; all the parameters change according to the complexity of your project.

Consider The Needs of Your Project

Not all projects are the same. You do not have to use the same framework every time. You must know more than one framework and choose one according to your needs. For example, for simple tasks, there is no need to use a complex framework or a higher-level neural network. There is versatility in the projects in deep learning, and you have to understand the needs of your project every time before choosing your required framework. As a result, before you begin, you should ask yourself the following questions about the project:

  1. Are you using a modern deep learning framework, or are you interested in classic ML algorithms?

  2. What is your preferred programming language for the AI modules?

  3. For scaling, which type of hardware and software do you have available?

Once you know the different features of the frameworks, you may get the answers to all the questions given above.

Parameter Optimization

Machine learning is a vast field, and with the advancement of different techniques, there is always a need to compare parameters. Different algorithms follow different types of parameters, and you must know them all while choosing your framework. Moreover, you must decide whether you are going with the classic built-in machine learning algorithms or want to create your own.

Hence, we learned a lot about the frameworks of deep learning today. It was an interesting lecture in which we saw a detailed introduction to frameworks and compared TensorFlow, PyTorch, and Keras by discussing several of their features and requirements. We will see all of this in action in the coming lectures. The purpose of this session was to clarify the workings of and variations among frameworks, and in this way, you now have an idea of how deep learning is useful in different ways. Researchers are active in deep learning, and that is one of the basic reasons behind the development of so many frameworks.

What is a Neural Network?

Hello Learners! Welcome to the next lecture on deep learning. We have read the detailed introduction to deep learning and are moving forward with the introduction of the neural network. I am excited to tell you about the neural network because of the interesting and fantastic applications of neural networks in real life. Here are the topics of today that will be covered in this lecture:

  • What do we mean by the neural network?

  • How can we know about the structure of the neural network?

  • What are the basic types of neural networks?

  • What are some applications of these networks?

  • Give an example of a case where we are implementing neural networks.

Artificial intelligence has numerous features that make it special and magical in different ways, and we will be exploring many of them in different ways in this course. So, first of all, let us start with the introduction.

Neural Network

Have you ever observed that your favorite videos are shown to you on Facebook or other social media platforms? Or that an advertisement for a product you have been searching for pops up while you use your phone's applications? All of this happens because of the artificial intelligence running in the backend of the app, which we have discussed many times before.

To understand the neural network well, let us discuss the inspiration and model that resulted in its formation. We all have an idea of the neural network of the human brain. We are the best creation because of our complex brains, which calculate, estimate, and predict the results of repeating processes better than anything else. The same is the case with the neural network of a computer system. We have discussed the basic structure of the neural network many times, but now it's time to study that structure properly.

Structure of Neural Network

I always wonder how the answering software and apps such as Siri reply to us accurately and without any delay. The answer to this question was found in the workings and architecture of the network working behind this beautiful voice. I do not want to start the biology class here, but for proper understanding, I have to discuss the process when we hear a voice and understand it through the brain.

When we hear a sound in the surroundings, it is first caught by the ear, and this raw audio acts as input for the nerves of the ear. These nerves then pass the signal on to the next layers, which in turn pass it further along.

Each layer makes the result more refined and accurate. Finally, the signal reaches the brain, where the decision to respond is made. The same process is used in a neural network. How this works will become clear once you know about the seven types of neural networks:

  1. Feed Forward Neural Network

  2. Recurrent Neural Network

  3. Radial Basis Function (RBF) Neural Network

  4. Convolution Neural Network

  5. Modular Neural Network

  6. Kohonen Self-organizing Neural Network

  7. Multi-Layer Perceptron

Feed Forward Neural Network

Let me start with the most basic type of neural network so that you may understand it gradually. The workings of this network match its name: the flow of information is unidirectional, and the process ends at the output. There is no way for a signal to move backwards and train a previous layer. The basic application of this type of network is found in face recognition and related projects; people interested in applications such as speech recognition prefer this type of network to avoid complexity.

 Radial Basis Function (RBF) Neural Network

This layer includes the radial function. The working of this function will be clearer when you know about the structure of this layer. Usually, this network has two layers:

  • Hidden Layer

  • Output Layer

The radial function is present in the hidden layer. This function proves helpful for reasonable interpolation while the data is being fitted into the layers. The layer works by measuring the distance of each data point from the center of the network. For the best implementation, this network checks all the data points and groups similar ones together. In this way, this type of network can be used to build systems such as power restoration systems.

Recurrent Neural Network

As you can guess from the name of this network, it has the ability to recur. It is my favourite type of neural network because it learns from the previous step, and that data is used to predict the output precisely. This is one of the main types, and its working has been discussed many times in this tutorial.

Contrary to the first type of neural network that we have discussed before, the information can be recurred or moved to the previous layer. Here are some important points about this layer:

  • The first layer is a simple feed-forward layer. In other words, signals cannot move back to a previous layer.

  • In the first phase, each layer transmits data to the next layer unidirectionally.

  • If, during the transmission of data, a layer predicts inaccurate results, then it is the responsibility of the network to learn by repeatedly saving and revisiting the data.

  • The main building block of this network is the process of saving data into memory and then using it automatically to get accurate results.

  • It is widely used in text-to-speech conversion.

Convolution Neural Network

Now we come to an important type of neural network that has worldwide scope; engineers are working day and night in this field because of its interesting and beneficial applications. Before going deeper into the definition, I must clarify what exactly a convolution is: it is a filtering process in which a small filter is slid across the input, and the filtered results are used to produce activations. Because the same filtering is repeated across the whole input, it yields consistent results. Convolutional networks are usually used in image processing, natural language processing, and similar tasks, because they break an image into parts and then represent the results according to the user's needs. It is a classical technique for working on images, videos, and related projects. For example, if you want to find the edges or details of an image in order to replace or edit them, this technique is helpful, because it lets you work with the image and its components right down to the pixels. If these things seem difficult or complex at the moment, do not worry, because everything will become clear with the passage of time.

Modular Neural Network

Modularity is considered a basic building block of neural networks. It is the process of dividing a complex task into different modules or parts, solving them individually, and combining the results at the end to get an accurate answer. It is a faster way of working. You can understand this well by considering the example of the human brain, which is divided into left and right hemispheres that can work simultaneously; different tasks are assigned to each part, and each works best at its own duties.

Kohonen Self-organizing Neural Network

Random input vectors are fed into a discrete map of neurons. Dimensions and planes are other names for vectors. Its applications include recognizing patterns in data, such as in medical analysis.

Multi-Layer Perceptron

Here, I am discussing the type of network that has more than one hidden layer. It is a little complex, but if you understood the cases discussed before, you will easily understand this one. The purpose of this network is to handle data that is not linearly separable. Several functions can be used while working on this network; the interesting thing is that it uses a non-linear activation function.

Here, n is the index of the last layer, which can range from 0 upward according to the complexity of the network. A network with more layers can usually model more complex problems, though it is also harder to train.

Example of Recurrent Neural Network

At the moment, I want to discuss an example of this network, because it works in a slightly different way, and I hope this example will convey the concept I am trying to teach you. Consider the case where we want to talk to the personal assistant on our device. In practice, it is a simple task of a few seconds, yet at the backend there is a long procedure being followed to get the required results. Here is a simple sentence to be asked of the personal assistant.

The first step of the network is to divide the whole sentence into words so that these can be scanned easily. 

We all know that each word has a specific pattern of sound, and therefore each word is sampled into discrete sound waves. Let me revise: "discrete sound signals are the ones that consist of discontinuous points." We get the results in the following form.

Now the system further divides each word into individual letters. As you can see in the image given above, each letter has a specific amplitude. In this way, the values of the different letters are obtained, and this data is stored in an array.

In the next step, all the data obtained is fed into the input layer of the network, and here the working of the recurrent neural network starts. Passing through the input layer, each letter's weight is assigned to the interconnection between the input layer and the hidden layer of the network. At this point, we need a transfer function, calculated in the same form used earlier:

Φ(x1*w1 + x2*w2 + ... + xn*wn + b)

In the hidden layers, weights are assigned to each connection. This process continues through all the layers; as we know, the output of the first layer is used as the input to the second layer, and so on until the last layer. But keep in mind, this process is only for the hidden layers.
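To make the recurrence concrete, here is a hedged, NumPy-only sketch (our addition; the shapes and tanh choice are illustrative) of one recurrent step, where the hidden state carries information from earlier inputs forward:

import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, b):
    # weighted current input + weighted previous state + bias, through tanh
    return np.tanh(x_t @ Wx + h_prev @ Wh + b)

Wx, Wh, b = np.random.randn(3, 5), np.random.randn(5, 5), np.zeros(5)
h = np.zeros(5)
for x_t in np.random.randn(4, 3):   # a toy sequence of four 3-dimensional inputs
    h = rnn_step(x_t, h, Wx, Wh, b)
print(h)                            # final hidden state after the whole sequence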

While using speech recognition with the help of a neural network, we use different types of terms, some of which are:

  1. Acoustic model

  2. Lexicon

By the same token, there are different types of outputs. I am not going to explain these terms right now, because it is unnecessary to discuss them at the moment.

In the end, we reach the conclusion that neural networks are amazing to learn and interesting to understand while working with deep learning. You will get all the necessary information about these networks in this course. We started with the basic introduction of the neural network and saw its structure in detail. Moreover, we covered the basic types of neural networks so that you may compare why we use them and judge which type will be best for your learning and training. We suggest feed-forward neural networks for basic use, and you will see the reason behind this suggestion in our coming lecture. Till then, you can search for other networks; if you find any, discuss them with us. In the next lecture, you will learn about deep learning and neural networks together, so stay tuned.

Getting Started with Deep Learning

Hello students, welcome to the second tutorial on deep learning. In the first one, we learned the simplest but most basic introduction to deep learning, to build a solid base for what we are actually going to do with it. In the present lecture, we will take this to a more advanced level, with the intention of understanding what we want to learn and how we will implement the concepts easily. So, here is a quick glance at the concepts that will be cleared up today:

  • What do we mean by Deep learning?

  • What is the structure of calculation in neural networks?

  • How can you examine the Neural Networks?

  • What are some platforms of deep learning?

  • Why did we choose TensorFlow?

  • How can you work with TensorFlow?

Deep Learning as the Sub-branch of AI

As we have said earlier, artificial intelligence is the field that works to hand the tasks of human beings over to the computer; that is, the computer acts like a human. Computers are expected to think. It is a revolutionary branch of science that deals with feeding human-like intelligence into the computer for the welfare of mankind, and with the passage of time, it is proving successful in the real world. With the advancement and growing complexity of artificial intelligence, the field has been divided into branches: AI has a branch named machine learning, which is in turn subdivided into deep learning. The main focus of this course is deep learning; therefore, we describe it in detail.

Calculations in Neural Network

All this discussion was to give you the basics and an important introduction to deep learning. If it is still not clear, do not worry, because as you gather information throughout the series and start practising, things will become clear.

We have discussed the neural network before, but only in relation to the concept of weights. In the present tutorial, you are going to see another concept, and proper hands-on work with these networks will start in the coming sessions.

A neural network is just like the multiple layers of the human brain: it contains an input layer where the data is fed in different ways according to the requirements of the network. The multiple layers carry out the training process again and again, in such a way that every later layer is more mature and accurate than the one before it. The last layer therefore has the most accurate data, which is fed into the output layer, where we get the results. All these processes occur in sequence while we work on the neural network, as listed below:

  • In the first step, the product of each channel's weight and its input value is calculated.

  • The sum of all the products obtained is then calculated and this is called the weighted sum of the layers. 

  • In the next step, the bias is added to the resultant calculation according to the estimation of the neural network.

  • In the final step, the sum is then subjected to the particular function that is named the activation function. 

Working of Neural Network 

As we have listed the steps, we know they may not yet be clear in your mind; therefore, we will discuss an example. Keeping all the steps in mind, we now work through a practical application of a neural network.

We consider an example in which a 28x28-pixel image is observed for its shape. Each pixel is taken as an input for the neurons of the first layer.

The first step is then calculated by using the formula given below:

x1*w1 + x2*w2 + b1

We have taken a simple two-input example, but this process of multiplying each layer's inputs by the corresponding weights is repeated until you reach the last layer. The next step here is calculated as:

Φ(x1* w1 + x2*w2 + b1)

Here the Φ sign indicates the activation function, as mentioned in the steps above. These steps are performed again and again, according to the complexity of the task and the training, until all the inner layers have been calculated and the results reach the output layer. An interesting thing here is the presence of a single neuron in the last layer, which contains the result of the calculation to be shown as the output. The details of how a neural network works will be discussed in the next tutorials. For now, just understand the outputs.
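To make the two formulas concrete, here is a toy numeric sketch (our addition; a sigmoid stands in for Φ, and the input, weight, and bias values are arbitrary):

import math

x1, x2 = 0.8, 0.2                  # two toy pixel inputs
w1, w2, b1 = 0.5, -0.3, 0.1        # example weights and bias

weighted_sum = x1*w1 + x2*w2 + b1              # 0.44
output = 1 / (1 + math.exp(-weighted_sum))     # Φ(weighted sum), using a sigmoid
print(weighted_sum, output)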

Platforms for Deep Learning

It seems that you are now ready to move forward. Till now, you were learning what deep learning is and why it is useful, but now you are going to learn how you can use deep learning for different tasks. If you are a programmer, you must know that there are different platforms that provide for the compilation and running of programming languages, and each is specific to a limited set of languages.

For deep learning, there are certain platforms that are used worldwide, and the most important ones will be discussed here:

TensorFlow

TensorFlow is one of the most powerful platforms, specially designed for machine learning and deep learning, and it is a free, open-source software library. Although it is a multi-purpose platform, it has special features for training machine learning and deep learning projects. You can get an idea of its popularity from the fact that it is presented by the Google Brain team, and it offers excellent functionality.

DL4J

DL4J stands for Deeplearning4j ("deep learning for Java") and, as you can guess, it is specialized for deep learning and is written in Java for the Java Virtual Machine. Many people prefer this library because it is lightweight and designed specifically for deep learning.

Torch

If you are wondering whether we are talking about a flashlight, that is not the case. Torch is an open-source library for deep learning, and it provides the algorithms needed for deep learning projects.

Keras

Keras is an API that runs on top of the TensorFlow deep learning platform. It is built to give deep learning practitioners a clean, easy experience, producing readable and reusable code in less time. You will see this with the help of examples in the next sessions.

Why Choose TensorFlow?

As you can guess, we have chosen TensorFlow for our tutorials and lectures for some important reasons that we'll share with you. For these classes, I have tested a lot of software specially designed for deep learning, some of which I mentioned above. Yet I found TensorFlow the most suitable for our task, and therefore I want to tell you the core reasons behind this choice.

Easy to Build Models

You will see that training and the other phases depend on different models, and building these is super easy with the help of TensorFlow. The main reason is that it provides multiple levels of abstraction, so the one that suits you best is available according to the complexity of your project. As we have mentioned earlier, the Keras API is used with TensorFlow, and the high-level performance of both results in marvelous projects.

Easiest Production in Machine Learning Anywhere

In machine learning and related branches such as deep learning, production is made easy by TensorFlow's fantastic performance. It always provides a straightforward path from model to production and results. It also gives us the independence to use different languages and platforms, and therefore it attracts a large audience.

A Powerful Tool for Experimentation

What is more important in research than perfect experimentation? TensorFlow always offers multiple options for experimentation and research, so that you may test your project in different ways and get the best results through a single piece of software. The presence of multiple APIs and support for several languages make it excellent for experimentation.

Another advantage of choosing it for the tutorial is that it supports powerful add-on libraries and interesting models; therefore, it will be easy for us to experiment more and explain the results in different ways to reach all types of students.

These are some highlighted points that attracted us to this software, but overall it has a lot more in it, and you will understand these points when you see them in action in this series. We will work entirely in TensorFlow and will discuss every step in detail without skipping anything. The practical performance of each step will lead you to move forward with more interest, and to clarify each concept, we will use different examples. Yet I am aware that too much explanation makes a discussion confusing, so there will be a balance.

Working With TensorFlow

As we have described before, TensorFlow was introduced by the Google Brain team in close collaboration with Google's machine learning research organization.

TensorFlow is a software library that works in collaboration with some other libraries for the best implementation of deep learning projects, and you will see its workings and projects in detail as we move forward in this series. Several libraries are important to use alongside TensorFlow when preparing it for deep learning work. Some of them are listed below:

  • Python Package Index (pip)

  • Django

  • SciPy

  • NumPy

Following are the steps that are typically used to work with TensorFlow; a compact sketch of these steps in code follows the list. Yet keep in mind that these steps vary according to the needs of the moment and the type of project.

  • Import the required libraries, including TensorFlow.

  • Assign the paths of the datasets. It is important to provide the paths to the column variables as well.

  • Create the test and train data; for this, use the pandas library.

  • In the next step, print the shape of the test and train data.

  • For the training data, print the data type of each column.

  • Set the label column values of the data.

  • Count the total number of unique values in the datasheets.

  • Add features for the different types of variables.

  • Build the relationship of the features with buckets.

  • Add further features for the proper definition of the data.

  • Train and evaluate the model.

  • Make predictions with the model on the test set.
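Here is a compact, hypothetical sketch of those steps (our addition, assuming TensorFlow 2.x and pandas; the file name "train.csv"/"test.csv", the "label" column, and the all-numeric features are placeholders, not part of any real project):

import pandas as pd
import tensorflow as tf

train = pd.read_csv("train.csv")              # assign dataset paths
test = pd.read_csv("test.csv")
print(train.shape, test.shape)                # shapes of the train and test data
print(train.dtypes)                           # data type of each column

y_train = train.pop("label")                  # set the label column
y_test = test.pop("label")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train.values, y_train.values, epochs=5)    # train the model
model.evaluate(test.values, y_test.values)           # evaluate it
predictions = model.predict(test.values)             # predict on the test set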

Do not worry if these steps are new or confusing at the moment; you will see their details in the near future. Moreover, some of these steps may differ from person to person, because coding is a vast area with multiple ways of working in different environments. So, today we learned several concepts through this single lecture. We revised and extended the introduction to deep learning, discussed neural networks, and saw how they work. Moreover, the platforms of deep learning were discussed, out of which we chose TensorFlow, and the reasons for this choice were explained with the help of different points. In the end, we saw a brief procedure to train a model and make predictions, and you will see all these concepts in action in the coming lectures, so stay tuned.

Introduction to Deep Learning

Hello friends, I hope you all are having fun. Today, we are bringing you one of the most advanced and trending courses named "Deep Learning". Today, I am sharing the first tutorial, so we will discuss the basic Introduction to Deep Learning, and in my upcoming lectures, we will explore complex concepts related to it. Deep Learning has an extensive range of applications and trends and is normally used in advanced research. So, no matter which field you belong to, you can easily understand all the details with simple reading and practicing. So without any further delay, let me show you the topics that we are going to cover today:

  • What is deep learning?
  • What are artificial intelligence and machine learning?
  • Working with deep learning using neural networks.
  • Trends in deep learning.
  • Deep learning as a career.

So, let's get started:

What is Deep Learning?

  • Deep Learning is a branch of machine learning that enables machines to think like a human brain by using neural networks.
  • Using Deep Learning techniques, machines get smart and recognize the pattern from the data already saved/fed in the database and then use this data for the prediction of future results.
  • So, we can say that the deep learning neural network has the ability to learn from the previous results and predict the behavior of the network.
  • Deep learning algorithms have multiple layers to analyze the millions of data points.
  • A driverless car is an excellent real-life application of Deep Learning.

Deep Learning is considered a branch of Machine Learning which itself comes under Artificial Intelligence. So, let's have a look at these two cornerstone concepts in the computing world:

What is Artificial Intelligence?

Artificial intelligence, or AI, is the science and engineering behind the creation of intelligent machines, particularly intelligent computer programs. It enables computers to model human intelligence and behave accordingly. AI has a broader scope and does not have to limit itself to the biologically inspired methods used in deep learning.

It is a field that combines the computer and the robust data set to solve the problems of life. Moreover, it is important here to mention the definition of machine learning:

What is Machine Learning?

Machine learning is a branch of artificial intelligence; it learns from the experience and data fed into it and works intelligently on its own, without instruction from a human being. For instance, the news feed that appears on Facebook is directed by machine learning on user data, so content of the user's choice appears every time they scroll Facebook. As you put more and more data into the machine, it learns to provide more intelligent results.

Deep Learning Working Principle

Deep learning uses neural network techniques to analyze the data. The best way to describe a neural network is to relate it to the cells of the brain. A neural network is made of layers of nodes, much like the network in our brain, and all these nodes are connected to each other either directly or indirectly. A neural network has multiple layers to refine the output, and it gets deeper as the number of layers increases.

Deep Learning vs. Human Brain

In the human brain, each neuron can receive hundreds or thousands of signals from other neurons and selects signals based on priority. Similarly, in deep learning networks, signals travel from node to node according to the weights assigned to them. In this way, neurons with heavier weights have more effect on the next layer. This process flows through all the layers, and the final layer compiles the weighted results it receives and produces the output.
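To make the weighted-signal idea concrete, here is a toy sketch (our addition; the values are arbitrary) of one node combining three incoming signals:

signals = [0.9, 0.2, 0.5]          # outputs from three upstream neurons
weights = [0.8, 0.1, 0.4]          # heavier weights influence the node more

weighted_sum = sum(s * w for s, w in zip(signals, weights))
print(weighted_sum)                # 0.9*0.8 + 0.2*0.1 + 0.5*0.4 = 0.94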

The human brain learns from its experience i.e. as you get old, you get wiser. Similarly, deep learning has the ability to learn from its mistakes and keeps on evolving.

The process of network formation and its working is so complex that it requires powerful machines and computers to perform complex mathematical operations and calculations. Even if you have a powerful tool and computer, it takes weeks to train the neurons.

Another thing that is important to mention here is that, underneath, a neural network works on numbers. So, when the data is processed, the machine represents it as series of numerical values and performs highly complex calculations on them. Face recognition is the best example in this regard: the machine examines the edges and lines of the face to be recognized, and it also saves information about the more significant facial parts.

Layers in Deep Learning

Understanding the layers in deep learning is important to get an idea of the complex structure of deep learning neural networks. The neurons in the deep learning architecture are not scattered but are arranged in a civilized format in different layers. These layers are broadly classified into three groups:

  1. Input layer
  2. Hidden layers
  3. Output layer

The working of each neural network in deep learning depends on the arrangement and structure of these layers. Here is a general overview of each layer:

Input Layer

This is the first layer of the neural network and takes the information in the form of raw data. The data may be in the form of text, values, images, or other formats and may be arranged in large datasets. This layer takes the data and applies processing to make it ready for the hidden layers. Every neural network can have one or many input layers.

Hidden Layer

The main processing of data occurs in the hidden layers of the neural networks. These are crucial layers because they provide the processing necessary to learn the complex relationship between the input feature layer and the required output layer.

There are a great number of neurons in the hidden layers, and the number of hidden layers varies according to the complexity of the task and the type of neural network. These layers perform operations such as improved accuracy, feature extraction, representation learning, etc.

Output Layer

These are the final layers responsible for the production of the network’s predictions. A neural network may have one or more output layers, and the activation function of the network depends on the type of problem to be solved in the network. One such example is the softmax activation function, which divides the output according to the probability distribution over different classes.

Training of Deep Learning Process

To understand this well, the example of object or person recognition is usually explained to students. Let's say we want to recognize or detect a cat in a picture. We know that different breeds of cats do not look alike: some are fluffy, some are short, and some are thin. By the same token, images of the same cat from different angles will not be the same, and the computer may be confused by these cases. Therefore, the training process also takes the amount of light and the shadow of the object into account.

In order to train a deep-learning machine to recognize a cat, the following main procedures are included (a toy code sketch follows this list):

  • Using several pictures of cats of different breeds, taken from different angles.
  • Using pictures of other objects that are not cats, and telling the network about them.
  • Labeling all the images and commanding the compiler to save their data.
  • Feeding all the data into the neural network and compiling the results.
  • The final layer compiles the disconnected results such as the hair type, size, features of the face, etc.
  • Once all the data is fed into the network, the results are then compared with the human-generated labels.
  • If both labels give the same results, the neural network is then considered as trained.
  • On the other hand, if the results are not the same, then the weights are calculated again and the training process is repeated.
  • The process of repeatedly adjusting the weights against labeled examples is known as supervised learning, and it works even though the neural network is never explicitly told what "makes" a cat; it must learn on its own and recognize patterns in the data over time.
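Here is a toy illustration of that compare-and-adjust loop (our addition, assuming TensorFlow/Keras is installed; the random images and labels are placeholders for a real labeled dataset):

import numpy as np
import tensorflow as tf

images = np.random.rand(100, 64, 64, 3)        # stand-ins for labeled photos
labels = np.random.randint(0, 2, size=(100,))  # 1 = cat, 0 = not a cat

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 3)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=3)   # weights are re-adjusted after each pass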

Deep Learning Applications

In the modern computing world, deep learning has a wide range of applications in almost every field. We have mentioned a few examples in our above discussion i.e. Facebook newsfeed and driverless cars. Let's have a look at a few other services utilizing deep learning techniques:

Digital Assistance 

Digital assistance services, e.g., voice recognition, facial recognition, text-to-speech and speech-to-text conversion, language translation, plagiarism checking, etc., use deep learning techniques to recognize voices and process language. Grammarly, Copyscape, and Ahrefs are a few real-life examples using deep learning techniques.

Safe Banking

PayPal is using deep learning to prevent fraud and illegal transactions. This is one of the most common examples from banking that I am mentioning here; beyond it, there are many other applications in security and privacy connected to deep learning.

Object Recognition

Some object recognition applications, such as CamFind, allow the user to take pictures of objects, and with the help of mobile vision technology, these apps can easily understand what type of object has been captured.

Self-driven Cars

Another major application of deep learning is the self-driving car, which will not only eliminate the need for drivers but will also help avoid traffic jams and road accidents. It is an important topic, and many companies are working day and night on deep learning to get perfect results.

Future Prediction and Deep Learning 

As we have said earlier, deep learning is the process of training the computer like a human; therefore, people are working hard to train machines so they can examine trends and predict future outcomes, such as in stock markets and weather forecasting. Isn't it helpful if your computer or assistant tells you about stock market rates and predicts the best option for your investments?

Medical Assistance

In the medical field, where doctors and experts work hard to save people's lives, there is no need to explain the importance of technologies such as deep learning, which can track and predict changes in the body's vital values and suggest the best remedy for the problem being observed.

Careers in Deep Learning 

Once you have read about the applications and the working process of deep learning, you must be wondering: if this is the future, why not choose deep learning as your career? Let me tell you, if you excel in deep learning, the future is yours. Careers in deep learning are not yet fully defined, but in the coming few years you are going to see tremendous exposure to deep learning and related subjects, and if you are an expert in it, you will be in demand all the time, because the field is coming with endless opportunities. Machine learning engineers are in high demand because neither data scientists nor software engineers typically possess all the necessary skills.

To fill the void, the role of machine learning engineer has evolved. According to experts, the deep learning developer will be one of the most highly paid ones in the future. I hope, in the future, almost all fields will require the involvement of deep learning in their network to work easily and to get more and more efficient work without the involvement of human beings. In simple words, with the help of a neural network, we are replacing human beings with machines and these machines will be more accurate.

So, in this way, we have introduced you to an amazing and interesting sub-branch of machine learning, which is itself connected to artificial intelligence. We have seen the workings and procedures of deep learning, and to understand them well, we looked at examples of deep learning processes. Moreover, we discussed trends and techniques related to deep learning, and saw that the most popular apps and websites use deep learning to make their platforms more user-friendly and exciting. In the end, we looked at careers and professions in deep learning for the motivation of students. I know that at this step you will have many questions in your mind, but do not worry, because I am going to explain everything without skipping a single concept and will learn new things with you while explaining them. So stay with us for more interesting lectures.

Social Distancing Detector with OpenCV in Raspberry Pi 4

During the era of Covid-19, social distancing has proven to be an efficient method of reducing the spread of contagious viruses. It is recommended that people avoid close contact as much as possible because of the potential for disease transmission. Many public spaces, including workplaces, banks, bus terminals, train stations, etc., struggle with the issue of keeping a safe distance.

The previous guide covered the steps necessary to connect the PCF8591 ADC/DAC analog-to-digital converter module to a Raspberry Pi 4. We saw the results displayed as integers on our Terminal and dug deeper into exactly how the ADC produces its output signals. In this article, however, we will use OpenCV and a Raspberry Pi to create a system that can detect when people fail to keep a safe distance from one another. We will employ the weights of the YOLO version 3 object recognition algorithm for the deep neural network component. Among low-cost controllers, the Raspberry Pi is an excellent option for image processing tasks; previous efforts utilizing the Raspberry Pi for advanced image processing included a face recognition application.


Where To Buy?
No. | Components     | Distributor | Link To Buy
1   | Jumper Wires   | Amazon      | Buy Now
2   | PCF8591        | Amazon      | Buy Now
3   | Raspberry Pi 4 | Amazon      | Buy Now

Components 

  • Raspberry Pi 4

A Raspberry Pi 4 with OpenCV pre-installed is all you need for this purpose. OpenCV handles the digital image processing, which is often used for people counting, facial identification, and detecting objects in images.

YOLO

The savvy YOLO (You Only Look Once) convolutional neural network (CNN) is invaluable for real-time object detection. The most recent version, YOLOv3, is a fast and accurate object-detection algorithm that can identify eighty distinct types of objects in both still and moving media. The algorithm runs a single neural network over the entire image, then divides it into regions and computes bounding boxes and probabilities for each. The base YOLO model processes images in real time at 45 frames per second, and compared with other detection approaches, such as SSD and R-CNN, it offers an excellent speed-to-accuracy trade-off.

In the past, computers relied on input devices like keyboards and mice; today, they can also analyze data from visual sources like photos and videos. Computer Vision is a computer's (or a machine's) capacity to read and interpret graphic data. Computer vision has advanced to the point that it can evaluate the nature of people and objects and even read their emotions. This is feasible because of deep learning and artificial intelligence, which allow an algorithm to learn from examples, such as recognizing relevant features in an unlabeled image. The technology has matured to the point where it can be employed in critical infrastructure protection, hotel management, and online banking payment portals.

OpenCV is the most widely used computer vision library. It is a free and open-source Intel cross-platform library that may be used with any OS, including Windows, Mac OS X, and Linux. This will make it possible for OpenCV to function on a mobile device like a Pi, which will have a wide range of applications. Let's dive in, then.

Setup of OpenCV on a Raspberry Pi 4

OpenCV and its prerequisites won't run without updating the Raspberry Pi to the latest version. To install the most recent software for your Raspberry Pi, type in the following commands:

sudo apt-get update

Then, use the scripts below to set up the prerequisites on your RPi so you can install OpenCV.

sudo apt-get install libhdf5-dev -y

sudo apt-get install libhdf5-serial-dev -y

sudo apt-get install libatlas-base-dev -y

sudo apt-get install libjasper-dev -y

sudo apt-get install libqtgui4 -y

sudo apt-get install libqt4-test -y

Finally, run the following lines to install OpenCV on your Raspberry Pi.

pip3 install opencv-contrib-python==4.1.0.25

Setting up OpenCV using CMake

OpenCV's installation on a Raspberry Pi can be nerve-wracking because it takes a long time, and there's a good chance of making a mistake along the way. Given my own experience with this, I've tried to make this lesson as straightforward and helpful as possible so that you won't have to go through the same things I did. Even though OpenCV 4.0.1 had been out for three months when I started writing this lesson, I decided to use the older version (4.0.0) because of some issues with compiling the newer one.

This approach involves retrieving OpenCV's source package and compiling it on the Raspberry Pi with the help of CMake. Installing OpenCV in a virtual environment would allow users to run multiple versions of Python and OpenCV on the same computer, but I'm not going to do that, since I'd rather keep this guide brief and I don't anticipate it being required any time soon.

Step 1: Before we get started, let's ensure that our system is up to date by executing the command below:

sudo apt-get update && sudo apt-get upgrade

If there are updated packages, they should be downloaded and installed automatically. There is a 15-20 minute wait time for the process to complete.

Step 2: We must now update the apt-get package to download CMake.

sudo apt-get update

Step 3: When we've finished updating apt-get, we can use the following command to retrieve the CMake package and put it in place on our machine.

sudo apt-get install build-essential cmake unzip pkg-config

When installing CMake, your screen should look similar to the one below.

Step 4: Then, use the following command to set up Python 3's development headers:

sudo apt-get install python3-dev

Since it was pre-installed on mine, the screen looks like this.

Step 5: The following action would be to obtain the OpenCV archive from GitHub. Here's the command you may use to replicate the effect:

wget -O opencv.zip https://github.com/opencv/opencv/archive/4.0.0.zip

You can see that we are collecting version 4.0.0 right now.

Step 6: The OpenCV contrib contains various python pre-built packages that will make our development efforts more efficient. Therefore, let's also download that with the command that is identical to the one shown below.

wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.0.0.zip

The "OpenCV-4.0.0" and "OpenCV-contrib-4.0.0" zip files should now be in your home directory. If you need to know for sure, you may always go ahead and check it out.

Step 7: Let's extract OpenCV-4.0.0 from its .zip archive with the following command.

unzip opencv.zip

Step 8: Extraction of OpenCV contrib-4.0.0 via the command line is identical.

unzip opencv_contrib.zip

Step 9: OpenCV cannot function without NumPy. Follow the command below to begin the installation.

pip install numpy

Step 10: In our new setup, the home directory would now contain two folders: OpenCV-4.0.0 and OpenCV contrib-4.0.0. Next, we'll make a new directory inside OpenCV-4.0.0 named "build" to perform the actual compilation of the Opencv library. The steps needed to achieve the same result are detailed below.

cd ~/opencv-4.0.0

mkdir build

cd build

Step 11: OpenCV's CMake process must now be initiated. In this section, we specify the requirements for compiling OpenCV. Verify that you are inside the "~/opencv-4.0.0/build" directory, then paste the lines below into the Terminal.

cmake -D CMAKE_BUILD_TYPE=RELEASE \

    -D CMAKE_INSTALL_PREFIX=/usr/local \

    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-4.0.0/modules \

    -D ENABLE_NEON=ON \

    -D ENABLE_VFPV3=ON \

    -D BUILD_TESTS=OFF \

    -D WITH_TBB=OFF \

    -D INSTALL_PYTHON_EXAMPLES=OFF \

    -D BUILD_EXAMPLES=OFF ..

Hopefully, the configuration will proceed without a hitch, and you'll see "Configuring done" and "Generating done" in the output.

If you encounter an issue during this procedure, check to see if the correct path was entered and if the "OpenCV-4.0.0" and "OpenCV contrib-4.0.0" directories exist in the root directory path.

Step 12: This is the most time-consuming step of the whole procedure. Using the following command, you can compile OpenCV, but only if you are in the "~/opencv-4.0.0/build" directory.

make -j4

Using this command, you can initiate the OpenCV compilation process and watch the progress percentage as it unfolds. After three to four hours, you should see a completed build screen.

The command "make -j4" utilizes all four processor cores when compiling OpenCV. Waiting around the 99% mark can test your patience, but eventually it will be worth it.

In my case the build stalled, so after waiting an hour I cancelled the process and resumed it with "make -j1", which did the trick. It is advisable to start with make -j4, since it will utilize all four of the Pi's cores and complete most of the compilation, and then fall back to make -j1 for the remainder if needed.

Step 13: If you are at this point, congratulations. You have made it through the entire procedure with flying colors. The final action is to run the following command to install libopencv.

sudo apt-get install libopencv-dev python-opencv

Step 14: Finally, a little python script can be run to verify that the library was successfully installed. Try "import cv2" in Python, as demonstrated below. You shouldn't get any error message when you do this.
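For example, a minimal check looks like this (printing the version is optional, but it confirms which build you ended up with):

import cv2
print(cv2.__version__)   # should print the installed version, e.g. 4.0.0, with no import error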

Completing the Raspberry Pi Software Installation of Necessary Additional Packages

Let's get the necessary packages set up on the Raspberry Pi before we begin writing the code for the social distance detector.

Installing imutils

imutils is designed to simplify common OpenCV image-processing tasks like translating, rotating, resizing, skeletonizing, and displaying pictures via Matplotlib. To get imutils, type in the following command:

pip3 install imutils

A Breakdown of the Program

The complete code may be found at the bottom of the page. In this section, we'll walk you through the most crucial parts of the code so you can understand it better. All the necessary libraries for this project should be imported at the beginning of the code.

import numpy as np

import cv2

import imutils

import os

import time

Distances between two detected people in a video frame are determined with the Check() function. The points a and b represent the two detections in the picture; the Euclidean distance is computed using these two positions as the starting and ending points, weighted by the average vertical position of the pair as a rough perspective correction, so that people who appear lower in the frame (closer to the camera) are allowed a proportionally larger pixel distance.

def Check(a,  b):

    dist = ((a[0] - b[0]) ** 2 + 550 / ((a[1] + b[1]) / 2) * (a[1] - b[1]) ** 2) ** 0.5

    calibration = (a[1] + b[1]) / 2     

    if 0 < dist < 0.25 * calibration:

        return True

    else:

        return False
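As a quick sanity check, you can call the function directly with made-up pixel coordinates (the numbers below are purely illustrative):

print(Check([100, 300], [130, 310]))   # nearby centers -> True
print(Check([100, 300], [400, 310]))   # distant centers -> False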

The locations of the YOLO weights, the configuration file, and the COCO names file are all set in the Setup() function. The os.path module handles the ordinary pathname operations, and the os.path.join() sub-module intelligently combines two or more path components. The cv2.dnn.readNetFromDarknet() function loads the network with the saved weights, and once the weights are loaded, the output layer names are extracted using getLayerNames() and getUnconnectedOutLayers().

def Setup(yolo):

    global neural_net, ln, LABELS

    weights = os.path.sep.join([yolo, "yolov3.weights"])

    config = os.path.sep.join([yolo, "yolov3.cfg"])

    labelsPath = os.path.sep.join([yolo, "coco.names"])

    LABELS = open(labelsPath).read().strip().split("\n") 

    neural_net = cv2.dnn.readNetFromDarknet(config, weights)

    ln = neural_net.getLayerNames()

    ln = [ln[i[0] - 1] for i in neural_net.getUnconnectedOutLayers()]

In the image-processing section, we extract a still image from the video and analyze it to find the distance between the people in the crowd. The function's first lines initialize the frame's width and height to None. We then use the cv2.dnn.blobFromImage() method, which converts the frame into a blob the network can consume, adjusting the frame's mean, scale, and channel ordering.

(H, W) = (None, None)

    frame = image.copy()

    if W is None or H is None:

        (H, W) = frame.shape[:2]

    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)

    neural_net.setInput(blob)

    starttime = time.time()

    layerOutputs = neural_net.forward(ln)

YOLO's layer outputs are numerical values. With these numbers, we can determine which detections belong to which classes. We iterate over all layerOutputs and keep only the detections whose class label is "person". Each detection generates a bounding box whose output includes the X and Y coordinates of the detection's center as well as its width and height.

            scores = detection[5:]

            maxi_class = np.argmax(scores)

            confidence = scores[maxi_class]

            if LABELS[maxi_class] == "person":

                if confidence > 0.5:

                    box = detection[0:4] * np.array([W, H, W, H])

                    (centerX, centerY, width, height) = box.astype("int")

                    x = int(centerX - (width / 2))

                    y = int(centerY - (height / 2))

                    outline.append([x, y, int(width), int(height)])

                    confidences.append(float(confidence))

Then we determine how far the center of the current box is from the centers of all the other boxes. If two centers are closer than the calibrated threshold, Check() returns True and both detections are flagged.

for i in range(len(center)):

            for j in range(len(center)):

                close = Check(center[i], center[j])

                if close:

                    pairs.append([center[i], center[j]])

                    status[i] = True

                    status[j] = True

        index = 0

In the following lines, we use the model's box dimensions to draw a rectangle around each individual and evaluate whether or not they are in a safe zone. If there is too little space between the boxes, the box is drawn in red; otherwise, it is green.

            (x, y) = (outline[i][0], outline[i][1])

            (w, h) = (outline[i][2], outline[i][3])

            if status[index] == True:

                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 150), 2)

            elif status[index] == False:

                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

Now we're inside the main loop, where we read each video frame and analyze it to determine how far apart the people are.

ret, frame = cap.read()

    if not ret:

        break

    current_img = frame.copy()

    current_img = imutils.resize(current_img, width=480)

    video = current_img.shape

    frameno += 1

    if(frameno%2 == 0 or frameno == 1):

        Setup(yolo)

        ImageProcess(current_img)

        Frame = processedImg

In the following lines, we use the cv2.VideoWriter() function to save the output video to the location defined by opname.

if create is None:

            fourcc = cv2.VideoWriter_fourcc(*'XVID')

            create = cv2.VideoWriter(opname, fourcc, 30, (Frame.shape[1], Frame.shape[0]), True)

    create.write(Frame)

Testing the program

When satisfied with your code, launch a terminal on your Pi and go to the directory where you kept it. The following folder structure is recommended for storing the code, Yolo framework, and demonstration video.
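Since the reference screenshot is not reproduced here, the layout below should work; the file names are taken directly from the script's variables, and detector.py is an assumed name for the script itself:

project/
├── detector.py          (the Python script below)
├── newVideo.mp4         (the demo video, as set in the filename variable)
└── yolov3/
    ├── yolov3.weights
    ├── yolov3.cfg
    └── coco.names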

The YOLOv3 weights are downloadable from:

https://pjreddie.com/media/files/yolov3.weights

and sample pedestrian videos from:

https://www.pexels.com/search/videos/pedestrians/

Finally, paste the Python scripts provided below into the same folder as the one displayed above. The following command must be run once you've entered the project directory:

python3 detector.py

I applied this code to a sample video I found on Pexels, and the results were interesting: the frame rate was poor, and processing the clip took almost 11 minutes.

Changing line 98 from cv2.VideoCapture(filename) to cv2.VideoCapture(0) allows you to test the code with a live camera instead of a video file. Follow these steps to use OpenCV on a Raspberry Pi to identify violations of social distancing.
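In the complete code below, that change amounts to replacing a single line (the index 0 selects the first attached camera and may differ on your setup):

cap = cv2.VideoCapture(0)   # read frames from a live camera instead of the video file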

Complete code

import numpy as np

import cv2

import imutils

import os

import time

def Check(a,  b):

    dist = ((a[0] - b[0]) ** 2 + 550 / ((a[1] + b[1]) / 2) * (a[1] - b[1]) ** 2) ** 0.5

    calibration = (a[1] + b[1]) / 2       

    if 0 < dist < 0.25 * calibration:

        return True

    else:

        return False

def Setup(yolo):

    global net, ln, LABELS

    weights = os.path.sep.join([yolo, "yolov3.weights"])

    config = os.path.sep.join([yolo, "yolov3.cfg"])

    labelsPath = os.path.sep.join([yolo, "coco.names"])

    LABELS = open(labelsPath).read().strip().split("\n")  

    net = cv2.dnn.readNetFromDarknet(config, weights)

    ln = net.getLayerNames()

    ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]

def ImageProcess(image):

    global processedImg

    (H, W) = (None, None)

    frame = image.copy()

    if W is None or H is None:

        (H, W) = frame.shape[:2]

    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)

    net.setInput(blob)

    starttime = time.time()

    layerOutputs = net.forward(ln)

    stoptime = time.time()

    print("Video is Getting Processed at {:.4f} seconds per frame".format((stoptime-starttime))) 

    confidences = []

    outline = []

    for output in layerOutputs:

        for detection in output:

            scores = detection[5:]

            maxi_class = np.argmax(scores)

            confidence = scores[maxi_class]

            if LABELS[maxi_class] == "person":

                if confidence > 0.5:

                    box = detection[0:4] * np.array([W, H, W, H])

                    (centerX, centerY, width, height) = box.astype("int")

                    x = int(centerX - (width / 2))

                    y = int(centerY - (height / 2))

                    outline.append([x, y, int(width), int(height)])

                    confidences.append(float(confidence))

    box_line = cv2.dnn.NMSBoxes(outline, confidences, 0.5, 0.3)

    if len(box_line) > 0:

        flat_box = box_line.flatten()

        pairs = []

        center = []

        status = [] 

        for i in flat_box:

            (x, y) = (outline[i][0], outline[i][1])

            (w, h) = (outline[i][2], outline[i][3])

            center.append([int(x + w / 2), int(y + h / 2)])

            status.append(False)

        for i in range(len(center)):

            for j in range(len(center)):

                close = Check(center[i], center[j])

                if close:

                    pairs.append([center[i], center[j]])

                    status[i] = True

                    status[j] = True

        index = 0

        for i in flat_box:

            (x, y) = (outline[i][0], outline[i][1])

            (w, h) = (outline[i][2], outline[i][3])

            if status[index] == True:

                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 150), 2)

            elif status[index] == False:

                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

            index += 1

        for h in pairs:

            cv2.line(frame, tuple(h[0]), tuple(h[1]), (0, 0, 255), 2)

    processedImg = frame.copy()

create = None

frameno = 0

filename = "newVideo.mp4"

yolo = "yolov3/"

opname = "output2.avi"

cap = cv2.VideoCapture(filename)

time1 = time.time()

while(True):

    ret, frame = cap.read()

    if not ret:

        break

    current_img = frame.copy()

    current_img = imutils.resize(current_img, width=480)

    video = current_img.shape

    frameno += 1

    if(frameno%2 == 0 or frameno == 1):

        Setup(yolo)

        ImageProcess(current_img)

        Frame = processedImg

        cv2.imshow("Image", Frame)

        if create is None:

            fourcc = cv2.VideoWriter_fourcc(*'XVID')

            create = cv2.VideoWriter(opname, fourcc, 30, (Frame.shape[1], Frame.shape[0]), True)

    create.write(Frame)

    if cv2.waitKey(1) & 0xFF == ord('s'):

        break

time2 = time.time()

print("Completed. Total Time Taken: {} minutes".format((time2-time1)/60))

cap.release()

cv2.destroyAllWindows()

Why is Social Distancing Detection helpful?

  1. Convincing Workers

Since 41% of workers say they won't return to their desks until they feel safe, installing social distancing detection is an excellent way to reassure them that the issue is being addressed. People without fevers can still be contagious, so this solution is preferable to thermal imaging cameras.

  2. Space Utilization

Using the detection program, you can find out which places in the workplace are the most popular. As a result, you'll have all the information you need to implement the best precautions.

  3. The Practice of Keeping Tabs and Taking Measures

The software can also be connected to security video systems beyond the office, such as in a factory where workers are frequently close to one another, making it possible to monitor the working environment and single out anyone standing too close to others.

  4. Tracking the Queues

Queue monitoring is a valuable addition to security cameras for businesses in retail, healthcare, and other sectors where waiting in line is unavoidable. The cameras can monitor and recognize whether or not people are following the social distance requirements, and the system can be configured to work with automatic barricades and digital billboards to provide real-time alerts and health and safety information.

Consequences of Isolation

The adverse effects of social isolation include the following:

  • Its efficacy decreases when mosquitoes, infected food or water, or other vectors are predominantly responsible for spreading disease.

  • If a person isn't used to being in a social setting, they may become lonely and depressed.

  • Productivity drops, and other benefits of interacting with other people are lost.

Conclusion

This tutorial showed us how to build a social distance detection system. The technology uses AI and deep learning to analyze visual data, and incorporating computer vision allows reasonably accurate distance estimates between people. A red box appears around any group that violates the minimum acceptable threshold. The system was developed on previously shot footage of a busy roadway and can determine an approximate distance between individuals, classifying the space between people as either a "Safe" or an "Unsafe" distance. In addition, it shows labels according to object detection and classification. The classifier can be used in real-time applications and run on live video streams. During pandemics, this technology can be combined with CCTV to monitor the public; since such mass screening is practical, systems like this are routinely deployed in high-traffic areas such as airports, bus terminals, markets, streets, shopping mall entrances, campuses, and even workplaces and restaurants. Keeping an eye on the distance between two people lets us ensure sufficient space is maintained between them.

Interface PCF8591 ADC/DAC Analog Digital Converter Module with Raspberry Pi 4

Welcome back to another Python tutorial for the Raspberry Pi 4! The previous tutorial showed us how to construct a Raspberry Pi-powered cell phone with a microphone and speaker for making and receiving calls and reading text messages (SMS). To make our Raspberry Pi 4 into a fully functional smartphone, we built software in Python. As we monitored text and phone calls being sent and received between the raspberry pi and our mobile phone, we experienced no technical difficulties. But in this tutorial, you'll learn how to hook up the PCF8591 ADC/DAC module to a Raspberry Pi 4.

Since most sensors only output their data in analog values, converting them to binary values that a microcontroller can understand is a crucial part of any integrated electronics project. A microcontroller's ability to process analog data necessitates using an analog-to-digital converter.

Some microcontrollers, including the Arduino, MSP430, and PIC16F877A, contain an onboard analog-to-digital converter (ADC), whereas others, like the 8051 and Raspberry Pi, do not.

Where To Buy?
No. | Components     | Distributor | Link To Buy
1   | Jumper Wires   | Amazon      | Buy Now
2   | PCF8591        | Amazon      | Buy Now
3   | Raspberry Pi 4 | Amazon      | Buy Now

Required Components

  1. Raspberry-pi 4

  2. PCF8591 ADC Module

  3. 100K Pot

  4. Jumper wires

It is assumed that you have a Raspberry Pi 4 with the most recent version of Raspbian OS installed on it, and that you are familiar with using a terminal program like PuTTY to connect to the Pi over the network and access its file system remotely. Those unfamiliar with the Raspberry Pi can learn the basics by reading the articles below.

PCF8591 ADC/DAC Module

The PCF8591 is an 8-bit ADC/DAC module: it has four analog inputs and one analog output, and each conversion maps an analog voltage to a digital value between 0 and 255, or vice versa on the DAC side. The board carries a thermistor and an LDR circuit, so both analog input and analog output can be exercised. To support the I2C protocol, it has dedicated serial clock (SCL) and serial data (SDA) pins. The supply voltage ranges from 2.5 to 6 V, and the stand-by current is minimal. We can also turn the module's potentiometer knob to adjust the input voltage. A total of three jumpers can be found on the board: connecting J4, J5, or J6 switches in the thermistor, the LDR/photoresistor, or the adjustable-voltage circuit, respectively. D1 and D2 are two LEDs on the board, with D1 indicating the strength of the output voltage and D2 the strength of the supply voltage; when the supply or output voltage increases, the brightness of D1 and D2 increases correspondingly. They can also be tested with a potentiometer connected to the VCC or AOUT pins.

Microprocessors, Arduinos, Raspberry Pis, and other digital logic circuits can interact with the physical environment thanks to Analogue-to-Digital Converters (ADCs). Many digital systems gather information about their settings by analyzing the analog signals produced by transducers such as microphones, light detectors, thermometers, and accelerometers. These signals constantly vary in value since they are derived from the physical world.

Digital circuits use binary signals, which can only be in one of two states, "1" (HIGH) or "0" (LOW), as opposed to the infinitely variable voltage values of analog signals. An Analogue-to-Digital Converter (A/D) is therefore an essential electronic circuit for translating continuously varying analog signals into discrete digital ones.

To put it simply, an analog-to-digital converter (ADC) is a device that, given a single instantaneous reading of an analog voltage, generates a unique digital output code that stands for that reading. The precision of an A/D converter is set by how many binary digits, or bits, are used to represent the original analog voltage value.
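To make that concrete, here is a minimal sketch of the mapping an n-bit converter performs; the 3.3 V reference and the helper name are illustrative assumptions, not part of any particular chip:

def adc_code(vin, vref=3.3, bits=8):
    levels = (2 ** bits) - 1              # an 8-bit converter has 255 steps
    vin = min(max(vin, 0.0), vref)        # clamp the input to the valid range
    return round(vin / vref * levels)

print(adc_code(1.65))                     # mid-scale input -> 128, about half of 255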

Analogue and Digital Signals

By rotating the potentiometer's wiper terminal between 0 and VMAX, we may see a continuous output signal with an endless set of output values related to the wiper position. In a potentiometer, the output voltage constantly varies while the wiper is moved between fixed positions. Variations in temperature, pressure, liquid levels, and brightness are all examples of analog signals.

A digital circuit uses a single rotary switch to step through the potential divider network, taking the place of the potentiometer's wiper at each node. The output voltage, VOUT, transitions abruptly from one node to the next as the switch is turned, with each node's value representing a multiple of 1.0 volts.

The output is guaranteed to be 2 volts, 3 volts, and so on, but NOT 2.5, 3.1, or 4.6 volts. Finer output voltage levels could be generated by using a multi-position switch and more resistive components in the voltage divider network, resulting in more discrete switching steps.
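As a quick worked example of those 1.0 V steps (the 4 V supply and the four equal resistors are assumptions chosen to match the description above), each node of such a chain sits at V x k / N:

V, N = 4.0, 4                              # assumed supply voltage and resistor count
for k in range(1, N):
    print("node", k, ":", V * k / N, "V")  # prints 1.0, 2.0 and 3.0 V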

By this definition, we can see that a digital signal has discrete (step-by-step) values, while an analog signal's values change continuously over time. We are going from "LOW" to "HIGH" or "HIGH" to "LOW."

So the question becomes how to transform an infinitely variable signal into one with discrete values or steps that a digital circuit can work with.

Converting from Analog to Digital

Although several commercially available analog-to-digital converter (ADC) chips exist, such as the ADC08xx family, for converting analog voltage signals to their digital equivalents, a basic ADC can be constructed out of discrete components.

Using comparators to detect various voltage levels and output their switching signal state to an encoder is a straightforward method known as parallel encoding, flash encoding, simultaneous encoding, or multiple comparator converters.

For a given n-bit resolution, the equivalent output code is produced by a chain network of precision resistors and a series of comparators that are equally spaced along it.

As soon as an analog signal is applied to a comparator input, it is compared against a reference voltage. Parallel converters are advantageous because of their ease of construction and their lack of need for timing clocks. The following comparator circuit may be of interest.

A Logic Comparator

The LM339N is an analog comparator that compares the relative magnitudes of two voltage levels via its two analog inputs (one positive and one negative).

The comparator receives two signals, one representing the input voltage (VIN) and the other the reference voltage (VREF). The comparator's digital output state, "1" or "0", is determined by comparing these two voltages at its inputs.

One input (VREF) receives a reference voltage, and the other input (VIN) receives the input voltage to be compared against it. An LM339 comparator's output is "OFF" when the input voltage is lower than the reference (VIN < VREF) and "ON" when it is higher (VIN > VREF). A comparator, then, is a device that determines which of two voltages is greater.

Using the potential divider network established by R1 and R2, we can set VREF. If the two resistors are identical in value (R1 = R2), then the reference voltage is half the supply (V/2). Therefore, as in a 1-bit ADC, the output of the open-collector comparator is HIGH if VIN is lower than V/2 and LOW otherwise.

By increasing the number of resistors in the voltage divider circuit, we can "divide" the voltage source into steps set by the ratio of the resistances. The number of comparators needed, however, increases with the number of resistors in the voltage-divider network.

For an "n"-bit binary output, where "n" is commonly between 8 and 16 bits, a 2n- 1 comparator would be needed in general. As we saw previously, the comparator utilized by the one-bit ADC to determine whether or not VIN was more significant than the V/2 voltage output was 21 minus 1, which equals 1.

If we want to build a 2-bit ADC, we'll need 2^2 - 1 = 3 comparators, since the 4-to-2-bit encoder circuitry depicted above requires four distinct voltage levels to represent the four digital values.

Circuit for 2-bit A/D Conversion

This gives a two-bit code for each of the four potential ranges of the analog input:

A/D Conversion Output, 2-Bit

Where X is a "don't care" state, representing a logical 0 or 1.

So how does this analog-to-digital converter operate? To be of any value, an A/D converter must generate a faithful digital representation of the analog input signal. To keep things straightforward, we've assumed that VIN is somewhere between 0 and 4 volts and have adjusted VREF and the voltage divider network so that there is a 1 V drop across each resistor in this simple 2-bit analog-to-digital example.

A binary zero (00) is output by the encoder on pins Q0 and Q1 when the input voltage, VIN, is below the lowest reference level, that is, when VIN is between 0 and 1 volt (1V). Since comparator U1's reference input is set to 1 volt, when VIN rises above 1 volt but stays below 2 volts, U1's output goes HIGH. The change at input D1 makes the priority encoder, used for the 4-to-2-bit encoding, generate a binary "1" (01).

Remember that the inputs of a priority encoder, like the TTL 74LS148, are all assigned different priority levels. The highest-priority active input always determines the encoder's output, so when a higher-priority input is present, lower-priority inputs are disregarded. Therefore, if several inputs are at logic "1" simultaneously, only the highest-priority one is reflected in the output code.

Thus, when VIN exceeds 2 volts, the next reference voltage level, comparator U2 senses the difference and outputs HIGH. When VIN rises above 3 volts, the priority encoder outputs a binary "3" (11), since the highest encoder input has priority over D0 and D1. As VIN changes across each reference voltage level, each comparator outputs its HIGH or LOW state to the encoder, which generates 2-bit binary data between 00 and 11.

This is great and all, but commercially available priority encoders, like the TTL 74LS148, are 8-input circuits, and if we use one for a 2-bit converter, most of it goes unused. A more straightforward encoder circuit can be made from digital Ex-OR gates and a grid of signaling diodes.

Diode-based 2-bit ADC

Before feeding the diodes, the results of the comparators go through an Exclusive-OR gate to be encoded. Whenever the diode is reverse biased, an external pull-down resistor is connected between the diodes' outputs and ground (0V) to maintain a LOW state and prevent the outputs from floating.

Also, as with the main circuit, the value of VIN controls which comparator sends a HIGH (or LOW) signal to the exclusive-OR gates, which output HIGH if one of their inputs is HIGH but not both (the corresponding Boolean expression is Q = A.B' + A'.B). These Ex-OR gates could also be built from combinational AND-OR-NAND logic.

The difficulty with both of these 4-to-2 converter designs is that the analog input voltage at VIN must change by one full volt before the encoder changes its output code, limiting the precision of the simple two-bit A/D converter to 1 volt. The output resolution can be improved by employing more comparators and converting to a three-bit A/D converter.

A/D Converter, 3-Bit

The parallel ADC described above takes an analog input voltage between 0 and just over 3 volts and turns it into a 2-bit binary code. Since a 3-bit digital system provides 2^3 = 8 possible outputs, the input analog voltage can instead be compared against a scale of eight voltages, each one-eighth (1/8) of the voltage supply. This means we can now measure to an accuracy of 0.5 V (4 V / 8), and 2^3 - 1 = 7 comparators are needed to generate a binary code with 3-bit resolution (from 000 (0) to 111 (7)).

Circuit for 3-bit Analog-to-Digital Conversion

This will provide us with a three-bit code for each of the eight potential values of the analog input of:

The result of a Three-Bit Analog-to-Digital Converter

An "X" may be a logic 0 or a logic 1 to indicate a "don't care" state.

Then we can see that increasing the ADC's resolution requires more comparators and reference levels and produces more output binary bits.

Therefore, an analog-to-digital converter with a 4-bit resolution needs only 15 (2^4 - 1) comparators, an eight-bit resolution requires 255 (2^8 - 1) comparators, a 10-bit resolution needs 1023, and so on. The complexity of this type of analog-to-digital converter circuit therefore grows quickly as the number of output bits increases.
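The growth is easy to verify with the 2^n - 1 formula; this short sketch just reproduces the comparator counts quoted above:

# a flash (parallel) ADC with n output bits needs 2**n - 1 comparators
for n in (1, 2, 3, 4, 8, 10):
    print(n, "bits ->", 2 ** n - 1, "comparators")   # 1, 3, 7, 15, 255, 1023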

Because of its fast, real-time conversion rate, a parallel or "flash" A/D converter can quickly be developed as part of a project when only a few binary bits are needed to represent an input analog signal on a display unit.

As an input interface circuit component, an analog-to-digital converter turns the analog signal from sensors or transducers into a digital binary code. Similarly, a digital-to-analog converter turns a digital binary code back into a comparable analog quantity for output interfacing, to operate a motor or actuator or, more often, in audio applications.

Raspberry Pi's I2C pins

Knowing the Raspberry Pi's I2C port pins and setting up the I2C connection in the pi 4 are the initial steps in using a PCF8591 with the Pi.

GPIO2 and GPIO3 on the Rpi Model are utilized for I2C communication in this guide.

Raspberry Pi I2C Configuration

The Raspberry Pi's I2C interface is disabled by default, so it must be activated before anything else. To turn on the Pi's I2C port:

  1. Open a terminal and enter sudo raspi-config.

  2. The RPi 4 Software Configuration Tool opens.

  3. Activate I2C under the Interfacing Options.

  4. Restart the Pi after enabling I2C.

Reading the PCF8591's I2C Address with a Raspberry Pi

The Raspberry Pi has to know the PCF8591's I2C address before communication can begin. To find it, link the PCF8591's SDA and SCL pins to the Raspberry Pi's SDA and SCL pins, and connect the 5 V and GND pins as well.

You can find the address of an attached I2C device by opening a terminal and entering one of the following commands.

sudo i2cdetect -y 1 or sudo i2cdetect -y 0

After locating the I2C address, the next step is constructing the circuit and setting up the required libraries to use the PCF8591 with a Raspberry Pi 4.

Connecting the PCF8591 ADC/DAC Module to the Raspberry Pi 4

The circuit for interfacing the PCF8591 with the Raspberry Pi is straightforward. In this example, we read the analog signal from one of the analog inputs and display it in the Raspberry Pi terminal, using a 100K pot to vary the reading.

Connect the module's VCC and GND to the Pi's power supply and ground. Then hook up SDA to GPIO2 (physical pin 3) and SCL to GPIO3 (physical pin 5). Last but not least, link AIN0 to the 100K pot. Instead of using the Terminal to view the ADC values, a 16x2 LCD can be added.

The A/D Conversion Python Program

The complete code and demo video are included after this guide.

To communicate with the I2C bus, you must first import the SMBus library; the time library is used to add a short delay between readings.

import smbus

import time

Create some variables now. The I2C bus address is stored in the first variable, and the address of the first analog input pin in the second.

address = 0x48

A0 = 0x40

Next, we call the smbus library's SMBus(1) function to create a bus object.

bus = smbus.SMBus(1)

The first line in the while loop tells the IC to take a reading from the first analog input pin; the second line reads the value back from that address and stores it in a variable, which is then printed.

while True:

    bus.write_byte(address,A0)

    value = bus.read_byte(address)

    print(value)

    time.sleep(0.1)
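The loop prints the raw 8-bit code (0 to 255). If you also want an approximate voltage, or to read the other analog inputs, the sketch below extends the loop body; the control bytes 0x41-0x43 for AIN1-AIN3 and the 3.3 V reference are assumptions based on the common PCF8591 control-byte layout, so adjust them to match your board:

A0, A1, A2, A3 = 0x40, 0x41, 0x42, 0x43   # assumed control bytes for AIN0..AIN3
bus.write_byte(address, A1)               # select AIN1 instead of AIN0
raw = bus.read_byte(address)              # 8-bit result, 0..255
print(raw, "->", round(raw / 255 * 3.3, 2), "V")   # assumed 3.3 V reference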

Finally, put the Python script in a file ending in .py and run it in the Raspberry Pi terminal with the command below.

python filename.py

Ensure that I2C communication is turned on and that the pins are connected according to the diagram before running the code, or you will get errors. The analog readings should then appear in the terminal, and the values shift gradually as you turn the pot's knob. You can see the program in action in the demo video included after this guide.

Here is the full Python script.

import smbus

import time

address = 0x48

A0 = 0x40

bus = smbus.SMBus(1)

while True:

    bus.write_byte(address,A0)

    value = bus.read_byte(address)

    print(value)

    time.sleep(0.1)

ADC's Practical Uses

We rely heavily on electronic gadgets in today's high-tech society, and digital signals drive these devices. Most quantities are now represented digitally, but the physical world still speaks analog, so an ADC is employed to transform analog signals into digital ones. ADCs can be used in an almost infinite variety of contexts. Here are only a few examples of their use:

  • Cell phones use a digitized voice signal. The voice is first transformed into digital form by an ADC before being passed to the phone's transmitter.

  • Digital photos and videos shot with a camera can be viewed on any computer or mobile device thanks to an analog-to-digital converter.

  • Medical imaging techniques such as X-rays and MRIs use an ADC to digitize the raw analog signal before further processing, after which the images are rendered so they can be read easily.

  • ADC converters can also transfer music from a cassette tape to a digital format, such as a CD or a USB flash drive.

  • The analog-to-digital converter in a digital oscilloscope converts analog signals into digital ones that can then be displayed and used for further analysis.

  • An air conditioner's built-in temperature sensors maintain consistent comfort levels: the onboard controller reads the temperature through the ADC and makes adjustments based on that data.

Nowadays, practically everything has a digital counterpart, so nearly every gadget must include an ADC, for the simple reason that its processing happens in the digital domain, which is reachable only through an analog-to-digital converter.

Conclusion

This piece taught us how to connect a Raspberry Pi 4 to a PCF8591 analog-to-digital converter module. We observed the output displayed as integers on our Terminal and looked at how the ADC generates its output signals. Next, we will use OpenCV and a Raspberry Pi 4 to create a social distance detector.

Sending SMS & Call with GSM Module and Raspberry Pi 4

Greetings, and welcome to another tutorial in our series on Raspberry Pi 4 Python programming. The previous guide covered the basics of transmitting data over the radio using the nRF24L01 module with the Pi 4, where we also interfaced an Arduino with the Raspberry Pi and sent radio signals between the two devices. This tutorial will walk you through building a Raspberry Pi-based mobile phone with a microphone and speaker for making and receiving calls and reading text messages (SMS). The project also serves as a proper GSM module interface for the Raspberry Pi, with all the code needed for the most fundamental features of any modern smartphone. First we will understand what GSM is, its architecture, and how it works; then we will learn how to program it on our Pi 4. So let us begin.

Where To Buy?
No. | Components     | Distributor | Link To Buy
1   | Jumper Wires   | Amazon      | Buy Now
2   | LCD 16x2       | Amazon      | Buy Now
3   | Raspberry Pi 4 | Amazon      | Buy Now

Components:

  • Raspberry Pi 4

  • GSM Module

  • 16x2 LCD

  • 4x4 Keypad

  • 10k pot

  • Breadboard

  • Connecting jumper wires

  • Power supply

  • Speaker

  • Microphone

  • SIM Card

  • Loudspeaker

Structure and Uses of the Global System for Mobile Communications

The acronym "GSM" stands for "Global System for Mobile communication" and names a widely used family of mobile communication modems. GSM was conceived at Bell Labs in the 1970s and is one of the most common forms of mobile communication around the globe. GSM networks operate in the 850 MHz, 900 MHz, 1800 MHz, and 1900 MHz frequency bands and form an open, digital mobile network carrying voice and data services.

GSM was created as a digital system using the time-division multiple access (TDMA) telecommunications technique. For transmission, a GSM device converts analog signals to digital, compresses them, and sends them down a channel whose bandwidth is shared with other users' data streams.

A GSM network mixes macro, micro, and umbrella cells, and each cell's coverage area depends on the environment in which it is deployed.

Time-division multiple access works by giving each user a specific time slot for transmitting on the same frequency. It is flexible, supporting data rates from 64 kbps to 120 Mbps, and allows clear voice communications.

Structure of GSM-Based Technologies

The primary components of the GSM architecture are the following:

  • Network Switching Subsystem (NSS)

  • Base Station Subsystem (BSS)

  • Mobile Station (MS)

  • Operation and Support Subsystem (OSS)

All of these components are necessary for proper communication.

Network Switching Subsystem (NSS)

The components in this part of the GSM architecture are collectively called the core network. The mobile network is primarily controlled, and interfaced with other networks, through this collection of elements. Listed below are some of the most crucial elements of the core network.

Mobile Switching Centre (MSC)

One of the essential parts of a GSM network's core is the Mobile Switching Centre (MSC). The MSC performs the same functions as a switching node in an ISDN or PSTN, but provides additional features to accommodate mobile users' requirements, such as authentication, registration, inter-MSC handovers, call localization, and routing.

In addition, it connects the mobile network to the PSTN (public switched telephone network) for making and receiving landline calls, and interfaces to the other switched telephone networks' central servers facilitate mobile-to-mobile calls across different networks.

Home Location Register (HLR)

Every subscriber's administrative details, including their last known location, are stored in the HLR database, so calls can be routed over the GSM network to the right base station. If a call comes in while a subscriber's phone is turned on, the network can determine which base transceiver station the phone is attached to and connect the call to the correct handset.

Even when the phone is on but not in use, it still registers so that the HLR stays aware of its current location. Each network has a single HLR, which may be physically split across several data centers for practical reasons.

Visitor Location Register (VLR)

The VLR incorporates data from the HLR in order to deliver the subscriber's services locally. The visitor location register can be run independently, but it is most commonly implemented as a core component of the MSC, which makes access easier and faster.

Equipment Identity Register (EIR)

The Equipment Identity Register (EIR) is the part of the network infrastructure that decides whether a given piece of mobile equipment may access the network. Each device is uniquely identified by its International Mobile Equipment Identity (IMEI) number.

The IMEI number is permanently embedded in the mobile device during manufacturing and is checked by the network at registration. Depending on the data in the EIR, the device is given one of three network-access states: allowed, barred, or monitored.

Authentication Centre (AuC)

The AuC (authentication centre) holds a protected copy of the secret key stored in each subscriber's SIM card. It is used to authenticate the subscriber and to derive the keys used for ciphering on the radio channel.

Gateway Mobile Switching Centre (GMSC)

A call placed to a mobile station (MS) whose location is not yet known first terminates at the GMSC (Gateway Mobile Switching Centre). Using the Mobile Subscriber ISDN Number (MSISDN) and the HLR, the GMSC locates the visited MSC and connects the call to the appropriate location. The "MSC" in GMSC is somewhat misleading, since the gateway procedure does not actually require connecting to an MSC.

SMS Gateway (SMS-G)

The GSM specifications refer to two SMS gateways collectively as the SMS gateway; the messages passing through them are routed in different directions.

Delivering a short message to mobile equipment (ME) uses the Short Message Service Gateway MSC (SMS-GMSC), while short messages originating from a mobile are routed through the SMS Inter-Working MSC (SMS-IWMSC). The SMS-GMSC's primary function is delivery toward the mobile, and the SMS-IWMSC serves as a fixed access point to the Short Message Service Centre.

These are the primary nodes in the GSM system's infrastructure. While they frequently share physical space, the entire core network is sometimes distributed across the country, which provides a degree of resilience in the event of a failure.

Base Station Subsystem (BSS)

The BSS is the connection point between the mobile and the broader network infrastructure. The Base Transceiver Station houses the radio transceivers and handles the radio protocols with mobile devices, while a Base Station Controller manages the transceivers and serves as a bridge between the mobiles and the mobile switching centre.

The network subsystem handles connectivity between the network and the mobile stations. The Mobile Switching Centre is the backbone of the Network Subsystem, allowing users to connect to other networks (ISDN, PSTN, etc.). The GSM system's ability to route calls and support roaming depends on two additional components, the Home Location Register and the Visitor Location Register.

In addition, it stores the Equipment Identity Register, which keeps track of all mobile devices by their IMEI numbers; the IMEI is the unique worldwide identifier for mobile devices.

In the second generation of the GSM network design, the mobile devices communicate with the BSS, or Base Station Subsystem. The components of this subsystem are examined below.

Base Transceiver Station (BTS)

In a GSM network, the radio transmitter, receiver, and their associated antennas make up the Base Transceiver Station, which transmits to, receives from, and communicates directly with mobiles. The base station is the central component of each cell and talks to mobile devices over an interface known as the Um interface and its associated protocols.

Base Station Controller (BSC)

The next step back into the GSM network is the base station controller (BSC). This controller is typically co-located with one of the base transceiver stations it controls. It handles radio resource management, including channel allocation and handover between groups of base stations, and communicates with the BTSs over the Abis interface.

The base station's radio technology lets multiple users share the system at the same time: each radio channel can support up to eight users, so each base station can serve many of them.

The network provider places base stations strategically to ensure comprehensive coverage. The area served by one base station is known as a "cell". Signals cannot be prevented from bleeding into neighbouring cells, so the channels used in one cell are not reused in the next.

Mobile Station

Mobile phones include a transceiver, a display, and a CPU, all network-connected and operated through a SIM card. In a GSM system, the mobile station or mobile equipment, most commonly a cell phone, is what the user carries and controls. Handsets have shrunk dramatically while their functionality has skyrocketed, with much longer battery life as yet another advantage. The phone hardware and the subscriber identity module (SIM) are two of its main components.

A mobile device's hardware consists of its primary components, such as the housing, display, battery, and the electronics used to generate and process the signal for transmission and reception. The IMEI is a unique number assigned to each mobile device, permanently programmed in during manufacturing. During the registration process, the network checks this number against its database to see whether the device has been flagged as stolen.

A user's identity on the network is stored in the SIM card, along with other data such as the IMSI number. Because the IMSI lives in the SIM, a phone user can easily switch phones by swapping SIM cards. The fact that switching to a new mobile phone was simple and didn't require a new phone number encouraged people to upgrade, generating more revenue for network operators and contributing to GSM's overall economic triumph.

Operation and Support Subsystem (OSS)

The OSS is an integral aspect of any functional GSM network; it links the NSS and BSS parts. Its primary focus is monitoring the GSM network and the BSS traffic load. Note that as the number of base stations grows with the subscriber population, some maintenance tasks are moved to the base station controllers to lower the system's maintenance cost.

The 2G GSM network architecture is built around a logical functional split. The approach is remarkably straightforward compared with today's mobile network architectures, which rely on software-defined entities to enable highly adaptable operation, but the 2G GSM architecture shows clearly how the essential voice and operational functions are organized.

Specifications of a GSM Module

The following are some of the functions and features provided by GSM:

  • Enhanced spectrum efficiency

  • International roaming, integrated services digital network (ISDN) compatibility, and support for future services

  • High-quality, encrypted voice communications

  • Standard phone features such as a programmable alarm clock, fixed dialling numbers, a real-time clock, and the ability to send and receive SMS messages

As a result of its rigorous security measures, GSM is currently among the safest systems available in the telecommunications industry. Call privacy and subscriber anonymity for GSM users are only protected during transmission, but this is still a massive step toward attaining end-to-end security.

GSM Modem

In either mobile-phone or modem form, a Global System for Mobile Communications (GSM) modem lets two computers or processors communicate over the mobile network. A SIM card is needed to run a GSM modem, and it can only be used within the coverage area subscribed to from the network provider. It offers serial, USB, and Bluetooth options for connecting to a personal computer.

Any regular GSM cell phone can double as a GSM modem if you have a suitable cable and driver installed on your PC, although a dedicated GSM modem is the better choice. GSM modems are found in many devices, including POS terminals, inventory management systems, surveillance cameras, weather stations, and GPRS-mode remote data loggers.

GSM Module Operation

Below is a circuit showing how to connect a GSM modem to a microcontroller (MC) using the level-shifting IC MAX232. When a numeric command is received by short message service (SMS) from any mobile device, the SIM-equipped GSM modem transfers it to the MC over the serial connection. The modem is programmed so that the command "STOP" produces an MC output, which is used to deactivate the ignition switch.

If the input is driven low, the GSM modem sends a predetermined message (in this case, "ALERT") to the user. A 16x2 LCD screen displays the entire procedure.

    In-depth working Explanation of raspberry pi 4

    We have used a GSM module and a Raspberry Pi 4 to manage the entire system and interface its many parts in this project. A 4x4 alphanumeric keypad lets you input data of any kind, including phone numbers and text messages, place and answer calls, and read and reply to text messages. The SIM900A GSM module connects the system to the wireless network for making and receiving calls and sending and receiving text messages. We have also integrated a microphone and a loudspeaker for voice calls, plus a 16x2 liquid crystal display (LCD) that shows information such as menu options and alerts.

    With alphanumeric input, you can use the same keypad to type both numbers and letters. The code that enables alphabetic input on top of the numeric keys appears in the code section below.

    This plan is simple to put into action: every function is driven from the alphanumeric keypad. Below you'll find a link to the complete code and a demonstration video. This section elaborates on the four main features of the project.

    The Pi 4 Mobile Phone Four Main Attributes

    Make Call

    To place a call on the Pi 4 phone we built, press the key "C" and enter the cellphone number you wish to call on the alphanumeric keypad. Once the correct number has been entered, press "C" again. The Pi 4 then issues the following AT command to connect the call to the specified number.

    ATDxxxxxxxxxx; <Enter>     where xxxxxxxxxx is the entered mobile number.
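    As a rough illustration, the same dial command can be issued from Python with pyserial; the port and number below are placeholders, and the trailing semicolon is what marks the call as a voice call.

    import serial

    gsm = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=2)  # assumed port
    number = "9876543210"                                        # placeholder
    gsm.write(("ATD" + number + ";\r").encode())   # ';' makes it a voice call
    # gsm.write(b"ATH\r") would hang the call up again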

    Receive Call

    Answering a phone call is simple. When a call comes into the SIM number stored in the GSM Module of your system, the LCD will display the message "Incoming..." along with the caller's number. All that's left to do is hit the 'A' key to answer the call. Pi 4 will send the following command to the GSM Module when the "A" button is pressed:

    ATA <enter>
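    Here is a hedged sketch of the receiving side: once caller-ID presentation has been enabled with AT+CLIP=1 (the project's gsmInit() does this), an incoming call appears on the serial port as "RING" plus a +CLIP line, and ATA answers it. The port is again a placeholder.

    import re
    import serial

    gsm = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=2)  # assumed port
    data = gsm.read(64).decode(errors="ignore")
    if "RING" in data:
        m = re.search(r'\+CLIP: "([^"]+)"', data)
        caller = m.group(1) if m else "unknown"
        print("Incoming...", caller)    # shown on the LCD in the project
        gsm.write(b"ATA\r")             # answer the call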

    Transmit SMS

    Pressing "D" on our Raspberry Pi phone allows us to send a text message. To whom (or what) should we address the SMS message that the system has just requested? Once the number has been entered, pressing "D" again will prompt the LCD to request a message. To send an SMS, enter the message using the keypad as you would with any other mobile device, and then hit the 'D' key again. Raspberry Pi can send SMS with the following command:

    AT+CMGF=1 <enter>

    AT+CMGS="xxxxxxxxxx" <enter>     where xxxxxxxxxx is the entered mobile number
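    The same sequence, sketched for this article in Python 3 with pyserial (port, number, and message are placeholders): after AT+CMGS the module replies with a '>' prompt, the message body follows, and Ctrl+Z (0x1A) actually sends it.

    import time
    import serial

    gsm = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=2)  # assumed port
    gsm.write(b"AT+CMGF=1\r")               # SMS text mode
    time.sleep(0.5)
    gsm.write(b'AT+CMGS="9876543210"\r')    # recipient (placeholder)
    time.sleep(0.5)                         # wait for the '>' prompt
    gsm.write(b"Hello from Raspberry Pi!")  # message body
    gsm.write(b"\x1a")                      # Ctrl+Z sends the message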

    Receive and Read SMS

    This part is easy to use as well. The SIM card receives SMS messages through the GSM module, and the Raspberry Pi keeps a close eye on the UART for the SMS notification. A new message is announced by the LCD displaying "New Message", and reading it is as simple as pressing the "B" key. The received-SMS notification looks like this:

    +CMTI: "SM",6     where 6 is the location at which the message is stored on the SIM card.

    When the RPi detects this 'SMS received' notification, it extracts the SMS storage location and instructs the GSM module to read the message, while the LCD flashes the words "New Message" in a prominent location.

    AT+CMGR=<SMS stored location><enter>
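    A small sketch of that notification handling (Python 3 with pyserial; the port is a placeholder): pull the storage index out of the +CMTI line with a regular expression, then read the message back with AT+CMGR.

    import re
    import time
    import serial

    gsm = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=2)  # assumed port
    data = gsm.read(64).decode(errors="ignore")
    m = re.search(r'\+CMTI: "SM",(\d+)', data)
    if m:
        index = m.group(1)                    # e.g. "6"
        gsm.write(("AT+CMGR=%s\r" % index).encode())
        time.sleep(1)
        print(gsm.read(200).decode(errors="ignore"))  # header line plus body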

    The GSM module then delivers the saved message to the Raspberry Pi, which extracts the SMS body and shows it on the LCD. As for the MIC and speaker, no special code is needed; they connect directly to the GSM module's audio pins.

    Detailed Circuit Layout and Explanation

    The RS, EN, D4, D5, D6, and D7 pins of the 16x2 liquid crystal display are wired to GPIO pins of the Raspberry Pi. The GSM module's Rx and Tx pins connect directly to the Raspberry Pi's Tx and Rx pins. Rows R1, R2, R3, and R4 of the 4x4 keypad go to GPIOs 12, 16, 20, and 21, while columns C1, C2, C3, and C4 go to GPIOs 26, 19, 13, and 6. The microphone connects directly to the GSM module's mic+ and mic- pins and the loudspeaker to its sp+ and sp- pins; the loudspeaker can be driven straight from the GSM module, with an audio amplifier circuit added only if you want to boost the volume.

    Explanation of the Code

    The programming of this Pi 4 mobile phone may be challenging for novices. The programming language of choice for this project is Python.

    Here, we define the keypad() function for basic numeric input. We have also added a def alphaKeypad(): function for typing letters, so the same keypad serves both purposes. To make it behave like the Arduino keypad library, we have given this keypad a range of extra capabilities: a whole string of text or a numeric value can be entered with just a few key presses.

    For example, if we press key 2 (ABC2) once, the LCD shows the letter 'A'. Press it again and 'B' takes its place; press it a third time and 'C' appears in the same spot. After a key is left untouched for a short time, the LCD cursor advances to the next position, and we can proceed to the next character or number. Every other key works the same way.
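    The multi-tap idea is easier to see in isolation. Below is a small illustrative sketch (plain Python 3, no GPIO, written for this article) that cycles through a key's character group exactly as described; the key-to-index table mirrors the alpha string defined later in the code.

    alpha = "1!@.,:?ABC2DEF3GHI4JKL5MNO6PQRS7TUV8WXYZ90 *#"
    # first and last index of each key's slice in alpha
    KEYMAP = {'1': (0, 6),   '2': (7, 10),  '3': (11, 14), '4': (15, 18),
              '5': (19, 22), '6': (23, 26), '7': (27, 31), '8': (32, 35),
              '9': (36, 40), '0': (41, 42), '*': (43, 43), '#': (44, 44)}

    def multitap(key, presses):
        # Return the character selected by pressing `key` `presses` times.
        start, last = KEYMAP[key]
        return alpha[start + (presses - 1) % (last - start + 1)]

    print(multitap('2', 1))   # A
    print(multitap('2', 2))   # B
    print(multitap('2', 3))   # C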

    def keypad():

       # Scan the 4x4 matrix: drive one column low at a time,

       # then test each row for a pressed key.

       for j in range(4):

         gpio.setup(COL[j], gpio.OUT)

         gpio.output(COL[j], 0)

         ch=0

         for i in range(4):

           if gpio.input(ROW[i])==0:

             ch=MATRIX[i][j]

             while (gpio.input(ROW[i]) == 0):   # wait for key release (debounce)

               pass

             return ch

         gpio.output(COL[j],1)

    def alphaKeypad():

        lcdclear()

        setCursor(x,y)

        lcdcmd(0x0f)

        msg=""

        while 1:

            key=0

            count=0

            key=keypad()

            if key == '1':

                ind=0

                maxInd=6

                Key='1'

                getChar(Key, ind, maxInd)

                .... .....

                ..... .....

    To begin, we have declared the pins for the liquid crystal display, the keypad, and the other components, and included the necessary libraries in this Python script:

    import RPi.GPIO as gpio

    import serial

    import time


    msg=""

    alpha="1!@.,:?ABC2DEF3GHI4JKL5MNO6PQRS7TUV8WXYZ90 *#"

    x=0

    y=0


    MATRIX = [

                ['1','2','3','A'],

                ['4','5','6','B'],

                ['7','8','9','C'],

                ['*','0','#','D']

             ]

    ROW = [21,20,16,12]

    COL = [26,19,13,6]

    ... .....

    ..... .....

    Next, the pins are configured in the proper direction (input or output):

    gpio.setwarnings(False)

    gpio.setmode(gpio.BCM)

    gpio.setup(RS, gpio.OUT)

    gpio.setup(EN, gpio.OUT)

    gpio.setup(D4, gpio.OUT)

    gpio.setup(D5, gpio.OUT)

    gpio.setup(D6, gpio.OUT)

    gpio.setup(D7, gpio.OUT)

    gpio.setup(led, gpio.OUT)

    gpio.setup(buz, gpio.OUT)

    gpio.setup(m11, gpio.OUT)

    gpio.setup(m12, gpio.OUT)

    gpio.setup(button, gpio.IN)

    gpio.output(led , 0)

    gpio.output(buz , 0)

    gpio.output(m11 , 0)

    gpio.output(m12 , 0)

    Serial communication is then initialized as follows:

    Serial = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=2)

    We must now create a liquid crystal display driving function. The def lcdcmd(ch): and def lcdwrite(ch): functions are used to deliver commands and data to the LCD, respectively. The liquid crystal display may also be cleared with def lcdclear(), the cursor position can be set with def setCursor(x,y), and a string can be sent to the liquid crystal display with def lcdprint(Str).

    def lcdcmd(ch): 

      gpio.output(RS, 0)

      gpio.output(D4, 0)

      gpio.output(D5, 0)

      gpio.output(D6, 0)

      gpio.output(D7, 0)

      if ch&0x10==0x10:

        gpio.output(D4, 1)

        .... .....

        ..... ....

    def lcdwrite(ch): 

      gpio.output(RS, 1)

      gpio.output(D4, 0)

      gpio.output(D5, 0)

      gpio.output(D6, 0)

      gpio.output(D7, 0)

      if ch&0x10==0x10:

        gpio.output(D4, 1)

      if ch&0x20==0x20:

        gpio.output(D5, 1)

        .... .....

        ..... ....

    def lcdclear():

      lcdcmd(0x01)

     

    def lcdprint(Str):

      l=0;

      l=len(Str)

      for i in range(l):

        lcdwrite(ord(Str[i]))

    def setCursor(x,y):

        if y == 0:

            n=128+x      # 0x80 + x: DDRAM address of column x, row 0

        elif y == 1:

            n=192+x      # 0xC0 + x: DDRAM address of column x, row 1

        lcdcmd(n)

    Next, we'll need to code some features for interacting with text messages, phone calls, and incoming calls.

    The call is placed using the function def call():. The LCD displays the incoming call alert and the caller's number via the function def receiveCall(data):, and the call is answered with def attendCall():.

    The message is composed and sent using the alphaKeypad() method, accessed via the def sendSMS(): function. An incoming SMS is detected, and its storage location retrieved, by the def receiveSMS(data): function. Finally, the LCD gets updated with the message thanks to def readSMS(index):.

    All of the operations mentioned above are included in the Code that follows.

    import RPi.GPIO as gpio

    import serial

    import time

    msg=""

    #  start index in alpha for each key:  '1'->0  '2'->7  '3'->11  '4'->15  '5'->19  '6'->23  '7'->27  '8'->32  '9'->36  '0'->41  '*'->43  '#'->44

    alpha="1!@.,:?ABC2DEF3GHI4JKL5MNO6PQRS7TUV8WXYZ90 *#"

    x=0

    y=0

    MATRIX = [

                ['1','2','3','A'],

                ['4','5','6','B'],

                ['7','8','9','C'],

                ['*','0','#','D']

             ]

    ROW = [21,20,16,12]

    COL = [26,19,13,6]

    moNum=['0','0','0','0','0','0','0','0','0','0']

    m11=17

    m12=27

    led=5

    buz=26

    button=19

    RS =18

    EN =23

    D4 =24

    D5 =25

    D6 =8

    D7 =7

    HIGH=1

    LOW=0

    gpio.setwarnings(False)

    gpio.setmode(gpio.BCM)

    gpio.setup(RS, gpio.OUT)

    gpio.setup(EN, gpio.OUT)

    gpio.setup(D4, gpio.OUT)

    gpio.setup(D5, gpio.OUT)

    gpio.setup(D6, gpio.OUT)

    gpio.setup(D7, gpio.OUT)

    gpio.setup(led, gpio.OUT)

    gpio.setup(buz, gpio.OUT)

    gpio.setup(m11, gpio.OUT)

    gpio.setup(m12, gpio.OUT)

    gpio.setup(button, gpio.IN)

    gpio.output(led , 0)

    gpio.output(buz , 0)

    gpio.output(m11 , 0)

    gpio.output(m12 , 0)

    for j in range(4):

        gpio.setup(COL[j], gpio.OUT)

        gpio.output(COL[j],1)

    for i in range (4):

        gpio.setup(ROW[i],gpio.IN,pull_up_down=gpio.PUD_UP)

    Serial = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=2)

     

    data=""

    def begin():

      lcdcmd(0x33)   # initialization sequence for 4-bit mode

      lcdcmd(0x32)   # switch the HD44780 controller to 4-bit mode

      lcdcmd(0x06)   # entry mode: auto-increment the cursor

      lcdcmd(0x0C)   # display on, cursor off

      lcdcmd(0x28)   # function set: 4-bit interface, 2 lines, 5x8 font

      lcdcmd(0x01)   # clear the display

      time.sleep(0.0005)

    def lcdcmd(ch): 

      gpio.output(RS, 0)

      gpio.output(D4, 0)

      gpio.output(D5, 0)

      gpio.output(D6, 0)

      gpio.output(D7, 0)

      if ch&0x10==0x10:

        gpio.output(D4, 1)

      if ch&0x20==0x20:

        gpio.output(D5, 1)

      if ch&0x40==0x40:

        gpio.output(D6, 1)

      if ch&0x80==0x80:

        gpio.output(D7, 1)

      gpio.output(EN, 1)

      time.sleep(0.005)

      gpio.output(EN, 0)

      # Low bits

      gpio.output(D4, 0)

      gpio.output(D5, 0)

      gpio.output(D6, 0)

      gpio.output(D7, 0)

      if ch&0x01==0x01:

        gpio.output(D4, 1)

      if ch&0x02==0x02:

        gpio.output(D5, 1)

      if ch&0x04==0x04:

        gpio.output(D6, 1)

      if ch&0x08==0x08:

        gpio.output(D7, 1)

      gpio.output(EN, 1)

      time.sleep(0.005)

      gpio.output(EN, 0)

    def lcdwrite(ch): 

      gpio.output(RS, 1)

      gpio.output(D4, 0)

      gpio.output(D5, 0)

      gpio.output(D6, 0)

      gpio.output(D7, 0)

      if ch&0x10==0x10:

        gpio.output(D4, 1)

      if ch&0x20==0x20:

        gpio.output(D5, 1)

      if ch&0x40==0x40:

        gpio.output(D6, 1)

      if ch&0x80==0x80:

        gpio.output(D7, 1)

      gpio.output(EN, 1)

      time.sleep(0.005)

      gpio.output(EN, 0)

      # Low bits

      gpio.output(D4, 0)

      gpio.output(D5, 0)

      gpio.output(D6, 0)

      gpio.output(D7, 0)

      if ch&0x01==0x01:

        gpio.output(D4, 1)

      if ch&0x02==0x02:

        gpio.output(D5, 1)

      if ch&0x04==0x04:

        gpio.output(D6, 1)

      if ch&0x08==0x08:

        gpio.output(D7, 1)

      gpio.output(EN, 1)

      time.sleep(0.005)

      gpio.output(EN, 0)

    def lcdclear():

      lcdcmd(0x01)

    def lcdprint(Str):

      l=0;

      l=len(Str)

      for i in range(l):

        lcdwrite(ord(Str[i]))

    def setCursor(x,y):

        if y == 0:

            n=128+x      # 0x80 + x: DDRAM address of column x, row 0

        elif y == 1:

            n=192+x      # 0xC0 + x: DDRAM address of column x, row 1

        lcdcmd(n)

    def keypad():

       # Scan the 4x4 matrix: drive one column low at a time,

       # then test each row for a pressed key.

       for j in range(4):

         gpio.setup(COL[j], gpio.OUT)

         gpio.output(COL[j], 0)

         ch=0

         for i in range(4):

           if gpio.input(ROW[i])==0:

             ch=MATRIX[i][j]

             while (gpio.input(ROW[i]) == 0):   # wait for key release (debounce)

               pass

             return ch

         gpio.output(COL[j],1)

    def serialEvent():

        data = Serial.read(20)

        #if data != '\0':

        print data

        data=""

    def gsmInit():

        lcdclear()

        lcdprint("Finding Module");

        time.sleep(1)

        while 1:

            data=""

            Serial.write("AT\r");

            data=Serial.read(10)

            print data

            r=data.find("OK")

            if r>=0:

                break

            time.sleep(0.5)

        while 1:

            data=""

            Serial.write("AT+CLIP=1\r");

            data=Serial.read(10)

            print data

            r=data.find("OK")

            if r>=0:

                break

            time.sleep(0.5)

        lcdclear()

        lcdprint("Finding Network")

        time.sleep(1)

        while 1:

            data=""

            Serial.flush()

            Serial.write("AT+CPIN?\r");

            data=Serial.read(30)

            print data

            r=data.find("READY")

            if r>=0:

                break

            time.sleep(0.5)

        lcdclear()

        lcdprint("Finding Operator")

        time.sleep(1)

        while 1:

            data=""

            Serial.flush()

            Serial.read(20)

            Serial.write("AT+COPS?\r");

            data=Serial.read(40)

            #print data

            r=data.find("+COPS:")

            if r>=0:

                l1=data.find(",\"")+2

                l2=data.find("\"\r")

                operator=data[l1:l2]

                lcdclear()

                lcdprint(operator)

                time.sleep(3)

                print operator

                break;

            time.sleep(0.5)

        Serial.write("AT+CMGF=1\r");

        time.sleep(0.5)

       # Serial.write("AT+CNMI=2,2,0,0,0\r");

       # time.sleep(0.5)

        Serial.write("AT+CSMP=17,167,0,0\r");

        time.sleep(0.5)

    def receiveCall(data):

            inNumber=""

            r=data.find("+CLIP:")

            if r>0:

                inNumber=""

                inNumber=data[r+8:r+21]

                lcdclear()

                lcdprint("incoming")

                setCursor(0,1)

                lcdprint(inNumber)

                time.sleep(1)

                return 1

    def receiveSMS(data):

        print data

        r=data.find("\",")

        print r

        if r>0:

            if data[r+4] == "\r":

                smsNum=data[r+2:r+4]

            elif data[r+3] == "\r":

                smsNum=data[r+2]

            elif data[r+5] == "\r":

                smsNum=data[r+2:r+5]

            else:

                print "else"

            print smsNum

            if r>0:

                lcdclear()

                lcdprint("SMS Received")

                setCursor(0,1)

                lcdprint("Press Button B")

                print "AT+CMGR="+smsNum+"\r"

                time.sleep(2)

                return str(smsNum)

        else:

            return 0

    def attendCall():

        print "Attend call"

        Serial.write("ATA\r")

        data=""

        data=Serial.read(10)

        l=data.find("OK")

        if l>=0:

            lcdclear()

            lcdprint("Call attended")

            time.sleep(2)

            flag=-1;

            while flag<0:

                data=Serial.read(12);

                print data

                flag=data.find("NO CARRIER")

                #flag=data.find("BUSY")

                print flag

            lcdclear()

            lcdprint("Call Ended")

            time.sleep(1)

            lcdclear()

    def readSMS(index):

                    print index

                    Serial.write("AT+CMGR="+index+"\r")

                    data=""

                    data=Serial.read(200)

                    print data

                    r=data.find("OK")

                    if r>=0:

                        r1=data.find("\"\r\n")

                        msg=""

                        msg=data[r1+3:r-4]

                        lcdclear()

                        lcdprint(msg)

                        print msg

                        time.sleep(5)

                        lcdclear();

                        smsFlag=0

                        print "Receive SMS"

    def getChar(Key, ind, maxInd):

                ch=0

                ch=ind

                lcdcmd(0x0e)

                Char=''

                count=0

     

                global msg

                global x

                global y

                while count<20:

                    key=keypad()

                    print key

                    if key== Key:

                        setCursor(x,y)

                        Char=alpha[ch]

                        lcdwrite(ord(Char))

                        ch=ch+1

                        if ch>maxInd:

                            ch=ind

                        count=0

                    count=count+1

                    time.sleep(0.1)

                msg+=Char

                x=x+1

                if x>15:

                    x=0

                    y=1

                lcdcmd(0x0f)

     

    def alphaKeypad():

        lcdclear()

        setCursor(x,y)

        lcdcmd(0x0f)

        msg=""

        while 1:

            key=0

            count=0

            key=keypad()

            if key == '1':

                ind=0

                maxInd=6

                Key='1'

                getChar(Key, ind, maxInd)

            elif key == '2':

                ind=7

                maxInd=10

                Key='2'

                getChar(Key, ind, maxInd)

            elif key == '3':

                ind=11

                maxInd=14

                Key='3'

                getChar(Key, ind, maxInd)

            elif key == '4':

                ind=15

                maxInd=18

                Key='4'

                getChar(Key, ind, maxInd)

            elif key == '5':

                ind=19

                maxInd=22

                Key='5'

                getChar(Key, ind, maxInd)

            elif key == '6':

                ind=23

                maxInd=26

                Key='6'

                getChar(Key, ind, maxInd)

            elif key == '7':

                ind=27

                maxInd=31

                Key='7'

                getChar(Key, ind, maxInd)

            elif key == '8':

                ind=32

                maxInd=35

                Key='8'

                getChar(Key, ind, maxInd)

            elif key == '9':

                ind=36

                maxInd=40

                Key='9'

                getChar(Key, ind, maxInd)

            elif key == '0':

                ind=41

                maxInd=42

                Key='0'

                getChar(Key, ind, maxInd)

            elif key == '*':

                ind=43

                maxInd=43

                Key='*'

                getChar(Key, ind, maxInd)

            elif key == '#':

                ind=44

                maxInd=44

                Key='#'

                getChar(Key, ind, maxInd)

            elif key== 'D':

                return

    def sendSMS():

        print"Sending sms"

        lcdclear()

        lcdprint("Enter Number:")

        setCursor(0,1)

        time.sleep(2)

        moNum=""

        while 1:

            key=0;

            key=keypad()

            #print key

            if key>0:

                if key == 'A'  or key== 'B' or key== 'C':

                    print key

                    return

                elif key == 'D':

                    print key

                    print moNum

                    Serial.write("AT+CMGF=1\r")

                    time.sleep(1)

                    Serial.write("AT+CMGS=\"+91"+moNum+"\"\r")

                    time.sleep(2)

                    data=""

                    data=Serial.read(60)

                    print data

                    alphaKeypad()

                    print msg

                    lcdclear()

                    lcdprint("Sending.....")

                    Serial.write(msg)

                    time.sleep(1)

                    Serial.write("\x1A")

                    while 1:

                        data=""

                        data=Serial.read(40)

                        print data

                        l=data.find("+CMGS:")

                        if l>=0:

                            lcdclear()

                            lcdprint("SMS Sent.")

                            time.sleep(2)

                            return;

                        l=data.find("Error")

                        if l>=0:

                            lcdclear()

                            lcdprint("Error")

                            time.sleep(1)

                            return

                else:

                    print key

                    moNum+=key

                    lcdwrite(ord(key))

                    time.sleep(0.5)

    def call():

        print "Call"

        n=0

        moNum=""

        lcdclear()

        lcdprint("Enter Number:")

        setCursor(0,1)

        time.sleep(2)

        while 1:

            key=0;

            key=keypad()

            #print key

            if key>0:

                if key == 'A'  or key== 'B' or key== 'D':

                    print key

                    return

                elif key == 'C':

                    print key

                    print moNum

                    Serial.write("ATD+91"+moNum+";\r")

                    data=""

                    time.sleep(2)

                    data=Serial.read(30)

                    l=data.find("OK")

                    if l>=0:

                        lcdclear()

                        lcdprint("Calling.....")

                        setCursor(0,1)

                        lcdprint("+91"+moNum)

                        time.sleep(30)

                        lcdclear()

                        return

                    #l=data.find("Error")

                    #if l>=0:

                    else:

                        lcdclear()

                        lcdprint("Error")

                        time.sleep(1)

                        return

                else:

                    print key

                    moNum+=key

                    lcdwrite(ord(key))

                    n=n+1

                    time.sleep(0.5)

    begin()

    lcdcmd(0x01)

    lcdprint("  Mobile Phone  ")

    lcdcmd(0xc0)

    lcdprint("    Using RPI     ")

    time.sleep(3)

    lcdcmd(0x01)

    lcdprint("Circuit Digest")

    lcdcmd(0xc0)

    lcdprint("Welcomes you")

    time.sleep(3)

    gsmInit()

    smsFlag=0

    index=""

    while 1:

        key=0

        key=keypad()

        print key

        if key == 'A':

          attendCall()

        elif key == 'B':

          readSMS(index)

          smsFlag=0

        elif key == 'C':

          call()

        elif key == 'D':

          sendSMS()

        data=""

        Serial.flush()

        data=Serial.read(150)

        print data

        l=data.find("RING")

        if l>=0:

          callstr=data

          receiveCall(data)

        l=data.find("\"SM\"")

        if l>=0:

          smsstr=data

          smsIndex=""

          smsIndex=receiveSMS(smsstr)

          print smsIndex

          if smsIndex>0:

              smsFlag=1

              index=smsIndex

        if smsFlag == 1:

            lcdclear()

            lcdprint("New Message")

            time.sleep(1)

        setCursor(0,0)

        lcdprint("C--> Call <--A");

        setCursor(0,1);

        lcdprint("D--> SMS  <--B")

    GSM Technology Applications

    Here are some examples of how GSM technology can be put to use.

    1. Automation and Safety via Smart GSM Technology

    Nowadays, we can hardly live without our GSM mobile terminal. The mobile phone is essentially an extension of ourselves, connecting us to the world in the same way our wallet, keys, or watch does. Many people value never having to worry about being unreachable or unable to call someone at any given moment.

    As its name suggests, this project relies on the SMS transmission capability of GSM networks. Text messaging is widely used here to control equipment and to manage home-security breaches. The system comprises two subsystems: the appliance control subsystem makes it possible to control household appliances from afar, while the security alert subsystem provides automatic security monitoring.

    The system lets consumers adjust the state of a home appliance by sending instructions via SMS from a designated phone number. Upon detecting an intrusion, the system can generate an automatic SMS, warning the user of a potential security threat.

    GSM technology makes global, instantaneous, and universal communication possible. GSM's functional architecture employs intelligent networking principles as a first step toward a genuinely personal communication system with standards sufficient to ensure interoperability.

    2. Medical Uses for GSM-Based Systems

    Here are two examples of similar situations to think about.

    • The patient has sustained a life-threatening injury or illness and requires emergency medical attention. A mobile phone is the only thing he (or his companion) has.

    • After being released from the hospital, the patient plans to rest at home but is reminded that he must return for routine exams. A mobile phone and perhaps some health monitoring or other medical sensor gadgets may be in his possession.

    The only way to solve either problem is via a mobile communication system. In other words, the above scenarios are easily manageable with today's communication technology because all that needs to be done is send the patient's information across a network and have it processed at the receiving end, which may be a hospital or the doctor's office.

    In the first scenario, the doctor keeps tabs on the patient's information and returns the instructions to him so he can take whatever precautions before getting to the hospital. In the second scenario, the doctor keeps tabs on the patient's test results and, if necessary, proceeds with treatment.

    Telemedicine services are the driving force behind this entire operation. The telemedicine system has three different applications.

    • Video conferencing lets patients in one location have face-to-face contact with their doctors and nurses, speeding up the healing process.

    • Remote monitoring uses sensors that constantly report on a patient's condition and guide medical staff on how to proceed with treatment.

    • Store-and-forward transfer sends the gathered health information for later review and analysis.

    A wireless method of communication is used for the three options mentioned above. When providing healthcare, it is necessary to have many data retrieval mechanisms in place. These can be online medical databases or hosts with equipment that aid recovery and health monitoring. Broadband networks, medium-throughput media, and narrowband GSM access are all viable possibilities.

    There are several benefits to using GSM technology in a telemedicine setup.

    • Cost savings and widespread availability of GSM receivers (including cell phones and modems)

    • Fast data transfer

    3. Typical Telemedical Infrastructure

    The four components that make up a standard telemedicine system are as follows:

    1. The Patient Unit: It takes data from the patient, either in its original analog form or after being converted to digital format, and then manages the data stream before sending it. It is made up of several different types of medical sensors, such as those used to track heart rate, blood pressure, body temperature, spirometry, etc., each of which generates an electrical signal that is sent to a processor or controller for analysis before being transmitted over a wireless network.

    2. Communication Network: The network handles both data transmission and its security. In the Global System for Mobile Communication (GSM), it comprises the core network, base stations, and mobile stations. The mobile station, i.e., the mobile phone or primary mobile access point, is the device that connects the user's equipment to the GSM network.

    3. Receiving/Server Side: This is a healthcare system with a GSM modem installed to receive, decode, and forward signals to the presenting device.

    4. Presentation Unit: This is the brains of the operation. Its processor saves the data in a standard format for later retrieval and analysis by doctors, who can also send text messages back to the client side if necessary.

    To demonstrate the fundamentals of telemedicine, a rudimentary model will suffice. It has a sender and a receiver, both of which are separate components. The sensor input is transmitted by the transmitter and received by the receiver unit for processing.

    See below for a simplified telemedicine system to track a patient's heart rate and apply the results as needed.

    The data collected by the heartbeat detector (a light-based device whose light is modulated as it passes through human blood) is transformed into electrical pulses at the transmitter unit. The microcontroller picks up these pulses, calculates the heart rate, and communicates that figure and the other collected data to the medical team over the GSM network. A MAX232 level-shifting IC connects the microcontroller to the GSM modem.

    The GSM modem at the receiving end grabs the information and passes it to the microcontroller, which analyzes it with the help of a personal computer and displays the outcome on the LCD. Medical professionals can keep tabs on the patient and begin the necessary treatment after reviewing the results on the screen.
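    To make the transmitter's arithmetic concrete, here is a small hedged sketch (Python 3, with invented sample data) that turns heartbeat timestamps into a BPM figure and formats it as the kind of message a transmitter unit might send.

    # timestamps of detected heartbeats, in seconds (invented example data)
    beat_times = [0.00, 0.81, 1.62, 2.44, 3.25, 4.06]

    # average interval between consecutive beats
    intervals = [t2 - t1 for t1, t2 in zip(beat_times, beat_times[1:])]
    avg_interval = sum(intervals) / len(intervals)

    bpm = round(60.0 / avg_interval)           # beats per minute
    sms_body = "PATIENT 42: HR %d BPM" % bpm   # placeholder patient ID
    print(sms_body)                            # "PATIENT 42: HR 74 BPM"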

    Medical Applications of Global Systems for Mobile Communication

    The following are some real-world applications for GSM technology.

    1. AT&T Health GlowCaps

    These plain-looking pill bottles gently prompt the patient to take their prescribed medication. GSM technology is used to contact the patient on their mobile phone at the specified pill-taking time, at which point the cap lights up, a buzzer sounds, and the patient is reminded to take their medication. Each time a bottle is opened, the event is recorded.

    2. Ultrasound Technology

    With the help of a portable ultrasound transducer that connects to a smartphone, ultrasound images captured with a handheld device can be sent to a distant location over the Global System for Mobile Communications (GSM).

    3. A Continuous Glucose Monitor (CGM)

    The patient's blood sugar levels can be tracked and reported to the doctor. A sensor is implanted under the skin and monitors blood glucose levels, sending the data to a receiver (a mobile phone) at regular intervals.

    Conclusion

    As part of this guide, we analyzed GSM's architecture and learned how it operates in practice. We wrote a Python program to turn our Raspberry Pi 4 into a fully functional mobile phone and watched text messages and phone calls travel between the Raspberry Pi and our mobile phone without technical difficulties. You should now feel confident in your ability to apply these ideas and understand GSM circuits. One way to raise the difficulty of this project is to attempt a live video call using the Raspberry Pi 4 mobile. Next, we'll look at connecting the PCF8591 ADC/DAC analog-to-digital converter module to a Raspberry Pi.
