Welcome to the next tutorial in our Raspberry Pi 4 Python programming series. In the previous article, we built a system that recognizes when two people are in physical contact, using OpenCV and a Raspberry Pi 4; the deep-neural-network part was implemented with pre-trained weights from the YOLOv3 object-detection algorithm. For image processing, the Raspberry Pi consistently comes out on top compared to other controllers, and a facial recognition program was among the earlier attempts to use the Raspberry Pi for sophisticated picture processing. In today's world of cutting-edge technology, digital image processing has expanded rapidly to become an integral feature of many portable electronic gadgets.
Digital image processing is widely used for tasks such as object detection, facial recognition, and people counting. This guide will use a Raspberry Pi 4 and ThingSpeak to create a crowd-counting system based on OpenCV. We will use the Pi camera module to take pictures in a continuous loop, run each image through the Histogram of Oriented Gradients (HOG) descriptor to find the objects in it, and then match those detections against OpenCV's pre-trained people-detection model. Because the ThingSpeak channel is public, the headcount can be seen by anybody, anywhere in the world.
Knowing how many people show up to an event or purchase a newly released product is vital for event managers and retail shop owners, and it is even more important that they can use that information to improve future events. Modern crowd-counting technology has made it simpler for event planners and business owners to acquire actionable attendance data that can be used to improve ROI.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|---|---|---|---|
| 1 | Raspberry Pi 4 | Amazon | Buy Now |
Raspberry Pi 4
Pi Camera
ThingSpeak
Python3
OpenCV3
In this project, the OpenCV framework does the people counting. Before you can install OpenCV, you must first update your Raspberry Pi:
sudo apt-get update
Then, get OpenCV ready for your Raspberry Pi by installing its prerequisites.
sudo apt-get install libhdf5-dev -y
sudo apt-get install libhdf5-serial-dev -y
sudo apt-get install libatlas-base-dev -y
sudo apt-get install libjasper-dev -y
sudo apt-get install libqtgui4 -y
sudo apt-get install libqt4-test -y
Once that is done, use the following command to install OpenCV on your Raspberry Pi.
pip3 install opencv-contrib-python==4.1.0.25
We need to get some additional packages on the Raspberry Pi before we can begin writing the code for the Crowd Counting app.
Installing imutils: imutils is used to perform basic image processing tasks like translating, rotating, resizing, skeletonizing, and displaying Matplotlib images more efficiently in OpenCV. Run the following command to set up imutils:
pip3 install imutils
matplotlib: Next, install the matplotlib library. When it comes to Python visualizations, Matplotlib is your one-stop shop for everything from static to animated to interactive plots.
pip3 install matplotlib
ThingSpeak is one of the most widely used IoT platforms; it lets us keep tabs on our data from any location with an Internet connection, and a system can also be controlled remotely through the channels and web pages ThingSpeak provides. To create a channel, you must first register for a ThingSpeak account. If you already have one, log in with your username and password.
Select Sign up and fill out the required fields.
Verify your email address and press the "Next" button when you're done. Once you're logged in, click the "New Channel" button to create a brand-new channel.
When you're ready to begin uploading information, select "New Channel" and give it a descriptive name and brief explanation. One new field, "People," has been added; any number of fields may be created, as needed. Then, click the "Save Channel" button after entering the necessary information. You'll need to pass your Write API key and channel ID into the Python script whenever you want to submit data to ThingSpeak.
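As a quick check that the channel works, you can push a test value from Python. This is a minimal sketch using ThingSpeak's standard update endpoint; the key below is a placeholder for your own Write API key:

import requests

WRITE_API = 'YOUR_WRITE_API_KEY'  # placeholder; copy this from your channel's API Keys tab
response = requests.get('https://api.thingspeak.com/update',
                        params={'api_key': WRITE_API, 'field1': 5})
print(response.status_code, response.text)  # ThingSpeak returns the new entry number, or 0 on failure

If the printed body is a number greater than zero, the value has landed in your "People" field.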
For this OpenCV people-counting project, all you need is a Raspberry Pi and a Pi camera; to get started, plug the camera's ribbon connector into the Raspberry Pi's designated camera slot.
The Pi Camera board is a purpose-built expansion board for the Raspberry Pi computer, connected via a specialized CSI interface. In its native still-capture mode, the sensor's resolution is 5 megapixels; in video mode it can capture up to 1080p at 30 frames per second. Because of its portability and compact size, this camera module is fantastic for handheld applications.
A ribbon cable connects the camera board to the Raspberry Pi. The camera will only work if you join the ribbon cable correctly: the cable's blue backing must face away from the camera PCB, and at the Raspberry Pi end the blue backing must face the Ethernet port.
The HOG is a feature descriptor, similar in spirit to the Canny edge detector. Object detection is a typical application of this technique in image processing and computer vision. The method counts occurrences of gradient orientations in localized regions of an image, and it has a lot in common with the Scale-Invariant Feature Transform (SIFT). The HOG descriptor highlights object structure and form, and it is superior to plain edge descriptors because it considers both the magnitude and the angle of the gradient: histograms are created for regions of the image based on the gradient's magnitude and direction.
First, load the image on which the HOG features will be computed and resize it to 128 by 64 pixels. The authors of the original paper used and recommended this dimension because improving detection outcomes for pedestrians was their primary goal. After achieving near-perfect scores on the MIT pedestrian database, they created a new, more difficult dataset: the 'INRIA' dataset (http://pascal.inrialpes.fr/data/human/), which includes 1805 (128x64) photographs of individuals cropped from a wide range of personal photos.
In this step, we compute the image's gradient, from which each pixel's magnitude and angle are obtained. Considering the 3x3 neighborhood around each pixel, we first determine Gx and Gy for every pixel. For a pixel with intensity I(r, c) at row r and column c, the standard [-1, 0, 1] HOG filters give (reconstructing the formulas the original showed as images):

Gx(r, c) = I(r, c + 1) - I(r, c - 1)
Gy(r, c) = I(r - 1, c) - I(r + 1, c)

Once Gx and Gy are determined, each pixel's magnitude and angle are computed as:

Magnitude μ = √(Gx² + Gy²)
Angle θ = |tan⁻¹(Gy / Gx)|
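As a rough illustration of this step, the gradients, magnitude, and angle can be computed with OpenCV and NumPy. This is a sketch, not the detector's internal code; the input file name is hypothetical:

import cv2
import numpy as np

img = cv2.imread('person.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical input image
img = cv2.resize(img, (64, 128)).astype(np.float32)   # cv2.resize takes (width, height)
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=1)  # horizontal gradient, [-1, 0, 1] kernel
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=1)  # vertical gradient
mag, angle = cv2.cartToPolar(gx, gy, angleInDegrees=True)
angle = angle % 180  # fold into 0-180 degrees (unsigned gradients, as in the HOG paper)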
Once the gradient for each pixel has been calculated, the resulting gradient matrices are partitioned into 8x8 cells, and each cell is assigned a 9-point histogram. Each bin in a 9-point histogram covers a 20-degree range, so the resulting histogram has nine bins in total. The calculated values are assigned to the 9-bin histogram graphically (Figure 8), and each 9-point histogram can be plotted so that its bins show the relative strength of the gradient across the corresponding angular interval. Since a cell contains 64 pixels, the calculation below is carried out for each of the 64 magnitude-and-angle pairs. Because 9-point histograms are being used, the bin width is Δθ = 180/9 = 20 degrees.
The limits of each jth bin are (reconstructing the standard HOG formulation):

[Δθ · j, Δθ · (j + 1)), for j = 0, 1, ..., 8

The value at the center of each jth bin is:

Cj = Δθ · (j + 0.5)
Illustration of a histogram with nine discrete bins. For a particular 8x8 cell of 64 pixels, there is exactly one histogram; each of the sixty-four pixels contributes its Vj and Vj+1 values to the array at the jth and (j+1)th positions.
When determining the value a pixel contributes, we first determine which bin j the pixel's angle θ falls into; its magnitude μ is then split between bins j and j+1. In the standard formulation, the following equations provide the values:

j = floor(θ/Δθ - 1/2)
Vj = μ · (C(j+1) - θ) / Δθ
V(j+1) = μ · (θ - Cj) / Δθ
Each pixel's contributions, Vj and Vj+1, are computed and added at the jth and (j+1)th indexes of its cell's histogram. Upon completing the preceding steps, the resulting histogram matrix has dimensions 16 x 8 x 9. When the histograms of all cells have been computed, blocks are formed by joining 2x2 groups of cells from the 16 x 8 histogram matrix. This grouping is carried out overlappingly, with an 8-pixel (one-cell) stride. The 9-point histograms of the four cells that make up a block are concatenated into a 36-feature vector.
A combined block feature vector, Fb, is created from the four cell histograms as a 2x2 window traverses the image.
The L2 norm is used to normalize the Fb values of each block. The normalization constant k is found as (reconstructing the usual formula):

k = √(V1² + V2² + ... + V36²)

and the block feature is divided by it: Fb ← Fb / k.
Normalization is performed to lessen the impact of contrast variations between photographs of the same object. From each block, a 36-point feature vector is collected. Across the 64x128 window, the 2x2 block fits in 7 positions horizontally and 15 positions vertically, so the total length of all histogram-of-oriented-gradients features is 7 x 15 x 36 = 3780. These are the HOG characteristics extracted for the image.
HOG features can be visualized side by side with the original image using an image-processing library.
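For example, the scikit-image library (an assumption on our part; the text does not name the library) can extract and visualize HOG features next to the source image. The input file name is hypothetical:

from skimage.feature import hog
from skimage import exposure
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('person.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical input image
img = cv2.resize(img, (64, 128))  # 64 wide, 128 high, as described above
features, hog_image = hog(img, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2), visualize=True)
print(features.shape)  # (3780,) = 7 x 15 block positions x 36 features
hog_image = exposure.rescale_intensity(hog_image, in_range=(0, 10))  # boost contrast for display
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(img, cmap='gray')
ax2.imshow(hog_image, cmap='gray')
plt.show()

The printed shape matches the 3780-feature count derived above.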
This page includes the complete Python code for an OpenCV project that counts the people in a crowd. Here, we break down the code's crucial parts so you can understand them better. First, import all the necessary libraries that will be used later in the code.
import cv2
import imutils
from imutils.object_detection import non_max_suppression
import numpy as np
import requests
import time
import base64
from matplotlib import pyplot as plt
from urllib.request import urlopen
Imutils:
For use with OpenCV and either version of Python, this package provides a set of helper functions for everyday image processing tasks such as scaling, cropping, skeletonizing, showing Matplotlib pictures, grouping contours, identifying edges, and more.
Numpy:
The NumPy library lets you manipulate arrays in Python; matrix operations, the Fourier transform, and linear algebra are all within its purview. Because it is freely available to the public, anyone can use it. Its name is short for "Numerical Python."
Python's built-in lists can serve as arrays, but they are slow. NumPy's intended benefit is an array object up to 50 times faster than standard Python lists. To make working with its array object, ndarray, as simple as possible, the library provides several helpful utilities. Data science makes heavy use of arrays because of the importance placed on speed and efficiency.
Requests:
You should use the requests package if you need to send an HTTP request from Python. It hides the complexity of making requests behind a lovely, straightforward API, freeing you to focus on the application's interactions with services and data consumption.
Time:
In Python, the time module has a built-in function called localtime() that determines the current local time from the number of seconds that have passed since the epoch. In its result, tm_isdst ranges from 0 to 1 to indicate whether daylight saving time applies to the current local time.
Base64:
If you need to store or transmit binary data over a medium better suited for text, you should look into Base64 encoding. This encoding reduces the risk of data corruption or loss. Base64 is widely used for many purposes, such as MIME-encoded email and storing complex data in XML and JSON.
Matplotlib:
When it comes to Python visualizations, Matplotlib is your one-stop shop for everything from static to animated to interactive plots. It facilitates both straightforward and challenging tasks: you can design graphs worthy of publication and create figures that can be moved, updated, and zoomed.
urllib.request:
If you need to make HTTP requests with Python, you may be directed to the brilliant requests library. It's a great library, but as you may have noticed, it is not a built-in part of Python. If you prefer, for whatever reason, to limit your dependencies and stick to standard-library Python, you can reach for urllib.request.
Then, after the libraries have been imported, paste in the channel ID and Write API key you copied from your ThingSpeak channel earlier.
channel_id = 812060 # PUT CHANNEL ID HERE
WRITE_API = 'X5AQ3EGIKMBYW31H' # PUT YOUR WRITE KEY HERE
BASE_URL = "https://api.thingspeak.com/update?api_key={}".format(WRITE_API)
Set the default values for the HOG descriptor. HOG is one of the most often implemented methods for object detection and has found several other uses as well. OpenCV's pre-trained model for people detection is accessed through cv2.HOGDescriptor_getDefaultPeopleDetector().
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
The detector() function receives a three-channel color image, then uses imutils to scale it down to an appropriate size. The detectMultiScale() method then uses the SVM classifier to examine the image and determine whether humans are present.
def detector(image):
    image = imutils.resize(image, width=min(400, image.shape[1]))
    clone = image.copy()
    rects, weights = hog.detectMultiScale(image, winStride=(4, 4), padding=(8, 8), scale=1.05)
If you're getting false positives or detection failures due to overlapping capture boxes, the code below applies the non-maxima suppression capability from imutils to merge overlapping regions.
    for (x, y, w, h) in rects:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    result = non_max_suppression(rects, probs=None, overlapThresh=0.7)
    return result
Inside the record() function, the image is retrieved from the Pi camera with OpenCV's VideoCapture() method, resized with imutils, passed to the detector, and the resulting count is sent to ThingSpeak.
def record(sample_time=5):
    camera = cv2.VideoCapture(0)
    ret, frame = camera.read()   # grab a frame from the Pi camera
    frame = imutils.resize(frame, width=min(400, frame.shape[1]))
    result = detector(frame.copy())
    result1 = len(result)        # number of people detected
    thingspeakHttp = BASE_URL + "&field1={}".format(result1)
Now that everything is hooked up and ready to go, let's put it through its paces. Extract the program to a new folder and make sure your Raspberry Pi camera is operational before running the Python script. Give Python a few seconds to load all the necessary modules; after that, a window will pop up showing the camera's output. Once the camera check is complete, launch the script with the command below.
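(The file name crowd_counting.py is a placeholder; substitute whatever you named the script.)

python3 crowd_counting.py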
At that point, a new window will appear with your live video feed in it. OpenCV will count the number of people in the first frame the Pi processes, and a box will appear around each detected human:
Now that you know how many people are expected to show up, you can check the crowd size from the comfort of your own home via your ThingSpeak channel.
You can now efficiently conduct crowd counts with OpenCV and a Raspberry Pi. This technology helps guarantee the safety of those attending large-scale events, which is a top priority for event planners. Knowing how people flow through a venue or store is crucial for offering effective crowd-management services, and it improves efficiency and customer service, since event and store managers can track the number of people entering and leaving their establishments at any one time. Additionally, event planners need to understand dwell time to ascertain which parts of the venue are popular with attendees and which are bypassed entirely. This gives them information about how guests felt, which lets them make better use of the space they have.
import cv2
import imutils
from imutils.object_detection import non_max_suppression
import numpy as np
import requests
import time
import base64
from matplotlib import pyplot as plt
from urllib.request import urlopen

channel_id = 812060 # PUT CHANNEL ID HERE
WRITE_API = 'X5AQ3EGIKMBYW31H' # PUT YOUR WRITE KEY HERE
BASE_URL = "https://api.thingspeak.com/update?api_key={}".format(WRITE_API)

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detector(image):
    image = imutils.resize(image, width=min(400, image.shape[1]))
    clone = image.copy()
    rects, weights = hog.detectMultiScale(image, winStride=(4, 4), padding=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    result = non_max_suppression(rects, probs=None, overlapThresh=0.7)
    return result

def record(sample_time=5):
    print("recording")
    camera = cv2.VideoCapture(0)
    init = time.time()
    # ubidots sample limit
    if sample_time < 3:
        sample_time = 1
    while True:
        print("cap frames")
        ret, frame = camera.read()
        frame = imutils.resize(frame, width=min(400, frame.shape[1]))
        result = detector(frame.copy())
        result1 = len(result)
        print(result1)
        for (xA, yA, xB, yB) in result:
            cv2.rectangle(frame, (xA, yA), (xB, yB), (0, 255, 0), 2)
        plt.imshow(frame)
        plt.show()
        # sends results
        if time.time() - init >= sample_time:
            thingspeakHttp = BASE_URL + "&field1={}".format(result1)
            print(thingspeakHttp)
            conn = urlopen(thingspeakHttp)
            print("sending result")
            init = time.time()
    camera.release()
    cv2.destroyAllWindows()

def main():
    record()

if __name__ == '__main__':
    main()
Crowd dynamics can be affected by several things, such as the passage of time, the layout of the venue, the amount of information provided to visitors, and the overall enthusiasm of the gathering. Managers of large crowds need to be flexible and responsive in case of sudden changes in the environment that affect the situation's dynamics in real-time. Trampling events, mob crushes, and acts of violence can break out without proper crowd management.
The complexity and uncertainty of large-scale events emphasize the importance of providing timely, relevant information to crowd managers. Occupancy control technology helps event planners anticipate how many people will show up to their event, so they can prepare appropriately by ensuring adequate security guards, exits, etc.
Using a Raspberry Pi, HOG-based person detection, and ThingSpeak, this article described a system for counting individuals and publishing how many people are present. The principles of HOG and the calculation of its features have also been covered. The testing outcomes demonstrate the viability of using this Raspberry Pi-based device as an essential people-counting station. In the following tutorial, we'll learn how to assemble an intelligent energy monitor based on the Internet of Things and a Raspberry Pi 4.
Hello peeps. Welcome to the next tutorial on deep learning. In the previous lecture you learned about neural networks, and it was interesting to compare their different types. Now we are talking about deep learning frameworks. In earlier sessions we introduced some important frameworks to show how the different entities connect, but at this level that is not enough, so we will tell you in detail about the frameworks that are in style because of their latest features. Before we start, have a look at the list of concepts that will be covered today:
Introduction to the frameworks of deep learning.
Why do we require frameworks in deep learning?
What are some important deep learning frameworks?
What is TensorFlow, and what is TensorBoard used for?
Why is Keras famous?
What is the relationship between python and PyTorch?
How can we choose the best framework?
Deep learning is a complex field of machine learning, and it is important to have command over different tools and tricks so that you can design, train, and understand several types of neural networks efficiently in the minimum amount of time. Frameworks exist for many programming languages; a framework is software that, by combining different tools, improves and simplifies the operation of the programming language.
The best thing about frameworks is that they allow you to train models without knowing or bothering about the algorithms running behind the scenes. Isn't it amazing to have a helping hand to build and train your model without any worries? Once you know more about the different frameworks, it will be clear how each handles specific kinds of tasks to make your training process easy and interesting.
In the beginning, when you program a deep learning process by hand, you will see some interesting results on your task. Yet when you move to complex tasks, or when you reach the intermediate level, you will realize that it is strenuous and time-consuming to perform even a simple task at a higher level. Moreover, repeating the same code over and over can become tiresome.
Usually, the need for a framework arises when you start working with advanced neural networks such as convolutional neural networks (CNNs), where the involvement of images and video makes the task difficult and time-consuming. Frameworks provide pre-defined types of networks and an easy way to access a great deal of information.
With the advancement of deep learning, many organizations are working to make it more user-friendly so that more people can use it for advanced technologies. One reason behind the popularity of deep learning is that a great number of deep learning frameworks are introduced every year. We have analyzed different platforms, researched different reports, and found some amazing frameworks, which our experts have been checking for a long time to provide you with the best options for your learning. Here is the list of the frameworks that we will discuss with you, along with the pros and cons of each.
Tensorflow
Keras
PyTorch
Theano
DL4J
Lasagne
Caffe
Chainer
We are not going to discuss all of them, because covering every framework would be confusing. Moreover, we believe in smart work, so we discuss only the most popular frameworks; this way you learn how to compare different parameters, and afterwards you will have a good sense of how to build, train, and test projects in different ways.
The first framework to be discussed here is TensorFlow, which is undoubtedly the most popular framework for deep learning because of its easy availability and great performance. The backbone of this platform is Google's Brain team, which presented it for deep learning and provided easy access to almost all types of users. It supports Python and some other programming languages, and it works with dataflow graphs. This makes it more useful because, when dealing with different neural networks, it is extremely helpful to understand the progress and efficiency of your model.
Another important point about TensorFlow is that it creates models that are easy to build and robust in deployment.
A plus point of this framework is its companion package, TensorBoard. This fabulous package has several advantages; some of them are listed below, and a short usage sketch follows the list:
Its basic job is to provide data visualization to the user, which is a great convenience; unfortunately, people are less aware of this, although it is a useful tool.
Another advantage of TensorBoard is that it makes sharing data with stakeholders easy and comfortable because of its fantastic data display.
You can use different packages with the help of TensorBoard.
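As one hedged illustration of how TensorBoard is typically wired in (here via the Keras callback API; the model, layer sizes, and training data are placeholders of our choosing):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir='logs')  # writes event files TensorBoard can display
# model.fit(x_train, y_train, epochs=5, callbacks=[tensorboard_cb])  # x_train/y_train are placeholders

After training, running `tensorboard --logdir logs` serves the dashboards in a browser.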
You can get other basic information about TensorFlow by paying attention to the following table:
| TensorFlow | |
|---|---|
| Release Dates | November 9, 2015, and January 21, 2021 |
| Programming Languages | Python, C++, CUDA |
| Category | Machine learning library |
| Platforms | JavaScript, Linux, Windows, Android, macOS |
| License | Apache License 2.0 |
| Website's Link | |
Next on the list is another famous and useful library for deep learning that most of you may know about. Keras is one of the favourite frameworks of deep learning developers because of its demand and open-source contributors; by one estimate, 35,000+ users are making this platform more and more popular.
Keras is written in the Python programming language and supports high-level neural networks. Keep in mind that Keras is an API, and it runs on top of highly popular libraries such as TensorFlow and Theano; you will see this in action in our coming lectures. Because of its user-friendly features, Keras is used by a large number of startups, and it is a great tool for researchers and students.
The most prominent feature of Keras is its user-friendly nature. The developers seem to have designed this framework for all types of users, professionals and learners alike. If users encounter an error or issue, they receive transparent and actionable feedback.
For me, modularity is a useful feature because it makes tasks easier and faster, and errors are easily detectable, which is a big relief. Modularity is shown as a graphical representation or sequence of information so that the user can understand it well.
Here's some good news for researchers and students: Keras is one of the best options for research because it lets you build your own modules and test them as you choose. Adding modules to your project is super easy in Keras, and you can do advanced research without any issues.
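To make the modularity concrete, here is a minimal sketch using the Keras functional API; the input size and layer widths are made-up values:

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(10,))                   # 10 input features (made up)
x = layers.Dense(32, activation='relu')(inputs)     # a reusable hidden module
outputs = layers.Dense(1, activation='sigmoid')(x)  # output module
model = keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()  # prints the module-by-module structure of the network

Each layer behaves as a swappable module: you can define your own layer class and drop it into the same pipeline without touching the rest.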
| Keras | |
|---|---|
| Release Dates | March 27, 2015, and June 17, 2020 |
| Programming Languages | N/A |
| Category | Almost all types of neural networks |
| Platforms | Cross-platform |
| License | Massachusetts Institute of Technology (MIT) |
| Website's Link | |
Our next topic of discussion is PyTorch, another open-source library for deep learning, used to build complex neural networks in an easy way. The thing that attracted me to this library is the platform that introduced it: it was developed under the umbrella of Facebook's AI Research lab. I can guess how powerful it is, because every time I open my Facebook app I find the content I've chosen and wished for. People have been using it for deep learning, computer vision, and related purposes since 2016, as it is free and open source for AI and related fields. By using PyTorch with other powerful libraries such as NumPy, you can build, train, and test complex neural networks. PyTorch is popular because of its easy accessibility, and the versatility of the languages and libraries that work with it is another reason for its success.
A feature that makes it easy to use is its hybrid front end, which makes it faster and more flexible. The user-friendly nature of this library makes it a good choice even for people who are not technical professionals.
With the help of its torch.distributed backend, you can get optimal performance and keep an eye on the training and operation of the network you are using. It has a powerful architecture, and at an advanced level you can use it for complex neural networks.
As you can guess, PyTorch runs on Python, one of the most popular and trending programming languages, and the plus point is that many libraries can be used alongside it when working on neural networks.
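As a minimal sketch of the PyTorch style (the network, its sizes, and the random batch are all assumptions made for illustration):

import torch
import torch.nn as nn

class TinyNet(nn.Module):  # a hypothetical two-layer network
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)   # 10 inputs -> 32 hidden units
        self.fc2 = nn.Linear(32, 1)    # 32 hidden units -> 1 output

    def forward(self, x):
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(x))))

net = TinyNet()
x = torch.randn(4, 10)   # a batch of 4 random samples
print(net(x).shape)      # torch.Size([4, 1])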
| PyTorch | |
|---|---|
| Release Dates | September 2016, and December 10, 2020 |
| Category | Machine learning library, deep learning library |
| Platforms | Cross-platform |
| License | Berkeley Software Distribution (BSD) |
| Website's Link | |
Until now, we have been talking about the frameworks, and the basic purpose of discussing their features was to show the differences between them. A beginner may believe that all frameworks are the same, but this is incorrect: each framework has its own specialities, and the difficulty of using them varies. So, if you want to work effectively in your field, you must first learn how to choose the best framework for your task. Keep in mind, these are not the only points you need to know; all the parameters change according to the complexity of your project.
Not all projects are the same. You do not have to use the same framework every time. You must know more than one framework and choose one according to your needs. For example, for simple tasks, there is no need to use a complex framework or a higher-level neural network. There is versatility in the projects in deep learning, and you have to understand the needs of your project every time before choosing your required framework. As a result, before you begin, you should ask yourself the following questions about the project:
Are you using a modern deep learning framework, or are you interested in classic ML algorithms?
What is your preferred programming language for the AI modules?
For scaling, what kind of hardware and software do you have available?
Once you know the different features of the frameworks, you may get the answers to all the questions given above.
Machine learning is a vast field, and with the advancement of different techniques there is always a need to compare parameters. Different algorithms follow different parameters, and you must know them all when choosing your framework. Moreover, you must decide whether you will go with classic built-in machine learning algorithms or create your own.
Hence, we learned a lot about deep learning frameworks today. It was an interesting lecture in which we saw a detailed introduction to frameworks and compared TensorFlow, PyTorch, and Keras by discussing several of their features and requirements. We will see all this discussion in action in the coming lectures. The purpose of this session was to clarify how frameworks work and vary, and in this way you now have an idea of how deep learning is useful in different ways. Researchers are working in deep learning, and that is one of the basic reasons behind the development of different frameworks.
Hello Learners! Welcome to the next lecture on deep learning. We have read the detailed introduction to deep learning and are moving forward with the introduction of the neural network. I am excited to tell you about the neural network because of the interesting and fantastic applications of neural networks in real life. Here are the topics of today that will be covered in this lecture:
What do we mean by the neural network?
How can we know about the structure of the neural network?
What are the basic types of neural networks?
What are some applications of these networks?
Give an example of a case where we are implementing neural networks.
Artificial intelligence has numerous features that make it special and magical in different ways, and we will be exploring many of them in different ways in this course. So, first of all, let us start with the introduction.
Have you ever noticed that your favorite videos are shown to you on Facebook or other social media platforms? Or that an advertisement for the product you are searching for pops up when you are using phone applications? All of this is because of the artificial intelligence running in the backend of the app, as we have discussed many times before.
To understand the neural network well, let us discuss the inspiration and the model that resulted in its formation: the neural network of the human brain. We are the best creation because of our complex brain, which calculates, estimates, and predicts the results of repeating processes better than anything else. The same idea underlies the neural networks of computer systems. We have discussed the basic structure of the neural network many times, but now it's time to study it properly.
I always wonder how answering software and apps such as Siri reply to us accurately and without any delay. The answer lies in the workings and architecture of the network behind this beautiful voice. I do not want to start a biology class here, but for proper understanding I have to describe what happens when we hear a voice and understand it through the brain.
When we hear a sound in the surroundings, it is first caught by the ear, and this raw audio acts as input to the auditory nerve. The nerves then pass this signal to the next layers, which in turn pass the signals further on.
Each layer makes the result more refined and accurate. Finally, the signal reaches the brain, where the decision to respond is made. The same process is used in a neural network. How this works will become clear once you know about the seven types of neural networks:
Feed Forward Neural Network
Recurrent Neural Network
Radial Basis Function (RBF) Neural Network
Convolution Neural Network
Modular Neural Network
Kohonen Self-organizing Neural Network
Multi-Layer Perception
Let me start with the most basic type of neural network so that you can understand it gradually. The workings of this network are related to its name: the information moves in one direction only, and the process ends at the output. In this type, there is no way to move backwards and train a previous layer. This type of network is found in face recognition and related projects; people interested in applications such as speech recognition prefer it to avoid complexity.
This network includes the radial function, whose role will become clearer once you know the structure. Usually, this network has two layers:
Hidden Layer
Output Layer
The radial function is present in the hidden layer. The function proves helpful for reasonable interpolation while the data is being fitted. The layer works by measuring the distance of each data point from the center of the network. For the best implementation, this network checks all data points and groups similar ones together. In this way, this type of network can be used in systems such as power restoration.
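The radial function itself is often a Gaussian of the distance to a centre; here is a small illustrative sketch (the centre, width, and sample point are arbitrary choices, not part of the original text):

import numpy as np

def gaussian_rbf(x, center, sigma=1.0):
    # The activation depends only on the distance between the input and the neuron's centre.
    return np.exp(-np.linalg.norm(x - center) ** 2 / (2 * sigma ** 2))

print(gaussian_rbf(np.array([1.0, 2.0]), np.array([0.0, 0.0])))  # inputs nearer the centre give values closer to 1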
As you can guess from its name, this network has the ability to recur. It is my favourite type of neural network because it learns from previous steps, and that data is used to predict the output precisely. It is one of the main network types, and its working has been discussed many times in this tutorial.
Contrary to the feed-forward network discussed above, here information can recur, that is, be fed back to a previous layer. Here are some important points about this network:
The first layer is a simple feed-forward layer; in other words, data there cannot move back to a previous layer.
In the first phase, each layer transmits the data to the next layer unidirectionally.
If, during the transmission of data, a layer predicts inaccurate results, it is the responsibility of the network to learn by repeatedly saving and revisiting the data.
The main building block of this network is the process of saving the data into the memory and then working automatically to get accurate results.
It is widely used in text-to-speech conversion.
Now we come to an important type of neural network that has worldwide scope; engineers are working day and night in this field because of its interesting and beneficial applications. Before going deep into the definition of this network, I must clarify what exactly a convolution is: it is the process of filtering an input in a way that can be used to enable activations. The filtering is repeated, and therefore it yields strong results consistently.

Convolutional networks are usually used in image processing, natural language processing, and similar tasks, because they break an image into parts and then represent the results according to the user's needs. It is one of the classical techniques used when people work on images, videos, or related projects. For example, if you want to find the edges or details of an image in order to replace or edit them, this technique is helpful, because through it you can play with the image and its components down to the pixel level. If these things seem difficult or complex at the moment, do not worry; everything will become clear with the passage of time.
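Convolution as filtering can be seen directly with OpenCV: sliding a small kernel over the image highlights edges, which is exactly the kind of feature a CNN's first layers learn. The kernel and file name below are illustrative choices:

import cv2
import numpy as np

img = cv2.imread('cat.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical input image
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=np.float32)  # an edge-highlighting filter
edges = cv2.filter2D(img, -1, kernel)  # slide the kernel over every pixel of the image
cv2.imwrite('edges.jpg', edges)        # bright pixels mark edges and fine details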
Modularity is considered the basic building block of this type of neural network. It is the process of dividing a complex task into different modules or parts and solving them individually, so that in the end the results can be combined and we get an accurate answer. It is a faster way of working. You can understand this by considering the human brain, which is divided into left and right hemispheres that can work simultaneously; different tasks are assigned to each part, and each works best at its own duties.
In a Kohonen self-organizing network, random input vectors are fed into a discrete map of neurons (vectors are also called dimensions or planes). Its applications include recognizing patterns in data, such as in medical analysis.
Here I am discussing the type of network that has more than one hidden layer. It is a little complex, but if you followed the cases discussed before, you will easily understand this one. The purpose of this network is to handle data that is not linearly separable. Several functions can be used while working with this network; the interesting thing about it is that it uses a non-linear activation function.
Here n is the index of the last layer, which can range from 0 to any number according to the complexity of the network. A network with more layers can usually model more complex tasks.
At the moment, I want to discuss an example of this network because it works slightly differently, and I hope that with its help you will grasp the concept I am trying to teach you. Consider the case where we want to talk to the personal assistant on our device: in practice it is a simple task of a few seconds, yet at the backend a long procedure is followed so that you get the required results. Suppose a simple sentence is asked of the personal assistant.
The first step of the network is to divide the whole sentence into words so that these can be scanned easily.
We all know that each word has a specific sound pattern, so each word is sampled into discrete sound waves. Let me revise: discrete sound signals are the ones that consist of discontinuous points. We get the results in the following form.
Now the system further divides each single word into single letters. As you can see in the image given above, each letter has a specific amplitude. In this way, the values of the different letters are obtained, and this data is then stored in an array.
In the next step, all the data obtained is fed into the input layer of the network, and here the working of the recurrent neural network starts. Passing through the input layer, a weight is assigned to each interconnection between the input layer and the hidden layer of the network. At this point we need a transfer function; in its standard form (reconstructing the formula the original showed as an image), it is the weighted sum of the inputs plus a bias:

z = x1*w1 + x2*w2 + ... + xn*wn + b
In the hidden layers, weights are assigned in the same way. This process continues for all layers: the output of the first layer is used as the input of the second, and so on until the last layer. But keep in mind, this process applies to the hidden layers only.
When doing speech recognition with a neural network, we use several specific terms, some of which are:
Acoustic model
Lexicon
By the same token, there are other related terms. I am not going to explain them right now because it is unnecessary at the moment.
In the end, we reach the conclusion that neural networks are amazing to learn and interesting to understand while working with deep learning. You will get all the necessary information about these networks in this course. We started with the basic introduction of the neural network and saw its structure in detail. Moreover, we covered the main types of neural networks so that you can compare why we use them and judge which type will be best for your learning and training. We suggest feed-forward neural networks for basic use, and you will see the reason behind this suggestion in our coming lecture. Till then, look into other networks, and if you find any, discuss them with us. In the next lecture, you will learn more about deep learning and neural networks, so stay tuned with us.
Hello students, welcome to the second tutorial on deep learning. In the first one, we learned the simplest but most basic introduction to deep learning, to build a solid base for what we are actually going to do. In the present lecture, we take this to a more advanced level, extending that introduction so that we understand what we want to learn and how we will implement the concepts easily. Here is a quick glance at the concepts that will be cleared up today:
What do we mean by Deep learning?
What is the structure of calculation in neural networks?
How can you examine the Neural Networks?
What are some platforms of deep learning?
Why did we choose TensorFlow?
How can you work with TensorFlow?
As we have said earlier, artificial intelligence is the field that works to transfer the tasks of human beings to the computer; that is, the computer acts like a human. Computers are expected to think. It is a revolutionary branch of science that deals with feeding human-like intelligence into the computer for the welfare of mankind, and with the passage of time it is proving itself successful in the real world. With the advancement and growing complexity of artificial intelligence, the field has divided into different branches: AI has a branch named machine learning, which is itself subdivided into deep learning. The main focus of this course is deep learning, so we describe it in detail.
All this discussion was to give you the basics and the important introduction to deep learning. If it is still not clear, do not worry: as you gather information throughout the series and start practising, things will become clear.
We have discussed the neural network before, but only in relation to the concept of weights. In the present tutorial you will see another concept about the neural network, and proper work on these networks will start in the coming sessions.
The neural network is much like the multiple layers of the human brain. It contains the input layer, where data is fed in different ways according to the requirements of the network. The multiple hidden layers are responsible for the training process, repeated in such a way that every later layer is more mature and accurate than the one before it; the last one holds the most accurate data, which is then fed into the output layer, where we get the results. These processes occur in the following sequence when a neural network operates:
In the first step, the product is calculated by keeping the weight of each channel and the value of the input in mind.
The sum of all the products obtained is then calculated and this is called the weighted sum of the layers.
In the next step, the bias is added to the resultant sum according to the estimation of the neural network.
In the final step, the sum is then subjected to the particular function that is named the activation function.
Having listed the steps, we know they may not be fully clear in your mind yet, so we will discuss an example. Keeping all the steps in mind, let us now work through a practical application of a neural network.
Consider an example in which a 28x28-pixel image is examined for its shape. Each pixel is an input to the neurons of the first layer.
The first step is then calculated by using the formula given below:
x1*w1 + x2*w2 + b1
We have taken a simple two-input example, but the same process, multiplying each layer's values by the corresponding weights, occurs at every layer until you reach the last one. The next step here is calculated as:
Φ(x1* w1 + x2*w2 + b1)
Here the Φ sign indicates the activation function mentioned in the steps above. These steps are performed again and again, according to the complexity of the task, until all the inner layers are calculated and the results reach the output layer. An interesting thing here is the presence of a single neuron in the last layer, which contains the result of the calculation to be shown as the output. The details of how a neural network works will be discussed in the next tutorials; for now, just understand the outputs.
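To make the arithmetic concrete, here is the same two-input calculation in Python, with a sigmoid standing in for Φ (all the numbers are made up):

import math

x1, x2 = 0.5, 0.8    # made-up input values
w1, w2 = 0.4, -0.2   # made-up channel weights
b1 = 0.1             # made-up bias

z = x1 * w1 + x2 * w2 + b1       # weighted sum plus bias
output = 1 / (1 + math.exp(-z))  # Φ(z) with a sigmoid activation
print(z, output)                 # 0.14 and roughly 0.535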
It seems that you are now ready to move forward. Until now you were learning what deep learning is and why it is useful; now you are going to learn how you can use deep learning for different tasks. If you are a programmer, you know that there are platforms that provide for the compilation and running of programming languages, and these are specific to a limited set of languages.
For deep learning, certain platforms are used worldwide, and the most important ones are discussed here:
TensorFlow is one of the most powerful platforms, specially designed for machine learning and deep learning, and it is a free, open-source software library. Although it is a multi-purpose platform, it has special features for training machine learning and deep learning projects. You can get an idea of its popularity from the fact that it was presented by the Google Brain team, and it offers excellent functionality.
DL4J is short for Deeplearning4j; as you can guess, it is specialized for deep learning and is written in Java for the Java Virtual Machine. Many people prefer this library because it is lightweight and designed specifically for deep learning.
If you are wondering whether we are talking about a device, we are not: Torch is an open-source library for deep learning, and it provides algorithms for deep learning projects.
Keras is an API that runs on top of the TensorFlow deep learning platform. It is used for best practice and a better experience by deep learning experts. The purpose of using this API is to get clean, easy, and more reusable code in less time. You will see this with the help of examples in the next sessions.
As you can guess, we have chosen TensorFlow for our tutorials and lectures for some important reasons that we'll share with you. For these classes, I have tested a lot of software specially designed for deep learning, as mentioned above. I found TensorFlow the most suitable for our task, so I want to tell you the core reasons behind this choice.
You will see that training and the other phases depend on different models, and this is super easy to handle with TensorFlow. The main reason is that it provides multi-level models, so whatever suits the complexity and workings of your project will always be available. As mentioned earlier, the Keras API is used with TensorFlow, and the high-level performance of both results in marvellous projects.
In machine learning and related branches such as deep learning, production is made easiest by the fantastic performance of TensorFlow. It always provides a clear path towards production and results. It also gives us the freedom to use different languages and platforms, which attracts a large audience.
What is more important in research than good experimentation? TensorFlow offers multiple options for experimentation and research, so you can test your project in different ways and get the best results through a single piece of software. The presence of multiple APIs and support for several languages make it great for experimentation.
Another advantage of choosing it for this tutorial is that it supports powerful add-on libraries and interesting models; therefore it will be easy for us to experiment more and explain the results in different ways to reach all types of students.
These are some highlighted points that attracted us to this software, but overall it has a lot more, and you will understand these points when you see them in action in this series. We will work entirely in TensorFlow and will discuss every step in detail without skipping any. The practical performance of each step will help you move forward with more interest, and to explain each concept we will use different examples. Yet I know that too much explanation makes a discussion confusing, so there will be a balance.
As described before, TensorFlow was introduced by the Google Brain team, working in close collaboration with Google's machine learning research organization.
TensorFlow is a software library that works in collaboration with other libraries for the best implementation of deep learning projects; you will see its work in detail as we move forward in this series. Several libraries are important to attach to TensorFlow when preparing it for deep learning work. Some of them are listed below:
Python Package Index (pip)
Django
SciPy
NumPy
Following are the typical steps used to work with TensorFlow; a compressed sketch follows the list. Keep in mind that these steps vary according to the needs of the moment and the type of project.
Import the libraries in TensorFlow.
Assign paths of data sets. It is important to provide the path to column variables as well.
Create the test and train data and for this, use the Pandas library.
In the next step, the shape of the test and train data is printed.
For the training data sheet, the data type is printed for each column.
Set the label column values of the data.
Count the total number of unique values related to the datasheets.
Add features for the different types of variables.
Build the relationships of the features with buckets.
Add features for the proper definition of the features.
Train and evaluate the model.
Predict the model and set the output to the test set.
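Here is a compressed sketch of those steps using pandas and Keras (the file paths, the 'label' column name, and the tiny model are all assumptions made for illustration):

import pandas as pd
import tensorflow as tf

train = pd.read_csv('train.csv')  # hypothetical dataset paths
test = pd.read_csv('test.csv')
print(train.shape, test.shape)    # shapes of the train and test data
print(train.dtypes)               # data type of each column
print(train['label'].nunique())   # count of unique label values

x_train, y_train = train.drop('label', axis=1).values, train['label'].values
x_test, y_test = test.drop('label', axis=1).values, test['label'].values

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)   # train the model
model.evaluate(x_test, y_test)          # evaluate on the test set
predictions = model.predict(x_test)     # predict outputs for the test set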
Do not worry if these steps are new or confusing at the moment; you will see their details soon. Moreover, some of these steps may differ between people, because coding is a vast area with multiple ways of working in different environments. So, today we learned several concepts in this single lecture. We revised and added information to the introduction of deep learning, discussed neural networks and saw them working, and covered the platforms of deep learning, out of which we chose TensorFlow; the reasons for this choice were explained with the help of different points. In the end, we saw a brief procedure to train a model and make predictions, and you will see all these concepts in action in the coming lectures, so stay tuned with us.
Hello friends, I hope you all are having fun. Today, we are bringing you one of the most advanced and trending courses named "Deep Learning". Today, I am sharing the first tutorial, so we will discuss the basic Introduction to Deep Learning, and in my upcoming lectures, we will explore complex concepts related to it. Deep Learning has an extensive range of applications and trends and is normally used in advanced research. So, no matter which field you belong to, you can easily understand all the details with simple reading and practicing. So without any further delay, let me show you the topics that we are going to cover today:
So, let's get started:
Deep Learning is considered a branch of Machine Learning which itself comes under Artificial Intelligence. So, let's have a look at these two cornerstone concepts in the computing world:
Artificial intelligence, or AI, is the science and engineering behind the creation of intelligent machines, particularly intelligent computer programs. It enables computers to understand human intelligence and behave like it. AI has broader expertise and does not have to limit itself to biologically observable methods, as deep learning does.
Deep learning is a field that combines computing power and robust data sets to solve problems. It is also important here to mention the definition of machine learning:
Machine learning is a branch of artificial intelligence: it learns from experience and the data fed into it, and works intelligently on its own without instructions from a human being. For instance, the news feed on Facebook is directed by machine learning over user data, so content of the user's choice appears every time they scroll. As you put more and more data into the machine, it learns to provide more intelligent results.
Deep learning uses neural network techniques to analyze data. The best way to describe a neural network is to relate it to the cells of the brain: a neural network consists of layers of nodes, much like the network in our brain, and all these nodes are connected to each other either directly or indirectly. A neural network has multiple layers to refine the output, and it gets deeper as the number of layers increases.
In the human brain, each neuron can receive hundreds or thousands of signals from other neurons and selects signals based on priority. Similarly, in deep learning networks, signals travel from node to node according to the weights assigned to them; neurons with heavy weights have more effect on the adjacent layer. This flows through all the layers, and the final layer compiles the weighted results and produces the output.
The human brain learns from its experience i.e. as you get old, you get wiser. Similarly, deep learning has the ability to learn from its mistakes and keeps on evolving.
The process of network formation and its working is so complex that it requires powerful machines and computers to perform complex mathematical operations and calculations. Even if you have a powerful tool and computer, it takes weeks to train the neurons.
Another thing that is important to mention here is that a neural network works on numbers only. When data is processed, the network encodes its answers as series of numbers and performs highly complex calculations. Face recognition is the best example in this regard: the machine examines the edges and lines of the face to be recognized, and it also saves the information of the more significant facial parts.
Understanding the layers in deep learning is important to get an idea of the complex structure of deep learning neural networks. The neurons in a deep learning architecture are not scattered but arranged in an organized format in different layers. These layers are broadly classified into three groups:
The working of each neural network in deep learning depends on the arrangement and structure of these layers. Here is a general overview of each layer:
This is the first layer of the neural network, and it takes in information in the form of raw data. The data may be text, numeric values, images, or other formats, and may come from large datasets. This layer takes the data and prepares it for the hidden layers. Every neural network has an input layer, and some architectures use more than one.
The main processing of data occurs in the hidden layers of the network. These layers are crucial because they do the processing needed to learn the complex relationship between the input features and the required output.
A hidden layer can contain a great number of neurons, and the number of hidden layers varies according to the complexity of the task and the type of neural network. These layers perform operations such as feature extraction and representation learning.
These are the final layers, responsible for producing the network's predictions. A neural network may have one or more output layers, and the activation function used here depends on the type of problem being solved. One such example is the softmax activation function, which converts the raw outputs into a probability distribution over the different classes.
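As a quick illustration of how softmax turns the output layer's raw scores into probabilities, here is a short sketch (the scores are arbitrary example values):

import numpy as np

def softmax(scores):
    exps = np.exp(scores - np.max(scores))   # subtract the max for numerical stability
    return exps / exps.sum()

raw_output = np.array([2.0, 1.0, 0.1])       # one raw score per class
print(softmax(raw_output))                   # probabilities that sum to 1.0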
To explain this well, the example of object or person recognition is usually given to students. Let's say we want to detect a cat in a picture. We know that different breeds of cats do not look alike: some are fluffy, some are short, and some are thin. By the same token, different angles of the same cat will not look the same, and the computer may be confused by these cases. Therefore, the training process also takes the amount of light and the shadow of the object into account.
In order to train a deep-learning machine to recognize a cat, the main procedures typically include collecting a large set of labeled cat (and non-cat) images, letting the network extract features from them, training it on these examples, and then validating it on pictures it has never seen.
In the modern computing world, deep learning has a wide range of applications in almost every field. We have already mentioned a few examples, such as the Facebook news feed and driverless cars; let's have a look at a few other services that use deep learning techniques:
Digital assistants rely on deep learning for tasks such as voice recognition, facial recognition, text-to-speech and speech-to-text conversion, language translation, and plagiarism checking. Grammarly, Copyscape, and Ahrefs are a few real-life examples of services built on these techniques.
PayPal uses deep learning to prevent fraud and illegal transactions. This is one of the most common examples from banking; beyond it, many other security and privacy applications are connected to deep learning.
Object recognition applications such as CamFind let the user take pictures of objects; with the help of mobile vision technology, these apps can work out what kind of object has been captured.
Another major application of deep learning is the self-driving car, which will not only remove the need for a driver but can also help avoid traffic jams and road accidents. It is an important topic, and many companies are working day and night on deep learning to perfect the results.
As we said earlier, deep learning is the process of training the computer to learn the way humans do; therefore, people are working hard to train machines to examine trends and predict future outcomes, such as stock market movements and the weather. Isn't it helpful that your computer or digital assistant can tell you about stock market rates and suggest the best option for your investments?
In the medical field, where doctors and experts work hard to save lives, there is no need to explain the importance of technologies like deep learning that can track and predict changes in the body's vital values and suggest the best remedy for the problem being observed.
Once you have read about the applications and the working process of deep learning, you may be wondering: if it is the future, why choose deep learning as your career now? Let me tell you, if you excel in deep learning, the future is yours. Many careers in deep learning are not yet well defined, but in the coming few years you are going to see tremendous growth in deep learning and related subjects, and if you are an expert in it, you will be in demand all the time, because the field is coming with endless opportunities. Machine learning engineers are already in high demand, because neither data scientists nor software engineers alone possess the necessary combination of skills.
The role of machine learning engineer has evolved to fill that void. According to experts, deep learning developers will be among the most highly paid professionals in the future. In the coming years, almost all fields will likely involve deep learning in their systems to work more easily and efficiently with less human intervention. In simple words, with the help of neural networks, we are handing over more and more work from human beings to machines, and these machines can be remarkably accurate.
So, in this way, we have introduced you to this amazing and interesting sub-branch of machine learning that is connected to artificial intelligence. We have seen how deep learning works and, to understand it well, we have looked at examples of deep learning processes. Moreover, we discussed trends and techniques related to deep learning and saw that many popular apps and websites use it to make their platforms more user-friendly and exciting. In the end, we looked at careers and professions in deep learning for the motivation of students. I know at this point you will have many questions in your mind, but do not worry, because I am going to explain everything without skipping a single concept, and we will learn new things together along the way. So stay with us for more interesting lectures.
During the era of Covid-19, social distancing has proven to be an efficient method of reducing the spread of contagious viruses. It is recommended that people avoid close contact as much as possible because of the potential for disease transmission. Many public spaces, including workplaces, banks, bus terminals, train stations, etc., struggle with the issue of keeping a safe distance.
The previous guide covered the steps necessary to connect the PCF8591 ADC/DAC Analog-to-Digital Converter Module to a Raspberry Pi 4. On our Terminal, we saw the results displayed as integers, and we dug deeper into exactly how the ADC produces its output signals. In this article, we will use OpenCV and a Raspberry Pi to create a system that can detect when people are failing to keep a safe distance from one another. We will employ the weights of the YOLO version 3 Object Recognition Algorithm to implement the Deep Neural Networks component. Compared to other controllers, the Raspberry Pi always comes out as the best option for image processing tasks. Previous efforts using the Raspberry Pi for advanced image processing included a face recognition application.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|-----|------------|-------------|-------------|
| 1 | Jumper Wires | Amazon | Buy Now |
| 2 | PCF8591 | Amazon | Buy Now |
| 3 | Raspberry Pi 4 | Amazon | Buy Now |
Raspberry Pi 4
Only a Raspberry Pi 4 with OpenCV installed is needed for this purpose. OpenCV handles the digital image processing, which is often used for people counting, facial identification, and detecting objects in images.
The YOLO (You Only Look Once) family of convolutional neural networks (CNNs) is invaluable for real-time object detection. YOLOv3, the version used here, is a fast and accurate object detection algorithm that can identify eighty distinct types of objects in both still and moving media. The algorithm runs a single neural network over the entire image, then divides it into regions and computes bounding boxes and probabilities for each. The base YOLO model processes images in real time at 45 frames per second. Compared to other detection approaches, such as SSD and R-CNN, the YOLO model offers an excellent trade-off between speed and accuracy.
In the past, computers relied on input devices like keyboards and mice; today, they can also analyze data from visual sources like photos and videos. Computer Vision is a computer's (or a machine's) capacity to read and interpret graphic data. Computing vision has advanced to the point that it can now evaluate the nature of people and objects and even read their emotions. This is feasible because of deep learning and artificial intelligence, which allow an algorithm to learn from examples like recognizing relevant features in an unlabeled image. The technology has matured to the point where it can be employed in critical infrastructure protection, hotel management, and online banking payment portals.
OpenCV is the most widely used computer vision library. It is a free and open-source Intel cross-platform library that may be used with any OS, including Windows, Mac OS X, and Linux. This will make it possible for OpenCV to function on a mobile device like a Pi, which will have a wide range of applications. Let's dive in, then.
OpenCV and its prerequisites won't run without updating the Raspberry Pi to the latest version. To install the most recent software for your Raspberry Pi, type in the following commands:
sudo apt-get update
Then, use the scripts below to set up the prerequisites on your RPi so you can install OpenCV.
sudo apt-get install libhdf5-dev -y
sudo apt-get install libhdf5-serial-dev -y
sudo apt-get install libatlas-base-dev -y
sudo apt-get install libjasper-dev -y
sudo apt-get install libqtgui4 -y
sudo apt-get install libqt4-test -y
Finally, run the following lines to install OpenCV on your Raspberry Pi.
pip3 install opencv-contrib-python==4.1.0.25
OpenCV's installation on a Raspberry Pi can be nerve-wracking because it takes a long time, and there's a good possibility you'll make a mistake. Given my own experiences with this, I've tried to make this lesson as straightforward and helpful as possible so that you won't have to go through the same things I did. Even though OpenCV 4.0.1 had been out for three months when I started writing this lesson, I decided to use the older version (4.0.0) because of some issues with compiling the newer version.
This approach involves retrieving OpenCV's source package and compiling it on a Raspberry Pi with the help of CMake. Installing OpenCV in a virtual environment allows users to run many versions of Python and OpenCV on the same computer. But I'm not going to do that since I'd rather keep this essay brief and because I don't anticipate that it will be required any time soon.
Step 1: Before we get started, let's ensure that our system is up to date by executing the command below:
sudo apt-get update && sudo apt-get upgrade
If there are updated packages, they should be downloaded and installed automatically. There is a 15-20 minute wait time for the process to complete.
Step 2: We must now update the apt-get package to download CMake.
sudo apt-get update
Step 3: When we've finished updating apt-get, we can use the following command to retrieve the CMake package and put it in place on our machine.
sudo apt-get install build-essential cmake unzip pkg-config
When installing CMake, your screen should look similar to the one below.
Step 4: Then, use the following command to set up Python 3's development headers:
sudo apt-get install python3-dev
Since it was pre-installed on mine, the screen looks like this.
Step 5: The following action would be to obtain the OpenCV archive from GitHub. Here's the command you may use to replicate the effect:
wget -O opencv.zip https://github.com/opencv/opencv/archive/4.0.0.zip
You can see that we are collecting version 4.0.0 right now.
Step 6: The OpenCV contrib contains various python pre-built packages that will make our development efforts more efficient. Therefore, let's also download that with the command that is identical to the one shown below.
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.0.0.zip
The "OpenCV-4.0.0" and "OpenCV-contrib-4.0.0" zip files should now be in your home directory. If you need to know for sure, you may always go ahead and check it out.
Step 7: Let's extract OpenCV-4.0.0 from its .zip archive with the following command.
unzip opencv.zip
Step 8: Extraction of OpenCV contrib-4.0.0 via the command line is identical.
unzip opencv_contrib.zip
Step 9: OpenCV cannot function without NumPy. Follow the command below to begin the installation.
pip install numpy
Step 10: In our new setup, the home directory would now contain two folders: OpenCV-4.0.0 and OpenCV contrib-4.0.0. Next, we'll make a new directory inside OpenCV-4.0.0 named "build" to perform the actual compilation of the Opencv library. The steps needed to achieve the same result are detailed below.
cd ~/opencv-4.0.0
mkdir build
cd build
Step 11: OpenCV's CMake process must now be initiated. In this section, we specify the requirements for compiling OpenCV. Make sure you are inside the "~/opencv-4.0.0/build" directory, then paste the lines below into the Terminal.
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-4.0.0/modules \
-D ENABLE_NEON=ON \
-D ENABLE_VFPV3=ON \
-D BUILD_TESTS=OFF \
-D WITH_TBB=OFF \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D BUILD_EXAMPLES=OFF ..
Hopefully, the configuration will proceed without a hitch, and you'll see "Configuring done" and "Generating done" in the output.
If you encounter an issue during this procedure, check to see if the correct path was entered and if the "OpenCV-4.0.0" and "OpenCV contrib-4.0.0" directories exist in the root directory path.
Step 12: This is the most time-consuming step. From inside the "~/opencv-4.0.0/build" directory, you can compile OpenCV using the following command.
make -j4
Using this method, you may initiate the OpenCV compilation process and view the status in percentage terms as it unfolds. After three to four hours, you will see a completed build screen.
The command "make -j4" utilizes all four processor cores when compiling OpenCV. Some people may feel impatient waiting for a 99% success rate, but eventually, it will be worth it.
After waiting an hour, I had to cancel the process and rebuild it with "make -j1," which did the trick. It is advisable first to use make j4 since that will utilize all four of pi's cores, and then use make j1, as make j4 will complete most of the compilation.
Step 13: If you are at this point, congratulations! You have made it through the entire procedure. The final action is to run the following command to install libopencv.
sudo apt-get install libopencv-dev python-opencv
Step 14: Finally, a little python script can be run to verify that the library was successfully installed. Try "import cv2" in Python, as demonstrated below. You shouldn't get any error message when you do this.
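A one-line check from the terminal serves the same purpose; it should print the installed version without any error message:
python3 -c "import cv2; print(cv2.__version__)"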
Let's get the necessary packages set up on the Raspberry Pi before we begin writing the code for the social distance detector.
imutils is designed to simplify standard image processing tasks in OpenCV, such as translating, rotating, resizing, and skeletonizing images, and displaying them with Matplotlib. To get imutils, type in the following command:
pip3 install imutils
The complete code may be found at the bottom of the page. In this section, we'll walk you through the most crucial parts of the code so you can understand it better. All the necessary libraries for this project should be imported at the beginning of the code.
import numpy as np
import cv2
import imutils
import os
import time
Distances between detected people in a video frame are determined with the Check() function. Points a and b represent the centers of two detections. The function computes a distance between these two positions; the vertical term is scaled by a calibration factor based on the average height of the two points in the frame, which compensates for how far the people are from the camera.
def Check(a, b):
dist = ((a[0] - b[0]) ** 2 + 550 / ((a[1] + b[1]) / 2) * (a[1] - b[1]) ** 2) ** 0.5
calibration = (a[1] + b[1]) / 2
if 0 < dist < 0.25 * calibration:
return True
else:
return False
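As a worked example with made-up coordinates: for centers a = (100, 200) and b = (140, 210), dist = (40² + (550/205) × 10²)^0.5 ≈ 43.2 and calibration = 205. Since 43.2 is less than 0.25 × 205 = 51.25, Check() returns True, flagging the pair as too close.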
The Setup function sets the locations of the YOLO weights, the configuration file, and the COCO names file. The os.path module handles common pathname operations, and os.path.join() intelligently combines two or more path components. The cv2.dnn.readNetFromDarknet() function loads the network with the saved weights, and once the weights have been loaded, the network's layer names can be extracted using getLayerNames().
def Setup(yolo):
global neural_net, ln, LABELS
weights = os.path.sep.join([yolo, "yolov3.weights"])
config = os.path.sep.join([yolo, "yolov3.cfg"])
labelsPath = os.path.sep.join([yolo, "coco.names"])
LABELS = open(labelsPath).read().strip().split("\n")
neural_net = cv2.dnn.readNetFromDarknet(config, weights)
ln = neural_net.getLayerNames()
ln = [ln[i[0] - 1] for i in neural_net.getUnconnectedOutLayers()]
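One caveat, based on OpenCV's documented API change rather than anything in this guide: in newer OpenCV releases (roughly 4.5.4 onward), getUnconnectedOutLayers() returns a flat array instead of an Nx1 array, so the i[0] indexing above fails. A version-tolerant variant of the last line looks like this:

out_idx = np.array(neural_net.getUnconnectedOutLayers()).flatten()
ln = [ln[int(i) - 1] for i in out_idx]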
In the image processing section, we extract a still image from the video and analyze it to find the distances between the people in it. The function's first line initializes the width and height of the video frame to None. In the following lines, we use the cv2.dnn.blobFromImage() method to prepare the frame for the network; the blob function adjusts the frame's mean, scale, and channel order.
(H, W) = (None, None)
frame = image.copy()
if W is None or H is None:
(H, W) = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
neural_net.setInput(blob)
starttime = time.time()
layerOutputs = neural_net.forward(ln)
YOLO's layer outputs are arrays of numbers from which we can work out which objects belong to which classes. To identify persons, we iterate over all layerOutputs and keep only the detections whose class label is "person". Each detection generates a bounding box whose output includes the X and Y coordinates of the detection's center as well as its width and height.
scores = detection[5:]
maxi_class = np.argmax(scores)
confidence = scores[maxi_class]
if LABELS[maxi_class] == "person":
if confidence > 0.5:
box = detection[0:4] * np.array([W, H, W, H])
(centerX, centerY, width, height) = box.astype("int")
x = int(centerX - (width / 2))
y = int(centerY - (height / 2))
outline.append([x, y, int(width), int(height)])
confidences.append(float(confidence))
Then, determine how far the center of the current box is from the centers of all the other boxes. If two centers are closer than the calibrated threshold, Check() returns True and both detections are marked as unsafe.
for i in range(len(center)):
for j in range(len(center)):
close = Check(center[i], center[j])
if close:
pairs.append([center[i], center[j]])
status[i] = True
status[j] = True
index = 0
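Note that the double loop above visits every pair twice and also compares each box with itself (which Check() rejects, since the distance would be 0 and the condition requires dist > 0). A slightly tidier drop-in version, using the same variable names, is:

for i in range(len(center)):
    for j in range(i + 1, len(center)):   # each pair is checked exactly once
        close = Check(center[i], center[j])
        if close:
            pairs.append([center[i], center[j]])
            status[i] = True
            status[j] = True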
In the following lines, we use the model's box dimensions to draw a rectangle around each individual and evaluate whether they are at a safe distance. If there is too little space between two boxes, the boxes are drawn in red; otherwise, they are green.
(x, y) = (outline[i][0], outline[i][1])
(w, h) = (outline[i][2], outline[i][3])
if status[index] == True:
cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 150), 2)
elif status[index] == False:
cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
Now we are inside the main loop, where we read each video frame and analyze it to determine how far apart the people are.
ret, frame = cap.read()
if not ret:
break
current_img = frame.copy()
current_img = imutils.resize(current_img, width=480)
video = current_img.shape
frameno += 1
if(frameno%2 == 0 or frameno == 1):
Setup(yolo)
ImageProcess(current_img)
Frame = processedImg
In the following lines, we use cv2.VideoWriter() to save the output video to the location specified by opname.
if create is None:
fourcc = cv2.VideoWriter_fourcc(*'XVID')
create = cv2.VideoWriter(opname, fourcc, 30, (Frame.shape[1], Frame.shape[0]), True)
create.write(Frame)
When you are satisfied with your code, launch a terminal on your Pi and go to the directory where you kept it. The following folder structure is recommended for storing the code, the YOLO files, and the demonstration video.
The YOLOv3 weights can be downloaded from:
https://pjreddie.com/media/files/yolov3.weights
and sample videos from:
https://www.pexels.com/search/videos/pedestrians/
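Based on the paths used in the code (the yolo variable points to "yolov3/" and the input file is "newVideo.mp4"), the project folder should look roughly like this:

project/
├── detector.py
├── newVideo.mp4
└── yolov3/
    ├── yolov3.weights
    ├── yolov3.cfg
    └── coco.names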
Finally, paste the Python scripts provided below into the same folder as the one displayed above. The following command must be run once you've entered the project directory:
python3 detector.py
I applied this code to a sample video I found on Pexels, and the results were interesting: the frame rate was terrible, and processing the clip took almost 11 minutes.
Changing cv2.VideoCapture(filename) to cv2.VideoCapture(0) lets you test the code on a live camera feed instead of a video file. This is how to use OpenCV on a Raspberry Pi to identify inadequate social distancing.
import numpy as np
import cv2
import imutils
import os
import time
def Check(a, b):
dist = ((a[0] - b[0]) ** 2 + 550 / ((a[1] + b[1]) / 2) * (a[1] - b[1]) ** 2) ** 0.5
calibration = (a[1] + b[1]) / 2
if 0 < dist < 0.25 * calibration:
return True
else:
return False
def Setup(yolo):
global net, ln, LABELS
weights = os.path.sep.join([yolo, "yolov3.weights"])
config = os.path.sep.join([yolo, "yolov3.cfg"])
labelsPath = os.path.sep.join([yolo, "coco.names"])
LABELS = open(labelsPath).read().strip().split("\n")
net = cv2.dnn.readNetFromDarknet(config, weights)
ln = net.getLayerNames()
ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]
def ImageProcess(image):
global processedImg
(H, W) = (None, None)
frame = image.copy()
if W is None or H is None:
(H, W) = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
starttime = time.time()
layerOutputs = net.forward(ln)
stoptime = time.time()
print("Video is Getting Processed at {:.4f} seconds per frame".format((stoptime-starttime)))
confidences = []
outline = []
for output in layerOutputs:
for detection in output:
scores = detection[5:]
maxi_class = np.argmax(scores)
confidence = scores[maxi_class]
if LABELS[maxi_class] == "person":
if confidence > 0.5:
box = detection[0:4] * np.array([W, H, W, H])
(centerX, centerY, width, height) = box.astype("int")
x = int(centerX - (width / 2))
y = int(centerY - (height / 2))
outline.append([x, y, int(width), int(height)])
confidences.append(float(confidence))
box_line = cv2.dnn.NMSBoxes(outline, confidences, 0.5, 0.3)
if len(box_line) > 0:
flat_box = box_line.flatten()
pairs = []
center = []
status = []
for i in flat_box:
(x, y) = (outline[i][0], outline[i][1])
(w, h) = (outline[i][2], outline[i][3])
center.append([int(x + w / 2), int(y + h / 2)])
status.append(False)
for i in range(len(center)):
for j in range(len(center)):
close = Check(center[i], center[j])
if close:
pairs.append([center[i], center[j]])
status[i] = True
status[j] = True
index = 0
for i in flat_box:
(x, y) = (outline[i][0], outline[i][1])
(w, h) = (outline[i][2], outline[i][3])
if status[index] == True:
cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 150), 2)
elif status[index] == False:
cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
index += 1
for h in pairs:
cv2.line(frame, tuple(h[0]), tuple(h[1]), (0, 0, 255), 2)
processedImg = frame.copy()
create = None
frameno = 0
filename = "newVideo.mp4"
yolo = "yolov3/"
opname = "output2.avi"
cap = cv2.VideoCapture(filename)
time1 = time.time()
while(True):
ret, frame = cap.read()
if not ret:
break
current_img = frame.copy()
current_img = imutils.resize(current_img, width=480)
video = current_img.shape
frameno += 1
if(frameno%2 == 0 or frameno == 1):
Setup(yolo)
ImageProcess(current_img)
Frame = processedImg
cv2.imshow("Image", Frame)
if create is None:
fourcc = cv2.VideoWriter_fourcc(*'XVID')
create = cv2.VideoWriter(opname, fourcc, 30, (Frame.shape[1], Frame.shape[0]), True)
create.write(Frame)
if cv2.waitKey(1) & 0xFF == ord('s'):
break
time2 = time.time()
print("Completed. Total Time Taken: {} minutes".format((time2-time1)/60))
cap.release()
cv2.destroyAllWindows()
Convincing Workers
Since 41% of workers say they won't return to their desks until they feel comfortable, installing social distancing detection is an excellent way to reassure them that the situation has been addressed. People without fevers can still be contagious, so this solution is preferable to thermal imaging cameras.
Space Utilization
Using the detection program, you can find out which places in the workplace are the most popular. As a result, you'll have all the information you need to implement the best precautions.
Monitoring and Taking Measures
The software can also be connected to security video systems outside the office environment, such as in a factory where workers are frequently close to one another, making it possible to keep an eye on the work atmosphere and single out anyone whose personal space is too close to others.
Tracking the Queues
Queue monitoring is a valuable addition to security cameras for businesses in retail, healthcare, and other sectors where waiting in line is unavoidable. The cameras can monitor and recognize whether or not people are following the social distancing requirements. The system can also be configured to work with automatic barricades and digital billboards to provide real-time alerts along with health and safety information.
The adverse effects of social isolation include the following:
Its efficacy decreases when mosquitoes, infected food or water, or other vectors are predominantly responsible for spreading disease.
If a person isn't used to being in a social setting, they may become lonely and depressed.
Productivity drops, and other benefits of interacting with other people are lost.
This tutorial showed us how to build a social distance detection system. The technology uses AI and deep learning to analyze visual data, and incorporating computer vision allows for reasonably accurate distance estimates between people. A red box appears around any group that violates the minimum acceptable threshold value. The system was developed using previously shot footage of a busy roadway and can determine an approximate distance between individuals, classifying the space between people as either a "Safe" or an "Unsafe" distance. In addition, it shows labels according to object detection and classification. The classifier can be used in real-time applications and deployed on live video streams. During pandemics, this technology can be combined with CCTV to keep an eye on the public. Since such screening of large crowds is practical, systems like this can be deployed in high-traffic areas such as airports, bus terminals, markets, streets, shopping mall entrances, campuses, and even workplaces and restaurants. Keeping an eye on the distance between two people lets us ensure that sufficient space is maintained between them.
Welcome back to another Python tutorial for the Raspberry Pi 4! The previous tutorial showed us how to construct a Raspberry Pi-powered cell phone with a microphone and speaker for making and receiving calls and reading text messages (SMS). To make our Raspberry Pi 4 into a fully functional smartphone, we built software in Python. As we monitored text and phone calls being sent and received between the raspberry pi and our mobile phone, we experienced no technical difficulties. But in this tutorial, you'll learn how to hook up the PCF8591 ADC/DAC module to a Raspberry Pi 4.
Since most sensors only output their data in analog values, converting them to binary values that a microcontroller can understand is a crucial part of any integrated electronics project. A microcontroller's ability to process analog data necessitates using an analog-to-digital converter.
Some microcontrollers, including the Arduino, MSP430, and PIC16F877A, contain an onboard analog-to-digital converter (ADC), whereas others, like the 8051 and Raspberry Pi, do not.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|-----|------------|-------------|-------------|
| 1 | Jumper Wires | Amazon | Buy Now |
| 2 | PCF8591 | Amazon | Buy Now |
| 3 | Raspberry Pi 4 | Amazon | Buy Now |
Raspberry Pi 4
PCF8591 ADC Module
100K Pot
Jumper wires
You are expected to have a Raspberry Pi 4 with the most recent version of Raspbian OS installed on it, and to be familiar with using a terminal program like PuTTY to connect to the Pi over the network and access its file system remotely. Those unfamiliar with the Raspberry Pi can learn the basics by reading the articles below.
The PCF8591 module provides four analog inputs and one analog output, each with 8-bit resolution, so analog values are converted to digital readings between 0 and 255 (and vice versa on the output side). The board has a thermistor and an LDR circuit, and it supports both analog input and analog output. To facilitate the I2C protocol, it has dedicated serial clock (SCL) and serial data (SDA) pins. The supply voltage ranges from 2.5 to 6 V, and the stand-by current is minimal. We can also turn the module's potentiometer knob to control the input voltage. A total of three jumpers can be found on the board: connecting J4, J5, and J6 switches between the thermistor, LDR/photoresistor, and adjustable-voltage circuits. D1 and D2 are two LEDs on the board, with D1 indicating the strength of the output voltage and D2 indicating the strength of the supply voltage. The higher the output or supply voltage, the brighter the corresponding LED. The LEDs can also be tested using a potentiometer connected to the VCC or AOUT pins.
Microprocessors, Arduinos, Raspberry Pis, and other digital logic circuits can interact with the physical environment thanks to Analogue-to-Digital Converters (ADCs). Many digital systems gather information about their settings by analyzing the analog signals produced by transducers such as microphones, light detectors, thermometers, and accelerometers. These signals constantly vary in value since they are derived from the physical world.
Digital circuits work with binary signals, which can only be in one of two states, "1" (HIGH) or "0" (LOW), as opposed to the infinitely variable voltage values of analog signals. An Analog-to-Digital Converter (A/D) is therefore an essential electronic circuit for translating continuously varying analog signals into discrete digital ones.
To put it simply, an analog-to-digital converter (ADC) is a device that, given an instantaneous reading of an analog voltage, generates a unique digital output code that stands in for that reading. The number of binary digits, or bits, used to represent the original analog voltage value determines the precision of the A/D converter.
By rotating the potentiometer's wiper terminal between 0 and VMAX, we may see a continuous output signal with an endless set of output values related to the wiper position. In a potentiometer, the output voltage constantly varies while the wiper is moved between fixed positions. Variations in temperature, pressure, liquid levels, and brightness are all examples of analog signals.
A digital circuit uses a single rotary switch to control the potential divider network, taking the place of the potentiometer's wiper at each node. The output voltage, VOUT, rapidly transitions from one node to the next as the switch is turned, with each node's value representing a multiple of 1.0 volts.
The output is guaranteed to take discrete values such as 2 V or 3 V, but NOT 2.5 V, 3.1 V, or 4.6 V. Using a multi-position switch and more resistive components in the voltage-divider network would produce more discrete switching steps and thus finer output voltage levels.
By this definition, we can see that a digital signal has discrete (step-by-step) values, while an analog signal's values change continuously over time: the digital output jumps straight from "LOW" to "HIGH" or from "HIGH" to "LOW".
So the question becomes how to transform an infinitely variable signal into one with discrete values or steps that a digital circuit can work with.
Although several commercially available analog-to-digital converter (ADC) chips exist, such as the ADC08xx family, for converting analog voltage signals to their digital equivalents, a primary ADC can be constructed out of discrete components.
Using comparators to detect various voltage levels and output their switching signal state to an encoder is a straightforward method known as parallel encoding, flash encoding, simultaneous encoding, or multiple comparator converters.
For a given n-bit resolution, the output code is produced by a chain network of precision resistors and a series of comparators connected at equally spaced reference points.
As soon as an analog signal is provided to the comparator input, it is evaluated with a reference voltage, making parallel converters advantageous because of their ease of construction and lack of need for timing clocks. The following comparator circuit may be of interest.
The LM339N is an analog comparator that compares the relative magnitudes of two voltage levels via its two analog inputs (one positive and one negative).
The comparator receives two signals, one representing the input voltage (VIN) and the other representing the reference value (VREF). The comparator's digital circuits state, "1" or "0," is determined by comparing two output voltages at the input of the comparator.
One input (VREF) receives a reference voltage, and the other input (VIN) receives the input voltage to be compared to it. The LM339 comparator's output is "OFF" when the input voltage is lower than the reference (VIN < VREF) and "ON" when the input voltage is higher than it (VIN > VREF). A comparator is thus a device for determining which of two voltages is greater.
Using the potential divider network established by R1 and R2, we can calculate VREF. If the two resistors are identical in value (R1 = R2), then the reference voltage will be half the input power (V/2). Therefore, like with a 1-bit ADC, the output of an open-collector comparator is HIGH if VIN is lower than V/2 and LOW otherwise.
By increasing the number of resistors in the voltage-divider circuit, we can "divide" the supply voltage by an amount set by the ratio of the resistances. However, the number of comparators needed increases with the number of resistors in the voltage-divider network.
For an "n"-bit binary output, where "n" is commonly between 8 and 16 bits, a 2n- 1 comparator would be needed in general. As we saw previously, the comparator utilized by the one-bit ADC to determine whether or not VIN was more significant than the V/2 voltage output was 21 minus 1, which equals 1.
If we want to build a 2-bit ADC, we'll need 22-1 or "3" comparators since the 4-to-2-bit encoder circuitry depicted above requires four distinct voltage levels to represent the four digital values.
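Assuming the three comparators U1, U2, and U3 switch at reference levels of 1 V, 2 V, and 3 V respectively (a reconstruction consistent with the description below), the encoder's truth table looks like this:

| VIN | U3 | U2 | U1 | Q1 | Q0 |
|-----|----|----|----|----|----|
| 0 to 1 V | 0 | 0 | 0 | 0 | 0 |
| 1 to 2 V | 0 | 0 | 1 | 0 | 1 |
| 2 to 3 V | 0 | 1 | X | 1 | 0 |
| 3 to 4 V | 1 | X | X | 1 | 1 |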
Where X is a "don't care" statement, representing a logical 0 or 1.
So how does this analog-to-digital converter operate? To be of any value, an A/D converter must generate a faithful digital copy of the analog input signal. To keep things straightforward in this simple 2-bit analog-to-digital example, we've assumed that VIN falls between 0 and 4 volts and have adjusted VREF and the voltage-divider network so that there is a 1 V drop across each resistor.
A binary zero (00) is output by the encoder on pins Q0 and Q1 when the input voltage VIN is below the lowest reference level, i.e., between 0 and 1 volt. Since comparator U1's reference input is set to 1 volt, when VIN rises above 1 volt but stays below 2 volts, U1's output goes HIGH. The change at input D1 causes the priority encoder, used for the 4-to-2-bit encoding, to generate a binary result of "1" (01).
Remember that the inputs of a Priority Encoder, like the TTL 74LS148, are all assigned different priority levels. The highest priority input is always used as the output of the priority encoder. So, when a higher priority input is available, lesser priority inputs are disregarded. Therefore, if there are many inputs simultaneously at logic state "1", only the input with high priority will have its output code reflected on D0 and D1.
Thus, once VIN is greater than 2 volts, the next reference level, comparator U2 will sense the difference and output HIGH. When VIN is more than 3 volts, the priority encoder outputs a binary "3" (11), as input D3 has higher priority than inputs D0 through D2. Each comparator outputs a HIGH or LOW state to the encoder, which generates 2-bit binary data between 00 and 11 as VIN changes between the reference voltage levels.
This is great and all, but commercially available priority encoders, like the TTL 74LS148, are 8-input circuits, and if we use one of these for a 2-bit converter, most of the inputs will go unused. Instead, a straightforward encoder circuit can be created from a digital Ex-OR gate and a grid of signaling diodes.
Before feeding the diodes, the results of the comparators go through an Exclusive-OR gate to be encoded. Whenever the diode is reverse biased, an external pull-down resistor is connected between the diodes' outputs and ground (0V) to maintain a LOW state and prevent the outputs from floating.
Also, as with the encoder above, the value of VIN determines which comparators send a HIGH (or LOW) signal to the exclusive-OR gates, which produce a HIGH output if exactly one of their inputs is HIGH but not both (the corresponding Boolean expression is Q = A·B′ + A′·B). These Ex-OR gates could also be built from AND-OR-NAND combinational logic.
The difficulty with both of these 4-to-2 converter designs is that the input analog voltage at VIN needs to vary by one full volt for the encoder to vary its output code, limiting the precision of the simple two-bit A/D converter to 1 volt. The output resolution can be improved by employing more comparators to convert to a three-bit A/D converter.
The parallel ADC above takes an analog input voltage between 0 and just over 3 volts and turns it into a 2-bit binary code. Since a 3-bit digital system gives 2^3 = 8 possible digital outputs, the input analog voltage can be compared against a scale of eight voltages, each one-eighth (1/8) of the reference supply. This means that with a 4 V supply we can now measure to an accuracy of 0.5 V (4/8), and that 2^3 - 1 = 7 comparators are needed to generate a binary code with 3-bit resolution (from 000 (0) to 111 (7)).
This will provide us with a three-bit code for each of the eight potential values of the analog input of:
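Assuming a 4 V full scale with seven comparators feeding encoder inputs D1 through D7 at 0.5 V intervals (a reconstruction based on the description above), a typical truth table is:

| VIN | D7 | D6 | D5 | D4 | D3 | D2 | D1 | Q2 | Q1 | Q0 |
|-----|----|----|----|----|----|----|----|----|----|----|
| 0 to 0.5 V | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0.5 to 1 V | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
| 1 to 1.5 V | 0 | 0 | 0 | 0 | 0 | 1 | X | 0 | 1 | 0 |
| 1.5 to 2 V | 0 | 0 | 0 | 0 | 1 | X | X | 0 | 1 | 1 |
| 2 to 2.5 V | 0 | 0 | 0 | 1 | X | X | X | 1 | 0 | 0 |
| 2.5 to 3 V | 0 | 0 | 1 | X | X | X | X | 1 | 0 | 1 |
| 3 to 3.5 V | 0 | 1 | X | X | X | X | X | 1 | 1 | 0 |
| 3.5 to 4 V | 1 | X | X | X | X | X | X | 1 | 1 | 1 |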
An "X" may be a logic 0 or a logic 1 to indicate a "don't care" state.
So we can see that increasing the ADC's resolution requires more comparators and more reference voltage levels, and yields more output binary bits.
Therefore, an analog-to-digital converter with 4-bit resolution needs 15 (2^4 - 1) comparators, an 8-bit resolution requires 255 (2^8 - 1) comparators, and a 10-bit resolution needs 1023 comparators, and so on. The complexity of this type of Analog-to-Digital Converter circuit therefore grows rapidly as the number of output bits increases.
Because of its fast real-time conversion rate, a parallel or "flash" A/D converter can easily be incorporated into a project whenever only a few binary bits are needed to represent an analog input signal on a display unit.
As an input interface circuit component, an analog signal from sensors or transducers is converted into a digital binary code by an analog-to-digital converter. Similarly, a digital binary code can be converted into a comparable analog quantity using a Digital-to-Analog Conversion for output interfacing to operate a motor or actuator or, more often, in audio applications.
Knowing the Raspberry Pi's I2C port pins and setting up the I2C connection in the pi 4 are the initial steps in using a PCF8591 with the Pi.
GPIO2 and GPIO3 on the Rpi Model are utilized for I2C communication in this guide.
Raspberry Pi I2C Configuration
I2C support is disabled on the Raspberry Pi by default, so it must be activated before anything else. To turn on the Raspberry Pi's I2C port:
First, open a terminal and enter sudo raspi-config.
Second, the Raspberry Pi Software Configuration Tool opens.
Third, activate I2C by selecting Interfacing Options.
Finally, restart the Pi after enabling I2C.
The Raspberry Pi has to know the I2C address of the PCF8591 IC before communication can begin. You may get the address by linking the PCF8591's SDA and SCL pins to the Raspberry Pi's own SDA and SCL jacks. The 5-volts and GND pins should be connected as well.
You may find the address of an attached I2C device by opening a terminal and entering the following command.
sudo i2cdetect -y 1 or sudo i2cdetect -y 0
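If the wiring is correct, the PCF8591 typically appears at address 0x48, and the scan output looks something like this (abridged; your exact grid may differ):

     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- 48 -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --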
After locating the I2C address, the next step is constructing the circuit and setting up the required libraries to use PCF8591 and a Raspberry Pi 4.
The circuit diagram to interface the PCF8591 with the Raspberry Pi is straightforward. In this example of interfacing, we'll read the analog signal from any analog inputs and display them in the Raspberry Pi terminal. We have a 100K pot to adjust the settings.
Connect the module's VCC and GND to the Pi's 5 V and ground pins. Then, hook up the module's SDA and SCL to GPIO2 (physical pin 3) and GPIO3 (physical pin 5), respectively. Last but not least, link AIN0 to the 100K pot. Instead of using the Terminal to view the ADC values, a 16x2 LCD can be added.
The complete code and demo video are included after this guide.
To communicate over the I2C bus, you must first import the smbus library; the time library is used to add a delay between readings.
import smbus
import time
Create some variables now. The first variable stores the I2C address of the PCF8591, and the second stores the control byte that selects the first analog input pin.
address = 0x48
A0 = 0x40
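For reference, 0x40 is the PCF8591's control byte with the analog-output-enable bit set and input channel 0 selected; the other three input channels follow the same pattern, so you could define them the same way (an illustrative sketch, not used in the script below):

A1 = 0x41  # analog input channel 1
A2 = 0x42  # analog input channel 2
A3 = 0x43  # analog input channel 3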
Next, we create a bus object by invoking the smbus library's SMBus(1) function; the argument 1 selects I2C bus 1.
bus = smbus.SMBus(1)
The first line in the while loop instructs the IC to select the first analog input pin. The second line reads the conversion result from that pin and saves it in the variable value, which is then printed.
while True:
    bus.write_byte(address,A0)
    value = bus.read_byte(address)
    print(value)
    time.sleep(0.1)
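One quirk of the PCF8591 worth knowing: each I2C read returns the result of the previous conversion, so the very first value can be stale. If that matters, a common workaround (a sketch using the same bus, address, and A0 variables as above) is to read twice and keep the second byte:

bus.write_byte(address, A0)
bus.read_byte(address)          # discard the stale previous conversion
value = bus.read_byte(address)  # current reading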
Finally, put the Python script in a file ending in .py and run it in the Raspberry Pi terminal with the command below.
python filename.py
Ensure that I2C communication is turned on and that the pins are connected according to the diagram before running the code, or you will get errors. The analog readings should then appear in the terminal in the format below, and the values gradually shift as you turn the pot's knob.
Here is the full Python script.
import smbus
import time
address = 0x48
A0 = 0x40
bus = smbus.SMBus(1)
while True:
  bus.write_byte(address,A0)
  value = bus.read_byte(address)
  print(value)
  time.sleep(0.1)
We rely heavily on electronic gadgets in today's high-tech society, and the digital signal is the driving force behind these digital devices. While most signals are now represented digitally, some are still analog, so an ADC is employed to transform analog signals into digital ones. ADCs are used in an enormous variety of contexts; here are a few examples:
The digitized voice signal is used by cell phones. The voice is first transformed to digital form using an ADC before being sent to the cell phone's transmitter.
Digital photos and movies shot with a camera can be viewed on any computer or mobile device thanks to an analog-to-digital converter.
X-rays and MRIs are just two examples of medical imaging techniques that use an ADC to go from analog to digital before further processing; the digitized images can then be viewed and shared on ordinary computers and displays.
ADC converters can also transfer music from a cassette tape to a digital format, such as a CD or a USB flash drive.
The Analog-to-Digital Converter (ADC) in a digital oscilloscope converts analog signals to digital ones that can then be displayed and used for other reasons.
The air conditioner's built-in temperature sensors allow for consistent comfort levels. The onboard controller reads the temperature and makes adjustments based on the data it receives from the ADC.
Nowadays, practically everything has a digital counterpart, so almost every gadget must include an ADC, for the simple reason that its digital processing can only reach the analog world through an analog-to-digital converter.
This piece taught us how to connect a Raspberry Pi 4 to a PCF8591 analog-to-digital converter module. We observed the output displayed as integers on our Terminal and also looked into how the ADC generates its output signals. Next, we will use OpenCV and a Raspberry Pi 4 to create a social distance detector.
Greetings, and welcome to another tutorial in our series on Raspberry Pi 4 Python programming. The previous guide covered the basics of transmitting data over the radio using the nRF24L01 chip with the Pi 4, where we also learned how to interface an Arduino with the Raspberry Pi 4 and send radio signals between the two devices. This tutorial, however, will walk you through building a Raspberry Pi-based mobile phone with a microphone and speaker for making and receiving calls and reading text messages (SMS). The project also serves as a proper guide to interfacing a GSM module with the Raspberry Pi, with all the Code needed to run the most fundamental features of any modern smartphone. First, we will understand what GSM is, its architecture, and how it works; then we will learn how to program it on our Pi 4. So let us begin.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|-----|------------|-------------|-------------|
| 1 | Jumper Wires | Amazon | Buy Now |
| 2 | LCD 16x2 | Amazon | Buy Now |
| 3 | Raspberry Pi 4 | Amazon | Buy Now |
Raspberry Pi 4
GSM Module
16x2 LCD
4x4 Keypad
10k pot
Breadboard
Connecting jumper wire
Power supply
Speaker
Microphone
SIM Card
Loudspeaker
The acronym "GSM" refers to the "global system for mobile communication" and is the name of a type of mobile communication modem (GSM). Bell Labs was responsible for conceptualizing GSM in the 1970s. It's one of the most common forms of mobile communication around the globe. The 850MegaHertz, 900MegaHertz, 1800 Megahertz, and 1900 Megahertz frequency bands are utilized by GSM networks, which are part of an open and digital mobile network used to carry voice and data services.
Using the telecommunications method of multiple time division access (TDMA), GSM technology was created as a digital system. For transmission, a GSM converts analog signals to digital ones, compresses them further and delivers them through a channel sharing bandwidth with two data streams from separate clients. The data rates transported by the digital system range from 64 kilobytes per second to 120 Megabytes per second.
Macro, micro, and umbrella cells coexist in a GSM network, and the implementation context determines the specifics of each cell: each may have a different range of coverage depending on the setting.
Time-division multiple access (TDMA) works by giving each user a specific amount of time to transmit on the same frequency. It's flexible, supporting data rates from 64kbps to 120Mbps and allowing for clear voice communications.
The following are the primary components of the GSM architecture:
the Network and Switching Subsystem (NSS)
the Base Station Subsystem (BSS)
the Mobile Station (MS)
the Operations and Support Subsystem (OSS)
All of these components must work together for proper communication.
Each component of the GSM system design contributes to what is collectively called the core system/network. In this case, the mobile network system is primarily controlled and interfaced with via a data network consisting of several different components. Listed below are some of the most crucial elements of the underlying network.
One of the essential parts of a GSM network is its core network, where the Mobile Switching Center (MSC) resides. This MSC performs the same functions as a common switching node in an ISDN or PSTN. Still, it provides additional features to accommodate mobile users' requirements, such as authentication, registration, inter-MSC handovers, call localization, and routing.
In addition, it allows users' mobile phone networks to connect to the PSTN (public switched telephone network) for making and receiving landline calls. To facilitate mobile-to-mobile calls across different networks, interfaces to the switching centers of the other telephone networks, such as the PSTN, are provided.
Every subscriber's administrative details, including their last known location, are stored in the HLR database. In this manner, calls can be routed over the GSM network to the appropriate mobile switching center's base station. If a call comes in while a subscriber's phone is turned on, the network can determine which base transceiver station the phone is registered to and connect the call to the correct phone.
When the phone is turned on but not being used, it nevertheless registers to ensure the HLR system is aware of its current location. Each network has a single HLR, which may be physically split across several data centers for practical reasons.
The Visitor Location Register (VLR) incorporates data from the HLR network to provide the services an individual subscriber has requested. It is possible to run the VLR independently, but it is most commonly implemented as a core component of the MSC; this makes access easier and faster.
The Equipment Identity Register (EIR) is the part of the network infrastructure in charge of deciding whether particular pieces of mobile equipment are allowed network access. An International Mobile Equipment Identity (IMEI) number uniquely identifies each piece of mobile equipment.
This IMEI number is permanently embedded within the mobile device and checked by the network after registration. Depending on the data in the EIR, the mobile phone may be given one of three possible network access states: allowed, banned, or monitored.
When users insert their SIM card into a phone, the associated secret key is held in a secure database known as the AUC (Authentication Centre). The AUC is used extensively for authentication and for ciphering on the radio channel.
When the network has no location information for a mobile station (MS), an incoming call first terminates at the GMSC (Gateway Mobile Switching Centre). Using the Mobile Station International Subscriber Directory Number (MSISDN) and the HLR, the GMSC can locate the visited MSC and connect the call to the appropriate location. The "MSC" part of the name GMSC is somewhat misleading, as the gateway procedure does not actually require an MSC.
The GSM specifications refer to two SMS gateways, which route messages in different directions.
Delivering a short message to mobile equipment (ME) requires the Short Message Service Gateway MSC (SMS-GMSC), while short messages originating from a mobile are routed through the Short Message Service Inter-Working MSC (SMS-IWMSC). The SMS-GMSC's role is similar to that of the GMSC, whereas the SMS-IWMSC provides a fixed access point to the Short Message Service Centre.
These were the primary nodes in the GSM system's infrastructure. While they frequently shared physical space, the core network elements were sometimes distributed across the country, which provides a degree of resilience in the event of a failure.
The BSS is the connection point between the mobile node and the broader network infrastructure. The radio transceivers and protocol management for mobile devices are housed in the Base Transceiver Station, while a Base Station Controller manages the transceivers and serves as a bridge between mobile devices and the mobile switching center.
The network subsystem handles connectivity between the network and the mobile stations. The Mobile services Switching Centre is the backbone of the Network Subsystem, allowing users to connect to other networks (ISDN, PSTN, etc.). The GSM system's ability to route calls and support roaming depends on two additional components, the Home Location Register and the Visitor Location Register.
In addition, it stores the Equipment Identity Register, which keeps track of all the mobile devices and their associated IMEI numbers. The acronym IMEI refers to the unique identifier for mobile devices worldwide.
In the second generation of GSM network design, the mobile devices communicate with the BSS, or Base Station Subsystem. These components comprise this subsystem, and each will be examined.
As part of a GSM network, the radio transmitters, receivers, and their associated antennas make up the Base Transceiver Station, which transmits to, receives from, and communicates directly with mobiles. The base station is the central component of each cell and communicates with mobile devices over an interface known as the Um interface and its related protocols.
The base station controller (BSC) is employed for the following step back into GSM technology. This controller is typically co-located within one of the base transceiver stations it controls. This controller handles radio resource management, including channel allocation and handover between base station groups. Over the Abis interface, it communicates with the BTSs.
The BSS uses the radio interface to ensure that multiple subscribers can use the system at the same time; each base station can support many users because each channel can carry up to eight of them.
The network provider places base stations strategically to ensure comprehensive coverage. The area a base station covers is known as a "cell". Since signals cannot be prevented from bleeding into neighbouring cells, the channels used in one cell are not reused in the next.
Mobile phones include a transceiver, a display, and a CPU, all network-connected and operated via a SIM card. In a GSM network, the mobile station or mobile equipment, most commonly a cell phone, is what the operator monitors and controls. Handsets have shrunk significantly in size while their functionality has skyrocketed, and they offer much longer intervals between charges. A handset consists of the phone hardware and the subscriber identity module (SIM), among other components.
A mobile device's hardware consists of its primary components, such as the housing, screen, battery, and electronics used to generate the signal and process the signal receiver before transmission. The IMEI is a unique number assigned to each mobile device. This feature can be permanently programmed into a phone throughout its manufacturing process. During the registration process, the network accesses this database to see if the device has been flagged as stolen.
A user's identity on the network is stored in their SIM card, along with other data such as the IMSI number. Because the IMSI is stored in the SIM, a user can easily switch phones by swapping SIM cards. This ease of switching handsets without changing phone numbers encouraged people to upgrade, generating more revenue for network operators and contributing to GSM's overall commercial triumph.
The OSS is an integral aspect of any functional GSM network, linking the NSS and BSC components. Its primary focus is the traffic load on the GSM network and the BSS. It is worth noting that, as the number of base stations grows with the subscriber population, some maintenance responsibilities are moved to the base station controllers to lower the system's maintenance costs.
The 2G GSM network architecture is predicated on a rational functioning method. This approach is remarkably straightforward compared to today's mobile network architectures, which rely on software-defined units to facilitate highly adaptable operations. However, the 2G GSM architecture will show how the necessary voice and essential operational functions are organized.
The following are some of the functions provided by the GSM module.
Enhanced spectrum efficiency
International roaming, integrated services digital network (ISDN) compatibility, and support for future services
High-quality voice communication and encrypted phone conversations
Standard handset features such as a programmable alarm clock, a fixed dialling number, a real-time clock, and the ability to send and receive short messages (SMS)
As a result of its rigorous security measures, the GSM system is currently the safest available for use in the telecommunications industry. Call privacy and subscriber anonymity for GSM users are only protected during transmission, but this is still a massive step toward attaining end-to-end security.
In either its mobile phone or modem form, a Global System for Mobile Communications (GSM) modem enables two computers or processors to connect across a network. A SIM card is needed to run a GSM modem, and it can only be used within the coverage area the network provider has paid for. It has serial, USB, and Bluetooth connectivity options for linking to a personal computer.
Any regular GSM cell phone can double as a GSM modem if you have a suitable cable and driver installed on your PC, although a dedicated GSM modem is the better choice. GSM modems are found in many devices, including POS terminals, inventory management systems, surveillance cameras, weather stations, and GPRS-mode remote data loggers.
Below is a circuit showing how to connect a GSM modem to the MC using the level-shifting IC Max232. When a command arrives by short message service (SMS) from any mobile device, the SIM card-mounted GSM modem passes it to the MC over the serial connection. The GSM modem is programmed to respond to the command "STOP" by driving an MC output pin, which is used to deactivate the ignition switch.
If a designated input is driven low, the GSM modem sends a predetermined message (in this case, "ALERT") to the user. A 16x2 LCD screen displays the entire procedure.
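To make the message flow concrete, here is a minimal Python sketch of the same STOP logic, written with pyserial as if the modem were attached to a Raspberry Pi rather than the MC; the port name and SMS storage index are assumptions for illustration only.
import serial
import time

gsm = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=2)  # hypothetical port

def send_at(cmd):
    # Every AT command is ASCII text terminated with \r\n.
    gsm.write((cmd + "\r\n").encode("ascii"))
    time.sleep(0.5)
    return gsm.read(gsm.in_waiting or 1).decode("ascii", errors="ignore")

send_at("AT+CMGF=1")            # switch the modem to text mode
reply = send_at("AT+CMGR=1")    # read the SMS stored at index 1 (assumed)
if "STOP" in reply:
    # Here the MC would drive the output that deactivates the ignition switch.
    print("STOP received: deactivate ignition")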
In this project, we use a GSM module and a Raspberry Pi 4 to manage the entire system and interface its many parts. A 4x4 alphanumeric keypad lets you input data of any kind, including phone numbers and text messages, place and answer calls, and read and respond to text messages. The SIM900A GSM module connects the system to the wireless network for making and receiving calls and sending and receiving text messages. We've added a microphone and a loudspeaker for voice calls, and a 16x2 liquid crystal display shows information such as menu options and alerts.
With alphanumeric input, the same keypad is used to type both numbers and letters. The code that enables letters in addition to numbers with this method is shown in the code section below.
It's simple to put this plan into action: every function is driven from the alphanumeric keypad. Below you'll find the complete code and a demonstration video. This section elaborates on the four functions of this project.
To place a call on the Pi 4 phone we built, press the letter "C" and enter the cellphone number you wish to call on the alphanumeric keypad. Once the correct number has been entered, hit "C" again. The Pi 4 then issues the following AT command to connect the call to the specified number:
ATDxxxxxxxxxx; <Enter>, where xxxxxxxxxx is the entered mobile number.
Answering a phone call is simple. When a call comes into the SIM number stored in the GSM Module of your system, the LCD will display the message "Incoming..." along with the caller's number. All that's left to do is hit the 'A' key to answer the call. Pi 4 will send the following command to the GSM Module when the "A" button is pressed:
ATA <enter>
Pressing "D" on our Raspberry Pi phone allows us to send a text message. To whom (or what) should we address the SMS message that the system has just requested? Once the number has been entered, pressing "D" again will prompt the LCD to request a message. To send an SMS, enter the message using the keypad as you would with any other mobile device, and then hit the 'D' key again. Raspberry Pi can send SMS with the following command:
AT+CMGF=1 <enter>
AT+CMGS="xxxxxxxxxx" <Enter>, where xxxxxxxxxx is the entered mobile number
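One detail the command listing leaves implicit is that the message body must be terminated with Ctrl+Z (byte 0x1A) before the modem actually transmits it. A minimal pyserial sketch of the whole sequence, with a placeholder number and an assumed port name:
import serial
import time

gsm = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=2)  # hypothetical port

gsm.write(b"AT+CMGF=1\r\n")               # text mode
time.sleep(0.5)
gsm.write(b'AT+CMGS="xxxxxxxxxx"\r\n')    # destination number placeholder
time.sleep(0.5)
gsm.write(b"Hello from Raspberry Pi")     # message body
gsm.write(b"\x1a")                        # Ctrl+Z terminates and sends the SMS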
Even this part is easy to use. Here, the SIM card receives SMS messages over the GSM network, and the Raspberry Pi keeps a close eye on the UART for the SMS notification. A new message is announced by the LCD showing "New Message," and reading it is as simple as pressing the "B" key. The received-SMS notification looks like this:
+CMTI: "SM",6 where 6 is the location at which the message is stored on the SIM card.
When the RPi detects the SMS-received notification, it extracts the SMS storage location and instructs the GSM module to read the message. Meanwhile, the LCD flashes the words "New Message" in a prominent location.
AT+CMGR=<SMS stored location><enter>
The GSM module then delivers the stored message to the Raspberry Pi, which extracts the SMS text and shows it on the LCD. The microphone and speaker need no code at all; the GSM module handles the audio path directly.
The GPIO pins of the Raspberry Pi are wired to the RS, EN, D4, D5, D6, and D7 pins of the 16x2 liquid crystal display. The GSM module's Rx and Tx pins connect directly to the Raspberry Pi's Tx and Rx pins. Rows R1, R2, R3, and R4 of the 4x4 keypad are connected to GPIOs 12, 16, 20, and 21, while columns C1, C2, C3, and C4 are connected to GPIOs 26, 19, 13, and 6. The microphone joins directly to the mic+ and mic- pins and the loudspeaker to the sp+ and sp- pins; the loudspeaker can be driven directly from the GSM module without an audio amplifier circuit, though an amplifier can be added to boost the volume.
This Pi 4 mobile phone's programming may be challenging for novices. The programming language of choice for this project is Python.
Here, we define the keypad() function for basic numeric input. We've also added an alphaKeypad() function for typing letters, so the same keypad serves both purposes. Much like the familiar Arduino keypad library, it supports multi-tap entry, so a whole string of text or a numeric value can be typed with only a handful of presses.
For example, if we push key 2 (abc2) once, the LCD will display the letter 'a.' If we press it again, the letter 'b' will take its place, and if we hit it three more times, the letter 'c' will appear in the same spot. After holding down a key for a short time, the LCD pointer will advance to the following available location. We can now proceed to the next character or number. Any other keys can be processed in the same way.
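The multi-tap rule itself is easy to isolate. This small, self-contained sketch (not part of the project code below) shows how the count of consecutive presses selects a character; the key-to-letters map mirrors an ordinary phone keypad:
# Each key cycles through its characters; N consecutive presses pick the Nth one.
MULTITAP = {'2': "ABC2", '3': "DEF3", '4': "GHI4", '5': "JKL5",
            '6': "MNO6", '7': "PQRS7", '8': "TUV8", '9': "WXYZ9"}

def multitap_char(key, presses):
    chars = MULTITAP[key]
    return chars[(presses - 1) % len(chars)]

print(multitap_char('2', 1))  # 'A'
print(multitap_char('2', 2))  # 'B'
print(multitap_char('2', 3))  # 'C'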
def keypad():
    for j in range(4):
        gpio.setup(COL[j], gpio.OUT)
        gpio.output(COL[j], 0)
        ch = 0
        for i in range(4):
            if gpio.input(ROW[i]) == 0:
                ch = MATRIX[i][j]
                while (gpio.input(ROW[i]) == 0):
                    pass  # wait for the key to be released
                gpio.output(COL[j], 1)
                return ch
        gpio.output(COL[j], 1)
def alphaKeypad():
    lcdclear()
    setCursor(x, y)
    lcdcmd(0x0f)
    msg = ""
    while 1:
        key = 0
        count = 0
        key = keypad()
        if key == '1':
            ind = 0
            maxInd = 6
            Key = '1'
            getChar(Key, ind, maxInd)
        .... .....
        ..... .....
To begin, we have declared the pins for the liquid crystal display, the keypad, and other components, as well as included the necessary libraries in this python script:
import RPi.GPIO as gpio
import serial
import time
msg=""
alpha="1!@.,:?ABC2DEF3GHI4JKL5MNO6PQRS7TUV8WXYZ90 *#"
x=0
y=0
MATRIX = [
['1','2','3','A'],
['4','5','6','B'],
['7','8','9','C'],
['*','0','#','D']
]
ROW = [21,20,16,12]
COL = [26,19,13,6]
... .....
..... .....
Next, the pins must be configured in the proper direction (input or output):
gpio.setwarnings(False)
gpio.setmode(gpio.BCM)
gpio.setup(RS, gpio.OUT)
gpio.setup(EN, gpio.OUT)
gpio.setup(D4, gpio.OUT)
gpio.setup(D5, gpio.OUT)
gpio.setup(D6, gpio.OUT)
gpio.setup(D7, gpio.OUT)
gpio.setup(led, gpio.OUT)
gpio.setup(buz, gpio.OUT)
gpio.setup(m11, gpio.OUT)
gpio.setup(m12, gpio.OUT)
gpio.setup(button, gpio.IN)
gpio.output(led , 0)
gpio.output(buz , 0)
gpio.output(m11 , 0)
gpio.output(m12 , 0)
Serial communication is then started as shown below.
Serial = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=2)
We must now write the liquid crystal display driver functions. The def lcdcmd(ch): and def lcdwrite(ch): functions deliver commands and data to the LCD, respectively. The display can be cleared with def lcdclear():, the cursor position set with def setCursor(x,y):, and a string sent to the display with def lcdprint(Str):.
def lcdcmd(ch):
    gpio.output(RS, 0)
    gpio.output(D4, 0)
    gpio.output(D5, 0)
    gpio.output(D6, 0)
    gpio.output(D7, 0)
    if ch&0x10==0x10:
        gpio.output(D4, 1)
    .... .....
    ..... ....
def lcdwrite(ch):
    gpio.output(RS, 1)
    gpio.output(D4, 0)
    gpio.output(D5, 0)
    gpio.output(D6, 0)
    gpio.output(D7, 0)
    if ch&0x10==0x10:
        gpio.output(D4, 1)
    if ch&0x20==0x20:
        gpio.output(D5, 1)
    .... .....
    ..... ....
def lcdclear():
    lcdcmd(0x01)
def lcdprint(Str):
    l = len(Str)
    for i in range(l):
        lcdwrite(ord(Str[i]))
def setCursor(x, y):
    if y == 0:
        n = 128 + x
    elif y == 1:
        n = 192 + x
    lcdcmd(n)
Next, we'll need to code some features for interacting with text messages, phone calls, and incoming calls.
The call is placed using def call():. The LCD shows the incoming call and number via def receiveCall(data):, and the call is answered with def attendCall():.
The message is composed and sent using the alphaKeypad() method, accessed via def sendSMS():. An incoming SMS is detected and its storage location retrieved by def receiveSMS(data):, and the LCD is updated with the message by def readSMS(index):.
All of the operations mentioned above are included in the Code that follows.
import RPi.GPIO as gpio
import serial
import time
msg=""
# start indices of each key's character group within the alpha string below
alpha="1!@.,:?ABC2DEF3GHI4JKL5MNO6PQRS7TUV8WXYZ90 *#"
x=0
y=0
MATRIX = [
['1','2','3','A'],
['4','5','6','B'],
['7','8','9','C'],
['*','0','#','D']
]
ROW = [21,20,16,12]
COL = [26,19,13,6]
moNum=['0','0','0','0','0','0','0','0','0','0']
m11=17
m12=27
led=5
buz=26
button=19
RS =18
EN =23
D4 =24
D5 =25
D6 =8
D7 =7
HIGH=1
LOW=0
gpio.setwarnings(False)
gpio.setmode(gpio.BCM)
gpio.setup(RS, gpio.OUT)
gpio.setup(EN, gpio.OUT)
gpio.setup(D4, gpio.OUT)
gpio.setup(D5, gpio.OUT)
gpio.setup(D6, gpio.OUT)
gpio.setup(D7, gpio.OUT)
gpio.setup(led, gpio.OUT)
gpio.setup(buz, gpio.OUT)
gpio.setup(m11, gpio.OUT)
gpio.setup(m12, gpio.OUT)
gpio.setup(button, gpio.IN)
gpio.output(led , 0)
gpio.output(buz , 0)
gpio.output(m11 , 0)
gpio.output(m12 , 0)
for j in range(4):
    gpio.setup(COL[j], gpio.OUT)
    gpio.output(COL[j], 1)
for i in range(4):
    gpio.setup(ROW[i], gpio.IN, pull_up_down=gpio.PUD_UP)
Serial = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=2)
data=""
def begin():
    lcdcmd(0x33)
    lcdcmd(0x32)
    lcdcmd(0x06)
    lcdcmd(0x0C)
    lcdcmd(0x28)
    lcdcmd(0x01)
    time.sleep(0.0005)
def lcdcmd(ch):
    gpio.output(RS, 0)
    gpio.output(D4, 0)
    gpio.output(D5, 0)
    gpio.output(D6, 0)
    gpio.output(D7, 0)
    if ch&0x10==0x10:
        gpio.output(D4, 1)
    if ch&0x20==0x20:
        gpio.output(D5, 1)
    if ch&0x40==0x40:
        gpio.output(D6, 1)
    if ch&0x80==0x80:
        gpio.output(D7, 1)
    gpio.output(EN, 1)
    time.sleep(0.005)
    gpio.output(EN, 0)
    # Low bits
    gpio.output(D4, 0)
    gpio.output(D5, 0)
    gpio.output(D6, 0)
    gpio.output(D7, 0)
    if ch&0x01==0x01:
        gpio.output(D4, 1)
    if ch&0x02==0x02:
        gpio.output(D5, 1)
    if ch&0x04==0x04:
        gpio.output(D6, 1)
    if ch&0x08==0x08:
        gpio.output(D7, 1)
    gpio.output(EN, 1)
    time.sleep(0.005)
    gpio.output(EN, 0)
def lcdwrite(ch):
    gpio.output(RS, 1)
    gpio.output(D4, 0)
    gpio.output(D5, 0)
    gpio.output(D6, 0)
    gpio.output(D7, 0)
    if ch&0x10==0x10:
        gpio.output(D4, 1)
    if ch&0x20==0x20:
        gpio.output(D5, 1)
    if ch&0x40==0x40:
        gpio.output(D6, 1)
    if ch&0x80==0x80:
        gpio.output(D7, 1)
    gpio.output(EN, 1)
    time.sleep(0.005)
    gpio.output(EN, 0)
    # Low bits
    gpio.output(D4, 0)
    gpio.output(D5, 0)
    gpio.output(D6, 0)
    gpio.output(D7, 0)
    if ch&0x01==0x01:
        gpio.output(D4, 1)
    if ch&0x02==0x02:
        gpio.output(D5, 1)
    if ch&0x04==0x04:
        gpio.output(D6, 1)
    if ch&0x08==0x08:
        gpio.output(D7, 1)
    gpio.output(EN, 1)
    time.sleep(0.005)
    gpio.output(EN, 0)
def lcdclear():
    lcdcmd(0x01)
def lcdprint(Str):
    l = len(Str)
    for i in range(l):
        lcdwrite(ord(Str[i]))
def setCursor(x, y):
    if y == 0:
        n = 128 + x
    elif y == 1:
        n = 192 + x
    lcdcmd(n)
def keypad():
    for j in range(4):
        gpio.setup(COL[j], gpio.OUT)
        gpio.output(COL[j], 0)
        ch = 0
        for i in range(4):
            if gpio.input(ROW[i]) == 0:
                ch = MATRIX[i][j]
                #lcdwrite(ord(ch))
                #print("Key Pressed:", ch)
                #time.sleep(2)
                while (gpio.input(ROW[i]) == 0):
                    pass  # wait until the key is released
                gpio.output(COL[j], 1)
                return ch
        gpio.output(COL[j], 1)
        # callNum[n]=ch
def serialEvent():
    data = Serial.read(20)
    #if data != '\0':
    print(data)
    data = ""
def gsmInit():
    lcdclear()
    lcdprint("Finding Module")
    time.sleep(1)
    while 1:
        data = ""
        Serial.write("AT\r")
        data = Serial.read(10)
        print(data)
        r = data.find("OK")
        if r >= 0:
            break
        time.sleep(0.5)
    while 1:
        data = ""
        Serial.write("AT+CLIP=1\r")
        data = Serial.read(10)
        print(data)
        r = data.find("OK")
        if r >= 0:
            break
        time.sleep(0.5)
    lcdclear()
    lcdprint("Finding Network")
    time.sleep(1)
    while 1:
        data = ""
        Serial.flush()
        Serial.write("AT+CPIN?\r")
        data = Serial.read(30)
        print(data)
        r = data.find("READY")
        if r >= 0:
            break
        time.sleep(0.5)
    lcdclear()
    lcdprint("Finding Operator")
    time.sleep(1)
    while 1:
        data = ""
        Serial.flush()
        Serial.read(20)
        Serial.write("AT+COPS?\r")
        data = Serial.read(40)
        #print(data)
        r = data.find("+COPS:")
        if r >= 0:
            l1 = data.find(",\"") + 2
            l2 = data.find("\"\r")
            operator = data[l1:l2]
            lcdclear()
            lcdprint(operator)
            time.sleep(3)
            print(operator)
            break
        time.sleep(0.5)
    Serial.write("AT+CMGF=1\r")
    time.sleep(0.5)
    # Serial.write("AT+CNMI=2,2,0,0,0\r")
    # time.sleep(0.5)
    Serial.write("AT+CSMP=17,167,0,0\r")
    time.sleep(0.5)
def receiveCall(data):
    inNumber = ""
    r = data.find("+CLIP:")
    if r > 0:
        inNumber = ""
        inNumber = data[r+8:r+21]
        lcdclear()
        lcdprint("incoming")
        setCursor(0, 1)
        lcdprint(inNumber)
        time.sleep(1)
    return 1
def receiveSMS(data):
    print(data)
    r = data.find("\",")
    print(r)
    if r > 0:
        if data[r+4] == "\r":
            smsNum = data[r+2:r+4]
        elif data[r+3] == "\r":
            smsNum = data[r+2]
        elif data[r+5] == "\r":
            smsNum = data[r+2:r+5]
        else:
            print("else")
        print(smsNum)
    if r > 0:
        lcdclear()
        lcdprint("SMS Received")
        setCursor(0, 1)
        lcdprint("Press Button B")
        print("AT+CMGR=" + smsNum + "\r")
        time.sleep(2)
        return str(smsNum)
    else:
        return 0
def attendCall():
    print("Attend call")
    Serial.write("ATA\r")
    data = ""
    data = Serial.read(10)
    l = data.find("OK")
    if l >= 0:
        lcdclear()
        lcdprint("Call attended")
        time.sleep(2)
    flag = -1
    while flag < 0:
        data = Serial.read(12)
        print(data)
        flag = data.find("NO CARRIER")
        #flag = data.find("BUSY")
        print(flag)
    lcdclear()
    lcdprint("Call Ended")
    time.sleep(1)
    lcdclear()
def readSMS(index):
    print(index)
    Serial.write("AT+CMGR=" + index + "\r")
    data = ""
    data = Serial.read(200)
    print(data)
    r = data.find("OK")
    if r >= 0:
        r1 = data.find("\"\r\n")
        msg = ""
        msg = data[r1+3:r-4]
        lcdclear()
        lcdprint(msg)
        print(msg)
        time.sleep(5)
        lcdclear()
smsFlag = 0
print("Receive SMS")
def getChar(Key, ind, maxInd):
    ch = 0
    ch = ind
    lcdcmd(0x0e)
    Char = ''
    count = 0
    global msg
    global x
    global y
    while count < 20:
        key = keypad()
        print(key)
        if key == Key:
            setCursor(x, y)
            Char = alpha[ch]
            lcdwrite(ord(Char))
            ch = ch + 1
            if ch > maxInd:
                ch = ind
            count = 0
        count = count + 1
        time.sleep(0.1)
    msg += Char
    x = x + 1
    if x > 15:
        x = 0
        y = 1
    lcdcmd(0x0f)
def alphaKeypad():
    lcdclear()
    setCursor(x, y)
    lcdcmd(0x0f)
    msg = ""
    while 1:
        key = 0
        count = 0
        key = keypad()
        if key == '1':
            ind = 0
            maxInd = 6
            Key = '1'
            getChar(Key, ind, maxInd)
        elif key == '2':
            ind = 7
            maxInd = 10
            Key = '2'
            getChar(Key, ind, maxInd)
        elif key == '3':
            ind = 11
            maxInd = 14
            Key = '3'
            getChar(Key, ind, maxInd)
        elif key == '4':
            ind = 15
            maxInd = 18
            Key = '4'
            getChar(Key, ind, maxInd)
        elif key == '5':
            ind = 19
            maxInd = 22
            Key = '5'
            getChar(Key, ind, maxInd)
        elif key == '6':
            ind = 23
            maxInd = 26
            Key = '6'
            getChar(Key, ind, maxInd)
        elif key == '7':
            ind = 27
            maxInd = 31
            Key = '7'
            getChar(Key, ind, maxInd)
        elif key == '8':
            ind = 32
            maxInd = 35
            Key = '8'
            getChar(Key, ind, maxInd)
        elif key == '9':
            ind = 36
            maxInd = 40
            Key = '9'
            getChar(Key, ind, maxInd)
        elif key == '0':
            ind = 41
            maxInd = 42
            Key = '0'
            getChar(Key, ind, maxInd)
        elif key == '*':
            ind = 43
            maxInd = 43
            Key = '*'
            getChar(Key, ind, maxInd)
        elif key == '#':
            ind = 44
            maxInd = 44
            Key = '#'
            getChar(Key, ind, maxInd)
        elif key == 'D':
            return
def sendSMS():
    print("Sending sms")
    lcdclear()
    lcdprint("Enter Number:")
    setCursor(0, 1)
    time.sleep(2)
    moNum = ""
    while 1:
        key = 0
        key = keypad()
        #print(key)
        if key != 0:
            if key == 'A' or key == 'B' or key == 'C':
                print(key)
                return
            elif key == 'D':
                print(key)
                print(moNum)
                Serial.write("AT+CMGF=1\r")
                time.sleep(1)
                Serial.write("AT+CMGS=\"+91" + moNum + "\"\r")
                time.sleep(2)
                data = ""
                data = Serial.read(60)
                print(data)
                alphaKeypad()
                print(msg)
                lcdclear()
                lcdprint("Sending.....")
                Serial.write(msg)
                time.sleep(1)
                Serial.write("\x1A")
                while 1:
                    data = ""
                    data = Serial.read(40)
                    print(data)
                    l = data.find("+CMGS:")
                    if l >= 0:
                        lcdclear()
                        lcdprint("SMS Sent.")
                        time.sleep(2)
                        return
                    l = data.find("Error")
                    if l >= 0:
                        lcdclear()
                        lcdprint("Error")
                        time.sleep(1)
                        return
            else:
                print(key)
                moNum += key
                lcdwrite(ord(key))
        time.sleep(0.5)
def call():
    print("Call")
    n = 0
    moNum = ""
    lcdclear()
    lcdprint("Enter Number:")
    setCursor(0, 1)
    time.sleep(2)
    while 1:
        key = 0
        key = keypad()
        #print(key)
        if key != 0:
            if key == 'A' or key == 'B' or key == 'D':
                print(key)
                return
            elif key == 'C':
                print(key)
                print(moNum)
                Serial.write("ATD+91" + moNum + ";\r")
                data = ""
                time.sleep(2)
                data = Serial.read(30)
                l = data.find("OK")
                if l >= 0:
                    lcdclear()
                    lcdprint("Calling.....")
                    setCursor(0, 1)
                    lcdprint("+91" + moNum)
                    time.sleep(30)
                    lcdclear()
                    return
                #l = data.find("Error")
                #if l >= 0:
                else:
                    lcdclear()
                    lcdprint("Error")
                    time.sleep(1)
                    return
            else:
                print(key)
                moNum += key
                lcdwrite(ord(key))
                n = n + 1
        time.sleep(0.5)
begin()
lcdcmd(0x01)
lcdprint(" Mobile Phone ")
lcdcmd(0xc0)
lcdprint(" Using RPI ")
time.sleep(3)
lcdcmd(0x01)
lcdprint("Circuit Digest")
lcdcmd(0xc0)
lcdprint("Welcomes you")
time.sleep(3)
gsmInit()
smsFlag=0
index=""
while 1:
    key = 0
    key = keypad()
    print(key)
    if key == 'A':
        attendCall()
    elif key == 'B':
        readSMS(index)
        smsFlag = 0
    elif key == 'C':
        call()
    elif key == 'D':
        sendSMS()
    data = ""
    Serial.flush()
    data = Serial.read(150)
    print(data)
    l = data.find("RING")
    if l >= 0:
        callstr = data
        receiveCall(data)
    l = data.find("\"SM\"")
    if l >= 0:
        smsstr = data
        smsIndex = ""
        smsIndex = receiveSMS(smsstr)
        print(smsIndex)
        if smsIndex != 0:
            smsFlag = 1
            index = smsIndex
    if smsFlag == 1:
        lcdclear()
        lcdprint("New Message")
        time.sleep(1)
    setCursor(0, 0)
    lcdprint("C--> Call <--A")
    setCursor(0, 1)
    lcdprint("D--> SMS <--B")
Here are some examples of how GSM technology can be put to use.
Automation and Safety via Smart GSM Technology
Nowadays, we can't live without our GSM mobile terminal. The Mobile phone terminal is essentially an extension of ourselves, allowing us to connect with the world in the same way our wallet/purse, keys, or watch does. Many people like not having to worry about being unavailable or who they can call at any given moment.
It's clear from the name that this Project relies on the SMS transmission capabilities of GSM networks. The ability to send and receive text messages is widely utilized to provide access to equipment and facilitate home security breach management. There are two proposed subsystems in the system. Controlling appliances in one's house from afar is made possible by the appliance control subsystem, while the security alert subsystem provides automatic security monitoring.
The system can receive instructions via SMS from a designated phone number and adjust a home appliance's state as needed. Upon detecting an intrusion, the system can generate an automatic SMS warning the user of the potential security breach.
The advent of GSM technology will make global, instantaneous, and universal communication possible. GSM's functional architecture employs intelligent networking principles as the first step toward a genuinely personal communication system with sufficient standards to ensure interoperability.
Medical Uses for GSM-Based Systems
Here are two examples of similar situations to think about.
The patient has sustained a life-threatening injury or illness and requires emergency medical attention. A mobile phone is the only thing he (or his companion) has.
After being released from the hospital, the patient plans to rest at home but is reminded that he must return for routine exams. A mobile phone and perhaps some health monitoring or other medical sensor gadgets may be in his possession.
The only way to solve either problem is via a mobile communication system. In other words, the above scenarios are easily manageable with today's communication technology because all that needs to be done is send the patient's information across a network and have it processed at the receiving end, which may be a hospital or the doctor's office.
In the first scenario, the doctor keeps tabs on the patient's information and returns the instructions to him so he can take whatever precautions before getting to the hospital. In the second scenario, the doctor keeps tabs on the patient's test results and, if necessary, proceeds with treatment.
Telemedicine services are the driving force behind this entire operation. The telemedicine system has three different applications.
Video conferencing lets patients in one location have face-to-face contact with their doctors and nurses, speeding up the healing process.
Sensors that constantly report on a patient's condition can direct medical staff on how to proceed with treatment.
The gathered health information can be forwarded for further review and analysis.
A wireless method of communication is used for the three options mentioned above. When providing healthcare, it is necessary to have many data retrieval mechanisms in place. These can be online medical databases or hosts with equipment that aid recovery and health monitoring. Broadband networks, medium-throughput media, and narrowband GSM access are all viable possibilities.
There are several benefits to using GSM technology in a telemedicine setup.
Cost savings and widespread availability of GSM receivers (including cell phones and modems)
It can transfer data quickly.
Typical Telemedical Infrastructure
The four components that make up a standard telemedicine system are as follows:
The Patient Unit: It takes data from the patient, either in its original analog form or after being converted to digital format, and then manages the data stream before sending it. It is made up of several different types of medical sensors, such as those used to track heart rate, blood pressure, body temperature, spirometry, etc., each of which generates an electrical signal that is sent to a processor or controller for analysis before being transmitted over a wireless network.
Communication Network: This carries the data and secures it in transit. Networks, mobile stations, and base stations are all components of the Global System for Mobile Communication (GSM) system. The mobile station, such as a mobile phone or GSM modem, is the device responsible for connecting to the GSM network.
Receiving/Server Side: This is a healthcare system with a GSM modem installed to receive, decode, and forward signals to the presenting device.
Presentation Unit: This is the brains of the operation. This processor saves the data in a standard format for later retrieval and analysis by doctors and from which they can send text messages to the client side if necessary.
To demonstrate the fundamentals of telemedicine, a rudimentary model will suffice. It has a sender and a receiver, both of which are separate components. The sensor input is transmitted by the transmitter and received by the receiver unit for processing.
See below for a simplified telemedicine system to track a patient's heart rate and apply the results as needed.
The data collected by the heartbeat detector (a light-emitting device whose light is modified as it flows through human blood) is transformed into electrical pulses at the transmitter unit. When the Microcontroller picks up on these pulses, it calculates the heart rate and communicates that information and other data collected to the medical team via a Gsm network. An IC called a Max 232 connects the Microcontroller to the GSM modem.
The GSM modem at the receiving end grabs the information and passes it to the Microcontroller. The Microcontroller then performs an analysis using the input from the Personal computer and displays the outcome on the LCD. Medical professionals can keep tabs on the patient and begin the necessary treatment after reviewing the results on the screen.
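The arithmetic behind the heart-rate figure described here is simple: beats per minute is 60 divided by the measured interval between consecutive pulses. A tiny illustrative sketch (the interval value is hypothetical):
# Derive beats per minute from the time between two detected pulses.
def bpm_from_interval(seconds_between_beats):
    return 60.0 / seconds_between_beats

print(bpm_from_interval(0.8))  # an 800 ms inter-beat interval -> 75.0 BPM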
The following are some real-world applications for GSM technology.
AT&T Health GlowCaps
These plain pill bottles serve as a gentle prompt to the patient to take their prescribed medications. It uses GSM technology to contact the patient on their mobile phone at the specified pill-taking time, at which point the cap will light up, the buzzer will sound, and the patient will be reminded to take their medication. Each time a bottle is uncorked, it is documented.
Ultrasound technology
With the help of a portable ultrasound transducer that connects to a smartphone, it is possible to send ultrasound images captured with a handheld device to a distant location using a global system for mobile communications (GSM).
A Continuous Glucose Monitor (CGM)
The patient's blood sugar levels can be tracked and reported to the doctor. A sensor is implanted under the skin and monitors blood glucose levels, sending the data to a receiver (a mobile phone) at regular intervals.
As part of this guide, we analyzed GSM's architecture and learned how it operates in practice. We wrote a Python program to turn our Raspberry Pi 4 into a fully functional mobile phone. No technical difficulties were encountered as we watched text and phone calls travel between the raspberry pi and our mobile phone. You should feel confident in your ability to apply the ideas and understand the circuits of GSM now. One way to up the difficulty level of this Project is to try to make a live video call using the raspberry pi 4 mobile. Next, we'll look at connecting the pcf8591 ADC/DAC analog-digital converter module to a Raspberry Pi.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
| --- | --- | --- | --- |
| 1 | Breadboard | Amazon | Buy Now |
| 2 | Jumper Wires | Amazon | Buy Now |
| 3 | LCD 16x2 | Amazon | Buy Now |
| 4 | nRF24L01 | Amazon | Buy Now |
| 5 | Arduino Uno | Amazon | Buy Now |
| 6 | Raspberry Pi 4 | Amazon | Buy Now |
We're glad you could join us for another lesson in our series on programming for the Raspberry Pi 4. The previous chapter covered how to interface the USB barcode scanner with raspberry pi 4. We looked at different types of barcodes and what each stripe represents as well as the different types of barcode scanners available today. We also built a python program for the intelligent shopping cart and now our familiarity with barcodes and scanners and how they function has significantly increased. The benefits and drawbacks of its use were also discussed, but what we're interested in for this article is the transmission of radio frequency signals using the nrf24l01 Module in a raspberry pi 4.
nRF24L01 RF module
Raspberry pi 4
Arduino Uno
Jumper wires
Power supply
16x2 liquid crystal display
Wireless links such as the ESP8266 WiFi module are widely used in design work, and the medium chosen depends on the function it will serve. The nRF24L01 is a widely used wireless channel for local-area communication. These modules offer data rates of 250 kbps to 2 Mbps and transmit on the 2.4 GHz ISM band, which is licence-free in most countries and suitable for industrial and healthcare settings. With the correct antennas, these modules are claimed to communicate over distances of up to 100 meters.
This tutorial demonstrates how to set up wireless communication between an Arduino UNO and a Raspberry Pi using the nRF24L01 2.4 GHz RF transceiver module. The Raspberry Pi will broadcast data via the nRF24L01, and the Arduino board will receive it and display it on a 16x2 LCD. The nRF24L01 also has built-in BLE capability, so it can communicate over Bluetooth Low Energy as well.
Both parts of the tutorial are equally important. In the first, we'll connect the nRF24L01 to an Arduino so that it functions as a receiver; in the second, we'll set up the Raspberry Pi as the transmitter.
There are many different types of electromagnetic waves, but the ones utilized for radar signals and communications fall into the range of roughly 3 kHz to 300 GHz, known as "radio frequencies."
The term "radio frequency" more commonly refers to electrical rather than mechanical oscillations, although mechanical RF systems do exist. Strictly, RF denotes an oscillation rate, but the term is often used interchangeably with "radio" to describe communication without wires.
Numerous wireless technologies rely on RF fields, including cordless and cell phones, radio and television broadcasting stations, satellite telecommunication networks, Bluetooth communication modules and WiFi, and two-way radios.
External communications include various products like garage doors and microwave ovens, which use radio frequencies. The infrared frequencies of various wireless devices, like TV remote controllers, computer mice, and some wireless computer keyboards, have shorter electromagnetic wavelengths.
The frequency of a radio transmission is expressed in hertz (Hz), the count of cycles per second. Radio frequencies range from kilohertz (kHz) to several gigahertz (GHz). Microwaves, a form of radio wave, occupy the higher end of this range. Radio frequencies are invisible to the human eye.
The wavelength of a radio wave is inversely proportional to its frequency 'f.' With frequency in megahertz and wavelength in meters, the relationship is:
wavelength = 300/f
For example, a 2400 MHz (2.4 GHz) signal, as used by the nRF24L01, has a wavelength of 300/2400 = 0.125 m.
At higher frequencies, electromagnetic radiation manifests as infrared (IR), visible light, ultraviolet (UV), X-rays, and gamma rays.
The following are some of the defining features of RF:
Low energy consumption
It has an excellent operational range (three to thirty meters), a data rate of up to two megabits per second, the ability to pass through walls, and can transmit in any direction.
Due to their half-duplex design, the nRF24L01 modules can either send or receive data, but not both at once. A generic Nordic Semiconductor nRF24L01 IC handles the module's data transmission and reception. The IC communicates over the simple serial peripheral interface (SPI) protocol, making it compatible with virtually all microcontrollers. Arduino makes things even simpler because numerous libraries are available. The pin functions of a typical nRF24L01 module are described below.
The module is battery-friendly: it operates from 1.9 V to 3.6 V and draws only about 12 mA during regular operation. Although its logic level is 3.3 V, most pins tolerate direct connection to 5 V chipsets like the Arduino. Each module also offers six data pipes, so one module can exchange information with up to six others, which makes it suitable for IoT applications requiring star or mesh networks. With 125 selectable channels, many such modules can operate in one contained space without interfering with one another.
Given that the module supports 125 separate channels, a network of 125 independently addressable modems in a single location is theoretically possible, and each device can simultaneously talk to up to six others on the same channel, as the sketch below illustrates.
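A minimal sketch of this multi-pipe idea, written against the same lib_nrf24 Python library used later in this article; the pipe addresses and CE/CSN pins here are illustrative, not prescriptive:
import RPi.GPIO as GPIO
import spidev
from lib_nrf24 import NRF24

GPIO.setmode(GPIO.BCM)
radio = NRF24(GPIO, spidev.SpiDev())
radio.begin(0, 25)                  # CE = GPIO08, CSN = GPIO25, as in the sender below
radio.setChannel(0x76)              # all peers share one channel

# Up to six reading pipes can be open at once, one address per peer.
addresses = [[0xE0, 0xE0, 0xF1, 0xF1, 0xE0],
             [0xF1, 0xF1, 0xF0, 0xF0, 0xE0]]
radio.openReadingPipe(1, addresses[0])
radio.openReadingPipe(2, addresses[1])
radio.startListening()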
Transmission with this Module only uses about 12mA of power, less than a single display LED screen. The Module requires a voltage of 1.9V to 3.6V to function. Still, the other pins are 5V logic compatible, allowing us to connect it directly to an Arduino without needing logic-level converters.
Three of these pins are used for SPI communication and must be hooked up to the Arduino's SPI pins; note that the SPI pins are labelled differently on different Arduino boards. The CSN and CE pins can be connected to any digital pin of the Arduino board; they toggle the module between standby and active modes and between transmit and command modes. The last pin is an optional interrupt pin.
The NRF24L01 modules can be found in a wide range of versions. The model with a built-in antenna is the clear frontrunner. This reduces the transmission range of the Module to around 100 meters but allows for a smaller module size.
In the second variant, an SMA connector replaces the onboard antenna, allowing us to attach a duck antenna for enhanced signal strength.
The third variant shown here pairs the duck antenna with an RFX2401C chip that integrates a power amplifier (PA) and a low-noise amplifier (LNA). This can extend the nRF24L01's transmission range in open areas to around 1000 meters.
The components in the circuit design for linking nRF24L01 to Arduino are few, and the process is straightforward. SPI will be used to link the nRF24l01, and I2C will connect the 16x2 LCD.
Because only the SPI adapter is required to link the Raspberry Pi and the nRF24L01, the corresponding circuit schematic is pretty straightforward.
Python 3 will be used to program the Raspberry Pi, while the Arduino platform is programmed in C/C++. For Python, a ready-made nRF24L01 library is available. Keep in mind that the library and the Python program must be in the same folder for the import to work, so create a folder to house your program and library files after downloading and extracting the library. Once the libraries are in place, you can begin coding. The first step in any program is importing the required libraries: the GPIO library for the Raspberry Pi's GPIO pins, the time library for delays, spidev for SPI access, and the NRF24 class itself.
import RPi.GPIO as GPIO
import time
import spidev
from lib_nrf24 import NRF24
Set the GPIO numbering to "Broadcom SOC channel" mode. In this mode, pins are referred to by their GPIO numbers ("GPIO" followed by a number, e.g. GPIO01, GPIO02), not by their board numbers.
GPIO.setmode(GPIO.BCM)
After that, we'll assign a permanent address for the pipe. This address is needed to send data to the Arduino, and it is expressed in hexadecimal.
pipes = [[0xE0, 0xE0, 0xF1, 0xF1, 0xE0], [0xF1, 0xF1, 0xF0, 0xF0, 0xE0]]
Start the radio with the CE pin (GPIO08) and the CSN pin (GPIO25).
radio.begin(0, 25)
Set the power level to minimum, the channel address to 0x76, the data rate to 1 Mbps, and the payload size to 32 bytes.
radio.setPayloadSize(32)
radio.setChannel(0x76)
radio.setDataRate(NRF24.BR_1MBPS)
radio.setPALevel(NRF24.PA_MIN)
Start the data writing process by opening the pipes and displaying some nRF24l01 basics.
radio.openWritingPipe(pipes[0])
radio.printDetails()
Get your message ready to send as a string. Arduino UNO will receive this message.
sendMessage = list("Hi..Arduino UNO")
while len(sendMessage) < 32:
    sendMessage.append(0)
Write the message to the radio, print a debug statement reporting what was sent, and then switch the radio to listening mode to wait for a reply.
while True:
    start = time.time()
    radio.write(sendMessage)
    print("Sent the message: {}".format(sendMessage))
    radio.startListening()
If no reply arrives within two seconds, print a timed-out error message and break out of the loop:
    while not radio.available(0):
        time.sleep(1/100)
        if time.time() - start > 2:
            print("Timed out.")  # print error message if radio disconnected or not functioning anymore
            break
Before sending another message, stop listening and wait three seconds:
    radio.stopListening()  # close radio
    time.sleep(3)  # give a delay of 3 seconds
If you know the fundamentals of Python, you can easily comprehend the Raspberry program. You will find a fully functional Python program at the end of this tutorial.
If you follow the steps below, running the software will be a breeze.
You should keep the Python source code and library files together.
My Sender program file is nrfsend.py, and all the related files are in the same directory.
Access Raspberry Pi's command prompt. Use the cd command to get to the directory containing the python script.
Navigate to the directory, type "sudo python3 your_program.py," and hit enter to run the program. Within a few seconds you'll see the nRF24's basic details printed, and the sender will begin transmitting its message at three-second intervals. Once a send completes, the debug message will appear.
The receiver code for the Arduino UNO comes next.
The Arduino UNO can be programmed in a manner not dissimilar to that of the Raspberry Pi; the procedure is very similar, just in a different programming language. We will use the nRF24L01 Arduino library, which you can download from GitHub. To get started, make sure all required libraries are installed. We're using a 16x2 I2C LCD, so we need the Wire.h library, and since the nRF24L01 communicates via SPI, we also need the SPI library.
#include<SPI.h>
#include <Wire.h>
Don't forget to add the RF24 and LCD libraries so you may use them.
#include<RF24.h>
#include <LiquidCrystal_I2C.h>
Pass the LCD's I2C address (0x27 in this case) and its 16x2 size to the constructor.
LiquidCrystal_I2C lcd(0x27, 16, 2);
Pin 9 serves as the RF24's CE (chip enable) pin, and pin 10 serves as its CSN (chip select not) pin.
RF24 radio(9, 10) ;
Turn the radio on and tune in to channel 76. In addition, open the pipe for reading by setting the address to that of the Raspberry Pi.
radio.begin();
radio.setPALevel(RF24_PA_MAX) ;
radio.setChannel(0x76) ;
const uint64_t pipe = 0xE0E0F1F1E0LL ;
radio.openReadingPipe(1, pipe) ;
Start the I2C data transfer and initialize the LCD screen.
Wire.begin();
lcd.begin();
lcd.home();
lcd.print("Ready to Receive");
Turn on the radio's receiver and allocate a 32-byte buffer for the incoming message.
radio.startListening() ;
char receivedMessage[32] = {0} ;
If a message is available, it is read and saved immediately, printed to the serial monitor, and shown on the display until the next message arrives. The radio is stopped while the message is processed, and the loop checks again after a 10-millisecond delay.
if (radio.available()) {
    radio.read(receivedMessage, sizeof(receivedMessage));
    Serial.println(receivedMessage) ;
    Serial.println("Turning off the radio.") ;
    radio.stopListening() ;
    String stringMessage(receivedMessage) ;
    lcd.clear();
    delay(1000);
    lcd.print(stringMessage);
}
The complete sender code for the Raspberry Pi follows; copy it over and allow time for the messages to go out.
import RPi.GPIO as GPIO # import gpio
import time #import time library
import spidev
from lib_nrf24 import NRF24 #import NRF24 library
GPIO.setmode(GPIO.BCM) # set the gpio mode
# set the pipe address; this address must match on the receiver side
pipes = [[0xE0, 0xE0, 0xF1, 0xF1, 0xE0], [0xF1, 0xF1, 0xF0, 0xF0, 0xE0]]
radio = NRF24(GPIO, spidev.SpiDev()) # use the gpio pins
radio.begin(0, 25) # start the radio and set the ce,csn pin ce= GPIO08, csn= GPIO25
radio.setPayloadSize(32) #set the payload size as 32 bytes
radio.setChannel(0x76) # set the channel as 76 hex
radio.setDataRate(NRF24.BR_1MBPS) # set radio data rate
radio.setPALevel(NRF24.PA_MIN) # set PA level
radio.setAutoAck(True) # set acknowledgement as true
radio.enableDynamicPayloads()
radio.enableAckPayload()
radio.openWritingPipe(pipes[0]) # open the defined pipe for writing
radio.printDetails() # print basic details of the radio
sendMessage = list("Hi..Arduino UNO") #the message to be sent
while len(sendMessage) < 32:
    sendMessage.append(0)
while True:
    start = time.time()  # start the timer for checking delivery time
    radio.write(sendMessage)  # write the message to the radio
    print("Sent the message: {}".format(sendMessage))  # print a message after a successful send
    radio.startListening()  # start listening on the radio
    while not radio.available(0):
        time.sleep(1/100)
        if time.time() - start > 2:
            print("Timed out.")  # print error message if the radio disconnected or is not functioning anymore
            break
    radio.stopListening()  # close radio
    time.sleep(3)  # give a delay of 3 seconds
#include<SPI.h> // spi library for connecting nrf
#include <Wire.h> // i2c libary fro 16x2 lcd display
#include<RF24.h> // nrf library
#include <LiquidCrystal_I2C.h> // 16x2 lcd display library
LiquidCrystal_I2C lcd(0x27, 16, 2); // i2c address is 0x27
RF24 radio(9, 10) ; // ce, csn pins
void setup(void) {
  while (!Serial) ;
  Serial.begin(9600) ;  // start serial monitor baud rate
  Serial.println("Starting.. Setting Up.. Radio on..") ;  // debug message
  radio.begin();  // start radio with ce, csn on pins 9 and 10
  radio.setPALevel(RF24_PA_MAX) ;  // set power level
  radio.setChannel(0x76) ;  // set channel to 76
  const uint64_t pipe = 0xE0E0F1F1E0LL ;  // pipe address same as sender, i.e. Raspberry Pi
  radio.openReadingPipe(1, pipe) ;  // start reading pipe
  radio.enableDynamicPayloads() ;
  radio.powerUp() ;
  Wire.begin();  // start i2c
  lcd.begin();  // start lcd
  lcd.home();
  lcd.print("Ready to Receive");  // print starting message on lcd
  delay(2000);
  lcd.clear();
}
void loop(void) {
  radio.startListening() ;  // listen continuously
  char receivedMessage[32] = {0} ;  // buffer for a 32-byte incoming message
  if (radio.available()) {  // check if a message is coming
    radio.read(receivedMessage, sizeof(receivedMessage));  // read the message and save it
    Serial.println(receivedMessage) ;  // print message on serial monitor
    Serial.println("Turning off the radio.") ;  // print message on serial monitor
    radio.stopListening() ;  // stop listening on the radio
    String stringMessage(receivedMessage) ;  // convert char array to String
    lcd.clear();  // clear screen for new message
    delay(1000);  // delay of 1 second
    lcd.print(stringMessage);  // print received message
  }
  delay(10);
}
The RF module's performance depends on the same factors as any other RF component. For instance, a transmitter's output power can be increased to extend the transmission range, but this increases the transmitter's (TX) power consumption, reducing the useful life of battery-operated gadgets. Raising transmit power also makes the system more likely to interfere with a second RF system nearby.
Similarly, boosting the receiver's sensitivity increases the usable communication range but increases the risk of an error brought on by interference from other RF equipment. Matching antennas on both ends of a communication link can potentially boost the overall system's performance.
Finally, the quoted range of any given system is typically measured in open-air, line-of-sight conditions free of interference. In practice, floors, walls, and dense structures absorb radio signals, so the actual operating distance will typically be less than specified.
The most common uses of radio frequency communication are in the areas of wireless data and voice transfer, home automation, and remote control, as well as in the industrial and commercial sectors.
RF-controlled switches can be used in home automation applications as an alternative to traditional switches. An RF remote allows one to operate lights and other electronics without leaving their current location. Those with mobility issues will benefit the most from this app. RF communication is helpful in industrial settings for directing autonomous robots and motorized vehicles. These robot vehicles are often employed in hazardous tasks humans cannot undertake. A data transmission unit is required to direct the motion of the robotic vehicles.
Multiple factors make radio frequency (RF) transmission preferable to infrared (IR). The longer range of RF signals makes them ideal for long-distance communication, and unlike IR, which generally requires a clear path from transmitter to receiver, RF can pass through obstacles. The reliability of RF transmission is also far greater: IR links can be disrupted by any other IR-emitting device nearby, whereas RF communication is only affected by devices operating in the same precise frequency range.
These are some of RF's drawbacks.
Preschoolers, expectant mothers, the elderly, those with pacemakers, little birds, flora, wildlife, insects, etc., are all negatively impacted by unregulated RF radiation.
Cellular towers that use radio frequency have been observed to attract more lightning than surrounding areas.
Some fruit crops in the vicinity of RF towers are also negatively impacted.
Because RF waves are accessible in both line-of-sight (LOS) and non-LOS zones of the transmitter, hackers can easily break into the system and decode sensitive personal or government data.
This problem can be avoided by employing highly protected methods like AES, WEP, WPA, etc., while transmitting data over radio frequency waves. Spread spectrum and frequency hopping modulation methods can also be applied to RF signals to prevent such eavesdropping.
This concludes the comprehensive instruction on wireless communication between a Raspberry Pi and an Arduino UNO via nRf24l01 modules. The 16 * 2 liquid crystal display will show the message. Pipe addresses are crucial on the Arduino UNO and the Raspberry Pi 4. In the following tutorial, we will learn how to Call and Text using Raspberry Pi and GSM Module in pi 4.
A low-literate audience can nevertheless have their voices heard and their questions answered by using an IVR system, as has been proven time and time again. However, achieving such aims in a development setting calls for a cheap system that welcomes input from various parties. RASP-IVR is an inexpensive IVR system that runs on a Pi 4 and a local GSM modem; it was designed as an open-source, community-driven solution. It's unusual these days to find a customer-focused company that still uses human operators rather than an interactive voice response system. Credit card companies typically have IVR systems for making payments or filing fraud reports; airlines use elaborate IVR systems to schedule flights and check their status; pharmacies implement IVR systems to facilitate medication refills. IVRs are also widely used for forwarding calls to other extensions and providing directory assistance.
Enterprises of all sizes have adopted IVR technology due to its cost savings over using actual, flesh-and-blood staff. The number of callers who desire to speak with a human indicates an IVR system's success. As the percentage drops, the system becomes more efficient. Of course, some IVRs will never let you bypass the system and talk to a human being. However, that's frowned upon even by IVR advocates. As most people know, Raspberry Pi is a development board that packs a fair amount of processing power into a little package (about the size of a palm). When coupled with python's adaptability, this might lead to the creation of a wide variety of useful but diminutive devices. I recently had to construct an interactive voice response system using a Raspberry Pi, which essentially entails dialing phone numbers, sending messages, and receiving DTMF inputs from the recipient when they pick up the phone.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
| --- | --- | --- | --- |
| 1 | Jumper Wires | Amazon | Buy Now |
| 2 | Raspberry Pi 4 | Amazon | Buy Now |
Pi 4
SIM800L GSM Module
2G SIM Card (Airtel)
Aux Cable
LM2596 Buck Converter Module
USB to TTL Converter
12V 2A Adapter
Perf Board
Berg Sticks
Connecting wires
Soldering Kit
Computer and phone systems can be integrated in various ways, and IVR systems are one such computer-telephony integration (CTI). Key tones are the most common way a phone exchanges data with a computer; the scheme is called dual-tone multi-frequency (DTMF) signalling. A telephone's number pad emits one low-frequency and one high-frequency tone per key: pressing "1" generates a 697 Hz tone and a 1209 Hz tone together, and the PSTN recognizes this pair as the numeral 1. For a computer to interpret the DTMF tones produced by a phone, specialized hardware known as a telephony circuit or telephony board is required. Using a telephony board to connect a computer to a phone line and some low-cost IVR software, it is possible to create a rudimentary IVR system. The IVR program lets you record messages and menu items that callers navigate using their phone keypads.
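The full DTMF grid is small enough to show in a few lines. This illustrative Python sketch (not part of the project code) maps a low-group/high-group frequency pair back to the key it encodes:
# Standard DTMF frequency groups, in Hz.
LOW = [697, 770, 852, 941]
HIGH = [1209, 1336, 1477]
KEYS = [['1', '2', '3'],
        ['4', '5', '6'],
        ['7', '8', '9'],
        ['*', '0', '#']]

def dtmf_key(low_hz, high_hz):
    # Each key is uniquely identified by one low and one high frequency.
    return KEYS[LOW.index(low_hz)][HIGH.index(high_hz)]

print(dtmf_key(697, 1209))  # '1'
print(dtmf_key(941, 1336))  # '0'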
Speech-recognition software is a part of more sophisticated IVR systems, enabling callers to interact with computers via voice commands. The technology behind speech recognition is now advanced enough to comprehend things like full names and long series of digits. Text-to-speech software can be used on the receiving end to fully automate a firm's outgoing telephone calls. To better serve customers, computers may now generate personalized text to answer their inquiries, such as financial statements or flight times, and have an automatic voice read to them. Voice extendable mark-up language is a popular foundation for many cutting-edge interactive voice response systems today.
The "RASP-IVR" moniker comes from the fact that it was created on the inexpensive Linux-based single-board computer known as a raspberry pi. The operating system is built on RASPBX, with several additions and modifications to improve compatibility for interactive voice response (IVR) use. Model 4 Raspberry Pi and GSM mobile phone modem make up the system. An Asterisk server is connected to the GSM modem via an interface. FreePBX is a web-based interface for managing an Asterisk server, allowing users to set up and modify IVR menus and VoIP settings. Using open-source libraries, we've written scripts to bring RASPBX's functionality for Voice over IP (VoIP), short messaging service, and localized SIP transfers to a broader.
Depending on the situation, the Asterisk server can route the incoming GSM call over a local or worldwide VoIP network. The VoIP calls can be directed to different extensions based on the caller's dial input, and they can be answered from any desktop or mobile device with a local SIP account and a SIP client like Zoiper. Furthermore, the VoIP conversation can be routed to third-party platforms such as Twilio to use their cloud-based solutions like speech-to-text engines or cloud storage. Used on its own, RASP-IVR requires no external resources, making it ideal for collaborative innovation or limited rollouts.
The cost of using an API to make phone calls from a Raspberry Pi can add up quickly. Several service providers, including Twilio, let you make phone calls from Python on a Raspberry Pi. These solutions are fantastic; however, they have two significant drawbacks. First, almost everywhere, you'll be charged by the minute, for both outgoing and incoming calls; that may not sound significant, but make more than a hundred calls daily and you'll soon see how much it costs. The second issue is buying and managing a separate phone number from which to make these calls. You cannot use an existing number, and depending on your area, you may have to pay more to get a number that works in your country.
After researching the rates of several API-based telephone conversation services, it became evident that the cheapest option is to use a Gsm technology like SIM800L with Pi to place calls through a network service provider like Airtel. To that aim, we'll connect a SIM800L module to a Raspberry Pi so we can send text messages and make phone calls automatically.
As 5G and the IoT become increasingly popular, numerous new GSM modules have been released that are compatible with modern networks while using less energy. But the idea here was to employ a ubiquitous, inexpensive module that can be bought at any nearby hobby store. So we put the popular SIM800L and the Ai-Thinker A9 through their paces. Even when powered only by my laptop's USB port, the Ai-Thinker A9 GSM modem proved completely reliable and responsive right from the get-go. However, its in-line voice input interface did not operate very well, and the A9 does not support DTMF detection. So I switched to the SIM800L GSM modem, a power-hungry beast that, if fed enough current, proved ideal for this task. It offers both in-line audio for playing the automated voice and built-in DTMF detection, so no expensive external DTMF hardware is needed.
The SIM800L is only compatible with 2G networks, not 3G or 4G. Only Airtel and BSNL provide 2G services in this area; therefore, get an Airtel SIM card or use one you already have. After purchasing a new Airtel SIM card, I activated it on my cell phone and then used it with the SIM800L.
Complete schematics for a Raspberry Pi-based telephone and message system are provided below. It's clear how simple it is to draw connections between ideas.
The way the SIM800L module is powered is crucial here. The SIM800L chip operates within a voltage range of 3.7 V to 4.2 V, with an optimum working voltage close to 4 V. We therefore used an LM2596 buck converter to step the adapter's 12 V 2 A output down to 4 V for the SIM800L module. Thicker, shorter wires are preferable when linking the buck converter to the SIM800L so the module can draw the high peak currents it needs. If the module is underpowered, it will keep resetting and send invalid data through the serial port. Use a 12 V 2 A adapter and heavy wires so that large currents flow without excessive resistance. If power problems persist, connecting a fully charged lithium battery across the SIM800L module's Vcc and ground pins should resolve them.
For those unfamiliar with the jargon of electronics engineering, "AT commands" are what we use to talk to the SIM800L module. Calling, texting, and keypress detection are just a few of the many functions that can be performed with AT commands. This tutorial uses Python on the RPi to transmit these AT instructions to the SIM800L GSM module. To connect the SIM800L module's Rx and Tx pins to the Raspberry Pi, we used a USB-to-TTL converter plugged into the Pi's USB port.
Through its mic+ and mic- pins, the SIM800L supports microphone input. Once a call is in progress, any audio fed to these pins plays on the call recipient's phone. In our scenario we must play recorded audio from the RPi, although these pins are primarily designed for a microphone. The RPi's 3.5 mm output is line-level audio, so it must be attenuated to mic-level before the GSM module can use it. The professional way to do this is a line-to-mic converter circuit; instead, I connected the two devices directly and set the volume level on the Raspberry Pi to barely two decibels, and I had no trouble getting it to work this way.
I assembled the entire circuit on the perf board, ensuring the voltage connectors got enough lead to offer low-resistance contact. This is how my board appears after soldering. Since the LM2596 is an adjustable buck converter, ensure you use the onboard trim pot to set the voltage output to 4 Volts before using it. Anything above 4.2 volts can permanently damage the SIM800L component.
When you get here, simply insert the SIM card and power on the circuit. Confirm once more that the SIM card is inserted correctly and the antenna is securely fastened. If everything is right, the SIM800L board's built-in LED will blink once every three seconds, indicating that the module has registered on the network with our SIM card and that the powering circuit is working properly. With the board connected, we can move on to the Python code on the RPi.
You can find the entire Python script for the Raspberry Pi interactive voice response system at the end of this tutorial. Because of the size of the code, it is not practical to explain every line, but it is annotated with comments to make it easier to read and understand. We start the program by importing the necessary modules. The serial module enables serial communication between the Raspberry Pi and the SIM800L, the pygame module is used to play the recorded audio files, and the time module provides delays. You don't need to install anything new, because the Buster OS comes with all three packages pre-installed.
import serial #for serial communication with GSM SIM800L
import time
import pygame #to play music
This function sends an AT command from the Raspberry Pi to the SIM800L and returns the module's response. Every AT command transmitted to the SIM800L must be encoded in ASCII and terminated with "\r\n". The function therefore appends "\r\n" to each command and encodes it as ASCII before sending it. Once the response has been read, it decodes the ASCII bytes so we can use them in our code.
def SIM800(command):
    AT_command = command + "\r\n"
    ser.write(str(AT_command).encode('ascii'))
    time.sleep(1)
    if ser.inWaiting() > 0:
        echo = ser.readline() # waste the echo
        response_byte = ser.readline()
        response_str = response_byte.decode('ascii')
        return (response_str)
    else:
        return ("ERROR")
This function is quite similar to the previous one, except that instead of sending a command to the SIM800L, it simply waits for a response from the module and returns it once it arrives.
def wait_for_SIM800():
    echo = ser.readline() # waste the echo
    response_byte = ser.readline()
    response_str = response_byte.decode('ascii')
    return (response_str)
The following function initializes the GSM modem so that it is ready for interactive voice response operations. It first sends "AT" and waits for an "OK" response to confirm the module is present, then sends specific AT commands to enable DTMF detection and text (SMS) mode. It also turns off unsolicited notifications so that incoming-message alerts won't interrupt us during a call.
def Init_GSM():
    if "OK" in SIM800("AT"):
        if ("OK" in (SIM800("AT+CLCC=1"))) and ("OK" in (SIM800("AT+DDET=1"))) and ("OK" in (SIM800("AT+CNMI=0,0,0,0,0"))) and ("OK" in (SIM800("AT+CMGF=1"))) and ("OK" in (SIM800("AT+CSMP=17,167,0,0"))): # enable DTMF / disable notifications
            print("SIM800 Module -> Active and Ready")
    else:
        print("------->ERROR -> SIM800 Module not found")
The following function is used to play the audio files once the call has been answered. We have pre-recorded audio files such as "cancel.wav," "confirm.wav," and "intro.wav" stored locally, and depending on the caller's keypress we play the corresponding file. The play_wav function can play any audio file we choose; make sure these audio files are stored in the same folder as your Python script.
def play_wav(file_name):
    pygame.mixer.init(8000)
    pygame.mixer.music.load(file_name)
    pygame.mixer.music.play()
    #while pygame.mixer.music.get_busy() == True:
    #    continue
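The two commented-out lines hint at a blocking variant: if you want the script to wait until a clip finishes before listening for a keypress, you can poll pygame.mixer.music.get_busy(), as in the small sketch below. The name play_wav_blocking is ours, not from the original code.
def play_wav_blocking(file_name):
    # Same as play_wav(), but waits until playback finishes
    pygame.mixer.init(8000)  # 8 kHz suits the SIM800L's narrow-band voice path
    pygame.mixer.music.load(file_name)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():
        time.sleep(0.1)  # poll until the clip has finished playing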
This is the most important function in the program. It takes the mobile number to be called and returns the outcome of that call session, which can be NOT_REACHABLE, CALL_REJECTED, CONFIRMED, CANCELED, and so on. The function dials the given phone number using AT commands and then plays the recorded introduction. It then listens for the caller's DTMF reply, plays the corresponding recorded voice clip based on that keypress, and reports the caller's choice back to us. If the caller declines the call or cannot be reached, the function reports that as well.
def Call_response_for(phone_number):
    AT_call = "ATD" + phone_number + ";"
    response = "NONE"
    time.sleep(1)
    ser.flushInput() #clear serial data in buffer if any
    if ("OK" in (SIM800(AT_call))) and (",2," in (wait_for_SIM800())) and (",3," in (wait_for_SIM800())):
        print("RINGING...->", phone_number)
        call_status = wait_for_SIM800()
        if "1,0,0,0,0" in call_status:
            print("**ANSWERED**")
            ser.flushInput()
            play_wav("intro.wav")
            time.sleep(0.5)
            dtmf_response = "start_over"
            while dtmf_response == "start_over":
                play_wav("press_request.wav")
                time.sleep(1)
                dtmf_response = wait_for_SIM800()
                if "+DTMF: 1" in dtmf_response:
                    play_wav("confirmed.wav")
                    response = "CONFIRMED"
                    hang = SIM800("ATH")
                    break
                if "+DTMF: 2" in dtmf_response:
                    play_wav("canceled.wav")
                    response = "CANCELED"
                    hang = SIM800("ATH")
                    break
                if "+DTMF: 9" in dtmf_response:
                    play_wav("callback_response.wav")
                    response = "REQ_CALLBACK"
                    hang = SIM800("ATH")
                    break
                if "+DTMF: 0" in dtmf_response:
                    dtmf_response = "start_over"
                    continue
                if "+DTMF: " in dtmf_response:
                    play_wav("invalid_input.wav")
                    dtmf_response = "start_over"
                    continue
                else:
                    response = "REJECTED_AFTER_ANSWERING"
                    break
        else:
            #print("REJECTED")
            response = "CALL_REJECTED"
            hang = SIM800("ATH")
            time.sleep(1)
            #ser.flushInput()
    else:
        #print("NOT_REACHABLE")
        response = "NOT_REACHABLE"
        hang = SIM800("ATH")
        time.sleep(1)
        #ser.flushInput()
    ser.flushInput()
    return (response)
In addition to making calls and reading responses, the program's send_message function enables us to send text messages. It takes the message text and the recipient's mobile number and sends the SMS.
def send_message(message, recipient):
    ser.write(b'AT+CMGS="' + recipient.encode() + b'"\r')
    time.sleep(0.5)
    ser.write(message.encode() + b"\r")
    time.sleep(0.5)
    ser.write(bytes([26])) # Ctrl+Z (ASCII 26) terminates the SMS body
    time.sleep(0.5)
    print ("Message sent to customer")
    time.sleep(2)
    ser.flushInput() # clear serial data in buffer if any
The last function I wrote for this project lets you identify a caller by their phone number. Since this SIM card will be calling people who don't know the number, some of them might try to call back. If so, you can use this function to see which number is calling, send them a message, or schedule a follow-up call if necessary.
def incoming_call():
    while ser.in_waiting: # if there is something in the serial buffer
        print ("%%Wait got something in the buffer")
        ser.flushInput()
        response = SIM800("ATH") # cut the incoming call
        if "+CLCC" in response:
            cus_phone = response[21:31] # extract the caller's number from the +CLCC response
            print("%%Incoming Phone call detect from ->", cus_phone)
            return (cus_phone)
        else:
            print("Nope its something else")
            return "0"
    return "0"
Now that all the functions have been defined, it's time to write the main program that puts them to work. For demonstration purposes, the customer name and phone number are hard-coded, but you can obtain them through a Shopify API request or retrieve them from a spreadsheet as needed. For testing, we use the customer name "AISHA" and phone number "9877XXXXXX."
cus_name = "Aisha"
cus_phone = "968837XXXX"
Inside the main endless while loop, we begin serial communication at 9600 baud with a 15-second timeout. Since different SIM800L modules may operate at different baud rates, make sure you enter the correct COM port and baud rate here.
ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=15) # timeout affects call duration and waiting for a response, currently 15 sec
print("Established communication with", ser.name)
After placing the call and getting the customer's reply, we send the recipient a text based on their response. For instance, if the booking is confirmed, we send a confirmation message; likewise, we change the message if the customer gives a different reply.
response = Call_response_for(cus_phone) # place a call and get response from customer
print ("Response from customer => ", response)
if response == "CONFIRMED":
    text_message = "Hi " + cus_name + ". Your booking has been confirmed. Thank you. -Circuitdigest"
    send_message(text_message, cus_phone)
if response == "CANCELED": # if the response was to cancel
    text_message = "Hi " + cus_name + ". Sorry that you have decided to cancel your booking. If you canceled by mistake, kindly contact us by phone. -Circuitdigest"
    send_message(text_message, cus_phone)
if ((response == "CALL_REJECTED") or (response == "REJECTED_AFTER_ANSWERING")): # if the call was rejected
    text_message = "Hi " + cus_name + ". We from circuitdigest.com have been trying to reach you to confirm your booking. You will receive another call within a few minutes, and we kindly request you answer it. Thank you"
    send_message(text_message, cus_phone)
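For completeness, here is a hedged sketch of how the pieces above could sit inside the endless while loop mentioned earlier, also polling incoming_call() between outgoing calls. Treat it as one possible arrangement rather than the definitive main program; the retry delay and callback text are our own placeholders.
while True:
    response = Call_response_for(cus_phone)
    print("Response from customer => ", response)
    # ...send the confirmation / cancellation / retry text as shown above...
    caller = incoming_call()  # check whether anyone rang us back
    if caller != "0":
        send_message("We noticed your call and will ring you back shortly. -Circuitdigest", caller)
    time.sleep(60)  # pause before the next round of calls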
Now double-check the connections and power on the GSM board and the Raspberry Pi. Make sure the appropriate COM port is listed in the code before running the application; in my case, it was "/dev/ttyUSB0." Then, right-click the loudspeaker icon to confirm that the audio output is set to the AV jack and that the volume level is low.
In the next step, simply change the contact number and customer name to your liking, or modify the program to retrieve the information from an Excel spreadsheet or a cloud API, and our automated interactive voice response system is ready to go.
The application will dial the specified number and listen for DTMF responses. It will also play the relevant sound clips and send a text message depending on the recipient's response. Below are a few sample messages that I received on my cellphone.
The audio tracks and texts can easily be changed to meet your application's needs. I hope you enjoyed this project and learned something new.
Call routing within a company is among the most popular uses of an interactive voice response system. Previously, you would employ a switchboard operator or receptionist to answer all calls and direct callers to the appropriate line. An interactive voice response system is beneficial when answering customer calls: the caller is presented with a menu of choices and questions about the nature of the call, and where possible the system answers the more typical inquiries itself while referring the less common questions to qualified experts.
Interactive voice response systems are also excellent for getting detailed, current information from databases, for instance, movie times. The weekly movie listings are updated in a central database, which can also feed the cinema's website. When calling the cinema, the caller can use the keypad or voice instructions to search the database for movie timings. The same technology can be used to check account balances, review recent credit card payments, check flight times, refill prescriptions at a pharmacy, schedule auto maintenance, and register for university classes. The list is endless.
Interactive voice response systems are helpful for sales as well. A sales department can create an interactive voice response order form, enabling callers to complete it on their phone's keyboard. When the form is finished, the computer can send a copy to a sales team representative through fax or email. The sales department could also use the IVR as a virtual flyer that highlights the characteristics of a good or service and offers customers the chance to speak with a live agent at any time.
Marketing teams and election pollsters can use the interactive voice response systems' outgoing call functions. A political campaign may send a voicemail message with a phone-in survey for voters to complete. An advertiser may determine whether a buyer is interested in his goods or services. They might press a key to speak with a sales representative if interested in the advertiser's computerized pitch.
Electronic alert systems may also be combined with interactive voice response systems. Suppose your company has workers around the world working from home. Worker contact details, such as home phone numbers, mobile phone numbers, fax numbers, pager numbers, and email addresses, can be coded into the interactive voice response system. If a call has to be directed to a particular worker, the IVR program will try every form of contact until a connection is established.
Transcribing health records is an intriguing application of interactive voice response technology. Currently, doctors record their patient notes and submit the audio to a medical transcription service. However, thanks to advanced speech recognition software, a doctor can call the interactive voice response system, record his notes, and have a completed transcript emailed to his office.
The creators have been collaborating with a few partners in the Rwandan health industry. The developers want to build solutions that expand the coverage and reach of the partners' existing face-to-face operations. Interactive voice response can extend services to clients who cannot travel to existing facilities. Interestingly, one non-partner frequently visits remote communities and sees the interactive voice response as a tool for drawing people to these outreach efforts. By providing automated processes that operate outside regular business hours, and by gathering and analyzing customer data, the IVR can lessen the employees' workload; one partner will use this capability to automate donor-sponsor reports. In the opinion of one of the partners, the interactive voice response is a small step towards a larger switch from hard copies to soft copies. Before the caller speaks with a counselor, the web service can pull up the patient's EHR using information gathered by the IVR, synchronized with the hard-copy-to-soft-copy system. After the call, a text message is sent to the customer, and another is sent to the dispatch center most convenient for the client.
Using its voice and texting platforms, the RASP-IVR system opens the door to creating unique apps that can assist with contextually relevant problems. The RASP-IVR enables features including automated call rerouting, user-selected content, recording of caller audio, text message-based engagement, data collection, and tailored resources selected based on caller ID.
We acknowledge, however, that interactive voice response-based solutions, like all technology, must be developed to enhance existing human development efforts. The benefits of IVR-based systems are listed below; certain limitations are covered afterward.
A voice-only system is more appealing to people with limited literacy than texting or internet-based apps.
To use an IVR system, users just dial a number on their readily available cell phone, as they would with any other service.
Using a mobile phone to access discreet and stigma-free services, such as psychological counseling and family planning, can be advantageous.
Interactive voice response systems can reconnect dropped calls thanks to caller ID and track many simultaneous activities from the same telephone.
Finally, the speech signal itself is informative, providing estimates of the speaker's age, body weight, sex, anxiety level, and other health indicators.
Some drawbacks exist, though, which must be weighed against the advantages. Since an interactive voice response system is language-dependent, the prompts must be recorded in the users' native tongues. Even though the telephone interface is familiar, interactive voice response systems are also constrained by the cultural and social conventions of emerging communities.
Users more accustomed to interpersonal communication may resist automated data inquiries and exchanges. To that end, one of the partners believes that in-person demonstrations of the IVR system to clients will provide the most positive results.
Another barrier we discovered for people with few resources was the lack of phone access. Due to fluctuating electricity prices and supply, charging a phone is not always possible. In many cases, people will use various telephones, or multiple people will use the same phone. It's possible that follow-up calls and texts won't be received.
In addition, having a shared phone line can raise questions and suspicions about who is calling. Finally, there are several ways in which users' privacy could be compromised by IVR technologies, from the storage and transmission of data to the reporting of such data to the difficulty of preventing one user from accessing the information of another.
The existing system demands developer-side programming expertise that a CBO might not have access to. The dongle's inability to talk to the Raspberry Pi occasionally resulted from a hardware malfunction. The network failed in some cases. When making a mobile-to-mobile call, one internet provider did not forward the touch-tone signals.
Unfortunately, the system can only handle a single call at a time, making it unsuitable for deployments with a high rate of incoming calls. Partners who expect low call volumes can still benefit from the RASP-IVR; a shift to a more scalable approach would be justified if call volume increased.
Through this tutorial, we have gained a thorough understanding of IVRs and their inner workings. We proceeded to assemble our own Raspberry Pi 4 IVR with a handful of components and some test audio samples. We have investigated the mechanism behind this system and spoken about some of the primary uses for the IVR system. We also had a look at some of the advantages and disadvantages of using it. The following tutorial will teach you how to connect a USB barcode scanner to a Raspberry Pi so that you can read 2D barcodes.