Where To Buy?

| No. | Components | Distributor | Link To Buy |
|---|---|---|---|
| 1 | Breadboard | Amazon | Buy Now |
| 2 | DC Motor | Amazon | Buy Now |
| 3 | Jumper Wires | Amazon | Buy Now |
| 4 | Raspberry Pi 4 | Amazon | Buy Now |
A growing number of us already use facial recognition software without realizing it. Facial recognition appears in applications ranging from simple Facebook tag suggestions to advanced security screening and surveillance. Chinese schools have begun employing facial recognition to track students' attendance and behaviour. Retail stores use face recognition to classify their customers and identify those who have a history of crime. There's no denying that this tech will soon be everywhere, especially with so many related developments in the works.
Facial recognition goes well beyond simply detecting human faces in images or videos; it takes the additional step of establishing the person's identity. Facial recognition software compares an image of a person's face against a database to see whether the features match a known individual. The technology is designed so that changes in facial expression or hairstyle do not prevent it from finding a match.
How can face recognition be used in smart security systems?
The first thing you should do if you want to make your home "smart" is to focus on security. Your most prized possessions are housed there, and protecting them is a must. Thanks to a smart security system, you can monitor your home's security status from your computer or smartphone while you're away.
Traditionally, a security company would install a wired system in your house and sign you up for professional monitoring. That script has been rewritten: you can now set up a smart home system yourself, and your smartphone acts as the professional monitor, providing real-time information and notifications.
Face recognition is the ability of a smart camera in your house to identify a person by their face. For it to work, you must tell the algorithm which face goes with which name. Facial recognition in security systems therefore requires creating profiles for family members, acquaintances, and anyone else you want the system to recognize, so it can alert you when they arrive at your door or move around inside your house.
Face-recognition technology allows you to create specific warning conditions. For example, you can configure a camera to inform you when an intruder enters your home with a face the camera doesn't recognize.
Astonishing advancements in smart technology have been made in recent years, and companies increasingly offer automatic locks with face recognition. You can open a face-recognition door lock just by looking at it, though you can still use a passcode or a physical key to open and close the smart door. You can also configure your smart lock to send you an emergency alert if someone on a blacklist tries to open your smart security door.
As previously stated, OpenCV will be used to detect and recognize faces, so before continuing, let's set up the OpenCV library. Your Pi 4 needs a 2A power adapter and an HDMI display, because we won't be able to view the camera preview over SSH. The OpenCV documentation is a good place to learn how image processing works, but I'm not going to go into it here.
pip is well known for making it simple to add new libraries to Python, and there is a way to install OpenCV on a Raspberry Pi via pip, though it didn't work for me. We don't get complete control of the OpenCV build when installing via pip, but it might be worth a go if time is of the essence.
Ensure pip is set up on your Raspberry Pi. Then, one by one, execute the lines of code listed below into your terminal.
sudo apt-get install libhdf5-dev libhdf5-serial-dev
sudo apt-get install libqtwebkit4 libqt4-test
sudo pip install opencv-contrib-python
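Once these commands finish, it's worth confirming that the library actually imports before moving on. A quick sanity check (assuming a standard Python setup) is:

import cv2
print(cv2.__version__)  # a version string here means the install worked

If a version number prints without errors, OpenCV is ready to use.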
Facial recognition and face detection are not the same thing, and this must be clarified before we proceed. In face detection, the program merely finds a face; it has no idea whose face it is. In facial recognition, the program not only detects the face but also identifies it. Clearly, then, face detection has to happen before facial recognition. To explain how OpenCV recognizes a person or other objects, I will have to go into some detail.
Essentially, a webcam feed is a long series of continuously updating still photos, and every image is nothing more than a collection of pixels with varying values arranged in a specific order. So how does a computer program identify a face among all of these pixels? Describing the underlying techniques is outside the scope of this post, but since we're utilizing the OpenCV library, facial recognition becomes a straightforward process that doesn't necessitate a deeper understanding of the underlying principles.
Before we can recognize a face, we must first detect it. For detecting objects, including faces, OpenCV provides Classifiers: pre-trained datasets that can be used to recognize a particular object, such as a face. Classifiers can also detect other objects, such as mouths, eyebrows, vehicle number plates, and smiles.
Alternatively, OpenCV allows you to design your custom Classifier for detecting any objects in images by retraining the cascade classifier. For the sake of this tutorial, we'll be using the classifier named "haarcascade_frontalface_default.xml" to identify faces from the camera. We'll learn more about image classifiers and how to apply them in code in the following sections.
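To see a classifier in action before we build the full system, here is a minimal sketch (the file names are just examples) that detects faces in a single image and draws boxes around them:

import cv2

# load the pre-trained face classifier used throughout this tutorial
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
img = cv2.imread('test.jpg')                  # any photo containing a face
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # classifiers work on grayscale
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
for (x, y, w, h) in faces:                    # one rectangle per detected face
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
cv2.imwrite('detected.jpg', img)              # save the annotated result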
For face training and detection we only need the Pi camera. To install it, insert the Raspberry Pi camera's ribbon cable into the camera slot as shown below. Then go to your terminal, open the configuration window using "sudo raspi-config", and press enter. Navigate to the interface options and activate the Pi camera module. Accept the changes and finish the setup, then reboot your RPi.
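Before going further, it's worth confirming the camera works. On the legacy Raspberry Pi OS camera stack, a quick test (assuming the raspistill tool is available) is:

raspistill -o test.jpg

If a photo lands in test.jpg, the camera is connected and enabled correctly.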
First, ensure pip is set up, and then install the following packages using it.
Install dlib: Dlib is a toolkit for building real-world machine learning and data analysis applications. To get dlib up and running, type the following command into your terminal window.
pip install dlib
If everything goes according to plan, you should see something similar after running this command.
Install pillow: The Python Imaging Library, generally known as PIL, is a tool for opening, manipulating, and saving images in various formats. The following command will set up PIL for you.
pip install pillow
You should receive the message below once this app has been installed.
Install face_recognition: The face_recognition package is often the most straightforward tool for detecting and manipulating human faces, and it will make our face recognition work easier. Installing this library is as simple as running the command below.
pip install face_recognition --no-cache-dir
If all goes well, you should see output similar to that shown below once the software is installed. Because the package is large, I used the "--no-cache-dir" command-line option to install it without keeping any of its cache files.
The file "haarcascade_frontalface_default.xml" is a pre-trained classifier for detecting faces. The training script will use it, together with the photos found in the face images directory, to build a "face-trainner.yml" file.
The face images folder mentioned above should contain subdirectories named after each person to be identified, each holding several sample photographs of them. For this tutorial, Esther and Unknown are the people to be identified, so I've generated the two sub-directories shown below, each containing a single image.
Rename the directories and replace the photographs to match the people you want to identify. A minimum of five images per person seems to work best; however, the more images you add, the slower the training will run.
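To make the expected layout concrete, here is an illustrative directory tree (the folder and file names are only examples):

Face_Images/
    Esther/
        esther_1.jpg
        esther_2.jpg
    Unknown/
        unknown_1.jpg

The training script walks this tree and uses each subfolder's name as that person's label.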
Face Trainer.py is a Python script used to train new faces. It opens the face photographs folder and scans for faces. As soon as it detects one, it crops it, converts it to grayscale, and, using OpenCV's LBPH face recognizer, stores the training data in a file named face-trainner.yml. The information in this file is used to identify the faces later. Besides the whole trainer program provided at the conclusion, we'll go over the more critical lines.
The first step is to import the necessary modules. The cv2 package is used to process images, the NumPy library for image-to-array conversion, the os package for directory navigation, and PIL for opening the photos.
import cv2
import numpy as np
import os
from PIL import Image
Ensure that the XML file in question is located in the project directory to avoid encountering an issue. An LBPH face recognizer is then created and stored in the recognizer variable.
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create() # cv2.createLBPHFaceRecognizer() in older OpenCV 2.x builds
Face_Images = os.path.join(os.getcwd(), "Face_Images")
In order to open all of the files ending in .jpeg, .jpg, or .png within every subfolder of the face images folder, we traverse the tree with for loops. We record the path to every image in a variable named path, and the name of the folder it sits in (the name of the person in the images) in a variable named person_name.
for root, dirs, files in os.walk(Face_Images):
    for file in files: # check every file in every subdirectory
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)
Whenever the person's name changes (that is, we have moved into a new subfolder), we increment a variable named Face_ID so that each individual gets a unique Face_ID.
            if pev_person_name != person_name:
                Face_ID = Face_ID + 1 # if the name changed, increment the ID count
                pev_person_name = person_name
Grayscale photos are simpler for OpenCV to deal with than colourful ones because the colour (BGR) values can be ignored. We therefore convert the image to grayscale and then resize it to 550x550 so that all the pictures are the same size. To avoid having your face cut off, keep it in the centre of the photo. These images are then converted into NumPy arrays so we have numerical values to work with. Finally, the classifier detects any face in the photo and stores the result in a variable named faces.
            Gray_Image = Image.open(path).convert("L") # "L" mode = 8-bit grayscale
            Crop_Image = Gray_Image.resize((550, 550), Image.ANTIALIAS) # Image.LANCZOS in newer Pillow
            Final_Image = np.array(Crop_Image, "uint8")
            faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)
Our region of interest (ROI) is the cropped portion of the image where the face was found; this region is what will be used to train the face recognizer. Each face ROI is appended to a list named x_train, and the corresponding Face_ID to y_ID. We then feed the recognizer with this training data, and the result is saved to disk.
            for (x, y, w, h) in faces:
                roi = Final_Image[y:y+h, x:x+w] # crop out just the face region
                x_train.append(roi)
                y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.write("face-trainner.yml") # recognizer.save() in older OpenCV builds
You'll notice that the face-trainner.yml file is rewritten every time you run this program, so if you make any modifications to the photographs in the Face_Images folder, make sure to run this script again. For debugging, the script also prints out the Face_ID, the path, the person's name, and the NumPy arrays.
Now that our training data is ready, we can begin using it to identify people. We'll feed video from a USB webcam or Pi camera into the face recognizer application frame by frame. Once we've found the faces in those frames, we'll compare them against all of the Face IDs we created earlier, draw a box around each face, and label it with the identified person's name. The whole program is presented afterwards, with the explanation below.
Import the required modules as in the training program, and load the same classifier, since this program must also perform face detection.
import cv2
import numpy as np
import os
from time import sleep
from PIL import Image
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create() # cv2.createLBPHFaceRecognizer() in older builds
The people from the folder should be entered in the variable named labels, in the same order as the Face_IDs generated during training. In my case it is "Esther" and "Unknown".
labels = ["Esther", "Unknown"]
We need the trainer file to recognize faces, so we import it into our software.
recognizer.read("face-trainner.yml") # recognizer.load() in older OpenCV builds
The camera provides the video stream; it's possible to access a second camera by replacing 0 with 1.
cap = cv2.VideoCapture(0)
In the next step, we read the footage frame by frame, transform each frame into grayscale, and search for a face in it. When a face is detected, we crop out that region of interest from the grayscale frame.
ret, img = cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
for (x, y, w, h) in faces:
    roi_gray = gray[y:y+h, x:x+w] # region of interest: just the face
    id_, conf = recognizer.predict(roi_gray)
The predict() call also returns a confidence value that tells us how closely the face matches the training data. In the code below we use the ID number to look up the person's name in the labels list, draw a square around the face, and write the name above it.
    if conf >= 80: # note: LBPH "confidence" is a distance, so on some OpenCV versions a lower value means a better match
        font = cv2.FONT_HERSHEY_SIMPLEX
        name = labels[id_] # get the name from the list using the ID number
        cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
        cv2.rectangle(img, (x, y), (x+w, y+h), (0, 0, 255), 2)
Finally, we display the frame we just evaluated, and pressing the 'q' key breaks out of the video loop.
cv2.imshow('Preview', img)
if cv2.waitKey(20) & 0xFF == ord('q'):
    break
While running this application, ensure the Raspberry Pi is linked to a display via HDMI. A window with your video stream will appear, with a box around each face identified in the feed; if the software recognizes a face, it displays that person's name. As evidenced by the image below, we've trained our software to identify my face, which shows the recognition process in action.
import cv2
import numpy as np
import os
from PIL import Image

labels = ["Esther", "Unknown"]
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create() # cv2.createLBPHFaceRecognizer() in older builds
recognizer.read("face-trainner.yml") # recognizer.load() in older builds
cap = cv2.VideoCapture(0)

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5) # detect faces
    for (x, y, w, h) in faces:
        roi_gray = gray[y:y+h, x:x+w] # crop the face region
        id_, conf = recognizer.predict(roi_gray)
        if conf >= 80:
            font = cv2.FONT_HERSHEY_SIMPLEX
            name = labels[id_] # get the name from the list using the ID number
            cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
            cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow('Preview', img)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
import cv2
import numpy as np
import os
from PIL import Image

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create() # cv2.createLBPHFaceRecognizer() in older builds

Face_ID = -1
pev_person_name = ""
y_ID = []
x_train = []

Face_Images = os.path.join(os.getcwd(), "Face_Images")
print(Face_Images)

for root, dirs, files in os.walk(Face_Images):
    for file in files:
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)
            print(path, person_name)
            if pev_person_name != person_name:
                Face_ID = Face_ID + 1
                pev_person_name = person_name
            Gray_Image = Image.open(path).convert("L")
            Crop_Image = Gray_Image.resize((550, 550), Image.ANTIALIAS) # Image.LANCZOS in newer Pillow
            Final_Image = np.array(Crop_Image, "uint8")
            faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)
            print(Face_ID, faces)
            for (x, y, w, h) in faces:
                roi = Final_Image[y:y+h, x:x+w]
                x_train.append(roi)
                y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.write("face-trainner.yml") # recognizer.save() in older builds
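With the images in place, run the trainer (assuming you saved the script as face_trainner.py; use whatever name you chose):

python3 face_trainner.py

When it finishes, a face-trainner.yml file should appear in the project directory, ready for the recognition script.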
Since the "How to operate DC motor in Rpi 4" guide has covered the basics of controlling a DC motor, I won't provide much detail here. Please read this topic if you haven't already. Check all the wiring before using the batteries in your circuit, as outlined in the image above. Everything must be in place before connecting your breadboard's power lines to the battery wires.
To control the motor, open the terminal: we'll write the Python code using the command-line text editor Nano. For those of you who aren't familiar with Nano, I'll show you some of its commands as we go.
This code will activate the motor for two seconds, so try it out.
import RPi.GPIO as GPIO
from time import sleep
GPIO.setmode(GPIO.BOARD)
Motor1A = 16
Motor1B = 18
Motor1E = 22
GPIO.setup(Motor1A,GPIO.OUT)
GPIO.setup(Motor1B,GPIO.OUT)
GPIO.setup(Motor1E,GPIO.OUT)
print "Turning motor on"
GPIO.output(Motor1A,GPIO.HIGH)
GPIO.output(Motor1B,GPIO.LOW)
GPIO.output(Motor1E,GPIO.HIGH)
sleep(2)
print "Stopping motor"
GPIO.output(Motor1E,GPIO.LOW)
GPIO.cleanup()
The first two lines of code tell Python what the program needs.
The first line imports the RPi.GPIO package, which controls the RPi's GPIO pins and takes care of all the grunt work.
The second line imports the sleep function, which lets us pause the script for a few seconds so the motor is left running for a while.
The setmode method tells the library to use the RPi's physical board pin numbering. We then tell Python that pins 16, 18, and 22 correspond to the motor.
Pin A steers the L293D in one direction and pin B in the opposite direction. The motor is turned on and off using the Enable pin, referred to as E in the test file.
Finally, GPIO.OUT informs the RPi that all three pins are outputs.
With the software set up, the RPi is ready to turn the motor. As seen in the code, some pins are switched on and, after a 2-second pause, switched off again.
Save and quit by hitting CTRL-X; a confirmation notice appears at the bottom. To acknowledge, tap Y and Return. You can now run the program in the terminal and watch the motor spin up.
sudo python motor.py
If the motor doesn't move, check the cabling or power supply. Debugging might be a pain, but it's an important phase of learning new things!
Next, I'll teach you how to reverse the motor's rotation so it spins in the opposite direction.
There's no need to touch the wiring at this point; it's all Python. Create a new script called motorback.py to accomplish this. Using Nano, type the command:
nano motorback.py
Please type in the given program:
import RPi.GPIO as GPIO
from time import sleep
GPIO.setmode(GPIO.BOARD)
Motor1A = 16
Motor1B = 18
Motor1E = 22
GPIO.setup(Motor1A,GPIO.OUT)
GPIO.setup(Motor1B,GPIO.OUT)
GPIO.setup(Motor1E,GPIO.OUT)
print "Going forwards"
GPIO.output(Motor1A,GPIO.HIGH)
GPIO.output(Motor1B,GPIO.LOW)
GPIO.output(Motor1E,GPIO.HIGH)
sleep(2)
print "Going backwards"
GPIO.output(Motor1A,GPIO.LOW)
GPIO.output(Motor1B,GPIO.HIGH)
GPIO.output(Motor1E,GPIO.HIGH)
sleep(2)
print "Now stop"
GPIO.output(Motor1E,GPIO.LOW)
GPIO.cleanup()
Save by pressing CTRL+X, then Y, and finally the Enter key.
To go backwards, the script sets Motor1A low and Motor1B high, the reverse of the forward code.
Programmers use the terms "high" and "low" to denote the state of being on or off, respectively.
Motor1E will be turned off to halt the motor.
Irrespective of what A is doing, the motor can be turned on or off using the Enable pin.
Take a peek at the Truth Table to understand better what's going on.
With Enable on, only two input states make the motor move: A high with B low, or B high with A low, never both high at the same time.
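Since the truth-table image isn't reproduced here, the following is my own summary of the standard L293D behaviour just described:

| Enable (E) | Input A | Input B | Motor |
|---|---|---|---|
| High | High | Low | Spins in one direction |
| High | Low | High | Spins in the other direction |
| High | Low | Low | Stopped |
| High | High | High | Stopped |
| Low | Any | Any | Off |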
At this point, we have designed our face detection system and the DC motor control circuit; now we will put the two systems to work together. When the user is verified, the DC motor should run to open the CD-ROM drive (our stand-in door) and close it after a few seconds.
In our recognition code, we will add the code below to spin the motor in one direction, "opening the door", when the user is verified. We also increase the delay to 5 seconds to simulate the time the door stays open for the user to get through; this lets the motor spin long enough to open and close the CD-ROM tray completely. I would also recommend putting a stopper on the CD-ROM door so that it doesn't close all the way and get stuck. Note that this requires RPi.GPIO and sleep to be imported at the top of the recognition script.
if conf >= 80:
    font = cv2.FONT_HERSHEY_SIMPLEX
    name = labels[id_] # get the name from the list using the ID number
    cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
    # our motor code goes here (needs: import RPi.GPIO as GPIO, from time import sleep)
    GPIO.setmode(GPIO.BOARD)
    Motor1A = 16
    Motor1B = 18
    Motor1E = 22
    GPIO.setup(Motor1A, GPIO.OUT)
    GPIO.setup(Motor1B, GPIO.OUT)
    GPIO.setup(Motor1E, GPIO.OUT)
    print("Opening")
    GPIO.output(Motor1A, GPIO.HIGH)
    GPIO.output(Motor1B, GPIO.LOW)
    GPIO.output(Motor1E, GPIO.HIGH)
    sleep(5)
    print("Closing")
    GPIO.output(Motor1A, GPIO.LOW)
    GPIO.output(Motor1B, GPIO.HIGH)
    GPIO.output(Motor1E, GPIO.HIGH)
    sleep(5)
    print("Stop")
    GPIO.output(Motor1E, GPIO.LOW)
    GPIO.cleanup()
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
An individual's biometric identity can be verified using various physical and behavioural characteristics, such as fingerprints, keystrokes, facial features, and voice. Face recognition comes out the winner because of its precision, simplicity, and contactless operation.
Face-recognition technology is here to stay and will only get better over time. The story has changed, and smart tech has multiplied your options.
Using an RPi as a surveillance system means you can take it with you and use it wherever you need it.
For the most part, the face-recognition software employed in security systems can reliably assess whether the individual attempting entry matches your record of those authorized to enter. Some programs, however, are more precise than others when identifying faces from diverse angles or across different populations.
Concerned users may be relieved to learn that some programs have the option of setting custom confidence criteria, which can significantly minimize the likelihood of the system giving false positives. Alternatively, 2-factor authentication can be used to secure your account.
When your smart security system discovers a match between a user and the list of persons you've given access to, it will instantly let them in. Answering the doorbell or allowing entry isn't necessary.
Face recognition solutions can be readily integrated into existing systems using an API.
A major drawback of face recognition technology is that it puts people's privacy at risk. Having one's face collected and stored in an unidentified database does not sit well with the average person.
Confidentiality is so important that several towns have prohibited law enforcement from using real-time face recognition monitoring. Rather than using live face recognition software, authorities can use records from privately-held security cameras in certain situations.
Having your face captured and stored by face recognition software might make you feel monitored and assessed for your actions. It is a form of criminal profiling since the police can use face recognition to put everybody in their databases via a virtual crime lineup.
This article walked us through creating a complete smart security system with a facial recognition program from the ground up. Our model can now recognize faces with the help of OpenCV image manipulation techniques. There are several ways to further your knowledge of supervised machine learning with the Raspberry Pi 4, such as adding an alarm that rings whenever a face is not recognized, or creating a database of known faces to act like a CCTV surveillance system. We'll design a security system with a motion detector and an alarm in the next session.
Like the Amazon Echo, voice-activated gadgets are becoming increasingly popular, but you can also construct your own with a Raspberry Pi, a cheap USB mic, and some appropriate software. Simply speaking to your Raspberry Pi will allow you to search YouTube, view websites, activate applications, and even answer inquiries.
Because the Raspberry Pi has no built-in microphone input, this project requires a USB microphone or a camera with a built-in microphone. If your mic only has an audio jack, look for an affordable USB soundcard that plugs into a USB port on one side and has headphone and mic sockets on the other.
For the Raspberry Pi, there are various speech recognition programs. We're utilizing Steve Hickson's Pi AUI Suite for this project since it is powerful and straightforward to set up and operate. You may install a variety of programs using the Pi AUI Suite. The first question is whether or not the dependencies should be installed. These are the files the Raspberry Pi requires to work with voice commands, so pick Yes, then press Enter to agree.
Following that, you'll be asked if you wish to download the PlayVideo software, which lets you open and play video content using voice commands. If you select Y, you'll be prompted to enter the location of your media files, such as /home/pi/Videos. Note that upper-case letters are crucial here; the application will tell you if the path is incorrect.
Next, you'll be asked if you wish to download the Downloader application, which searches the internet for files and downloads them for you automatically. If you select Yes, you will be prompted to enter an address, port, username, and password. If you're not sure, press Return to choose the default settings for each for now.
Install the Google Text to Speech Service if you want your Raspberry Pi to read out the contents of text files. Since it communicates with Google's servers to translate text into speech, the Raspberry Pi must be connected to the internet to use this service.
You'll need a Google user account to install this. The installer requests your username; press Return after completing this step. It then asks for your Google password; type this and press Return.
You may also use the installer to download Google Voice Commands, which makes use of Google's speech-to-text technology. To proceed, you must enter your Google login and password once again.
Regardless of whether you choose the Google-specific software or not, the installer will ask if you wish to download the YouTube scripts. These let you say something like "play pi tutorial" and have an appropriate video clip play. Press Return after typing a new welcome message. You can also enable the silent flag to prevent the Raspberry Pi from responding verbally.
Lastly, the software installs the Voice command, which includes some of the more helpful scripts, such as the ability to deploy your internet browser by simply saying "internet."
Youtube: When you say "YouTube" followed by a search term, the first matching clip plays, comparable to Google's "I'm Feeling Lucky". For example, say "YouTube fluffy kittens" to watch a video of fluffy kittens.
Internet: Your internet browser is launched when you say "internet". Midori is the default browser on the RPi, but you may change that.
Download: When you say "download" followed by a search query, The Pirate Bay website is searched for the files in demand. For instance, you can say "Download Into the Badlands" to get the most recent edition of the movie.
Play: This phrase utilizes the in-built media player to open an audio or video file. For instance, "Play football.mp4" would play the file "football.mp4" from the media directory you chose during setup, like /home/pi/movies.
Show me: When you say "show me", a directory of your choice opens. By default the command does not point to a valid directory, so you'll need to modify your configuration so that it does, for instance show me==/Documents.
You'll be asked if you want the Voice command to set things up automatically. If an issue occurs at this point, run the following command to download and install the required software.
After installing the Voice command application, you may want to perform a few basic adjustments to the setup to fine-tune it. Execute the following command from your Raspberry Pi's Terminal or via SSH.
Following that, you'll be asked several yes/no questions. The first is whether you wish to enable the continuous flag permanently; in plain English, the Voice command application asks whether it should listen for your voice commands constantly every time you launch it.
For the time being, choose Yes. After that, you'll be asked if you wish the Voice command application to set the verify flag permanently. If you select Y, the application will expect you to pronounce your keyword (the default setting is "Pi") before responding to requests.
This is a good option if you'd like the RPi to listen continuously but not act on every word you say.
The next step asks if you wish to enable the ignore flag permanently. If Voice command receives a command that isn't expressly listed in the config file, it will try to find and launch a matching program from your installed apps. For example, if you say "leafpad", a notepad tool, Voice command looks for it and starts it even though it isn't in the config file.
This is a functionality we would not recommend anyone enable: because you're running Voice command at SuperUser level, there's a significant danger of accidentally issuing a command that harms your files. If you wish to set up other programs to work with Voice command, you can update the configuration file for each case.
Voice command then asks if you want to permanently enable the silence flag so that it doesn't respond verbally whenever you talk. Select yes or no as you see fit. After that, you'll be prompted to adjust the default voice recognition timeframe; if the Pi is having problems hearing your commands, you should modify this.
If you select Yes, you'll be prompted to enter a number: the number of seconds the Raspberry Pi will listen for a voice command (the default is 3). The application then lets you customize your text-to-speech choices. Before you do this, make sure your volume is turned up; the application attempts to say something and then asks whether you heard it.
When the system receives your keyword, it responds with "Yes, sir?" as the default response. To modify this, select Yes in the following prompt, then enter your chosen response, for example, "Yes, ma'am?" Once you're finished, hit the enter key. The program will replay the assigned response to check that you are satisfied with the outcome.
Whenever the program receives an unidentified command, the method is the same as for the default response. "Received the wrong command" is the default reply, but you can change it to something friendlier by typing yes and then your desired response, such as "That command is not known".
You now have the option of configuring the voice recognition parameters. This will check to see if you have a suitable mic. The Pi will then ask you if you want it to test your sound threshold using the Voice command.
Check for background noise, select Yes, and press the Enter key. It then asks you to say a command to ensure it is using the correct audio device. Type Yes to have the application automatically select the appropriate audio threshold for your RPi.
Lastly, the Raspberry Pi will ask if you wish to modify the default voice command term ("Pi"). After that, type Y and your new keyword, then press enter when you're finished.
After that, you'll be requested to say your keyword to acclimate the RPi to your voice. If everything looks good, press Y to finish the setup. Start with the fundamental commands outlined above when using the Voice command software.
Once you've mastered these commands, use the following line of code to exit the application and, if desired, change your config file.
The Raspberry Pi's voice recognition is still a work in progress, so not everything you say may be recognized by the program.
To maximize your chances of being understood, stay near the mic and talk slowly and clearly. To change your audio preferences, launch the terminal or log in to your Raspberry Pi through SSH and type the following command to access your audio settings.
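The exact command is missing from the original page; given the F4/F6 key presses described next, the tool in question is almost certainly ALSA's built-in mixer:

alsamixer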
Hit the F4 button on the keyboard to select audio input, then the F6 key to choose the device. Select your input device, the mic, with the arrow up or down keys, then press the Enter key. To change the mic's volume, push it up to maximum (100) using the up-arrow key.
If your device isn't identified at all, it may require more current than a universal serial bus port on a Raspberry Pi can supply. A powered universal serial bus hub is the ideal solution for this.
If you have difficulty connecting after installing the Download application, please ensure that connection to The Pirate Bay site is not limited.
To download the files, you'll also require a BitTorrent application for your RPi, such as Transmission. To install this, launch your terminal or access your RPi through SSH and type the following command:
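The install command is omitted in the original; on Raspbian/Raspberry Pi OS the Transmission desktop client is typically installed with something like:

sudo apt-get install transmission-gtk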
The Transmission homepage has instructions about getting started and utilizing the application. You should always download files that have permission from the copyright holder.
Please remember that whatever you speak and any text documents you provide are transferred to Google servers for translation if you use Google text or speech Commands.
Google maintains that none of this information is kept on its servers. Even if this is the case, any data communicated through the worldwide web can be decrypted by any skilled third party or a hacker. Google, however, encrypts your connection to minimize the risk of this happening.
If you like this voice command tool, you might want the program to launch automatically every time the RPi is powered on. If so, launch the terminal on your RPi or access it via SSH and execute the command below:
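The command itself is missing from the original page; since the next step edits the file containing the line exit 0, the script in question is evidently /etc/rc.local, opened with:

sudo nano /etc/rc.local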
The above command opens the script that controls which programs run whenever your Raspberry Pi is booted. By default, this script does nothing. Type the following line of code directly above the one reading exit 0:
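The exact line is omitted here; based on the continuous-listening flag discussed earlier, it is presumably a background invocation of the voicecommand tool, something like the following (treat the flag as an assumption and check your PiAUISuite version's documentation):

voicecommand -c &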
To save any changes, use Ctrl+X, enter yes, and press enter key. At this point, you can restart the Raspberry Pi to ensure that it is working.
Launch your Rpi terminal, type the command below, and press enter to see a list of active processes.
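The command is again not shown; the standard way to list running processes is:

ps aux

Piping it through grep (for example, ps aux | grep voice) makes it easy to check that the voice software is among them.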
Noise from air conditioners and heaters can contaminate your audio and make it impossible for the program to understand what you're saying. Unless the entire room is redesigned to be more acoustically friendly, the only real alternative is to turn these systems off while recording; on listening back to the audio, such noise is clearly audible and annoying.
Computer hardware cooling fans are another source of mechanical noise. These can be disabled manually for a limited period. Alternatively, try isolating the noise source in another room or using an isolation box.
We learned how to configure our Raspberry Pi for voice control, and we looked at a few basic commands used to control it along with the software involved. In the following tutorial, we will learn how to tweet on the Raspberry Pi.
This project can serve as an engineering showcase for electronics and electrical engineering students, and it can be used in offices as well.
Coffee is the second most popular drink in the world and one of its oldest beverages; according to Wikipedia, more than 2 billion cups of coffee are consumed every day worldwide. As engineers or working professionals, we all know how important coffee is for us: a good coffee makes our day better and refreshes the mood. Research shows coffee drinkers tend to live longer, provided consumption stays moderate. Yet making a good coffee is a skillful, time-consuming job, and we want our coffee in minutes. This is where our project comes into the picture: this smart coffee vending machine can make a good coffee in a couple of minutes. Among the many coffee flavors, our smart coffee vending machine provides 4 of the most commonly loved: Latte, Cappuccino, Espresso, and Cafe Mocha. Here's the video demonstration of this project:
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|---|---|---|---|
| 1 | DC Motor | Amazon | Buy Now |
| 2 | LCD 20x4 | Amazon | Buy Now |
| 3 | Arduino Uno | Amazon | Buy Now |
We are going to design this project as a Proteus simulation instead of using real components: in simulation we can catch the issues that may occur while working with real components before they can damage our hardware.
Proteus is software for simulating and designing electronic circuits. Although Proteus has a big database of electronic components, it still lacks a few modules, such as Arduino boards and LCD modules.
So we have to install the libraries, which we are going to use in this project:
These are required components for Smart Coffee Vending Machine, as follows:
As a suggestion, whenever we make a project it should feel like a product: user friendly and interactive. With that in mind, we have used an LCD module to display the available coffee flavors and their individual prices so users can easily select them with buttons, DC motors to pour the coffee ingredients (water, sugar, coffee powder, and milk), and a mixer for blending the coffee.
We have connected the LCD through an I2C GPIO expander because the Arduino UNO has a limited number of GPIO pins left for the other peripherals. The I2C GPIO expander requires only two pins, since I2C communicates over just SCL (Serial Clock) and SDA (Serial Data).
We can use any Arduino development board but here in this project, we have used an Arduino UNO board.
We have used this IC as a GPIO expander in our project because of the limited number of GPIO pins available on the Arduino UNO.
The LCD display is used to show the user-related messages in this project.
Now we hope you have understood the connections and you have already done it, so it is time to move to the coding part of our project.
If you already know the syntax and structure of an Arduino sketch, great; if you're not familiar with it yet, no need to worry, we will explain it step by step.
The Arduino language mostly follows the syntax and structure of C++, so if you are familiar with C++ the code will be a piece of cake to understand. And if you don't have any background knowledge, don't worry, we have your back.
Arduino code follows a strict structure with two main functions, setup() and loop(), and we write our code inside those two functions.
As we are going to explain the Arduino code, it would be easy to understand if you have opened the code in the Arduino IDE already.
I hope you have understood the whole working of our smart vending machine project and enjoyed it as well. We have explained pretty much everything, but if you still have any doubts or suggestions for improvement, please let us know in the comment section.
Thanks for giving your valuable time for reading it.
#include <LiquidCrystal.h>

// Keyboard Controls:
// C - Clockwise
// S - Stop
// A - Anti-clockwise
// H - Accelerate
// L - Decelerate

// Declare L298N Controller pins (Motor 1)
int count = 255;
int dir1PinA = 2;
int dir2PinA = 5;
int speedPinA = 6; // PWM control

LiquidCrystal lcd(8, 9, 10, 11, 12, 13);

void setup() {
  Serial.begin(9600); // baud rate
  lcd.begin(20, 4);
  lcd.setCursor(5,0);
  lcd.print("DC Motor");
  lcd.setCursor(5,1);
  lcd.print("Direction");
  lcd.setCursor(5,2);
  lcd.print("Control");
  lcd.setCursor(2,3);
  lcd.print("via Arduino UNO");
  delay(3000);
  lcd.clear();
  lcd.setCursor(0,2);
  lcd.print("www.TheEngineering");
  lcd.setCursor(4,3);
  lcd.print("Projects.com");
  // Define L298N Dual H-Bridge Motor Controller pins
  pinMode(dir1PinA, OUTPUT);
  pinMode(dir2PinA, OUTPUT);
  pinMode(speedPinA, OUTPUT);
  analogWrite(speedPinA, 255); // sets speed via PWM
}

void loop() {
  // Read from the Serial interface:
  if (Serial.available() > 0) {
    int inByte = Serial.read();
    switch (inByte) {
      case 'C': // Clockwise rotation
        digitalWrite(dir1PinA, LOW);
        digitalWrite(dir2PinA, HIGH);
        Serial.println("Clockwise rotation");
        Serial.println(" ");
        lcd.setCursor(0,0);
        lcd.print("Clockwise rotation");
        break;
      case 'S': // No rotation
        digitalWrite(dir1PinA, LOW);
        digitalWrite(dir2PinA, LOW);
        Serial.println("No rotation");
        Serial.println(" ");
        lcd.setCursor(0,0);
        lcd.print("No rotation");
        break;
      case 'H': // Accelerating motor
        count = count + 20;
        if (count > 255) { count = 255; }
        analogWrite(speedPinA, count);
        delay(50);
        Serial.println("Motor is accelerating slowly");
        Serial.println(" ");
        Serial.println(count);
        lcd.setCursor(0,0);
        lcd.print("Motor accelerating");
        break;
      case 'L': // Decelerating motor
        count = count - 20;
        if (count < 20) { count = 20; }
        analogWrite(speedPinA, count);
        delay(50);
        Serial.println("Motor is decelerating slowly");
        Serial.println(" ");
        Serial.println(count);
        lcd.setCursor(0,0);
        lcd.print("Motor decelerates");
        break;
      case 'A': // Anti-clockwise rotation
        digitalWrite(dir1PinA, HIGH);
        digitalWrite(dir2PinA, LOW);
        Serial.println("Anti-clockwise rotation");
        Serial.println(" ");
        lcd.setCursor(0,0);
        lcd.print("Anti-clockwise");
        break;
      default: // Turn off the motor if any other key is pressed
        for (int thisPin = 2; thisPin < 11; thisPin++) {
          digitalWrite(thisPin, LOW);
        }
        Serial.println("Wrong key is pressed");
        lcd.setCursor(0,0);
        lcd.print("Wrong key is pressed");
    }
  }
}
In this tutorial, I will write a simple program for DC motor direction control using Arduino. Arduino is an amazing microcontroller platform and very easy to use because it is open source, which makes it a student-friendly device. You can write Arduino programs for many different purposes, and Arduino is also cost-efficient compared to other controllers, e.g. Raspberry Pi, NI myRIO, Galileo, or Single-Board RIO. First of all I prepared my complete hardware setup; then I wrote a program and interfaced it with the hardware. We will discuss all the steps in detail below. The logic is pretty simple: Arduino sends commands to the L298 motor controller, and the L298 sets the DC motor's direction according to those commands. Before going into detail, I want to show you the list of components required. You can download the complete Arduino source file here:
Download Arduino Source Code

Note: If you are working on DC motors then you should also have a look at these Proteus simulations:

#include <LiquidCrystal.h>

// Keyboard Controls:
// C - Clockwise
// S - Stop
// A - Anti-clockwise

// Declare L298N Controller pins (Motor 1)
int dir1PinA = 2;
int dir2PinA = 5;
int speedPinA = 7; // PWM control (note: on a real UNO, analogWrite needs a PWM pin: 3, 5, 6, 9, 10, or 11)

LiquidCrystal lcd(8, 9, 10, 11, 12, 13);

void setup() {
  Serial.begin(9600); // baud rate
  lcd.begin(20, 4);
  lcd.setCursor(5,0);
  lcd.print("DC Motor");
  lcd.setCursor(5,1);
  lcd.print("Direction");
  lcd.setCursor(5,2);
  lcd.print("Control");
  lcd.setCursor(2,3);
  lcd.print("via Arduino UNO");
  // Define L298N Dual H-Bridge Motor Controller pins
  pinMode(dir1PinA, OUTPUT);
  pinMode(dir2PinA, OUTPUT);
  pinMode(speedPinA, OUTPUT);
}

void loop() {
  // Read from the Serial interface:
  if (Serial.available() > 0) {
    int inByte = Serial.read();
    switch (inByte) {
      case 'C': // Clockwise rotation
        analogWrite(speedPinA, 255); // sets speed via PWM
        digitalWrite(dir1PinA, LOW);
        digitalWrite(dir2PinA, HIGH);
        Serial.println("Clockwise rotation");
        Serial.println(" ");
        lcd.clear();
        lcd.setCursor(0,0);
        lcd.print("Clockwise rotation");
        break;
      case 'S': // No rotation
        analogWrite(speedPinA, 0); // 0 PWM (speed)
        digitalWrite(dir1PinA, LOW);
        digitalWrite(dir2PinA, LOW);
        Serial.println("No rotation");
        Serial.println(" ");
        lcd.setCursor(5,1);
        lcd.print("No rotation");
        break;
      case 'A': // Anti-clockwise rotation
        analogWrite(speedPinA, 255); // maximum PWM (speed)
        digitalWrite(dir1PinA, HIGH);
        digitalWrite(dir2PinA, LOW);
        Serial.println("Anti-clockwise rotation");
        Serial.println(" ");
        lcd.setCursor(3,2);
        lcd.print("Anti-clockwise");
        break;
      default: // Turn off the motor if any other key is pressed
        for (int thisPin = 2; thisPin < 11; thisPin++) {
          digitalWrite(thisPin, LOW);
        }
        Serial.println("Wrong key is pressed");
        lcd.setCursor(0,3);
        lcd.print("Wrong key is pressed");
    }
  }
}
This Line Following Robot does not do anything extra, i.e. turning or rotating back; it simply moves in a straight line. I have also posted a short video at the bottom of this tutorial which will give you a better idea of how this robot moves. You should first read this tutorial and design the basic robot; once you are successful in designing the basic Line Following Robot, have a look at my recent project Line Following Robotic Waiter, in which I designed a robotic waiter that follows the line and also takes turns to different tables. So, let's get started with the Line Following Robot using Arduino.
#define motorL1 8
#define motorL2 9
#define motorR1 10
#define motorR2 11
#define PwmLeft 5
#define PwmRight 6
#define SensorR 2
#define SensorL 3
#define Sensor3 A0
#define Sensor4 A1
#define TableA A4
#define TableB A2
#define TableC A5
#define TableD A3

int OriginalSpeed = 200;
int TableCount = 0;
int TableCheck = 0;
int RFCheck = 10;

void setup() {
  Serial.begin(9600);
  pinMode(motorR1, OUTPUT);
  pinMode(motorR2, OUTPUT);
  pinMode(motorL1, OUTPUT);
  pinMode(motorL2, OUTPUT);
  pinMode(PwmLeft, OUTPUT);
  pinMode(PwmRight, OUTPUT);
  pinMode(SensorL, INPUT);
  pinMode(SensorR, INPUT);
  pinMode(Sensor3, INPUT);
  pinMode(Sensor4, INPUT);
  pinMode(TableA, INPUT);
  pinMode(TableB, INPUT);
  pinMode(TableC, INPUT);
  pinMode(TableD, INPUT);
  MotorsStop();
  analogWrite(PwmLeft, 0);
  analogWrite(PwmRight, 0);
  delay(2000);
}

void loop() {
  MotorsForward();
  PIDController();
}

void MotorsBackward() {
  digitalWrite(motorL1, HIGH);
  digitalWrite(motorL2, LOW);
  digitalWrite(motorR1, HIGH);
  digitalWrite(motorR2, LOW);
}

void MotorsForward() {
  digitalWrite(motorL1, LOW);
  digitalWrite(motorL2, HIGH);
  digitalWrite(motorR1, LOW);
  digitalWrite(motorR2, HIGH);
}

void MotorsStop() {
  digitalWrite(motorL1, HIGH);
  digitalWrite(motorL2, HIGH);
  digitalWrite(motorR1, HIGH);
  digitalWrite(motorR2, HIGH);
}

void MotorsLeft() {
  analogWrite(PwmLeft, 0);
  analogWrite(PwmRight, 0);
  digitalWrite(motorR1, HIGH);
  digitalWrite(motorR2, HIGH);
  digitalWrite(motorL1, LOW);
  digitalWrite(motorL2, HIGH);
}

void MotorsRight() {
  analogWrite(PwmLeft, 0);
  analogWrite(PwmRight, 0);
  digitalWrite(motorR1, LOW);
  digitalWrite(motorR2, HIGH);
  digitalWrite(motorL1, HIGH);
  digitalWrite(motorL2, HIGH);
}

void Motors180() {
  analogWrite(PwmLeft, 0);
  analogWrite(PwmRight, 0);
  digitalWrite(motorL1, HIGH);
  digitalWrite(motorL2, LOW);
  digitalWrite(motorR1, LOW);
  digitalWrite(motorR2, HIGH);
}

void PIDController() {
  if (digitalRead(SensorL) == HIGH) { analogWrite(PwmRight, 250); analogWrite(PwmLeft, 0); }
  if (digitalRead(SensorR) == HIGH) { analogWrite(PwmLeft, 250); analogWrite(PwmRight, 0); }
  if ((digitalRead(SensorL) == LOW) && (digitalRead(SensorR) == LOW)) { analogWrite(PwmRight, 0); analogWrite(PwmLeft, 0); }
}
For DC motor direction control, I have used the Arduino UNO board, so you should also download the Arduino Library for Proteus so that you can use Arduino boards in the Proteus software. I have provided the simulation and the code for DC motor direction control, but I recommend you design it on your own so that you learn from it. If you have any problem, ask in the comments and I will try to resolve it. In this project, I have used the serial terminal: whenever someone sends the character "C" on the serial terminal, the motor moves in the clockwise direction; on "A" it moves in the anti-clockwise direction; and it stops on "S". Anyway, let's get started with DC Motor Direction Control with Arduino in Proteus ISIS.
int Motor1 = 2;
int Motor2 = 3;

void setup() {
  pinMode(Motor1, OUTPUT);
  pinMode(Motor2, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    char data = Serial.read();
    Serial.println(data);
    if (data == 'C') { MotorClockwise(); }
    if (data == 'A') { MotorAntiClockwise(); }
    if (data == 'S') { MotorStop(); }
  }
}

void MotorAntiClockwise() {
  digitalWrite(Motor1, HIGH);
  digitalWrite(Motor2, LOW);
}

void MotorClockwise() {
  digitalWrite(Motor1, LOW);
  digitalWrite(Motor2, HIGH);
}

void MotorStop() {
  digitalWrite(Motor1, HIGH);
  digitalWrite(Motor2, HIGH);
}
So, that's all for today. Hopefully you now have an idea of how to do DC motor direction control with Arduino in Proteus ISIS. In the next tutorial, I am gonna add speed control of the DC motor. So, till then take care and have fun. :)
To design this, we will be using the LM317K, which is basically a voltage regulator IC with 3 pins: pin 2 is the input voltage, marked VI; pin 3 is the output voltage, marked VO; and pin 1 is used for regulating the output voltage and is marked ADJ. If you look at the circuit diagram given in the figure, you will see that pin 1 is connected to a potentiometer. A potentiometer is a variable resistor, also known as a voltage divider; its feature is that we can adjust the voltage through it according to our own choice. Here it operates on 12 volts, letting us adjust the output from 0 to the maximum (12 volts in most cases). Further, an LED is connected in parallel with a simple DC motor, and a voltmeter is also connected in parallel with the motor to monitor the voltage appearing across it. That was a little demo of the individual components of the circuit; now let's get practical, move to the hardware, and see how the electronic components actually respond. You should also have a look at Introduction to LM317 if you want to read all the basics about it. So let's get started with the LM317 Voltage Regulator in Proteus:
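For reference, the output of an LM317 is set by the two resistors on its ADJ pin according to the standard relation from the datasheet (with R1 between VO and ADJ, and R2 between ADJ and ground):

Vout = 1.25 V x (1 + R2/R1)

With the potentiometer acting as R2, turning it varies Vout, which is exactly how the motor speed is adjusted in this simulation.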
You can download this complete LM317 Voltage Regulator simulation by clicking the below button but I recommend you to design it on your own so that you learn most from it.
First of all, place all the components in Proteus workspace, as shown in image:
A 12-volt DC supply is provided to the input pin (pin 2) of the LM317, and the potentiometer is connected to the adjustable pin of the LM317, which is pin 1.
The complete circuit, ready for simulation is shown below in image:
Now we can conclude that the LM317 is the regulating device of this circuit: we can set the potentiometer as we choose and thereby control the speed of the motor, along with the corresponding voltage appearing across it.
Here's the video in which I have given a detailed introduction to the LM317 and run its simulation. Alright friends, that's all for today, and I hope you can now easily design this LM317 voltage regulator. In the next post, I discuss a DC motor drive circuit in Proteus ISIS. Till then take care and be safe !!! :)