To perform a jump in a ladder logic program, two instructions work together, as shown in Fig. 1. The first is JMP, which tells the PLC to jump from the point where the JMP instruction sits to wherever the matching LBL instruction is found.
Fig. 1: JMP and LBL instructions
Now, how does the program know which label execution should jump to? That is an excellent question, and the answer is that you specify the label name as a parameter of both the jump instruction and the label instruction. You can notice, my friends, in Fig. 2 that the JMP and LBL instructions carry a question mark denoting where you should specify the label name, which is the next station for program execution.
Fig. 2: Jump and label instructions showing the label name above them
Now, friends, let us see how the jump and label instructions work together. Fig. 3 depicts a straightforward example of ladder logic in which the JMP and LBL instructions work together, referring to the same label Q2:0. In this example, if input contact I1/0 is activated, the JMP instruction takes execution to where the Q2:0 label (LBL) instruction is. As a result, rung 001 is bypassed.
Fig. 3: ladder logic example for jump and label instructions
How about a situation where we actually need to employ the JMP and LBL instructions? In the example shown in Fig. 4, there are several motors, and we use the JMP command in combination with the LBL instruction to let some motors run in one scenario while others do not. Imagine you are working with a group of pumps: if you want to run all pumps together, you leave the JMP command deactivated. In another scenario, where only some of the pumps should run, you activate the JMP command to bypass the pumps whose rungs sit between the JMP and LBL commands.
Fig. 4: a real example of jump instruction
Figure 5 depicts a test run of the sample program demonstrating the JMP instruction, showing the first scenario. If we do not activate the jump instruction in rung 3, then motors 1 through 7 all run based on the status of their firing contacts, I:0/0 through I:0/7 respectively. Program execution continues through the rungs after the JMP because the jump is not activated by contact I:0/3.
Fig. 5: test when JMP is not activated
On the other hand, Fig. 6 depicts the second scenario, in which the JMP instruction is activated by contact I:0/3 at rung 3. With the jump command active, only the motors before the jump instruction run based on their command contacts; the pumps between the jump and label commands are bypassed, while the motors after the label instruction keep running. So motors 1, 2, and 3 are running, motors 4, 5, and 6 are ignored because the jump command is active, and motor 7, which sits at the label command, is running.
Fig. 6: test when JMP is activated
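Ladder logic itself cannot be rendered in plain text here, but a minimal Python sketch can mimic what the scan cycle does with a JMP/LBL pair. Everything below is an illustrative analogy under assumed rung numbering; the scan() helper is hypothetical and not part of any PLC runtime:

def scan(inputs, jump_active):
    """One scan cycle: rungs between JMP and LBL are skipped when the jump fires."""
    motors = {}
    for n in (1, 2, 3):                 # rungs before the JMP
        motors[n] = inputs[n]
    if not jump_active:                 # JMP not taken: rungs 4-6 execute
        for n in (4, 5, 6):
            motors[n] = inputs[n]
    # LBL -- execution resumes here whether or not the jump was taken.
    # Note: in a real PLC, skipped outputs simply hold their last state.
    motors[7] = inputs[7]               # rung at/after the label
    return motors

inputs = {n: True for n in range(1, 8)}
print(scan(inputs, jump_active=False))  # all seven motors run
print(scan(inputs, jump_active=True))   # motors 4-6 are bypassed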
LinkedIn is a powerful social media platform for businesses of all sizes. It provides an opportunity to connect with a larger audience, build relationships with potential customers, and create brand awareness. But promoting your business on LinkedIn is not as easy as you might think. For one, LinkedIn is a professional network, which means that users are not always looking to be sold to. Secondly, LinkedIn’s algorithm favors content that is educational and informative over content that is promotional.
So how can you promote your small business on LinkedIn in a way that will reach your target audience and help you achieve your business goals? Here are the top LinkedIn marketing strategies that you can use to expand your small business.
If you want to reach your target audience on LinkedIn, you need to share content that is interesting and engaging. This means creating content that educates, informs, or entertains your readers. For example, if you're a construction company, you could share articles about the latest industry trends, tips for remodeling your home, or interesting case studies.
The key is to make sure that your content is relevant to your target audience and provides value. If you're not sure what type of content to create, take a look at the content that your competitors are sharing. Chances are, their content will give you some ideas.
However, that doesn't mean you should write walls of words - avoid that at all costs. No one wants to read an essay on LinkedIn. Keep your posts short and to the point. Use images, infographics, videos, and LinkedIn banners to break up your text and make your content more visually appealing.
You can't just start posting on LinkedIn with a blindfold on now, can you? You need to know who you're targeting first.
Creating personas for your target audience is a great way to get to know them better. Once you've created personas, take a look at where your target audience hangs out online. What type of content do they consume? What are their interests?
You can use this information to create content that appeals to your target audience. For example, if you're targeting millennials, you might want to create content that is relevant to their interests, such as entrepreneurship, travel, or personal development.
One of the best ways to promote your small business on LinkedIn is by creating a company page. Your company page is like a mini-website on LinkedIn, and it's a great way to showcase your products or services.
When creating your company page, make sure to include a strong headline, an engaging description, and relevant images. You should also take advantage of LinkedIn's SEO features by including keywords in your page content.
Once you've created your company page, start sharing content that will interest your target audience. This could include blog posts, product information, case studies, or even company news. You can also use your company page to run LinkedIn ads. LinkedIn offers several ad formats that you can use to promote your business, and they're a great way to reach a larger audience.
Your LinkedIn profile is one of the most important tools in your LinkedIn marketing arsenal, so optimizing your LinkedIn Profile is essential. It's your chance to make a good first impression, so make sure you're putting your best foot forward.
Start by optimizing your headline and summary. These are the first things people will see when they visit your profile, so make sure they're attention-grabbing and relevant. Include keywords that describe your business or industry, and make sure to mention your most important selling points. Your headline and summary are also a great place to showcase your personality.
Next, take a look at your profile photo. Is it professional and polished? If not, consider changing it to something that presents you in a positive light. Finally, take some time to update your work experience and education section. Include any relevant information that will help you stand out, such as awards or publications. By optimizing your LinkedIn profile, you'll be sure to make a good impression on potential customers and clients.
Another great way to reach your target audience on LinkedIn is by joining relevant groups. There are thousands of groups on LinkedIn, covering just about every topic imaginable. And chances are, there are several groups that would be a good fit for your business.
For example, if you're an apparel retailer, you could join relevant groups on LinkedIn to promote your business. These groups could include fashion professionals, small business owners, or even general interest groups.
When you join a group, make sure to participate in the discussion and add value to the conversation. This will help you build relationships with other members and position yourself as an expert in your field.
You can also use LinkedIn groups to collect leads. Many groups allow members to post their contact information in the group description. This makes it easy for you to get in touch with potential customers.
Did you know that you can use LinkedIn to generate leads? It's true! LinkedIn offers a feature called Sponsored InMail, which allows you to send messages directly to your target audience. Normally, on the free plan, you can't send priority messages to anyone. Therefore, InMail is a great way to promote your products or services, and it's an especially effective lead generation tool.
Sponsored InMail is a great way to reach out to potential customers and promote your products or services. You can use it to offer discounts, announce new products, or even invite people to events.
To get started, simply create a Sponsored InMail campaign and target your ideal customer. LinkedIn will then match your message with the right people, and you'll start seeing results in no time. Just make sure your message resonates with your target audience. Otherwise, you risk having it perceived as spam.
While the above marketing strategies will work like a charm when it comes to promoting your business and gaining new customers, bear in mind that every business is different and has a different target audience. Therefore, it's important to experiment with different strategies and find the ones that work best for you. Identifying the right marketing mix for your business is what you should be aiming for.
For example, you could use LinkedIn ads to reach out to your target audience if you're looking for immediate results. Or, if you're trying to build long-term relationships with potential customers, focus on creating a strong company page and sharing high-quality content.
Also, don't forget to harness the power of SEO! In simple words, SEO makes your content more visible in LinkedIn's search results, which means more people will see it. You can do this by optimizing your LinkedIn profile and company page for keywords, and you'll surely gain more traction on your LinkedIn profile.
Thank you for joining us for another session in this series on Raspberry Pi programming. In the preceding tutorial, we created a Pi-hole ad blocker for our home network using a Raspberry Pi 4; we learned how to install Pi-hole on the Raspberry Pi 4 and how to access it from other devices on the network. This tutorial will implement a speech recognition system using the Raspberry Pi and use it in a project. First, we will learn the fundamentals of speech recognition, and then we will build a game that the user plays with their voice, discovering how it all works with a speech recognition package.
Here, you'll learn:
Where To Buy?

No. | Components | Distributor | Link To Buy
---|---|---|---
1 | Raspberry Pi 4 | Amazon | Buy Now
Are you curious about how to incorporate speech recognition into a Python program? When it comes to conducting voice recognition in Python, there are a few things you need to know first. I'm not going to overwhelm you with the technical specifics, because that would take up an entire book. Modern voice recognition technologies have come a long way: they can recognize multiple speakers and support extensive vocabularies in several languages.
Voice is the first element of speech recognition. A mic and an analog-to-digital converter are required to turn speech into an electrical signal and then into digital data. Once the audio has been digitized, various models can be used to convert it to text.
Most modern voice recognition programs rely on hidden Markov models (HMMs). The underlying assumption is that an audio signal can be reasonably treated as stationary when viewed over a sufficiently short timescale.

In a conventional HMM-based system, the audio signal is divided into roughly 10-millisecond chunks. Each chunk's power spectrum is mapped to a vector of real numbers known as cepstral coefficients. The dimension of this vector usually ranges from 10 to 32, depending on the desired accuracy, and the output of this step is a sequence of these vectors.

Training is required because the sound of a phoneme varies from speaker to speaker, and even between utterances by the same speaker. A special decoding algorithm is then used to determine the most probable words to have produced the given sequence of phonemes.

As one might expect, this entire process can be computationally costly. Many current speech recognition programs apply feature transformations and dimension-reduction methods before HMM recognition. Voice activity detectors are also used to limit the audio input to only those portions likely to contain speech, so the recognizer does not waste time analyzing parts of the signal that aren't relevant.
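As a rough illustration of the cepstral-coefficient step described above, the third-party librosa package (an assumption here, not something this tutorial's project requires) can compute mel-frequency cepstral coefficients from short frames of audio:

import librosa

# Load an audio file and resample to 16 kHz ("har.wav" is a placeholder name)
signal, rate = librosa.load("har.wav", sr=16000)

# 13 cepstral coefficients per frame, hop of 160 samples (~10 ms at 16 kHz)
mfcc = librosa.feature.mfcc(y=signal, sr=rate, n_mfcc=13, hop_length=160)
print(mfcc.shape)  # (13, number_of_frames): one coefficient vector per chunk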
A handful of speech recognition packages are available on PyPI, including (among others) apiai, google-cloud-speech, pocketsphinx, SpeechRecognition, watson-developer-cloud, and wit.

Some of these packages go beyond simple speech recognition and use NLP to discern a user's intent. Others, such as google-cloud-speech, focus on speech-to-text conversion alone.
SpeechRecognition is the most user-friendly of all the packages.
Voice recognition necessitates audio input, which SpeechRecognition makes a cinch. SpeechRecognition will get you up to speed in minutes rather than requiring you to write your code for connecting mics and interpreting audio files.
Since it wraps a variety of common speech application programming interfaces, this SpeechRecognition package offers a high degree of extensibility. The SpeechRecognition library is a fantastic choice for every Python project because of its flexibility and ease of usage. The APIs it encapsulates may or may not be able to support every feature. For SpeechRecognition to operate in your situation, you'll need to research the various choices.
You've decided to give SpeechRecognition a go, and now you need to get it deployed in your environment.
Using pip, you may set up Speech Recognition software in the terminal:
$ pip install SpeechRecognition
When you've completed the setup, open a Python interpreter session and verify the installation by typing:

import speech_recognition as sr
sr.__version__
Let's leave this window open for now. Soon enough, you'll be able to use it.
If you only need to deal with pre-existing audio recordings, SpeechRecognition will work straight out of the box. A few prerequisites are required for some use cases, though. In particular, the PyAudio library must be installed in order to record audio from a mic.
As you continue reading, you'll discover which components you require. For the time being, let's look at the package's fundamentals.
The recognizer is at the heart of Speech Recognition's magic.
Naturally, the fundamental function of a Recognizer class is to recognize spoken words and phrases. Each instance has a wide range of options for identifying voice from the input audio.
The process of setting up a Recognizer is straightforward. In your active interpreter window, simply type:

r = sr.Recognizer()
Each Recognizer instance has seven methods for recognizing speech from an audio source, each utilizing a distinct application programming interface; they are listed in the sketch below.

Of the seven, only recognize_sphinx() works offline, using CMU Sphinx. Internet access is required for the remaining six methods.
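For reference, these are the seven recognizer methods exposed by recent releases of the SpeechRecognition package (check your installed version, as names can change over time):

import speech_recognition as sr

r = sr.Recognizer()
r.recognize_bing          # Microsoft Bing Speech
r.recognize_google        # Google Web Speech API (free default key)
r.recognize_google_cloud  # Google Cloud Speech (requires credentials)
r.recognize_houndify      # Houndify by SoundHound
r.recognize_ibm           # IBM Speech to Text
r.recognize_sphinx        # CMU Sphinx -- the only offline engine
r.recognize_wit           # Wit.ai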
This tutorial does not cover every capability and feature of each API in detail. SpeechRecognition ships with a default API key for the Google Web Speech API, so you can get up and running with that service immediately. This tutorial will therefore use the Web Speech API extensively. The other six APIs all require authentication with either an API key or a username/password combination.

The default key is provided for testing purposes only, and Google may revoke it at any time, so using the Google Web Speech API in a production setting is not recommended. Even with a valid API key, there is no way to raise the daily request quota. Still, once you've learned to use SpeechRecognition with one API, applying it to any of your projects is straightforward.
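If you do obtain your own key for the Google Web Speech API, recognize_google() accepts it through the optional key argument ("YOUR_API_KEY" below is a placeholder):

r.recognize_google(audio, key="YOUR_API_KEY")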
Each recognize function raises an exception when it fails to recognize speech. A RequestError is raised if the API is unreachable; in the case of recognize_sphinx(), a faulty Sphinx installation can cause this. For the other six methods, a RequestError is raised if quotas are exceeded, servers are unreachable, or there is no internet connection.
Let us try recognize_google() in our interpreter window and see what happens!

r.recognize_google()

Exactly what has transpired? Something like this is most likely what you've gotten:

TypeError: recognize_google() missing 1 required positional argument: 'audio_data'

I'm sure you could have foreseen this: how could anything be recognized from nothing?

Every recognize_*() method of the Recognizer class expects an audio_data argument, which in SpeechRecognition must be an instance of the package's AudioData class.
To construct an AudioData instance, you have two options: you can either use an audio file or record your audio. We'll begin with audio files because they're simpler to work with.
To proceed, you must first obtain and save an audio file. Use the same location where your Python interpreter is running to store the file.
SpeechRecognition's AudioFile class makes working with audio files easy. Used as a context manager, it provides access to an audio file's contents when given the path to the file.

The package supports various file formats, including WAV (PCM), AIFF, AIFF-C, and FLAC.

For FLAC files, you'll need a FLAC encoder and access to the flac command-line tool.

To work with the "har.wav" file, enter the following commands into your interpreter window:
har = sr.AudioFile('har.wav')
with har as source:
    audio = r.record(source)
The context manager opens the file and reads its contents, storing the data in the AudioFile instance source. The record() method then captures the entire file's data into an AudioData instance. Verify this by checking the type of audio:
type(audio)
You can now invoke recognize_google() to attempt to recognize the speech in the audio file. Depending on your internet connection speed, you may have to wait a few seconds for the output to appear.
r.recognize_google(audio)
Congratulations! You've just finished your very first audio transcription!
Within the "har.wav" file, you'll find instances of Har Phrases if you're curious. In 1965, the IEEE issued these phrases to evaluate telephone lines for voice intelligibility. VoIP and telecom testing continue to make use of them nowadays.
Seventy-two lists of 10 phrases are included in the Har Phrases. On the Open Voice Repository webpage, you'll discover a free recording of these words and phrases. Each language has its own set of translations for the recordings. Put your code through its paces; they offer many free resources.
Sometimes you may want to capture only a portion of the speech in a file. The record() method accepts a duration keyword argument that stops the recording after the specified number of seconds.

In the example below, only the first four seconds of the file are captured for transcription.
with har as source:
    audio = r.record(source, duration=4)
r.recognize_google(audio)
The record() method keeps its position within the file stream when used inside a with block. Consequently, if you record four seconds of audio and then record again, the second call returns the four seconds of audio that follow the first four.
with har as source:
    audio1 = r.record(source, duration=4)
    audio2 = r.record(source, duration=4)
r.recognize_google(audio1)
r.recognize_google(audio2)
As you can see, audio2 contains part of the third phrase. When a duration is specified, the recording can stop mid-phrase, which can hurt the transcription. More on this in a moment.

Besides a recording duration, the record() method accepts an offset keyword argument. This specifies how many seconds of the file to ignore before starting to record.
with har as source:
    audio = r.record(source, offset=4, duration=3)
r.recognize_google(audio)
If you know the structure of the speech in the file beforehand, the duration and offset keyword arguments can help you segment an audio track. Used carelessly, however, they can hurt transcription quality. Run the following command in your interpreter to see the effect.
with har as source:
    audio = r.record(source, offset=4.7, duration=2.8)
r.recognize_google(audio)
Because the "it t" at the start of the phrase was skipped, the API received only "akes heat," which it matched to "Mesquite."

The recording also caught "a co," the beginning of the third phrase, which the API matched to "Aiko."

Noise is another common cause of inaccurate transcriptions. The examples above all worked because the audio is relatively clean; in the real world, noise-free audio cannot be expected unless the soundtracks are processed in advance.

Noise is an unavoidable part of everyday existence. All audio recordings contain some level of noise, and speech recognition programs suffer if the noise isn't properly handled.

Listen to the "jackhammer" audio sample to understand how noise can impair speech recognition, and make sure to save it to the root folder of your interpreter session.
The sound of a jackhammer is heard in the background while the words "the stale scent of old beer remains" are spoken.
Try to translate this file and see what unfolds.
jackmer = sr.AudioFile('jackmer.wav')
with jackmer as source:
    audio = r.record(source)
r.recognize_google(audio)
How wrong!
So, how do you deal with this situation? The Recognizer class has an adjust_for_ambient_noise() method you might want to give a shot.
with jackmer as source:
    r.adjust_for_ambient_noise(source)
    audio = r.record(source)
r.recognize_google(audio)
You're getting closer, but it's still not quite there. Moreover, the statement's first word is missing: "the." How come?

The Recognizer calibrates itself by reading the first second of the audio stream and adjusting to its noise level. That part of the stream is therefore already consumed by the time record() runs.

adjust_for_ambient_noise() takes a duration keyword argument to change the analysis window. The default value is 1 second; let's reduce it to 0.5.
with jackmer as source:
    r.adjust_for_ambient_noise(source, duration=0.5)
    audio = r.record(source)
r.recognize_google(audio)
Now you've got "the" at the start of the sentence, but a whole new set of problems appears. Sometimes the noise simply cannot be removed from the signal because there is too much of it to cope with. That's the case with this particular file.
These problems may necessitate some sound pre-processing if you encounter them regularly. Audio editing programs, which can add filters to the audio, can be used to accomplish this. For the time being, know that background noise can cause issues and needs to be handled to improve voice recognition accuracy.
When working with noisy files, it can be useful to inspect the full API response. Most APIs return JSON text holding several possible transcriptions, but recognize_google() only returns the most likely transcription unless you explicitly request the full response.

You can do so by passing the show_all keyword argument to the recognize_google() method.
r.recognize_google(audio, show_all=True)
recognize_google() then returns a dictionary whose 'alternative' key holds the list of possible transcriptions. The response format varies between APIs, but it's primarily useful for debugging.
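For a sense of what to expect, the dictionary returned by the Google Web Speech API with show_all=True is shaped roughly like this (the transcripts and confidence value below are made up for illustration):

{
    'alternative': [
        {'transcript': 'the stale smell of old beer lingers', 'confidence': 0.92},
        {'transcript': 'the stale smell of old beer lingered'},
    ],
    'final': True,
}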
As you've seen, the Speech Recognition software has a lot to offer. Aside from gaining expertise with the offsets and duration arguments, you also learned about the harmful effects noise has on transcription accuracy.
The fun is about to begin. Make your project dynamic by using a mic instead of transcribing audio clips that don't require any input from the user.
To capture microphone input with SpeechRecognition, you must install the PyAudio library.

Use the command below to install PyAudio on the Raspberry Pi:
sudo apt-get install python-pyaudio python3-pyaudio
Using the console, you can verify that PyAudio is working properly.
python -m speech_recognition
Ensure your mic is turned on and unmuted. This is what you'll see if everything went according to plan:
Let SpeechRecognition translate your voice by talking into your mic and discovering its accuracy.
Create an instance of the Recognizer class in a fresh interpreter session.
import speech_recognition as sr
r = sr.Recognizer()
Instead of an audio file, you'll now use the system mic as your input. Instantiate the Microphone class to access it!
mic = sr.Microphone()
On the Raspberry Pi, you may need to provide a device index to use a particular mic. To list the available microphones, call the Microphone class's static method:

sr.Microphone.list_microphone_names()
Keep in mind that the results may vary from those shown in the examples.
The device index of each mic is its position in the list returned by list_microphone_names(). For example, if the "front" mic sits at index 3 in the output, you would create the Microphone instance like this:
mic = sr.Microphone(device_index=3)
A Mic instance is ready, so let's get started recording.
Similar to AudioFile, Microphone serves as a context manager. Audio is captured through the Recognizer's listen() method inside the with block. This method takes an audio source as its first argument and records input until silence is detected.
with mic as source:
    audio = r.listen(source)
Try saying "hi" into your mic once you've completed the block. Please be patient as the interpreter prompts reappear. Once you hear the ">>>" prompt again, you should be able to hear the voice.
r.recognize_google(audio)
If the prompt never reappears, your mic is probably picking up too much background noise. Press Ctrl+C to halt execution and get your prompt back.
To handle the noise level, use the Recognizer class's adjust_for_ambient_noise() method, just as you did when deciphering the noisy audio file. It's wise to do this whenever you're listening for mic input, since mics are less predictable than audio file sources.
with mic as source:
    r.adjust_for_ambient_noise(source)
    audio = r.listen(source)
After executing the code above, allow adjust_for_ambient_noise() to finish before speaking "hello" into the mic, and wait patiently for the interpreter prompt to reappear before trying to recognize the speech.

Keep in mind that adjust_for_ambient_noise() analyzes the audio input for one second by default. You can shorten this with the duration keyword argument if necessary.

The SpeechRecognition documentation recommends a duration of no less than 0.5 seconds. Sometimes longer durations work better: the lower the ambient noise, the lower the value you need. Sadly, this knowledge is often left out of the development process. In my experience, the default one-second duration is sufficient for most purposes.
Using your interpreter, type in the above code snippet and mutter anything nonsensical into the mic. You may expect a response such as this:
If the API cannot match the speech to text, it throws an UnknownValueError exception. To handle this, you should always wrap API calls in try and except blocks.
Getting the exception thrown may take more effort than you imagine. When it comes to transcribing vocal sounds, the API puts in a lot of time and effort. For me, even the tiniest of noises were translated into words like "how." A cough, claps of the hands, or clicking the tongue would all raise an exception.
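A minimal sketch of such a wrapper, assuming r and audio were created as in the earlier examples:

try:
    text = r.recognize_google(audio)
    print("You said: " + text)
except sr.UnknownValueError:
    # the API could not match the speech to any text
    print("Could not understand the audio")
except sr.RequestError as e:
    # the API was unreachable, or the quota was exceeded
    print("API request failed; {0}".format(e))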
To put what you've learned from the SpeechRecognition library into practice, develop a simple game that randomly selects a phrase from a set of words and allows the player three tries to guess it.
Listed below are all of the scripts:
import random
import time

import speech_recognition as sr


def recognize_speech_from_mic(recognizer, microphone):
    if not isinstance(recognizer, sr.Recognizer):
        raise TypeError("`recognizer` must be `Recognizer` instance")
    if not isinstance(microphone, sr.Microphone):
        raise TypeError("`microphone` must be `Microphone` instance")

    # adjust for ambient noise, then record audio from the mic
    with microphone as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)

    # set up the response object
    response = {
        "success": True,
        "error": None,
        "transcription": None
    }

    # try recognizing the speech in the recording
    try:
        response["transcription"] = recognizer.recognize_google(audio)
    except sr.RequestError:
        # API was unreachable or unresponsive
        response["success"] = False
        response["error"] = "API unavailable"
    except sr.UnknownValueError:
        # speech was unintelligible
        response["error"] = "Unable to recognize speech"

    return response


if __name__ == "__main__":
    WORDS = ["apple", "banana", "grape", "orange", "mango", "lemon"]
    NUM_GUESSES = 3
    PROMPT_LIMIT = 5

    recognizer = sr.Recognizer()
    microphone = sr.Microphone()
    word = random.choice(WORDS)

    instructions = (
        "I'm thinking of one of these words:\n"
        "{words}\n"
        "You have {n} tries to guess which one.\n"
    ).format(words=', '.join(WORDS), n=NUM_GUESSES)

    print(instructions)
    time.sleep(3)

    for i in range(NUM_GUESSES):
        # prompt the user up to PROMPT_LIMIT times for a usable guess
        for j in range(PROMPT_LIMIT):
            print('Guess {}. Speak!'.format(i+1))
            guess = recognize_speech_from_mic(recognizer, microphone)
            if guess["transcription"]:
                break
            if not guess["success"]:
                break
            print("I didn't catch that. What did you say?\n")

        # stop the game if there was an API error
        if guess["error"]:
            print("ERROR: {}".format(guess["error"]))
            break

        print("You said: {}".format(guess["transcription"]))

        guess_is_correct = guess["transcription"].lower() == word.lower()
        user_has_more_attempts = i < NUM_GUESSES - 1

        if guess_is_correct:
            print("Correct! You win!")
            break
        elif user_has_more_attempts:
            print("Incorrect. Try again.\n")
        else:
            print("Sorry, you lose!\nI was thinking of '{}'.".format(word))
            break
Let's analyze this a little bit further.
The recognize_speech_from_mic() function takes two arguments, a Recognizer and a Microphone instance, and returns a dictionary with three keys. The first, "success", indicates whether the API request succeeded. The second, "error", holds either None or a message saying that the API is unavailable or that the speech was unintelligible. Finally, the "transcription" key contains the transcription of the captured audio.

A TypeError is raised if the recognizer or microphone argument is invalid.

The mic's sound is recorded using the listen() method.

The recognizer is re-calibrated with adjust_for_ambient_noise() before every call to recognize_speech_from_mic().

After that, recognize_google() is invoked to transcribe any speech in the recording. The try and except block catches RequestError and UnknownValueError and handles them accordingly. The function then returns the dictionary holding the success flag of the API request, any error message, and the transcribed speech.
In an interpreter window, execute the following code to see if the function works as expected:
import speech_recognition as sr
from guessing_game import recognize_speech_from_mic
r = sr.Recognizer()
m = sr.Microphone()
recognize_speech_from_mic(r, m)
The actual gameplay is quite basic. First, a list of words, a maximum number of allowed guesses, and a prompt limit are declared:

Next, Recognizer and Microphone instances are created, and a random word is selected from WORDS.

After printing some instructions, a for loop is used to handle each of the user's attempts at guessing the chosen word. Inside this outer loop, a second for loop prompts the user at most PROMPT_LIMIT times to speak, each time attempting to recognize the input with recognize_speech_from_mic() and storing the returned dictionary in the local variable guess.

If the "transcription" key of guess is not None, the speech was transcribed and the inner loop ends with break. If the speech was not transcribed and the "success" key is False, an API error occurred, and the loop is again broken with break. Otherwise, the API request succeeded but the speech was unintelligible, so the for loop warns the user and gives them another chance.

If the guess dictionary contains an error after the inner loop ends, the error message is printed and the outer for loop is terminated with break, which stops the program.

If there was no error, the transcription is compared to the randomly chosen word. The lower() method of string objects is used to make the comparison case-insensitive; whether the API returns "Apple" or "apple" for the phrase "apple", it counts as a match.

If the guess was correct, the user wins and the game ends. If the user guessed incorrectly and has attempts remaining, the outer loop continues and a fresh guess is retrieved. Otherwise, the user loses the game.
This is what you'll get when you run the program:
Speech recognition in other languages is entirely doable and incredibly simple.

To use the recognize_google() method in a language other than English, set the language keyword argument to the desired language tag, as shown below.
r = sr.Recognizer()
with sr.AudioFile('path/to/audiofile.wav') as source:
    audio = r.record(source)
r.recognize_google(audio, language='fr-FR')
Only a few of the recognizer methods accept the language keyword argument:
Do you ever have second thoughts about how you're going to pay for future purchases? Has it occurred to you that, in the future, you may be able to pay for goods and services simply by speaking? There's a good chance that will happen soon! Several companies are already developing voice commands for money transfers.
This system allows you to speak a one-time passcode rather than entering a passcode before buying the product. When it comes to online security, think of captchas and other one-time passwords that are read aloud. This is a considerably better option than reusing a password every time. Soon, voice-activated mobile banking will be widely used.
When driving, you may use such Intelligent systems to get navigation, perform a Google search, start a playlist of songs, or even turn on the lights in your home without touching your gadget. These digital assistants are programmed to respond to every voice activation, regardless of the user.
New technologies now enable AI applications to recognize individual users, allowing a device to respond exclusively to the voice of a certain person. Using the iPhone as an example, this has been around for a few years: you can set Siri to respond to your commands and queries only when you speak to it. Unauthorized access to your gadgets, information, and property is far less likely when only your voice can activate your AI assistant, since anyone who is not permitted to use the assistant cannot activate it. Other uses for this technology are almost certainly on the horizon.
Imagine attempting to check into an unfamiliar hotel in a distant country. Neither you nor the front-desk employee speaks the other's language, and no one is available to act as a translator. With a translator device, you can talk into its microphone and have your speech processed and translated, verbally or on screen, to communicate with the other person.
Additionally, this tech can benefit multinational enterprises, educational institutions, or other institutions. You can have a more productive conversation with anyone who doesn't speak your language, which helps break down the linguistic barrier.
In this tutorial, you saw how to install the SpeechRecognition package and use its Recognizer class to recognize audio from both files and the mic. You also learned how to extract segments of an audio recording using the offset and duration keyword arguments of record().

You've seen the recognizer's tolerance for noise adjusted with the adjust_for_ambient_noise() method. You've also learned that Recognizer instances can throw RequestError and UnknownValueError exceptions, and how to handle them with try and except blocks.
More can be learned about speech recognition than what you've just read. We will implement the RTC module integration in our upcoming tutorial to enable real-time control.
Since GPS satellite technology became widely available in the late 1990s, positioning systems have played an increasingly important role in people's lives. Almost everyone now owns a device with positioning capabilities, whether it's a mobile phone, tablet, GPS tracker, or smartwatch with built-in GPS.
Though GPS transformed outdoor positioning, we're now moving on to inside positioning, which will require new technologies. Because the signal is attenuated and scattered by roofs and walls, satellite-based location does not function indoors or on narrow streets. Other technology standards, thankfully, have arisen that enable indoor positioning, albeit with a new form of infrastructure.
Indoor positioning is useful for a variety of purposes for individuals and organizations. From making travel easier to locate what you're looking for, delivering/receiving targeted location-based information, enhancing accessibility, and gaining useful data insights, there's a lot more.
Indoor location relies heavily on BLE beacons. Using this technology, a device can detect when it is within range of a Bluetooth beacon, and can even determine its position if it is within reach of more than two beacons.
The original BLE-based positioning prototypes could only detect which beacon was closest to the user. Today, however, proximity data from multiple beacons can be combined to place the consumer in 2D space on an indoor map. Accuracy varies depending on the situation, but it can be as good as 1.5 meters.

This technology keeps improving, and now draws on magnetic field sensing, gyroscopes, accelerometers, and Near Field Communication circuits to provide exact positioning.
This technology is used by customers and visitors for navigation and receiving location-based content. They do it by installing an app on their smartphone, tablet, or watch. Indoor mapping and location-specific content distribution are common features of the app.
BLE positioning systems are used by businesses to deliver a better experience for their visitors or customers. Almost any form of organization can profit from location-based technologies. For instance:
Organizations utilize the CMS online platform for managing their content, floor maps, and Bluetooth beacon positions. A content management system(CMS) is often a hosted software system that maintains track of every piece of material in the app that users or customers access. Organizations need a fully working CMS because it offers them full control over the material that consumers see.
Low power, low cost, and effectiveness as asset tracking systems are characteristics shared by BLE and UWB. UWB, however, offers significantly more precision than Bluetooth, owing in part to UWB's exact distance-based method of location determination.

BLE commonly locates devices using RSSI (received signal strength indication), which is much less precise because it depends on how weak or strong a device's signal is relative to a Bluetooth beacon or sensor.
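For intuition, a common way to turn an RSSI reading into a rough distance estimate is the log-distance path-loss model sketched below. The calibration constants (the RSSI at one meter and the environment factor n) are assumptions that must be measured per deployment:

def rssi_to_distance(rssi_dbm, measured_power_dbm=-59, n=2.0):
    """Estimate distance in meters from a BLE RSSI reading (very approximate)."""
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * n))

print(round(rssi_to_distance(-71), 1))  # roughly 4.0 m with these constants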
In comparison to UWB, BLE has a substantially lesser range and data rate. Bluetooth, on the other hand, is a widely used RF technology that can be integrated into a variety of indoor settings using flexible hardware, such as BLE beacons, sensors, and asset tags.
In some cases, tracking the position of assets within a workspace is desirable, yet mounting permanent BLE receivers is impractical. Without a device to detect asset location and communicate data back to a cloud service, asset monitoring becomes difficult. It can be avoided by piggybacking on a mobile device's location.
Bluetooth beacons are placed throughout a facility, and a mobile app tracks where each device is at all times, similar to the previous strategy. The app detects nearby assets by marking them with beacons and assigning them to the same position as the device, based on the fixed beacons nearby.

Indoors, BLE beacons offer substantial advantages for tracking people and assets, and integrating this technique with more conventional location services, like GPS or Wi-Fi, has further advantages. Assets with embedded beacons can be identified, and other mobile location technologies can then be used to supply context.

Placing a Bluetooth beacon inside a vehicle, for example, makes it possible to track the whereabouts of mobile workers while they drive. Beacons are also used to track asset locations within an office building, using Wi-Fi-enabled client tracking and asset-tagging beacons.
In a nutshell, Bluetooth will remain a popular RF technology for wireless devices, short-range communication, and indoor positioning. The proliferation of access points with Bluetooth Low Energy beacon and sensor systems incorporated out of the box, along with ever more capable consumer wearables, IoT devices, asset tracking tags, employee badges, and customer Bluetooth trackers, will almost certainly continue and grow.
In computer systems, PLCs, and microcontrollers, all internal processing is done digitally, using a representation of 0s and 1s. So, one may ask: how can analog signals, which change continuously within a specific range, be processed by computers, PLCs, or microcontrollers? That is an exciting question, and its answer opens the door to the aim of this tutorial. Figure 1 shows an example of an analog voltage signal representing a temperature sensor's reading.
Fig. 1: Output voltage of temperature sensor
As you can see in Fig. 1, the output of the temperature sensor is either a voltage signal or a current signal, depending on the type of sensor. The sensor output is applied to the analog input module, which converts the analog signal to a digital one that the PLC can process.
The output signals of sensors, whether voltage or current, should be scaled so they represent the physical signal while having equivalent digital values. For example, suppose the temperature being measured ranges from 0 to 100 °C, and the sensor we use produces an output voltage from 0 to 10 V. Each 0.1 V change in voltage is then equivalent to a 1 °C change. Assume further that the maximum digital value that can be received is 4000, so 0 V is equivalent to a digital value of 0 and 10 V is equivalent to a digital value of 4000. It is therefore crucial to scale the output voltage so that a change in the sensor's reading, in terms of voltage, accurately maps to the temperature's digital value and the real range of the physical signal.
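The arithmetic above is just a linear mapping between two ranges. Here is a minimal Python sketch of it; the scale() helper is hypothetical, written only to mirror what the PLC's scaling block computes:

def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map value from [in_min, in_max] onto [out_min, out_max]."""
    return (value - in_min) * (out_max - out_min) / (in_max - in_min) + out_min

print(scale(5.0, 0.0, 10.0, 0, 4000))     # 5 V -> 2000.0 counts
print(scale(5.0, 0.0, 10.0, 0.0, 100.0))  # 5 V -> 50.0 deg C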
Again, we have two ranges to scale between. The parameters needed to achieve such scaling are as follows:

So we can now picture the scaling function block in ladder logic, shown in Fig. 2. It takes an input minimum and maximum, which could be 0 and 10 V or 4 and 20 mA, plus a scaled minimum and maximum, which could run from 0 to 32767 for a 16-bit conversion.
Fig.2: the SCP block in Allen Bradley
We are going to walk through a complete example of analog input processing in both Siemens and Allen-Bradley (AB) PLCs to present the merits of analog conversion in both brands. Figure 3 shows a primitive rung of a ladder logic program that processes an analog input reading. The rung uses a scale-with-parameters (SCP) block with an input minimum of 4 mA and an input maximum of 20 mA, and a scaled range from 0 to 32767 because it utilizes a 16-bit word to represent the digital data.
Fig. 3: Ladder logic rung for analog input processing
Figure 4 shows a test run of this simple example of analog input processing. When the measured input value was 12 mA, the output was 16384, which is exactly what the linear mapping predicts. It is worth mentioning that the output is limited to the range between the scaled minimum and maximum, i.e., from 0 to 32767.
Fig. 4: Testing analog processing by the simulator
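To check the figure above, here is the same mapping with the SCP parameters, including the output clamp mentioned in the text. This is a hedged sketch, not the vendor's implementation:

def scp(value, in_min=4.0, in_max=20.0, out_min=0, out_max=32767):
    scaled = (value - in_min) * (out_max - out_min) / (in_max - in_min) + out_min
    return max(out_min, min(out_max, round(scaled)))  # clamp to the scaled range

print(scp(12.0))  # 16384, matching the simulator test
print(scp(25.0))  # 32767: out-of-range inputs are clamped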
Let us look at another example, this time in a Siemens S7-1200, represented by Fig. 5. The ladder code is very simple and consists of two rungs, as shown in Fig. 5. The first validates that the reading falls within the expected range, 0 to 27648. The second rung is the main one and performs the analog processing in two steps: first it normalizes the input based on the aforementioned range, then it scales the result to represent it in the output signal format. In this example, we measure a battery voltage, which typically falls in the range of 0 to 12 V. Therefore, the min and max parameters of the SCALE_X block should be 0 and 12 V, respectively.
Fig. 5: Example of processing analog inputs in Siemens S7-1200
Figure 6 demonstrates a test of this analog processing: the output reported 5.858941 when the raw reading was 13499, which shows high accuracy.
Fig. 6: Simulation of processing analog inputs in Siemens S7-1200
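The two-step Siemens approach can be sketched the same way: NORM_X maps the raw integer range 0..27648 onto 0.0..1.0, and SCALE_X stretches that onto the engineering range. The Python helpers below are illustrative stand-ins for the library blocks:

def norm_x(value, vmin=0, vmax=27648):
    return (value - vmin) / (vmax - vmin)        # NORM_X: raw counts -> 0.0..1.0

def scale_x(norm, out_min=0.0, out_max=12.0):
    return norm * (out_max - out_min) + out_min  # SCALE_X: 0.0..1.0 -> volts

print(scale_x(norm_x(13499)))  # about 5.858941 V, matching Fig. 6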
I am truly thrilled you joined me for this important tutorial on analog input processing and scaling with parameters. This is a very common operation in industry: hundreds of sensors read analog physical signals that must be processed by the controller to decide the next stage of an operation. Next time, we are going to talk about jumping and branching techniques in ladder logic programs. So please be ready to meet again soon, learn together, and enjoy practicing PLC ladder programming.
Hello readers, I hope you all are doing great. In this tutorial, we will learn how to interface the PIR sensor to detect motion with the Raspberry Pi Pico module and MicroPython programming language. Later in this tutorial, we will also discuss the interrupts and how to generate an external interrupt with a PIR sensor.
Before interfacing and programming the PIR and Pico boards, let’s first take a quick look at the PIR sensor and how it works.
Fig. 1 Raspberry Pi Pico and PIR sensor
PIR stands for passive infrared, and the PIR module we are using is the HC-SR501. As the name suggests, a passive infrared sensor produces a TTL (transistor-transistor logic) output (either HIGH or LOW) in response to incoming infrared radiation. The HC-SR501 module features a pair of pyroelectric sensors that detect heat energy in the surrounding environment. The two sensors sit beside each other, and when motion changes the signal differential between them, the module drives its output pin HIGH (logic one). In code, this means you wait for the pin to go high; when it does, the desired function can be called.

In the PIR module, a Fresnel lens focuses all incoming infrared radiation onto the PIR sensor.
Fig. 2 PIR motion sensor
The PIR motion sensor has a few setting options available to control or change its behaviour.
The HC-SR501 module has two potentiometers, shown in the image attached below (Fig. 3). One controls the sensing range, or sensitivity, of the module; it can be adjusted to suit the installation location and project requirements. The second potentiometer controls the delay time, i.e., how long the detection output stays active. It can be set anywhere from a few seconds to a few minutes.
Fig. 3 HC-SR501 PIR sensor module
Thermal sensing applications, such as security and motion detection, make use of PIR sensors. They're frequently used in security alarms, motion detection alarms, and automatic lighting applications.
Some of the basic technical specifications of HC-SR501 (PIR) sensor module are:
Table: 1 HC-SR501 technical specification
Fig. 4 Hardware components required
Table: 2 Interfacing HC-SR501 and Pico
Fig. 5 Interfacing PIR with Pico module
Before writing the MicroPython program, make sure you have installed an integrated development environment (IDE) to program the Pico board for interfacing with the PIR sensor module.
There are multiple development environments available to program the Raspberry Pi Pico (RP2040) with MicroPython programming language like VS Code, uPyCraft IDE, Thonny IDE etc.
In this tutorial, we are using Thonny IDE with the MicroPython programming language (as mentioned earlier). We already published a tutorial on how to install the Thonny IDE for Raspberry Pi Pico Programming.
Now, let’s write the MicroPython program to interface the PIR (HC-SR501) and Pico modules and implement motion detection with Raspberry Pi Pico:
The first task is importing the necessary libraries and classes. To connect the data (OUT) pin of the PIR sensor module with Raspberry Pi Pico we can use any of the GPIO pins of the Pico module. So, here we are importing the ‘Pin’ class from the ‘machine’ library to access the GPIO pins of the Raspberry Pi Pico board.
Secondly, we are importing the ‘time’ library to access the internal clock of RP2040. This time module is used to add delay in program execution or between some events whenever required.
Fig. 6 Importing necessary libraries
Next, we are declaring some objects. The ’led’ object represents the GPIO pin to which the LED is connected (representing the status of PIR output) and the pin is configured as an output.
The ‘PirSensor’ object represents the GPIO pin to which the ‘OUT’ pin of HC-SR501 is to be connected which is GPIO_0. The pin is configured as input and pulled down.
Fig. 7 Object declaration
A ‘motion_det()’ function is defined to check the status of the PIR sensor and generate an event in response.
The status of the PIR sensor is observed using the ‘PirSensor.value()’ command. The default status of GPIO_0 is LOW or ‘0’ because it is pulled down. We are using a LED to represent the status of the PIR sensor. Whenever a motion is detected the LED will change its state and will remain in that state for a particular time interval.
If motion is detected, the status of the GPIO_0 pin turns HIGH, or ‘1’; the respective status is printed on the ‘Shell’, and simultaneously the LED connected to GPIO_25 is driven HIGH for 3 seconds. Otherwise, the “no motion” status is printed on the Shell.
Fig. 8 creating a function
Here we are using the ‘while’ loop to continuously run the motion detection function. So, the PIR sensor will be responding to the infrared input continuously with the added delay.
Fig. 9 Main loop
# importing necessary libraries
from machine import Pin
import time

# Object declaration
led = Pin(25, Pin.OUT, Pin.PULL_DOWN)
PirSensor = Pin(0, Pin.IN, Pin.PULL_DOWN)

def motion_det():
    if PirSensor.value() == 1:    # status of PIR output
        print("motion detected")  # print the response
        led.value(1)
        time.sleep(3)
    else:
        print("no motion")
        led.value(0)
        time.sleep(1)

while True:
    motion_det()
Fig. 10 Enable Shell
Fig. 11 Output on Shell
Fig. 12 Motion detected with LED ‘ON’
Now let’s take another example where we will discuss the interrupts with Raspberry Pi Pico.
Interrupts are useful in two situations. In the first, a microcontroller is executing a task, or a sequence of dedicated tasks, while also watching for an event so it can execute the task associated with that event. With an interrupt, instead of continuously polling for the event, the microcontroller can jump directly to the new task the moment the interrupt occurs, putting the regular task on hold in the meantime. This avoids wasting memory and energy.
Fig. 13 Interrupt
In the second case, a microcontroller starts executing a task only when an interrupt occurs; otherwise it remains in standby or low-power mode (as instructed).
In this example, we are going to implement the second case, using the PIR sensor to generate an interrupt. The Raspberry Pi Pico will execute the assigned task only after receiving an interrupt request.
Interrupts can be either external or internal. Internal interrupts are mostly software-generated, for example timer interrupts. External interrupts, on the other hand, are mostly hardware-generated, for example from a push button, motion sensor, temperature sensor, or light detector.
In this example, we use the PIR sensor to generate an external interrupt. Whenever motion is detected, a particular group of LEDs turns ON (HIGH) while the rest of the LEDs stay OFF (LOW). A servo motor is also interfaced with the Raspberry Pi Pico board; the motor starts rotating once an interrupt is detected.

We have already published a tutorial on interfacing a servo motor with the Raspberry Pi Pico. You can follow our site for more details.
Fig. 14 Schematic_2
Now let’s write the MicroPython program to generate an external interrupt for the Raspberry Pi Pico with the PIR sensor.

As in our previous example, the first task is importing the necessary libraries and classes. The modules are the same as before, except for the additional ‘PWM’ class.

The ‘PWM’ class from the ‘machine’ library is used to generate PWM for the servo motor interfaced with the Raspberry Pi Pico board.
Fig. 15 importing libraries
In this example, we are using three different components: a PIR sensor, a servo motor, and some LEDs. Objects are declared for each. The ‘ex_interrupt’ object represents the GPIO pin to which the PIR sensor is connected; the pin is configured as an input and pulled down.

The second object represents the GPIO pin to which the servo motor is connected, and the ‘led_x’ objects represent the GPIO pins to which the peripheral LEDs are connected. Here we are using six peripheral LEDs.
Fig. 16 Object declaration
Fig. 17 PIR output status
Next, we define an interrupt handler function. The parameter in the function represents the GPIO pin that caused the interrupt.

Inside the handler, the variable ‘pir_output’ is set to True; the main while loop acts on this flag whenever an interrupt has occurred.
Fig. 18 Interrupt handling function
The interrupt is attached to the GPIO_0 pin, represented by the ‘ex_interrupt’ variable. The interrupt will be triggered on the rising edge.
Fig. 18 Attaching interrupt
In the function defined to change the servo motor's position, we use pulse-width modulation to set the servo angle. The motor rotates to 180 degrees and then back to 0 degrees.
Fig. 19 defining function for servo
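For reference, the duty values swept in the code below correspond to pulse widths on the 50 Hz signal. A hedged helper for converting an angle to a duty_u16 value might look like this; the 0.5-2.5 ms pulse range is a common assumption, so check your servo's datasheet:

def angle_to_duty(angle, min_ms=0.5, max_ms=2.5, period_ms=20.0):
    """Map a servo angle (0-180 degrees) to an RP2040 duty_u16 value at 50 Hz."""
    pulse_ms = min_ms + (max_ms - min_ms) * angle / 180
    return int(pulse_ms / period_ms * 65535)

# pwm.duty_u16(angle_to_duty(90))  # centre position, roughly a 1.5 ms pulse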
This is the main function, where we call all the previously defined functions; each executes in its assigned sequence whenever an interrupt is detected.
Fig. 20
The MicroPython code to generate an external interrupt with PIR sensor for Raspberry Pi Pico is attached below:
# importing necessary libraries
from machine import Pin, PWM
import time

# Object declaration PIR, PWM and LED
ex_interrupt = Pin(0, Pin.IN, Pin.PULL_DOWN)
pwm = PWM(Pin(1))
led1 = Pin(13, Pin.OUT)
led2 = Pin(14, Pin.OUT)
led3 = Pin(15, Pin.OUT)
led4 = Pin(16, Pin.OUT)
led5 = Pin(17, Pin.OUT)
led6 = Pin(18, Pin.OUT)

# PIR output status
pir_output = False

# setting PWM frequency at 50Hz
pwm.freq(50)

# interrupt handling function
def intr_handler(pin):
    global pir_output
    pir_output = True

# attaching interrupt to GPIO_0
ex_interrupt.irq(trigger=Pin.IRQ_RISING, handler=intr_handler)

# defining LED blinking functions
def led_blink_1():
    # odd-numbered LEDs on, even-numbered off
    led1.value(1)
    led3.value(1)
    led5.value(1)
    led2.value(0)
    led4.value(0)
    led6.value(0)
    time.sleep(0.5)

def led_blink_2():
    # odd-numbered LEDs off, even-numbered on
    led1.value(0)
    led3.value(0)
    led5.value(0)
    led2.value(1)
    led4.value(1)
    led6.value(1)
    time.sleep(0.5)

def servo():
    for position in range(1000, 9000, 50):   # sweep towards 180 degrees
        pwm.duty_u16(position)
        time.sleep(0.00001)  # delay
    for position in range(9000, 1000, -50):  # sweep back to 0 degrees
        pwm.duty_u16(position)
        time.sleep(0.00001)  # delay

def motion_det():
    global pir_output
    if pir_output:                # status of PIR output
        print("motion detected")  # print the response
        led_blink_1()
        servo()                   # rotate servo motor (180 degrees and back)
        time.sleep(0.5)
        pir_output = False        # reset the flag (the original '==' was a bug)
    else:
        print("no motion")
        led_blink_2()

while True:
    motion_det()
The results observed are attached below:
Fig. 22 Output printed on Shell
Fig. 23 Motion detected
Fig. 24 No motion detected
In this tutorial, we discussed how to interface the HC-SR501 PIR sensor with Raspberry Pi Pico and detect motion, using the Thonny IDE and the MicroPython programming language. We also discussed interrupts and how to generate them using the HC-SR501 sensor.
This concludes the tutorial. I hope you found it helpful, and I hope to see you soon with a new tutorial on Raspberry Pi Pico programming.
Where To Buy?
| No. | Components | Distributor | Link To Buy |
| --- | --- | --- | --- |
| 1 | Breadboard | Amazon | Buy Now |
| 2 | DC Motor | Amazon | Buy Now |
| 3 | Jumper Wires | Amazon | Buy Now |
| 4 | Raspberry Pi 4 | Amazon | Buy Now |
A growing number of us already use face recognition software without realizing it. Facial recognition is used in several applications, from basic Facebook tag suggestions to advanced security screening surveillance. Chinese schools employ facial recognition to track students' attendance and behaviour. Retail stores use face recognition to classify their clients and identify those who have a history of crime. There's no denying that this tech will be everywhere soon, especially with so many other developments in the works.
When it comes to facial recognition, biometric authentication goes well beyond simply being able to detect human faces in images or videos. An additional step is taken to establish the person's identity. Facial recognition software compares an image of a person's face to a database to see if the features match. The technology is built so that changes in facial expression or hairstyle do not affect its ability to find a match.
How can face recognition be used when it comes to smart security systems?
The first thing you should do if you want to make your home "smart" is to focus on security. Your most prized possessions are housed at this location, and protecting them is a must. You can monitor your home security status from your computer or smartphone thanks to a smart security system when you're outdoors.
Traditionally, a security company would install a wired system in your house and sign you up for professional monitoring. The plot has been rewritten: when setting up a smart home system, you can do it yourself. In addition, your smartphone acts as a professional monitor, providing you with real-time information and notifications.
Face recognition is the ability of a smart camera in your house to identify a person based on their face. Consequently, you have to tell the algorithm which face goes with which name for face recognition to operate. Facial detection in security systems necessitates the creation of user accounts for family members, acquaintances, and others you want the system to identify. You will then be alerted when they arrive at your doors or move inside your house.
Face-recognition technology allows you to create specific warning conditions. For example, you can configure a camera to inform you when an intruder enters your home with a face the camera doesn't recognize.
Astonishing advancements in smart tech have been made in recent years. Companies are increasingly offering automatic locks with face recognition. You may open your doors just by smiling at a face recognition system door lock. You could, however, use a passcode or a real key to open and close the smart door. You may also configure your smart house lock to email you an emergency warning if someone on the blacklist tries to unlock your smart security door.
OpenCV, as previously stated, will be used to detect and recognize faces. So, before continuing, let's set up the OpenCV library. Your Pi 4 needs a 2A power adapter and an HDMI cable, since we won't be accessing the Pi's screen through SSH. The OpenCV documentation is a good place to learn how image processing works, but I'm not going to go into it here.
pip is well-known for making it simple to add new libraries to Python. There is a way to install OpenCV on a Raspberry Pi via pip, but it didn't work for me. We can't obtain complete control of the OpenCV library when using pip to install OpenCV; however, this might be worth a go if time is of the essence.
Ensure pip is set up on your Raspberry Pi. Then, one by one, execute the lines of code listed below into your terminal.
sudo apt-get install libhdf5-dev libhdf5-serial-dev
sudo apt-get install libqtwebkit4 libqt4-test
sudo pip install opencv-contrib-python
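Once the installation finishes, a quick sanity check is to import the library and print its version; any version string means the install worked:
python -c "import cv2; print(cv2.__version__)"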
Facial recognition and face detection are not the same thing, and this must be clarified before we proceed. With face detection, the program only detects that a face is present; it has no clue who that person is. With facial recognition, the face is not only detected but also identified. It's pretty evident, then, that face detection comes before facial recognition.
Essentially, a webcam feed is a long series of continuously updating still photos. And every image is nothing more than a jumble of pixels with varying values arranged in a specific order. So, how does computer software identify a face among all of these random pixels? Describing the underlying techniques is outside the scope of this post, but since we're utilizing the OpenCV library, facial recognition is a straightforward process that doesn't necessitate a deeper understanding of the underlying principles.
We can only recognize a person's face if we have first detected it. For detecting an object, including a face, OpenCV provides Classifiers: pre-trained datasets that may be utilized to recognize a certain item, such as a face. Classifiers may also detect additional objects, such as the mouth, the eyebrows, the number plate of a vehicle, and smiles.
Alternatively, OpenCV allows you to design your custom Classifier for detecting any objects in images by retraining the cascade classifier. For the sake of this tutorial, we'll be using the classifier named "haarcascade_frontalface_default.xml" to identify faces from the camera. We'll learn more about image classifiers and how to apply them in code in the following sections.
For face training and detection, we only need the Pi camera. To install it, insert the Raspberry Pi camera in the camera slot as shown below. Then go to your terminal, open the configuration window using "sudo raspi-config", and press enter. Navigate to the interface options and activate the Pi camera module. Accept the changes and finish the setup. Then reboot your RPi.
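After rebooting, you can optionally confirm the camera works before going further; on the legacy camera stack, capturing a test still is enough of a check:
raspistill -o test.jpg
If a preview flashes and test.jpg appears, the camera module is ready.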
First, ensure pip is set up, and then install the following packages using it.
Install dlib: Dlib is a set of libraries for building ML and data analysis programs in the real world. To get dlib up and running, type the following command into your terminal window.
pip install dlib
If everything goes according to plan, you should see something similar after running this command.
Install pillow: The Python Image Library, generally known as PIL, is a tool for opening, manipulating, and saving images in various formats. The following command will set up PIL for you.
pip install pillow
You should receive the message below once this app has been installed.
Install face_recognition: The face recognition package is often the most straightforward tool for detecting and manipulating human faces. Face recognition will be made easier with the help of this library. Installing this library is as simple as running the provided code.
pip install face_recognition --no-cache-dir
If all goes well, you should see something similar to the output shown below after the software is installed. Due to its size, I used the "--no-cache-dir" command-line option to install the package without keeping any of its cache files.
The classifier file named "haarcascade_frontalface_default.xml" is used for detecting faces. The training script will also build a "face-trainner.yml" file based on the photos found in the face images directory.
The face images folder indicated above should contain subdirectories named after each person to be identified, each holding several sample photographs of that person. Esther and x are the people to be identified in this tutorial. As a result, I've just generated the two sub-directories shown below, each containing a single image.
You must rename the directories and replace the photographs with the names and faces of the people you are identifying. A minimum of five images for each individual appears to be optimal. However, the more people, the slower the software will run.
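For clarity, the expected layout looks roughly like this (the file names here are just placeholders):
Face_Images/
    Esther/
        esther_1.jpg
        esther_2.jpg
    x/
        x_1.jpg
        x_2.jpg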
Face Trainer.py is the Python program used to train new faces. Its purpose is to open the face images folder and scan it for faces. As soon as it detects a face, it crops it, converts it to grayscale, and saves the result in a file named face-trainner.yml using the face recognition package we loaded previously. The information in this file can be used to identify the faces later. The whole trainer program is provided at the end; first, we'll go over some of the more critical lines.
The first step is to import the necessary modules. The cv2 package is utilized to process images, the NumPy library is used for image conversion, the os package is used for directory navigation, and PIL is used to open images.
import cv2
import numpy as np
import os
from PIL import Image
Ensure that the XML file in question is located in the project directory to avoid encountering an issue. The LBPH face recognizer is then constructed and stored in the recognizer variable.
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()  # in recent opencv-contrib builds the recognizer lives in the cv2.face module
Face_Images = os.path.join(os.getcwd(), "Face_Images")
In order to open all of the files ending in .jpeg, .jpg, or .png within every subfolder of the face images folder, we traverse the directory tree with for loops. We record the path to every image in a variable named path, and the folder name (the name of the person in the images) in a variable named person_name.
for root, dirs, files in os.walk(Face_Images):
    for file in files:  # check every file in every directory
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)
Whenever the name of the person changes, we increment a variable named Face_ID, which gives us a unique Face_ID for each individual.
            if pev_person_name != person_name:
                Face_ID = Face_ID + 1  # if the name changed, increment the ID count
                pev_person_name = person_name
Grayscale photos are simpler for OpenCV to deal with than colour ones because the BGR values can be ignored. We therefore convert the image to grayscale and then resize it to 550x550 so that all the pictures are the same size. To avoid having your face cut off, place it in the centre of the photo. We then convert the picture into a NumPy array to get a numerical representation of it. Afterwards, the classifier detects a face in the photo and saves the result in a variable named faces.
            Grey_Image = Image.open(path).convert("L")
            Crop_Image = Grey_Image.resize((550, 550), Image.ANTIALIAS)
            Final_Image = np.array(Crop_Image, "uint8")
            faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)
Our region of interest (ROI) is the portion of the image where the face is found, cropped out of the full picture. The ROI is used to train the face recognizer. Every ROI face is appended to a variable named x_train. We then feed the recognizer with our training data, using the ROI values and the Face_ID data. The gathered information is saved to disk.
            for (x, y, w, h) in faces:
                roi = Final_Image[y:y+h, x:x+w]
                x_train.append(roi)
                y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.write("face-trainner.yml")  # write() in recent OpenCV; older builds used save()
You'll notice that the face-trainner.yml file is rewritten whenever you run this program. If you make any modifications to the photographs in the Face_Images folder, make sure to run this code again. For debugging purposes, we print out the Face_ID, the path, the person's name, and the NumPy arrays.
Now that our trained data has been prepared, we can begin using it to identify people. We'll use a USB webcam or Pi camera to feed video into the face recognizer application, frame by frame. Once we've found the faces in those frames, we'll compare them with all the Face IDs we developed previously. Finally, we output the identified person's name in a box around their face. The whole program is presented afterwards, and the explanation is below.
Import the required modules from the training program and load the classifier, because we need to do facial detection again in this program.
import cv2
import numpy as np
import os
from time import sleep
from PIL import Image
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()  # cv2.face module in recent opencv-contrib builds
The people listed in the folder should be entered in the variable named labels. Make sure to keep the same order as the training IDs. In my case, it is "Esther" and "Unknown".
labels = ["Esther", "Unknown"]
We need the trainer file to recognize faces, so we load it into our software.
recognizer.read("face-trainner.yml")  # read() in recent OpenCV; older builds used load()
The camera provides the video stream. It's possible to access a second camera by replacing 0 with 1.
cap = cv2.VideoCapture(0)
In the next step, we split the footage into frames and transform each frame to grayscale, and afterwards we search for a face in it. Once a face is detected, we crop out that area (the grayscale region of interest) for recognition.
ret, img = cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
for (x, y, w, h) in faces:
    roi_gray = gray[y:y+h, x:x+w]
    id_, conf = recognizer.predict(roi_gray)
The conf value tells us how sure the recognizer is about its identification. If the confidence threshold is met, we use the code below to get the person's name from the labels list based on their ID number. A box is drawn around the user's face, with their name written above it.
    if conf >= 80:
        font = cv2.FONT_HERSHEY_SIMPLEX
        name = labels[id_]
        cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
        cv2.rectangle(img, (x, y), (x+w, y+h), (0, 0, 255), 2)
Finally, we display the processed video frame and break out of the loop when the 'q' key is pressed (via cv2.waitKey).
cv2.imshow('Preview', img)
if cv2.waitKey(20) & 0xFF == ord('q'):
    break
While running this application, ensure the Raspberry Pi is linked to a display via HDMI. A window with your video stream will appear when you open the application. There will be a box around any face identified in the video feed, and if the software recognizes the face, it displays that person's name. As evidenced by the image below, we've trained our software to identify my face, which shows the recognition process in action.
import cv2
import numpy as np
import os
from PIL import Image

labels = ["Esther", "Unknown"]
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("face-trainner.yml")
cap = cv2.VideoCapture(0)

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)  # detect faces
    for (x, y, w, h) in faces:
        roi_gray = gray[y:y+h, x:x+w]
        id_, conf = recognizer.predict(roi_gray)
        if conf >= 80:
            font = cv2.FONT_HERSHEY_SIMPLEX
            name = labels[id_]
            cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
            cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow('Preview', img)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
import cv2
import numpy as np
import os
from PIL import Image

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()

Face_ID = -1
pev_person_name = ""
y_ID = []
x_train = []

Face_Images = os.path.join(os.getcwd(), "Face_Images")
print(Face_Images)

for root, dirs, files in os.walk(Face_Images):
    for file in files:
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)
            print(path, person_name)
            if pev_person_name != person_name:
                Face_ID = Face_ID + 1
                pev_person_name = person_name
            Grey_Image = Image.open(path).convert("L")
            Crop_Image = Grey_Image.resize((550, 550), Image.ANTIALIAS)
            Final_Image = np.array(Crop_Image, "uint8")
            faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)
            print(Face_ID, faces)
            for (x, y, w, h) in faces:
                roi = Final_Image[y:y+h, x:x+w]
                x_train.append(roi)
                y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.write("face-trainner.yml")
Since the "How to operate DC motor in Rpi 4" guide has covered the basics of controlling a DC motor, I won't provide much detail here. Please read this topic if you haven't already. Check all the wiring before using the batteries in your circuit, as outlined in the image above. Everything must be in place before connecting your breadboard's power lines to the battery wires.
To activate the motors, open the terminal because you'll use the Python code-writing program called Nano in this location. For those of you who aren't familiar with the command-line text editor known as Nano, I'll show you how to use some of its commands as we go.
This code will activate the motor for two seconds, so try it out.
import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BOARD)

Motor1A = 16
Motor1B = 18
Motor1E = 22

GPIO.setup(Motor1A, GPIO.OUT)
GPIO.setup(Motor1B, GPIO.OUT)
GPIO.setup(Motor1E, GPIO.OUT)

print("Turning motor on")
GPIO.output(Motor1A, GPIO.HIGH)
GPIO.output(Motor1B, GPIO.LOW)
GPIO.output(Motor1E, GPIO.HIGH)

sleep(2)

print("Stopping motor")
GPIO.output(Motor1E, GPIO.LOW)

GPIO.cleanup()
The first two lines of code tell Python what the program needs.
The first line imports the RPi.GPIO package, the module that controls the RPi's GPIO pins and takes care of all the grunt work.
The second line imports sleep, which lets the script pause for a few seconds so the motor is left running for a while.
The setmode method tells the library to use the RPi's physical board pin numbering. We then tell Python that pins 16, 18 and 22 correspond to the motor.
Pin A is used to steer the L293D in one way, and pin B is used to direct it in the opposite direction. You can turn on the motor using an Enable pin, referred to as E, inside the test file.
Finally, use GPIO.OUT to inform the RPi that all these pins are outputs.
The RPi is ready to turn the motor once the setup is done. As seen in the code, some pins are turned on, and after a 2-second pause, turned off again.
Save and quit by hitting CTRL-X; a confirmation notice appears at the bottom. To acknowledge, tap Y and Return. You can now run the program in the terminal and watch the motor begin to spin up.
sudo python motor.py
If the motor doesn't move, check the cabling or power supply. The debug process might be a pain, but it's an important phase in learning new things!
I'll teach you how to reverse a motor's rotation to spin in the opposite direction.
There's no need to touch the wiring at this point; it's all Python. Create a new script called motorback.py to accomplish this. Using Nano, type the command:
nano motorback.py
Please type in the given program:
import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BOARD)

Motor1A = 16
Motor1B = 18
Motor1E = 22

GPIO.setup(Motor1A, GPIO.OUT)
GPIO.setup(Motor1B, GPIO.OUT)
GPIO.setup(Motor1E, GPIO.OUT)

print("Going forwards")
GPIO.output(Motor1A, GPIO.HIGH)
GPIO.output(Motor1B, GPIO.LOW)
GPIO.output(Motor1E, GPIO.HIGH)
sleep(2)

print("Going backwards")
GPIO.output(Motor1A, GPIO.LOW)
GPIO.output(Motor1B, GPIO.HIGH)
GPIO.output(Motor1E, GPIO.HIGH)
sleep(2)

print("Now stop")
GPIO.output(Motor1E, GPIO.LOW)
GPIO.cleanup()
Save by pressing CTRL-X, then Y, and finally the Enter key.
To go in reverse, we've simply set Motor1A low in this script.
Programmers use the terms "high" and "low" to denote the on and off states, respectively.
Motor1E is turned off to halt the motor.
Irrespective of what A and B are doing, the motor can be turned on or off using the Enable pin.
Take a peek at the truth table below to understand better what's going on.
When Enable is high, only two states make the motor move: either A or B is high, but not both at the same time.
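The truth table itself is not reproduced in the text, so here is the standard L293D behaviour, laid out to match this tutorial's pin naming:
| Enable (E) | Input A | Input B | Motor |
| --- | --- | --- | --- |
| HIGH | HIGH | LOW | Spins forwards |
| HIGH | LOW | HIGH | Spins backwards |
| HIGH | LOW | LOW | Stopped |
| HIGH | HIGH | HIGH | Stopped |
| LOW | any | any | Off |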
At this point, we have designed our face detection system and the DC motor control circuit; now we will put the two systems to work together. When the user is verified, the DC motor should run to open the CD-ROM drive and close it after a few seconds.
In our verify code, we will copy the code below to spin the motor in one direction ("open the door") when the user is verified. We will also increase the time to 5 seconds to simulate the time the door stays open for the user to get through. This also allows the motor to spin long enough to open and close the CD-ROM tray completely. I would also recommend putting a stopper on the CD-ROM door so that it doesn't close all the way and get stuck.
    if conf >= 80:
        font = cv2.FONT_HERSHEY_SIMPLEX
        name = labels[id_]  # get the name from the list using the ID number
        cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
        # place our motor code here (requires "import RPi.GPIO as GPIO" and
        # "from time import sleep" at the top of the script)
        GPIO.setmode(GPIO.BOARD)
        Motor1A = 16
        Motor1B = 18
        Motor1E = 22
        GPIO.setup(Motor1A, GPIO.OUT)
        GPIO.setup(Motor1B, GPIO.OUT)
        GPIO.setup(Motor1E, GPIO.OUT)
        print("Opening")
        GPIO.output(Motor1A, GPIO.HIGH)
        GPIO.output(Motor1B, GPIO.LOW)
        GPIO.output(Motor1E, GPIO.HIGH)
        sleep(5)
        print("Closing")
        GPIO.output(Motor1A, GPIO.LOW)
        GPIO.output(Motor1B, GPIO.HIGH)
        GPIO.output(Motor1E, GPIO.HIGH)
        sleep(5)
        print("stop")
        GPIO.output(Motor1E, GPIO.LOW)
        GPIO.cleanup()
        cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
An individual's biometric identity can be verified by looking at various physical and behavioural characteristics, such as fingerprints, keystrokes, facial characteristics, and voice. Face recognition seems to be the winner because of its precision, simplicity, and contactless detection.
Face-recognition technology is here to stay and will get better over time. The story has evolved, and your alternatives have grown thanks to smart tech.
Using an RPi as a surveillance system means you can take it with you and use it wherever you need it.
For the most part, the face-recognition software employed in security systems can reliably assess whether or not the individual attempting entry matches your record of those authorized to enter. That said, some programs are more precise than others when it comes to identifying faces from diverse angles or across diverse demographics.
Concerned users may be relieved to learn that some programs have the option of setting custom confidence criteria, which can significantly minimize the likelihood of the system giving false positives. Alternatively, 2-factor authentication can be used to secure your account.
When your smart security system discovers a match between a user and the list of persons you've given access to, it will instantly let them in. Answering the doorbell or allowing entry isn't necessary.
Face recognition solutions can be readily integrated into existing systems using an API.
A major drawback of face recognition technology is that it puts people's privacy at risk. Having one's face collected and stored in an unidentified database does not sit well with the average person.
Confidentiality is so important that several towns have prohibited law enforcement from using real-time face recognition monitoring. Rather than using live face recognition software, authorities can use records from privately-held security cameras in certain situations.
Having your face captured and stored by face recognition software might make you feel monitored and assessed for your actions. It is a form of criminal profiling since the police can use face recognition to put everybody in their databases via a virtual crime lineup.
This article walked us through creating a complete smart security system with a facial recognition program from the ground up. Our model can now recognize faces with the help of OpenCV image manipulation techniques. There are several ways to further your knowledge of supervised machine learning programming with Raspberry Pi 4, including adding an alarm that rings whenever an individual's face is not recognized, or creating a database of known faces to act like a CCTV surveillance system. We'll design a security system with a motion detector and an alarm in the next session.
First, we will design a database for our website, then we will design the RFID circuit for scanning the student cards and displaying present students on the webpage, and finally, we will design the website that we will use to display the attendees of a class.
Where To Buy?
| No. | Components | Distributor | Link To Buy |
| --- | --- | --- | --- |
| 1 | Breadboard | Amazon | Buy Now |
| 2 | Jumper Wires | Amazon | Buy Now |
| 3 | LCD 16x2 | Amazon | Buy Now |
| 4 | LCD 16x2 | Amazon | Buy Now |
| 5 | Raspberry Pi 4 | Amazon | Buy Now |
Additionally, the database server offers a DBMS that can be queried, connected to, and integrated with a wide range of platforms. High-volume production environments are no problem for this software. The server's connectivity, speed, and encryption make it a good choice for accessing the database.
There are clients and servers for MySQL. The system contains a multi-threaded SQL server that supports a wide range of back ends, utility programs, and application programming interfaces.
We'll walk through the process of installing MySQL on the RPi in this part. The RFID kit's database resides on this server, and we'll utilize it to store the system's signed users.
There are a few steps to complete before installing MySQL on the Raspberry Pi: first, update the package list and upgrade any outdated packages using the two commands below.
sudo apt update
sudo apt upgrade
Installing the server software is the next step.
Here's how to get the MySQL-compatible MariaDB server running on the RPi using the command below:
sudo apt install mariadb-server
Having installed MySQL on the Raspberry Pi, we'll need to protect it by creating a passcode for the "root" account.
If you don't specify a password for your MySQL server, you can access it without authentication.
Using this command, you may begin safeguarding MySQL.
sudo mysql_secure_installation
Follow the on-screen instructions to set a passcode for the root account and safeguard your MySQL database.
To ensure a more secure installation, select "Y" for all yes/no questions.
Remove elements that make it easy for anyone to access the database.
We may need that password to access the server and set up the database and user for applications like PHPMyAdmin.
For now, you can use this command if you wish to access the Rpi's MySQL server and begin making database modifications.
sudo mysql -u root -p
To access MySQL, you'll need to enter the root user's password, which you created in Step 3.
Note: the text will not appear as you type, as is typical with Linux password prompts.
Create, edit, and remove databases with MYSQL commands now available. Additionally, you can create, edit, and delete users from inside this interface and provide them access to various databases.
After typing "quit;" into MySQL's user interface, you can exit the command line by pressing the ESC key.
Pressing CTRL + D will also exit the MYSQL command line.
You may proceed to the next step now that you've successfully installed MySQL. In the next few sections, we'll discuss how to get the most out of our database.
Before we can create a username and database on the RPi, we must open the MySQL command prompt again.
The MySQL command prompt can be accessed by typing the following command; after that, you will be asked for the password of the "root" user you created earlier.
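sudo mysql -u root -p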
To get things started, run the command to create a MySQL database.
The syntax to create a database is "CREATE DATABASE", followed by the name we would like to give it.
The database is referred to as "rfidcardsdb" in this example:
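CREATE DATABASE rfidcardsdb;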
To get started, we'll need to create a MySQL user. The command below can be used to create this new user.
"rfidreader" and "password" will be the username and password for this example. Take care to change these when making your own.
CREATE USER 'rfidreader'@'localhost' IDENTIFIED BY 'password';
Now that the user has been created, we can give it full access to the database.
Thanks to this command, "rfidreader" will have access to all tables in our "rfidcardsdb" database.
GRANT ALL PRIVILEGES ON rfidcardsdb.* TO 'rfidreader'@'localhost';
To complete the database and user setup, we have to flush the privilege table one last time; without flushing it, the new permissions will not take effect.
The command below can be used to accomplish this.
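FLUSH PRIVILEGES;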
Now our database is configured, and the next step is to set up the RFID circuit and begin authenticating users. Enter the "exit" command to close the database configuration session.
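Before moving on, note that the registration and attendance scripts later in this tutorial query a table named cardtbl. Its creation is not shown in the steps above, so here is a minimal sketch of a schema matching the columns those scripts use (the column types are assumptions):
CREATE TABLE cardtbl (
    card_id INT AUTO_INCREMENT PRIMARY KEY,
    serial_no VARCHAR(50),
    user_id VARCHAR(50),
    valid TINYINT(1) DEFAULT 1
);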
An RFID reader reads a tag's data when the tag, attached to a certain object, is brought near the reader. An RFID tag communicates with the reader via radio waves.
In theory, RFID is comparable to bar codes in function. While a reader's line of sight to the RFID tag is preferable, it is not required; the tag does not have to be directly visible to be scanned. However, you can't read an RFID tag of this kind from more than about three feet away. RFID tech is used to quickly scan large numbers of objects, making it possible to identify a specific product rapidly and effortlessly, even when it is sandwiched between several other items.
Cards and tags have two major parts: an IC that holds the unique identifier value, and a copper coil that serves as the antenna:
Another coil of copper wire can be found inside the RFID card reader. When current passes through this coil, it generates a magnetic field. When the card is brought near the reader, the reader's magnetic flux induces a current in the card's coil, and this current is enough to power the card's built-in IC. The reader then reads the card's unique identifying number and transmits it to a controller or CPU, such as the Raspberry Pi, for further processing.
Connect the reader to the Raspberry Pi in the following way:
Use the command below and check that spi_bcm2835 is displayed in the terminal output:
lsmod | grep spi
SPI must be enabled in the configuration for spi_bcm2835 to appear (see above). Make sure the RPi is running the most recent software.
Make sure Python is installed:
sudo apt-get install python
The RFID RC522 is interacted with using the SPI-Py library. Clone and install it on your RPi, then fetch the MFRC522 Python library:
cd ~
git clone https://github.com/lthiery/SPI-Py.git
cd ~/SPI-Py
sudo python setup.py install
cd ~
git clone https://github.com/pimylifeup/MFRC522-python.git
To test if the system is functioning correctly, let's write a small program:
cd ~/
sudo nano test.py
Now copy the following code into the editor:
import RPi.GPIO as GPIO
import sys
sys.path.append('/home/pi/MFRC522-python')
from mfrc522 import SimpleMFRC522

reader = SimpleMFRC522()
print("Hold a tag near the reader")
try:
    id, text = reader.read()
    print(id)
    print(text)
finally:
    GPIO.cleanup()
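Save the file, then run the script and hold a tag near the reader; the tag's ID and any stored text should be printed:
sudo python test.py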
Here we will write a short Python program to register users whenever they swipe a new card on the RFID card reader. First, create a file named addcard.py and copy the following code into it.
import pymysql
from mfrc522 import SimpleMFRC522
import RPi.GPIO as GPIO
import drivers

display = drivers.Lcd()
display.lcd_display_string('Scan your', 1)
display.lcd_display_string('card', 2)

reader = SimpleMFRC522()
id, text = reader.read()

display = drivers.Lcd()
display.lcd_display_string('Type your name', 1)
display.lcd_display_string('in the terminal', 2)
user_id = input("user name?")

# format the serial number as a string
serial_no = '{}'.format(id)

# open an SQL session
sql_con = pymysql.connect(host='localhost', user='rfidreader', passwd='password', db='rfidcardsdb')
sqlcursor = sql_con.cursor()

# first thing is to check if the card exists
sql_request = 'SELECT card_id,user_id,serial_no,valid FROM cardtbl WHERE serial_no = "' + serial_no + '"'
count = sqlcursor.execute(sql_request)
if count > 0:
    print("Error! RFID card {} already in database".format(serial_no))
    display = drivers.Lcd()
    display.lcd_display_string('The card is', 1)
    display.lcd_display_string('already registered', 2)
    T = sqlcursor.fetchone()
    print(T)
else:
    sql_insert = 'INSERT INTO cardtbl (serial_no,user_id,valid) ' + \
                 'values("{}","{}","1")'.format(serial_no, user_id)
    count = sqlcursor.execute(sql_insert)
    if count > 0:
        sql_con.commit()
        # let's check it just in case
        count = sqlcursor.execute(sql_request)
        if count > 0:
            print("RFID card {} inserted to database".format(serial_no))
            T = sqlcursor.fetchone()
            print(T)
            display = drivers.Lcd()
            display.lcd_display_string('Congratulations', 1)
            display.lcd_display_string('You are registered', 2)
GPIO.cleanup()
The program starts by asking the user to scan the card.
Then it connects to the database using the pymysql.connect function.
If we enter our name successfully, the program inserts our details into the database and displays a congratulations message to show that we are registered.
To install the LCD driver library, run:
sudo apt install git
cd /home/pi/
git clone https://github.com/the-raspberry-pi-guy/lcd.git
cd lcd/
sudo ./install.sh
After installation is complete, change into the library folder and try running one of the demo program files:
cd /home/pi/lcd/
Next, we will install the mfrc522 library, which the RFID card reader uses. This will enable us to read the card number for authentication:
pip install mfrc522
Next, we will import the RPi.GPIO library, which enables us to utilize the Raspberry Pi pins to drive the RFID card reader and the LCD screen.
import RPi.GPIO as GPIO
We will also import the drivers for our LCD screen. The LCD screen used here is the I2C 16x2 LCD.
import drivers
Then we will import datetime for logging the time the user swiped the card into the system.
import datetime
In order to read the card using the RFID reader, we will use the following code:
reader = SimpleMFRC522()
display = drivers.Lcd()
display.lcd_display_string('Hello Please', 1)
display.lcd_display_string('Scan Your ID', 2)
try:
    id, text = reader.read()
    print(id)
    display.lcd_clear()
finally:
    GPIO.cleanup()
The LCD is divided into two rows, 1 and 2. To display text in the first row, we use:
display.lcd_display_string("string", 1)
and pass 2 instead to display in the second row.
After scanning the card, we will connect to the database we created earlier and search whether the scanned card is in the database or not.
If the query finds the card, we display that it is in the database; if not, the user first needs to register the card before proceeding.
If the user is registered, the system saves the log (the username and the time the card was swiped) in a text file located in the /var/www/html root directory of the Apache server.
Note that you will need to be a superuser to create the data.txt file in the Apache root directory. For this, we will use the following command in the html folder:
sudo touch data.txt
Then we have to change the access privileges of this data.txt file so the program can write the log data to it. For this, we will use the following command:
sudo chmod 777 data.txt
The next step will be to display this data on a webpage to simulate an online attendance register. The code for the RFID card can be found below.
#! /usr/bin/env python
# Import necessary libraries for communication and display use
import RPi.GPIO as GPIO
from mfrc522 import SimpleMFRC522
import pymysql
import drivers
import os
import numpy as np
import datetime

# read the card using the RFID reader
reader = SimpleMFRC522()
display = drivers.Lcd()
display.lcd_display_string('Hello Please', 1)
display.lcd_display_string('Scan Your ID', 2)
try:
    id, text = reader.read()
    print(id)
    display.lcd_clear()
    # Load the driver and set it to "display"
    # If you use something from the driver library use the "display." prefix first
    try:
        sql_con = pymysql.connect(host='localhost', user='rfidreader', passwd='password', db='rfidcardsdb')
        sqlcursor = sql_con.cursor()
        # first thing is to check if the card exists
        cardnumber = '{}'.format(id)
        sql_request = 'SELECT user_id FROM cardtbl WHERE serial_no = "' + cardnumber + '"'
        now = datetime.datetime.now()
        print("Current date and time: ")
        print(str(now))
        count = sqlcursor.execute(sql_request)
        if count > 0:
            print("already in database")
            T = sqlcursor.fetchone()
            print(T)
            for i in T:
                print(i)
                file = open("/var/www/html/data.txt", "a")
                file.write(i + " Logged at " + str(now) + "\n")
                file.close()
                display.lcd_display_string(i, 1)
                display.lcd_display_string('Logged In', 2)
        else:
            display.lcd_clear()
            display.lcd_display_string("Please register", 1)
            display.lcd_display_string(cardnumber, 2)
    except KeyboardInterrupt:
        # If there is a KeyboardInterrupt (when you press ctrl+c), exit the program and cleanup
        print("Cleaning up!")
        display.lcd_clear()
finally:
    GPIO.cleanup()
Now we are going to design a simple website with HTML that will display the information of the attending students of a class. To do this, we have to install a local server on our Raspberry Pi.
Web, database, and mail servers all run on various server software. Each of these programs can access and utilize files located on a physical server.
A web server's main responsibility is to provide internet users access to various websites. It serves as a bridge between a server and a client machine: each time a user makes a request, it retrieves data from the server and serves it on the web.
A web server's biggest challenge is to serve many web users simultaneously, each of whom requests a separate page.
It converts the requested files to HTML pages and offers them in the browser. Whenever you hear the term "webserver", think of the device in charge of ensuring successful communication in a network of computers.
Among its responsibilities is establishing a link between a server and a client's web browser (such as Chrome) to send and receive data (client-server structure). Because it is cross-platform, the Apache software can be used on any system, from Microsoft Windows to Unix.
Visitors to your website, such as those who wish to view your homepage or "About Us" page, request files from your server via their browser, and Apache returns the required files in a response (text, images, etc.).
Using HTTP, the client and server exchange data with the Apache webserver, ensuring that the connection is safe and stable.
Because of its open-source foundation, Apache promotes a great deal of customization. As a result, web developers and end-users can customize the source code to fit the needs of their respective websites.
Additional server-side functionality can be enabled or disabled using Apache's numerous modules. Encryption, password authentication, and other capabilities are all available as Apache modules.
To begin, use the following commands to update the Pi package list.
sudo apt-get update
sudo apt-get upgrade
After that, set up the Apache2 package.
sudo apt install apache2 -y
That concludes our discussion. You can get your Raspberry Pi configured with a server in just two easy steps.
Type the code below to see if the server is up and functioning.
sudo service apache2 status
You can now verify that Apache is operating by entering your Raspberry Pi's IP address into an internet browser and seeing a simple page like this.
Use the following command in the console of your Raspberry Pi to discover your IP.
hostname -I
Only your home network and not the internet can access the server. You'll need to configure your router's port forwarding to allow this server to be accessed from any location. Our blog will not be discussing this topic.
The standard web page on the Raspberry Pi, as depicted above, is nothing more than an HTML file. We will now generate our first HTML document and develop a website.
Let's start by locating the Html document on the Raspbian system. You can do this by typing the following code in the command line.
cd /var/www/html
To see a complete listing of the items in this folder, run the following command.
ls -al
Since the index.html file is owned by the root account, you'll see it listed along with every other file in the folder.
As a result, to make changes to this file, you must first change its ownership to your own user. The username "pi" is the default on the Raspberry Pi.
sudo chown pi: index.html
To view the changes you've made, all you have to do is reload your browser after saving the file.
Here, we'll begin to teach you the fundamentals of HTML.
To begin a new page, edit the index.html file and remove everything inside it using the command below.
sudo nano index.html
Alternatively, we can use a code editor to open the index.html file and edit it. We will use VS code editor that you can easily install in raspberry pi using the preferences then recommended software button.
You must first learn about HTML tags, which are a fundamental part of HTML. A web page's content can be formatted in various ways by using tags.
There are often two tags used for this purpose: the opening and closing tags. The material inside these tags behaves according to what these tags say.
The <p> tag, for example, is used to add paragraphs of text to the website.
<p>The engineering projects</p>
Web pages can be made more user-friendly by using buttons, which can be activated anytime a user clicks on them.
<button>Manual Refresh</button>
<button>Sort By First Name</button>
<button>Sort By last Name</button>
A typical HTML document is organized as follows:
Let us create the page that we will use in this project.
<html>
<head>
</head>
<body>
<div id="pageDiv">
<p> The engineering projects</p>
<button type="button" id="refreshNames">Manual Refresh</button><br/>
<button type="button" id="firstSort">Sort By First Name</button><br/>
<button type="button" id="lastSort">Sort By Last Name</button>
<div id="namesFromFile">
</div>
</div>
</body>
</html>
<!DOCTYPE html>: HTML documents are identified by this tag. It does not require a closing tag.
<html>: This tag ensures that the material inside meets all of the requirements for HTML. It is closed by a </html> tag at the end.
<head>: It contains metadata about the website, but when you view the page in a browser, you won't see any of it rendered.
A metadata tag inside the head tag can be used to set the default character encoding of your website, for instance. The head section is closed with a </head> tag.
<head>
<meta charset="utf-8">
</head>
Also, you can have a title tag inside the head tag. This tag sets the title of your web page and has a closing </title> tag.
<head>
<meta charset="utf-8">
<title> My website </title>
</head>
<body>: The primary content of the web page is included within this tag. Everything on a web page is usually contained within body tags. It is closed with a </body> tag. Many other tags can be found in this body tag, but we'll focus on the ones you need to get started with your first web page.
We will go ahead and style our webpage using CSS with the lines of code below;
<head>
<style>
<!--
body {
    width:100%;
    background:#ccc;
    color:#000;
    text-align:left;
    margin:0;
    padding:10px;
    font:16px/18px Arial;
}
button {
    width:160px;
    margin:0 0 10px;
}
#pageDiv {
    width:160px;
    margin:20px auto;
    padding:20px;
    background:#ddd;
    color:#000;
}
#namesFromFile {
    margin:20px 0 0;
    padding:10px;
    background:#fff;
    color:#000;
    border:1px solid #000;
    border-radius:10px;
}
-->
</style>
</head>
The style tag contains cascading style sheet (CSS) syntax that lets developers style webpages however they prefer.
You can add images to your web page by using the <img> tag. It is a void element and doesn't have a closing tag. It takes the following format:
<img src="URL of image location">
For example, let’s add an image of the Seeeduino XIAO
<p>The Engineering projects</p>
<img src="https://www.theengineeringprojects.com/wp-content/uploads/2022/04/TEP-Logo.png">
Reload the browser to see the changes
This is the last step of this project: we will implement a program that reads our data.txt file from the Apache root directory and displays it on the webpage we designed. Since we already have our webpage up and running, we will use the JavaScript programming language to implement the function of displaying the log list on the webpage. All the changes we are about to make will be done in the index.html file; therefore, open it in the Visual Studio Code editor.
JavaScript is a dynamic computer programming language. It is lightweight and most commonly used as a part of web pages, whose implementations allow client-side scripts to interact with the user and make dynamic pages. It is an interpreted programming language with object-oriented capabilities.
One of the major strengths of JavaScript is that it does not require expensive development tools. You can start with a simple text editor such as Notepad.
As mentioned earlier, JavaScript is a very easy language to use; it simply requires us to put script tags inside the HTML tags:
<script> script program </script>
<head>
<script>
// Here goes our JavaScript program
</script>
</head>
The JavaScript code first opens the data.txt file, then reads all the contents from that file using an XMLHttpRequest, and displays them on the webpage. The buttons on the webpage activate different functions in the code. For instance, Manual Refresh activates:
function refreshNamesFromFile(){
    var namesNode=document.getElementById("namesFromFile");
    while(namesNode.firstChild){
        namesNode.removeChild(namesNode.firstChild);
    }
    getNameFile();
}
This function clears the current list and then re-reads the content of data.txt.
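The getNameFile helper itself is not shown in these snippets; below is a minimal sketch of what it might look like, assuming data.txt holds one log entry per line and that one <p> element is appended per entry (the "namesFromFile" ID matches the HTML given earlier):
function getNameFile(){
    var xhr=new XMLHttpRequest();
    xhr.onreadystatechange=function(){
        if(xhr.readyState===4 && xhr.status===200){
            var namesNode=document.getElementById("namesFromFile");
            var lines=xhr.responseText.split("\n");
            for(var i=0;i<lines.length;i++){
                if(lines[i].trim()!==""){
                    var oP=document.createElement("P");
                    oP.appendChild(document.createTextNode(lines[i]));
                    namesNode.appendChild(oP);
                }
            }
        }
    };
    xhr.open("GET","data.txt",true);
    xhr.send();
}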
The Sort By buttons activate the sort function to sort the logged users either by first name or by last name. The function activated by these buttons is:
function sortByName(e){
    var i=0, el, sortEl=[], namesNode=document.getElementById("namesFromFile"), sortMethod, evt, evtSrc, oP;
    evt=e||event;
    evtSrc=evt.target||evt.srcElement;
    sortMethod=(evtSrc.id==="firstSort")?"first":"last";
    while(el=namesNode.getElementsByTagName("P").item(i++)){
        sortEl[i-1]=[el.innerHTML.split(" ")[0],el.innerHTML.split(" ")[1]];
    }
    sortEl.sort(function(a,b){
        var x=a[0].toLowerCase(), y=b[0].toLowerCase(), s=a[1].toLowerCase(), t=b[1].toLowerCase();
        if(sortMethod==="first"){
            return x<y?-1:x>y?1:s<t?-1:s>t?1:0;
        }
        else{
            return s<t?-1:s>t?1:x<y?-1:x>y?1:0;
        }
    });
    while(namesNode.firstChild){
        namesNode.removeChild(namesNode.firstChild);
    }
    for(i=0;i<sortEl.length;i++){
        oP=document.createElement("P");
        namesNode.appendChild(oP).appendChild(document.createTextNode(sortEl[i][0]+" "+sortEl[i][1]));
        namesNode.appendChild(document.createTextNode("\r\n"));
        // insert tests -> for style/format
        if(sortEl[i][0]==="John"){
            oP.style.color="#f00";
        }
        if(sortEl[i][0]==="Sue"){
            oP.style.color="#0c0";
            oP.style.fontWeight="bold";
        }
    }
}
Taking attendance manually is excessively time-consuming and complicated in the current environment. It is possible to strengthen company ethics and work culture by using an effective smart attendance management system. Employees only have to complete the registration process once, and their images get saved in the system's database. The automated attendance system uses a computerized real-time image of a person's face to identify them. The database is updated frequently, and its findings are accurate in a user-interactive state because each employee's presence is recorded.
Smart attendance systems have several advantages, including the following:
Students in elementary, secondary, and postsecondary institutions can utilize this system to keep track of their attendance. It can also keep track of workers' schedules in the workplace. Instead of using a traditional method, it uses RFID tags on ID cards to quickly and securely track each person.
1) Real-time tracking: staff attendance can be tracked from mobile devices and desktops.
2) Decreased errors: a computerized attendance system can provide reliable information with minimal human intervention, reducing the likelihood of human error and freeing up staff time.
3) Management of enormous data: it is possible to manage and organize enormous amounts of data precisely in the database.
4) Improved authentication and security: a smart system protects the privacy and security of the users' data.
5) Reports: employee log-ins and log-outs can be tracked, attendance-based compensation calculated, absence lists viewed and acted upon, and employee personal information accessed.
This tutorial taught us to build a smart RFID card authentication project from scratch. We also learned how to set up an Apache server and design a circuit for the RFID reader and the LCD screen. To increase your Raspberry Pi programming skills, you can proceed to build a more complex system from this code, for example implementing face detection that automatically starts the authentication process once the student faces the camera, or implementing a student log-out whenever the student leaves the system. In the following tutorial, we will learn how to build a smart security system using facial recognition.
Parts are composed of a number of surfaces, and each surface has certain size and mutual position requirements. The requirements for the relative position between the surfaces of a part include two aspects: the dimensional accuracy of the distance between the surfaces, and the relative position accuracy (such as coaxiality, parallelism, perpendicularity and circular runout).
The study of the relative position relationship between part surfaces cannot be separated from the datum; the position of a part surface cannot be determined without a clear datum. In the general sense, the datum is the point, line or surface on the part from which the position of other points, lines and surfaces is determined. According to its function, a datum can be divided into two categories: the design datum and the process datum.
A datum is a point, line, or surface from which measurements are made. In the case of the piston, the design datum refers to the centerline of the piston and the centerline of the pinhole.
The datum used by parts during machining processes like turning, and during assembly, is called the process datum. According to its different uses, the process datum is divided into the positioning datum, the measuring datum and the assembly datum.
1) Positioning datum: the datum used to determine the position of the workpiece on the machine tool during machining. When choosing the positioning datum, it is important to take into account the size and shape of the workpiece, as well as the type of machining operation being performed. According to the different positioning elements used, the most common categories are the following:
Automatic centering positioning: such as three-jaw chuck positioning.
Positioning sleeve positioning: a locating element, such as a stop plate, made into a positioning sleeve.
Others: positioning in a V-block, positioning in a semicircular hole, and so on.
2) Measuring datum: a physical reference point, line or surface from which measurements are taken. The datum provides a starting point from which all other measurements are made; without a measuring datum, it would be difficult to take accurate measurements.
3) Assembly datum: the datum used to determine the position of the part in the assembly or product during assembly is called the assembly datum.
In order to produce a surface that meets the specified technical requirements on a certain part of the workpiece, the workpiece must occupy a correct position relative to the tool on the machine tool before machining. This process is often referred to as "positioning" the workpiece. After the workpiece is positioned, due to the action of cutting force, gravity and so on, a certain mechanism must be used to "clamp" the workpiece so that its determined position remains unchanged. The process of placing the workpiece in the correct position on the machine tool and clamping it is called "installation".
The quality of workpiece installation is an important issue in machining: it directly affects the machining accuracy, while the speed and stability of installation also affect productivity. In order to ensure the relative position accuracy between the machined surface and its design datum, the design datum of the machined surface should occupy a correct position relative to the machine tool when the workpiece is installed. For example, in the precision turning of the ring groove, in order to ensure the circular runout of the ring groove bottom diameter relative to the skirt axis, the design datum of the workpiece must coincide with the axis of the machine tool spindle.
[caption id="attachment_171666" align="aligncenter" width="800"]When machining parts on different machine tools, there are different installation methods. The installation methods can be summarized as direct alignment method, marking alignment method and fixture installation method.
The main results are as follows:
1) Direct alignment method: when using this method, the correct position of the workpiece on the machine tool is obtained through a series of attempts. The specific way is to install the workpiece directly on the machine tool, then use a dial indicator or a scribing needle on a needle plate to visually check and correct the position of the workpiece, correcting as you go until it meets the requirements.
The positioning accuracy and speed of direct alignment depend on the accuracy of alignment, the method of alignment, the alignment tools and the technical level of the workers. Its disadvantages are that it takes a lot of time, productivity is low, and it must be operated on the basis of experience by highly skilled workers, so it is only used in single-piece and small-batch production. Profile-copying alignment, for example, belongs to the direct alignment method.
2) Marking (line) alignment method: this method uses a scribing needle on the machine tool to correct the position of the workpiece according to the line drawn on the blank or semi-finished product, so that the workpiece obtains the correct position. Obviously, this method requires an additional marking process. The drawn line itself has a certain width, there is a marking error when marking, and there is also an observation error when correcting the position of the workpiece, so this method is mostly used in rough machining for small-batch production, blanks of low precision, and large workpieces that are not suitable for fixtures. For example, the position of the pin hole of a two-stroke product is determined by using the marking method with an indexing head.
3) Fixture installation method: the process equipment used to clamp the workpiece and make it occupy the correct position is called a machine tool fixture. The fixture is an accessory of the machine tool, and its position relative to the tool on the machine tool has been adjusted in advance before the workpiece is installed, so it is no longer necessary to find the correct position piece by piece when machining a batch of workpieces. This guarantees the technical requirements of machining and saves both labor and trouble. It is an efficient positioning method and is widely used in batch and mass production. Our current piston machining uses the fixture installation method.
Surface roughness is a measure of the irregularities in the surface of a material. It can be caused by a variety of factors, including manufacturing defects, wear and tear, and environmental exposure. Surface roughness can have a significant impact on the performance of a material, as it can affect its resistance to wear, corrosion, and other forms of damage. The term "roughness" refers to both the size and frequency of the irregularities on the surface of a material. The size of the irregularities is typically measured in micrometers or nanometers, while the frequency is typically measured in cycles per inch (CPI).
Surface roughness can also be characterized by its wavelength, which is the average distance between adjacent peaks or troughs. The most common method of measuring surface roughness is with a profilometer, which uses a stylus to trace the contours of the surface and generate a profile. The profile is then analyzed to determine the roughness parameters.
[caption id="attachment_171667" align="aligncenter" width="800"]There are 3 kinds of surface roughness height parameters:
Ra (arithmetic mean deviation): within the sampling length, the arithmetic mean of the absolute distances, measured along the Y direction, between points on the profile and the mean line.
Rz (ten-point height): the sum of the average height of the five highest profile peaks and the average depth of the five deepest profile valleys within the sampling length.
Ry (maximum height): the distance between the line through the highest profile peak and the line through the lowest profile valley within the sampling length.
Nowadays, Ra is mainly used in the general machinery manufacturing industry.
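To make these definitions concrete, here is a minimal sketch, in Python, of how Ra and Rz could be computed from sampled profile heights. The profile values are hypothetical, and real profilometer software applies filtering and standardized evaluation lengths that this sketch omits.

```python
# A minimal sketch of the Ra and Rz definitions above.
# The profile heights are hypothetical deviations (in micrometers)
# from the mean line, sampled evenly within one sampling length.
profile = [0.8, -0.5, 1.2, -1.1, 0.3, -0.7, 1.5, -0.9, 0.6, -1.3]

# Ra: arithmetic mean of the absolute deviations from the mean line.
ra = sum(abs(y) for y in profile) / len(profile)

# Rz (ten-point height): average of the five highest peaks plus the
# average of the five deepest valleys (valley depth taken as positive).
peaks = sorted((y for y in profile if y > 0), reverse=True)[:5]
valleys = sorted((-y for y in profile if y < 0), reverse=True)[:5]
rz = sum(peaks) / len(peaks) + sum(valleys) / len(valleys)

print(f"Ra = {ra:.2f} um, Rz = {rz:.2f} um")  # Ra = 0.89 um, Rz = 1.78 um
```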
The surface quality of the machined workpiece directly affects its physical, chemical, and mechanical properties, and the working performance, reliability, and service life of the product depend to a large extent on the surface quality of its main parts. Generally speaking, the surface quality requirements for important or key parts are higher than those for ordinary parts, because parts with good surface quality have much better wear resistance, corrosion resistance, and fatigue resistance.
Surface roughness is an important consideration in many engineering applications, as it can have a significant impact on the performance of a component or system. For example, surface roughness can affect the tribological properties of a material, such as its friction and wear resistance. It can also affect the ability of a material to resist corrosion and other forms of damage.
In addition, surface roughness can influence the optical properties of a material, such as its reflectivity and transmittance. Surface roughness is also a concern in many manufacturing processes, as it can cause defects in the finished product. For example, surface roughness can cause problems with adhesion, machining, and metallurgy. Surface roughness can also make it difficult to achieve desired tolerances in manufacturing processes. As a result, surface roughness is an important factor to consider in the design and manufacture of components and systems.
There are many ways to reduce surface roughness, including using methods such as honing, grinding, lapping, and polishing. In addition, surface treatments such as electroplating, ion implantation, and vapor deposition can also be used to improve the surface smoothness of the material.
1) The functions of cutting fluid:
Cooling effect: the cutting fluid can carry away a large amount of cutting heat and improve heat dissipation, reducing the temperature of the tool and the workpiece. This prolongs tool life and prevents the dimensional errors caused by thermal deformation of the workpiece.
Lubrication: the cutting fluid can penetrate between the workpiece and the tool, forming a thin adsorption film in the tiny gap between the chip and the tool. This lowers the friction coefficient, which reduces the friction between the chip, the tool, and the workpiece, lowers the cutting force and cutting heat, reduces tool wear, and improves the surface quality of the workpiece. This is especially important in finishing operations.
Cleaning function: the tiny chips produced during cutting easily adhere to the workpiece and the tool. Especially when drilling deep holes or tapping, chips can clog the chip flutes, affecting the surface roughness of the workpiece and the service life of the tool. Cutting fluid washes the chips away quickly so that cutting can proceed smoothly.
2) Types: there are two common kinds of cutting fluid.
Emulsion: mainly plays a cooling role. Emulsion is made by diluting emulsified oil with 15 to 20 parts of water. This kind of cutting fluid has a high specific heat, low viscosity, and good fluidity, and it can absorb a lot of heat. It is mainly used to cool the tool and the workpiece, improve tool life, and reduce thermal deformation. Because the emulsion contains so much water, its lubricating and antirust properties are poor.
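As a quick illustration of that dilution ratio, here is a minimal sketch in Python. The batch size and the exact ratio chosen within the 15-20 range are hypothetical example values.

```python
# A minimal sketch of the emulsion dilution arithmetic described above.
# Batch size and the exact ratio (within 15-20 parts water) are hypothetical.
oil_volume_l = 2.0      # liters of emulsified oil concentrate
parts_water = 18        # parts of water per part of oil

water_volume_l = oil_volume_l * parts_water
total_volume_l = oil_volume_l + water_volume_l
oil_percent = 100 * oil_volume_l / total_volume_l

print(f"Add {water_volume_l:.0f} L of water to {oil_volume_l:.0f} L of oil")
print(f"Yields {total_volume_l:.0f} L of emulsion at about {oil_percent:.1f}% oil")
```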
Cutting oil: the main component of cutting oil is mineral oil. This kind of cutting fluid has a low specific heat, high viscosity, and poor fluidity, and it mainly plays a lubricating role. Mineral oils of low viscosity are commonly used, such as engine oil, light diesel oil, and kerosene.
MIG welding is the process of joining metal parts by melting the base metal and fusing it with a filler metal. The name comes from the phrase "Metal Inert Gas." The process uses an arc between an electrode and the workpiece, which melts the base metal and fuses it with the filler metal.
MIG welding is an extremely versatile process that can be used to create virtually any shape or configuration of welded joint. You can also use it on many different metals, including low-carbon steel, stainless steel, and aluminum alloys.
The MIG welding equipment you'll need includes:
Welding torch: the torch delivers the wire and the arc heat needed to melt it. It has a handle, an on/off trigger, and a contact tip where the wire feeds through. The torch nozzle also directs the shielding gas around the arc; you set the gas flow at the regulator depending on the type of metal you're working with.
Gas cylinder: this holds the compressed shielding gas used during the welding process; the gas flows through the torch and shields the arc when you press the trigger. You'll see these bottles or tanks in several sizes, but they're all essentially identical in function and vary only in gas capacity.
Welding helmet: protects your eyes and face from the bright arc and from the sparks and spatter produced during MIG welding.
The thickness of welding wire is measured in thousandths of an inch (mils). Common MIG wire diameters include 0.023", 0.030", 0.035", and 0.045".
The smaller the diameter, the less current the wire can take before it melts or burns back on you when you're trying to weld with it, so match the wire size to the job.
A MIG welder gives you control over settings such as the wire size and your travel speed. You can adjust these settings to improve the quality of your weld or reduce the amount of heat you apply to the weld.
Wire diameter: this controls how much material is deposited in each pass and determines how much heat is required to melt it. A smaller-diameter wire needs less energy and produces a cleaner weld, but it also requires more passes to build up a good bead.
Travel speed: this is how quickly you move along the molten pool when welding. Too fast and you'll get porosity; too slow and each pass takes longer than necessary and puts more heat into the part (see the sketch below).
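The travel-speed trade-off can be made concrete with the standard arc-welding heat-input formula: heat input (kJ/mm) = (volts x amps x 60) / (1000 x travel speed in mm/min), scaled by a process efficiency factor. Here is a minimal sketch in Python; the voltage, current, travel speeds, and the roughly 0.8 efficiency assumed for MIG are illustrative values, not recommended settings.

```python
# A minimal sketch of the travel-speed trade-off using the standard
# arc-welding heat-input formula. All numeric values are hypothetical
# examples, not recommended welding parameters.
def heat_input_kj_per_mm(volts, amps, travel_mm_per_min, efficiency=0.8):
    """Heat input (kJ/mm) = (V * A * 60) / (1000 * travel speed),
    scaled by an assumed arc efficiency (~0.8 is often cited for MIG)."""
    return (volts * amps * 60) / (1000 * travel_mm_per_min) * efficiency

# Same arc settings, different travel speeds: moving faster puts
# less heat into the part, moving slower puts in more.
print(heat_input_kj_per_mm(20, 150, 300))  # slower travel -> 0.48 kJ/mm
print(heat_input_kj_per_mm(20, 150, 600))  # faster travel -> 0.24 kJ/mm
```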
MIG welding is a great way to get started in welding. It's easy to learn, and it's safe for beginners. You can use MIG welding for various projects, including light steelwork, ironmongery, and aluminum.
MIG welders are versatile, meaning they can handle almost any metal you want to weld with their different settings. They also include the gas supply that provides the shielding gas needed during the process, which makes them good value for money compared with other welder types.
MIG welding is a skill that anyone can pick up with only a little time and practice. If you're interested in learning how to weld but don't know where to begin, here are answers to a few common questions:
Yes, you can use a TIG welder as a MIG welder. If your TIG welder has adjustable amperage, you can follow the steps listed above. TIG welders tend to have higher amperage settings than MIG welders, but they run from the same power supply as a MIG welder. Since a TIG welder is more versatile, it's worth investing in one if you use welding equipment regularly or want more control over your projects.
A flux core welder uses a hollow wire filled with flux. Flux is a material that covers the weld and protects it from contamination during welding. You can use a flux core welder outdoors, because the flux inside the wire keeps wind from disrupting the shielding. A flux core welder can also be used indoors with no gas if the welding material is 1/2" thick or less. Flux core welding isn't recommended for thinner materials at higher amperage settings, as there will be too much spatter and smoke.
Does MIG welding need shielding gas? This is a common question, and the answer is: yes, but it depends on the type of wire you are using. Flux core wire doesn't need shielding gas, while solid welding wire does.
The type of work you do will determine whether you need a machine that uses shielding gas or flux core wire. For example, if you are working outdoors, or indoors where there is a draft, you can use flux core wire, since it does not require an external gas cylinder.