Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the previous tutorial, we built a motion sensor-based security system with an alarm and learned how to use Twilio to notify the administrator whenever the alarm is triggered. In this tutorial, we'll learn how to build a stop-motion film system using the Raspberry Pi 4.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|-----|------------|-------------|-------------|
| 1 | Breadboard | Amazon | Buy Now |
| 2 | Jumper Wires | Amazon | Buy Now |
| 3 | Raspberry Pi 4 | Amazon | Buy Now |
With a Raspberry Pi, Python, and a Pi camera module to capture images, you can create a stop-motion animated video. In addition, we'll learn about the various kinds of stop-motion systems and their advantages and disadvantages.
The possibilities are endless when it comes to using LEGO to create animations!
Using your RPi to build a stop motion machine, you'll discover:
How to install and utilize the picamera module on the RPi
This article explains how to take photos with the Picamera library.
RPi GPIO Pushbutton Connection
Operate the picamera by pressing the GPIO pushbutton
How to use FFmpeg to create a video clip from the command prompt
Raspberry Pi 4
Breadboard
Jumper wires
Button
FFmpeg should come preinstalled on the most recent release of Raspberry Pi OS (Raspbian). If you don't have it, launch the terminal and type:
sudo apt-get update
sudo apt-get upgrade
sudo apt install ffmpeg
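To verify the installation (an optional quick check), you can print FFmpeg's version:
ffmpeg -version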
Inanimate things are given life through the use of a sequence of still images in the stop-motion cinematography technique. Items inside the frame are shifted slightly between every picture to create the illusion of movement when stitched together.
You don't need expensive gadgets or graphics to get started in stop motion. That, in my opinion, is the most intriguing aspect of it.
If you've ever wanted to learn how to make a stop-motion video, you've come to the right place.
Object (or product) animation refers to the frame-by-frame movement of everyday things. You're free to use any items around you to tell stories in this style.
Changing clay items in each frame is a key part of the claymation process. We've seen a lot of clever and artistic figures on the big screen thanks to wires and clay.
Making real people move frame by frame is rarely utilized. For a performer to shift just a little for each frame, given the number of images you would need, you'll need a lot of patience, and possibly a lot of money if you're hiring actors to do so.
The degree of freedom and precision with which they can move is also an important consideration. When done correctly, this style can look impressive, but it can also make the viewer feel a little dizzy at times.
Cutout animation offers a lot of creative freedom. Two-dimensional scraps of paper may appear lifeless, yet you can color and slice them to show a surprising depth of detail.
It's a lot of fun to play about with a cartoon style, but it also gives you a lot more control over the final product because you can add your graphics and details. However, what about the obvious drawback? I find the task of slicing and dicing hundreds of pieces daunting.
Puppets can be a fun and creative way to tell stories, but they can also be a pain to work with when there are lots of cords and wires involved, which is why they are often not the best choice for first-time stop-motion filmmakers. These puppets are of a more traditional design.
When animators use the term "puppet" to describe a wire-based clay character, they are really talking about claymation. Puppets based on the marionette style are becoming less popular.
Position the items or performers behind a white sheet and light their shadows on the sheet with a backlight. Simple, low-cost methods exist for creating eye-catching animations of silhouettes.
The time it takes to create a stop-motion video depends entirely on the scale and nature of your project. Testing out 15- and 30-second movies should only take an hour or two. Because of the complexity of the scenes and the use of claymation, larger stop-motion projects can take days to complete.
You must attach the camera to the Pi before booting it up.
Find the camera port next to the Ethernet port and gently pull up the tab on top of the connector.
Insert the ribbon cable with the blue side facing the Ethernet port, then push the tab back down while holding the ribbon in place.
Use the app menu to bring up a command prompt. The following command should be typed into the terminal:
libcamera-hello
If all goes well, you'll see a preview of what the camera sees. Don't worry if the preview is upside-down; you can fix that afterward. To close the preview, hit Ctrl + C.
For storing an image on your computer, run the command below:
libcamera-jpeg -o test.jpg
To examine what files are in your home folder, type ls in the command line and you'll see test.jpg among the results.
Files and folders will be displayed in the taskbar's file manager icon. Preview the image by double-clicking test.jpg.
The Python picamera library does not work by default with the newest version of Raspberry Pi OS.
To make use of the camera module from Python, you must activate the camera's legacy mode.
The command below must be entered into a command window:
sudo raspi-config
Scroll down to Interface Options and hit 'Enter' on your keyboard.
Ensure that the 'Legacy Camera' option is selected, then tap the 'Return' key.
Select Yes using the pointer keys and hit the 'Return' key.
Repeat the process of pressing 'Return' to verify.
Use the cursor keys to select Finish.
To restart, simply press the 'Return' key.
Python IDLE can be accessed from the menu bar.
While in the menu, click Files and then New Window to launch a Python code editor.
Paste the code below, paying careful attention to the capitalization, into the newly opened window.
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview()
sleep(3)
camera.capture('/home/pi/Desktop/image.jpg')
camera.stop_preview()
Using the File menu, choose Save As and save the file (for example, as animation.py).
Use the F5 key to start your program.
You should be able to locate image.jpg on your desktop. It's as simple as clicking it twice to bring up a larger version of the image.
It's possible to fix an upside-down photo by either repositioning your picamera with a camera stand or by telling Python to turn the picture. Adding the following lines will accomplish this.
camera.rotation = 180
Place this line directly after camera = PiCamera(), so the code becomes:
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.rotation = 180
camera.start_preview()
sleep(3)
camera.capture('/home/pi/Desktop/image.jpg')
camera.stop_preview()
A fresh photo with the proper orientation will be created when the file is re-run. Do not remove these lines of code from your program when making the subsequent modifications.
Hook the Raspberry Pi to the pushbutton as illustrated in the following diagram with a breadboard and jumper wires:
Import Button at the beginning of the program, attach the pushbutton to GPIO pin 17, and change the sleep line to use the pushbutton as a trigger, as shown below:
from picamera import PiCamera
from time import sleep
from gpiozero import Button
button = Button(17)
camera = PiCamera()
camera.start_preview()
button.wait_for_press()
camera.capture('/home/pi/image.jpg')
camera.stop_preview()
It's time to get to work!
As soon as the preview has begun, press the pushbutton connected to the Pi to take a picture.
If you go back to the folder, you will find your image.jpg there now. Double-click to see the image once more.
For a self-portrait, you'll need to include a delay so that you can get into position before the camera board takes a picture of you. Modifying your code is one way to accomplish this.
Before taking a picture, put in a line of code that tells the program to take a little snooze.
camera.start_preview()
button.wait_for_press()
sleep(3)
camera.capture('/home/pi/Desktop/image.jpg')
camera.stop_preview()
It's time to get to work.
Try taking a selfie by pressing the button. Keep the camera steady at all times! It's best if it's already mounted somewhere.
Inspect the photo in the folder once more if necessary. You can snap a second selfie by running the application again.
This is made easier with the aid of a well-designed setup. To avoid blurry photos due to camera shaking, you will most likely want to use a tripod or place your camera on a flat surface.
Your stop-motion movie will look best if pressing the pushbutton doesn't jolt the camera each time. To get the camera to snap a picture without touching the rig, use a wireless or remote trigger.
Keep your shutter speed, ISO, aperture, and white balance the same for every photo you shoot. Don't rely on "auto" settings here. Choose and lock the camera's configuration first; as long as your settings remain consistent throughout all of your photos, you're good to go. If you leave them on auto, the settings will adapt as you move the items, which may cause flickering from image to image.
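If you're using the picamera library, one way to lock those settings is sketched below; this is only a rough outline, and the resolution and ISO values are illustrative rather than recommendations.

from picamera import PiCamera
from time import sleep

camera = PiCamera()
camera.resolution = (1024, 768)
camera.iso = 200                               # fix the sensor sensitivity
sleep(2)                                       # give the automatic algorithms time to settle
camera.shutter_speed = camera.exposure_speed   # freeze the current exposure time
camera.exposure_mode = 'off'                   # stop automatic exposure adjustments
gains = camera.awb_gains                       # remember the current white-balance gains
camera.awb_mode = 'off'                        # disable automatic white balance
camera.awb_gains = gains                       # re-apply the fixed gains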
It's ideal to shoot indoors because it's easier to regulate and shields us from the ever-changing light. Remember to keep an eye out for windows if you're getting more involved. Try using a basic lighting setup, where you can easily see your items and the light isn't moving too much. In some cases, some flickering can be visible when you're outside of the frame. Other times the flickering works well with animation, but only if it does so in a way that doesn't disrupt the flow of the project.
You don't need to get extremely technical at the beginning, but you'll need to understand how many frames you'll have to shoot to achieve the sequence you want. One second of film is typically made up of about 12 images or frames. With too few frames per second, a video longer than a few seconds starts to look noticeably choppy.
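As a rough worked example based on that 12 frames-per-second figure: a 30-second clip needs 12 × 30 = 360 photos, and a one-minute clip needs about 720. Double the frame rate to 24 fps and those counts double as well.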
When you're filming your muted stop motion movie, you can come up with creative ways to incorporate your sound later.
The next step is to experiment with creating a stop-motion video from a collection of still photos captured with the picamera. Note that the stills must be saved in their own folder: type "mkdir animation" in the command line.
When the button is pushed, add a loop to your program so that photographs are taken continuously.
camera.start_preview()
frame = 1
while True:
    try:
        button.wait_for_press()
        camera.capture('/home/pi/animation/frame%03d.jpg' % frame)
        frame += 1
    except KeyboardInterrupt:
        camera.stop_preview()
        break
Because while True loops forever, you need a way to end the program gracefully. The try-except block catches the KeyboardInterrupt raised by Ctrl + C, so pressing it closes the picamera preview and breaks out of the loop cleanly.
The %03d format saves the files as "frame" followed by a three-digit number padded with leading zeros (frame001.jpg, frame002.jpg, and so on). This makes it simple to arrange them in the proper sequence for the video.
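For illustration, here is how Python expands the %03d placeholder (the frame numbers are arbitrary):

print('frame%03d.jpg' % 9)    # prints frame009.jpg
print('frame%03d.jpg' % 42)   # prints frame042.jpg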
To capture each following frame, simply push the button a second time once you've finished rearranging the animation's main element.
To kill the program, use Ctrl + C when all the images have been saved.
Your image collection can be viewed in the folder by opening the animation directory.
To initiate the process of creating the movie, go to the terminal.
Start the movie rendering process by running the following command:
ffmpeg -r 10 -i animation/frame%03d.jpg -qscale 2 animation.mp4
Because both FFmpeg and Python recognize the %03d formatting, the photographs are added to the movie in the correct sequence.
Use vlc to see your movie.
vlc animation.mp4
The render command can be edited to change the frame rate. Try adjusting -r 10 to a different value.
Modify the name of the rendered video to prevent it from being overwritten. To do this, change animation.mp4 to a different file name.
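For example, a variation of the same command that renders at 24 frames per second and writes to a new file (both the rate and the file name are just illustrative):

ffmpeg -r 24 -i animation/frame%03d.jpg -qscale 2 animation_24fps.mp4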
Corporations benefit greatly from high-quality stop motion films, despite the effort and time it takes to produce them. One of these benefits is that consumers enjoy sharing these movies with friends, and their inspiring content can be associated with a company. Adding this to a company's marketing strategy can help make its product extra popular and remembered.
When it comes to spreading awareness and educating the public, stop motion films are widely posted on social media. It's important to come up with an original idea for your stop motion movie before you start looking for experienced animators.
In the early days of filmmaking, stop motion was mostly employed to give animated characters the appearance of mobility. The cameras would be constantly started and stopped, and the multiple images would all be put together to tell a gripping story.
It's not uncommon to see films employ this time-honored method as a tribute to the origins of animations. There's more, though.
In the recent resurgence of stop motion animations, strange and amazing props and procedures have been used to create these videos. Filmmakers have gone from generating stop motion with a large sheet of drawings, to constructing them with plasticine figures that need to be manually manipulated millimeters at a time, and to more esoteric props such as foodstuffs, domestic objects, and creatures.
Using this technique, you can animate any object, even one that isn't capable of moving by itself. A stop-motion movie may be made with anything, thus the options are practically limitless.
A wide range of material genres, from educational films to comedic commercials, is now being explored with stop motion animation.
When it comes to creating marketing and instructional videos, stop-motion animation is a popular choice due to its adaptability, and an individual video can be created for almost any purpose.
Although the film is about five minutes long, viewers are likely to stick with it because of its originality. The sophisticated tactics employed captivate the audience. Once you start viewing this stop motion video, it's impossible to put it down till the finish.
It's easy to remember simple but innovative animations like these. These movies can assist a company's image and later recall be more positive. Stop motion video can provoke thought and awe in viewers, prompting them to spread the creative message to their social networks and professional contacts.
It is becoming increasingly common for organizations of all kinds to include stop-motion animations in their advertisements.
Stop-motion films can have a positive impact on both education and business. Employees, customers, and students all benefit from using them to learn difficult concepts and methods more enjoyably. Stop motion filmmaking can liven up any subject matter, and pupils are more likely to retain what they've learned when it's done this way.
Some subjects can be studied more effectively in this way as well. Using stop motion films, for instance, learners can see the entire course of an experiment involving a slow-occurring reaction in a short amount of time.
Learners are given a stop motion assignment to work on as a group project in the classroom. Fast stop motion animation production requires a lot of teamwork, which improves interpersonal skills. Some learners would work on the models, while others might work on the backdrops and voiceovers, while yet others might concentrate on filming the scenes and directing the actors.
Stop-motion movies can be used to explain how a product works quickly, even when operating the device and seeing its output would take a while in real time. You can speed up the timeline as much as you want in stop-motion animation!
For safety and health demonstrations or original sales demonstrations, stop motion instructional films may also be utilized to effectively express complex concepts. Because of the videos' originality, viewers are more likely to pay attention and retain the content.
Some incredibly creative music videos have lately been created using stop motion animations, which has recently seen a resurgence in popularity. Even the human body could be a character in this film.
Stop-motion animations have the potential to be extremely motivating. Sometimes, it's possible to achieve it by presenting things in a novel way, such as by stacking vegetables to appear like moving creatures. The sky's the limit when it comes to what you can dream up.
When it comes to creating a stop motion movie, it doesn't have to be complicated. If you don't have any of these things in your possession, you'll need to get them before you can begin filming. However, if you want to create a professional-level stop motion film, you'll need to enlist the help of an animation company.
As a marketing tool, animated videos may be highly effective when they are created by a professional team.
The story of a motion-capture movie is crucial in attracting the attention of audiences, so it should be carefully planned out before production begins. It should be appropriate for the video's intended audience, brand image, and message. If you need assistance with this, consider working with an animation studio.
But there are several drawbacks to the overall process of stop-motion filmmaking which are difficult to overcome. The most notable is the time it takes to create even a minute of footage. Depending on the approach used, getting that footage might take anywhere from a few days to many weeks.
Additionally, the amount of time and work that is required to make a stop-motion movie might be enormous. This may necessitate the involvement of a large team. Although this is dependent on the sort of video, stop motion animating is now a fairly broad area of filmmaking, which can require many different talents and approaches.
Using the Raspberry Pi 4, you were able to create a stop-motion movie system. Various stop-motion techniques were also covered, along with their advantages and disadvantages. After completing the system's basic functions and integrating additional components of your choice, you're ready to move on to the next phase of programming. In the next article, we will use the Raspberry Pi 4 to build an LED cube.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|-----|------------|-------------|-------------|
| 1 | Jumper Wires | Amazon | Buy Now |
| 2 | DS1307 | Amazon | Buy Now |
| 3 | Raspberry Pi 4 | Amazon | Buy Now |
Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the preceding tutorial, we implemented a speech recognition system on the Raspberry Pi and used it in our game project. We also learned the fundamentals of speech recognition and later built a game controlled by the user's voice. In this tutorial, we will integrate a real-time clock (RTC) with our Raspberry Pi 4 and use it to build a digital clock. First, we will learn the fundamentals of the RTC module and how it works, then we will build a digital clock in Python 3. With the help of a library, we'll demonstrate how to integrate an RTC DS3231 chip with the Pi 4 to keep time.
RTCs are clock chips, as the name suggests. The DS1307 RTC IC has eight pins on its interface. The DS1307 is a small clock and calendar with a 56-byte battery-backed SRAM. Its timekeeping registers cover seconds, minutes, hours, day, date, month, and year. When a month has fewer than 31 days, the end-of-month date is adjusted automatically.
They can be found in integrated circuits that monitor time and date as a calendar and clock. An RTC's key advantage is that the clock and calendar will continue to function in the event of a power outage. The RTC requires only a small amount of power to operate. Embedded devices and computer motherboards, for example, contain real-time clocks. The DS1307 RTC is the subject of this article.
The primary purpose of a real-time clock is to generate and keep track of one-second intervals.
The diagram to the right shows how this might look.
A program's method, A, is also displayed, which reads the seconds counter and schedules an action, B, to take place three seconds from now. This kind of behavior is known as an alarm. Keep in mind that the seconds counter never starts and stops. Accuracy and reliability are two of the most important considerations when choosing a real-time clock.
A real-time clock's hardware components are depicted in the following diagram.
A real-time clock's internal oscillator is frequently driven by an internal crystal, although an auxiliary crystal or an external frequency reference can also be used. These clocks almost always run at 32,768 Hertz. A TCXO can be used as an external clock input since it is extremely accurate and stable.
A prescaler divides the clock by 32,768 to generate a one-second tick; the clock source feeding it is selectable via a multiplexer.
For the most part, a real-time clock features a secs counter with at least thirty-two bits. Certain real-time clocks contain built-in counters to maintain track of the date and time.
Firmware is used to keep track of time and date in a simple real-time clock. The 1 Hertz square waveform from an output terminal is a frequent choice. It's feasible for a real-time clock to trigger a CPU interrupt with various occurrences.
Whenever the whole microcontroller is turned off, a real-time clock may have a separate power pin to keep it running. In most cases, a cell or external power source is attached to this pin's power supply.
A real-time clock's accuracy depends on the precision of its 32,768 Hertz clock source. Even in a well-designed oscillator, the crystal is the primary source of inaccuracy. Internal oscillators and less costly crystals can be combined with sophisticated frequency-compensation techniques for incredibly precise timing. A crystal has three primary causes of inaccuracy:
Initial tolerance of the crystal and the surrounding circuitry.
Temperature-related frequency drift.
Crystal aging.
Real-time clock accuracy is seen graphically in the figure below:
Using this graph, you can see how a given initial tolerance changes with temperature. The temperature-related error is visible inside the pink band. Because a crystal's temperature behavior follows a roughly quadratic curve, that curve can be used to compensate for temperature. Once a circuit board has been built and the temperature is known, an initial error measurement can be used to correct most of the other sources of error.
To get an accurate reading, you'll need to stay within the yellow band. An error of 1 ppm corresponds to roughly 30 seconds per year. The effects of crystal aging can't really be undone, but the resulting drift usually amounts to only a few ppm over a couple of years.
Pin 1, 2: The usual 32.768-kilohertz quartz crystal can be connected here.
Pin 3: Any conventional 3 Volt battery can be used as an input. To function properly, the battery voltage must be in the range of 2V to 3.5V.
Pin 4: This is the ground.
Pin 5: Serial data input/output (SDA). It serves as the data line of the serial interface; it is open-drain, so it requires a pull-up resistor, and it may be pulled up to a maximum of 5.5 volts regardless of the voltage on VCC.
Pin 6: Serial clock input (SCL). Data synchronization on the serial interface is done via this clock.
Pin 7: Square-wave output driver. A square-wave frequency can be generated on this pin when the SQWE bit is set to 1.
Pin 8: The main source of power. Data is written and read whenever the voltage provided to the gadget is within standard limits.
Programmable square-wave output
Automatic power-failure detection and switchover circuitry
Low power consumption
Provides real-time date and time
The RTC is used mainly by writing to and reading from its registers, which occupy addresses 0 to 63 (0x00 to 0x3F). The first eight registers hold the clock and calendar data: seconds, minutes, hours, day, date, month, and year, plus a control register. The remaining bytes are general-purpose RAM that can hold temporary variables if necessary. Let's have a look at how the DS1307 operates.
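As a minimal sketch of what reading those registers looks like over I2C (assuming the smbus package installed later in this tutorial; register addresses follow the DS1307 datasheet, and the library used later in the project wraps all of this for you):

import smbus

DS1307_ADDR = 0x68          # default I2C address of the DS1307
bus = smbus.SMBus(1)        # I2C bus 1 on the Raspberry Pi

def bcd_to_dec(value):
    # register values are stored as binary-coded decimal
    return ((value >> 4) * 10) + (value & 0x0F)

seconds = bcd_to_dec(bus.read_byte_data(DS1307_ADDR, 0x00) & 0x7F)  # mask the clock-halt bit
minutes = bcd_to_dec(bus.read_byte_data(DS1307_ADDR, 0x01))
hours = bcd_to_dec(bus.read_byte_data(DS1307_ADDR, 0x02) & 0x3F)    # assumes 24-hour mode
print("%02d:%02d:%02d" % (hours, minutes, seconds))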
The sole purpose of a real-time clock is to record the passage of time. Regular monitoring of the passing of time is critical to the operation of computers and many smaller electronic devices. Although it simply serves one purpose, managing time has many applications and functions. Nearly every computer activity uses measurements taken and monitoring the present time, from the generation of random numbers to cybersecurity.
Kinematic activities or a real-time clock module keep track of time in a classic watch, so how does it work?
The answer is in crystal oscillators, as you would have guessed. The real-time clock uses oscillator pulses in electronic components to keep track of time. Quartz crystals are commonly used in this oscillator, which typically operates at a frequency of 32 kilohertz. Software cleverness is needed to take the processing times and correct any differences related to power supply variations or tiny changes in cycle speed.
Auxiliary tracking is used in real-time clock modules, which uses an exterior signal to lock onto a precise, uniform time. As a result, this does not override the internal measures but complements them to ensure the highest level of accuracy. An excellent example is a real-time clock module that relies on external signals, such as those found on cell phones. Oscillation cycles are counted again if the phone loses access to the outside world.
A quartz crystal is a physical object, so a real-time clock module's accuracy can degrade over time with exposure to extreme heat and cold. Many modules include a temperature-sensing mechanism to counteract the effects of temperature variations and improve the oscillator's overall accuracy. The cheap crystals used in computer systems come with a wide range of tolerances, so the error can be around 72 milliseconds per hour in real time (for example, a tolerance of 20 ppm works out to 20 × 10⁻⁶ × 3600 s ≈ 0.072 s per hour). For the I2C bus used to talk to the RTC, the following conditions apply:
Start data transfer: a high-to-low transition of the data line while the clock line is held high defines a START condition.
Stop data transfer: a low-to-high transition of the data line while the clock line is held high defines a STOP condition.
Data valid: after a START condition, the data line must remain stable whenever the clock line is high; the data may only change while the clock is low. There is one clock pulse per bit of data.
Each data transfer begins with a START condition and ends with a STOP condition. The number of data bytes transferred between the START and STOP conditions is not restricted and is determined by the master device. The receiver acknowledges each byte with a ninth bit.
A real-time clock can be used in systems to correct timing faults in two different ways. The graphic below shows the Prescaler counting the oscillation cycles per the second period.
The advantage of this method is that the time interval between each second is only slightly altered. However, a variable Prescaler and an extra register for storing the prescale counts, and the interval between applications are necessary for this technology to work.
Suppose the real-time clock does not contain a built-in prescaler that can be used to fine-tune the timing. This diagram illustrates a different way of approaching the problem.
The numbers in the rectangles indicate the seconds counter. The program continuously tracks the real-time clock's seconds count and calculates the accumulated error. Whenever the error reaches one second, a second is added or subtracted to compensate.
This strategy has a drawback: the jump in seconds whenever an adjustment is made can be rather large. On the other hand, the method works with any real-time clock.
To keep track of the current date and time, certain RTCs use electronic counters. Counting seconds, mins, hours, days, weeks, months, and years and taking leap years into account is necessary. Applications can also keep track of the time and date.
The second counter of a real-time clock can be used to implement this system on a microcontroller.
The device timer is commonly accessed through a proprietary method such as get_time(). Using it is as simple as reading the seconds counter and printing out the resulting value; the library handles the rest of the work of converting that count in seconds to the present time of day and date.
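A rough illustration of the idea, using Python's standard time library rather than a specific RTC driver (the counter value here is hypothetical):

import time

seconds_counter = 1653570730                    # hypothetical raw count read from an RTC
t = time.localtime(seconds_counter)             # the library converts the count to calendar fields
print(time.strftime("%Y-%m-%d %H:%M:%S", t))    # e.g. 2022-05-26 ...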
The RPi loses track of time when you turn it off. When powered on, it relies on the Network Time Protocol, which requires an internet connection. A real-time clock module must be added to the Raspberry Pi to keep time without relying on the internet.
First, we'll need to attach our real-time control module to the RPi board. Ensure the Rpi is deactivated or unplugged before beginning any cabling.
Make use of the links in the following table and figure:
The RTC runs from a 3.3 V supply, so it needs a 3.3 V input. Connect the RTC to the Pi 4 over the I2C bus (SDA and SCL).
We must first enable I2C in the RPi to use the RTC component.
Open a terminal window and enter the command raspi-config:
sudo raspi-config
Select the Interfacing Option in the configuration tool.
Selecting I2C will allow I2C in the future.
Before rebooting, enable the I2C.
sudo reboot
Confirm the Connections of the RTC. Then using the I2C interface, we can check to see if our real-time clock module is connected to the device.
Ensure your Pi's software is updated before putting any software on it. Defective dependencies in modules are frequently to blame for installation failures.
sudo apt-get update -y
sudo apt-get upgrade -y
To check whether our RPi detects the real-time clock module, we'll need python-smbus and i2c-tools installed.
On the command line:
sudo apt-get install python-smbus i2c-tools
Then:
sudo i2cdetect -y 1
Many real-time clock devices use the 0x68 address. If the address appears as a number (68), it is not being used by any driver; if the system shows "UU" instead, a driver is actively using it.
Install Python.
sudo apt-get install python-pip
sudo apt-get install python3-pip
To clone the library, you'll first need git installed on your computer:
sudo apt install git-all
First, we will have to download the library using the following command in the command line.
sudo git clone https://github.com/switchdoclabs/RTC_SDL_DS3231.git
Cloning creates a folder containing the RTC library. Copy and paste the following code into a new Python file and save it.
import time
import SDL_DS3231

ds3231 = SDL_DS3231.SDL_DS3231(1, 0x68)
ds3231.write_now()

while True:
    print("Raspberry Pi=\t" + time.strftime("%Y-%m-%d %H:%M:%S"))
    print("Ds3231=\t\t%s" % ds3231.read_datetime())
    time.sleep(10.0)
We begin by importing the packages we plan to use for this project.
The clock is initialized.
Next, the RPi and real-time clock module times are printed.
Then, execute the program.
$ python rtc.py
In this case, the output should look something like the following.
The write_all() function can set the RTC to a date and time different from the Pi's clock.
ds3231.write_all(29,30,4,1,3,12,92,True)
import time
import SDL_DS3231

ds3231 = SDL_DS3231.SDL_DS3231(1, 0x68)
ds3231.write_all(29, 30, 4, 1, 3, 12, 92, True)

while True:
    print("Raspberry Pi=\t" + time.strftime("%Y-%m-%d %H:%M:%S"))
    print("Ds3231=\t\t%s" % ds3231.read_datetime())
    time.sleep(10.0)
Time and date are shown to have adjusted on the rtc. With this, we'll be able to use the real-time clock and the package for various projects. However, more setup is required because we'll use the RPi's real-time clock for this project.
Next, we will configure the RTC on the RPi used in this project. The first thing to do is to follow the steps outlined above.
The real-time clock address is 0x68, so we must use that. Configuration.txt must be edited so that a device tree layer can be added.
sudo nano /boot/config.txt
Add the line corresponding to your real-time clock chip to the Pi config file.
dtoverlay=i2c-rtc,ds1307
or
dtoverlay=i2c-rtc,pcf8523
or
dtoverlay=i2c-rtc,ds3231
After saving and restarting the Pi, inspect the 0x68 address status.
sudo reboot
After reboot, run:
sudo i2cdetect -y 1
Once the "fake hwclock" has been disabled, we can use the real-time clock real hwclock again.
The commands below should be entered into the terminal to remove the bogus hwclock from use.
sudo apt-get -y remove fake-hwclock
sudo update-rc.d fake-hwclock remove
sudo systemctl disable fake-hwclock
We can use our rtc hardware as our primary clock after disabling the fake hwclock.
sudo nano /lib/udev/hwclock-set
Afterward, comment out the lines below in that file so that they look like this:
#if [ -e /run/systemd/system ] ; then
#    exit 0
#fi
#/sbin/hwclock --rtc=$dev --systz --badyear
#/sbin/hwclock --rtc=$dev --systz
We can now run some tests to see if everything is working properly.
To begin with, the real-time clock will show an inaccurate time on its screen. To use the real-time clock as a device, we must first correct its time.
To read the current date and time from the real-time clock, run:
$sudo hwclock
It is necessary to have an internet connection to our real-time clock module to synchronize the accurate time from the internet.
To verify the date of the terminal and time input, type in:
date
Time can also be manually set using the line below. It's important to know that the internet connection will immediately correct it even if you do this manual process.
date --date="May 26 2022 13:12:10"
The Raspberry Pi's system time can also be written to the real-time clock module:
sudo hwclock --systohc
or
sudo hwclock -w
Setting the time on our real-time clock module is also possible using:
sudo hwclock --set --date "Thu May 26 13:12:10 PDT 2022"
Or
sudo hwclock --set --date "26/05/2022 13:12:45"
Once the time has been synchronized, the real-time clock module needs a battery inserted to continue saving the timestamp. Once the real-time clock module has been installed on the RPi, the timestamp will be updated automatically!
Building our application
Next, we'll build a digital clock that includes an alarm, stopwatch, and timer features. It is written in Python 3, and it will operate on a Raspberry Pi using the Tkinter GUI library.
Python's standard library includes the Tkinter GUI framework, which runs on various operating systems and platforms. Cross-platform compatibility means the same code can be used on every platform.
Tkinter is a small, portable, and simple-to-use alternative to other tools available. Because of this, it is the best platform for quickly creating cross-platform apps that don't require a modern appearance.
Python Tkinter module makes it possible to create graphical user interfaces for Python programs.
Tkinter offers a wide range of standard GUI elements and widgets to create user interfaces. Controls and menus are included in this category.
Tkinter has all of the benefits of the Tk package, thanks to its layered design. When Tkinter was first developed, the Tk GUI toolkit had already had time to mature, so it benefited from that experience. As a result, Tk developers can learn Tkinter very quickly because converting from Tcl/Tk to Tkinter is extremely simple.
Because Tkinter is so user-friendly, getting started with it is a breeze. Tkinter hides the complex and detailed calls behind simple, understandable methods. Python is a natural choice for creating a working prototype rapidly, and its favorite GUI library can be used in much the same way.
Tkinter-based Py scripts don't require any running changes on a different platform. Any environment that implements python can use Tkinter. This gives it a strong benefit over many other competitive libraries, typically limited to a single or a handful of operating systems. Tkinter, on the other hand, provides a platform-specific look and feel.
Python distributions now include Tkinter by default. Therefore, it is possible to run commands using Tkinter without additional modules.
Tkinter's slower execution may be due to the multi-layered strategy used in its design. Most computers are relatively quick enough to handle the additional processes in a reasonable period, despite this being an issue for older, slower machines. When time is of the essence, it is imperative to create an efficient program.
Import the following modules.
from tkinter import *
from tkinter.ttk import *
import datetime
import platform
import os              # used for the alarm sound on Linux/macOS (on Windows you would also import winsound)
import SDL_DS3231      # RTC driver set up earlier in this tutorial

ds3231 = SDL_DS3231.SDL_DS3231(1, 0x68)
We are now going to build a Tkinter window.
window = Tk()
window.title("Clock")
window.geometry('700x500')
Here, we've created a basic Tkinter window, titled "Clock" and sized to 700x500 pixels.
Tkinter notebook can be used to add tab controls. We'll create four new tabs, one for each of the following: Clock, Alarm, Stopwatch, and Timer.
tabs_control = Notebook(window)
clock_tab = Frame(tabs_control)
alarm_tab = Frame(tabs_control)
stopwatch_tab = Frame(tabs_control)
timer_tab = Frame(tabs_control)
tabs_control.add(clock_tab, text="Clock")
tabs_control.add(alarm_tab, text="Alarm")
tabs_control.add(stopwatch_tab, text='Stopwatch')
tabs_control.add(timer_tab, text='Timer')
tabs_control.pack(expand = 1, fill ="both")
We've created a frame for every tab and then added it to our notebook.
We are now going to add the clock Tkinter components. Instead of relying on the RPi to provide the date and time, we'll use the rtc module time and date instead.
We'll include a callback function to the real-time clock module in the clock code to obtain real-time.
def clock():
    date_time = ds3231.read_datetime()
    time_label.config(text=date_time)
    time_label.after(1000, clock)
The timestamp is retrieved from the RTC module and shown on the label, refreshing every second; it can be formatted to AM/PM time with strftime if you prefer. This method must be placed after Tkinter's initialization but before the notebook setup.
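If you do want a 12-hour display, a small example of formatting a datetime value with strftime (the variable here is illustrative):

import datetime

now = datetime.datetime.now()         # or the value returned by ds3231.read_datetime()
print(now.strftime("%I:%M:%S %p"))    # e.g. 01:30:05 PM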
We'll design an Alarm that will activate when the allotted time has expired in the next step.
get_alarm_time_entry = Entry(alarm_tab, font = 'calibri 15 bold')
get_alarm_time_entry.pack(anchor='center')
alarm_instructions_label = Label(alarm_tab, font = 'calibri 10 bold', text = "Enter Alarm Time. Eg -> 01:30 PM, 01 -> Hour, 30 -> Minutes")
alarm_instructions_label.pack(anchor='s')
set_alarm_button = Button(alarm_tab, text = "Set Alarm", command=alarm)
set_alarm_button.pack(anchor='s')
alarm_status_label = Label(alarm_tab, font = 'calibri 15 bold')
alarm_status_label.pack(anchor='s')
Set the alarm in the format HH:MM (AM/PM); for example, 01:30 PM corresponds to 1:30 p.m. Below that is a "Set Alarm" button, plus an alarm status label that indicates whether the alarm has been set.
The Set Alarm button calls the method below. Place it between the clock function and the notebook setup.
def alarm():
    main_time = datetime.datetime.now().strftime("%H:%M %p")
    alarm_time = get_alarm_time_entry.get()
    alarm_time1, alarm_time2 = alarm_time.split(' ')
    alarm_hour, alarm_minutes = alarm_time1.split(':')
    main_time1, main_time2 = main_time.split(' ')
    main_hour1, main_minutes = main_time1.split(':')
    if int(main_hour1) > 12 and int(main_hour1) < 24:
        main_hour = str(int(main_hour1) - 12)
    else:
        main_hour = main_hour1
    if int(alarm_hour) == int(main_hour) and int(alarm_minutes) == int(main_minutes) and main_time2 == alarm_time2:
        for i in range(3):
            alarm_status_label.config(text='Time Is Up')
            if platform.system() == 'Windows':
                winsound.Beep(5000, 1000)
            elif platform.system() == 'Darwin':
                os.system('say Time is Up')
            elif platform.system() == 'Linux':
                os.system('beep -f 5000')
        get_alarm_time_entry.config(state='enabled')
        set_alarm_button.config(state='enabled')
        get_alarm_time_entry.delete(0, END)
        alarm_status_label.config(text='')
    else:
        alarm_status_label.config(text='Alarm Has Started')
        get_alarm_time_entry.config(state='disabled')
        set_alarm_button.config(state='disabled')
        alarm_status_label.after(1000, alarm)
Here the current time is taken and formatted; if the time provided by the user matches it, the alarm goes off, beeping with the operating system's default method.
As a final step, we'll add a stopwatch to our code.
To complete our stopwatch, we'll add all its Tkinter elements now.
stopwatch_label = Label(stopwatch_tab, font='calibri 40 bold', text='Stopwatch')
stopwatch_label.pack(anchor='center')
stopwatch_start = Button(stopwatch_tab, text='Start', command=lambda:stopwatch('start'))
stopwatch_start.pack(anchor='center')
stopwatch_stop = Button(stopwatch_tab, text='Stop', state='disabled',command=lambda:stopwatch('stop'))
stopwatch_stop.pack(anchor='center')
stopwatch_reset = Button(stopwatch_tab, text='Reset', state='disabled', command=lambda:stopwatch('reset'))
stopwatch_reset.pack(anchor='center')
The stopwatch method is activated by pressing the Start, Stop, and Reset Buttons located below the Stopwatch Label.
The stopwatch counter will be implemented in the next section. First, add two stopwatch variables: place the two lines below after the Tkinter initialization and the clock method.
stopwatch_counter_num = 66600
stopwatch_running = False
These variables describe the stopwatch's state. Adding the stopwatch counter function is the next step; place it below the alarm method and above the notebook setup.
def stopwatch_counter(label):
    def count():
        if stopwatch_running:
            global stopwatch_counter_num
            if stopwatch_counter_num == 66600:
                display = "Starting..."
            else:
                tt = datetime.datetime.fromtimestamp(stopwatch_counter_num)
                string = tt.strftime("%H:%M:%S")
                display = string
            label.config(text=display)
            label.after(1000, count)
            stopwatch_counter_num += 1
    count()
This counter function drives the stopwatch display: the stopwatch counter is incremented by 1 once every second.
Next comes the stopwatch method itself, which is invoked by the stopwatch controls.
def stopwatch(work):
    if work == 'start':
        global stopwatch_running
        stopwatch_running = True
        stopwatch_start.config(state='disabled')
        stopwatch_stop.config(state='enabled')
        stopwatch_reset.config(state='enabled')
        stopwatch_counter(stopwatch_label)
    elif work == 'stop':
        stopwatch_running = False
        stopwatch_start.config(state='enabled')
        stopwatch_stop.config(state='disabled')
        stopwatch_reset.config(state='enabled')
    elif work == 'reset':
        global stopwatch_counter_num
        stopwatch_running = False
        stopwatch_counter_num = 66600
        stopwatch_label.config(text='Stopwatch')
        stopwatch_start.config(state='enabled')
        stopwatch_stop.config(state='disabled')
        stopwatch_reset.config(state='disabled')
We will now create a timer that rings when the timer has expired. Based on the stopwatch, it deducts one from the counter rather than adding 1.
The timer component will now be included in Tkinter.
timer_get_entry = Entry(timer_tab, font='calibiri 15 bold')
timer_get_entry.pack(anchor='center')
timer_instructions_label = Label(timer_tab, font = 'calibri 10 bold', text = "Enter Timer Time. Eg -> 01:30:30, 01 -> Hour, 30 -> Minutes, 30 -> Seconds")
timer_instructions_label.pack(anchor='s')
timer_label = Label(timer_tab, font='calibri 40 bold', text='Timer')
timer_label.pack(anchor='center')
timer_start = Button(timer_tab, text='Start', command=lambda:timer('start'))
timer_start.pack(anchor='center')
timer_stop = Button(timer_tab, text='Stop', state='disabled',command=lambda:timer('stop'))
timer_stop.pack(anchor='center')
timer_reset = Button(timer_tab, text='Reset', state='disabled', command=lambda:timer('reset'))
timer_reset.pack(anchor='center')
The timer entry and its instruction label explain how to set the timer in the format HH:MM:SS; for instance, 01:30:40 denotes one hour, thirty minutes, and forty seconds. Below it are Start, Stop, and Reset buttons that call the timer method.
To begin, we'll insert two timer counter variables. Add the two lines of code below underneath the stopwatch variables and the clock method.
timer_counter_num = 66600
timer_running = False
These variables describe the "Timer" feature's state. Next, we'll implement the timer counter function; place it between the stopwatch method and the notebook initialization.
def timer_counter(label):
    def count():
        global timer_running
        if timer_running:
            global timer_counter_num
            if timer_counter_num == 66600:
                for i in range(3):
                    display = "Time Is Up"
                    if platform.system() == 'Windows':
                        winsound.Beep(5000, 1000)
                    elif platform.system() == 'Darwin':
                        os.system('say Time is Up')
                    elif platform.system() == 'Linux':
                        os.system('beep -f 5000')
                timer_running = False
                timer('reset')
            else:
                tt = datetime.datetime.fromtimestamp(timer_counter_num)
                string = tt.strftime("%H:%M:%S")
                display = string
            timer_counter_num -= 1
            label.config(text=display)
            label.after(1000, count)
    count()
The Timer counter controls the timer. Timer counter-variable num is decreased by one each second.
def timer(work):
    if work == 'start':
        global timer_running, timer_counter_num
        timer_running = True
        if timer_counter_num == 66600:
            timer_time_str = timer_get_entry.get()
            hours, minutes, seconds = timer_time_str.split(':')
            minutes = int(minutes) + (int(hours) * 60)
            seconds = int(seconds) + (minutes * 60)
            timer_counter_num = timer_counter_num + seconds
        timer_counter(timer_label)
        timer_start.config(state='disabled')
        timer_stop.config(state='enabled')
        timer_reset.config(state='enabled')
        timer_get_entry.delete(0, END)
    elif work == 'stop':
        timer_running = False
        timer_start.config(state='enabled')
        timer_stop.config(state='disabled')
        timer_reset.config(state='enabled')
    elif work == 'reset':
        timer_running = False
        timer_counter_num = 66600
        timer_start.config(state='enabled')
        timer_stop.config(state='disabled')
        timer_reset.config(state='disabled')
        timer_get_entry.config(state='enabled')
        timer_label.config(text='Timer')
If work is 'start', this method retrieves the timer entry text, converts it to seconds, adds it to the timer counter, and calls timer_counter to start the countdown. If it is 'stop', timer_running is set to False. If it is 'reset', the counter is set back to 66600 and timer_running is set to False.
Finally, here we are at the end of the project. Finally, add the code below to start Tkinter and the clock.
clock()
window.mainloop()
It will open the Tkinter and clock windows.
Stopwatch
32-Bit Second Counter Problems
Even while this counter can operate for a long period, it will eventually run out of memory space. Having a narrow count range can pose problems.
Management of the streetlights
A street light management system is a solution that regulates the automatic operation of lights in public spaces. It can measure electricity consumption and detect tampering and other electrical conditions that hinder the most efficient use of street lights. IoT-based automated streetlight management systems are intended to cut electricity usage while decreasing labor costs through precise, schedule-based control. Streetlights are a vital element of any town since they improve night vision, offer safety on the streets, and illuminate public places, but they can waste a significant amount of energy. In manually controlled streetlight systems, lights run at full capacity from sunset to morning, even when there is adequate ambient light.

An RTC-based approach guarantees high reliability and long-term stability, and the switching is carried out in software. Compared with the manual system, the automated one performs better: on and off operations are scheduled by the RTC module for the period between dusk and the dawn of the next day. Manual operation was dropped because of human error and the difficulty of switching on and off at the right times, which required sending certified electricians over long distances and caused delays. Using an IoT system controlled from a central command center and portable devices avoids these delays while also identifying and correcting faults, conserving energy, and providing better services more efficiently.
This tutorial teaches the mechanics of a real-time clock and some major applications in real life. With ease, you can integrate this module into most of your projects that require timed operations, such as robot movements. You will also understand how the RTC works if you play around with it more. In the next tutorial, we will build a stop motion movie system using Raspberry pi 4.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|-----|------------|-------------|-------------|
| 1 | Breadboard | Amazon | Buy Now |
| 2 | DC Motor | Amazon | Buy Now |
| 3 | Jumper Wires | Amazon | Buy Now |
| 4 | Raspberry Pi 4 | Amazon | Buy Now |
A growing number of us already use face recognition software without realizing it. Facial recognition is used in several applications, from basic Facebook tag suggestions to advanced security screening surveillance. Chinese schools employ facial recognition to track students' attendance and behaviour for the first time. Retail stores use face recognition to classify their clients and identify those who have a history of crime. There's no denying that this tech will be all over soon, especially with so many other developments in the works.
When it comes to facial recognition, biometric authentication goes well beyond simply being able to identify human faces in images or videos. An additional step is taken to identify the person's identity. A facial recognition software compares an image of a person's face to a database to see if the features match another person's. Since facial expressions and hair do not affect the technology's ability to identify matches, it has been built to do so.
How can face recognition be used when it comes to smart security systems?
The first thing you should do if you want to make your home "smart" is to focus on security. Your most prized possessions are housed at this location, and protecting them is a must. You can monitor your home security status from your computer or smartphone thanks to a smart security system when you're outdoors.
Installing a system that is not wireless in your house and signing you up for professional monitoring was traditionally done by a security company. The plot has been rewritten. When setting up a smart home system, you can even do it yourself. In addition, your smart smartphone acts as a professional monitor, providing you with real-time information and notifications.
Face recognition is the ability of a smart camera in your house to identify a person based on their face. Consequently, you will have to inform the algorithm what face goes with what name for face recognition to operate. Facial detection in security systems necessitates the creation of user accounts for family members, acquaintances, and others you want to be identified by the system. Your doors or the inside of your house will be alerted when they arrive.
Face-recognition technology allows you to create specific warning conditions. For example, you can configure a camera to inform you when an intruder enters your home with a face the camera doesn't recognize.
Astonishing advancements in smart tech have been made in recent years. Companies are increasingly offering automatic locks with face recognition. You may open your doors just by smiling at a face recognition system door lock. You could, however, use a passcode or a real key to open and close the smart door. You may also configure your smart house lock to email you an emergency warning if someone on the blacklist tries to unlock your smart security door.
OpenCV, as previously stated, will be used to identify and recognize faces. So, before continuing, let's set up the OpenCV library. Your Pi 4 needs a 2A power adapter and an HDMI cable because we won't be able to access the Pi's screen through SSH. The OpenCV documentation is a good place to learn how image processing works, but I'm not going to go into it here.
pip is well-known for making it simple to add new libraries to the python language. In addition, there is a technique to install OpenCV on a Raspberry Pi via PIP, but it didn't work for me. We can't obtain complete control of the OpenCV library when using pip to install OpenCV; however, this might be worth a go if time is of the essence.
Ensure pip is set up on your Raspberry Pi. Then, one by one, execute the lines of code listed below into your terminal.
sudo apt-get install libhdf5-dev libhdf5-serial-dev
sudo apt-get install libqtwebkit4 libqt4-test
sudo pip install opencv-contrib-python
Facial recognition and face detection are not the same thing, and this must be clarified before we proceed. With face detection, the program simply detects that a face is present; it has no clue who that person is. With facial recognition, the face is not only detected, the program also recognizes whose face it is. At this point, it's pretty evident that facial detection comes before facial recognition. To explain how OpenCV recognizes a person or other objects, I will have to go into detail.
Essentially, a webcam feed is like a long series of continuously updating still photos. And every image is nothing more than a jumble of pixels with varying values arranged in a specific order. So, how does computer software identify a face among all of these pixels? Trying to describe the underlying techniques is outside the scope of this post, but since we're utilizing the OpenCV library, facial recognition is a straightforward process that doesn't necessitate a deeper understanding of the underlying principles.
We can only recognize a face once we can detect it. To detect an object such as a face, OpenCV provides Classifiers: pre-trained datasets that may be utilized to recognize a certain item. Classifiers also exist for other objects, such as the mouth, the eyebrows, the number plate of a vehicle, and smiles.
Alternatively, OpenCV allows you to design your custom Classifier for detecting any objects in images by retraining the cascade classifier. For the sake of this tutorial, we'll be using the classifier named "haarcascade_frontalface_default.xml" to identify faces from the camera. We'll learn more about image classifiers and how to apply them in code in the following sections.
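As a brief sketch of how such a classifier is applied to a single camera frame (assuming a connected camera and the XML file in the working directory; this is not the trainer program that follows):

import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)                 # first attached camera
ret, frame = cap.read()                   # grab one frame from the feed
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
print("Faces found:", len(faces))
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # box each detected face
cap.release()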
For the face training and detection, we only need the pi camera, and to install this, insert the raspberry pi camera in the pi camera slot as shown below. Then go to your terminal, open the configuration window using "sudo raspi-config", and press enter. Navigate to the interface options and activate the pi camera module. Accept the changes and finish the setup. Then reboot your RPi.
First, ensure pip is set up, and then install the following packages using it.
Install dlib: Dlib is a set of libraries for building ML and data analysis programs in the real world. To get dlib up and running, type the following command into your terminal window.
pip install dlib
If everything goes according to plan, you should see something similar after running this command.
Install pillow: The Python Image Library, generally known as PIL, is a tool for opening, manipulating, and saving images in various formats. The following command will set up PIL for you.
pip install pillow
You should receive the message below once this app has been installed.
Install face_recognition: The face recognition package is often the most straightforward tool for detecting and manipulating human faces. Face recognition will be made easier with the help of this library. Installing this library is as simple as running the provided code.
pip install face_recognition --no-cache-dir
If all goes well, you should see something similar to the output shown below once the software is installed. Because of its size, I used the "--no-cache-dir" command-line option to install the package without keeping any of its cache files.
A script named "haarcascade_frontalface_default.xml" is for detecting faces using a Classifier. It will also build a "face-trainner.yml" file using the training script based on the photos found in the face images directory.
The face images folder indicated above should contain subdirectories with the names of each person to be identified and several sample photographs of them. Esther and x have been identified for this tutorial. As a result, I've just generated the two sub-directories shown below, each containing a single image.
You must rename the directory and replace the photographs with the names of the people you are identifying. It appears that a minimum of five images for each individual is optimal. However, the more participants, the slower the software will run.
Face Trainer.py is a Python program that may be used to train a new face. The purpose of the program is to access the face images folder and scan for faces. As soon as it detects a face, it crops it, converts it to grayscale, and saves the data to a file named face-trainner.yml using the face recognition packages we loaded previously. The information in this script can be used to identify the faces later. The whole trainer program is provided at the end; below, we'll go over some of its more critical lines.
The first step is to import the necessary modules. The cv2 package is utilized to process photos. The NumPy library can be used for image conversion, the operating system package is used for directory navigation, and PIL will be used to process photos.
import cv2
import numpy as np
import os
from PIL import Image
Ensure that the XML file in question is located in the project directory to avoid encountering an issue. The LBPH Facial recognizer is then constructed using the recognizer parameter.
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.createLBPHFaceRecognizer()
Face_Images = os.path.join(os.getcwd(), "Face_Images")
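Note: cv2.createLBPHFaceRecognizer() comes from the old OpenCV 2.x API. If you are running OpenCV 3.x or 4.x (installed via the opencv-contrib-python package), the equivalent calls are slightly different; a minimal sketch of the substitution, under that assumption:

# OpenCV 3.x/4.x: the LBPH recognizer lives in the cv2.face submodule,
# and a saved model is loaded with read() instead of load()
recognizer = cv2.face.LBPHFaceRecognizer_create()
# recognizer.read("face-trainner.yml")   # equivalent of recognizer.load() used later in this tutorial

The rest of the tutorial's code stays the same.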
In order to open all of the files ending in .jpeg, .jpg, or .png within every subfolder of the face images folder, we traverse the tree with for loops. We record the path to every image in a variable named path, and the name of the folder holding it (the name of the person in the images) in a variable named person_name.
for root, dirs, files in os.walk(Face_Images):
    for file in files:  #check every directory in it
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)
Whenever the person's name changes from one folder to the next, we increment a variable named Face_ID so that each individual gets a unique Face_ID.
if pev_person_name!=person_name:
    Face_ID=Face_ID+1  #If yes increment the ID count
    pev_person_name = person_name
Because the BGR colour values can be ignored, grayscale photos are simpler for OpenCV to work with than colour ones. We therefore convert each image to grayscale and resize it to 550 x 550 so that all the pictures are the same size. To avoid having your face cut out, place it in the centre of the photo. We then convert the image into a NumPy array so it can be processed numerically. Afterwards, the classifier detects the face in the photo and saves the result in a variable named faces.
Gery_Image = Image.open(path).convert("L")
Crop_Image = Gery_Image.resize( (550,550) , Image.ANTIALIAS)
Final_Image = np.array(Crop_Image, "uint8")
faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)
Our region of interest (ROI) is the portion of the image where the face was found after cropping, and it is what we use to train the face-recognition system. Every ROI face is appended to a list named x_train. We then feed the recognizer with this training data together with the Face_ID data, and the resulting model is saved to disk.
for (x,y,w,h) in faces:
    roi = Final_Image[y:y+h, x:x+w]
    x_train.append(roi)
    y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.save("face-trainner.yml")
You'll notice that the face-trainner.yml file is updated whenever you run this program, so make sure to re-run the code if you modify the photographs in the Face_Images folder. For debugging purposes, the script prints out the Face_ID, the path, the person's name, and the NumPy arrays.
Now that the training data has been prepared, we can begin using it to identify people. We'll feed video from a USB webcam or Pi camera into the face recognizer application, which converts it into frames. Once we've found the faces in those frames, we compare them against all of the previously trained Face IDs. Finally, we draw a box around each recognized face and write the identified person's name next to it. The whole program is presented afterwards, and the explanation is below.
Import the required module from the training program and use the classifier because we need to do more facial detection in this program.
import cv2
import numpy as np
import os
from time import sleep
from PIL import Image
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.createLBPHFaceRecognizer()
The names of the people in the folders should be entered in the list named labels, in the same order as their Face IDs were generated during training. In my case, it is "Esther" and "Unknown".
labels = ["Esther", "Unknown"]
We need the trainer file to detect faces, so we import it into our software.
recognizer.load("face-trainner.yml")
The camera provides the video stream. It's possible to access any second pi camera by replacing 0 with 1.
cap = cv2.VideoCapture(0)
In the next step, we grab a frame from the video, convert it to grayscale, and then search for a face in it. Once a face is detected, we crop that region out so we are left with the grayscale region of interest.
ret, img = cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
for (x, y, w, h) in faces:
    roi_gray = gray[y:y+h, x:x+w]
    id_, conf = recognizer.predict(roi_gray)
The conf value tells us how confident the program is that it has identified the person. If the confidence is high enough, we look up the person's name from the labels list using the ID number, draw a square around the face, and write the name above it.
if conf>=80:
    font = cv2.FONT_HERSHEY_SIMPLEX
    name = labels[id_]
    cv2.putText(img, name, (x,y), font, 1, (0,0,255), 2)
    cv2.rectangle(img,(x,y),(x+w,y+h),(0,0,255),2)
Finally, we display the frame we just evaluated and break out of the video loop when the 'q' key is pressed.
cv2.imshow('Preview',img)
if cv2.waitKey(20) & 0xFF == ord('q'):
    break
While running this application, ensure the Raspberry is linked to a display via HDMI. A display with your video stream and the name will appear when you open the application. There will be a box around the face identified in the video feed, and if your software recognizes the face, it displays that person’s name. As evidenced by the image below, we've trained our software to identify my face, which shows the recognition process in action.
import cv2
import numpy as np
import os
from PIL import Image
labels = ["Esther", "Unknown"]
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.createLBPHFaceRecognizer()
recognizer.load("face-trainner.yml")
cap = cv2.VideoCapture(0)
while(True):
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)  #Recog. faces
    for (x, y, w, h) in faces:
        roi_gray = gray[y:y+h, x:x+w]
        id_, conf = recognizer.predict(roi_gray)
        if conf>=80:
            font = cv2.FONT_HERSHEY_SIMPLEX
            name = labels[id_]
            cv2.putText(img, name, (x,y), font, 1, (0,0,255), 2)
            cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
    cv2.imshow('Preview',img)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
import cv2
import numpy as np
import os
from PIL import Image
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.createLBPHFaceRecognizer()
Face_ID = -1
pev_person_name = ""
y_ID = []
x_train = []
Face_Images = os.path.join(os.getcwd(), "Face_Images")
print (Face_Images)
for root, dirs, files in os.walk(Face_Images):
    for file in files:
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)
            print(path, person_name)
            if pev_person_name!=person_name:
                Face_ID=Face_ID+1
                pev_person_name = person_name
            Gery_Image = Image.open(path).convert("L")
            Crop_Image = Gery_Image.resize( (550,550) , Image.ANTIALIAS)
            Final_Image = np.array(Crop_Image, "uint8")
            faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)
            print(Face_ID, faces)
            for (x,y,w,h) in faces:
                roi = Final_Image[y:y+h, x:x+w]
                x_train.append(roi)
                y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.save("face-trainner.yml")
Since the "How to operate DC motor in Rpi 4" guide has covered the basics of controlling a DC motor, I won't provide much detail here. Please read this topic if you haven't already. Check all the wiring before using the batteries in your circuit, as outlined in the image above. Everything must be in place before connecting your breadboard's power lines to the battery wires.
To activate the motors, open the terminal because you'll use the Python code-writing program called Nano in this location. For those of you who aren't familiar with the command-line text editor known as Nano, I'll show you how to use some of its commands as we go.
This code will activate the motor for two seconds, so try it out.
import RPi.GPIO as GPIO
from time import sleep
GPIO.setmode(GPIO.BOARD)
Motor1A = 16
Motor1B = 18
Motor1E = 22
GPIO.setup(Motor1A,GPIO.OUT)
GPIO.setup(Motor1B,GPIO.OUT)
GPIO.setup(Motor1E,GPIO.OUT)
print "Turning motor on"
GPIO.output(Motor1A,GPIO.HIGH)
GPIO.output(Motor1B,GPIO.LOW)
GPIO.output(Motor1E,GPIO.HIGH)
sleep(2)
print "Stopping motor"
GPIO.output(Motor1E,GPIO.LOW)
GPIO.cleanup()
The first two lines of code tell Python which modules the program needs.
The first line imports the RPi.GPIO package, which controls the RPi's GPIO pins and takes care of all the grunt work.
The second imports the sleep function, which pauses the script for a few seconds so the motor is left to run for a while.
The GPIO.setmode(GPIO.BOARD) call tells the library to use the RPi's physical board pin numbering. We then tell Python that pins 16, 18 and 22 correspond to the motor.
Pin A is used to steer the L293D in one way, and pin B is used to direct it in the opposite direction. You can turn on the motor using an Enable pin, referred to as E, inside the test file.
Finally, use GPIO.OUT to inform the RPi that all these are outputs.
With the pins configured, the RPi is ready to drive the motor: as seen in the code, the pins are switched on, the script pauses for 2 seconds, and the enable pin is then switched off.
Save and quit by hitting CTRL-X, and a confirmation notice appears at the bottom. To acknowledge, tap Y and Return. You can now run the program in the terminal and watch as the motor begins to spin up.
sudo python motor.py
If the motor doesn't move, check the cabling or power supply. The debug process might be a pain, but it's an important phase in learning new things!
I'll teach you how to reverse a motor's rotation to spin in the opposite direction.
There's no need to touch the wiring at this point; it's all Python. Create a new script called motorback.py to accomplish this. Open it in Nano by typing the command:
nano motorback.py
Please type in the given program:
import RPi.GPIO as GPIO
from time import sleep
GPIO.setmode(GPIO.BOARD)
Motor1A = 16
Motor1B = 18
Motor1E = 22
GPIO.setup(Motor1A,GPIO.OUT)
GPIO.setup(Motor1B,GPIO.OUT)
GPIO.setup(Motor1E,GPIO.OUT)
print "Going forwards"
GPIO.output(Motor1A,GPIO.HIGH)
GPIO.output(Motor1B,GPIO.LOW)
GPIO.output(Motor1E,GPIO.HIGH)
sleep(2)
print "Going backwards"
GPIO.output(Motor1A,GPIO.LOW)
GPIO.output(Motor1B,GPIO.HIGH)
GPIO.output(Motor1E,GPIO.HIGH)
sleep(2)
print "Now stop"
GPIO.output(Motor1E,GPIO.LOW)
GPIO.cleanup()
Save by pressing CTRL, then X, then Y, and finally Enter key.
To make the motor run in reverse, the script sets Motor1A low and Motor1B high.
Programmers use the terms "high" and "low" to denote the state of being on or off, respectively.
Motor1E will be turned off to halt the motor.
Irrespective of what A and B are doing, the motor can be turned on or off using the Enable pin.
Take a peek at the Truth Table to understand better what's going on.
When Enable is high, only two states make the motor move: either A or B is high, but not both at the same time.
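Since the truth table itself isn't reproduced in this text, below is a reconstruction of the L293D behaviour described above (assuming the wiring used in this tutorial):

Enable (E) | A | B | Motor
---|---|---|---
HIGH | HIGH | LOW | Spins one way (forwards)
HIGH | LOW | HIGH | Spins the other way (backwards)
HIGH | LOW | LOW | Stopped
HIGH | HIGH | HIGH | Stopped
LOW | any | any | Stopped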
At this point, we have designed our face detection system and the DC motor control circuit; now, we will put the two systems to work together. When the user is verified, the DC motor should run to open the CD-ROM drive and close it after a few seconds.
In our verify code, we will copy the code below to spin the motor in one direction to "open the door" when the user is verified. We will also increase the time to 5 seconds to simulate the time the door stays open for the user to get through. This also allows the motor to spin long enough to open and close the CD-ROM tray completely. I would also recommend putting a stopper on the CD-ROM door so that it doesn't close all the way and get stuck.
if conf>=80:
    font = cv2.FONT_HERSHEY_SIMPLEX
    name = labels[id_]  #Get the name from the List using ID number
    cv2.putText(img, name, (x,y), font, 1, (0,0,255), 2)
    #place our motor code here
    GPIO.setmode(GPIO.BOARD)
    Motor1A = 16
    Motor1B = 18
    Motor1E = 22
    GPIO.setup(Motor1A,GPIO.OUT)
    GPIO.setup(Motor1B,GPIO.OUT)
    GPIO.setup(Motor1E,GPIO.OUT)
    print("Opening")
    GPIO.output(Motor1A,GPIO.HIGH)
    GPIO.output(Motor1B,GPIO.LOW)
    GPIO.output(Motor1E,GPIO.HIGH)
    sleep(5)
    print("Closing")
    GPIO.output(Motor1A,GPIO.LOW)
    GPIO.output(Motor1B,GPIO.HIGH)
    GPIO.output(Motor1E,GPIO.HIGH)
    sleep(5)
    print("stop")
    GPIO.output(Motor1E,GPIO.LOW)
    GPIO.cleanup()
    cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
An individual's biometric identity can be verified by looking at various physical and behavioural characteristics, such as a person's fingerprint, keystrokes, facial characteristics, and voice. Face recognition seems to be the winner because of the precision, simplicity, and lack of contact detection.
Face-recognition technology will continue and will get better over time. The tale has evolved, and your alternatives have grown due to smart tech.
Using an RPi as a surveillance system means you can take it with you and use it wherever you need it.
For the most part, the face-recognition software employed in security systems can reliably assess whether or not the individual attempting entry matches your record of those authorized to enter. On the other hand, certain computer programs are more precise when it comes to identifying faces from diverse angles or different countries.
Concerned users may be relieved to learn that some programs have the option of setting custom confidence criteria, which can significantly minimize the likelihood of the system giving false positives. Alternatively, 2-factor authentication can be used to secure your account.
When your smart security system discovers a match between a user and the list of persons you've given access to, it will instantly let them in. Answering the doorbell or allowing entry isn't necessary.
Face recognition solutions can be readily integrated into existing systems using an API.
A major drawback of face recognition technology is that it puts people's privacy at risk. Having one's face collected and stored in an unidentified database does not sit well with the average person.
Confidentiality is so important that several towns have prohibited law enforcement from using real-time face recognition monitoring. Rather than using live face recognition software, authorities can use records from privately-held security cameras in certain situations.
Having your face captured and stored by face recognition software might make you feel monitored and assessed for your actions. It is a form of criminal profiling since the police can use face recognition to put everybody in their databases via a virtual crime lineup.
This article walked us through creating a complete Smart Security System using a facial recognition program from the ground up. Our model can now recognize faces with the help of OpenCV image manipulation techniques. There are several ways to further your knowledge of supervised machine learning programming with raspberry pi 4, including adding an alarm to ring whenever an individual's face is not recognized or creating a database of known faces to act like a CCTV surveillance system. We'll design a security system with a motion detector and an alarm in the next session.
First, we will design a database for our website, then we will design the RFID circuit for scanning the student cards and displaying present students on the webpage, and finally, we will design the website that we will use to display the attendees of a class.
Where To Buy? | ||||
---|---|---|---|---|
No. | Components | Distributor | Link To Buy | |
1 | Breadboard | Amazon | Buy Now | |
2 | Jumper Wires | Amazon | Buy Now | |
3 | LCD 16x2 | Amazon | Buy Now | |
4 | LCD 16x2 | Amazon | Buy Now | |
5 | Raspberry Pi 4 | Amazon | Buy Now |
Additionally, the Database server offers a DBMS that can be queried and connected to and can integrate with a wide range of platforms. High-volume production environments are no problem for this software. The server's connection, speed, and encryption make it a good choice for accessing the database.
There are clients and servers for MySQL. This system contains a SQL server with many threads that support a wide range of back ends, utility programs, and application programming interfaces.
We'll walk through the process of installing MySQL on the RPi in this part. The RFID kit's database resides on this server, and we'll utilize it to store the system's signed users.
There are a few steps to complete before we can begin installing MySQL on a Raspberry Pi. First, update the package list and upgrade the installed packages with the two commands below.
sudo apt update
sudo apt upgrade
Installing the server software is the next step.
Here's how to get MySQL running on the RPi using the command below:
sudo apt install mariadb-server
Having installed MySQL on the Raspberry Pi, we'll need to protect it by creating a passcode for the "root" account.
If you don't specify a password for your MySQL server, you can access it without authentication.
Using this command, you may begin safeguarding MySQL.
sudo mysql_secure_installation
Follow the on-screen instructions to set a passcode for the root account and safeguard your MySQL database.
To ensure a more secured installation, select "Y" for all yes/no questions.
Remove elements that make it easy for anyone to access the database.
We may need that password to access the server and set up the database and user for applications like PHPMyAdmin.
For now, you can use this command if you wish to access the Rpi's MySQL server and begin making database modifications.
sudo mysql -u root -p
To access MySQL, you'll need to enter the root user's password, which you created in Step 3.
Note: Typing text will not appear while typing, as it does in typical Linux password prompts.
Create, edit, and remove databases with MYSQL commands now available. Additionally, you can create, edit, and delete users from inside this interface and provide them access to various databases.
Typing "quit;" at the MySQL prompt exits the interface and returns you to the command line.
Pressing CTRL + D will also exit the MYSQL command line.
You may proceed to the next step now that you've successfully installed MySQL. In the next few sections, we'll discuss how to get the most out of our database.
The command prompt program MySQL must be restarted before we can proceed with creating a username and database on the RPi.
The MySQL command prompt can be accessed by typing the following command. After creating the "root" user, you will be asked for the password.
To get things started, run the command to create a MySQL database.
The code to create a database is "CREATE DATABASE", and then the name we like to give it.
This database would be referred to as "rfidcardsdb" in this example.
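The statement itself isn't shown above; at the MariaDB prompt it would be:

CREATE DATABASE rfidcardsdb;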
To get started, we'll need to create a MySQL user. The command below can be used to create this new user.
"rfidreader" and "password" will be the username and password for this example. Take care to change these when making your own.
CREATE USER 'rfidreader'@'localhost' IDENTIFIED BY 'password';
We can now offer the user full access to the database after it has been built.
Thanks to this command, "rfidreader" will now have access to all tables in our "rfidcardsdb" database.
GRANT ALL PRIVILEGES ON rfidcardsdb.* TO 'rfidreader'@'localhost' IDENTIFIED BY 'password';
We have to flush the permission table to complete our database and user set up one last time. You cannot grant access to the database without flushing your privilege table.
The command below can be used to accomplish this.
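The flush statement itself isn't printed above; at the MariaDB prompt it is simply:

FLUSH PRIVILEGES;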
Now we have our database configured, and now the next step is to set up our RFID circuit and begin authenticating users. Enter the “Exit” command to close the database configuration process.
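One step the text above doesn't show is creating the table that the later scripts query. Before typing "exit", you can create it with something like the statement below; the exact column types are my assumption, but the column names (card_id, serial_no, user_id, valid) match what the Python scripts in this tutorial expect:

USE rfidcardsdb;
CREATE TABLE cardtbl (
    card_id INT AUTO_INCREMENT PRIMARY KEY,
    serial_no VARCHAR(50),
    user_id VARCHAR(50),
    valid TINYINT(1)
);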
An RFID reader reads the tag's data when a Rfid card is attached to a certain object. An RFID tag communicates with a reader via radio waves.
In theory, RFID is comparable to bar codes in that it uses radio frequency identification. While a reader's line of sight to the RFID tag is preferable, it is not required to be directly scanned by the reader. You can't read an RFID tag that is more than three feet away from the reader. To quickly scan a large number of objects, the RFID tech is used, and this makes it possible to identify a specific product rapidly and effortlessly, even if it is sandwiched between several other things.
There are major parts to Cards and tags: an IC that holds the unique identifier value and a copper wire that serves as the antenna:
Another coil of copper wire can be found inside the RFID card reader. When current passes through this coil, it generates a magnetic field. The magnetic flux from the reader creates a current within the wire coil whenever the card is swiped near the reader. This amount of current can power the inbuilt IC of the Card. The reader then reads the card's unique identifying number. For further processing, the card reader transmits the card's unique identification number to the controller or CPU, such as the Raspberry Pi.
Connect the RC522 reader to the Raspberry Pi's SPI header the following way:
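The original wiring diagram isn't reproduced in this text, so here is the typical RC522-to-Raspberry-Pi SPI mapping used with the SimpleMFRC522 library (double-check against your reader's silkscreen):

RC522 pin | Raspberry Pi pin
---|---
3.3V | 3.3V (physical pin 1)
RST | GPIO25 (physical pin 22)
GND | Ground (physical pin 6)
IRQ | Not connected
MISO | GPIO9 (physical pin 21)
MOSI | GPIO10 (physical pin 19)
SCK | GPIO11 (physical pin 23)
SDA | GPIO8 / CE0 (physical pin 24)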
To confirm that SPI is enabled, run the command below and check that spi_bcm2835 is displayed in the terminal.
lsmod | grep spi
SPI must be enabled in the setup for spi bcm2835 to appear (see above). Make sure that RPi is running the most recent software.
Next, make sure Python is installed:
sudo apt-get install python
The RFID RC522 can be interacted with using the Library SPI Py, found on your RPi.
cd ~
git clone https://github.com/lthiery/SPI-Py.git
cd ~/SPI-Py
sudo python setup.py install
cd ~
git clone https://github.com/pimylifeup/MFRC522-python.git
To test if the system is functioning correctly, let's write a small program:
cd ~/
sudo nano test.py
now copy the following the code into the editor
import RPi.GPIO as GPIO
import sys
sys.path.append('/home/pi/MFRC522-python')
from mfrc522 import SimpleMFRC522
reader = SimpleMFRC522()
print("Hold a tag near the reader")
try:
    id, text = reader.read()
    print(id)
    print(text)
finally:
    GPIO.cleanup()
Here we will write a short python code to register users whenever they swipe a new card on the RFID card reader. First, create a file named addcard.py.
copy the following code.
import pymysql
import cv2
from mfrc522 import SimpleMFRC522
import RPi.GPIO as GPIO
import drivers
display = drivers.Lcd()
display.lcd_display_string('Scan your', 1)
display.lcd_display_string('card', 2)
reader = SimpleMFRC522()
id, text = reader.read()
display = drivers.Lcd()
display.lcd_display_string('Type your name', 1)
display.lcd_display_string('in the terminal', 2)
user_id = input("user name?")
# put serial_no uppercase just in case
serial_no = '{}'.format(id)
# open an sql session
sql_con = pymysql.connect(host='localhost', user='rfidreader', passwd='password', db='rfidcardsdb')
sqlcursor = sql_con.cursor()
# first thing is to check if the card exist
sql_request = 'SELECT card_id,user_id,serial_no,valid FROM cardtbl WHERE serial_no = "' + serial_no + '"'
count = sqlcursor.execute(sql_request)
if count > 0:
    print("Error! RFID card {} already in database".format(serial_no))
    display = drivers.Lcd()
    display.lcd_display_string('The card is', 1)
    display.lcd_display_string('already registered', 2)
    T = sqlcursor.fetchone()
    print(T)
else:
    sql_insert = 'INSERT INTO cardtbl (serial_no,user_id,valid) ' + \
                 'values("{}","{}","1")'.format(serial_no, user_id)
    count = sqlcursor.execute(sql_insert)
    if count > 0:
        sql_con.commit()
        # let's check it just in case
        count = sqlcursor.execute(sql_request)
        if count > 0:
            print("RFID card {} inserted to database".format(serial_no))
            T = sqlcursor.fetchone()
            print(T)
            display = drivers.Lcd()
            display.lcd_display_string('Congratulations', 1)
            display.lcd_display_string('You are registered', 2)
GPIO.cleanup()
The program starts by asking the user to scan the card.
Then it connects to the database using the pymysql.connect function.
If we enter our name successfully, the program inserts our details in the database, and a congratulations message is displayed to show that we are registered.
To install the LCD driver library, run the commands below:
sudo apt install git
cd /home/pi/
git clone https://github.com/the-raspberry-pi-guy/lcd.git
cd lcd/
sudo ./install.sh
After installation is complete, try running one of the program files
cd /home/pi/lcd/
Next, we will install the mfrc522 library, which the RFID card reader uses. This will enable us to read the card number for authentication. We will use:
pip install mfrc522
Next, we will import the RPi.GPIO library, which enables us to utilize the Raspberry Pi pins to power the RFID card reader and the LCD screen.
import RPi.GPIO as GPIO
We will also import the drivers for our LCD screen. The LCD screen used here is the I2C 16x2 LCD.
import drivers
Then we will import datetime for logging the time the user swiped the card into the system.
import datetime
In order to read the card using the RFID reader, we will use the following code:
reader = SimpleMFRC522()
display = drivers.Lcd()
display.lcd_display_string('Hello Please', 1)
display.lcd_display_string('Scan Your ID', 2)
try:
    id, text = reader.read()
    print(id)
    display.lcd_clear()
finally:
    GPIO.cleanup()
The LCD is divided into two rows, 1 and 2. To display text in the first row, we use:
display.lcd_display_string("string", 1)
And 2 to display in the second row.
After scanning the card, we will connect to the database we created earlier and search whether the scanned card is in the database or not.
If the query is successful, we can display if the card is in the database; if not, we can proceed, then the user needs to register the card.
If the user is registered, the system saves the log, the username and the time the card was swiped in a text file located in the /var/www/html root directory of the Apache server.
Note that you will need to be a superuser to create the data.txt file in the apache root directory. For this, we will use the following command in the Html folder:
sudo touch data.txt
Then we will have to change the access privileges of this data.txt file to use the program to write the log data. For this, we will use the following code:
sudo chmod 777 data.txt
The next step will be to display this data on a webpage to simulate an online attendance register. The code for the RFID card can be found below.
#! /usr/bin/env python
# Import necessary libraries for communication and display use
import RPi.GPIO as GPIO
from mfrc522 import SimpleMFRC522
import pymysql
import drivers
import os
import numpy as np
import datetime
# read the card using the rfid card
reader = SimpleMFRC522()
display = drivers.Lcd()
display.lcd_display_string('Hello Please', 1)
display.lcd_display_string('Scan Your ID', 2)
try:
    id, text = reader.read()
    print(id)
    display.lcd_clear()
    # Load the driver and set it to "display"
    # If you use something from the driver library use the "display." prefix first
    try:
        sql_con = pymysql.connect(host='localhost', user='rfidreader', passwd='password', db='rfidcardsdb')
        sqlcursor = sql_con.cursor()
        # first thing is to check if the card exist
        cardnumber = '{}'.format(id)
        sql_request = 'SELECT user_id FROM cardtbl WHERE serial_no = "' + cardnumber + '"'
        now = datetime.datetime.now()
        print("Current date and time: ")
        print(str(now))
        count = sqlcursor.execute(sql_request)
        if count > 0:
            print("already in database")
            T = sqlcursor.fetchone()
            print(T)
            for i in T:
                print(i)
                file = open("/var/www/html/data.txt","a")
                file.write(i + " Logged at " + str(now) + "\n")
                file.close()
                display.lcd_display_string(i, 1)
                display.lcd_display_string('Logged In', 2)
        else:
            display.lcd_clear()
            display.lcd_display_string("Please register", 1)
            display.lcd_display_string(cardnumber, 2)
    except KeyboardInterrupt:
        # If there is a KeyboardInterrupt (when you press ctrl+c), exit the program and cleanup
        print("Cleaning up!")
        display.lcd_clear()
finally:
    GPIO.cleanup()
Now we are going to design a simple website with HTML to display the information of the attending students of a class; to do this, we will have to install a local server on our Raspberry Pi.
Web, database, and mail servers all run on various server software. Each of these programs can access and utilize files located on a physical server.
A web server's main responsibility is to provide internet users access to various websites. It serves as a bridge between a server and a client machine to accomplish this. Each time a user makes a request, it retrieves data from the server and posts it to the web.
A web server's largest issue is to simultaneously serve many web users, each of whom requests a separate page.
The web server takes files, converts them to HTML pages, and serves them to the user's browser. Whenever you hear the term "web server," think of the device in charge of ensuring successful communication in a network of computers.
Among its responsibilities is establishing a link between the server and a client's web browser (such as Chrome) to send and receive data (the client-server structure). The Apache software can run on almost any platform, from Microsoft Windows to Unix.
Visitors to your website, such as those who wish to view your homepage or "About Us" page, request files from your server via their browser, and Apache returns the required files in a response (text, images, etc.).
Using HTTP, the client and server exchange data with the Apache webserver, ensuring that the connection is safe and stable.
Because of its open-source foundation, Apache promotes a great deal of customization. As a result, web developers and end-users can customize the source code to fit the needs of their respective websites.
Additional server-side functionality can be enabled or disabled using Apache's numerous modules. Encryption, password authentication, and other capabilities are all available as Apache modules.
To begin, use the following code to upgrade the Pi package list.
sudo apt-get update
sudo apt-get upgrade
After that, set up the Apache2 package.
sudo apt install apache2 -y
That concludes our discussion. You can get your Raspberry Pi configured with a server in just two easy steps.
Type the code below to see if the server is up and functioning.
sudo service apache2 status
You can now verify that Apache is operating by entering your Raspberry Pi's IP address into an internet browser and seeing a simple page like this.
Use the following command in the console of your Raspberry Pi to discover your IP.
hostname -I
Only your home network and not the internet can access the server. You'll need to configure your router's port forwarding to allow this server to be accessed from any location. Our blog will not be discussing this topic.
The standard web page on the Raspberry Pi, as depicted above, is nothing more than an HTML file. First, we will generate our first Html document and develop a website.
Let's start by locating the Html document on the Raspbian system. You can do this by typing the following code in the command line.
cd /var/www/html
To see a complete listing of the items in this folder, run the following command.
ls -al
You'll see every file in the folder, and you'll notice that the index.html file is owned by the root account.
As a result, to make changes to this file, you must first change the file's ownership to your own. The username "pi" is indeed the default for the Raspberry Pi.
sudo chown pi: index.html
To view the changes you've made, all you have to do is reload your browser after saving the file.
Here, we'll begin to teach you the fundamentals of HTML.
To begin a new page, edit the index.html file and remove everything inside it using the command below.
sudo nano index.html
Alternatively, we can use a code editor to open the index.html file and edit it. We will use VS code editor that you can easily install in raspberry pi using the preferences then recommended software button.
You must first learn about HTML tags, which are a fundamental part of HTML. A web page's content can be formatted in various ways by using tags.
There are often two tags used for this purpose: the opening and closing tags. The material inside these tags behaves according to what these tags say.
The <p> tag, for example, is used to add paragraphs of text to the website.
<p>The engineering projects</p>
Web pages can be made more user-friendly by using buttons, which can be activated anytime a user clicks on them.
<button>Manual Refresh</button>
<button>Sort By First Name</button>
<button>Sort By last Name</button>
A typical HTML document is organized as follows:
Let us create the page that we will use in this project.
<html>
<head>
</head>
<body>
<div id="pageDiv">
<p> The engineering projects</p>
<button type="button" id="refreshNames">Manual Refresh</button><br/>
<button type="button" id="firstSort">Sort By First Name</button><br/>
<button type="button" id="lastSort">Sort By Last Name</button>
<div id="namesFromFile">
</div>
</div>
</body>
</html>
<!DOCTYPE html>: HTML documents are identified by this tag. This does not necessitate the use of a closing tag.
<html>: This tag ensures that the material inside will meet all of the requirements for HTML. It is closed with a </html> tag at the end.
<head>: It contains metadata about the website, but when you view the page in a browser, you won't see any of it.
A metadata tag inside the head tag can be used to set the default character encoding of your website, for instance. The head section is closed with a </head> tag.
<head>
<meta charset="utf-8">
</head>
Also, you can have a title tag inside the head tag. This tag sets the title of your web page and has a closing </title> tag.
<head>
<meta charset="utf-8">
<title> My website </title>
</head>
<body>: The primary content of the web page is included within this tag. Everything visible on a web page is usually contained within body tags, and the section is closed with a </body> tag at the end. Many other tags can be found in this body tag, but we'll focus on the ones you need to get started with your first web page.
We will go ahead and style our webpage using CSS with the lines of codes below;
<head>
<style>
<!--
body {
width:100%;
background:#ccc;
color:#000;
text-align:left;
margin:0;
padding:10px;
font:16px/18px Arial;
}
button {
width:160px;
margin:0 0 10px;
}
#pageDiv {
width:160px;
margin:20px auto;
padding:20px;
background:#ddd;
color:#000;
}
#namesFromFile {
margin:20px 0 0;
padding:10px;
background:#fff;
color:#000;
border:1px solid #000;
border-radius:10px;
}
-->
</style>
</head>
The style tag contains cascading style sheet (CSS) rules that let developers style the web page however they prefer.
You can add images to your web page by using the <img> tag. It is also a void element and doesn’t have a closing tag. It takes the following format
<img src="URL of image location">
For example, let’s add an image of the Seeeduino XIAO
<p>The Engineering projects</p>
<img src="https://www.theengineeringprojects.com/wp-content/uploads/2022/04/TEP-Logo.png">
Reload the browser to see the changes
This is the last step of this project, and we will implement a program that reads our data.txt file from the apache root directory and display it on the webpage that we designed. Since we already have our webpage up and running, we will use the javascript programming language to implement this function of displaying the log list on the webpage. All changes that we are about to implement will be done in the index.html file; therefore, open it in the visual studio code editor.
JavaScript is a dynamic computer programming language. It is lightweight and most commonly used as a part of web pages, whose implementations allow client-side scripts to interact with the user and make dynamic pages. It is an interpreted programming language with object-oriented capabilities.
One of the major strengths of JavaScript is that it does not require expensive development tools. You can start with a simple text editor such as Notepad.
Well, javascript as mentioned earlier is a very easy to use language that simply requires us to put the script tags inside the html tags.
<script> script program </script>
<header>
<script>
Here goes our javascript program
</script>
</header>
The JavaScript code first opens the data.txt file, reads all the content from that file, and then uses an XMLHttpRequest to display the content on the webpage. The buttons on the webpage activate different functions in the code. For instance, manual refresh activates:
function refreshNamesFromFile(){
    var namesNode=document.getElementById("namesFromFile");
    while(namesNode.firstChild){
        namesNode.removeChild(namesNode.firstChild);
    }
    getNameFile();
}
This function reads the content of the data.txt
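The getNameFile() function itself isn't listed in this article; a minimal sketch of how it could fetch data.txt with an XMLHttpRequest and append one <p> element per log line is shown below (the element ID matches the page built earlier, while the line-splitting details are my assumption):

function getNameFile(){
    var xhr=new XMLHttpRequest();
    xhr.onreadystatechange=function(){
        if(xhr.readyState===4 && xhr.status===200){
            var namesNode=document.getElementById("namesFromFile");
            var lines=xhr.responseText.split("\n");
            for(var i=0;i<lines.length;i++){
                if(lines[i].trim()!==""){
                    var oP=document.createElement("P");
                    oP.appendChild(document.createTextNode(lines[i]));
                    namesNode.appendChild(oP);
                }
            }
        }
    };
    xhr.open("GET","data.txt",true);  // data.txt sits in the same Apache root directory as index.html
    xhr.send();
}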
The sort by buttons activate the sort function to sort the logged users either by first name or last name. The function that gets activated by these buttons is:
function sortByName(e){
    var i=0, el, sortEl=[], namesNode=document.getElementById("namesFromFile"), sortMethod, evt, evtSrc, oP;
    evt=e||event;
    evtSrc=evt.target||evt.srcElement;
    sortMethod=(evtSrc.id==="firstSort")?"first":"last";
    while(el=namesNode.getElementsByTagName("P").item(i++)){
        sortEl[i-1]=[el.innerHTML.split(" ")[0],el.innerHTML.split(" ")[1]];
    }
    sortEl.sort(function(a,b){
        var x=a[0].toLowerCase(), y=b[0].toLowerCase(), s=a[1].toLowerCase(), t=b[1].toLowerCase();
        if(sortMethod==="first"){
            return x<y?-1:x>y?1:s<t?-1:s>t?1:0;
        }
        else{
            return s<t?-1:s>t?1:x<y?-1:x>y?1:0;
        }
    });
    while(namesNode.firstChild){
        namesNode.removeChild(namesNode.firstChild);
    }
    for(i=0;i<sortEl.length;i++){
        oP=document.createElement("P");
        namesNode.appendChild(oP).appendChild(document.createTextNode(sortEl[i][0]+" "+sortEl[i][1]));
        namesNode.appendChild(document.createTextNode("\r\n"));
        //insert tests -> for style/format
        if(sortEl[i][0]==="John"){
            oP.style.color="#f00";
        }
        if(sortEl[i][0]==="Sue"){
            oP.style.color="#0c0";
            oP.style.fontWeight="bold";
        }
    }
}
Automated attendance systems are excessively time-consuming and sophisticated in the current environment. It is possible to strengthen company ethics and work culture by using an effective smart attendance management system. Employees will only have to complete the registration process once, and images get saved in the system's database. The automated attendance system uses a computerized real-time image of a person's face to identify them. The database is updated frequently, and its findings are accurate in a user interactive state because each employee's presence is recorded.
Smart attendance systems have several advantages, including the following:
Students in elementary, secondary, and postsecondary institutions can utilize this system to keep track of their attendance. It can also keep track of workers' schedules in the workplace. Instead of using a traditional method, it uses RFID tags on ID cards to quickly and securely track each person.
1) Real-time tracking – Keeping track of staff attendance using mobile devices and desktops is possible.
2) Decreased errors – A computerized attendance system can provide reliable information with minimal human intervention, reducing the likelihood of human error and freeing up staff time.
3) Management of enormous data – It is possible to manage and organize enormous amounts of data precisely in the db.
4) Improve authentications and security – A smart system has been implemented to protect the privacy and security of the user's data.
5) Reports – Employee log-ins and log-outs can be tracked, attendance-based compensation calculated, the absent list may be viewed and required actions are taken, and employee personal information can be accessed.
This tutorial taught us to build a smart RFID card authentication project from scratch. We also learned how to set up an Apache server and design a circuit for the RFID reader and the LCD screen. To improve your Raspberry Pi programming skills, you can build a more complex system on top of this code, for example implementing face detection that automatically starts the authentication process once the student faces the camera, or logging a student out whenever they leave the system. In the following tutorial, we will learn how to build a smart security system using facial recognition.
Welcome to the next tutorial of our Raspberry Pi programming course. Our previous tutorial taught us how to print from a Raspberry Pi. We also discussed some libraries to create a print server on our Raspberry Pi. In this lesson, we will learn how to take screenshots on Raspberry Pi using a few different methods. We will also look at how to take snapshots on our Raspberry Pi remotely over SSH.
Where To Buy? | ||||
---|---|---|---|---|
No. | Components | Distributor | Link To Buy | |
1 | Breadboard | Amazon | Buy Now | |
2 | Jumper Wires | Amazon | Buy Now | |
3 | PIR Sensor | Amazon | Buy Now | |
4 | Raspberry Pi 4 | Amazon | Buy Now |
This article will assist you when working with projects that require snapshots for documenting your work, sharing, or generating tutorials.
Screenshots are said to be the essential items on the internet today. And if you have seen these screenshots in tutorial videos or even used them in regular communication, you're already aware of how effective screenshots can be. They are quickly becoming a key internet currency for more efficient communication. Knowing how and when to utilize the correct ones will help you stand out from the crowd.
In this case, we'll employ Scrot, a software program, to help with the PrintScreen procedure. This fantastic software program allows you to take screenshots using commands, shortcut keys, and enabled shortcuts.
Scrot is already installed by default in the latest release of the Raspbian Operating system. In case you already have Scrot, you may skip this installation process. If you're not sure whether it's already installed, use the command below inside a pi Terminal window.
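The check command isn't shown above; assuming the standard Scrot package, asking for its version is enough to confirm it is installed:

scrot -v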
If your Pi returns a "command not found" error, you must install it. Use the following command line to accomplish this:
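The install command isn't printed above; on Raspbian, Scrot is available from the standard repositories:

sudo apt install scrot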
After installing it, you may test its functionality by using the scrot instruction again. If no errors occur, you are set to go on.
Capturing a snapshot on a Raspberry Pi isn't difficult, especially if Scrot is installed. Here are a handful of options for completing the work.
If you have the Scrot installed on your Pi successfully, your default hotkey for taking screenshots will be the Print Screen key.
You can try this quickly by pressing the Print Screen button and checking the /home/pi directory. If you find the screenshots taken, your keyboard hotkey (keyboard shortcut) is working correctly.
In addition, screenshots and print screen pictures will be stored with the suffix _scrot attached to the end of their filename. For instance,
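The example filename isn't shown above; with Scrot's default naming scheme, a capture looks something like this (the date, time and resolution will match your own system):

2022-04-20-103000_1920x1080_scrot.png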
This is easy as pie! Execute the following command on your Pi to snap a screenshot:
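The command referred to is simply the bare scrot call, which saves a full-screen capture to the current directory (usually /home/pi):

scrot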
That is all. It is that easy.
The following approach is useful when you need to snap a screenshot without the menu visible. By delaying the capture for a few seconds, you have time to close the menu before Scrot takes the picture.
To capture in this manner, send the following command to postpone the operation for five seconds.
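The delayed-capture command isn't reproduced above; with Scrot, the -d flag sets the delay in seconds:

scrot -d 5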
Other Scrot settings are as follows:
You might need to give the images a unique name and directory on occasion. Add the correct root directory, followed by the individual title and filename extension, exactly after scrot.
For instance, if you prefer to assign the title raspberryexpert to it and store it in the downloads directory, do the following command:
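The exact command isn't shown above; assuming the Downloads folder and the file name mentioned, it would look like this:

scrot /home/pi/Downloads/raspberryexpert.png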
Remember that the file name should always end with the .png extension.
If the capture command isn't already mapped as a hotkey, you'll have to map it by altering your Pi's config file, and it'll come in handy.
It would be best if you defined a hotkey inside the lxde-pi-rc.xml script to use it. To proceed, use this syntax to open the script.
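The command to open the file isn't shown above; on Raspberry Pi OS with the LXDE desktop, the file usually lives in the pi user's config folder (adjust the path if yours differs):

nano /home/pi/.config/openbox/lxde-pi-rc.xml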
We'll briefly demonstrate how to add the snapshot hotkey to the XML script. It would be best to locate the <keyboard> section and put the following lines directly below it.
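The lines themselves aren't reproduced above; a typical Openbox keybinding that maps the Print Screen key to scrot looks like this (the exact snippet is an assumption based on standard Openbox syntax):

<keybind key="Print">
    <action name="Execute">
        <command>scrot</command>
    </action>
</keybind>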
We will map the scrot function to the snapshot hotkeys on the keyboard by typing the above lines.
Save the script by hitting CTRL X, Yes, and afterward ENTER key when you've successfully added those lines.
Enter the command below to identify the new changes made.
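The reload command isn't printed above; Openbox can pick up the new keybinding without a reboot:

openbox --reconfigure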
You may discover that taking snapshots on the raspberry is impractical in some situations. You'll need to use SSH to take the image here.
When dealing with Ssh, you must first activate it, as is customary. You may get more information about this in our previous tutorials.
Log in with the command below after you have enabled SSH:
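The login command isn't shown above; replace the address below with your Pi's actual IP:

ssh pi@192.168.1.10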
Now use the command below to snap an image.
If you've previously installed Scrot on the Pi, skip the install line.
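The two lines aren't reproduced above; assuming Scrot is used on the Pi over the SSH session, they would be:

sudo apt install scrot
scrot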
Using the command below, you can snap as many snapshots as you like using varying names and afterward transferring them over to your desktop:
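The transfer command isn't printed above; from your own computer, scp can pull the captures off the Pi (the IP address and paths here are placeholders):

scp pi@192.168.1.10:/home/pi/*.png ~/Desktop/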
Remember to change the syntax to reflect the correct username and Ip.
You can snap a screenshot and save it immediately to your Linux PC. However, if you regularly have to take snapshots, entering the passcode each time you access the RPi via SSH becomes a tedious chore, so you can configure passwordless SSH on the Raspberry Pi using a public/private key pair.
To proceed, use the following command to install maim on raspberry pi.
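The install command isn't shown above; maim is in the standard repositories, so on the Pi run:

sudo apt install maim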
Return to your computer and use the command below to take a snapshot.
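The remote capture command isn't reproduced above; a common pattern is to run maim over SSH with the Pi's display selected and redirect its stdout into a file on your computer (the IP address is a placeholder, and the DISPLAY=:0 assumption holds for the default desktop session):

ssh pi@192.168.1.10 "DISPLAY=:0 maim" > screenshot.png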
We're utilizing the maim instead of other approaches since it's a more elegant method. It sends the image to stdout, allowing us to save it to our laptop via a simple URL redirect.
Raspi2png is a screenshot software that you may use to take a screenshot. Use the code below for downloading and installing the scripts.
After that, please place it in the /usr/local/bin directory.
Enter the following command to capture a screenshot.
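The capture command isn't printed above; assuming raspi2png's default options, it would look something like this (the --pngname flag and the placeholder folder are my assumptions, so check raspi2png's help output):

raspi2png --pngname /home/pi/<directory_name>/snapshot.png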
Ensure to use your actual folder name rather than <directory_name> used.
Because we are using a GUI, this solution is relatively simple and easy to implement.
First, use the following command to download the GNOME Snapshot tool.
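The install command isn't shown above; the tool is available from the standard repositories:

sudo apt install gnome-screenshot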
After it has been installed, go to the Raspberry navbar, select the menu, select Accessories, and finally, Screenshot.
This opens the GNOME Picture window, where you can find three different taking options, as seen below.
Choose the appropriate capture method and select Capture Image. If you pick the third choice, you will have to use a mouse to choose the location you wish to snip. If you use this option, you will not need a picture editor to resize the snapshot image. The first choice will record the entire screen, while the second will snip the active window.
GNOME gives you two alternatives once you capture a screen. The first is to save the snapshot, and the other is to copy it to the clipboard. So select it based on your requirements.
It all begins with a simple screenshot. You don't need any additional programs or software to capture a basic screenshot. At this moment, this feature is built into almost all Raspberry Pi versions and Windows, Mac PCs, and cellphones.
It is the process of capturing all or a part of the active screen and converting it to a picture or video.
While it may appear the same thing as a screenshot and a screen capture, they are not the same. A screenshot is simply a static picture. A desktop window capture is a process of collecting something on the screen, such as photographs or films.
Assume you wish to save a whole spreadsheet. It's becoming a little more complicated now.
Generally, you would be able to record whatever is on your window. Still, in case you need to snip anything beyond that, such as broad, horizontal spreadsheets or indefinitely lengthy website pages, you'll need to get a screen capture application designed for that purpose. Snagit includes Scrolling snapshot and Panorama Capture capabilities to snap all of the material you need in a single picture rather than stitching together many images.
This is a GIF file containing a moving image. An animated succession of picture frames is exhibited.
While gif Images aren't limited to screen material, they may be a proper (and underappreciated) method to express what's on your display.
Instead of capturing multiple pictures to demonstrate a process to a person, you may create a single animated Version of what is going on on your computer. These animations have small file sizes, and they play automatically, making them quick and simple to publish on websites and in documents.
This is making a video out of screen material to educate a program or sell a product by displaying functionality.
If you want to go further than a simple snapshot or even gif Animation, they are a good option. If you have ever looked for help with a software program, you have come across a screencast.
They display your screen and generally contain some commentary to make you understand what you are viewing.
Screencasts can range from polished movies used among professional educators to fast recordings showing a coworker how to file a ticket to Information technology. The concept is all the same.
Using screenshots to communicate removes the guesswork from graphical presentations and saves time.
The snapshot tool is ideal for capturing screenshots of graphical designs, new websites, or social media posts pending approval.
This is a must-have tool for anybody working in Information Technology, human resource, or supervisors training new workers. Choose screenshots over lengthy emails, or print screen pictures with instructions. A snapshot may save you a lot of time and improve team communication.
Furthermore, by preserving the snapshot in Screencast-O-Matic, your staff will be able to retrieve your directions at any time via a shareable link.
To avoid confusion, utilize screen captures to show. IT supervisors, for instance, can utilize images to teach their colleagues where to obtain computer upgrades. Take a snapshot of the system icon on your desktop, then use the Screen capture Editor to convert your screen capture into a graphical how-to instruction.
Any image editing tool may be used to improve pictures. You may use the highlighting tool to draw attention to the location of the icons.
Everybody has encountered computer difficulties. However, if you can't articulate exactly what has happened, diagnosing the problem afterward will be challenging. It's simple to capture a snapshot of the issue.
This is useful when talking with customer service representatives. Rather than discussing the issue, email them an image to help them see it. Publish your image immediately to Screencast and obtain a URL to share it. Sharing photos might help you get help quickly.
It can also help customer support personnel and their interactions with users. They may assist consumers more quickly by sending screenshots or photographs to assist them in resolving difficulties.
Snapshots are a simple method for social media administrators to categorize, emphasize, or record a specific moment. Pictures are an easy method to keep track of shifting stats or troublesome followers. It might be challenging to track down subscribers who breach social network regulations. Comments and users are frequently lost in ever-expanding discussions.
Take a snapshot of the problem to document it. Save this image as a file or store it in the screenshots folder of Screencast. Even if people remove their remarks, you will have proof of inappropriate activity.
This tutorial taught us how to take screenshots from a Raspberry Pi using different methods. We also went through how to remotely take snapshots on our Pi using SSH and discussed some of the benefits of using the screenshot tool. However, In the following tutorial, we will learn how to use a raspberry pi as a webserver.
Where To Buy? | ||||
---|---|---|---|---|
No. | Components | Distributor | Link To Buy | |
1 | Breadboard | Amazon | Buy Now | |
2 | DC Motor | Amazon | Buy Now | |
3 | Jumper Wires | Amazon | Buy Now | |
4 | Raspberry Pi 4 | Amazon | Buy Now |
Like the Amazon Echo, voice-activated gadgets are becoming increasingly popular, but you can also construct your own with a Raspberry, a cheap USB mic, and some appropriate software. Simply speaking to your Raspberry Pi will allow you to search YouTube, view websites, activate applications, and even answer inquiries.
Because the Raspberry Pi lacks a soundcard or audio port, this project requires a USB microphone or a camera with a built-in microphone. If your mic only has an audio jack, look for an affordable USB soundcard that connects to a USB port on one side and has a headphone and mic output on the other.
For the Raspberry Pi, there are various speech recognition programs. We're utilizing Steve Hickson's Pi AUI Toolkit for this project since it is powerful and straightforward to set up and operate. You may install a variety of programs using the Pi AUI Suite. The first question is whether or not the dependencies should be installed. These are the files that the Raspberry Pi requires to work with voice commands, so pick Yes, then press Enter to agree.
Following that, you'll be asked if you wish to download the PlayVideo software, which allows you to open and play video content using voice commands. If you select Y, you'll be prompted to enter the location of your media files, such as /home/pi/Videos. It's worth noting the upper-case letters are crucial in this scenario. The application will tell you if the route is incorrect.
Next, you'll be asked if you wish to download the Downloader application, which explores the internet for files and downloads them for you automatically. If you select Yes, you will be prompted to enter an address, port, password, and username. If you're not sure, press Return to choose the default settings in each scenario for now.
Install the Google Texts to Speech Service if you require your raspberry pi to read the contents of the text files. Since it communicates to Google servers to translate text into speech, the Raspberry Pi must be hooked up to the internet to utilize this service.
You'll require a Google user account to install this. The installation requests your username; press Return after completing this step. The Google password is then requested; type it in and press Return.
You may also use the installer to download Google Voice Commands. This makes use of Google's speech-to-text technology. To proceed, you must enter your Google login and password once again.
Regardless of whether you choose the Google-specific software or not, the program will ask if you wish to download the YouTube scripts. These scripts allow you to say something like "play pi tutorial," and an appropriate video clip will be played. You can type a new welcome phrase and press Return, and you can also enable the silent flag to prevent the Raspberry Pi from responding verbally.
Lastly, the software installs the Voice command, which includes some of the more helpful scripts, such as the ability to deploy your internet browser by simply saying "internet."
YouTube: When you say "YouTube" followed by a search term, the first related YouTube clip is played, much like Google's "I'm feeling lucky" feature. Say "YouTube" followed by the title of the video you want to watch, for example, "YouTube fluffy kittens."
Internet: Your internet browser is launched when you use the word "internet." Midori is the internet browser for Rpi by default, but you may alter that.
Download: When you say "download," followed by a search query, the Pirate Bay webpage searches for the files in demand. For instance, you can say "Download Into the badlands" to get the most current edition of the movie.
Play: This phrase utilizes the in-built media player to open an audio or video file. For instance, "Play football.mp4" would play the file "football.mp4" from the media directory you chose during setup, like /home/pi/movies.
Show me: When you say "show me," a directory of your choice appears. The command defaults to not going to a valid root directory, so you'll need to modify your configuration so that it points to one. For instance, show me==/Documents.
You'll be asked if you want the Voice command to set things up automatically. If an issue occurs at this point, run the following command to download and install the required software.
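The fallback install commands aren't reproduced above; assuming Steve Hickson's PiAUISuite repository on GitHub, a manual install would look roughly like this (the repository path and script name are assumptions, so check the project's README):

git clone https://github.com/StevenHickson/PiAUISuite.git
cd PiAUISuite/Install/
./InstallAUISuite.sh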
After installing the Voice command application, you may want to perform a few basic adjustments to the setup to fine-tune it. Execute the following command from your Raspberry Pi's Terminal or via SSH.
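The command isn't printed above; assuming the standard voicecommand tool from the suite, its setup mode is started with the -s flag:

voicecommand -s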
Following that, you'll be asked several yes/no questions. The first question is whether you wish to enable the continuous flag permanently. The Voice command application, in clear English, asks if you would want to listen to your voice commands constantly every time you launch it.
For the time being, choose Yes. After that, you'll be asked if you wish the Voice command application to set the verify flag permanently. If you select Y, the application will expect you to pronounce your keyword (the default setting is "Pi") before responding to requests.
If you like the RPi to monitor continually but not act on all words you say, this is a good option.
The next step asks if you wish to enable the ignore flag permanently. If Voice command receives a command that isn't expressly listed in the config file, it will try to find and launch a matching program from your installed apps. For example, if you say "leafpad" (a simple text editor), Voice command will look for it and start it even though it isn't configured.
This is a functionality that we would not recommend anyone to enable. Because you're using Voice command at the SuperUser level, there's a significant danger that you'll accidentally issue a command to the Raspberry Pi that will harm your files. If you wish to set up other programs to function with the Voice command, you can update the configuration file for each scenario.
The voice command afterward asks if you want to permanently enable the silence flag so that it doesn't respond verbally whenever you talk. As you see fit, select yes or no. After that, you'll be prompted to adjust the default voice recognition timeframe. If Pi is having problems hearing your commands, you should modify this.
If you select Yes, you'll be prompted to enter a number; this is the number of seconds that the Raspberry Pi will listen for a voice command; the default for RPI is 3. The application then allows you to customize your text-to-speech choices. Before you do this, make sure your volume is turned up. The application attempts to speak something and then asks if you heard it.
When the system receives your keyword, it responds with "Yes, sir?" as the default response. To modify this, select Yes in the following prompt, then enter your chosen response, for example, "Yes, ma'am?" Once you're finished, hit the enter key. The program will replay the assigned response to check that you are satisfied with the outcome.
The process is the same for changing the response to an unrecognized command. "Received the wrong command" is the default response, but you can change it to something friendlier by typing Yes and then your desired response, such as "The command is not known."
You now have the option of configuring the voice recognition parameters. This will check to see if you have a suitable mic. The Pi will then ask you if you want it to test your sound threshold using the Voice command.
Make sure there is no background noise, then type Yes and press Enter. It then asks you to say a command to ensure that it is using the correct audio device. Type Yes to have the application automatically select the appropriate audio threshold for your RPi.
Lastly, the Raspberry Pi will ask if you wish to modify the default voice command term ("Pi"). After that, type Y and your new keyword, then press enter when you're finished.
After that, you'll be requested to say your keyword to acclimate the RPi to your voice. If everything looks good, press Y to finish the setup. Start with the fundamental commands outlined above when using the Voice command software.
Once you've mastered these commands, use the following line of code to exit the application and, if desired, change your config file.
Voice recognition on the Raspberry Pi is still a work in progress, so the program may not recognize everything you say.
Stay near the mic and talk slowly and clearly to maximize your chances of being heard by the program if you still have difficulties understanding. Launch the terminal or log in through SSH to your Raspberry Pi and type the following command to access your audio settings to change your audio preferences.
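On Raspbian, the standard ALSA mixer is a reasonable place to do this:

alsamixer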
Press F4 to switch to the capture (audio input) view and F6 to select your sound card. Choose your input device (the mic) with the arrow keys, then press Enter. To change the mic's volume, push it up to maximum (100) using the up-arrow key, and press Esc to exit when you're done.
If your device isn't detected at all, it may require more current than a USB port on the Raspberry Pi can supply. A powered USB hub is the ideal solution for this.
If you have difficulty connecting after installing the Download application, please ensure that connection to The Pirate Bay site is not limited.
To download the files, you'll also require a BitTorrent client for your RPi, such as Transmission. To install this, launch your terminal or access your RPi through SSH and type the following command:
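A typical install command, assuming the transmission-gtk package name used on Raspbian (it may differ on other releases):

sudo apt install transmission-gtk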
The Transmission homepage has instructions about getting started and utilizing the application. You should always download files that have permission from the copyright holder.
Please remember that whatever you speak, and any text documents you provide, are transferred to Google's servers for translation if you use the Google Text-to-Speech or Voice Command services.
Google maintains that none of this information is kept on its servers. Even so, any data sent over the internet could potentially be intercepted by a skilled third party. Google, however, encrypts your connection to minimize the risk of this happening.
If you like this voice command tool, you might want the program to launch automatically every time the Rpi is powered on. If this is the case, launch the terminal from your RPi or access it via SSH and execute the command below:
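Assuming the boot script in question is /etc/rc.local (the file that ends with the line exit 0), it can be opened for editing with nano:

sudo nano /etc/rc.local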
The above command opens the script that controls which programs run whenever your Raspberry Pi is booted. By default, this script is doing nothing. Type the following line of code directly above the one reading exit 0:
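A sketch of the line to add, assuming the voicecommand tool's continuous flag is -c; the trailing ampersand runs it in the background so the boot process can continue:

voicecommand -c &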
To save the changes, press Ctrl+X, type Y to confirm, and press Enter. At this point, you can restart the Raspberry Pi to ensure that everything is working.
Launch your Rpi terminal, type the command below, and press enter to see a list of active processes.
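One common way to list running processes and confirm that voicecommand started at boot:

ps aux | grep voicecommand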
Noise from air conditioners and heaters can spoil your audio and make it impossible for the program to understand what you're saying; on a recording it comes through clearly and is very distracting. Unless the whole setup can be made more acoustically friendly, the only alternative is to turn these systems off while you are speaking.
Computer hardware cooling fans are also sources of mechanical noise. These can be disabled manually and for a limited period. Besides that, try isolating the disturbance in another space or utilizing an isolation box as an alternative.
We learned how to configure our Raspberry Pi for voice control and looked at a few basic commands used to control it, along with the software involved. In the following tutorial, we will learn how to tweet on the Raspberry Pi.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
| --- | --- | --- | --- |
| 1 | Breadboard | Amazon | Buy Now |
| 2 | Jumper Wires | Amazon | Buy Now |
| 3 | Raspberry Pi 4 | Amazon | Buy Now |
Connect button pushes to function calls using the Python gpiozero package, and use the Python dictionary data structure.
For this project, you'll need some audio samples. Raspbian ships with many audio files, but playing them with Python can be challenging, so you'll first convert them to a format that Python can use more easily.
Please make a new folder named gpio-music-box, because all of your project files will be saved in this location.
Create a new folder named samples using the same technique as before in your new folder.
In /usr/share/sonic-pi/samples, you'll find a bunch of sample sounds. These will be copied into your samples folder in the next step.
Open the command window by selecting the icon in the upper left corner of your screen.
To transfer all items from one folder to another, execute the following lines:
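A sketch of the copy command, assuming the Sonic Pi samples live in /usr/share/sonic-pi/samples and your project folder is /home/pi/gpio-music-box (adjust both paths to match your setup):

cp /usr/share/sonic-pi/samples/* /home/pi/gpio-music-box/samples/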
You should now see all the .flac audio files inside the samples folder.
To use Python to play sound files, you must first convert them from .flac to .wav format.
Go to the sample folder inside a terminal.
FFmpeg is a program that can quickly transcode audio and video files on the Raspberry Pi. It comes preloaded on the most recent Raspbian releases.
You can use a single FFmpeg command to convert music or movie files from one format to another.
To transform an audio (.wav) to a mp3 format (.mp3), for example, type:
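For example, assuming a file called sound.wav in the current folder (FFmpeg works out the output format from the file extension):

ffmpeg -i sound.wav sound.mp3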
A batch operation is a process in which a large quantity of material or data is handled in one run rather than item by item. The term comes from manufacturing, where batch processing is preferred when producing modest amounts of goods and is very common in pharmaceutical and specialty chemical production because it offers good traceability and flexibility. In computing, the idea is the same: one set of commands is applied to many files at once.
It's simple to rename a file with bash. For example, you may use the mv command.
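For instance, renaming a single (hypothetical) file with mv looks like this:

mv oldname.txt newname.txt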
But what if you had a thousand files to rename?
When you run a script or a series of commands on several files, this is known as a batch operation.
The first step is to notify bash that you wish to work with all the files inside the folder.
The dollar symbol indicates that you're referring to the variable f. Finally, inform bash that you're finished by typing done.
If you press enter after each statement, bash will wait until you input done before running the loop; therefore, the command would appear as shown below:
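A sketch of such a loop, using echo as a harmless placeholder command that simply prints each filename:

for f in *
do
echo $f
done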
You might also use a semi-colon to separate the instructions rather than pressing Return after every line.
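The same loop written on a single line with semicolons:

for f in *; do echo $f; done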
Working with and analyzing strings is referred to as string manipulation. It covers a range of operations that alter and process strings in order to use or change the data they contain.
That final command was meaningless, but batch operations can accomplish a lot more. You can use the following command to rename each file.
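A sketch of the batch rename, assuming the files in question end in .txt:

for f in *.txt; do mv "$f" "${f%.txt}.md"; done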
As a result, the ${f%.txt}.md syntax strips the .txt ending from each filename and substitutes .md. Use the hash (#) operator instead of the % sign to remove a string from the beginning of the name rather than the end.
Write the following lines in your terminal. All .flac files will be converted to .wav files, and the old ones will be deleted.
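Assuming you are inside the samples folder, the conversion loop might look like this; each .flac file is converted by FFmpeg and the original is then deleted:

for f in *.flac; do ffmpeg -i "$f" "${f%.flac}.wav"; rm "$f"; done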
Depending on the Raspberry Pi model you're using, this might take a couple of minutes.
All of the new .wav files will then be visible inside the samples folder.
After that, you'll begin writing Python code. You can do this with any code editor or IDE, but Mu is often an excellent option.
To play audio files, import and initialize the pygame library.
This file should be saved in the gpio-music-box directory.
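A minimal sketch of the start of the script, using the pygame package that ships with Raspbian:

import pygame

# initialize the mixer so sounds can be played
pygame.mixer.init()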
Select four audio files to utilize in your composition, such as:
Then make an object that references one of the audio files, and give each object a meaningful name. Consider the following case:
For the following three sounds, create labelled objects.
Because all of your audio files are in the samples folder, the path will be as follows:
Each audio object should be given a name, for example, cymbal:
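For example, assuming one of the Sonic Pi cymbal samples was copied into your samples folder (substitute whichever file you actually chose):

cymbal = pygame.mixer.Sound("/home/pi/gpio-music-box/samples/drum_cymbal_hard.wav")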
This is how your program should appear:
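Assuming four Sonic Pi samples were chosen (the filenames below are placeholders; use the ones actually present in your samples folder), the script so far might look like this:

import pygame

pygame.mixer.init()

drum = pygame.mixer.Sound("/home/pi/gpio-music-box/samples/drum_tom_mid_hard.wav")
cymbal = pygame.mixer.Sound("/home/pi/gpio-music-box/samples/drum_cymbal_hard.wav")
snare = pygame.mixer.Sound("/home/pi/gpio-music-box/samples/drum_snare_hard.wav")
bell = pygame.mixer.Sound("/home/pi/gpio-music-box/samples/elec_bell.wav")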
Please make a backup of your code and execute it. Then use the .play() function inside the shell from the code editor to play the audio.
Ensure that the speaker is functional and the volume is cranked up if you don't hear anything.
Four pushbuttons will be required, each connected to a different GPIO pin.
There are 26 usable GPIO pins on a Raspberry Pi. You may use them to send on/off signals to, or read them from, electrical components such as LEDs, actuators, and switches.
The GPIO pins are laid out as below.
There are also pins for 3.3 V, 5 V, and ground connections, and each pin has a number.
Another illustration of the pin placement may be seen here. It also displays some of the unique pins that are available as options.
A figure with a quick explanation is shown below.
One of the basic input components is a button.
Buttons come in various shapes and sizes, with two or four legs, for example. Flying wires are commonly used to link the two-leg variants to a control system, while four-legged buttons are usually mounted on a breadboard.
The illustrations below illustrate how to connect a Raspberry Pi to a button. Pin 17 is indeed the input in both scenarios.
The breadboard's negative rail may be connected to a single GND, allowing all pushbuttons to share the same grounded rail.
On the breadboard, attach the four buttons.
Connect each pushbutton to a GPIO pin with a specific number. You are free to select whatever pin you prefer, but you must recall its number.
Connect one Ground pin to the breadboard's neutral rail. Then connect one of each button's legs to this rail. Finally, connect the remaining buttons' legs to separate GPIO pins.
Here's a wiring schematic that may be of use. The free legs of the pushbuttons are connected to GPIO 4, GPIO 17, GPIO 27, and GPIO 10 in this case.
A single pushbutton has been connected to pin 17 in the figure below.
The button may be used to invoke methods that don't need any arguments:
First, use Python language and the gpiozero library to configure the button.
The next step is to design a function with no parameters. This straightforward method prints the phrase Hello in the terminal.
Finally, set up a trigger that calls the function whenever the button is pressed.
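Putting those three steps together, a minimal sketch might look like this, assuming the button is wired to GPIO 17 as in the earlier illustration:

from gpiozero import Button
from signal import pause

# set up the button on GPIO 17
button = Button(17)

# a simple function with no parameters
def hello():
    print("Hello")

# call hello() every time the button is pressed
button.when_pressed = hello

# keep the script running so it can react to presses
pause()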
You will then see Hello displayed in the terminal each time the button is pressed.
You may make your function as complicated as you want it to be, and you can also call methods which are part of modules. In this case, hitting the button causes an LED in pin 4 to turn on.
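As a rough sketch of that idea, assuming an LED (with a suitable resistor) on GPIO 4 and a button on GPIO 17; note that in the music box wiring above GPIO 4 is used for a button, so treat this purely as an illustration:

from gpiozero import Button, LED
from signal import pause

button = Button(17)
led = LED(4)

# led.on is a method provided by the gpiozero module
button.when_pressed = led.on

pause()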
The application should execute a function like drum.play() whenever the button is pushed.
However, brackets are not used when an action (like a button push) is used to invoke a function. The software must call that method whenever the button is pushed, not immediately, so all you have to do is pass drum.play without the brackets.
First, set up one of the buttons. Be sure to replace the pin numbers in the example with the ones you've actually used.
Add the following line of code at the end of your script to play the audio whenever the button is pushed:
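A sketch of those two lines, assuming the drum sound object created earlier and a button wired to GPIO 4 (swap in your own pin number):

from gpiozero import Button

drum_button = Button(4)

# note: drum.play, not drum.play() - the function is called on each press
drum_button.when_pressed = drum.play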
Push the button after running the software. If you don't hear the sound, double-check your button's connections.
Add code to the three remaining buttons to have them play their audio.
For the second button, you may use the code below.
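For instance, assuming the second button sits on GPIO 17 and should trigger the cymbal sound object, the pattern is identical:

cymbal_button = Button(17)
cymbal_button.when_pressed = cymbal.play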
Good code is clear, intelligible, tested, never overly convoluted, and does the task at hand.
You should be able to run your code with no issues. However, once you have a working prototype, it's typically a good practice to tidy up your code.
The subsequent steps are entirely optional. If you're satisfied with your code, leave it alone; otherwise, follow the instructions below to make it a little cleaner.
Instead of creating eight individual objects, you may keep your pushbutton objects and audio in a dictionary. Before refactoring, though, it is worth running through the seven characteristics of good code described below.
If you're writing one-time discard code that no one, including yourself, will have to see in the future, you may write in any way you like. However, the majority of useful software that has been produced has to be updated regularly.
The next important feature of excellent programming is scalability, or the capacity to expand with the demands of your organization. Scalability is primarily concerned with the code's efficiency. Scalable code doesn't always require frequent design changes to maintain performance and resolve various workloads.
Last-minute adjustments are unavoidable while developing software. It will be difficult to send new changes through if the code can not be rapidly and automatically tested.
Every piece of software that is built has a certain goal in mind. For persons familiar with the functionality, a code that adheres to its requirements is simple to understand. As a result, one important characteristic of excellent code is that it meets the functional requirements.
Mistakes and flaws are an inherent element of software development. You can't anticipate every conceivable manufacturing case, no matter how cautious you were during the design process. You should, however, shield your application against the negative effects of such situations.
Extensibility is another important feature of excellent coding. Reusable, sustainable code can be extended: you write code, double-check that it works as expected, and modules that require the feature can then add extensions and related improvements to it.
For every software application, reusable coding is essential and highly beneficial. It aids in the simplification of your source code and avoids redundancy. Reusable codes save time and are cheaper in the long term.
Learn how to create simple dictionaries and iterate over them by following the methods below.
First, establish a dictionary where the Buttons serve as keys and audio as values.
Whenever the pushbutton is pressed, you can loop through the dictionary to instruct the computer to play the audio:
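A sketch of that refactor is shown below. It assumes the four sound objects created earlier (drum, cymbal, snare, and bell are placeholder names) and the pins 4, 17, 27, and 10 from the wiring diagram:

from gpiozero import Button
from signal import pause

# map each Button to the sound it should trigger
button_sounds = {
    Button(4): drum,
    Button(17): cymbal,
    Button(27): snare,
    Button(10): bell,
}

# attach each sound's play method to its button
for button, sound in button_sounds.items():
    button.when_pressed = sound.play

pause()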
In this lesson, we learned how to make a "music box" with buttons that play sounds depending on which button is pushed. We also learned how to interface our Raspberry Pi with buttons via the GPIO pins and wrote a Python program to control what happens when the buttons are pressed. In the following tutorial, we will learn how to control the Raspberry Pi by voice.
In order to send or receive data, you have to configure one module as the master and the other as the slave. If both modules act as masters, or both act as slaves, no data will be transmitted or received. The hardware of the Bluetooth module offers a large number of features. For example, it is highly sensitive, with a receive sensitivity of about -80 dBm, so it can pick up a Bluetooth signal even from far away. If you wish to transmit data through it, it also has enough power to reach a wide range: its transmit power is around +4 dBm. The module operates at low voltage, which is why its power consumption is very low.

The hardware comes with an integrated antenna, and its edge pins make it very easy to plug wires in and out; if you are using it within a circuit, connecting a cable to the module is also very easy. That was a short introduction to the Bluetooth module and its features. Now let's move on to the main theme of this project, which is interfacing an Arduino with the HC-05.
Note:
The HC-05 Bluetooth module has a total of 6 pins. A simple HC-05 module is shown in the image below, where you can also see its pin configuration. The pin configuration and the purpose of each pin are listed below:
int Input = A0;        // analog pin connected to the smoke sensor
int SensorVal = 0;     // latest reading from the sensor

void setup()
{
  Serial.begin(9600);
  pinMode(Input, INPUT);
  Serial.println("Interfacing of Smoke Sensor with Arduino");
  Serial.println("Design by www.TheEngineeringProjects.com");
  Serial.println();
}

void loop()
{
  SensorVal = analogRead(Input);   // read the smoke sensor
  Serial.println(SensorVal);       // print the value to the serial monitor
  delay(500);
}
int Input = A0;        // analog pin connected to the smoke sensor
int SensorVal = 0;     // latest reading from the sensor
int Check = 0;         // flag so each state change is reported only once

void setup()
{
  Serial.begin(9600);
  pinMode(Input, INPUT);
  Serial.println("Interfacing of Smoke Sensor with Arduino");
  Serial.println("Design by www.TheEngineeringProjects.com");
  Serial.println();
}

void loop()
{
  SensorVal = analogRead(Input);            // read the smoke sensor

  if((SensorVal > 500) && (Check == 1))     // reading rose above the threshold
  {
    Serial.println("Smoke Detected . . .");
    Check = 0;
  }
  if((SensorVal < 500) && (Check == 0))     // reading fell back below the threshold
  {
    Serial.println("All Clear . . .");
    Check = 1;
  }
  //Serial.println(SensorVal);
  delay(500);
}
int relay1 = 6;   // relay 1 control pin
int relay2 = 7;   // relay 2 control pin

void setup()
{
  pinMode(relay1, OUTPUT);
  pinMode(relay2, OUTPUT);
}

void loop()
{
  digitalWrite(relay1, LOW);    // drive relay 1 pin LOW for one second
  delay(1000);
  digitalWrite(relay1, HIGH);   // then HIGH for one second
  delay(1000);
  digitalWrite(relay2, LOW);    // repeat for relay 2
  delay(1000);
  digitalWrite(relay2, HIGH);
  delay(1000);
}