IoT based Web Controlled Home Automation using Raspberry Pi 4

Greetings, and welcome to today's tutorial. In the last tutorial, we learned how to construct a people-counting system using a Raspberry Pi, smart subtraction, and blob tracking, and we displayed the total number of people entering and exiting the building. Feature computation and HOG theory were also discussed, and the tests proved that a Raspberry Pi-based device can effectively function as a people-counting station. One of the many benefits of the Pi 4 is its internet connectivity, which, together with its low price and ease of use, makes it especially useful for home automation projects. Today, we're going to see if we can use the buttons on a web page to control our AC appliances. With this Internet of Things (IoT) based home automation, you can command your home gadgets from the comfort of your couch. The user can access this web server from any gadget capable of loading HTML pages, such as a smartphone, tablet, or computer.

Where To Buy?
No. | Components | Distributor | Link To Buy
1 | Breadboard | Amazon | Buy Now
2 | Diodes | Amazon | Buy Now
3 | Jumper Wires | Amazon | Buy Now
4 | LEDs | Amazon | Buy Now
5 | Resistor | Amazon | Buy Now
6 | Transistor | Amazon | Buy Now
7 | Raspberry Pi 4 | Amazon | Buy Now

Components

The needs of this project can be broken down into two broad classes: hardware and software.

Hardware Requirement

  • Raspberry Pi 4

  • Memory card 8 or 16GB running Raspbian Jessie

  • 5V relays

  • 2N2222 transistors

  • Diodes

  • Jumper wires

  • Connection blocks

  • LEDs for testing

  • AC lamp for testing

  • Breadboard and jumper cables

  • 220 Ω or 100 Ω resistors

Software Requirement

We'll be using the Raspbian operating system, the WebIOPi framework, Notepad++ on your PC, and FileZilla to transfer files (particularly web app files) from your computer to the Raspberry Pi.

The Raspberry Pi Setup Process

As a good habit, I always update the Raspberry Pi before using it for the first time. In this phase of the project, we will upgrade the Pi and set up the WebIOPi framework, which will handle the web-to-Raspberry-Pi communication. The Python Flask framework offers a potentially more straightforward alternative, but getting your hands dirty and seeing how things actually work is what makes DIY appealing. Use the commands below to upgrade your Raspberry Pi, then restart it.

sudo apt-get update

sudo apt-get upgrade

sudo reboot

After this is finished, we can set up the WebIOPi framework. Using the command below, verify that you are in your home directory.

cd ~

To download the files from SourceForge, use wget:

wget http://sourceforge.net/projects/webiopi/files/WebIOPi-0.7.1.tar.gz

Then, once the download is complete, extract the archive and enter the resulting directory:

tar xvzf WebIOPi-0.7.1.tar.gz

cd WebIOPi-0.7.1/

Unfortunately, I could not locate a version of WebIOPi that is compatible with the Pi 4; thus, we have to download a patch before proceeding with the setup. Run the instructions below from within the WebIOPi directory to apply the patch.

wget https://raw.githubusercontent.com/doublebind/raspi/master/webiopi-pi2bplus.patch

patch -p1 -i webiopi-pi2bplus.patch

Once we have those things, we can begin the WebIOPi installation by running the setup script:

sudo ./setup.sh

Just click "Yes" when prompted to install more components during setup. Upon completion, restart your Pi.

sudo reboot

Verify the WebIOPi Setup

Before diving into the schematics and programs, we should power on the Raspberry Pi and ensure our WebIOPi installation is functioning as expected. Execute the command below;

sudo webiopi -d -c /etc/webiopi/config

After running the above command on the Pi, open a web browser on a computer attached to the same network and navigate to http://raspberrypi.mshome.net:8000 (or http://<the Pi's IP address>:8000). When logging in, you'll be asked for a username and password.

Username: webiopi

Password: raspberry

You may permanently disable this login if you no longer need it, but it helps keep unauthorized users from taking control of your home's appliances and Internet of Things (IoT) components. After you've logged in, go to the GPIO header link.

Make GPIO 17 an output; we'll use it to power an LED in this test.

Following this, attach the led to the Pi 4 as depicted in the schematics.

When you're ready to activate or deactivate the LED, return to the web page and select the pin 11 button (physical pin 11 is GPIO 17). This shows we can use WebIOPi to manage the Raspberry Pi's GPIO pins. If the test is successful, we can return to the console and exit the program by pressing CTRL + C. Please let me know in the comments if this arrangement gives you any problems. Once this trial run is finished, we can begin the actual project.
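Incidentally, the button page is just a front end over WebIOPi's REST API, so you can drive the same pin from any HTTP client. Below is a minimal sketch using Python's requests library and the default credentials; the endpoint paths follow WebIOPi's documented REST layout, so verify them against your own installation.

import requests

PI = "http://raspberrypi.mshome.net:8000"  # or your Pi's IP address
AUTH = ("webiopi", "raspberry")            # default WebIOPi credentials

# Configure GPIO 17 as an output, then toggle the LED on and off.
requests.post(PI + "/GPIO/17/function/out", auth=AUTH)
requests.post(PI + "/GPIO/17/value/1", auth=AUTH)  # LED on
requests.post(PI + "/GPIO/17/value/0", auth=AUTH)  # LED off

# Read the pin's current state back ("0" or "1").
print(requests.get(PI + "/GPIO/17/value", auth=AUTH).text)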

Developing a Web-Based Home-Control application for the Raspberry Pi

In this section, we will alter the WebIOPi service's standard setup and inject our own code to be executed on demand. The first tool to install on our computer is FileZilla or another FTP/SCP copy program; you'll agree that writing code on the Pi through the terminal is a stressful experience, so having FileZilla or a similar SCP program on hand will be helpful. Before we begin writing the HTML, CSS, and JavaScript for this IoT home automation web app and transferring the files to the RPi, let's make a project directory in which all our web scripts will be stored.

First, make sure you're in your home directory; next, create the project folder; finally, enter the newly created folder and make an html folder inside it:

cd ~

mkdir webapp

cd webapp

mkdir html

Make subfolders inside the html folder for scripts, styles, and images:

mkdir html/styles

mkdir html/img

mkdir html/scripts

Now that we have our files prepared, we can start coding on the computer and transfer our work to the Pi using Filezilla.

The JavaScript Code

Writing the JavaScript will be our first order of business: an easy-to-use script for interacting with the WebIOPi server. The web app has four buttons, so we will control four GPIO pins for this project, although the demonstration wires only two of them to relays.


webiopi().ready(function() {
    webiopi().setFunction(17, "out");
    webiopi().setFunction(18, "out");
    webiopi().setFunction(22, "out");
    webiopi().setFunction(23, "out");

    var content, button;
    content = $("#content");

    button = webiopi().createGPIOButton(17, "Relay 1");
    content.append(button);

    button = webiopi().createGPIOButton(18, "Relay 2");
    content.append(button);

    button = webiopi().createGPIOButton(22, "Relay 3");
    content.append(button);

    button = webiopi().createGPIOButton(23, "Relay 4");
    content.append(button);
});

The preceding code is executed once WebIOPi is ready. The key lines are explained below:

  • webiopi().ready(function()

This tells the system to define the function and call it once WebIOPi is ready.

  • webiopi().setFunction(23,"out")

This instructs the WebIOPi program to configure GPIO23 as an output. Four pins are set up here, but you may add more if necessary.

  • var content, button

With this line, we declare two variables, content and button.

  • content = $("#content")

The content variable refers to the element marked #content in our HTML and CSS; whatever the WebIOPi framework generates is attached to #content wherever it appears.

  • button = webiopi().createGPIOButton(17,"Relay 1")

WebIOPi can make several distinct types of buttons. This line instructs WebIOPi to generate a GPIO button that operates on GPIO pin 17 and is labelled "Relay 1". The other buttons follow the same pattern.

  • content.append(button)

This appends the newly created button to the content element of the HTML document. New buttons can be made that are identical to this one in every respect, which is especially helpful while writing the CSS.

If you made your JS file the same way I did, save it and then move it with FileZilla to webapp/html/scripts. Now we can move on to developing the CSS.

The CSS Code:

CSS is what makes our Internet of Things (IoT) RPi 4 home automation website look good. So that the website will look like the one in the picture below, I built a custom style sheet called smarthome.css.

I don't want to paste the entire CSS script here, so I'll use a subset for the explanation. If you want to learn CSS, all you have to do is read the code. You can skip this and use our CSS code if you want to.

The first section of the script, displayed below, represents the web application's main stylesheet.

body {
    background-color: #ffffff;
    background-image: url('/img/smart.png');
    background-repeat: no-repeat;
    background-position: center;
    background-size: cover;
    font: bold 18px/25px Arial, sans-serif;
    color: LightGray;
}

The above code, which I hope needs little explanation, begins by setting the background colour to white (#ffffff), adds a background image from the img folder we created earlier, sets background-repeat to no-repeat so the picture doesn't tile, and tells the CSS to center the background. It then sets the background text's size, font, and colour.

After finishing the main content, we styled the buttons with CSS.

button {
    display: block;
    position: relative;
    margin: 10px;
    padding: 0 10px;
    text-align: center;
    text-decoration: none;
    width: 130px;
    height: 40px;
    font: bold 18px/25px Arial, sans-serif;
    color: black;
    text-shadow: 1px 1px 1px rgba(255, 255, 255, .22);
    -webkit-border-radius: 30px;
    -moz-border-radius: 30px;
    border-radius: 30px;
}

Everything else in the script is similarly written for readability and brevity. You can play with the rules and see what happens; this kind of learning is known as "learning by doing," I believe, and CSS's strength lies in its simplicity: its rules read like plain English. The button's text shadow and box shadow are two of the supplementary features found in the parts of the script not shown here. To top it all off, pressing the button triggers a subtle transition effect, making it look polished and lifelike. To guarantee consistent rendering in all browsers, these are defined independently for the WebKit, Firefox, and Opera engines.

The following snippet styles any range-slider inputs the WebIOPi framework may render:

input[type="range"] {

                                                display: block;

                                                width: 160px;

                                                height: 45px;

                        }

The last element we want is feedback on whether a button is pressed, so the button hues give a quick indication of state. To accomplish this, the following rules are added to the style sheet for each button:

#gpio17.LOW {
    background-color: Gray;
    color: Black;
}

#gpio17.HIGH {
    background-color: Red;
    color: LightGray;
}

The snippets above alter the button's colour depending on its state: the background is gray when the pin is inactive (LOW) and red when it is active (HIGH). Now that we have our CSS under control, let's save it as smarthome.css, upload it to the Pi's styles folder using FileZilla (or another SCP client of your choosing), and finish the remaining HTML code.

HTML Code

The HTML code ties the style sheet and JavaScript together.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">

<html>

<head>

        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

        <meta name="mobile-web-app-capable" content="yes">

        <meta name="viewport" content = "height = device-height, width = device-width, user-scalable = no" />

        <title>Smart Home</title>

        <script type="text/javascript" src="/webiopi.js"></script>

        <script type="text/javascript" src="/scripts/smarthome.js"></script>

        <link rel="stylesheet" type="text/CSS" href="/styles/smarthome.css">

        <link rel="shortcut icon" sizes="196x196" href="/img/smart.png" />

</head>

<body>

        <br/>
        <br/>
        <div id="content" align="center"></div>
        <br/>
        <br/>
        <br/>
        <p align="center">Push button; receive bacon</p>
        <br/>
        <br/>

</body>

</html>

The head tag contains several crucial elements.

<meta name="mobile-web-app-capable" content="yes"> 

The code line above makes it possible to add the web app to the mobile device's home screen when using Chrome or Safari. You can access this function using the Chrome menu. This makes it so the app may be quickly launched on any mobile device or desktop computer.

The following line of code provides a measure of responsiveness for the web app. Because of this, it can take up the entire display of any gadget on which it is run.

<meta name="viewport" content = "height = device-height, width = device-width, user-scalable = no" /> 

The web page's title is defined in the following line of code.

<title>Smart Home</title>

The following four lines of code all connect the Html file to multiple resources it requires to function as intended.

        <script type="text/javascript" src="/webiopi.js"></script>

        <script type="text/javascript" src="/scripts/smarthome.js"></script>

        <link rel="stylesheet" type="text/CSS" href="/styles/smarthome.css">

        <link rel="shortcut icon" sizes="196x196" href="/img/smart.png" />

The first line above directly connects to the WebIOPi framework JavaScript, which is stored in the server's root directory. This method must be invoked whenever WebIOPi is used.

The second line tells the HTML document where to find our smarthome.js script, and the third tells it where to get our style sheet. The last line sets an icon for the mobile desktop, which is useful if we use the website as an app or a favicon.

To ensure that our HTML code displays whatever is contained in the JavaScript file, we include break tags in the body portion of the code. The definition of our button's content was made previously in the JavaScript code, and its id="content" should bring that to mind.

<div id="content" align="center"></div>

Everybody is familiar with the routine by now: save the HTML file as index.html and transfer it to the Pi's html folder via FileZilla.

Modifications to the WebIOPi Server for Use in Automated Household Tasks

Before we can begin sketching out circuit diagrams and running tests on our web app, we need to make a few adjustments to the webiopi service's configuration file, instructing it to look for configuration information in our HTML folder rather than the default location.

Edit the configuration by executing the following commands as root:

sudo nano /etc/webiopi/config

Find the [HTTP] section of the configuration file and look for the commented line beginning with #doc-root, which sets the default directory for HTML and resources.

Uncomment it by removing the #, and if your folder is organized like mine, set doc-root to the location of your project's html folder:

doc-root = /home/pi/webapp/html

Lastly, save your work and exit. If you already have another server on the Pi using port 8000, you can change the port in the same section; if not, just save and close the file.
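While we're in the config file, it's worth knowing that WebIOPi can also run a server-side Python script registered under the [SCRIPTS] section (for example, myscript = /home/pi/webapp/script.py). The sketch below follows the structure described in the WebIOPi documentation; treat it as an optional illustration rather than part of this project's required code.

import webiopi

GPIO = webiopi.GPIO
PINS = (17, 18, 22, 23)

def setup():
    # Called once when WebIOPi starts: make our four relay pins outputs.
    for pin in PINS:
        GPIO.setFunction(pin, GPIO.OUT)

def destroy():
    # Called when WebIOPi stops: switch every relay off.
    for pin in PINS:
        GPIO.digitalWrite(pin, GPIO.LOW)

@webiopi.macro
def allOff():
    # Callable from the web app with webiopi().callMacro("allOff").
    for pin in PINS:
        GPIO.digitalWrite(pin, GPIO.LOW)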

It's worth noting that the WebIOPi service password can be changed using the command;

sudo webiopi-passwd

A new login name and password will be required. Getting rid of this entirely is possible, but safety comes first.

Finally, issue the following command to start the WebIOPi service.

sudo /etc/init.d/webiopi start

If you want to see how the server is doing, you can do so by;

sudo /etc/init.d/webiopi status

And, when needed, a way to halt its execution:

sudo /etc/init.d/webiopi stop

Setup WebIOPi to start automatically with;

sudo update-rc.d webiopi defaults

To do the opposite and prevent it from starting up automatically, use the following;

sudo update-rc.d webiopi remove

Schematic and Explanation of a Circuit

Now that we have everything set up, we can begin developing the schematics for our Web-controlled home appliance.

I could not procure relay modules, which in my experience make electronics projects simpler for do-it-yourselfers, so I'm going to draw the diagrams for regular, standalone 5V single relays.

Join the components as shown in the Fritzing diagram. It's important to remember that your relay's COM, NO (normally open), and NC (normally closed) contacts could be on opposite sides; please verify the pinout with a multimeter.
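Before wiring the web app to it, it's worth sanity-checking the relay stage on its own. Here is a minimal sketch that clicks the relay a few times, assuming the transistor's base is driven (through its base resistor) from GPIO 17, the web app's first channel:

import time
import RPi.GPIO as GPIO

RELAY_PIN = 17  # BCM numbering; change to match your wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT)

try:
    for _ in range(5):
        GPIO.output(RELAY_PIN, GPIO.HIGH)  # energise the coil via the transistor
        time.sleep(1)
        GPIO.output(RELAY_PIN, GPIO.LOW)   # release the armature
        time.sleep(1)
finally:
    GPIO.cleanup()

You should hear the relay click once per second while the loop runs.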

Relays Underlying Operating Principles

Relays can be found anywhere that electricity is being switched, from a simple traffic light controller to a high-voltage switchyard. Relays, in the broadest sense, are equivalent to any other switch. They can connect or disconnect a circuit and are frequently employed to activate or deactivate an electrical load. However, this is a comprehensive statement; there are many other relays, and each Relay behaves slightly differently depending on the task at hand; as the electromechanical Relay is one of the most widely used relays, we will devote more space to discussing it here. In spite of variations in design, all relays work according to the same fundamental concept, so let's dive into the nuts and bolts of relays and talk about how they function.

So, what exactly is Relay?

A relay is an electromechanical switch that may either establish or break an electrical connection. A relay is like a mechanical switch, except that it is activated and deactivated by an electronic signal rather than by physically flipping a lever. It comprises a flexible, movable mechanical part controlled electrically through an electromagnet. Once again, this operating principle applies exclusively to electromechanical relays.

A common and widely used relay consists of an electromagnet employed as a switch, though there are many kinds of relays, each with its own purpose. When a signal is received on one side of the device, it controls the switching activity on the other side, much like the dictionary definition of "relay". The device's primary function is to establish or sever contact with the aid of a signal, turning a load ON or OFF automatically and without human intervention. Its chief use is allowing a low-power signal to control a high-power circuit; typically, a direct-current (DC) signal controls the high-voltage circuit.

How the Relay is Built and Functions

The following diagram depicts the internal structure and design of a Relay.

A coil of copper wire is wound around a core, which is then placed inside a housing. A movable armature, supported by a spring or stand and carrying a metal contact at one end, is positioned over the core; when the coil is energized, it attracts this armature. In most cases, the armature forms the common (COM) connection point between the relay's internal contacts and the external wiring: the normally closed (NC) pin touches the common terminal while the coil is idle, and the normally open (NO) pin is not in contact. Whenever the coil is activated, the armature is pulled over to the normally open contact, and current can flow uninterruptedly through it. When the power is turned off, the armature returns to its starting position.

The picture below shows a schematic of the Relay's circuit in its most basic form.

Relay Teardown: An Inside Look

In the images below, you can see the main components of an electromechanical relay—an electromagnet, a flexible armature, contacts, a yoke, and a spring/frame/stand. They have been thoughtfully placed into a relay.

The workings of a Relay's mechanical components have been outlined below.

  1. Electromagnet

An electromagnet is crucial to the operation of a relay. Its core metal has no magnetic properties of its own but behaves as a magnet when an electrical current flows around it. It is common knowledge that a conductor acquires magnetic characteristics from the current flowing through it. Thus, a metal core wound with a conductive coil and powered by an adequate source can operate as a magnet and attract magnetic objects within its range.

  2. Movable Armature

A movable armature is just a piece of metal that can pivot or stand on its own. It facilitates making and breaking the connection with the contacts attached to it.

  3. Contacts

Internal conductors are the wires that run through a device and hook up to its terminals.

  4. Yoke

It's a tiny metal piece attached to a core that attracts and retains the armature whenever the coil is activated.

  5. Spring (optional)

While some relays can function without a spring, those that do have one attach it to the armature at one end to prevent any snagging or binding. One can use a metal "stand" in place of a spring.

Mechanism of Action of a Relay

Let's examine the differences between a relay's normally closed and normally open states. 

Relay's NORMALLY CLOSED state

If no current flows through the coil, there is no magnetic field, and the core does not act as a magnet; as a result, it cannot attract the movable armature. So the armature starts in its normally closed (NC) position.

Relay in NORMALLY OPENED state

When a high enough voltage is supplied to the coil, the core develops a strong magnetic field around itself and functions as a magnet. Whenever the movable armature comes within that field's influence, it is attracted and changes position: it now rests against the normally open (NO) pin, so any external circuit wired through NO begins to operate, while the NC circuit is broken.

It is important to connect the relay pins correctly so that the external circuit can do its job. When the coil is powered, the armature is drawn toward it, producing the switching action; when the power is cut, the coil loses its magnetism, and the armature returns to its original location. The animation provided below shows the relay in action.

Transistor functions in the circuit

There is nothing complicated about a transistor, yet there is a lot going on inside it. First, the easy part: a transistor is a small electronic component that can do two jobs. It can act as a switch, and it can act as an amplifier.

An amplifier is a device that takes in a little electric current and outputs a significantly larger electric current (called an output current). It can be thought of as a current booster. One of the earliest applications for transistors, this is particularly helpful in devices like hearing aids. A hearing aid contains a microscopic microphone that converts ambient sound into electrical signals. These are then amplified by a transistor and used to power a miniature loudspeaker, which reproduces the ambient noise at a much higher volume.

It is possible to use a transistor as a switch. A transistor is a device that allows for the passage of one electrical current to induce a much larger current to flow through the next part of the device. What this means is that a relatively small current can activate a much larger one. All computer chips function in this general way. As an illustration, a memory chip may have as many as a billion individually controllable transistors. Due to the fact that each transistor can exist in either of two states, it is capable of storing either a zero or a one. A chip's ability to hold billions of zeroes and ones, as well as almost as many regular numbers and letters, is made possible by its billions of transistors.
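To bring this back to our relay driver and put numbers on the "small current controls a big one" idea, here is a rough sizing of the base resistor for the 2N2222. The figures are assumptions for illustration (a 70 mA coil, a minimum gain of 100, a 3.3 V GPIO); check your relay's datasheet before trusting them:

# Rough base-resistor sizing for a 2N2222 switching a 5 V relay coil.
coil_current = 0.070   # A, assumed relay coil draw
hfe_min = 100          # assumed minimum current gain of the 2N2222
gpio_voltage = 3.3     # V, Raspberry Pi logic level
vbe = 0.7              # V, typical base-emitter drop

base_current = coil_current / hfe_min       # gain says ~0.7 mA would suffice
overdrive = 5                               # drive harder to saturate fully
r_base = (gpio_voltage - vbe) / (base_current * overdrive)
print("Base resistor <= %.0f ohms" % r_base)  # ~743 ohms; 680 is a common pick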

Diode functions in the circuit

Diodes come in a range of sizes, like the one shown in the image up top. They feature a cylindrical body, usually black, with a stripe at one end and leads protruding from both ends so that we may plug it into a circuit. The striped end is the cathode, and the opposite terminal is the anode.

A diode is an electrical component that restricts current flow in one direction.

To illustrate, picture a swing valve fitted in a water line. The water pressure inside the pipe forces the swing gate open, allowing the water to flow uninterrupted. If the flow reverses direction, however, the gate is forced shut and the water stops. As a result, there is only one direction in which water can flow.

A diode works much the same way in a circuit: it lets current pass in one direction only, which allows us to control where current flows.

We have now animated this process using electron flow, in which electrons move from negative to positive. However, traditional flow, positive to negative, is the norm in electronics engineering. It's usually best to start with the conventional current because it's more familiar to most people, but feel free to use either one; we'll assume you're aware of the difference.

It's important to remember that a light-emitting diode (LED) will only light up if the diode is connected to the circuit in the correct orientation, as in the simple LED circuit shown above. Current can travel through it in only one direction; accordingly, its conductive or insulating behaviour is determined by the orientation in which it is mounted.

For it to conduct electricity, you must join the striped end (the cathode) to the negative side and the other end (the anode) to the positive side. This condition, in which current can flow, is called forward bias. If we invert the diode, it becomes an insulator and stops the passage of electricity; the term for this is reverse bias.

Exactly how would a diode function?

You probably know that electricity is the transfer of electrons between atoms that are not bound. Because of its high number of unpaired electrons, copper is widely used for electrical wiring. Since rubber is an insulator—its electrons are kept very securely, so they cannot flow between atoms—it is used to wrap around the copper wires for our protection.

In a simplified form of a metal conducting atom, the nucleus is at the center, and the electrons are housed in a series of shells around it. It takes a specific amount of energy for an electron to be absorbed into each shell, and each shell has a max number of electrons it can hold. Those electrons that are furthest from the nucleus are the most energetic. Conductors have between one and three electrons in their outermost "valence" shell.

The nucleus acts as a magnet, keeping the electrons in place. However, there is yet another layer, the conduction band. If an electron gets here, it can leave its atom and travel to another. Because the valence shell and conduction band of a metal atom overlap, the electron can move quickly and easily between the two.

The insulator has a tightly packed outer layer. No free space for electrons to occupy. Because of the strong attraction between the nucleus and the electrons and the great distance between the nucleus and the conduction band, the electrons are trapped inside the nucleus and cannot leave. Because of this, electricity is unable to travel through it.

Of course, a semiconductor is yet another type of material; silicon is one example. Silicon behaves as an insulator because its outermost shell holds four electrons, one more than a typical conductor's one to three. However, with enough external energy, a few valence electrons gain enough momentum to hop across to the conduction band, where they can finally break free. Consequently, this substance can play the role of both an insulator and a conductor.

Due to the lack of free electrons in pure silicon, engineers must add a small number of materials (called "doping") to the silicon to alter its electrical properties.

This process gives rise to P-type and N-type doping, respectively. The diode itself is a combination of these doped materials.

Two leads connect the anode and cathode to thin plates inside the diode: P-type doped silicon on the anode side and N-type doped silicon on the cathode side. An insulating and protective resin coats the entire structure.

Consider the material to be pure silicon before it has been doped. Each silicon atom is surrounded by four others. Because silicon atoms need eight electrons to fill their valence shells but have only four available, each shares one electron with each of its neighbours. Covalent bonding describes this type of interaction.

In N-type doping, phosphorus is substituted for a number of silicon atoms in the crystal. Phosphorus has five electrons in its valence shell, so after the atoms share electrons to reach the magic number of eight, one electron is left over. This means there is an extra electron in the material, free to go wherever it wants.

In P-type doping, a substance like aluminium is introduced instead. With only three valence electrons, this atom cannot share an electron with one of its four neighbours, so an electron-sized void, a hole, is made available.

We now have silicon with either too many or too few electrons, depending on the doping method.

Upon joining, the two substances forge a p-n junction. This is a depletion region, and it forms at the intersection. Here, some of the surplus electrons on the N-type side migrate over to fill the vacancies on the P-type side. By moving in this direction, electrons and holes will accumulate on either side of a barrier. Holes are thought to be positively charged since they are the opposite of electrons, which are negatively charged. The resulting accumulation produces two distinct regions, one slightly negatively charged and the other slightly positively charged. This forms an electric field that blocks the path of any more electrons. In regular diodes, the voltage drop over this area is only 0.7V.

By applying a voltage across the diode with the P-Type anode linked to the positive and the N-Type cathode attached to the negative, a forward bias is established, and current can flow. The electrons can't get over the 0.7V barrier unless the voltage source is higher.

Reverse bias is the opposite arrangement: the positive terminal of the power supply connects to the N-type cathode and the negative terminal to the P-type anode. The barrier expands as holes are drawn toward the negative terminal and electrons toward the positive, so the diode functions as an insulator and blocks current.

Resistor functions in the circuit

A resistor is a two-terminal, passive electrical component that reduces the amount of current in electric and electronic circuits. Strategically placing a resistor in a circuit lowers the current by a set amount. From the outside, most resistors look identical, but if you crack one open, you'll find an insulating ceramic rod inside with copper wire wound around it. Those copper turns are crucial to the resistance: the thinner the copper path, the higher the resistance, because electrons have a harder time getting through. We already know that electrons move more freely through conductors than through insulators.

Georg Ohm investigated the correlation between a resistor's dimensions and its resistance. He showed that an object's resistance (R) grows in proportion to its length: longer, thinner wires offer greater resistance, while greater wire thickness (cross-sectional area) reduces resistance. This is usually written R = ρL/A, where ρ is the material's resistivity, L the length, and A the cross-sectional area.
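A quick worked example of R = ρL/A in Python, using copper's well-known resistivity; the wire dimensions are made-up values for illustration:

import math

rho_copper = 1.68e-8   # ohm-metres, resistivity of copper
length = 10.0          # metres of wire
diameter = 0.5e-3      # metres (0.5 mm wire)

area = math.pi * (diameter / 2) ** 2
resistance = rho_copper * length / area
print("%.3f ohms" % resistance)  # ~0.856 ohms; longer or thinner wire gives more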

Once everything is hooked up, start the server, browse to your RPi's IP address with the port you chose earlier (as mentioned in the previous section), and enter your username and password. You should see a page that looks like the one below.

All it takes is a few clicks of your mouse to operate four AC home appliances from afar. This can be controlled from a mobile device (phone, tablet, etc.) and expanded with additional switches and relays. Thank you all for reading to the end.

Conclusion

This guide showed us how to set up a web-based control system for a Raspberry Pi 4 home automation project. We learned how to use the WebIOPi framework to manage, debug, and use the Raspberry Pi's GPIO, sensors, and adapters from an internet browser or any application, and we implemented the JavaScript, CSS, and HTML code for the web application. For those who thrive on a challenge, feel free to build upon this base and add whatever demanding module you can think of. The following tutorial will teach you how to use a Raspberry Pi 4 to create a line-follower robot that can navigate obstacles and drive itself.

Estimating the Size of a Crowd with OpenCV and Raspberry Pi 4

Welcome to the next tutorial in our Raspberry Pi 4 Python programming series. In the previous article, we built a system that recognizes when two people are in physical contact, using OpenCV and a Raspberry Pi 4, and implemented the deep-neural-network part with the weights of the YOLO version 3 object-recognition algorithm. For image processing, the Raspberry Pi consistently comes out on top compared to other controllers; a facial recognition program was among the earlier attempts to use it for sophisticated picture processing. In today's world of cutting-edge technology, digital image processing has expanded rapidly to become an integral feature of many portable electronic gadgets.

Digital image processing is widely used for tasks such as object detection, facial recognition, and people counting. This guide will use a Raspberry Pi 4 and ThingSpeak to create a crowd-counting system based on OpenCV. We will utilize the Pi camera module to take pictures in a continuous loop, run the images through the Histogram of Oriented Gradients (HOG) descriptor to find the objects in them, and match the results against OpenCV's pre-trained people-detection model. The headcount can be seen by anybody, anywhere in the world, because the ThingSpeak channel is public.

Knowing how many people show up to an event or purchase a newly released product is vital for event management and retail shop owners. Still, it's even more critical that they can use that information to improve future events. To their relief, modern crowd-counting technology has made it simpler for event planners and business owners to acquire actionable data on event attendance that can be used to improve ROI.

Where To Buy?
No. | Components | Distributor | Link To Buy
1 | Raspberry Pi 4 | Amazon | Buy Now

Components

Hardware

  • Raspberry Pi 4

  • Pi Camera

Software & Online Services

  • ThingSpeak

  • Python3

  • OpenCV3

Instructions for Setting Up OpenCV on a Raspberry Pi

In this case, the OpenCV framework will make people count. You must first upgrade your Raspberry Pi before you can install OpenCV.

sudo apt-get update

Then, get OpenCV ready for your Raspberry Pi by installing its prerequisites.

sudo apt-get install libhdf5-dev -y

sudo apt-get install libhdf5-serial-dev -y

sudo apt-get install libatlas-base-dev -y

sudo apt-get install libjasper-dev -y

sudo apt-get install libqtgui4 -y

sudo apt-get install libqt4-test -y

Once that is done, use the following command to install OpenCV on your Raspberry Pi.

pip3 install opencv-contrib-python==4.1.0.25

Additional Package Installation Necessary

We need to get some additional packages on the Raspberry Pi before we can begin writing the code for the Crowd Counting app.

Installing imutils: To perform basic image processing tasks like translating, rotating, resizing, skeletonizing, and displaying Matplotlib images more efficiently in OpenCV, imutils are used. So, run the following command to set up imutils:

pip3 install imutils

matplotlib: The matplotlib library should then be installed. When it comes to Python visualizations, Matplotlib is your one-stop shop for everything from static to animated to interactive.

pip3 install matplotlib

Configuring Thingspeak for Headcounting

One of the most widely used IoT platforms, ThingSpeak allows us to keep tabs on our data from any location with an Internet connection. The system can also be controlled remotely by using the Channels and web pages provided by ThingSpeak. You must first register for an account on ThingSpeak to create a channel. If you have a ThingSpeak account, please log in with your username and password.

Select Sign up and fill out the required fields.

Double-check your email address and press the "Next" button when you're done. Now that you're logged in, click the "New Channel" button to make a brand-new channel.

When you're ready to begin uploading information, select "New Channel" and give it a descriptive name and brief explanation. Add one new field, "People"; any number of fields may be created as needed. Then click the "Save Channel" button after entering the necessary information. You'll need to pass your Write API key and channel ID into the Python script whenever you want to submit data to ThingSpeak.
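Writing to the channel is a single HTTP request. As a quick standalone check that your key works, you can send a test value from Python; the key below is a placeholder to replace with your channel's Write API Key:

import requests

WRITE_API = "YOUR_WRITE_API_KEY"  # placeholder; use your channel's write key

# Send a head count of 3 to field1. ThingSpeak replies with the entry number,
# or "0" if the update was rejected (e.g. the free-tier rate limit).
response = requests.get("https://api.thingspeak.com/update",
                        params={"api_key": WRITE_API, "field1": 3})
print(response.text)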

Hardware Configuration

For this OpenCV people-countering project, all you need is a Raspberry Pi and a Pi camera; to get started, plug the camera's ribbon connector into the Raspberry pi's designated camera slot.

The Pi 4 Camera board is a purpose-built expansion board for the Raspberry Pi computer. The Raspberry Pi hardware is connected via a specialized CSI interface. In its native still-capture mode, the sensor's resolution is 5 megapixels. Capturing at up to 1080p and 30 frames/second in video mode is possible. Because of its portability and compact size, this camera module is fantastic for handheld applications.

Setup the Camera Board

A ribbon cable connects the camera board to the Raspberry Pi, and the camera will only work if the cable is inserted correctly at both ends: at the camera PCB, the cable's blue backing must face away from the board, while at the Raspberry Pi end, the blue backing must face the Ethernet port.

Histogram of Oriented Gradients

The HOG is one example of a feature descriptor, in the same family of image-processing tools as the Canny edge detector. Object detection is a typical application of this technique in image processing and computer vision. The method counts occurrences of gradient orientations in localized regions of an image, and it has much in common with the Scale-Invariant Feature Transform. The HOG descriptor highlights object structure and shape, and it outperforms other edge descriptors because it considers both the magnitude and the angle of the gradient: histograms are created for the image's regions based on the gradient's magnitude and direction.

How do we calculate the histogram of oriented gradient features?

First, load the image that will serve as the basis for the HOG feature calculation into the system. Reduce the size of the image to 128 by 64 pixels. The research authors utilized and recommended this dimension because improving detection outcomes for pedestrians was their primary goal. After achieving near-perfect scores on the MIT pedestrian's database, the authors of this study opted to create a new, more difficult dataset: the 'INRIA' dataset (http://pascal.inrialpes.fr/data/human/), which includes 1805 (128x64) photographs of individuals cut from a wide range of personal photos.

In this step, we compute the image's gradient, from which each pixel's magnitude and angle follow. First, we determine Gx and Gy for every pixel using simple one-dimensional [-1, 0, 1] filters over its 3x3 neighbourhood: Gx(r, c) = I(r, c+1) - I(r, c-1) and Gy(r, c) = I(r-1, c) - I(r+1, c).

After Gx and Gy are determined, each pixel's magnitude and angle are computed as magnitude = sqrt(Gx^2 + Gy^2) and angle = |arctan(Gy / Gx)|.
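In code, these gradients are one call away. A small sketch with OpenCV follows; a Sobel kernel size of 1 gives the plain [-1, 0, 1] filters described above, and "person.jpg" is a hypothetical input file:

import cv2

img = cv2.imread("person.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
img = cv2.resize(img, (64, 128)).astype("float32")    # HOG's 128x64 window

gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=1)  # horizontal gradient
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=1)  # vertical gradient

# Per-pixel magnitude and angle; HOG folds angles into 0-180 degrees
# because it uses unsigned gradients.
magnitude, angle = cv2.cartToPolar(gx, gy, angleInDegrees=True)
angle = angle % 180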

Once the gradient for each pixel has been calculated, the two gradient matrices are partitioned into 8x8-pixel cells, and each cell is assigned a 9-point histogram. Since unsigned angles span 0 to 180 degrees, each bin covers a 20-degree range, giving nine bins in total; Figure 8 depicts such a histogram graphically. Each bin's value records the relative strength of the gradient over the corresponding angular interval. Each of a cell's 64 magnitude/angle pairs contributes to the histogram, so the calculation below is carried out 64 times per cell. Because 9-point histograms are used, the bin width is 180/9 = 20 degrees.

The limits of the jth bin are therefore [20j, 20(j + 1)).

The centre value of the jth bin is Cj = 20j + 10.

Illustration of a histogram with nine discrete bins: for a particular 8x8 cell of 64 pixels, there is exactly one such histogram. Each of the sixty-four pixels contributes its Vj and Vj+1 values to the histogram array at the jth and (j+1)th positions.

When determining the contribution of a pixel with magnitude m and angle θ, we first find the bin j whose centre Cj lies just below θ. The magnitude is then split between the neighbouring bins in proportion to how close θ is to each centre: Vj = m × (Cj+1 − θ) / 20 and Vj+1 = m × (θ − Cj) / 20.

Each pixel's Vj and Vj+1 values are accumulated into its cell's histogram. After the preceding steps, the resulting matrix has dimensions 16 x 8 x 9 (the 128 x 64 window holds 16 x 8 cells). When the histograms of all cells have been computed, blocks are formed by joining 2x2 groups of neighbouring cells from this 16 x 8 x 9 matrix. This grouping is carried out with overlap, using an 8-pixel stride. We create a 36-value feature vector by concatenating the 9-point histograms of the four cells that make up each block.

A combined block feature fb is thus created from four cell histograms as the 2x2 window traverses the image.

The L2 norm is used to standardize each block feature fb across blocks: every value in fb is divided by the normalization factor k.

The value of k is found as k = sqrt(fb1^2 + fb2^2 + ... + fb36^2 + ε^2), i.e. the square root of the sum of the squared values in fb, with a small ε guarding against division by zero.

Normalization is performed to lessen the impact of contrast variations between photographs of the same object. Each block yields a 36-point feature vector, and with the 8-pixel stride, seven block positions fit across the window's width and fifteen down its height. Therefore, the total length of the histogram-of-oriented-gradients feature vector is 7 x 15 x 36 = 3780. These are the HOG features extracted from the image.
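You can confirm the 3780 figure directly, since OpenCV's default HOG descriptor uses exactly this geometry (a 64x128 window, 8x8 cells, 2x2-cell blocks, and 9 bins):

import cv2

hog = cv2.HOGDescriptor()
print(hog.getDescriptorSize())  # prints 3780 = 7 * 15 * 36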

HOG features can be visualized alongside the original image using an imaging library (scikit-image, for example, can render the two side by side).

Explanation of the People Counting Python Program

This page includes the complete Python code for an OpenCV project that counts the people in a crowd. Here, we break down the code's crucial parts so you can understand them better—first, import all the necessary libraries that will be used later in the code.

import cv2

import imutils

from imutils.object_detection import non_max_suppression

import numpy as np

import requests

import time

import base64

from matplotlib import pyplot as plt

from urllib.request import urlopen​

  • Imutils: 

For use with OpenCV and either version of Python, this package provides a set of helper functions for everyday image processing tasks such as scaling, cropping, skeletonizing, showing Matplotlib pictures, grouping contours, identifying edges, and more.

  • Numpy:

The NumPy library lets you manipulate arrays in Python; matrix operations, the Fourier transform, and linear algebra are all within its purview. It is freely available to the public, and its name is short for "Numerical Python."

Python's list data structure can stand in for arrays, but it is slow; NumPy's intended benefit is an array object up to 50 times quicker than standard Python lists. To make working with its array object, ndarray, as simple as possible, the library provides several helpful utilities. Data science makes heavy use of arrays because of the importance placed on speed and efficiency.

  • Requests:

You should use the requests package if you need to send an HTTP request from Python. It hides the difficulties of requests making behind a lovely, straightforward API, freeing you to focus on the application's interactions with services and data consumption.

  • Time:

In Python, the time module has a built-in function called localtime() that determines the current local time from the number of seconds that have passed since the epoch. In the value it returns, the tm_isdst flag ranges from 0 to 1 to indicate whether daylight saving time applies in the region.

  • Base64:

If you need to store or transmit binary data over a medium better suited to text, you should look into the Base64 encoding technique. This encoding lowers the risk of data corruption or loss in transit. Base64 is widely used for purposes such as MIME-encoded email and storing complex data in XML and JSON.

  • Matplotlib:

When it comes to Python visualizations, Matplotlib is your one-stop shop for everything from static to animated to interactive. Matplotlib facilitates both straightforward and challenging tasks. Design graphs worthy of publication. Create movable, updatable, and zoomable figures.

  • urllib.request:

If you need to make HTTP requests with Python, you may be directed to the brilliant requests library. Though it's a great library, you may have noticed that it isn't a built-in part of Python. If you prefer, for whatever reason, to limit your dependencies and stick to the standard library, you can reach for urllib.request instead!

Then, after the libraries have been imported, you can paste in the channel ID and API key for the ThingSpeak account you previously copied.

channel_id = 812060 # PUT CHANNEL ID HERE

WRITE_API = 'X5AQ3EGIKMBYW31H' # PUT YOUR WRITE KEY HERE

BASE_URL = "https://api.thingspeak.com/update?api_key= {}".format(WRITE_API)

Set the default values for the HOG descriptor. HOG is one of the most commonly implemented methods for object detection, and it has found many other uses. OpenCV ships a pre-trained people-detection model, which is accessed through cv2.HOGDescriptor_getDefaultPeopleDetector().

hog = cv2.HOGDescriptor()

hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

Inside the detector() function, the Raspberry Pi receives a three-channel color image. The function uses imutils to scale the image down to a manageable size and then calls detectMultiScale(), which slides the HOG-plus-SVM classifier across the image to determine whether any people are present.

def detector(image):

   image = imutils.resize(image, width=min(400, image.shape[1]))

   clone = image.copy()

   rects, weights = hog.detectMultiScale(image, winStride=(4, 4), padding=(8, 8), scale=1.05)

If you get false positives or missed detections because capture boxes overlap, the code below uses the non-max-suppression capability from imutils to merge overlapping regions.

for (x, y, w, h) in rects:

       cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)

   rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])

   result = non_max_suppression(rects, probs=None, overlapThresh=0.7)

   return result

Within the record() function, the image is retrieved from the Pi camera with OpenCV's VideoCapture() method, resized with imutils, passed through the detector, and the resulting count is sent to ThingSpeak.

def record(sample_time=5):

    camera = cv2.VideoCapture(0)

    # inside the capture loop (see the complete code below):
    frame = imutils.resize(frame, width=min(400, frame.shape[1]))

    result = detector(frame.copy())

    thingspeakHttp = BASE_URL + "&field1={}".format(result1)

OpenCV's People Counting Tool: A Quick Test

Now that everything is hooked up and ready to go, let's put it through its paces. Extract the program to a new folder and make sure your Raspberry Pi camera is operational. Then launch the script with python3 followed by its filename (for example, python3 crowd_count.py, if you saved it under that name); give Python a few seconds to load all the necessary modules.

At that point, a new window will appear with your live video feed inside of it. OpenCV will count the number of persons in the first frame that Pi processes. The appearance of a box will indicate the detection of humans:

Output

Now that you know how many people are expected to show up, you can check the crowd size from the comfort of your own home via your ThingSpeak channel.

You can now efficiently conduct crowd counts with OpenCV and a Raspberry Pi. This technology helps with guaranteeing the safety of those attending large-scale events, which is a top priority for event planners. Knowing how people will flow through a venue or store is crucial for offering effective crowd management services. It will also improve efficiency and customer service because it is helpful for event and store managers to track the number of people entering and leaving their establishments at any one time. Additionally, it is important for event planners to understand dwell time in order to ascertain which parts of the venue are popular with attendees and which are completely bypassed. This gives them information about how the guest felt, which lets them better use the space they have. 

Complete code

import cv2

import imutils

from imutils.object_detection import non_max_suppression

import numpy as np

import requests

import time

import base64

from matplotlib import pyplot as plt

from urllib.request import urlopen

channel_id = 812060 # PUT CHANNEL ID HERE

WRITE_API  = 'X5AQ3EGIKMBYW31H' # PUT YOUR WRITE KEY HERE

BASE_URL = "https://api.thingspeak.com/update?api_key={}".format(WRITE_API)

hog = cv2.HOGDescriptor()

hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# In[3]:

def detector(image):

   image = imutils.resize(image, width=min(400, image.shape[1]))

   clone = image.copy()

   rects, weights = hog.detectMultiScale(image, winStride=(4, 4), padding=(8, 8), scale=1.05)

   for (x, y, w, h) in rects:

       cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)

   rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])

   result = non_max_suppression(rects, probs=None, overlapThresh=0.7)

   return result

def record(sample_time=5):

   print("recording")

   camera = cv2.VideoCapture(0)

   init = time.time()

   # ubidots sample limit

   if sample_time < 3:

       sample_time = 1

   while(True):

       print("cap frames")

       ret, frame = camera.read()

       frame = imutils.resize(frame, width=min(400, frame.shape[1]))

       result = detector(frame.copy())

       result1 = len(result)

       print (result1)

       for (xA, yA, xB, yB) in result:

           cv2.rectangle(frame, (xA, yA), (xB, yB), (0, 255, 0), 2)

       plt.imshow(frame)

       plt.show()

       # sends results

       if time.time() - init >= sample_time:

           thingspeakHttp = BASE_URL + "&field1={}".format(result1)

           print(thingspeakHttp)

           conn = urlopen(thingspeakHttp)

           print("sending result")

           init = time.time()

   camera.release()

   cv2.destroyAllWindows()

# In[7]:

def main():

   record()

# In[8]:

if __name__ == '__main__':

   main() 

Conclusion

Crowd dynamics can be affected by several things, such as the passage of time, the layout of the venue, the amount of information provided to visitors, and the overall enthusiasm of the gathering. Managers of large crowds need to be flexible and responsive in case of sudden changes in the environment that affect the situation's dynamics in real-time. Trampling events, mob crushes, and acts of violence can break out without proper crowd management.

The complexity and uncertainty of large-scale events emphasize the importance of providing timely, relevant information to crowd managers. Occupancy control technology helps event planners anticipate how many people will show up to their event, so they can prepare appropriately by ensuring adequate security guards, exits, etc.

Using Raspberry Pi and some smart subtractions and blob tracking, this article describes a system for counting individuals. We show how many people have entered and left a building. The principles of HOG and the calculation of features have also been covered. The testing outcomes demonstrate the viability of using this raspberry pi based device as an essential people-counting station. In the following tutorial, we'll learn how to assemble an intelligent energy monitor based on the Internet of Things and a Raspberry Pi 4.

Stop Motion Movie System using Raspberry Pi 4

Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the previous tutorial, we built a motion sensor-based security system with an alarm. Additionally, we discovered how to use Twilio to notify the administrator whenever an alarm is triggered. However, in this tutorial, we'll learn how to build a stop motion film system using raspberry pi 4.

Where To Buy?
No. | Components | Distributor | Link To Buy
1 | Breadboard | Amazon | Buy Now
2 | Jumper Wires | Amazon | Buy Now
3 | Raspberry Pi 4 | Amazon | Buy Now

What you will make

With a Raspberry Pi, Python, and a Pi camera module to capture images, you can create a stop-motion animated video. In addition, we'll learn about the various kinds of stop-motion systems and their advantages and disadvantages.

The possibilities are endless when it comes to using LEGO to create animations!

What will you learn?

Using your RPi to build a stop motion machine, you'll discover:

  • How to install and utilize the picamera module on the RPi

  • How to take photos with the picamera library

  • How to connect a pushbutton to the RPi's GPIO pins

  • How to operate the picamera by pressing the GPIO pushbutton

  • How to use FFmpeg (rather than the older avconv) to create a video clip from the command prompt

Prerequisites

Hardware

  • Raspberry Pi 4

  • Breadboard

  • Jumper wires

  • Button

Software

FFmpeg should come preinstalled on the most recent release of Raspbian. If you don't have it, launch the terminal and type:

sudo apt-get update

sudo apt-get upgrade

sudo apt install ffmpeg
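FFmpeg is what will eventually stitch our still frames into a movie. As a preview of that step, here is a minimal sketch that calls FFmpeg from Python, assuming frames saved as frame001.jpg, frame002.jpg, and so on:

import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "10",     # 10 stop-motion frames per second
    "-i", "frame%03d.jpg",  # numbered input frames
    "-c:v", "libx264",      # widely supported H.264 output
    "-pix_fmt", "yuv420p",  # keeps most players happy
    "animation.mp4",
], check=True)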

What is stop-motion?

Inanimate things are given life through the use of a sequence of still images in the stop-motion cinematography technique. Items inside the frame are shifted slightly between every picture to create the illusion of movement when stitched together.

You don't need expensive gadgets or graphics skills to get started in stop motion. That, in my opinion, is the most intriguing aspect of it.

If you've ever wanted to learn how to make a stop-motion video, you've come to the right place. 

Types of stop-motion

  1. Object-Motion

Object animation, sometimes called product animation, is the frame-by-frame movement of everyday things. You're free to use any items around you to tell stories in this style.

  2. Claymation

Changing clay items in each frame is a key part of the claymation process. We've seen a lot of clever and artistic figures on the big screen thanks to wires and clay.

  3. Pixilation Stop Motion

Making people move! This style is rarely used: because a performer can shift only a little between frames, and because of the sheer number of images you'll need, it demands a lot of patience, and possibly a lot of money if you're hiring actors.

The degree of freedom and precision with which they can move is also an important consideration. However, if done correctly, this kind can seem cool, but it can also make you feel a little dizzy at times.

  4. Cutout Animation

Cutout animation gives you a great deal of creative freedom. Two-dimensional scraps of paper may appear lifeless, yet you can color and slice them to show a surprising depth of detail.

It's a lot of fun to play about with a cartoon style, but it also gives you a lot more control over the final product because you can add your graphics and details. However, what about the obvious drawback? I find the task of slicing and dicing hundreds of pieces daunting.

  5. Puppet Animation

Puppets can be a fun and creative way to tell stories, but they can also be a pain in the neck when there are a lot of cords to deal with, which is why traditional puppets are not the greatest choice for beginning stop motion filmmakers.

When animators use the term "puppet" to describe a wire-based clay character, they are really referring to claymation; puppets in the marionette style are becoming less popular.

  6. Silhouette Stop Motion

Position the items or performers behind a white sheet and light their shadows on the sheet with a backlight. Simple, low-cost methods exist for creating eye-catching animations of silhouettes.

How long does it take to make a stop-motion video?

The time it takes to create a stop-motion video depends entirely on the scale and nature of your project. Testing out 15- and 30-second movies should only take an hour or two, while projects with complex scenes or claymation can take days to complete.

Connect the camera to the raspberry pi.

You must attach the camera to the Pi before booting it up.

Find the camera port next to the Ethernet port, and lift the tab on top of the connector.

Insert the ribbon cable with the blue side facing the Ethernet port, then push the tab back down while holding the ribbon in place.

Try out the camera

Use the app menu to bring up a command prompt. The following command should be typed into the terminal:

libcamera-hello

If all goes well, a camera preview will appear. Don't worry if the preview is upside-down; you can fix that later. To close the preview, hit Ctrl + C.

For storing an image on your computer, run the command below:

libcamera-jpeg -o test.jpg

To examine what files are in your home folder, type ls in the command line and you'll see test.jpg among the results.

Files and folders will be displayed in the taskbar's file manager icon. Preview the image by double-clicking test.jpg.

The Python picamera library does not work out of the box on the newest version of Raspberry Pi OS.

To make use of the camera module with picamera, you must activate the camera's legacy mode.

The command below must be entered into a command window:

sudo raspi-config

Navigate to Interface Options and hit 'Enter' on your keyboard.

Ensure that the 'Legacy Camera' option is selected, then tap the 'Return' key.

Select Yes using the arrow keys and hit the 'Return' key.

Press 'Return' once more to confirm.

Use the arrow keys to select Finish.

To restart, simply press the 'Return' key.

Python IDLE can be accessed from the main menu.

While in the menu, click File and then New Window to launch a Python code editor.

Paste the code below, paying careful attention to the capitalization, into the newly opened window.

from picamera import PiCamera

from time import sleep

camera = PiCamera()

camera.start_preview()

sleep(3)

camera.capture('/home/pi/Desktop/image.jpg')

camera.stop_preview()

Using the File menu, choose Save As and save the file (a name like animation.py will do).

Use the F5 key to start your program.

You should be able to locate image.jpg on your desktop. It's as simple as clicking it twice to bring up a larger version of the image.

It's possible to fix an upside-down photo either by repositioning your picamera with a camera stand or by telling Python to rotate the picture. Adding the following line will accomplish this.

camera.rotation = 180

Place it just after the line where the camera is set to PiCamera(), so the program reads as follows:

from picamera import PiCamera

from time import sleep

camera = PiCamera()

camera.rotation = 180

camera.start_preview()

sleep(3)

camera.capture('/home/pi/Desktop/image.jpg')

camera.stop_preview()

A fresh photo with the proper orientation will be created when the file is re-run. Do not remove these lines of code from your program when making the subsequent modifications.

Connect a physical button to a raspberry pi

Hook the Raspberry Pi to the pushbutton as illustrated in the following diagram with a breadboard and jumper wires:

Import Button from gpiozero at the beginning of the program, attach the button to pin 17, and change the sleep line to use the pushbutton as a trigger, as shown below:

from picamera import PiCamera

from time import sleep

from gpiozero import Button

button = Button(17)

camera = PiCamera()

camera.start_preview()

button.wait_for_press()

camera.capture('/home/pi/image.jpg')

camera.stop_preview()

It's time to get to work!

As soon as the preview has begun, press the pushbutton connected to the Pi to take a picture.

If you go back to the folder, you will find your image.jpg there now. Double-click to see the image once more.

Take a picture with Raspberry Pi 4

For a self-portrait, you'll need to include a delay so that you can get into position before the camera board takes a picture of you. Modifying your code is one way to accomplish this.

Before taking a picture, put in a line of code that tells the program to take a little snooze.

camera.start_preview()

button.wait_for_press()

sleep(3)

camera.capture('/home/pi/Desktop/image.jpg')

camera.stop_preview()

It's time to get to work.

Try taking a selfie by pressing the button. Keep the camera steady at all times! It's best if it's already mounted somewhere.

Inspect the photo in the folder once more if necessary. You can snap a second selfie by running the application again.

Things to consider for making a stop motion animation

  1. You must have a steady pi-camera!

This is made easier with the aid of a well-designed setup.  To avoid blurry photos due to camera shaking, you will most likely want to use a tripod or place your camera on a flat surface.

  2. Keep your hands away from the pi-camera

Your stop-motion movie will look best if pressing the pushbutton doesn't move the camera. Treat the button as a remote trigger: mount it away from the camera so that taking a picture never shakes the shot.

  3. Shoot manually

Keep your shutter speed, ISO, aperture, and white balance the same for every photo you shoot: no "auto" settings here. Select and lock the camera's configuration first; as long as your settings remain consistent throughout all of your photos, you're good to go. If you leave them on auto, the configuration will adapt as you move the items around, which may cause flickering from image to image.

  4. Make sure you have proper lighting.

It's ideal to shoot indoors because the light is easier to regulate and shielded from constant change. Keep an eye out for windows if you're getting more involved. Try a basic lighting setup where you can easily see your items and the light doesn't move too much. Even so, some flickering can be visible between frames; sometimes flickering works well with an animation, but only when it doesn't disrupt the flow of the project.

  5. Frame Rate

You don't need to get extremely technical at the beginning, but you do need to know how many frames to shoot to achieve the sequence you desire. One second of film is typically made up of 12 images or frames, so a 30-second clip needs around 360 photographs. Use far fewer frames per second and the motion starts to look jerky rather than fluid.

  6. Audio

While you're filming your silent stop motion movie, you can come up with creative ways to incorporate sound later.

Stop-motion video

The next step is to experiment with creating a stop motion video from the collection of still photos you've captured with the picamera. Note that the stills must be saved in their own folder: type "mkdir animation" in the command line to create it.

When the button is pushed, add a loop to your program so that photographs are taken continuously.

camera.start_preview()

frame = 1

while True:

    try:

        button.wait_for_press()

        camera.capture('/home/pi/animation/frame%03d.jpg' % frame)

        frame += 1

    except KeyboardInterrupt:

        camera.stop_preview()

        break

Because while True loops indefinitely, you need a way to end the program gently. Thanks to the try-except construct, pressing Ctrl + C closes the picamera preview cleanly and terminates the loop.

Because of the %03d format, files are stored as "frame" followed by a three-digit number padded with leading zeros (frame001.jpg, frame002.jpg, and so on). This makes it simple to arrange them in the proper sequence for the video.

To capture each following frame, simply push the button a second time once you've finished rearranging the animation's main element.

To kill the program, use Ctrl + C when all the images have been saved.

Your image collection can be viewed in the folder by opening the animation directory.

Create the video

To initiate the process of creating the movie, go to the terminal.

Start the movie rendering process by running the following command:

ffmpeg -r 10 -i animation/frame%03d.jpg -qscale 2 animation.mp4

Because FFmpeg and Python both recognize the %03d formatting, the photographs are added to the movie in the correct sequence.

Use vlc to see your movie.

vlc animation.mp4

The render command can be edited to change the frame rate: try adjusting -r 10 to a different value, as in the example below.
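For instance, a variation that renders the same frames at 24 fps into a separate file (the output name here is just illustrative):

ffmpeg -r 24 -i animation/frame%03d.jpg -qscale 2 animation24.mp4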

Rename the rendered videos to prevent them from being overwritten: change animation.mp4 to a different filename to accomplish this.

What's the point of making stop motion?

Corporations benefit greatly from high-quality stop motion films, despite the effort and time it takes to produce them. One of these benefits is that consumers enjoy sharing these movies with friends, and their inspiring content becomes associated with the company. Adding stop motion to a company's marketing strategy can help make its product more popular and memorable.

When it comes to spreading awareness and educating the public, stop motion films are widely posted on social media. It's important to come up with an original idea for your stop motion movie before you start looking for experienced animators.

Stop Motion Movie's Advantages

In the early days of filmmaking, stop motion was mostly employed to give animated characters the appearance of mobility. The cameras would be constantly started and stopped, and the multiple images would all be put together to tell a gripping story.

It's not uncommon to see films employ this time-honored method as a tribute to the origins of animations. There's more, though. 

  1. Innovation

In the recent resurgence of stop motion animation, strange and amazing props and procedures have been used to create these videos. Filmmakers have gone from generating stop motion with large sheets of drawings, to constructing scenes with plasticine figures that must be manually manipulated millimetres at a time, to more esoteric props such as foodstuffs, domestic objects, and creatures.

Using this technique, you can animate any object, even one that isn't capable of moving by itself. A stop-motion movie may be made with anything, thus the options are practically limitless.

  2. Animated Tutorials

A wide range of material genres, from educational films to comedic commercials, is now being explored with stop motion animation.

When it comes to creating marketing and instructional videos, stop motion animation is a popular choice due to its adaptability: a video can be tailored to an individual product or audience.

Although such a film may be about five minutes long, viewers are likely to stick with it because of its originality. The sophisticated tactics employed captivate the audience, and once you start viewing a good stop motion video, it's hard to stop until the finish.

  3. Improve the perception of your brand

It's easy to remember simple but innovative animations like these. These movies can assist a company's image and later recall be more positive. Stop motion video can provoke thought and awe in viewers, prompting them to spread the creative message to their social networks and professional contacts.

It is becoming increasingly common for organizations of all kinds to include stop-motion animations in their advertisements. 

  4. In education

Stop-motion films can have a positive impact on both education and business. Employees, customers, and students all benefit from using them to learn difficult concepts and methods more enjoyably. Stop motion filmmaking can liven up any subject matter, and pupils are more likely to retain what they've learned when it's done this way.

Some subjects can be studied more effectively in this way as well. Using stop motion films, for instance, learners can see the entire course of an experiment involving a slow-occurring reaction in a short amount of time.

Learners are given a stop motion assignment to work on as a group project in the classroom. Fast stop motion animation production requires a lot of teamwork, which improves interpersonal skills. Some learners would work on the models, while others might work on the backdrops and voiceovers, while yet others might concentrate on filming the scenes and directing the actors.

  5. Engage Customers and Employees

Stop motion movies can be used to explain a product's uses rapidly, even when operating the device and seeing its output would take a while in real time. You can speed up the timeline as much as you want in stop motion animation!

For safety and health demonstrations or original sales demonstrations, stop motion instructional films may also be utilized to effectively express complex concepts. Because of the videos' originality, viewers are more likely to pay attention and retain the content.

  6. Music Video

Some incredibly creative music videos have lately been created using stop motion animations, which has recently seen a resurgence in popularity.  Even the human body could be a character in this film.

Stop-motion animations have the potential to be extremely motivating. Sometimes, it's possible to achieve it by presenting things in a novel way, such as by stacking vegetables to appear like moving creatures. The sky's the limit when it comes to what you can dream up.

  7. Reaction-Inducing Video

A stop motion movie doesn't have to be complicated: a camera, a steady mount, and something to animate are enough to begin filming. However, if you want to create a professional-level stop motion film, you'll need to enlist the help of an animation company.

As a marketing tool, animated videos may be highly effective when they are created by a professional team. 

  8. Create an Intriguing idea

The story of a motion-capture movie is crucial in attracting the attention of audiences, so it should be carefully planned out before production begins. It should be appropriate for the video's intended audience, brand image, and message. If you need assistance with this, consider working with an animation studio.

Disadvantages

There are several drawbacks to stop motion filmmaking, however, and they are difficult to overcome. The most notable is the time it takes to create even a minute of footage: depending on the approach used, it might take anywhere from a few days to many weeks.

Additionally, the amount of time and work that is required to make a stop-motion movie might be enormous. This may necessitate the involvement of a large team. Although this is dependent on the sort of video, stop motion animating is now a fairly broad area of filmmaking, which can require many different talents and approaches.

Conclusion

Using the Raspberry Pi 4, you were able to create a stop-motion movie system. Various stop motion techniques were also covered, along with their advantages and disadvantages. After completing the system's basic functions and integrating additional components of your choice, you're ready to go on to the next phase of programming. In the next article, we will build an LED cube using Raspberry Pi 4.

Smart Security System using Facial Recognition with Raspberry Pi 4

Greetings, and welcome to the next tutorial of our Raspberry Pi programming series. In the previous tutorial, we learned how to build a smart attendance system using an RFID card reader to sign students into a class. This tutorial will show you how to build a face-recognition program on a Raspberry Pi. Two Python programs will be used in the lesson: a training program that analyzes a collection of photographs of a particular individual and generates a dataset (a YML file), and a recognizer program that uses the YML file to detect a face and then utter the person's name.

Where To Buy?
No.ComponentsDistributorLink To Buy
1BreadboardAmazonBuy Now
2DC MotorAmazonBuy Now
3Jumper WiresAmazonBuy Now
4Raspberry Pi 4AmazonBuy Now

Components

  • Raspberry Pi
  • Breadboard
  • L293 or SN755410 motor driver chip
  • Jumper wires
  • DC motor
  • 5v power supply

A growing number of us already use face recognition software without realizing it. Facial recognition is used in several applications, from basic Facebook tag suggestions to advanced security screening surveillance. Chinese schools employ facial recognition to track students' attendance and behaviour. Retail stores use face recognition to classify their clients and identify those with a history of crime. There's no denying that this tech will be everywhere soon, especially with so many other developments in the works.

How does facial recognition work?

When it comes to facial recognition, biometric authentication goes well beyond simply identifying human faces in images or videos: an additional step is taken to establish the person's identity. Facial recognition software compares an image of a person's face against a database to see whether the features match a known person, and it is built so that facial expressions and hairstyles do not affect its ability to find a match.

How can face recognition be used when it comes to smart security systems?

The first thing you should do if you want to make your home "smart" is to focus on security. Your most prized possessions are housed at this location, and protecting them is a must. You can monitor your home security status from your computer or smartphone thanks to a smart security system when you're outdoors.

Traditionally, a security company would install a wired system in your house and sign you up for professional monitoring. The plot has been rewritten: you can now set up a smart home system yourself, and your smartphone acts as the professional monitor, providing real-time information and notifications.

Face recognition is the ability of a smart camera in your house to identify a person from their face. For it to work, you have to tell the algorithm which face goes with which name: facial detection in security systems necessitates creating user accounts for family members, acquaintances, and others you want the system to identify. The system then alerts you when they arrive at your door or inside your house.

Face-recognition technology allows you to create specific warning conditions. For example, you can configure a camera to inform you when an intruder enters your home with a face the camera doesn't recognize.

Astonishing advancements in smart tech have been made in recent years. Companies are increasingly offering automatic locks with face recognition. You may open your doors just by smiling at a face recognition system door lock. You could, however, use a passcode or a real key to open and close the smart door. You may also configure your smart house lock to email you an emergency warning if someone on the blacklist tries to unlock your smart security door.

How to install OpenCV for Raspberry Pi 4.

OpenCV, as previously stated, will be used to identify and recognize faces. So, before continuing, let's set up the OpenCV library. Your Pi 4 needs a 2A power adapter and an HDMI cable because we won't be able to access the Pi's screen through SSH. The OpenCV documentation is a good place to learn how image processing works, but I'm not going to go into it here.

Installing OpenCV using pip

pip is well-known for making it simple to add new libraries to Python, and OpenCV can be installed on a Raspberry Pi via pip. We don't get complete control of the OpenCV library when installing it this way, and it didn't work reliably for me, but it might be worth a go if time is of the essence.

Ensure pip is set up on your Raspberry Pi. Then, one by one, execute the lines of code listed below into your terminal.

sudo apt-get install libhdf5-dev libhdf5-serial-dev

sudo apt-get install libqtwebkit4 libqt4-test

sudo pip install opencv-contrib-python
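If the install succeeds, a quick sanity check from the terminal confirms that Python can import the library:

python3 -c "import cv2; print(cv2.__version__)"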

How OpenCV Recognizes Face

Facial recognition and face detection are not the same things, and this must be clarified before we proceed. With face detection, the program only notices that a face is present; it has no clue who that person is. Facial recognition software detects the face and also recognizes whose face it is. It's pretty evident, then, that face detection comes before facial recognition. To explain how OpenCV recognizes a person or other objects, I will have to go into some detail.

Essentially, a webcam feed is a long series of continuously updating still photos, and every image is nothing more than a jumble of pixel values arranged in a specific order. So how does a computer program identify a face among all of these pixels? Describing the underlying techniques is outside the scope of this post; since we're utilizing the OpenCV library, facial recognition becomes a straightforward process that doesn't necessitate a deeper understanding of the underlying principles.

Using Cascade Classifiers for Face Detection

Before we can recognize a face, we must first detect it. For detecting objects, including faces, OpenCV provides Classifiers: pre-trained datasets that may be utilized to recognize a certain item. Classifiers exist for other objects too, such as mouths, eyebrows, vehicle number plates, and smiles.

Alternatively, OpenCV allows you to design your custom Classifier for detecting any objects in images by retraining the cascade classifier. For the sake of this tutorial, we'll be using the classifier named "haarcascade_frontalface_default.xml" to identify faces from the camera. We'll learn more about image classifiers and how to apply them in code in the following sections.
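As a quick illustration of how a cascade classifier is used before we wire it into the full programs (a minimal sketch, assuming a sample photo named test.jpg sits next to the XML file):

import cv2

# Load the pre-trained frontal-face classifier that ships with OpenCV
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Read a test image and convert it to grayscale, which is what the classifier expects
img = cv2.imread('test.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one (x, y, w, h) rectangle per detected face
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
print('Found %d face(s)' % len(faces))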

Setup the raspberry pi camera

For the face training and detection, we only need the pi camera, and to install this, insert the raspberry pi camera in the pi camera slot as shown below. Then go to your terminal, open the configuration window using "sudo raspi-config", and press enter. Navigate to the interface options and activate the pi camera module. Accept the changes and finish the setup. Then reboot your RPi.

How to Setup the Necessary Software

First, ensure pip is set up, and then install the following packages using it.

Install dlib: dlib is a toolkit for building real-world machine learning and data analysis programs. To get dlib up and running, type the following command into your terminal window.

pip install dlib

If everything goes according to plan, you should see something similar after running this command.

Install pillow: The Python Image Library, generally known as PIL, is a tool for opening, manipulating, and saving images in various formats. The following command will set up PIL for you.

pip install pillow

You should receive the message below once this app has been installed.

Install face_recognition: The face recognition package is often the most straightforward tool for detecting and manipulating human faces. Face recognition will be made easier with the help of this library. Installing this library is as simple as running the provided code.

pip install face_recognition --no-cache-dir

If all goes well, you should see something similar to the output shown below after installation. Because the package is large, I used the "--no-cache-dir" option to install it without keeping any of its cache files.

Face Recognition Folders

A file named "haarcascade_frontalface_default.xml" is the Classifier used for detecting faces. The training script will also build a "face-trainner.yml" file based on the photos found in the face images directory.

Start by filling the face images folder with a collection of face images.

The face images folder indicated above should contain a subdirectory for each person to be identified, named after that person and holding several sample photographs of them. Esther and one other person are identified in this tutorial, so I've generated the two sub-directories shown below, each containing a single image.

You must rename the directories, and replace the photographs, with the names and faces of the people you are identifying. A minimum of about five images per person appears optimal; however, the more images, the slower the training will run.
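For illustration, the layout might look like this ("John" and the filenames are hypothetical):

Face_Images/
├── Esther/
│   ├── esther1.jpg
│   └── esther2.jpg
└── John/
    ├── john1.jpg
    └── john2.jpg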

Face trainer program

Face_Trainer.py is a Python program that may be used to train a new face. Its purpose is to access the face photographs folder and scan for faces: as soon as it detects a face, it crops it, converts it to grayscale, and saves the trained data in a file named face-trainner.yml using the LBPH recognizer created below. The information in this file is used to identify the faces later. The complete trainer program is provided at the end; here, we'll go over its most critical lines.

The first step is to import the necessary modules. The cv2 package is utilized to process photos, the NumPy library handles image-to-array conversion, the os package is used for directory navigation, and PIL is used to open and manipulate photos.

import cv2

import numpy as np

import os

from PIL import Image

Ensure that the XML file in question is located in the project directory to avoid encountering an issue. The LBPH face recognizer is then constructed and stored in the recognizer variable.

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

recognizer = cv2.face.LBPHFaceRecognizer_create()  # createLBPHFaceRecognizer() on old OpenCV 2.x

Face_Images = os.path.join(os.getcwd(), "Face_Images")

To open all of the files ending in .jpeg, .jpg, or .png within every subfolder of the face images folder, we traverse the tree with for loops. We record the path to every image in a variable named path, and the subfolder's name (the name of the person in the images) in a variable named person_name.

for root, dirs, files in os.walk(Face_Images):
    for file in files:  # check every file in every subdirectory
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)

Whenever the person's name changes, that is, when we move into a new subdirectory, we increment a variable named Face_ID so that each individual gets a unique Face_ID.

            if pev_person_name != person_name:
                Face_ID = Face_ID + 1  # if yes, increment the ID count
                pev_person_name = person_name

Grayscale photos are simpler for OpenCV to deal with than colour ones because the BGR values can be ignored. We therefore convert each image to grayscale and then resize it so that all the pictures are the same size; to avoid having a face cut off, keep it near the centre of the photo. Each picture is then turned into a NumPy array so it can be treated numerically. Afterwards, the classifier detects faces in the photo and saves the results in a variable named faces.

            Gery_Image = Image.open(path).convert("L")  # convert to grayscale
            Crop_Image = Gery_Image.resize((550, 550), Image.LANCZOS)  # ANTIALIAS on older Pillow
            Final_Image = np.array(Crop_Image, "uint8")
            faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)

The Region of Interest (ROI) is the cropped portion of the image where the face was found, and it is what we use to train the face-recognition system. Every ROI face is appended to a list named x_train, with the matching Face_ID appended to y_ID. We then feed the recognizer with our training data, and the result is saved to disk.

            for (x, y, w, h) in faces:
                roi = Final_Image[y:y+h, x:x+w]
                x_train.append(roi)
                y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.save("face-trainner.yml")

You'll notice that the face-trainner.yml file is rewritten whenever you run this program, so if you make any modifications to the photographs in the face images folder, ensure you rerun this code. For debugging purposes, you can print out the Face_ID, the path, the person's name, and the NumPy arrays.

Face recognition program

Now that our trained data has been prepared, we can begin using it to identify people. We'll use a USB webcam or Pi camera to feed video into the face recognizer application, frame by frame. Once we've found the faces in those frames, we'll compare them against all of our previously trained Face IDs. Finally, we draw a box around the identified person's face with their name above it. The whole program is presented afterwards, with the explanation below.

Import the required modules as in the training program, including the classifier, because this program also needs to detect faces.

import cv2

import numpy as np

import os

from time import sleep

from PIL import Image

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

recognizer = cv2.face.LBPHFaceRecognizer_create()  # createLBPHFaceRecognizer() on old OpenCV 2.x

The people's names should be entered in the labels list in the same order as their Face IDs. In my case, they are "Esther" and "Unknown".

labels = ["Esther", "Unknown"]

We need the trainer file to detect faces, so we import it into our software.

recognizer.read("face-trainner.yml")  # load() on old OpenCV 2.x

The camera provides the video stream; it's possible to access a second camera by replacing 0 with 1.

cap = cv2.VideoCapture(0)

In the next step, we split the footage into individual frames, transform each frame to grayscale, and search for a face in it. When a face is found, we crop out the grayscale region of interest.

ret, img = cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)

for (x, y, w, h) in faces:
    roi_gray = gray[y:y+h, x:x+w]
    id_, conf = recognizer.predict(roi_gray)

The conf value tells us how confident the program is in its identification. We use the ID number to look up the person's name in the labels list, then draw a square around the face with the name written above it.

    if conf >= 80:
        font = cv2.FONT_HERSHEY_SIMPLEX
        name = labels[id_]
        cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
        cv2.rectangle(img, (x, y), (x+w, y+h), (0, 0, 255), 2)

Finally, we display the evaluated frame, and break out of the video loop when the q key is pressed.

cv2.imshow('Preview',img)

if cv2.waitKey(20) & 0xFF == ord('q'):
    break

While running this application, ensure the Raspberry Pi is linked to a display via HDMI. A window with your video stream will appear: there will be a box around each face identified in the feed, and if the software recognizes the face, it displays that person's name. As evidenced by the image below, the software was trained to identify my face, showing the recognition process in action.

The face recognition code

import cv2
import numpy as np
import os
from PIL import Image

labels = ["Esther", "Unknown"]

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()  # createLBPHFaceRecognizer() on old OpenCV 2.x
recognizer.read("face-trainner.yml")  # load() on old OpenCV 2.x

cap = cv2.VideoCapture(0)

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)  # detect faces

    for (x, y, w, h) in faces:
        roi_gray = gray[y:y+h, x:x+w]  # crop the face region
        id_, conf = recognizer.predict(roi_gray)

        if conf >= 80:
            font = cv2.FONT_HERSHEY_SIMPLEX
            name = labels[id_]
            cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
            cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)

    cv2.imshow('Preview', img)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

The face trainer code

import cv2
import numpy as np
import os
from PIL import Image

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()  # createLBPHFaceRecognizer() on old OpenCV 2.x

Face_ID = -1
pev_person_name = ""
y_ID = []
x_train = []

Face_Images = os.path.join(os.getcwd(), "Face_Images")
print(Face_Images)

for root, dirs, files in os.walk(Face_Images):
    for file in files:
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)
            print(path, person_name)

            if pev_person_name != person_name:
                Face_ID = Face_ID + 1  # new person, new ID
                pev_person_name = person_name

            Gery_Image = Image.open(path).convert("L")  # convert to grayscale
            Crop_Image = Gery_Image.resize((550, 550), Image.LANCZOS)  # ANTIALIAS on older Pillow
            Final_Image = np.array(Crop_Image, "uint8")
            faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)
            print(Face_ID, faces)

            for (x, y, w, h) in faces:
                roi = Final_Image[y:y+h, x:x+w]
                x_train.append(roi)
                y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.save("face-trainner.yml")

DC motor circuit

Since the "How to operate DC motor in Rpi 4" guide has covered the basics of controlling a DC motor, I won't provide much detail here; please read that topic if you haven't already. Check all the wiring before connecting the batteries to your circuit, as outlined in the image above. Everything must be in place before you connect your breadboard's power lines to the battery wires.

Testing

To drive the motor, open the terminal: we'll write the Python code with Nano, a command-line text editor. For those of you who aren't familiar with Nano, I'll show you some of its commands as we go.
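The filename matches the run command used later in this section:

nano motor.py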

This code will activate the motor for two seconds, so try it out.

import RPi.GPIO as GPIO

from time import sleep

GPIO.setmode(GPIO.BOARD)

Motor1A = 16

Motor1B = 18

Motor1E = 22

GPIO.setup(Motor1A,GPIO.OUT)

GPIO.setup(Motor1B,GPIO.OUT)

GPIO.setup(Motor1E,GPIO.OUT)

print("Turning motor on")

GPIO.output(Motor1A,GPIO.HIGH)

GPIO.output(Motor1B,GPIO.LOW)

GPIO.output(Motor1E,GPIO.HIGH)

sleep(2)

print("Stopping motor")

GPIO.output(Motor1E,GPIO.LOW)

GPIO.cleanup()

The first two lines of code tell Python which modules the program needs.

The first line imports the RPi.GPIO package. This module controls the RPi's GPIO pins and takes care of all the grunt work.

The second imports sleep, which pauses the script for a few seconds, leaving the motor running for a while.

The setmode method tells the library to use the RPi's physical board pin numbering; we then tell Python that pins 16, 18, and 22 correspond to the motor.

Pin A steers the L293D in one direction, and pin B drives it in the opposite direction. The Enable pin, referred to as E in the script, turns the motor on and off.

Finally, use GPIO.OUT to inform the RPi that all these are outputs.

The RPi is ready to turn the motor after the software is set up. After a 2-second pause, some pins will be turned on and subsequently turned off, as seen in the code.

Save and quit by hitting CTRL-X; a confirmation notice appears at the bottom. Tap Y and Return to acknowledge. You can now run the program in the terminal and watch the motor spin up.

sudo python motor.py

If the motor doesn't move, check the cabling or power supply. The debug process might be a pain, but it's an important phase in learning new things!

Now turn in the other direction.

I'll teach you how to reverse a motor's rotation to spin in the opposite direction.

There's no need to touch the wiring at this point; it's all Python. Create a new script called motorback.py by opening it in Nano:

nano motorback.py

Please type in the given program:

import RPi.GPIO as GPIO

from time import sleep

GPIO.setmode(GPIO.BOARD)

Motor1A = 16

Motor1B = 18

Motor1E = 22

GPIO.setup(Motor1A,GPIO.OUT)

GPIO.setup(Motor1B,GPIO.OUT)

GPIO.setup(Motor1E,GPIO.OUT)

print("Going forwards")

GPIO.output(Motor1A,GPIO.HIGH)

GPIO.output(Motor1B,GPIO.LOW)

GPIO.output(Motor1E,GPIO.HIGH)

sleep(2)

print("Going backwards")

GPIO.output(Motor1A,GPIO.LOW)

GPIO.output(Motor1B,GPIO.HIGH)

GPIO.output(Motor1E,GPIO.HIGH)

sleep(2)

print("Now stop")

GPIO.output(Motor1E,GPIO.LOW)

GPIO.cleanup()

Save by pressing CTRL-X, then Y, and finally the Enter key.

To run the motor in reverse, we've set Motor1A low in the script.

Programmers use the terms "high" and "low" to denote the state of being on or off, respectively.

Motor1E will be turned off to halt the motor.

Irrespective of what A and B are doing, the motor can be turned on or off using the Enable pin.

Take a peek at the truth table below to understand better what's going on.

When Enable is high, only two input states make the motor move: A high with B low, or A low with B high; never both high at the same time.
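As a reference, a minimal truth table for one channel of an L293D-style driver (standard behaviour for this chip family) is:

Enable (E) | Input A | Input B | Motor
high       | high    | low     | spins one way
high       | low     | high    | spins the other way
high       | low     | low     | stopped
high       | high    | high    | stopped
low        | any     | any     | off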

Putting it all together

At this point, we have designed our face detection system and the DC motor control circuit; now we will put the two systems to work together. When the user is verified, the DC motor should run to open the CD-ROM drive and close it after a few seconds.

In our recognizer code, we will copy in the code below to spin the motor in one direction ("open the door") when the user is verified. We will also increase the run time to 5 seconds to simulate the time the door stays open for the user to get through; this also allows the motor to spin long enough to open and close the CD-ROM tray completely. I would also recommend putting a stopper on the CD-ROM door so that it doesn't close all the way and get stuck.

if conf >= 80:
    font = cv2.FONT_HERSHEY_SIMPLEX
    name = labels[id_]  # get the name from the list using the ID number
    cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)

    # place our motor code here (requires "import RPi.GPIO as GPIO" and
    # "from time import sleep" at the top of the file)
    GPIO.setmode(GPIO.BOARD)
    Motor1A = 16
    Motor1B = 18
    Motor1E = 22

    GPIO.setup(Motor1A, GPIO.OUT)
    GPIO.setup(Motor1B, GPIO.OUT)
    GPIO.setup(Motor1E, GPIO.OUT)

    print("Opening")
    GPIO.output(Motor1A, GPIO.HIGH)
    GPIO.output(Motor1B, GPIO.LOW)
    GPIO.output(Motor1E, GPIO.HIGH)
    sleep(5)

    print("Closing")
    GPIO.output(Motor1A, GPIO.LOW)
    GPIO.output(Motor1B, GPIO.HIGH)
    GPIO.output(Motor1E, GPIO.HIGH)
    sleep(5)

    print("stop")
    GPIO.output(Motor1E, GPIO.LOW)
    GPIO.cleanup()

    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)

Output

The advantages of face recognition over alternative biometric verification methods for home security

An individual's biometric identity can be verified from various physical and behavioural characteristics, such as fingerprints, keystrokes, facial features, and voice. Face recognition seems to be the winner because of its precision, simplicity, and contactless detection.

Face-recognition technology is here to stay and will only get better over time. The tale has evolved, and your alternatives have grown thanks to smart tech.

What are the advantages of employing Facial Recognition when it comes to smart home security?

Using an RPi as a surveillance system means you can take it with you and use it wherever you need it.

  1. High accuracy rate

For the most part, the face-recognition software employed in security systems can reliably assess whether the individual attempting entry matches your record of those authorized to enter. That said, certain programs are more precise than others when identifying faces from diverse angles or across diverse populations.

Concerned users may be relieved to learn that some programs have the option of setting custom confidence criteria, which can significantly minimize the likelihood of the system giving false positives. Alternatively, 2-factor authentication can be used to secure your account.

  2. Automation

When your smart security system discovers a match between a user and the list of persons you've given access to, it will instantly let them in. Answering the doorbell or allowing entry isn't necessary.

  3. Smart integration

Face recognition solutions can be readily integrated into existing systems using an API.

Cons of Facial Recognition

  1. Privacy of individuals and society as a whole is more at risk

A major drawback of face recognition technology is that it puts people's privacy at risk. Having one's face collected and stored in an unidentified database does not sit well with the average person.

Confidentiality is so important that several towns have prohibited law enforcement from using real-time face recognition monitoring. Rather than using live face recognition software, authorities can use records from privately-held security cameras in certain situations.

  2. Can infringe on one's liberties

Having your face captured and stored by face recognition software might make you feel monitored and assessed for your actions. It is a form of criminal profiling since the police can use face recognition to put everybody in their databases via a virtual crime lineup.

  3. It's possible to deceive the technology

Face recognition can be affected by various elements, including camera angle, illumination, and other aspects of a picture or video. It can also be fooled by people who wear disguises or alter their appearance.

Conclusion

This article walked us through creating a complete smart security system with a facial recognition program from the ground up. Our model can now recognize faces with the help of OpenCV image manipulation techniques. There are several ways to further your knowledge of supervised machine learning with Raspberry Pi 4, such as adding an alarm that rings whenever a face is not recognized, or building a database of known faces to act like a CCTV surveillance system. We'll design a security system with a motion detector and an alarm in the next session.

Taking a screenshot in Raspberry pi 4

Welcome to the next tutorial of our Raspberry Pi programming course. Our previous tutorial taught us how to print from a Raspberry Pi, and we also discussed some libraries for creating a print server on it. In this lesson, we will learn how to take screenshots on Raspberry Pi using a few different methods, including how to take snapshots remotely over SSH.

Where To Buy?
No.ComponentsDistributorLink To Buy
1BreadboardAmazonBuy Now
2Jumper WiresAmazonBuy Now
3PIR SensorAmazonBuy Now
4Raspberry Pi 4AmazonBuy Now

Why should you read this article?

This article will assist you when working with projects that require snapshots for documenting your work, sharing, or generating tutorials.

So, let us begin.

Screenshots are among the most essential items on the internet today. If you have seen screenshots in tutorial videos or used them in regular communication, you're already aware of how effective they can be. They are quickly becoming a key internet currency for more efficient communication, and knowing how and when to utilize the right ones will help you stand out from the crowd.

Requirements

  • Raspberry Pi
  • MicroSD Card
  • Power Supply
  • Ethernet Cable

Taking Screenshots Using Scrot

In this case, we'll employ Scrot, a software program, to help with the PrintScreen procedure. This fantastic software program allows you to take screenshots using commands, shortcut keys, and enabled shortcuts.

Features of Scrot

  • We can easily snap screen captures with scrot, with no extra steps.
  • We can improve the image quality of screen photos using the -q option with a quality level from 1 to 100; the default level is 75.
  • It is straightforward to set up and use.
  • We can capture a particular window or even a rectangular portion of the display.
  • It can store all screen captures in a specific directory, or retrieve them onto a remote computer or networked server.
  • It can automatically monitor multiple desktop PCs while the administrator is away and help prevent unauthorized behaviour.

Scrot comes installed by default in the latest release of the Raspbian operating system; if you already have it, you may skip the installation step. If you're not sure whether it's installed, run the command below in a Pi terminal window.
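One simple check is to query the installed version:

scrot -v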

If your Pi returns a "command not found" error, you must install it. Use the following command line to accomplish this:
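The standard apt package is named scrot:

sudo apt install scrot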

After installing it, you can test its functionality by running the scrot command again. If no errors occur, you are set to go.

Capturing a snapshot on a Raspberry Pi isn't difficult, especially if Scrot is installed. Here are a handful of options for completing the work.

  1. Using a Keyboard Hotkey

If Scrot is installed on your Pi successfully, your default hotkey for taking screenshots is the Print Screen key.

You can try this quickly by pressing the Print Screen button and checking the /home/pi directory. If you find the screenshots taken, your keyboard hotkey (keyboard shortcut) is working correctly.

In addition, screenshots and print-screen pictures are stored with the suffix _scrot attached to the end of their filenames. For instance:
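A full-screen capture taken on a 1920x1080 display might be saved under a timestamped name such as the following (scrot's default naming pattern):

2022-05-10-143000_1920x1080_scrot.png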

  2. Using Terminal Window

This is easy as pie! Execute the following command on your Pi to snap a screenshot:
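With no arguments, scrot saves a full-screen capture to the current directory:

scrot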

That is all. It is that easy.

Taking a Delayed Screenshot

The preceding approaches won't help if you need a snapshot without the menu open. For a clean, menu-free snapshot, you need a few seconds' head start: trigger the capture, close the menu, and then let Scrot take the picture.

To capture in this manner, use the following command to postpone the capture by, say, five seconds.
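scrot's -d option takes the delay in seconds:

scrot -d 5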

Other Scrot settings are as follows:

  • scrot -b : also grab the window's border.
  • scrot -e : issue a command after taking a snapshot.
  • scrot -h : bring up the help panel.
  • scrot -t : generate a snapshot thumbnail.
  • scrot -u : take a screenshot of the currently focused window.
  • scrot -v : display Scrot's version.

Changing Screenshot Saving Location

You might occasionally need to give the images a unique name and directory. Add the desired directory, filename, and extension immediately after scrot.

For instance, if you prefer to assign the title raspberryexpert to it and store it in the downloads directory, do the following command:
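Assuming the default pi user's home directory, the command would be:

scrot /home/pi/Downloads/raspberryexpert.png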

Remember that the filename should always end with the .png extension.

Mapping the Screenshot Command to a Hotkey

If the capture command isn't already mapped to a hotkey, you'll have to map it yourself by altering your Pi's config file; it'll come in handy.

You must define the hotkey inside the lxde-pi-rc.xml script. To proceed, open the script as shown below.
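On Raspberry Pi OS with the default LXDE desktop, the file usually lives at the path below (if it isn't there yet, copy it from /etc/xdg/openbox/ first):

nano ~/.config/openbox/lxde-pi-rc.xml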

We'll briefly demonstrate how to add the snapshot hotkey to the XML script. Locate the <keyboard> section and put the following lines directly below it.
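A typical openbox keybind for the Print Screen key looks like this (a minimal sketch in openbox's XML binding format):

<keybind key="Print">
    <action name="Execute">
        <command>scrot</command>
    </action>
</keybind>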

Typing the above lines maps the scrot command to the Print Screen hotkey on the keyboard.

Once you've added those lines, save the script by hitting CTRL-X, then Y, and then the ENTER key.

Enter the command below to identify the new changes made.
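Openbox can reload its configuration without a full reboot:

openbox --reconfigure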

How to Take a Screenshot Remotely over SSH

You may discover that taking snapshots directly on the Raspberry Pi is impractical in some situations; in that case, you can take the image over SSH.

As usual when dealing with SSH, you must first enable it. You can find more information about this in our previous tutorials.

Log in with the command below after you have enabled SSH:
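Substituting your Pi's username and IP address for the placeholder:

ssh pi@<IP address>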

Now use the command below to snap an image.
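Once logged in, install Scrot if necessary and take the capture (a two-line sketch):

sudo apt install scrot
scrot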

If you've previously installed Scrot, skip the install line.

Using the command below, you can snap as many snapshots as you like using varying names and afterward transferring them over to your desktop:
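For example (shot1.png is an illustrative name; run the scp line from your own computer, not the Pi):

scrot shot1.png
scp pi@<IP address>:/home/pi/shot1.png ~/Desktop/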

Remember to change the command to reflect the correct username and IP address.

Saving the Screenshot Directly on your Computer

You can snap a screenshot and save it immediately to your Linux PC. However, if you regularly take snapshots, entering the password each time you access the RPi via SSH becomes tedious, so consider configuring passwordless SSH on the Raspberry Pi with public/private keys.

To proceed, use the following command to install maim on raspberry pi.
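The package is available from the standard repositories (run this on the Pi):

sudo apt install maim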

Return to your computer and use the command below to take a snapshot.
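A sketch of the idea, assuming the Pi's desktop session runs on display :0 (pi-screenshot.png is an illustrative name). Because maim writes the PNG to standard output when no filename is given, a shell redirect on your computer saves it locally:

ssh pi@<IP address> "DISPLAY=:0 maim" > pi-screenshot.png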

We're utilizing maim rather than the other approaches since it's a more elegant method: it sends the image to stdout, allowing us to save it on our laptop via a simple shell redirect.

Taking Screenshots Using Raspi2png

Raspi2png is a screenshot software that you may use to take a screenshot. Use the code below for downloading and installing the scripts.
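One common way to get it is to build from source, assuming the widely used AndrewFromMelbourne/raspi2png repository on GitHub (libpng is needed to compile; the package was named libpng12-dev on older releases):

sudo apt install libpng-dev
git clone https://github.com/AndrewFromMelbourne/raspi2png.git
cd raspi2png
make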

After that, please place it in the /usr/local/bin directory.
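From inside the build directory, a plain copy does the job:

sudo cp raspi2png /usr/local/bin/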

Enter the following command to capture a screenshot.
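raspi2png's -p option names the output PNG; keeping the document's placeholder, a capture looks like this:

raspi2png -p /home/pi/<directory_name>/snapshot.png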

Ensure you use your actual folder name rather than the <directory_name> placeholder.

Taking Screenshots Using GNOME Tool

Because we are using a GUI, this solution is relatively simple and easy to implement.

First, use the following command to download the GNOME Snapshot tool.
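The tool is packaged as gnome-screenshot:

sudo apt install gnome-screenshot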

After it has been installed, go to the Raspberry Pi menu, select Accessories, and finally Screenshot.

This opens the GNOME Screenshot window, where you will find three different capture options, as seen below.

Choose the appropriate capture method and select Take Screenshot. The first option records the entire screen and the second snips the active window; if you pick the third option, you use the mouse to select the region you wish to snip, so you won't need a picture editor to crop the snapshot afterwards.

GNOME gives you two alternatives once you capture a screen: saving the snapshot or copying it to the clipboard. Select whichever suits your requirements.

What are the Different Types of Screenshots to know?

  1. Screenshot

It all begins with a simple screenshot. You don't need any additional programs or software to capture one: this feature is built into almost all Raspberry Pi versions, as well as Windows and Mac PCs and cellphones.

  2. Screen capture

It is the process of capturing all or a part of the active screen and converting it to a picture or video.

While a screenshot and a screen capture may appear to be the same thing, they are not. A screenshot is simply a static picture, whereas screen capture is the process of collecting anything on the screen, such as photographs or films.

Assume you wish to save a whole spreadsheet. It's becoming a little more complicated now.

Generally, you would be able to record whatever is on your window. Still, in case you need to snip anything beyond that, such as broad, horizontal spreadsheets or indefinitely lengthy website pages, you'll need to get a screen capture application designed for that purpose. Snagit includes Scrolling snapshot and Panorama Capture capabilities to snap all of the material you need in a single picture rather than stitching together many images.

  3. Animated GIF

This is a GIF file containing a moving image, exhibited as an animated succession of picture frames.

While GIFs aren't limited to screen material, they can be a proper (and underappreciated) way to express what's on your display.

Instead of capturing multiple pictures to demonstrate a process to someone, you can create a single animated GIF of what is going on on your computer. These animations have small file sizes and play automatically, making them quick and simple to publish on websites and in documents.

  4. Screencast

This means making a video out of screen material, whether to teach how a program works or to sell a product by displaying its functionality.

If you want to go further than a simple snapshot or even gif Animation, they are a good option. If you have ever looked for help with a software program, you have come across a screencast.

They display your screen and generally contain some commentary to make you understand what you are viewing.

Screencasts can range from polished movies used among professional educators to fast recordings showing a coworker how to file a ticket to Information technology. The concept is all the same.

Three reasons Why Screenshot tool is vital at work

  1. Communicate Effectively

Using screenshots to communicate removes the guesswork from graphical presentations and saves time.

The snapshot tool is ideal for capturing screenshots of graphical designs, new websites, or social media posts pending approval.

  2. Demonstrate to Save Time

This is a must-have tool for anybody working in information technology or human resources, or for supervisors training new workers. Choose screenshots with brief instructions over lengthy emails; a snapshot may save you a lot of time and improve team communication.

Furthermore, by preserving the snapshot in Screencast-O-Matic, your staff will be able to retrieve your directions at any time via a shareable link.

To avoid confusion, utilize screen captures to demonstrate. IT supervisors, for instance, can use images to teach colleagues where to obtain computer upgrades: take a snapshot of the system icon on the desktop, then use the screen-capture editor to convert it into a graphical how-to instruction.

Any image editing tool may be used to improve pictures. You may use the highlighting tool to draw attention to the location of the icons.

  3. Problem Solve and Share

Everybody has encountered computer difficulties. However, if you can't articulate exactly what happened, diagnosing the problem afterwards will be challenging. It's simpler to capture a snapshot of the issue.

This is useful when talking with customer service representatives. Rather than discussing the issue, email them an image to help them see it. Publish your image immediately to Screencast and obtain a URL to share it. Sharing photos might help you get help quickly.

It can also help customer support personnel and their interactions with users. They may assist consumers more quickly by sending screenshots or photographs to assist them in resolving difficulties.

Snapshots are a simple method for social media administrators to categorize, emphasize, or record a specific moment. Pictures are an easy way to keep track of shifting stats or troublesome followers. It can be challenging to track down subscribers who breach social network regulations, since comments and users are frequently lost in ever-expanding discussions.

Take a snapshot of the problem to document it. Save this image as a file or store it in the screenshots folder of Screencast. Even if people remove their remarks, you will have proof of inappropriate activity.

Conclusion

This tutorial taught us how to take screenshots on a Raspberry Pi using several different methods. We also went through how to take snapshots remotely over SSH and discussed some of the benefits of the screenshot tool. In the following tutorial, we will learn how to use a Raspberry Pi as a web server.

How to Create a Time-Lapse Animations with Raspberry Pi 4

Welcome to the next tutorial of our Raspberry Pi programming course. Our previous tutorial looked at how to interface the DS18B20 sensor with Raspberry Pi 4. This tutorial will teach us how to create a time-lapse video from still images and understand how phototimer and FFmpeg work.

Where To Buy?
No.ComponentsDistributorLink To Buy
1Raspberry Pi 4AmazonBuy Now

What is time-lapse?

"Time-lapse" refers to photographing something over a lengthy period and then combining the still photos into a video. Plant development can be tracked with time-lapse movies, as can the changing of the seasons. They can also be used as low-resolution security cameras.

Components

  • Raspberry Pi 4B
  • Pi camera

Connect to the Raspberry Pi

Cameras that can be used with the Raspberry Pi are somewhat limited; most compatible webcams require a powered USB hub. For this post, we'll be using a camera made specifically for the Raspberry Pi. The camera module plugs into a dedicated port on the board. Here's how to install it:

  • Turn off the Pi

Shutting down the Pi is recommended before connecting the camera. Consider installing a power button on your Pi 4 so you can always shut the device down safely; the shutdown command is shown after these steps.

  • Locate the camera port and lift the tabs to install the camera's cord.
  • Secure the tabs on the flex cable after inserting it into the flex cable slot.

Slide the cable into the port, using the image as a guide, then press the tabs back down to secure the cable to the board.
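To shut the Pi down safely from the terminal before connecting the camera, the standard command is:

sudo shutdown -h now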

Enable the camera in Raspberry Pi OS

If you're using a monitor, click the main menu button, select Preferences, and then click Raspberry Pi Configuration. Enable the camera on the Interfaces tab.

Headless users must run the commands below instead. Make sure the camera is turned on under Interfacing Options, then reboot the Raspberry Pi.
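sudo raspi-config    # Interfacing Options -> Camera -> Enable

sudo reboot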

How to record time-lapse images

Individual stills are used to create time-lapse videos. We'll use raspistill to acquire our time-lapse images; it ships with Raspberry Pi OS, so you don't need to install anything extra. There are two ways to record a time-lapse:

  • Using the raspistill tool on the RPi alone
  • Using phototimer
  1. Using the Raspistill tool only

Raspistill is a Linux-based tool that aims to make it easy to use sophisticated camera systems. With the Raspberry Pi, we can control the camera system using open-source programs running on the ARM processors, bypassing almost all of the Broadcom GPU's proprietary code, to which users have no access.

It provides a C++ API to applications and handles the low-level work of configuring the camera and letting the program obtain image frames. Image buffers are held in memory and can be passed either to video encoders (such as H.264) or to still-image encoders such as JPEG or PNG. The library itself, however, does not perform any image encoding or display operations directly.

How can we use this tool?

An illustration of taking time-lapse photographs is shown below.

In this example, the capture runs for a total of 10 seconds, and the Raspberry Pi waits 2000 milliseconds between images; in other words, it takes a picture every two seconds for 10 seconds.
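Based on those values, the raspistill invocation presumably looks something like this, where -t sets the total run time in milliseconds and -tl the interval between shots (the "Pic" output name matches the next paragraph):

raspistill -t 10000 -tl 2000 -o Pic%04d.jpg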

The photos are stored as .jpg files. This example uses the name "Pic" plus a number that increases with each frame to identify each image. The %04d placeholder pads the frame number with leading zeros to four digits, so the first two photographs are named Pic0000.jpg and Pic0001.jpg, and so on. Change this pattern according to the requirements of your project.

How can we compile the images into a video?

Your time-lapse video needs to be put together once all of your photographs have been taken. FFmpeg is the tool we'll be utilizing to generate our timelapse video. The command to download the package is as follows:
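sudo apt-get install -y ffmpeg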

Allow the installation to complete before moving on. Using this command, you can create the finished video:
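A minimal sketch of that command, assuming a playback rate of 10 frames per second, is:

ffmpeg -r 10 -i Pic%04d.jpg video-file.mp4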

Pic%04d.jpg corresponds to the image filename pattern you specified in the preceding section; if you changed that name, change it here too. The -r option specifies how many frames per second the finished video plays at. Replace video-file with a name of your own, but keep the .mp4 file extension.

What is FFmpeg?

FFmpeg is a very fast audio and video converter that can also capture from a live source. It can resize and resample video on the fly using high-quality polyphase filters.

FFmpeg reads from an arbitrary number of inputs (regular files, pipes, network streams, capture devices, etc.), each specified with the -i option, and writes to an arbitrary number of outputs, each specified by a plain output URL. Anything on the command line that cannot be interpreted as an option is treated as an output URL.

A single input or output URL can contain video, audio, subtitle, attachment, and data streams, though the container format may limit the number or types of streams. The -map option maps input streams to output streams, either automatically or manually.

In options, input files are referred to by their zero-based indices, and streams within each file are indexed the same way. For example, the specifier 2:3 refers to the fourth stream in the third input file. See also the chapter on stream specifiers in the FFmpeg documentation.
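As an illustration of -map and stream indices (the file names here are hypothetical), the following command takes the video stream from the first input and the audio stream from the second:

ffmpeg -i clip0.mp4 -i clip1.mp4 -map 0:v -map 1:a output.mp4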

  2. Using phototimer

A Python tool named phototimer will be used to drive the raspistill command-line tool that comes pre-installed on the Raspbian OS.

This tool adds valuable features such as:

  • Setting a capture time frame for your day.
  • Sorting captured photos into date-based folders (for example, year/month/day).

Let us install docker to use this tool

Docker lets you develop, test, and publish apps quickly. We'll use it here because running phototimer requires Python, which Raspbian Lite does not include by default.

  • Automatic restarts of the time-lapse

When you move the rig to a new location, you don't need to SSH in; the time-lapse restarts automatically.

  • Easy setup

Instead of executing a git clone on each device, you just download the Docker image, activate the camera interface, and start the container.

  • Easy access to logs

Docker keeps the container's log output even if you disconnect, and lets you reattach at any time.

Install Docker on the Raspberry Pi

We can easily set up docker with the following set of commands.
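One common approach is Docker's convenience script (inspect the downloaded script before running it if you have any doubts):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker pi    # lets the pi user run docker without sudo; log out and back in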

Clone the phototimer GitHub repository

Download the repository as a zip file, as shown below,

Or use the terminal by copying the phototimer code as shown here:
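Assuming the project lives at alexellis/phototimer on GitHub, the clone looks like this:

git clone https://github.com/alexellis/phototimer
cd phototimer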

If git isn't already installed, use the following command to add it:
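sudo apt-get install -y git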

Edit the config file

By modifying the config.py file, you can change the appearance and duration of the time-lapse. The main settings are described below, followed by a sketch of the file.

  • Set the hours

The default time-lapse window starts at 4 am and ends at 8 pm; you will likely want to alter these hours to better fit your requirements.

  • Set the quality level

A quality setting of 35/100 produces a substantially smaller file than one of 60-80/100. Modify this value to trade image quality against the storage space you'll need.

  • Flip the image

Depending on how your camera is positioned, the image may need to be flipped horizontally or vertically; set each option to True or False accordingly. Setting both to True is equivalent to rotating the image 180 degrees.

  • Height and Width

To achieve a specific aspect ratio, you can change the height and width of the image here. I use the default setting.
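As a rough sketch, the relevant settings in config.py might look like the following; the variable names here are illustrative, so check the actual file in the repository:

# Illustrative config.py values -- the real names may differ
start_hour = 4       # begin capturing at 4 am
end_hour = 20        # stop capturing at 8 pm
quality = 35         # JPEG quality (0-100); lower values give smaller files
hflip = False        # flip the image horizontally
vflip = False        # flip the image vertically (both True = a 180-degree rotation)
width = 1296         # image width in pixels
height = 972         # image height in pixels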

The Docker container must be re-built and restarted every time you change your setup.

Now let's build the Docker container

When Docker runs a build, it constructs an image from the code in the current directory on top of a base image, such as Debian.

The crucial point is that the image we create contains everything our application needs.

For those just getting started, these are some helpful keyboard shortcuts and CLI commands:
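For example (the phototimer image and container names below are just examples):

docker build -t phototimer .    # build an image from the current directory
docker ps                       # list running containers
docker logs -f phototimer       # follow a container's log output
# Ctrl+P then Ctrl+Q detaches from an attached container without stopping it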

Start the timelapse

Let us configure the time zone for Docker

The Docker container defaults to the UTC time zone. If you're in a different time zone, you'll want to adjust it, including daylight-saving time, accordingly.

Docker can be configured to run in your current local time by adding the following additional option to the command:
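Here is a sketch of the full run command with the time-zone option; the --privileged flag is commonly needed for camera access from a container, but check the repository's README for the exact invocation:

docker run -d --name phototimer --privileged -e TZ="America/Chicago" phototimer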

As a result, the following is what a time-lapse in Texas, United States, might look like:

Below are some time-lapse images taken from the raspberry pi camera.

How can we save the file to our laptop?

To save your photos to your laptop, follow these steps once you've captured a few pictures:
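With SSH enabled on the Pi (see below), a single scp command run from the laptop does the job; the hostname and paths here are examples:

scp -r pi@raspberrypi.local:~/photos ~/Desktop/photos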

On Windows, the ssh and scp utilities are included with Git for Windows.

Connectivity options

If you plan on doing what I did, you'll need a way to connect to the rig when you're not near your wi-fi router. Here are a few ideas to get you started:

Use a USB OTG cable

Pi models that support USB gadget mode allow networking over USB; it is straightforward to set up and does not interfere with the wi-fi network. You will, however, need to bring a USB cable to each new site to change the wireless SSID/password on the Pi directly.

Drop a wpa_supplicant.conf file into /boot

Provided you have an SD card adapter handy, plugging the SD card into your computer while on the road lets you update the wi-fi configuration easily: drop a wpa_supplicant.conf file into /boot, and it will replace the existing configuration on the next reboot.
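A minimal wpa_supplicant.conf looks like this; substitute your own country code, SSID, and password:

country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourPassword"
}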

Setup the RPi as a wi-fi hotspot

If you're comfortable with Linux, you can use hostapd to create your hotspot on the RPi. To connect to your Raspberry Pi, you'll need a computer with an Ethernet cable and a web browser.

If you won't be using the rig outside your own location, simply configure your wi-fi SSID and password on the Pi and use that connection to start and stop the time-lapse capture and to download files.

How to edit the video

After importing the files, drag them onto the timeline and make sure they are in the correct order. Set the crop factor to "fill", and set the display time for each frame to 0.1 seconds instead of 4.0 seconds.

  1. Transfer files to the Raspberry Pi using SCP

It is often necessary to transfer files from a Linux laptop to a Raspberry Pi for testing purposes.

Occasionally, you'll also need to transfer files or folders from your Raspberry Pi back to your computer.

With SCP, there is no need to transfer files via email, a pen drive, or any other time-consuming method.

As the name suggests, SCP stands for secure copy. This command-line application lets you securely transfer files and folders between two machines, for example between your local computer and a remote Raspberry Pi.

You may see SCP info by using the command below:
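man scp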

Using SCP is one of the quickest ways to transfer files to the RPi. There is a learning curve for novice users, but you'll be glad you learned it once you get the hang of it.

  2. Enable SSH

You must activate ssh on your Raspberry Pi to use SCP.
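On Raspberry Pi OS, SSH can be enabled either interactively or from the command line:

sudo raspi-config                # Interfacing Options -> SSH -> Enable
sudo systemctl enable --now ssh  # or enable and start the service directly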

Converting to GIF

A free program like Giphy can convert the video to a GIF, although this will reduce the number of frames.
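FFmpeg, installed earlier, can also do the conversion locally; here is a minimal sketch, where 10 fps and a 480-pixel width are example values:

ffmpeg -i video-file.mp4 -vf "fps=10,scale=480:-1" timelapse.gif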

Conclusion

In this lesson, we learned how to use the Raspberry Pi to create a time-lapse animation. We looked into the Pi camera's raspistill interface and used FFmpeg and phototimer to build the time-lapse. We also learned how to connect the Raspberry Pi to a PC over SSH and transfer files between the two machines. The following tutorial will teach us how to design and code a GPIO soundboard using Raspberry Pi 4.
