Our next step in the Raspberry Pi training program is to get ZeroTier up and running on a Raspberry Pi 4. The previous piece covered how to use a Raspberry Pi to measure internet speed and store the results in Grafana or Google Drive. In this project, you will discover how to install ZeroTier on a Raspberry Pi and get it working. We will also learn how to set up a firewall to secure our network.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|---|---|---|---|
| 1 | Raspberry Pi 4 | Amazon | Buy Now |
Raspberry pi 4
Power supply
Ethernet or wifi
ZeroTier is software that provides a streamlined web-based interface for building virtual networks that connect various devices. Somewhat akin to configuring a virtual private network on a Raspberry Pi, these networks exist only in cyberspace. The process of provisioning, however, is much easier, especially when dealing with several devices.
ZeroTier can be used on various platforms, from computers to mobile phones. Its cross-platform compatibility with Linux, Windows, and macOS means you can set up a virtual connection without worrying about whether your hardware will be able to connect to it.
The ZeroTier business model is "freemium." Using the free plan, you can connect up to 50 approved devices to the virtual network.
You need to create an account on the ZeroTier website before you can use the program on your Raspberry Pi. This is because virtual network administration is performed through their website.
You may manage your entire virtual network from one central web-based console, including assigning permanent IP addresses to individual devices.
Registration on the ZeroTier Central website is required before a network ID can be generated. This web-based interface is where you access your virtual networks. Go to ZeroTier Central in whichever browser you like. Once on the site, look for the "Register" button to start the account creation process.
The following window will appear once you've created an account and logged into the web interface. Hit the "Create A Network" button in the screen's center to get started.
We can move on now that you've joined ZeroTier and have your network ID. In this part, you'll learn how to download and install ZeroTier on your Raspberry Pi.
First, let's check that the software on your Raspberry Pi is up to date.
To bring everything up to date, we need to run the following two commands: one to refresh the package list and one to upgrade all installed packages.
sudo apt update
sudo apt upgrade
Before installing ZeroTier from their repository, we need to add their GPG key to our Raspberry Pi. With this key, we can verify that the packages we're installing come directly from ZeroTier and don't include any malicious code. To obtain the GPG key from their repo, run the following command, which stores the "dearmored" key in the "/usr/share/keyrings/" folder.
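A command along these lines should work; the exact key URL is an assumption, so check ZeroTier's current install documentation (they also offer a one-line installer at https://install.zerotier.com) if it has moved.
curl -s 'https://download.zerotier.com/contact%40zerotier.com.gpg' | gpg --dearmor | sudo tee /usr/share/keyrings/zerotier.gpg >/dev/null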
Now that the GPG key has been added, a source list entry containing the ZeroTier repository must be created. First, we create a shell variable named "RELEASE" and assign it the operating system's release codename. We will use it to build the proper URL for the ZeroTier repo in the subsequent steps, by executing the following command.
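A straightforward way to set the variable is with lsb_release:
RELEASE=$(lsb_release -cs)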
Once we have the shell variable configured, we can use it to construct the relevant ZeroTier repository entry for the operating system. We then save this string in the "/etc/apt/sources.list.d/" folder under the name "zerotier.list."
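Assuming the key was saved to /usr/share/keyrings/zerotier.gpg as above, an entry along these lines should work (the repository URL is an assumption based on ZeroTier's Debian repo layout):
echo "deb [signed-by=/usr/share/keyrings/zerotier.gpg] http://download.zerotier.com/debian/$RELEASE $RELEASE main" | sudo tee /etc/apt/sources.list.d/zerotier.list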
The next time you refresh the Raspberry Pi's package list, it will pull ZeroTier directly from this location.
Since we have modified the Pi's package sources, we must update the package list. You can do that with the following command.
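sudo apt update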
After updating, we can use the following command to install the ZeroTier package on our RPi.
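The package is named zerotier-one in ZeroTier's repository:
sudo apt install zerotier-one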
ZeroTier can be set up to automatically launch on system startup as part of the setup procedure.
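If the service was not enabled automatically during installation, enabling it with systemd should do the trick (assuming the service is named zerotier-one):
sudo systemctl enable zerotier-one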
Having finished the ZeroTier installation on our RPi, we can now join the network we created in the introduction. First, make sure you have the network ID handy.
To connect the RPi to the network, we must use the ZeroTier command line. You can use the following command to accomplish this. Before running it, swap out "[NETWORKID]" for the ID you gathered previously in this tutorial.
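The join command takes this form:
sudo zerotier-cli join [NETWORKID]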
After this message, your RPi should have joined the ZeroTier network.
The "Members" section is located toward the bottom of the management console for your ZeroTier network.
You'll need to tick the "Auth" box after identifying the machine you added. As a result, your RPi can communicate with other devices on the same network.
Each machine on your ZeroTier network can be identified using the information in the "Address" column. Running "sudo zerotier-cli status" on the RPi will display this data.
The Name/Description field can be used to assign a memorable label to this innovative gadget for future reference.
Lastly, take a peek at the "Managed IPs" section.
If an IP address has been assigned to the device, it will appear in this column. These IP addresses allow you to reach that machine. You can also use this column to assign a specific IP address to a device. If you're waiting for an IP address for a newly authorized device, be patient; it can take a few minutes.
Once your RPi successfully connects to the ZeroTier network, you should see something similar to what is shown below. The last value is the Pi's IP address within the virtual network.
Connecting to other gadgets on the VPN connection is now possible. Having the device's Internet protocol is all that's required. The ZeroTier management console is the quickest way to learn which IP addresses are assigned to particular gadgets.
Here you can find detailed instructions for setting up your RPi with the Syncthing program. For the program to be installed, we must first add the program's PGP keys and the package repo as possible sources.
sudo apt update
sudo apt full-upgrade
Following this, check that the apt-transport-https package is installed. This package lets the package manager access sources served over HTTPS, which isn't possible by default on some systems. It is included by default in most modern operating systems, but it may be missing from lightweight distributions like Raspberry Pi OS Lite. Executing the line below will install the necessary package.
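sudo apt install apt-transport-https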
Finally, the Syncthing release key can be added to our keyrings folder. The purpose of this key is to verify the authenticity and integrity of the packages we install before trusting them. To obtain the key, execute the command that follows on the RPi.
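A command along these lines should fetch the key; the download URL and keyring path are assumptions based on Syncthing's published instructions, so double-check them against the official docs:
sudo curl -L -o /usr/share/keyrings/syncthing-archive-keyring.gpg https://syncthing.net/release-key.gpg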
With the key added, the repository itself can be added. This project will use the stable release of Syncthing. Use the following command to add the repo to the list of sources.
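Assuming the keyring path used above, the entry looks something like this (based on Syncthing's documented apt repository):
echo "deb [signed-by=/usr/share/keyrings/syncthing-archive-keyring.gpg] https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list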
We have to refresh the package list before installing Syncthing from the repo, so that the package manager picks up the new source. To update your RPi, type the following command into your device's terminal.
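sudo apt update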
Let's finish setting up our RPi by installing the Syncthing app. Now that the package repository has been added, the program can be installed with a single command.
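The package is simply named syncthing:
sudo apt install syncthing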
By default, the Syncthing web interface is only accessible from the Pi itself. That would be very frustrating for anyone running a Raspberry Pi without a monitor or keyboard, but we can change the setup to allow external access.
The first order of business is to discover the RPi's actual local network address. Before proceeding, please ensure that your Rpi has been assigned a permanent IP address. This command lets you find your Pi's local IP address.
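hostname -I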
To move on, Syncthing must be run once to create its initial configuration files. In this tutorial, Syncthing will be run as the "pi" user.
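Running the binary with no arguments is enough for this first launch:
syncthing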
Press CTRL + C to exit the program after the first launch.
The necessary configuration files for Syncthing are generated after the first run. Syncthing must be launched as the pi user for this configuration file to take effect. Using the nano editor, start editing the configuration file with the line below.
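With the default setup, the configuration usually lives under the pi user's home directory; the exact path below is an assumption (newer Syncthing releases may keep it under ~/.local/state/syncthing/ instead):
nano ~/.config/syncthing/config.xml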
Locate the following line in this file; the search shortcut CTRL + W will help you find it quickly.
127.0.0.1:8384
This line needs to have our Pi's local IP address substituted for the default local address (127.0.0.1). For instance, with our Pi's IP address, the line would become something like this.
192.168.0.193:8384
By using the local IP address, we are limiting access to devices on the same local network. Alternatively, you can use the address "0.0.0.0" to grant access from every IP. Once the IP address has been changed, save the file.
One final step is necessary now that the Syncthing UI can be accessed from devices other than the RPi: creating and starting a systemd service for the program. The service will let Syncthing launch automatically at system boot and be stopped and started easily.
Once again, we'll use nano to create the service's configuration file. The contents we will add are based on Syncthing's official GitHub repository. To start adding content to the file in "/lib/systemd/system", run the following command.
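Assuming we name the unit file syncthing.service, the command looks like this:
sudo nano /lib/systemd/system/syncthing.service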
Copy the lines below and paste them into this file.
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization
Documentation=man:syncthing(1)
After=network.target
[Service]
User=pi
ExecStart=/usr/bin/syncthing -no-browser -no-restart -logflags=0
Restart=on-failure
RestartSec=5
SuccessExitStatus=3 4
RestartForceExitStatus=3 4
# Hardening
ProtectSystem=full
PrivateTmp=true
SystemCallArchitectures=native
MemoryDenyWriteExecute=true
NoNewPrivileges=true
[Install]
WantedBy=multi-user.target
These lines tell the RPi's operating system how to manage Syncthing. When you're done adding them, save the file. We can now set the service to launch automatically at system startup. Enter the following command and hit enter.
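Assuming the unit file was saved as syncthing.service:
sudo systemctl enable syncthing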
Let's start the service so we can use the Syncthing web UI. Once again, the systemctl tool is used to start the service.
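sudo systemctl start syncthing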
We should check that the Syncthing service on the RPi has started. Using the command below, we can make sure of that.
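sudo systemctl status syncthing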
The notification should read as follows if the Service was successfully started and is now active.
If everything went smoothly, the Syncthing program should now be running on the RPi. With the software installed, we can move on to configuring the program and synchronizing our data. We'll break this up into chunks for easy reading. The web-based user interface makes setting up and linking devices a breeze.
To begin, you'll need to open the web interface in your preferred browser. The IP address of the RPi is required to reach it; using that address, navigate to the following location.
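Substituting the example IP address used earlier in this guide, the URL looks like this:
http://192.168.0.193:8384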
The Syncthing web interface listens on port 8384, so don't remove the port from the end of the URL.
After creating login details, you will be prompted to sign in before proceeding to the next step.
There is no predetermined login information for Syncthing, meaning anyone with access to the UI can change your preferences. Login credentials can be set up to prevent unauthorized users from wreaking havoc.
You will be warned of the potential risks if you have never specified the login details. The "Settings" button on this caution will take us directly to the configuration page.
After resetting your password, this website will log you out. You'll need to sign in with your new credentials each time you access Syncthing's graphical interface.
For Syncthing to function, it must create a random identifier for each connected device. Adding the other device's ID to your own is necessary for sharing information between devices. The RPi Syncthing installation's unique identifier can be located via the web interface.
To return to the main page of the web interface, select "Actions" from the toggle menu in the top right. Select "Show ID" from the selection menu to open the desired dialogue box.
The identification string and corresponding QR code are displayed below. The ideal identifier length is between 50 and 56 characters and may incorporate digits, letters, and hyphens. System-wise, the hyphens are disregarded, but they improve readability. If you want to connect your Raspberry Pi to additional devices, you'll need to give each of them the unique ID assigned to your Pi. You must also include their identification number. Syncthing's mechanism for linking many gadgets to a single pool requires the ID.
We've covered how to get your gadget id Number, so now we'll cover adding a new one. Keep in mind that the identifier for your RPi must be entered into whatever gadget you are installing. If not, communication between the devices will be impossible.
The "Add Remote Device" button may be in the lower right corner of the Syncthing UI. When we click this option, we'll be taken to a dialogue where we can add a gadget to our Syncthing collection.
Now that we have a device linked to the RPi Syncthing, you can test directory sharing. In this particular chunk, the default directory will suffice. Here, we keep our sync files in a folder called "/home/pi/sync" on our RPi.
Select the "Edit" button next to a directory to change its share settings. We can access the folder's sharing settings dialog by clicking this option and making the necessary changes.
With ZeroTier and Syncthing installed on your RPi and linked to a virtual network, you can now sync data across machines. If you're looking for a simple virtual network solution, ZeroTier is it, and the best part is that its free plan is more than enough for most people's basic needs. Additionally, Syncthing is user-friendly software that lets you synchronize folders across several devices. The program is among the best ways to keep directories consistent across many computers in real time. No longer will you have to trust a remote cloud service to keep your data safe.
Following up on our Raspberry Pi programming course is the next lesson. In the previous post, we learned how to construct an FM radio using a Raspberry Pi. Analog FM broadcasting's circuit construction was also studied in detail. How to use a Raspberry Pi as an internet speed meter and save the data in Grafana or Google Drive is the subject of this article.
You can use this article if you want to keep track of how your downloads, uploads, and ping speeds change over time, and it's easy to use. In addition, you can use this to determine when your internet is at its busiest or if your internet speed has deteriorated. We'll demonstrate how to use Ookla's Internet speed test command-line interface in conjunction with Python code to create an internet speed meter.
The connection speed monitor will employ the Internet speed Command line interface to keep tabs on your connectivity.
Raspberry pi 4
Micro SD card
USB drive
Ethernet cable or Wi-Fi
The first step in configuring the RPi to monitor the internet's performance is to ensure the Raspberry Pi is up to date. There is an easy way to do this using the command line:
sudo apt-get update
sudo apt-get upgrade
To add a repo for the Speedtest CLI software, we have to download a few additional packages. The apt-transport-https, dirmngr, and gnupg1 packages may all be installed on your RPi by running the command listed below.
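sudo apt-get install apt-transport-https gnupg1 dirmngr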
The apt tool may now use the HTTPS protocol thanks to the apt-transport-https package; without it, apt would fail to connect to Ookla's software repository. Your RPi must also communicate securely with the Speedtest.net servers, which is why we set up gnupg1.
Lastly, the dirmngr package is installed. This software handles adding the package repository to the RPi's source list. Now that we've installed the necessary tools, we can import the GPG key for Ookla's Speedtest repository into our keychain. The Speedtest CLI cannot be downloaded to our RPi without this key.
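Ookla publishes the key through its packagecloud repository; a command along these lines should import it (the URL is an assumption, and Ookla also provides a one-line repository setup script on packagecloud.io if this has changed):
curl -s https://packagecloud.io/ookla/speedtest-cli/gpgkey | sudo apt-key add -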
The Ookla repo must be added to our list of sources next. The Speedtest CLI cannot be installed on our RPi without the repo being added. The command to add this repo is as follows.
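Something like the following should work; the repository URL is an assumption based on Ookla's packagecloud hosting:
echo "deb https://packagecloud.io/ookla/speedtest-cli/debian/ $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/speedtest.list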
You'll notice that "$(lsb_release -cs)" is used in the command. This substitutes the codename of the RPi's operating system release into the command. Because we have added a new package repository, we have to update the package list. Simply use the following command to do so.
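sudo apt-get update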
Our RPi now has access to the official Ookla Speedtest CLI repository. Installing the software on your device is as simple as running the command below.
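The package is named speedtest in Ookla's repository:
sudo apt-get install speedtest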
We may now run a speed test on your Raspberry Pi to ensure that we have successfully installed the program. To begin the speed test, enter the following command into your terminal.
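speedtest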
There are a few terms of service you must agree to when first using the Speedtest app on your Raspberry Pi. Simply type "YES" followed by the Enter key to get past this prompt.
On our RPi, we can now begin writing the Python code that will actively check our download and upload speeds. Run the following command to start writing the script on the RPi.
nano speedtest.py
Type the code below in this file. We'll walk you through each component of the program, so you can get a sense of how it all works.
import os
import re
import subprocess
import time
This script will use all of the packages listed in these four lines. We'll discuss each of the modules that will be put to use in the following paragraphs.
The script uses the os package to interact with the operating system. In this program, it is used to check whether a file already exists.
The re package provides a library for pattern matching so that we can easily use regular expressions. The Speedtest CLI output contains all the information we need; regular expressions let us pull out the values we want.
To run another program from the script, we need the subprocess package. Using the subprocess module, we can launch the Speedtest CLI software and capture its results.
We make use of the time package to record the date and time of each Speedtest CLI call. This package lets us keep track of performance over time.
Subprocess is used to launch the Speedtest CLI and route the output of the speed test to stdout. stdout.read() is used to read that output. Finally, we call decode('UTF-8') on the response so it becomes a normal Python string we can work with.
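For reference, this is the block being described, exactly as it appears in the complete listing later in this section:
response = subprocess.Popen('/usr/bin/speedtest --accept-license --accept-gdpr', shell=True, stdout=subprocess.PIPE).stdout.read().decode('UTF-8')
ping = re.search('Latency:\s+(.*?)\s', response, re.MULTILINE)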
download = re.search('Download:\s+(.*?)\s', response, re.MULTILINE)
upload = re.search('Upload:\s+(.*?)\s', response, re.MULTILINE)
jitter = re.search('\((.*?)\s.+jitter\)\s', response, re.MULTILINE)
Each of these three lines accomplishes the same task: it uses a regular expression from the re library to pull out the number that sits next to a particular label in the Speedtest output. For example, searching the output line "Latency: 47.943 ms" for the latency value returns just "47.943", the value between the label and the unit.
download = download.group(1)
upload = upload.group(1)
jitter = jitter.group(1)
To retrieve the actual numbers from each match, we must use the ".group()" function. Thanks to this method, the values from the Speedtest CLI output can be written into the CSV file.
try:
    f = open('/home/pi/speedtest/speedtest.csv', 'a+')
    if os.stat('/home/pi/speedtest/speedtest.csv').st_size == 0:
        f.write('Date,Time,Ping (ms),Jitter (ms),Download (Mbps),Upload (Mbps)\r\n')
except:
    pass
This is a simple piece of code. The file handling is wrapped in a try statement, which ensures that the program will keep running even if an error occurs. First, we open our speedtest.csv file inside the try block.
The "a+" parameter tells Python to create the file if it does not already exist and to append any new content to what is already there. After that, we use the os package to check the size of the speedtest.csv file. If the file is empty (its size is zero), we write the header row; otherwise no action is required.
Commas separate each record's fields. When formatting the string, we use the time.strftime() method to include the current date and time, followed by our ping, jitter, download, and upload values. Example output is shown after the write call below.
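This is the line from the full listing that writes each record, followed by an illustrative row (the numbers are made up purely for illustration):
f.write('{},{},{},{},{},{}\r\n'.format(time.strftime('%m/%d/%y'), time.strftime('%H:%M'), ping, jitter, download, upload))
08/29/22,14:30,10.41,1.21,94.33,18.42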
import os
import re
import subprocess
import time
response = subprocess.Popen('/usr/bin/speedtest --accept-license --accept-gdpr', shell=True, stdout=subprocess.PIPE).stdout.read().decode('UTF-8')
ping = re.search('Latency:\s+(.*?)\s', response, re.MULTILINE)
download = re.search('Download:\s+(.*?)\s', response, re.MULTILINE)
upload = re.search('Upload:\s+(.*?)\s', response, re.MULTILINE)
jitter = re.search('\((.*?)\s.+jitter\)\s', response, re.MULTILINE)
ping = ping.group(1)
download = download.group(1)
upload = upload.group(1)
jitter = jitter.group(1)
try:
    f = open('/home/pi/speedtest/speedtest.csv', 'a+')
    if os.stat('/home/pi/speedtest/speedtest.csv').st_size == 0:
        f.write('Date,Time,Ping (ms),Jitter (ms),Download (Mbps),Upload (Mbps)\r\n')
except:
    pass
f.write('{},{},{},{},{},{}\r\n'.format(time.strftime('%m/%d/%y'), time.strftime('%H:%M'), ping, jitter, download, upload))
You can now save the script. Once the script is complete, we will create a directory in which to keep the speedtest.csv data. Make this directory by typing the command below.
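mkdir /home/pi/speedtest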
After we have created the necessary directory, we can execute the program. The command below can be used to run our program and see if it works as expected.
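Assuming the script was saved in the pi user's home directory:
python3 /home/pi/speedtest.py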
Open the newly generated speedtest.csv file to see the results of the script's execution. Let's see whether we can open this document on the RPi with the command below.
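Either cat or nano will do; for example:
cat /home/pi/speedtest/speedtest.csv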
You should see something similar to this in the file: the column headings followed by a few rows of records.
We'll show you how to easily plot your speed test data using Grafana throughout this section. Grafana is a free and open-source solution we use to analyze data, build metrics that make sense of large amounts of data, and monitor our applications with configurable dashboards. Because Grafana is an open platform, we can also create our own plugins to integrate with a variety of data sources.
Technically known as time series analytics, the technology aids in the study, analysis, and monitoring of data across time. By giving relative data, it aids us in tracking user activity, app behavior patterns, error rate, error kind, and contextual circumstances in operation or a pre-production scenario.
Organizations that are concerned about security or other factors do not have to use the vendor cloud because the project can be implemented on-premise. Over the years, this framework has become an industry standard and is used by companies like PayPal, eBay, Intel, and many more. In a moment, I'll go over some real-world examples from the industry.
Grafana Cloud and Grafana Enterprise are two extra services the Grafana developers provide for companies in addition to the free open-source core product. What do they do? The remainder of this post goes into greater detail about that. In the meantime, let's take a closer look at the tool's capabilities and architecture, starting with an explanation of what a panel is and how it all works.
The panels are populated from data sources such as Graphite, Prometheus, InfluxDB, and Elasticsearch. Grafana has built-in compatibility with a wide range of data sources, including these.
Let's have a look at the fully accessible panel framework's capabilities. Our application's metrics are handled via an open platform. This data can be analyzed through the use of metrics in a variety of ways.
The panel is well-equipped to generate a sense of complicated data, and it is constantly changing. Geo-mapping, heat maps, scatterplots, and more can be displayed with graphs in a variety of ways. Our business needs can be met by a wide range of data presentation possibilities provided by the software.
As soon as a predetermined event occurs, an alert is set up and triggered. Slack or any other communication tool used by the monitoring team might be alerted to these events. Grafana is pre-installed with support for about a dozen different types of databases. And there is a slew of more, all made possible thanks to plugins.
It can be hosted on-premises or in the cloud. Custom data can be retrieved using built-in Graphite support, with expressions such as "add," "filter," "average," "minimum," and "maximum." InfluxDB, Prometheus, Elasticsearch, and Cloud Monitoring are also supported; I'll cover them all later.
Grafana Cloud is a cloud-native, highly available, fast, and fully open SaaS metrics platform. It is useful for those who don't want to host the solution themselves and prefer to avoid the headache of managing their own deployment infrastructure. It's a Kubernetes-based service with Prometheus and Graphite back ends, which gives us the choice of using Grafana on-premises, in the cloud, or both.
Installing InfluxDB on your RPi is a prerequisite for this stage of the internet speed monitoring guide. Our connection speed monitor sends its data to InfluxDB, so this is where the results will be stored.
Developed by InfluxData, InfluxDB is a free and open-source time series database written in Go. Time series data, such as that collected from sensors and IoT devices, can be accessed quickly and reliably because of its focus on high-availability ingestion and retention. InfluxDB can store several hundred thousand points per second and provides a SQL-like query language designed expressly for time series data.
Shorter duration
Extensive research and analysis
Retention, ingestion, querying, and visualization are now all available through a single application programming interface in Influx Database.
Templates that are simple to create and share, thanks to InfluxDB templates
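If InfluxDB is not yet installed, a set of commands along these lines usually works on Raspberry Pi OS (the package names are an assumption; depending on your OS release you may instead need to add InfluxData's own repository first):
sudo apt install influxdb influxdb-client
sudo systemctl enable influxdb
sudo systemctl start influxdb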
First, we'll fire up the Influx Database CLI tool by typing the command below. Using this application, we will be creating an online repository for our data.
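The CLI client is started by simply running:
influx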
There is no need to enter a username and password for InfluxDB if you haven't set up authentication. Create a database named "internetspeed" right away: type CREATE DATABASE followed by the database name and press enter, and the database is ready to use.
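CREATE DATABASE internetspeed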
Creating a user named "speedmonitor" is the next phase in working with the database. The password "pass" should be replaced by a more secure one. Privileges are not a concern at this time, as we shall take care of them in a later stage.
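In the InfluxDB shell, that looks like this:
CREATE USER "speedmonitor" WITH PASSWORD 'pass'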
To shut off the application, type the command below.
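To leave the InfluxDB shell:
exit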
Installing the Python package required to communicate with the Influx DB is the final step.
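The client library is published on PyPI as influxdb, so installing it with pip should be enough (on some systems it is also packaged as python3-influxdb in apt):
sudo pip3 install influxdb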
Create a new Script file to start populating our Influx database now that it has been set up. If you've already read through the previous script, you won't have to go over anything new here.
nano ~/speedtest.py
To get started, we have to include all of the Python packages that we will be using in this file.
import re
import subprocess
from influxdb import InfluxDBClient
As you can see, os and time have been removed. We no longer write to a CSV file, and InfluxDB timestamps data automatically, so those two libraries are no longer required. We also import InfluxDBClient so we can talk to our InfluxDB server. The next phase is to launch the Speedtest CLI and process the results; the code snippet below gives us all the information we need.
response = subprocess.Popen('/usr/bin/speedtest --accept-license --accept-gdpr',
    shell=True, stdout=subprocess.PIPE).stdout.read().decode('UTF-8')
ping = re.search('Latency:\s+(.*?)\s', response, re.MULTILINE)
download = re.search('Download:\s+(.*?)\s', response, re.MULTILINE)
upload = re.search('Upload:\s+(.*?)\s', response, re.MULTILINE)
jitter = re.search('\((.*?)\s.+jitter\)\s', response, re.MULTILINE)
ping = ping.group(1)
download = download.group(1)
upload = upload.group(1)
jitter = jitter.group(1)
Now things get a little more complicated. This data must be converted to a Python dictionary before we can use it, because the InfluxDB client library expects the information to be presented in a JSON-like form.
speed_data = [
    {
        "measurement": "internet_speed",
        "tags": {
            "host": "Raspberrytheengineeringprojects"
        },
        "fields": {
            "download": float(download),
            "upload": float(upload),
            "ping": float(ping),
            "jitter": float(jitter)
        }
    }
]
In this section, we structured our dictionary according to the InfluxDB data model. "internet_speed" is the name we gave the measurement. The tag "host" was also added so that, if we were to manage numerous devices in the same database, we could separate them. After that, we fill in the values obtained in the preceding lines of code: the download speed, upload speed, ping, and jitter.
We use the float() function to convert the download, upload, ping, and jitter strings into numbers. If we don't, Grafana will read these values as text. Now that we have all the information we need, we can begin talking to InfluxDB. It is necessary to create an InfluxDBClient object and provide the connection details.
Only the hostname, port number, username, password, and database name are passed to this constructor. You can refer to the official InfluxDB Python client documentation if you wish to know what else can be set.
"localhost" should be replaced with the address of your InfluxDB server if it is hosted elsewhere, and "pass" should be changed to the password you created earlier in this article. To send data to our InfluxDB server, we add a block of code like the one below to our existing script.
To send data to Influx Database, we only need to do that. Assuming you've entered every bit of code in the document, this should look something like this.
import re
import subprocess
from influxdb import InfluxDBClient
response = subprocess.Popen('/usr/bin/speedtest --accept-license --accept-gdpr',
shell=True, stdout=subprocess.PIPE).stdout.read().decode('UTF-8')
ping = re.search('Latency:\s+(.*?)\s', response, re.MULTILINE)
download = re.search('Download:\s+(.*?)\s', response, re.MULTILINE)
upload = re.search('Upload:\s+(.*?)\s', response, re.MULTILINE)
jitter = re.search('\((.*?)\s.+jitter\)\s', response, re.MULTILINE)
ping = ping.group(1)
download = download.group(1)
upload = upload.group(1)
jitter = jitter.group(1)
speed_data = [
{
"measurement" : "internet_speed",
"tags" : {
"host": "Raspberrytheengineeringprojects"
},
"fields" : {
"download": float(download),
"upload": float(upload),
"ping": float(ping),
"jitter": float(jitter)
}
}
]
client = InfluxDBClient('localhost', 8086, 'speedmonitor', 'pass', 'internetspeed')
client.write_points(speed_data)
Save the document to your computer.
The database needs to be displayed in Grafana. All the information will be graphed and shown by using the Grafana application.
For those who aren't familiar with it, Grafana is a fully open-source metric monitoring and data visualization package. The purpose of this software is to aid in the visual representation of time-based information. To speed things up, Grafana leaves most of the heavy lifting, such as rendering graphs, to the client's browser, so the server can concentrate on supplying the data used to create the graphs.
Grafana is frequently used to keep tabs on system metrics like the temperatures of the equipment and how much of it is being used. In addition, it can be used to graph data, for example, the weather, across time. Grafana is an excellent tool for instantly presenting data from your Raspberry Pi.
It's a good idea to double-check that all of the packages on your RPi are updated before beginning the Grafana installation. The two commands listed below can be used to do this: the package list will be refreshed, and all installed applications will be upgraded to their most recent versions.
sudo apt update
sudo apt upgrade
The Grafana package repo must be added to the RPi before Grafana can be installed. As a prerequisite, we must add Grafana's APT key. Using the key, apt can confirm that the packages you're installing originated from the Grafana package server and are properly signed. The instruction to add the Grafana APT key to your RPi's keychain is as follows.
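A command along these lines should work; the key URL and keyring path are assumptions based on Grafana's current apt instructions, so check their documentation if it has moved:
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /usr/share/keyrings/grafana.gpg > /dev/null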
Once we've added the key to our Raspberry Pi, we can use the Grafana repo as a source for our software. Add this repo to the source list by running the command below on your RPi.
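Assuming the keyring path used above, the entry looks something like this:
echo "deb [signed-by=/usr/share/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list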
The RPi will check the Grafana repo for new packages whenever you update and upgrade. An update is necessary now because we've added a new source. When apt performs an update, the most up-to-date package list is obtained from all available sources. To do this, run the command below in the console of your Raspberry Pi.
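sudo apt update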
Grafana can now be installed on your RPi. Run the command below to install the newest release of Grafana on your device.
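sudo apt install grafana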
Getting Grafana to start automatically at startup is the next step we need to take. Grafana includes a systemd service file, which is a godsend for those of us using it on Linux systems. All we have to do is execute the command below to make Grafana start automatically at system startup.
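sudo systemctl enable grafana-server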
The "grafana-server.service" services record will be enabled by this instruction to a network's service management. The Grafana server's service management will utilize this file as a reference guide. In the console of the Raspberry Pi, enter the following command to begin using Grafana's webserver.
Now that we've installed Grafana on the Pi 4, we can use its web interface to view our data. The first thing we'll need is the Pi's IP address; Grafana can be reached from anywhere on your local network via this address. The IP address of your Raspberry Pi may be found by typing the following command.
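hostname -I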
A static IP is a good idea if you frequently need to connect to your Raspberry Pi. Make sure you have the IP address available before visiting the URL below: the Grafana dashboard is served on port 3000 of the Pi's address. Replace "<IPADDRESS>" with the address you found above.
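http://<IPADDRESS>:3000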
When you initially open Grafana, you'll see a login page. When Grafana was installed on the RPi, a default administrator account was created; its username and password are both "admin". Even though this password is incredibly insecure, we'll be able to change it right after logging in. Click Grafana's "Log in" button once the username and password have been entered.
A new data source must now be added in Grafana's web app. The "Data Sources" menu option can be accessed by clicking the gear wheel on the left.
The credentials for our database must then be entered. The database must be set to "internetspeed" if you closely followed our instructions, and the username should be "speedmonitor". Last but not least, the password must be the one you chose earlier; if you used our example, it is "theengineeringprojects". After you've entered all the necessary data, select the "Save & Test" button.
Making your program run on a regular schedule is simply a matter of automation. The crontab is the simplest way to schedule your script to execute regularly. On your RPi, you can edit the crontab by typing the command below.
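crontab -e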
When asked which editor to use, we suggest nano because it's the simplest to learn and the most intuitive. The following cronjob should be added at the bottom of the file; it runs the script every half an hour. We advise using our crontab generator if you'd like to experiment with alternative timings.
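A half-hourly entry looks like this (assuming the script lives at /home/pi/speedtest.py):
*/30 * * * * python3 /home/pi/speedtest.py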
Jobs are scheduled using Cron, which is built into Unix-like systems like Linux and its numerous variants. It is a time-based mechanism. Using the cron is a common approach to run instructions or bash scripts regularly. "Cron Jobs" refers to tasks that are scheduled using the "cron" utility. While using Unix-based systems like Raspbian, you'll quickly become dependent on cron jobs.
gDrive is a command-line program that makes it easy to upload files to a Google Drive account, and once you've got it set up on the device, it's a breeze to use. This guide will explain how to build the gDrive program on the RPi and link it to your personal Google account. The same procedure can be used to build gDrive for any OS, even though this guide concentrates on the RPi.
The Go toolchain must be installed on our device before we can compile the gDrive program. If you're working on a PC or Mac, download the appropriate installer from the official website.
If you're working with a Linux distribution like Raspbian, the process is a little more involved. Using a Linux terminal, download one of the archives below (see the example commands after this list).
For the Raspberry Pi
For a 64-bit version of Linux
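For example (the Go version number is an assumption; check go.dev/dl for the current release, and pick the armv6l build for 32-bit Raspberry Pi OS or the amd64 build for 64-bit x86 Linux):
wget https://go.dev/dl/go1.19.3.linux-armv6l.tar.gz -O go.tar.gz
wget https://go.dev/dl/go1.19.3.linux-amd64.tar.gz -O go.tar.gz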
After downloading the Go archive, we need to unpack it into /usr/local.
sudo tar -C /usr/local -xzf go.tar.gz
Next, we need to make the Go compiler available from the terminal. We can do this by editing the shell's startup script; the shell runs the script automatically and will pick up our updated path.
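Assuming the default bash shell, open the startup script with:
nano ~/.bashrc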
The following lines should be added to the end of this file. With these lines, we may execute the compiler instantly from the cli, without having to specify the directory to the engine.
export GOPATH=$HOME/go
export PATH=/usr/local/go/bin:$PATH:$GOPATH/bin
Now save the file and reload it so the changes take effect in the current session.
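source ~/.bashrc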
We need your Google Cloud API credentials before we can compile the gDrive program. Your project's name can be set on the Google Cloud console page; "gDrive-theengineeringprojects" will be the name of our example.
To save the document, you simply need to type in a title for your program.
Selecting an app type is what we need to do next. We chose "Other" since none of the other options were appropriate for the API's intended use. Once we've done that, we'll need to give this program a name. We'll just call it "gDrive theengineeringprojects" for the sake of simplicity. Once all of the information has been input, click the "Create" tab to begin the process.
We'll need to use git to download gDrive's source code before we can compile it. Before we can proceed, we need to install the git client on our computer. To install Git on a Debian-based operating system like Linux or Raspbian, you may either go to the main Git webpage or use the procedures below.
Just type the command below and we'll be done in no time.
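sudo apt install git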
Now clone the gDrive source code with git and move into the new directory.
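Assuming the upstream repository is the prasmussen/gdrive project on GitHub:
git clone https://github.com/prasmussen/gdrive.git
cd gdrive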
The next step is to update the program with your new client ID and client secret. Open the "handlers_drive.go" file in the cloned directory and edit it with nano.
nano handlers_drive.go
Change the details you collected earlier in the following lines of this file. You should have both your client ID and your client secret at hand.
Substitute your client ID here.
Substitute your client secret here.
Save all the changes. Now execute the following command to fetch the additional modules needed to compile our updated version of gDrive with the Go toolchain.
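With a modules-based checkout, something like the command below fetches the dependencies (older GOPATH-style checkouts used go get instead):
go mod download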
To get gDrive working on our device, simply enter the command shown below into your terminal.
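A plain build in the project directory should produce the gdrive binary:
go build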
It's time to get this working from the command line. We need to move the binary to a directory on the system path so the gdrive command can be used anywhere. To relocate the executable, type the command below.
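sudo mv gdrive /usr/local/bin/gdrive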
The final step is to provide the gdrive file with the ability to run.
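sudo chmod +x /usr/local/bin/gdrive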
We can now test the gDrive program and link it to your Google account. To get things started, run the gdrive command with the "list" parameter.
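gdrive list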
After running gDrive's list command, you will be informed that authorization is necessary. There will be a link at the bottom of the message. Visit this address and sign in with your Google account. After completing the next few steps in the browser, you'll be given a verification code. Copy it and enter it into the terminal.
gDrive has been successfully installed on your device if a listing of files is displayed. This command also shows the ID for each of your directories, and you can use those IDs to sync a specific folder. The command below can be used to test syncing a folder; replace <FOLDER> with the path to the local folder you want to synchronize.
Substitute <GOOGLEFOLDERID> with the ID of a directory that you obtained from the gdrive list command.
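The sync command takes the local folder first and the Google Drive folder ID second:
gdrive sync upload <FOLDER> <GOOGLEFOLDERID>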
Uploading Speed Test Data to Google Drive
Now that gDrive is installed on the RPi, we're ready to collect some speed test results. Using gDrive, create a new directory on the Google Drive account for our speedtest.csv file. This will be our starting point; the next terminal command will accomplish it.
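gDrive's mkdir subcommand creates the folder; the folder name here is just an example:
gdrive mkdir speedtest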
A notification stating that the folder has been created will be displayed as a result of running this command. This message also shows the folder's ID; write it down somewhere safe, as we'll need it in a few steps. Now that the directory exists, we can use its ID to upload a file to it. The speedtest.csv file will be used in this test. Be careful to substitute YOUR_FOLDER_ID with the ID you received in the previous step before running the command below.
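gdrive sync upload /home/pi/speedtest YOUR_FOLDER_ID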
The terminal should display something like the output below during the first sync. A message like this tells you that the file has been successfully transferred to your Google Drive.
Automating your Raspberry Pi connection speed monitor is the next main task. We'll build a small shell script to automate the process, and crontab will run it regularly. Use the following command on the RPi to start writing the shell script.
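nano /home/pi/speedtest.sh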
The following lines are what we'd like to include in this file. Replace YOUR_FOLDER_ID with the unique ID of your Google Drive folder.
#!/bin/bash
python3 /home/pi/speedtest.py
/usr/local/bin/gdrive sync upload /home/pi/speedtest YOUR_FOLDER_ID
Save the script. Our shell script needs to be granted permission to run before we can set up a crontab to run it. The command below accomplishes that.
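chmod +x /home/pi/speedtest.sh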
We're now ready to set up the crontab now that everything is finished. Start by executing the command below on the RPi to begin modifying the crontab. When prompted, choose Nano as your editor of choice.
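crontab -e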
At the end of the document, paste the following code. This command tells crontab to execute our shell scripts once every hour, which it will do. Our Crontab generator can help you come up with new values for the crontab if you'd like.
0 * * * * /home/pi/speedtest.sh
In this article, we learned how to set up an internet speed test monitor on a Pi 4. We also learned how to set up the InfluxDB database and the Grafana application for the monitoring system. Now you can experiment with other servers to see if you can improve the speed test's precision and performance. We're going to use our Raspberry Pi 4 to build a Wi-Fi gateway in the next tutorial.
Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the preceding tutorial, we constructed a personal Twitter bot using Tweepy, a Python library for querying the Twitter application programming interface. We also built an auto-reply bot that would respond to anyone's tweet mentioning it with a certain keyword. In this tutorial, we will implement a security system using a motion sensor with an alarm.
This is what it looks like:
A PIR motion sensor can be used with an RPi once you understand how it is connected to the Raspberry Pi. Whenever the motion sensor detects human movement in this project, an alarm is triggered and the LED blinks. You can create a simple motion-detection alarm using this interface.
Infrared Motion Detectors or PIR Sensors are Motion Sensors that use Infrared Radiation to detect movement.
Infrared rays are emitted by anything with a temperature higher than absolute zero, be it life or non-living. Humans are unable to see infrared radiation because its wavelength is longer than the wavelength of visible light.
That's why PIR sensors are designed to pick up on those infrared rays. They have a wide range of uses, such as motion sensing for security systems and intruder alert devices.
"Passive" in the sensor's name refers to the fact that it doesn't produce any radiation of its own; rather, it detects the infrared radiation emitted by other objects. This is in contrast to active detectors, which both generate infrared waves and detect their reflections.
For this project, we used a motion detector that included an infrared sensor, a BISS0001 integrated circuit, and other parts.
The 3 pins on the motion sensor are used for power, data, and ground. There are two potentiometers on the Motion Sensor that may be used to modify both the sensor's sensitivity and the period it remains high on sensing a body movement.
A key role in directing infrared rays onto the sensor is played by the Fresnel lens covering the pyroelectric sensor. This lens allows the PIR sensor to detect objects across an angle of about 120 degrees. The sensor has an 8-meter detection range, meaning it can pick up human movement within that distance.
Two potentiometers are provided for fine-tuning the sensor and output timing, as previously described.
With the aid of the first potentiometer, you can adjust the sensor's sensitivity. The detection distance can be changed between 3 and 8 meters. To increase the detection distance, turn the potentiometer clockwise; to reduce it, rotate it in the opposite direction.
The second potentiometer lets you choose how long the motion sensor's output remains HIGH, anywhere from 0.3 s to 600 s. Turn the potentiometer clockwise to increase the time and counterclockwise to decrease it.
A Motion Sensor based on RPi and Python language has been the goal of this project since the beginning, as stated in the intro.
I have used an infrared motion sensor in numerous other projects, like automated lighting using a Raspberry Pi and various sensors, automated door opening with an Arduino and a motion sensor, and GSM home automation security with a Pi.
The key advantage of the Infrared Motion Sensor utilizing RPi over the above-described projects is that RPi can be readily connected to the Web and allows Internet of things implementation of the project.
The following figure illustrates the interfaces concerning the Infrared Motion Detector using RPi.
Raspberry Pi 4
PIR Sensor
Speaker
Jumper Wires
Breadboard
Power Supply
Computer
Link the Motion Sensor's Vin and GND connectors to the RPi's 5 volts and GND pins. Use pin11 to attach the Infrared Sensor's DATA Input.
Connect the LED between GND and pin 3. As soon as the sensor is triggered, this LED will turn on; when the trigger ends, it will turn off.
Python is used for the programming portion of the project. The Python program for RPi's infrared Motion Sensor is provided below. Insert the program into a new file called motion.py.
import RPi.GPIO as GPIO
import time
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.IN) #Read output from PIR motion sensor
GPIO.setup(3, GPIO.OUT) #LED output pin
while True:
    i = GPIO.input(11)
    if i == 0:                          # When output from motion sensor is LOW
        print("No intruders", i)
        GPIO.output(3, 0)               # Turn OFF LED
        time.sleep(0.1)
    elif i == 1:                        # When output from motion sensor is HIGH
        print("Intruder detected", i)
        GPIO.output(3, 1)               # Turn ON LED
        time.sleep(0.1)
The operation of the Infrared Motion Sensor with Raspberry Pi is pretty straightforward. If the Infrared sensor senses some body motion, it sets the Data Input to HIGH.
When the RPi reads a 1 on the associated input GPIO pin, it will trigger the alarm.
A brand-new sensor may appear not to work at first. This is usually not a sensor fault; the trim potentiometers are simply at their default settings. If you adjust the sensitivity and trigger-duration potentiometers, it will start working as expected. Make sure the trigger-duration knob is turned to the left (shortest duration) and the sensitivity knob is in the middle.
There are many applications for an infrared motion sensor with a Raspberry Pi. They include:
Automated house lights
Motion sensing
Intruder alerts
Automated door opening
Home security systems
Next, we will look into how to capture an image when motion is detected by the PIR sensor on the Raspberry Pi and send it to WhatsApp as an alert, so that you can tell right away who is in your room thanks to the photo.
Enable the camera by going to the Preferences menu and selecting the Raspberry Pi configuration option.
Activating the camera and saving the image will allow us to identify who or what triggered the alarm.
import picamera
from time import sleep
camera = picamera.PiCamera()
camera.capture('image.jpg')
When we run our software, the preceding code will take a picture and put it inside the root directory of the script. This image will be used to identify the intruder that has been detected.
When an alarm system is triggered, there is an alert that must sound. We'll use a loudspeaker instead of a buzzer for our alarm system in this scenario. When the motion sensor is activated, we will play an alarm sound.
import pygame
pygame.mixer.init()
pygame.mixer.music.load("alarm.mp3")
pygame.mixer.music.play()
while pygame.mixer.music.get_busy() == True:
    continue
Pygame is an excellent cross-platform Python library for video game development. It provides graphics, sound, and visualization features that can improve the game being created.
Graphics for video games can be generated using a variety of libraries that deal with visuals and sounds. It streamlines the entire game workflow and makes it easier for newcomers who wish to create games.
Copy the code above and save it to a file named alarm.py then run it in the terminal.
python alarm.py
Any internet or mobile app's compatibility with several platforms was a major hurdle to overcome when designing it. It used to be possible to build a link between two pieces of software using Bandwidth or Podium or Telnyx or Zipwhip or similar before Twilio was invented. In recent years, though, Twilio has dominated the competition. Twilio has become the preferred communication API for programmers. Twilio will become clearer to you if you stick around for a time.
Developers can use Twilio's API to communicate with each other in a modern way.
When it comes to creating the ideal client experience, developers have a wealth of tools at their disposal in the form of Twilio's APIs, which the company describes as "a set of building blocks."
It is possible to utilize Twilio to communicate with customers via text message (SMS), WhatsApp, voice, video, and email. Your software only needs to be integrated with the API.
Twilio is a provider of end-to-end solutions for integrating voice and data communication. Twilio is already used by over a billion developers and some of the world's most well-known businesses. The Twilio Communication application programming interface enables web and mobile app developers to integrate voice, message, and video conferencing capabilities. This makes it easier for app developers to communicate with one another.
The API provided by Twilio makes it simple and accessible to communicate across the web. Mobile and web applications can use this service to make phone calls as well as send text messages and multimedia messages (MMS).
You might want to learn more about Twilio and how it works. As a result, Twilio allows enterprises to better understand their customers than any other service. Twilio's primary concept is to acquire clients, get to know them, provide for their needs, and keep them coming back.
Twilio has a worldwide operations center that keeps an eye on carrier networks around the clock to ensure that they are operating at peak efficiency. To keep up with the ever-changing traffic patterns, Twilio's skilled communications engineers are on the job all the time.
They employ real-time feedback from several provider services to make smarter routing decisions based on real-time data on the availability of handsets. The key distinction between Twilio and other application programming interface integration networks is that Twilio's data-centric strategy provides customer engagement service.
Managing a contact center in today's business environment is critical to the success of the company. Businesses can use Twilio to manage their interactions with clients and consumers through a central contact center platform.
Before Twilio, sending mass SMS was a difficult task. Now, the Twilio Messaging API is widely used to transmit and receive SMS, MMS, and OTT messages worldwide. Users can verify whether or not messages have been delivered using its intelligent tracking services.
For healthcare, virtual classrooms, recruiting, and other uses, Twilio's WebRTC and cloud infrastructure components make it easy for developers to create secure, video, and HD audio applications.
Twilio's ability to run and manage marketing campaigns is another noteworthy but still-evolving feature. Users can examine performance numbers, run campaigns, and view design concepts.
As a result of this trend, Twilio has also seen an increase in voice traffic. Any app, website, or service can use Twilio to make phone calls over the PSTN or SIP. It's easy to use Twilio Programmable Voice to make and manage digital calls for any campaign.
The Twilio SendGrid API eliminates the issue of emails that never make it to their intended recipient's inbox. Customers and clients will receive your emails with Twilio, so you won't have to worry about them not arriving.
You'll never have to worry about online scams or fraud again using Twilio's Verify feature, which continuously validates users by SMS, voice, email, and push alerts.
Advancing solutions and services provided by Twilio allow for global connectivity. As a result of this connectedness, your company can grow with ease.
Developing and testing your app is made simple using Twilio's WhatsApp Sandbox. Your Twilio mobile number must be approved by WhatsApp before you can seek production access.
You'll learn how to connect your phone to the sandbox in this section. Select Messaging in the Twilio Console and then open the WhatsApp section by clicking on it. On the page, you'll find the information you need to join the sandbox.
The code will start with the word "join", followed by a randomly chosen two-word phrase.
As soon as Twilio receives your message, you should be able to send and receive text messages on your cell phone without any issues.
Please repeat the sandbox application process for each additional mobile device that you wish to use to test the application
Set up a new Python project in the following section.
mkdir python-whatsapp-pic
cd python-whatsapp-pic
We'll need a virtual space for this project because we'll be installing several Python packages.
Open a terminal on your RPI machine and type:
python -m venv venv
source venv/bin/activate
(venv) $ pip3 install twilio
When using a PC running Windows, execute these commands from a command line.
python -m venv venv
venv\Scripts\activate
pip install twilio
Python's Twilio library will be used to deliver messages via Twilio.
To authenticate with the Twilio service, we must safely store a few critical credentials. To use Twilio we need to register for an account at the official Twilio website. Create a new account with your email and password. They will send a confirmation message to your email inbox for you to confirm the registration. Go ahead and confirm it. You will also have to verify your WhatsApp phone number to proceed.
Setting environment variables can be done by entering the code below into your terminal:
export TWILIO_ACCOUNT_SID="your account sid"
export TWILIO_AUTH_TOKEN="your auth token"
After we have exported the credentials in our environment, the next step is to activate the WhatsApp sandbox so it can receive messages. In the Console, go to Develop, then select Messaging and "Send a WhatsApp message".
You will see a message directing you to send a text from your phone; if WhatsApp is connected to your computer, you can simply click the link provided below to open the chat. Send the join message that is displayed in the chat box of your WhatsApp application.
If it works you will see a message shown below:
The number displayed here is the "from" number that we will use in our code, and the "to" number is your own WhatsApp number.
Copy the following code into your python file.
import os
from twilio.rest import Client
account_sid = os.environ['TWILIO_ACCOUNT_SID']
auth_token = os.environ['TWILIO_AUTH_TOKEN']
client = Client(account_sid, auth_token)
from_whatsapp_number = 'whatsapp:+14155238886'
to_whatsapp_number = 'whatsapp:+254706911425'
message = client.messages.create(body='The engineering project sent you this image!',
media_url='https://www.theengineeringprojects.com/wp-content/uploads/2022/04/TEP-Logo.png',
from_=from_whatsapp_number,
to=to_whatsapp_number)
print(message.sid)
With this done, all we have to do is run our app.py program from the terminal.
python app.py
import pygame
import RPi.GPIO as GPIO
import time
import picamera
camera = picamera.PiCamera()
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.IN) #Read output from PIR motion sensor
GPIO.setup(3, GPIO.OUT) #LED output pin
pygame.mixer.init()
pygame.mixer.music.load("alarm.mp3")
import os
from twilio.rest import Client
account_sid = os.environ['TWILIO_ACCOUNT_SID']
auth_token = os.environ['TWILIO_AUTH_TOKEN']
client = Client(account_sid, auth_token)
from_whatsapp_number = 'whatsapp:+14155238886'
to_whatsapp_number = 'whatsapp:+254706911425'
while True:
    i = GPIO.input(11)
    if i == 0:              # When output from motion sensor is LOW
        print("No intruders", i)
        GPIO.output(3, 0)   # Turn OFF LED
        pygame.mixer.music.stop()
        time.sleep(0.2)
    elif i == 1:            # When output from motion sensor is HIGH
        print("Intruder detected", i)
        GPIO.output(3, 1)   # Turn ON LED
        pygame.mixer.music.play()
        # Capture an image of the intruder
        camera.capture('intruder.jpeg')
        # Send an alert to WhatsApp (the media_url must point to a publicly
        # accessible image; Twilio cannot fetch a local file directly)
        message = client.messages.create(body='The engineering projects program has detected an intruder!',
                                         media_url='https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Ftse4.mm.bing.net%2Fth%3Fid%3DOIP.q1z1XWRn_WAV4oM-Qr2M2gHaGb%26pid%3DApi&f=1',
                                         from_=from_whatsapp_number,
                                         to=to_whatsapp_number)
        print(message.sid)
        time.sleep(0.2)
        GPIO.cleanup()
        break
In this article, you learned to build a security system using a motion detector and a Raspberry Pi. We also learned how to set up Twilio to send and receive WhatsApp messages using the Twilio API. This project can be applied in many areas, so it is a good idea to play around with the code and implement some extra features. In the next tutorial, we are going to build an LED cube with the Raspberry Pi 4.
Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the preceding tutorial, we integrated a real-time clock with our Raspberry Pi 4 and used it to build a digital clock. In this tutorial, we will construct your own Twitter bot using Tweepy, a Python library for querying the Twitter application programming interface.
You will construct a reply-to-mentions bot that posts a response to anyone who mentions it in a tweet containing a certain keyword.
The response will be an image we generate with a quote overlaid on it. The quote itself is fetched from a third-party application programming interface. Finally, we will look at the benefits and drawbacks of bots.
This is what it looks like:
Where To Buy? | ||||
---|---|---|---|---|
No. | Components | Distributor | Link To Buy | |
1 | Raspberry Pi 4 | Amazon | Buy Now |
To continue through this guide, you'll need to have the following items ready:
Ensure you've signed up for AWS Elastic Beanstalk before deploying the finished project.
To connect your robot to Twitter, you must create a developer account and build an app that Twitter provides you access to.
Python 3.9 is the current version, although it is usually recommended to use an edition that is one point behind the latest version to avoid compatibility problems with 3rd party modules.
Make sure you have the following Python packages installed in your local environment (an install command is shown after the list):
Tweepy — Twitter's API can be used to communicate with the service.
Pillow — The process of creating an image and then adding words to it
Requests — Use the Randomized Quote Generation API by sending HTTP queries.
APScheduler — Regularly arrange your work schedule
Flask — Develop a web app for the Elastic Beanstalk deployment.
The other modules you will see are already included in Python, so there's no need to download and install them separately.
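If any of these are missing, they can be installed in one go with pip (the PyPI package names match the list above):
pip install tweepy Pillow requests APScheduler Flask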
OAuth authentication is required for all requests to the official Twitter API. As a result, to use the API, you must first create the necessary credentials. These are:
Consumer key
Consumer secret
Access token
Access token secret
Once you've signed up for Twitter, you'll need to complete the following steps to generate your user ID and password:
The Twitter developer’s platform is where you may apply to become a Twitter developer.
When you sign up for a developer account, Twitter will inquire about the intended purpose of the account. Consequently, the use case of your application must be specified.
To expedite the approval process and increase your chances of success, be as precise as possible about the intended usage of your product.
Approval typically arrives within about a week. Once your developer account access has been granted, create an application from the Twitter developer portal dashboard.
Credentials are only issued to apps, so you must go through this step; an app is how Twitter defines your use of its application programming interface. Information regarding your project is required:
Your project's purpose or how users will interact with your app should be described in this section.
To begin, navigate to Twitter's apps section of your account and create your user credentials. When you click on this tab, you'll be taken to a new page on which you can create your credentials.
The credentials you generate should be saved locally so they can be used in your program later. Create a new script called credentials.py in your project's folder containing the following four key-value pairs:
access_token="XXXXXXX"
access_token_secret="XXXXXXXX"
API_key="XXXXXXX"
API_secret_key="XXXXXXXX"
You can also test the login details to see if everything is functioning as intended using:
import tweepy
# Authenticate to Twitter
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)
try:
api.verify_credentials()
print("Authentication Successful")
except:
print("Authentication Error")
Authorization should be successful if everything is set up correctly.
Tweepy is a free, simple Python module for interacting with the Twitter application programming interface. It provides a convenient way for your program to talk to the API.
Tweepy's newest release can be installed by using the following command:
pip install tweepy
Installing from the git repo is also an option.
pip install git+https://github.com/tweepy/tweepy.git
Here are a few of its most important features:
As part of Tweepy, the OAuthHandler class handles the authentication process required by Twitter; the authentication snippet above shows how Tweepy's OAuth flow is used.
If you want to use the RESTful application programming interface functions, Tweepy provides an API class for that purpose. You'll find a rundown of some of the more popular methods in the list that follows, with a short usage sketch after it:
Function for tweet
Function for user
Function for user timeline
Function for trend
Function for like
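As a quick, illustrative sketch (assuming the authenticated api object created earlier and Tweepy 3.x method names), these calls map onto the functions listed above:
# Post a tweet
api.update_status("Hello from my Raspberry Pi bot!")
# Fetch the five most recent tweets from a user's timeline
tweets = api.user_timeline(screen_name="apoorv__tyagi", count=5)
# Get trending topics (1 is the worldwide WOEID)
trends = api.trends_place(1)
# Like (favorite) a tweet by its ID
api.create_favorite(tweets[0].id)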
Tweepy model class instances are returned when any of the application programming interface functions listed above are invoked; the Twitter response is contained in these model objects. For example:
user = api.get_user('apoorv__tyagi')
When you use this method, you'll get a User model with the requested data. For instance:
print(user.screen_name)      # User name
print(user.followers_count)  # User follower count
You're now ready to begin the process of setting up your bot. Whenever somebody mentions the robot, it will respond with a picture with a quotation on it.
So, to get the quote, you'll need to use an application programming interface for a random quote generator. To do this, you'll create a new function in the tweet_reply.py script and send an HTTP request to the API endpoint. Python's requests library can be used to accomplish this.
With the requests library, you can send HTTP requests and focus on your program's interaction with the service and the data it consumes, rather than on the mechanics of constructing requests.
def get_quote():
URL = "https://api.quotable.io/random"
try:
response = requests.get(URL)
except:
print("Error while calling API...")
The API responds with a JSON payload along these lines:
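For reference, the response is roughly of this shape (the values are illustrative, and only the content and author fields are used later; other metadata fields are omitted):
{
  "content": "The only true wisdom is in knowing you know nothing.",
  "author": "Socrates"
}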
The json module can parse the reply from the application programming interface. It is part of the standard library, so you can simply use import json to add it to your program.
Your method will return only the content and author fields, which are what you need. Here's how the whole thing looks:
def get_quote():
URL = "https://api.quotable.io/random"
try:
response = requests.get(URL)
except:
print("Error while calling API...")
res = json.loads(response.text)
return res['content'] + "-" + res['author']
You have your text in hand. You'll now need to take a picture and overlay it with the text you just typed.
The Pillow module should be your first port of call when working with images in Python. It provides image creation, processing, and file-format support, giving the interpreter strong image-handling capabilities.
Wallpaper.py should be created with a new function that accepts a quote as the argument.
def get_image(quote):
image = Image.new('RGB', (800, 500), color=(0, 0, 0))
font = ImageFont.truetype("Arial.ttf", 40)
text_color = (200, 200, 200)
text_start_height = 100
write_text_on_image(image, quote, font, text_color, text_start_height)
image.save('created_image.png')
Let's take a closer look at this feature.
Image.new() creates a new image with the given mode and size. The first argument is the mode used to generate the image; there are a couple of options here, such as RGB or RGBA. The second argument is the size: the width and height of the image, given as a tuple in pixels. The final option is the background color (black is the default).
ImageFont.truetype() creates a font object from the provided font file at the desired size. While "Arial" is used here, you are free to use any other font if you like. Font files should be saved in the project root folder with a TrueType extension, such as font.ttf.
text_color and text_start_height specify the text's color and the height at which it begins. RGB(200, 200, 200) works well over dark images.
image.save() writes the generated PNG to the root directory, overwriting any existing image with the same name.
def write_text_on_image(image, text, font, text_color, text_start_height):
draw = ImageDraw.Draw(image)
image_width, image_height = image.size
y_text = text_start_height
lines = textwrap.wrap(text, width=40)
for line in lines:
line_width, line_height = font.getsize(line)
draw.text(((image_width - line_width) / 2, y_text),line, font=font, fill=text_color)
y_text += line_height
The method above, also in Wallpaper.py, draws the message onto the image. Let's take a closer look at how it works:
ImageDraw.Draw() creates a 2D drawing object for the image.
textwrap.wrap() wraps a single paragraph so that each line is no more than 40 characters long; the wrapped lines are returned as a list.
draw.text() draws the text at the provided location. Its parameters are:
XY — The text's upper-left corner.
Text — The text to be illustrated.
Fill — The text should be in this color.
font — One of ImageFont's instances
This is what Wallpaper.py looks like after putting it all together:
from PIL import Image, ImageDraw, ImageFont
import textwrap
def get_wallpaper(quote):
# image_width
image = Image.new('RGB', (800, 400), color=(0, 0, 0))
font = ImageFont.truetype("Arial.ttf", 40)
text1 = quote
text_color = (200, 200, 200)
text_start_height = 100
draw_text_on_image(image, text1, font, text_color, text_start_height)
image.save('created_image.png')
def draw_text_on_image(image, text, font, text_color, text_start_height):
draw = ImageDraw.Draw(image)
image_width, image_height = image.size
y_text = text_start_height
lines = textwrap.wrap(text, width=40)
for line in lines:
line_width, line_height = font.getsize(line)
draw.text(((image_width - line_width) / 2, y_text),line, font=font, fill=text_color)
y_text += line_height
You've got both the quote and an image that incorporates it in one. It's now only a matter of searching for mentions of you in other people's tweets. In this case, in addition to scanning for comments, you will also be searching for a certain term or hashtags.
When a tweet contains a specific hashtag, you should like and respond to that tweet.
You can use the hashtag "#qod" as the keyword in this situation.
Returning to the tweet_reply.py code, the following function does what we want:
def respondToTweet(last_id):
mentions = api.mentions_timeline(last_id, tweet_mode='extended')
if len(mentions) == 0:
return
for mention in reversed(mentions):
new_id = mention.id
if '#qod' in mention.full_text.lower():
try:
tweet = get_quote()
Wallpaper.get_wallpaper(tweet)
media = api.media_upload("created_image.png")
api.create_favorite(mention.id)
api.update_status('@' + mention.user.screen_name + " Here's your Quote",
mention.id, media_ids=[media.media_id])
except:
print("Already replied to {}".format(mention.id))
respondToTweet() takes last_id as its only argument. Using this variable, you retrieve only mentions created after the ones you've already processed. The first time you invoke the method, you set its value to 0, and then you keep updating it with each subsequent call.
mentions_timeline() retrieves tweets that mention you via the Tweepy API. Only tweets with an ID newer than the value passed as the first parameter are returned; the default is the last 20 tweets. When tweet_mode='extended' is used, the full, uncut text of the tweet is returned; text is shortened to 140 characters if the option is set to "compat".
create_favorite() likes each tweet that mentions you; the mentions are processed in reverse order, so the oldest tweet is handled first.
update_status() sends the reply, which includes the original author's Twitter handle, your text, the ID of the tweet being replied to, and the list of media to attach.
There are a few things to keep in mind to avoid repeatedly responding to the same tweet. Simply save the ID of the tweet you last answered in a text document, tweet_ID.txt, and only scan for newer tweets afterward; mentions_timeline() handles this automatically because tweet IDs are ordered in time.
You'll pass in the file holding this last ID; the method reads the ID from the file and writes the newest ID back at the end.
Finally, here is what respondToTweet() looks like in its final form:
def respondToTweet(file):
last_id = get_last_tweet(file)
mentions = api.mentions_timeline(last_id, tweet_mode='extended')
if len(mentions) == 0:
return
for mention in reversed(mentions):
new_id = mention.id
if '#qod' in mention.full_text.lower():
try:
tweet = get_quote()
Wallpaper.get_wallpaper(tweet)
media = api.media_upload("created_image.png")
api.create_favorite(mention.id)
api.update_status('@' + mention.user.screen_name + " Here's your Quote",
mention.id, media_ids=[media.media_id])
except:
logger.info("Already replied to {}".format(mention.id))
put_last_tweet(file, new_id)
You'll notice that two additional utility methods, get_last_tweet() and put_last_tweet(), have been added.
get_last_tweet() takes a file name and reads the most recently processed tweet ID from it; put_last_tweet() takes the same file and updates it with the latest tweet ID.
Here's what the final tweet_reply.py should look like after everything has been put together:
import tweepy
import json
import requests
import logging
import Wallpaper
import credentials
consumer_key = credentials.API_key
consumer_secret_key = credentials.API_secret_key
access_token = credentials.access_token
access_token_secret = credentials.access_token_secret
auth = tweepy.OAuthHandler(consumer_key, consumer_secret_key)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
# For adding logs in application
logger = logging.getLogger()
logging.basicConfig(level=logging.INFO)
logger.setLevel(logging.INFO)
def get_quote():
url = "https://api.quotable.io/random"
try:
response = requests.get(url)
except:
logger.info("Error while calling API...")
res = json.loads(response.text)
print(res)
return res['content'] + "-" + res['author']
def get_last_tweet(file):
f = open(file, 'r')
lastId = int(f.read().strip())
f.close()
return lastId
def put_last_tweet(file, Id):
f = open(file, 'w')
f.write(str(Id))
f.close()
logger.info("Updated the file with the latest tweet Id")
return
def respondToTweet(file='tweet_ID.txt'):
last_id = get_last_tweet(file)
mentions = api.mentions_timeline(last_id, tweet_mode='extended')
if len(mentions) == 0:
return
new_id = 0
logger.info("someone mentioned me...")
for mention in reversed(mentions):
logger.info(str(mention.id) + '-' + mention.full_text)
new_id = mention.id
if '#qod' in mention.full_text.lower():
logger.info("Responding back with QOD to -{}".format(mention.id))
try:
tweet = get_quote()
Wallpaper.get_wallpaper(tweet)
media = api.media_upload("created_image.png")
logger.info("liking and replying to tweet")
api.create_favorite(mention.id)
api.update_status('@' + mention.user.screen_name + " Here's your Quote", mention.id,
media_ids=[media.media_id])
except:
logger.info("Already replied to {}".format(mention.id))
put_last_tweet(file, new_id)
if __name__=="__main__":
respondToTweet()
To complete the process, you will need to deploy your program to a server. In this section, the Python application is deployed using AWS Elastic Beanstalk.
Elastic Beanstalk simplifies management while allowing for flexibility and control: your application is automatically provisioned with capacity, load-balanced, scaled, and monitored for health.
Here is how it's going to work out:
Install Python on the AWS environment
Build a basic Flask app for the bot
Connect to AWS and deploy your Flask app
Use logs to find and fix bugs
After logging into your AWS account, search for and select "Elastic Beanstalk", then click "Create a new application".
You'll be asked to provide the following information:
Name of the application;
Application's tags;
Environment;
Code of the application
Each AWS Elastic Beanstalk application resource can have up to 50 tags. Using tags, you may organize your materials. The tags may come in handy if you manage various AWS app resources.
Platform branches and versions are automatically generated when Python is selected from the selection for the platform.
Later, you will deploy your own code to Elastic Beanstalk; for now, select "Sample application" and create the app. It should be ready in about a minute or two.
Flask is a web development framework written in Python. It's simple to get started with and easy to use, and it keeps external dependencies to a minimum, making it a more "beginner-friendly" framework for web applications.
Flask has several advantages over other frameworks for building online applications, including:
Flask comes with a debugger and a development server.
It takes advantage of Jinja2's template-based architecture.
It complies with the WSGI 1.0 specification.
Unit testing is made easier with this tool's built-in support.
Flask has a plethora of extensions available for customizing its behavior.
It is noted for being lightweight and simply providing the needed components. In addition to routing, resource handling, and session management, it includes a limited set of website development tools. The programmer can write a customized module for further features, such as data management. This method eliminates the need for a boilerplate program that isn't even being executed.
Create a new Python script and call it application.py, then paste the code below into it while AWS creates an environment.
from flask import Flask
import tweet_reply
import atexit
from apscheduler.schedulers.background import BackgroundScheduler
application = Flask(__name__)
@application.route("/")
def index():
return "Follow @zeal_quote!"
def job():
tweet_reply.respondToTweet('tweet_ID.txt')
print("Success")
scheduler = BackgroundScheduler()
scheduler.add_job(func=job, trigger="interval", seconds=60)
scheduler.start()
atexit.register(lambda: scheduler.shutdown())
if __name__ == "__main__":
application.run(port=5000, debug=True)
APScheduler and a small Flask app are used to run a single job() function, which calls the main method in the tweet_reply.py script every 60 seconds.
As a reminder, the Flask object in application.py must be named "application"; Elastic Beanstalk looks for a WSGI callable with exactly that name, so your app must use it to work.
Deploy and configure the app on AWS.
Your web app's code can include Elastic Beanstalk configuration files (.ebextensions) for configuring AWS resources and the environment.
Configuration files use YAML with a .config extension and are placed in the .ebextensions directory alongside the app's code during deployment.
Create a new directory called .ebextensions inside the code folder and add a new file called python.config with the following contents:
files:
  "/etc/httpd/conf.d/wsgi_custom.conf":
    mode: "000644"
    owner: root
    group: root
    content: WSGIApplicationGroup %{GLOBAL}
If you want Elastic Beanstalk to tailor its settings to the app's prerequisites, you'll need to include a list of any external libraries inside a requirements.txt script you produce.
Execute the command below to generate the requirements.txt file using pip freeze
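Run it from inside the activated virtual environment so only the project's packages are captured:
pip freeze > requirements.txt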
Finally, package everything up for uploading to Elastic Beanstalk. The structure of your project directory should now look like this:
Compress all the files and directories listed here together. Open amazon web services again and select Upload Code.
Once you've selected the zip archive, click "Deploy". When the health indicator turns green, your app has been successfully launched. If all of the above steps have been followed correctly, "Follow @zeal_quote!" should appear at your application's URL.
The following steps will help you access the reports of your app in the event of an error:
Logs can be seen under the "Environment" tab in the Dashboard.
After choosing "Request Log," you'll be taken to a new page with an options list. The last lines option is for the latest issues, but the "full log" option can be downloaded if you need to troubleshoot an older error.
To see the most recent log lines, click "Download"; a new web page will open.
Social media entrepreneurs benefit greatly from automation, which reduces their workload while increasing their visibility on Twitter and other platforms. We can use various strategies to ensure that we're always visible on Twitter.
The benefits of automation are numerous.
There is still a need for human intervention with any automated process.
However, automation should only be a minor element of your total plan. An online presence that is put on autopilot might cause problems for businesses. If your campaign relies on automation, you should be aware of these problems:
Engaging others is all about being yourself. Readers can usually tell a tweet was typed by a real person on a phone from its informal grammar and occasional errors. Those who aren't in the habit of writing their own tweets on the fly risk seeming robotic when they send out several automated messages. Tweets written in advance and scheduled to post at specific times can appear disjointed and formulaic.
It is possible to appear robotic and dry if you retweet several automated messages. If your goal is to promote user interaction, this is not the best option.
The solution: Don't automate all of your messages. The platform can also be used for real-time interaction with other people. Whenever feasible, show up as yourself at gatherings.
When you plan a message to go out at a specific time, you have no idea what will be trending. If a tragic tale is trending, the tweet could be insensitive and out of context. On Twitter, there is a great deal of outrage. Because everyone is rightly concerned about their collective destiny, there is little else to talk about.
Then, in a few hours, a succession of your tweets surface. Images showing the group having a great time in Hawaii.
While it's understandable that you'd want to avoid coming across as uncaring or unaware in this day and age of global connectivity and quick accessibility of info from around the globe, it's also not a good look. Of course, you didn't mean it that way, but people's perceptions can be skewed.
What to do in response to this: Automatic tweets should be paused when there is a major development such as the one above. If you're already informed of the big news, it's feasible, but it may be difficult due to time variations.
Twitter automation allows your messages to appear even when you are not logged into the service. Your or your company's identity will remain visible to a worldwide audience if you have a global target market.
If an automatic tweet appears before you can brush up on the latest events in your location, follow it up with a real one to show your sympathy. People find out about breaking news through Twitter, a global platform. Few of us have the luxury of remaining in our small worlds. While it's excellent to be immersed in your company's day-to-day operations, it's also beneficial to keep up with global events and participate in Twitter's wider discussion.
People respond to your automatic tweet with congratulations, questions, or pointing out broken links that go unanswered because you aren't the one publishing it; a program is doing it in your stead, not you. Awkward.
Suppose something occurs in the wee hours of the morning. Another tweet from you will appear in an hour. After seeing the fresh tweet, one wonders if Mr. I-Know-It-All-About-Social-Media has even read his reply.
What to do in response to this situation: When you next have a chance to log on, read through the comments and answer any that have been left. Delayed responses are better than no responses. Some people don't understand that we're not all connected to our Twitter 24 hours a day.
As a means of providing customer support, Twitter has become increasingly popular among businesses. It's expected that social media queries will be answered quickly. Impatience breeds on the social web since it's a real-time medium where slow responses are interpreted as unprofessionalism.
On the other hand, Automatic tweets offer the idea that businesses are always online, encouraging clients to interact with the company. Customers may think they're being neglected if they don't hear back.
When dealing with consumer issues, post the exact hours you'll be available.
As soon as somebody insults you, the business, or even just a tweet, you don't want to let those unpleasant feelings linger for too long. We're not referring to trolls here; we're referring to legitimate criticism that individuals feel they have the right to express.
What should you do? Even though you may not be able to respond immediately, you should do so as soon as you go back online to limit any further damage.
Individuals and organizations may use IFTTT recipes to do various tasks, like favorite retweets, follow back automatically, and send automated direct messages.
The unfortunate reality is that automation cannot make decisions on its own. In light of what people may write unpredictably, selecting key phrases and establishing a recipe for a favorite tweet that includes those terms, or even postings published by certain individuals, may lead to awkward situations.
Spam firms or individuals with shady history should not be automatically followed back. Additionally, Twitter has a cap on the number of followers you can follow at any given time. Spammy or pointless Twitter individuals should not be given your followers.
What should you do? Make sure you are aware of what others are praising under your name. Discontinue following anyone or any company that does not exude confidence in your abilities. In our opinion, auto-DMs can work if they are personalized and humorous. Please refrain from including anything that can be found on your profile. They haven't signed up for your blog's newsletter; they've just become one of your Twitter followers. Take action as a result!
Smaller companies and busy people can greatly benefit from Tweet automation. As a result of scheduling Twitter posts, your workload is reduced. A machine programmed only to do certain things is all it is in the end. But be careful not to be lulled into complacency.
Social media platforms are all about getting people talking. That can’t be replaced by automation. Whether you use automation or not, you must always be on the lookout for suspicious activity on your Twitter account and take action as soon as you notice it.
In this article, you learned how to build and publish a Twitter bot in Python.
Using Tweepy to access Twitter's API and configuring an amazon web service Elastic Beanstalk environment for the deployment of your Python application were also covered in this tutorial. As part of the following tutorial, the Raspberry Pi 4 will be used to build an alarm system with motion sensors.
Where To Buy? | ||||
---|---|---|---|---|
No. | Components | Distributor | Link To Buy | |
1 | Jumper Wires | Amazon | Buy Now | |
2 | DS1307 | Amazon | Buy Now | |
3 | Raspberry Pi 4 | Amazon | Buy Now |
Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the preceding tutorial, we implemented a speech recognition system on the Raspberry Pi and used it in our game project. We also learned the fundamentals of speech recognition and later built a game controlled by the user's voice. In this tutorial, we will integrate a real-time clock with our Raspberry Pi 4 and use it to build a digital clock. First, we will learn the fundamentals of the RTC module and how it works, then we will build a digital clock in Python 3. With the help of a library, we'll demonstrate how to integrate an RTC DS3231 chip with the Pi 4 to keep time.
RTCs are clock units, as the name suggests. The DS1307 RTC IC has an eight-pin interface. The DS1307 is a small clock and calendar with a 56-byte battery-backed SRAM. The timekeeping registers cover seconds, minutes, hours, day, date, month, and year, and the end-of-month date is automatically adjusted for months with fewer than 31 days.
They can be found in integrated circuits that monitor time and date as a calendar and clock. An RTC's key advantage is that the clock and calendar will continue to function in the event of a power outage. The RTC requires only a small amount of power to operate. Embedded devices and computer motherboards, for example, contain real-time clocks. The DS1307 RTC is the subject of this article.
The primary purpose of a real-time clock is to generate and keep track of one-second intervals.
The diagram to the right shows how this might look.
A program's method, A, is also shown; it reads the seconds counter and schedules an action, B, to take place three seconds from now. This kind of behavior is known as an alarm. Keep in mind that the seconds counter runs continuously; it is never started or stopped. Accuracy and reliability are two of the most important considerations when choosing a real-time clock.
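As a minimal sketch of that idea (the helper names here are hypothetical and not tied to any particular RTC driver), method A reads the free-running seconds counter and schedules action B three seconds later:
import time

def read_seconds_counter():
    # Stand-in for reading the RTC's free-running seconds counter
    return int(time.monotonic())

def schedule_alarm():
    alarm_at = read_seconds_counter() + 3   # action B fires 3 seconds from now
    while read_seconds_counter() < alarm_at:
        time.sleep(0.1)                     # the counter itself never stops
    print("Action B: alarm fired")

schedule_alarm()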
A real-time clock's hardware components are depicted in the following diagram.
A real-time clock's internal oscillator is usually driven by an external crystal, although some devices integrate the crystal. Almost all RTC crystals run at 32,768 Hz. An external clock input can also be used with a TCXO, which is extremely accurate and stable.
The selected clock feeds a prescaler that divides the 32,768 Hz signal by 2^15 to generate a one-second tick; the clock source is selectable via a multiplexer.
For the most part, a real-time clock features a seconds counter of at least thirty-two bits. Certain real-time clocks also contain built-in counters to keep track of the date and time.
In a simple real-time clock, firmware keeps track of the time and date. A 1 Hz square wave on an output pin is a common option, and a real-time clock can also trigger a CPU interrupt on various events.
Whenever the whole microcontroller is turned off, a real-time clock may have a separate power pin to keep it running. In most cases, a cell or external power source is attached to this pin's power supply.
Real-time clock accuracy depends on the precision of its 32,768 Hz clock source. In a well-designed oscillator, the crystal is the primary source of inaccuracy; internal oscillators and less costly crystals can still achieve precise timing when combined with frequency-correction techniques. A crystal has three primary causes of inaccuracy:
Initial tolerance of the crystal and circuitry.
Temperature-related frequency drift.
Crystal aging.
Real-time clock accuracy is seen graphically in the figure below:
Using this graph, you can see how a given tolerance changes as temperature increases; the temperature error is visible inside the pink band. A quadratic function that models the crystal's temperature behavior is essential for temperature correction. Once a circuit board has been built and its temperature behavior characterized, an initial error measurement can be used to correct most of the other sources of error.
Staying within the yellow band requires this kind of adjustment. An error of 1 ppm works out to roughly 30 seconds per year. Crystal aging, on the other hand, cannot be fully compensated, though its contribution is usually only a few ppm over several years.
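A quick back-of-the-envelope check of that figure (pure arithmetic, no RTC hardware involved):
# Drift caused by a given frequency error, expressed in parts per million
seconds_per_year = 365.25 * 24 * 3600        # about 31.6 million seconds

for ppm in (1, 5, 20):
    drift = ppm * 1e-6 * seconds_per_year
    print(f"{ppm} ppm  ->  about {drift:.0f} seconds of drift per year")

# 1 ppm comes out to roughly 32 seconds per year, close to the "30 seconds" rule of thumb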
Pin 1, 2: The usual 32.768-kilohertz quartz crystal can be connected here.
Pin 3: Any conventional 3 Volt battery can be used as an input. To function properly, the battery voltage must be in the range of 2V to 3.5V.
Pin 4: This is the ground.
Pin 5: Serial data input/output (SDA). It is the data line of the serial interface and requires an external pull-up resistor; it may be pulled up to a maximum of 5.5 V, irrespective of the voltage on VCC.
Pin 6: Serial clock input (SCL). Data on the serial interface is synchronized to this clock.
Pin 7: Square-wave output driver. With the SQWE bit set to 1, this pin outputs a selectable square-wave frequency.
Pin 8: The main source of power. Data is written and read whenever the voltage provided to the gadget is within standard limits.
A programmable square-wave output
Automatic power-failure detection and switchover circuitry
Low power consumption
Provides the real-time date and time
Working with the RTC mainly means reading from and writing to its registers, which occupy addresses 0 to 63 (0x00 to 0x3F). The first eight registers hold the clock and control data: seconds, minutes, hours, day, date, month, and year. The remaining 56 bytes are general-purpose RAM that can store temporary variables if needed. Let's look at how the DS1307 operates.
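As a minimal sketch of reading those registers directly over I2C (assuming a DS1307 at address 0x68 on bus 1 and the python-smbus package installed later in this guide), the time values are stored in BCD and must be decoded:
import smbus

DS1307_ADDR = 0x68
bus = smbus.SMBus(1)          # I2C bus 1 on the Raspberry Pi

def bcd_to_dec(value):
    # Each register packs two decimal digits as BCD
    return (value >> 4) * 10 + (value & 0x0F)

seconds = bcd_to_dec(bus.read_byte_data(DS1307_ADDR, 0x00) & 0x7F)  # bit 7 is the clock-halt flag
minutes = bcd_to_dec(bus.read_byte_data(DS1307_ADDR, 0x01))
hours   = bcd_to_dec(bus.read_byte_data(DS1307_ADDR, 0x02) & 0x3F)  # assumes 24-hour mode

print(f"{hours:02d}:{minutes:02d}:{seconds:02d}")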
The sole purpose of a real-time clock is to record the passage of time. Regular monitoring of the passing of time is critical to the operation of computers and many smaller electronic devices. Although it simply serves one purpose, managing time has many applications and functions. Nearly every computer activity uses measurements taken and monitoring the present time, from the generation of random numbers to cybersecurity.
Kinematic activities or a real-time clock module keep track of time in a classic watch, so how does it work?
The answer is crystal oscillators, as you may have guessed. The real-time clock counts oscillator pulses to keep track of time. A quartz crystal is commonly used for this oscillator, typically running at 32.768 kHz. Some software cleverness is then needed to correct for differences caused by power-supply variations or tiny changes in cycle speed.
Auxiliary tracking is used in real-time clock modules, which uses an exterior signal to lock onto a precise, uniform time. As a result, this does not override the internal measures but complements them to ensure the highest level of accuracy. An excellent example is a real-time clock module that relies on external signals, such as those found on cell phones. Oscillation cycles are counted again if the phone loses access to the outside world.
Quartz crystals are physical objects, so an RTC module's accuracy can degrade over time with exposure to extreme heat and cold. Many modules therefore include a temperature-sensing mechanism to counteract temperature variations and improve the oscillator's overall accuracy. The cheap crystals used in computer systems vary noticeably in frequency, which translates to an error of around 72 milliseconds per hour. On the I2C bus used by the RTC, the following conditions apply:
Start data transfer: Clock and Data lines must be high for a START.
Stop data transfer: In STOP mode, data lines go from low to high while the clock lines remain high.
Data valid: after a START condition, the data line must be stable whenever the clock line is high; the data on the line may change only while the clock is low. There is one clock pulse per data bit.
Each data transfer begins with a START condition and ends with a STOP condition. The number of data bytes transferred between START and STOP is not restricted and is determined by the master device; the receiver acknowledges each byte with a ninth bit.
A real-time clock can correct timing errors in two different ways. The graphic below shows the first: the prescaler's count of oscillation cycles per one-second period is adjusted.
The advantage of this method is that the length of each second is only slightly altered. However, it requires a variable prescaler, an extra register to store the prescale count, and knowledge of the interval between adjustments.
Suppose the real-time clock does not contain a built-in prescaler that can be used to fine-tune the timing. This diagram illustrates a different way of approaching the problem.
The numbers in the rectangles indicate the seconds counter. The program continuously tracks the real-time clock's seconds count and calculates the accumulated error; whenever the error reaches one second, a second is added or subtracted to compensate.
This strategy has a drawback: the jump in the reported time when an adjustment is made can be as large as a whole second. On the other hand, it can be used with any real-time clock.
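A minimal sketch of that add-or-subtract correction, assuming the crystal's error has already been measured (the +20 ppm figure here is hypothetical and means the clock runs fast):
MEASURED_ERROR_PPM = 20.0      # assumed value, measured once for this particular board
accumulated_error = 0.0        # seconds of error built up so far

def corrected_tick(rtc_seconds):
    """Called once per RTC second; returns the corrected seconds count."""
    global accumulated_error
    accumulated_error += MEASURED_ERROR_PPM * 1e-6   # error gained during this second
    if accumulated_error >= 1.0:
        accumulated_error -= 1.0
        return rtc_seconds - 1   # clock has run a full second fast: drop one second
    return rtc_seconds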
To keep track of the current date and time, certain RTCs use electronic counters. Counting seconds, mins, hours, days, weeks, months, and years and taking leap years into account is necessary. Applications can also keep track of the time and date.
The second counter of a real-time clock can be used to implement this system on a microcontroller.
A routine such as get_time(), or something similar, is commonly used to access the device timer. Calling it is as simple as reading the seconds counter and printing the resulting value; the library handles the rest of the work of converting that count in seconds into the present time of day and date.
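In Python the same pattern looks like this: time.time() is the raw seconds counter, and the standard library converts it into a calendar date and time of day:
import time

seconds = time.time()            # raw seconds counter (Unix epoch)
now = time.localtime(seconds)    # the library converts it to date and time
print(time.strftime("%Y-%m-%d %H:%M:%S", now))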
If you turn off your RPi, its internal clock stops being maintained. When powered on, it relies on the Network Time Protocol, which requires an internet connection. A real-time clock must therefore be added to the Raspberry Pi to keep time without relying on the internet.
First, we'll need to attach our real-time control module to the RPi board. Ensure the Rpi is deactivated or unplugged before beginning any cabling.
Make use of the links in the following table and figure:
The RTC runs from a 3.3 V supply, so connect its power pin to the Pi's 3.3 V rail, and connect the RTC to the Pi 4 over the I2C bus.
We must first enable I2C in the RPi to use the RTC component.
Open a terminal window and enter the command raspi-config:
sudo raspi-config
Select the Interfacing Option in the configuration tool.
Selecting I2C will allow I2C in the future.
Before rebooting, enable the I2C.
sudo reboot
Confirm the Connections of the RTC. Then using the I2C interface, we can check to see if our real-time clock module is connected to the device.
Ensure your Pi's software is updated before putting any software on it. Defective dependencies in modules are frequently to blame for installation failures.
sudo apt-get update -y
sudo apt-get upgrade -y
To check whether our RPi detects the real-time clock module, we'll need python-smbus and i2c-tools installed.
On the command line:
sudo apt-get install python-smbus i2c-tools
Then:
sudo i2cdetect -y 1
Most real-time clock chips appear at address 0x68. A plain number in the output means no driver is currently using that address; if the address is shown as "UU", a kernel driver is actively using it.
Install pip for Python:
sudo apt-get install python-pip
sudo apt-get install python3-pip
To get the git library, you'll first need to get the git installed on your computer.
$sudo apt install git-all
First, we will have to download the library using the following command in the command line.
sudo git clone https://github.com/switchdoclabs/RTC_SDL_DS3231.git
A folder containing the library should be created after cloning. Copy the following code into a new .py file and save it.
import time
import SDL_DS3231
ds3231 = SDL_DS3231.SDL_DS3231(1, 0x68)
ds3231.write_now()
while True:
    print("Raspberry Pi=\t" + time.strftime("%Y-%m-%d %H:%M:%S"))
    print("DS3231=\t\t%s" % ds3231.read_datetime())
    time.sleep(10.0)
We begin by importing the packages we plan to use for this project.
The clock is initialized.
Next, the RPi and real-time clock module times are printed.
Then, execute the program.
$ python rtc.py
In this case, the output should look something like the following.
The write_all() function can set the RTC to a different time than the Pi's clock.
ds3231.write_all(29,30,4,1,3,12,92,True)
import time
import SDL_DS3231
ds3231 = SDL_DS3231.SDL_DS3231(1, 0x68)
ds3231.write_all(29,30,4,1,3,12,92,True)
while True:
    print("Raspberry Pi=\t" + time.strftime("%Y-%m-%d %H:%M:%S"))
    print("DS3231=\t\t%s" % ds3231.read_datetime())
    time.sleep(10.0)
Time and date are shown to have adjusted on the rtc. With this, we'll be able to use the real-time clock and the package for various projects. However, more setup is required because we'll use the RPi's real-time clock for this project.
Next, we will configure the RTC on the RPi used in this project. The first thing to do is follow the steps outlined above.
The real-time clock sits at address 0x68, so we must use that. The /boot/config.txt file must be edited to add a device tree overlay.
sudo nano /boot/config.txt
Add the line below that matches your real-time clock chip to the Pi's config file:
dtoverlay=i2c-rtc,ds1307
or
dtoverlay=i2c-rtc,pcf8523
or
dtoverlay=i2c-rtc,ds3231
After saving and restarting the Pi, inspect the 0x68 address status.
sudo reboot
After reboot, run:
sudo i2cdetect -y 1
Once the "fake-hwclock" package has been disabled, the real hardware clock can be used.
The commands below should be entered into the terminal to remove the bogus hwclock from use.
sudo apt-get -y remove fake-hwclock
sudo update-rc.d fake-hwclock remove
sudo systemctl disable fake-hwclock
We can use our rtc hardware as our primary clock after disabling the fake hwclock.
sudo nano /lib/udev/hwclock-set
Afterward, comment out the lines below so they look like this:
#if [ -e /run/systemd/system ] ; then
#    exit 0
#fi
#/sbin/hwclock --rtc=$dev --systz --badyear
#/sbin/hwclock --rtc=$dev --systz
We can now run some tests to see if everything is working properly.
To begin with, the real-time clock will report an inaccurate time, so we must set the correct time before using it as our clock source.
To read the current date and time from the real-time clock, run:
sudo hwclock
An internet connection is needed so the Pi can obtain the accurate time, which we will then write to our real-time clock module.
To check the system's current date and time, type:
date
Time can also be manually set using the line below. It's important to know that the internet connection will immediately correct it even if you do this manual process.
date --date="May 26 2022 13:12:10"
The RPi's system time can then be written to the real-time clock module with either of the following:
sudo hwclock --systohc
or
sudo hwclock -w
Setting the time on our real-time clock module is also possible using:
sudo hwclock --set --date "Thu May 26 13:12:10 PDT 2022"
Or
sudo hwclock --set --date "26/05/2022 13:12:45"
Once the time has been synchronized, the real-time clock module needs a battery inserted to continue saving the timestamp. Once the real-time clock module has been installed on the RPi, the timestamp will be updated automatically!
Building our application
Next, we'll build a digital clock that includes an alarm, stopwatch, and timer features. It is written in Python 3, and it will operate on a Raspberry Pi using the Tkinter GUI library.
The library includes the Tkinter Graphical interface framework, which runs on various operating systems and platforms. Cross-platform compatibility means that the code can be used on every platform.
Tkinter is a small, portable, and simple-to-use alternative to other tools available. Because of this, it is the best platform for quickly creating cross-platform apps that don't require a modern appearance.
Python Tkinter module makes it possible to create graphical user interfaces for Python programs.
Tkinter offers a wide range of standard GUI elements and widgets to create user interfaces. Controls and menus are included in this category.
Tkinter inherits all of the benefits of the Tk toolkit, thanks to its layered design. By the time Tkinter was created, Tk's GUI toolkit had already had years to mature, so Tkinter benefited from that experience. As a result, Tk developers can pick up Tkinter very quickly, because converting from Tcl/Tk to Tkinter is straightforward.
Because Tkinter is so user-friendly, getting started with it is a breeze. Tkinter hides the complex, detailed calls behind simple, understandable methods. Python is a natural choice for building a working prototype rapidly, and the same is true of its most popular GUI library.
Tkinter-based Python scripts run on different platforms without changes. Any environment that runs Python can use Tkinter. This gives it a strong advantage over many competing libraries, which are typically limited to one or a handful of operating systems. Tkinter also provides a platform-specific look and feel.
Python distributions now include Tkinter by default. Therefore, it is possible to run commands using Tkinter without additional modules.
Tkinter's slower execution may be due to the multi-layered strategy used in its design. Most computers are relatively quick enough to handle the additional processes in a reasonable period, despite this being an issue for older, slower machines. When time is of the essence, it is imperative to create an efficient program.
Import the following modules.
from tkinter import *
from tkinter.ttk import *
import datetime
import platform
import os                  # used for the spoken/beep alerts on macOS and Linux
import SDL_DS3231          # RTC driver installed earlier
ds3231 = SDL_DS3231.SDL_DS3231(1, 0x68)   # same I2C address as before
We are now going to build a Tkinter window.
window = Tk()
window.title("Clock")
window.geometry('700x500')
Here, we've created a basic Tkinter window, given it the title "Clock", and sized it to 700x500 pixels.
Tkinter notebook can be used to add tab controls. We'll create four new tabs, one for each of the following: Clock, Alarm, Stopwatch, and Timer.
tabs_control = Notebook(window)
clock_tab = Frame(tabs_control)
alarm_tab = Frame(tabs_control)
stopwatch_tab = Frame(tabs_control)
timer_tab = Frame(tabs_control)
tabs_control.add(clock_tab, text="Clock")
tabs_control.add(alarm_tab, text="Alarm")
tabs_control.add(stopwatch_tab, text='Stopwatch')
tabs_control.add(timer_tab, text='Timer')
tabs_control.pack(expand = 1, fill ="both")
We've created a frame for every tab and then added it to our notebook.
We are now going to add the clock's Tkinter components. Instead of relying on the RPi to provide the date and time, we'll use the RTC module's time and date.
We'll include a callback function in the clock code that reads the real-time clock module to obtain the current time.
def clock():
date_time = ds3231.read_datetime()
time_label.config(text = date_time)
time_label.after(1000, clock)
The timestamp is read from the DS3231, shown on the label, and the function re-schedules itself to run again after one second. This method must be placed after Tkinter's initialization but before the notebook is created.
We'll design an Alarm that will activate when the allotted time has expired in the next step.
get_alarm_time_entry = Entry(alarm_tab, font = 'calibri 15 bold')
get_alarm_time_entry.pack(anchor='center')
alarm_instructions_label = Label(alarm_tab, font = 'calibri 10 bold', text = "Enter Alarm Time. Eg -> 01:30 PM, 01 -> Hour, 30 -> Minutes")
alarm_instructions_label.pack(anchor='s')
set_alarm_button = Button(alarm_tab, text = "Set Alarm", command=alarm)
set_alarm_button.pack(anchor='s')
alarm_status_label = Label(alarm_tab, font = 'calibri 15 bold')
alarm_status_label.pack(anchor='s')
The alarm time is entered in the format HH:MM AM/PM; for example, 01:30 PM corresponds to 1:30 in the afternoon. Below the entry is a "Set Alarm" button, and an alarm status label indicates whether the alarm has been set.
The "Set Alarm" button triggers the method below. Place it between the clock method and the notebook setup code.
def alarm():
main_time = datetime.datetime.now().strftime("%H:%M %p")
alarm_time = get_alarm_time_entry.get()
alarm_time1,alarm_time2 = alarm_time.split(' ')
alarm_hour, alarm_minutes = alarm_time1.split(':')
main_time1,main_time2 = main_time.split(' ')
main_hour1, main_minutes = main_time1.split(':')
if int(main_hour1) > 12 and int(main_hour1) < 24:
main_hour = str(int(main_hour1) - 12)
else:
main_hour = main_hour1
if int(alarm_hour) == int(main_hour) and int(alarm_minutes) == int(main_minutes) and main_time2 == alarm_time2:
for i in range(3):
alarm_status_label.config(text='Time Is Up')
if platform.system() == 'Windows':
winsound.Beep(5000,1000)
elif platform.system() == 'Darwin':
os.system('say Time is Up')
elif platform.system() == 'Linux':
os.system('beep -f 5000')
get_alarm_time_entry.config(state='enabled')
set_alarm_button.config(state='enabled')
get_alarm_time_entry.delete(0,END)
alarm_status_label.config(text = '')
else:
alarm_status_label.config(text='Alarm Has Started')
get_alarm_time_entry.config(state='disabled')
set_alarm_button.config(state='disabled')
alarm_status_label.after(1000, alarm)
Here, the current time is read and formatted the same way as the entered alarm time. If the entered time matches the current time, the alarm fires: the status label shows "Time Is Up" and a beep or spoken alert is played using whatever mechanism the operating system provides; afterwards the entry field and button are re-enabled.
Next, we'll add a stopwatch to our code.
First, we'll add all of the stopwatch's Tkinter elements.
stopwatch_label = Label(stopwatch_tab, font='calibri 40 bold', text='Stopwatch')
stopwatch_label.pack(anchor='center')
stopwatch_start = Button(stopwatch_tab, text='Start', command=lambda:stopwatch('start'))
stopwatch_start.pack(anchor='center')
stopwatch_stop = Button(stopwatch_tab, text='Stop', state='disabled',command=lambda:stopwatch('stop'))
stopwatch_stop.pack(anchor='center')
stopwatch_reset = Button(stopwatch_tab, text='Reset', state='disabled', command=lambda:stopwatch('reset'))
stopwatch_reset.pack(anchor='center')
The stopwatch method is activated by pressing the Start, Stop, and Reset Buttons located below the Stopwatch Label.
Next come the stopwatch counters. Two state variables are added first; place them below the Tkinter initialization and the clock method.
stopwatch_counter_num = 66600
stopwatch_running = False
These variables describe the stopwatch's state. Adding the stopwatch counter function is the next step; place it after the alarm method and before the notebook setup code.
def stopwatch_counter(label):
def count():
if stopwatch_running:
global stopwatch_counter_num
if stopwatch_counter_num==66600:
display="Starting..."
else:
tt = datetime.datetime.fromtimestamp(stopwatch_counter_num)
string = tt.strftime("%H:%M:%S")
display=string
label.config(text=display)
label.after(1000, count)
stopwatch_counter_num += 1
count()
The count() function drives the stopwatch: every second, the stopwatch counter is incremented by one and the label is updated.
Next comes the stopwatch() method itself, which is invoked by the stopwatch controls.
def stopwatch(work):
    if work == 'start':
        global stopwatch_running
        stopwatch_running = True
        stopwatch_start.config(state='disabled')
        stopwatch_stop.config(state='enabled')
        stopwatch_reset.config(state='enabled')
        stopwatch_counter(stopwatch_label)
    elif work == 'stop':
        stopwatch_running = False
        stopwatch_start.config(state='enabled')
        stopwatch_stop.config(state='disabled')
        stopwatch_reset.config(state='enabled')
    elif work == 'reset':
        global stopwatch_counter_num
        stopwatch_running = False
        stopwatch_counter_num = 66600
        stopwatch_label.config(text='Stopwatch')
        stopwatch_start.config(state='enabled')
        stopwatch_stop.config(state='disabled')
        stopwatch_reset.config(state='disabled')
We will now create a timer that rings when the countdown reaches zero. It is based on the stopwatch, but it subtracts one from the counter each second instead of adding one.
The timer component will now be included in Tkinter.
timer_get_entry = Entry(timer_tab, font='calibiri 15 bold')
timer_get_entry.pack(anchor='center')
timer_instructions_label = Label(timer_tab, font = 'calibri 10 bold', text = "Enter Timer Time. Eg -> 01:30:30, 01 -> Hour, 30 -> Minutes, 30 -> Seconds")
timer_instructions_label.pack(anchor='s')
timer_label = Label(timer_tab, font='calibri 40 bold', text='Timer')
timer_label.pack(anchor='center')
timer_start = Button(timer_tab, text='Start', command=lambda:timer('start'))
timer_start.pack(anchor='center')
timer_stop = Button(timer_tab, text='Stop', state='disabled',command=lambda:timer('stop'))
timer_stop.pack(anchor='center')
timer_reset = Button(timer_tab, text='Reset', state='disabled', command=lambda:timer('reset'))
timer_reset.pack(anchor='center')
The timer entry accepts a duration in HH:MM:SS format; for instance, 01:30:40 denotes an interval of one hour, thirty minutes, and forty seconds. Below it are Start, Stop, and Reset buttons that call the timer method.
To begin, we'll add two timer state variables. Place the two lines below after the stopwatch variables and the clock method.
timer_counter_num = 66600
timer_running = False
These two variables hold the timer state. Next, we'll implement the timer counter function; put it between the stopwatch function and the notebook initialization.
def timer_counter(label):
    def count():
        global timer_running
        if timer_running:
            global timer_counter_num
            if timer_counter_num == 66600:
                for i in range(3):
                    display = "Time Is Up"
                    if platform.system() == 'Windows':
                        winsound.Beep(5000, 1000)
                    elif platform.system() == 'Darwin':
                        os.system('say Time is Up')
                    elif platform.system() == 'Linux':
                        os.system('beep -f 5000')
                timer_running = False
                timer('reset')
            else:
                tt = datetime.datetime.fromtimestamp(timer_counter_num)
                string = tt.strftime("%H:%M:%S")
                display = string
                timer_counter_num -= 1
            label.config(text=display)
            label.after(1000, count)
    count()
The timer_counter() function drives the timer: it decrements timer_counter_num by one each second and updates the label.
def timer(work):
    if work == 'start':
        global timer_running, timer_counter_num
        timer_running = True
        if timer_counter_num == 66600:
            timer_time_str = timer_get_entry.get()
            hours, minutes, seconds = timer_time_str.split(':')
            minutes = int(minutes) + (int(hours) * 60)
            seconds = int(seconds) + (minutes * 60)
            timer_counter_num = timer_counter_num + seconds
        timer_counter(timer_label)
        timer_start.config(state='disabled')
        timer_stop.config(state='normal')
        timer_reset.config(state='normal')
        timer_get_entry.delete(0, END)
    elif work == 'stop':
        timer_running = False
        timer_start.config(state='normal')
        timer_stop.config(state='disabled')
        timer_reset.config(state='normal')
    elif work == 'reset':
        timer_running = False
        timer_counter_num = 66600
        timer_start.config(state='normal')
        timer_stop.config(state='disabled')
        timer_reset.config(state='disabled')
        timer_get_entry.config(state='normal')
        timer_label.config(text='Timer')
When called with 'start', this function reads the text from the timer entry, converts it to seconds, adds it to the timer counter, and calls timer_counter() to start the countdown. When called with 'stop', it sets timer_running to False. When called with 'reset', it sets the counter back to 66600 and timer_running to False.
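As a quick sanity check of the parsing above, an input of "01:30:30" should add 1*3600 + 30*60 + 30 = 5430 seconds to the counter:

hours, minutes, seconds = "01:30:30".split(':')
# same arithmetic as in timer(): fold hours into minutes, then minutes into seconds
minutes = int(minutes) + int(hours) * 60
total_seconds = int(seconds) + minutes * 60
print(total_seconds)  # 5430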
We are now at the end of the project. Add the code below to start the clock and the Tkinter main loop.
clock()
window.mainloop()
This opens the Tkinter window and starts the clock.
Stopwatch
32-Bit Second Counter Problems
Although such a counter can run for a very long time, it will eventually overflow. A narrow count range wraps around sooner, which can cause problems in long-running applications.
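To put a number on it, here is a rough calculation of how long a 32-bit second counter can run before it wraps around:

# Rough lifetime of a 32-bit second counter before it overflows.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

print(2**31 / SECONDS_PER_YEAR)  # signed 32-bit: about 68 years (the "year 2038" limit)
print(2**32 / SECONDS_PER_YEAR)  # unsigned 32-bit: about 136 years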
Management of the streetlights
A street light management system is a solution that automates the control of lighting in public spaces. It can measure electricity consumption and detect tampering and other electrical faults that prevent street lights from being used efficiently. IoT-based automated street light management systems are intended to cut electricity usage and reduce labor costs through precise, schedule-based control. Street lights are a vital element of any town, since they improve visibility at night, make streets safer, and illuminate public places, but they also waste a significant amount of energy: in manually controlled systems, the lights run at full power from sunset until morning, even when there is adequate ambient light.

An automated system offers high reliability and long-term stability, and compared with the older manual approach it performs better. The switching is handled in software: the on and off operations are scheduled with the RTC module for the period between dusk and the following dawn. Manual operation is dropped because human error and delays in switching the lights on and off on time required sending certified electricians over long distances. Using IoT devices controlled from a central command center and from portable devices avoids these delays, helps identify and correct faults, conserves energy, and provides better service more efficiently.
This tutorial teaches the mechanics of a real-time clock and some major applications in real life. With ease, you can integrate this module into most of your projects that require timed operations, such as robot movements. You will also understand how the RTC works if you play around with it more. In the next tutorial, we will build a stop motion movie system using Raspberry pi 4.
Where To Buy?
| No. | Components | Distributor | Link To Buy |
|---|---|---|---|
| 1 | Raspberry Pi 4 | Amazon | Buy Now |
Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the preceding tutorial, we used a facial recognition system on a Raspberry Pi 4 to build a smart security system, and we learned how to create a dataset using two Python scripts to train and analyze a series of photographs of a particular person. This tutorial will teach you how to install Pi-hole on a Raspberry Pi 4 and use it to block advertisements from anywhere. This is a great project for anyone who is tired of annoying pop-up adverts while browsing. First, we'll install Pi-hole directly without a container, then we'll install it with Docker, and finally we'll see how to access it from anywhere.
Ads with poor quality are all over the internet, causing havoc with the entire user experience. Various varieties of these intrusive ads are available, ranging from video content that takes control of your browser to ads that infect your computer with malware and steal your private information without your knowledge.
So far, using an ad blocker has shown to be an effective method of preventing advertising of this nature. What if you could have an ad blocker that works on all your local area network devices instead?
Using Pi-hole, an ad-blocking program for the raspberry pi computer, you can block all major advertising networks from loading adverts on your networked devices.
You must first install and configure Pi-hole on your RPi before using it.
All internet services, including adverts, rely on DNS requests to get you from source to destination. A DNS server translates a name like theengineeringprojects.com into the server IP address your web browser uses to connect to the site. Ad networks use DNS in exactly the same way to have their advertising served.
When you see a Google ad, your browser is making requests to ad-serving domains run by the ad network to deliver it. Intercepting and blocking those requests stops the adverts from loading, and that's exactly what Pi-hole does.
You can use Pi-hole to operate as a DNS server for your network. Requests to ad networks are initially routed through Pi-hole, which serves as your Domain name server. Thousands of domains on its blocklist are compared to see if any match these. Ads will be prohibited if the website is blocked, allowing you to have an ad-free experience.
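To make the idea concrete, here is a minimal conceptual sketch of a DNS sinkhole. This is not Pi-hole's actual code, and the blocklist entries are hypothetical; it only illustrates the blocked-versus-forwarded decision described above.

import socket

# Hypothetical blocklist entries, for illustration only.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def resolve(domain):
    # Blocked domains get an unroutable answer, so the ad never loads;
    # everything else is looked up normally through the upstream resolver.
    if domain.lower().rstrip(".") in BLOCKLIST:
        return "0.0.0.0"
    return socket.gethostbyname(domain)

print(resolve("ads.example.com"))             # 0.0.0.0
print(resolve("theengineeringprojects.com"))  # the site's real IP address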
Devices with limited ad-blocking options will find this useful. If you want to avoid adverts on your smart TV or gaming console, you'll need a third-party tool such as Pi-hole to handle the job for you.
Pi-hole's web interface allows you to block or allow specific domains, so you can stop suspicious advertising networks or other questionable sites from loading if you need to tailor the system.
Pi-hole can be installed in two ways on a Raspberry Pi and on other Linux distributions such as Debian and Ubuntu. The first is a single-line command that runs the official installation script.
Alternatively, you can use Docker on the Raspberry Pi to isolate Pi-hole in a software container. Docker requires a little more configuration, but it keeps Pi-hole neatly separated from the rest of the system.
The fastest approach to get Pi-hole up and running is to utilize the designer's installation script. Running the script via curl or downloading it manually are two options.
Launch a command line and enter:
sudo curl -sSL https://install.pi-hole.net | bash
The automatic installation script downloads any packages it needs and lets you customize Pi-hole's settings before the setup completes.
Piping a script from the web straight into a shell is generally considered bad practice, because you can't inspect what the script does before running it. A safer alternative is to download the script first and then run it explicitly.
Obtaining the file is as simple as opening a terminal and typing.
wget -O basic-install.sh https://install.pi-hole.net
sudo bash basic-install.sh
This downloads and runs the same installation script, which installs Pi-hole and any other required packages.
At some point during the installation, the terminal-based configuration will ask you to confirm your network interface and the logging level you want.
If you don't want to risk losing the randomly created admin passcode displayed at the end of the installation, make a note of that information. If you lose or forget the passcode, you will have to open a command line and run the below to change the password.
sudo pihole -a -p
A Docker container can be used instead of the script supplied above to manage Pi-hole.
Docker is a free software platform for creating containers. With the rise of cloud-native systems, container delivery has gained popularity as a more efficient means of distributing dispersed applications.
It's possible to develop a container without Docker. However, the framework makes the process easier and more secure to do. Docker is a set of tools that make it easy for programmers to create, deploy, operate, upgrade and terminate containers with a single application programming interface and a few basic instructions.
The Linux kernel has built-in isolation and virtualization features that make containers practical. The operating system provides much of the same functionality a hypervisor uses to let multiple virtual machines share the same CPU and memory.
Containers provide many of the benefits of virtual machines, such as software isolation, low-cost scaling, and disposability, while adding some significant advantages of their own.
Docker provides a shell script that makes it simple to get it up and running.
curl -sSL https://get.docker.com | sh
The following two commands can be used to analyze programs before they are executed:
curl -sSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
The "permission denied" message will appear if you run several Docker commands, indicating that the root account can only use Docker. For non-root users that want to run Docker commands, the following command will work:
sudo usermod -aG docker pi
usermod with -aG adds a user to a group. Here, the user "pi" is added to the "docker" group, which allows pi to run Docker commands. You can substitute any other username for the default pi account.
The command below can be used to confirm this:
groups pi
Check to see if Docker is mentioned as a group.
We will use the hello-world image to see how well Docker works.
docker run hello-world
To tell whether you have installed Docker correctly, you will see "Hello from Docker!"
With the help of Docker-compose, you can create and maintain your environment with YAML files. For applications that require various interconnected services, this is a particularly helpful feature.
To get docker-compose up and running, install it with pip3:
sudo pip3 -v install docker-compose
You'll need to clone the Pi-hole Docker installer from the Pi-hole Git repository to run Pi-hole in a docker container. Let us install Git so that we can get started on this.
Git keeps track of all the modifications you make to scripts, allowing you to roll back to previous versions if necessary. Using Git, many people may merge their contributions into a single source, making it easier for everyone to work together.
You can use Git whether you are writing code that only you will ever see or collaborating with others.
Git program can be executed on a computer's hard drive. You have access to all of your documents and their histories on your computer. Online hosting can also be used to keep a duplicate of the documents and the histories of their modifications. Sharing your modifications and downloading others' updates in a central location makes it easier to collaborate with other programmers. You may even integrate the modifications made by two people working on the same project without losing one another's work, thanks to Git's ability to combine changes automatically.
You can get the Git software for any operating system from the official Git site.
Installing Git from the command-line interface is as simple as typing the commands below into your terminal window:
sudo apt-get install git-all
Type in the following command:
git config --global user.name "TheEngineeringProjects"
Replace the text in quotes with your own name. The --global flag sets this Git username for every repository on your computer.
Type the command below to make sure you typed your username accurately when in doubt.
git config --global user.name
This should return your username:
When pushing code to GitHub, you'll need to provide a valid email address. This can be done in Git.
git config --global user.email "Theengineeringprojects@example.com"
To make sure you've entered your email address correctly, run a final check:
git config --global user.email
In a command window, enter the following commands.
git clone https://github.com/pi-hole/docker-pi-hole.git
This fetches the Pi-hole Docker repository so that Pi-hole can be run in a container. The container's setup script can be found in the cloned directory; look it over and make any necessary changes before executing it.
The script automatically generates an admin passcode for Pi-hole, along with other options such as the time zone Pi-hole uses and the upstream DNS server it uses for outbound DNS queries.
./docker_run.sh
Take note of the passcode displayed when the script has been completed successfully. Please note that this passcode is required for further configuration of the Pi-hole.
A reboot of your Raspberry Pi should restart the Pi-hole service automatically, because the Docker launch script uses the --restart=unless-stopped parameter.
This only works, however, if the Docker daemon itself is enabled to start at boot so that the container is launched automatically on startup.
Your gadgets should be ready to use Pi-hole now that it is installed and running. By default, Pi-hole does not block advertisements on your network; change your devices' DNS settings to point at the Raspberry Pi's IP address and it will start working.
You can either configure each device on the local network to use Pi-hole as its DNS server, or configure your home router to use Pi-hole instead. To configure each device manually, follow the instructions below.
Your Pi-hole admin interface should begin to show web requests from all devices once configured to utilize the Raspberry Pi's Internet address. Launch an ad-supported application or visit a site like Forbes if you want to ensure Pi-hole is restricting advertisements.
Pi-hole should function properly if the adverts are blocked. In some cases, restarting your devices may be necessary for the Domain name server settings to take effect.
Configuring every computer on the local network individually is time-consuming and inefficient.
Instead, point your router's DNS settings at the Raspberry Pi's IP address. That way no advertising is served to any device on the local area network, and you avoid setting the DNS server manually on each device.
To change your Domain name server settings, you will need to know what kind of router you have and what model it is. This information should be displayed on the router or included in the package that came with it.
As a last resort, you might examine your router's guide, use an internet browser, and try out some popular internet protocol addresses.
Internet providers' Domain name servers are typically pre-configured in the router. Ensure the Internet protocol address of the RPi is used as the DNS server.
As a result, all devices connected will be directed to use Pi-hole as the initial gateway for all Domain name server requests. There will be no processing of banned queries, while approved queries will be forwarded to the internet domain name server provider you've selected in your Pi-hole settings.
You may need to reset your router for your new Domain name server configurations to take effect throughout your whole network.
If Pi-hole is up and running, you can log into the administrator interface from any browser by visiting the Pi-hole admin address (http://pi.hole/admin, or your Raspberry Pi's IP address followed by /admin).
If that doesn't work, look up your Raspberry Pi's IP address and use it directly.
Users who haven't logged in to the Pi-hole admin site can still see a summary of the service's statistics. Even though Pi-hole is designed to be a "set it and forget it" solution, any settings changes you need are made here.
The Pi-hole administrator site can be accessed by clicking the Login button in the left navigation. Signing in requires you to use the passcode you set up while installing Pi-hole.
If you ever forget the Pi-hole administrative passcode, launch a command prompt or remote SSH connect and use the following command to change the passcode.
sudo pihole -a -p
or
docker exec -it pihole pihole -a -p
When using Docker containers to execute Pi-hole.
Once you have logged in, you will see all of Pi-hole's features, statistics, and reports. Pi-hole's logs, block and allow lists, and the settings area are all accessible from the left navigation bar.
Most popular advertising networks are blocked by Pi-hole's lists which are frequently updated and managed by individuals and companies.
You can see them by selecting Group Management and then Adlists in the left menu, where you can disable or delete existing lists or add your own.
Block and unblock domains by adding them to, or removing them from, the Blacklisted and Allowed domain lists. Open either menu, enter the domain and an optional description, and click the Add button.
To remove an entry, select the red delete button next to it in the list of entries shown below.
If you've got a Raspberry Pi running Pi-hole, you can use it as the DNS server for your Wi-Fi network. That works great at home, but you must be at home for it to work. If you want to use your Pi-hole to block advertisements from any location, it has to be reachable from anywhere.
Hosting Pi-hole in the cloud is the most obvious way of accomplishing this, but unless you take extra safeguards, malicious actors could exploit your Pi-hole to attack other parts of the network, and you would also have to pay for and maintain a cloud server. Instead, we'll go with the low-cost and simple solution of using our Raspberry Pi with Tailscale.
There is no better way to connect every device safely and conveniently than Tailscale. It gives you the option of selecting a Domain name server.
Tailscale can be used for free in most situations like this by individuals.
The first step is to install apt-transport-https:
sudo apt-get install apt-transport-https
Afterward, add the Tailscale signing key and repository:
curl -fsSL https://pkgs.tailscale.com/stable/raspbian/buster.gpg | sudo apt-key add -
curl -fsSL https://pkgs.tailscale.com/stable/raspbian/buster.list | sudo tee /etc/apt/sources.list.d/tailscale.list
Now install Tailscale with the command below (you may need to run sudo apt-get update first so the new repository is picked up):
sudo apt-get install tailscale
Link your computer to the Tailscale network by authenticating and connecting it there.
sudo tailscale up --accept-dns=false
Pi-hole forwards any DNS queries it cannot answer itself to its upstream DNS servers. Because this Raspberry Pi will act as the DNS server for our network, we pass --accept-dns=false so that it does not try to use the Tailscale-provided DNS, that is, itself, as its own resolver.
Now you're linked up! You can discover your Tailscale Internet protocol address by:
tailscale ip -4
Tailscale's official website has the software you need to get started.
Tailscale's administrator console allows you to set DNS servers for your whole network. Add the Raspberry Pi's Tailscale IP address as a global nameserver.
Make sure to enable the option to override local DNS when entering the Pi's Tailscale address, so this nameserver takes precedence over whatever local DNS settings your devices may have.
The security of your network may require that from time to time, you re-authenticate your devices with Tailscale. You'll want to turn off key expiration on the RPi in the administrator dashboard to prevent Domain name server outages when this happens.
That's all there is to it! Whenever you sign in to Tailscale, the Pi-hole will immediately serve as the Domain name server for that machine.
If the Pi-hole is preventing you from accessing whatever you need, you can deactivate it by deactivating Tailscale and reconnecting whenever you're ready.
If something goes wrong, you'll want to be comfortable with the Pi-hole administrator interface, but you won't need to use it regularly. Advertising networks and trackers will be secured and blocked on all or some of your equipment when your gadgets have been configured with Pi-hole in the background.
Because of Docker, you can run multiple programs on the Raspberry Pi simultaneously, providing a 24-hour DNS for you to access. As we've described in earlier chapters, you can use a Rpi network storage or a Rpi VPN to protect your privacy and anonymity online.
A list of well-known ad-serving domains is produced using data obtained from third parties.
Using internet filtering, it is possible to ban adverts in many platforms, such as mobile applications and smart screens.
Your network connection will perform better because advertisements are blocked before they load.
Use adblocking software with a VPN service for advertisement blocking and data savings.
You can see the number of ads prevented and a query history on the Graphical interface.
Pi-hole does not block every advertisement, but it blocks most of them. At its core, Pi-hole is a DNS server with block and allow lists pre-installed; those lists determine whether a request is answered or refused. Because Pi-hole filters by DNS domain matching, it cannot block ads that are served from the same domains as legitimate traffic. This is common in smartphone games, where ads are built into the app, and it is why you will still see YouTube advertising even with Pi-hole running.
With Pi-hole and a Docker container, we learned how to filter intrusive advertisements from the web. Pages load faster, less data is wasted, and the solution is cost-effective. In the next chapter, we'll develop a voice recognition system using a Raspberry Pi 4.
Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the preceding tutorial, we created a Pi-hole ad blocker for our home network using a Raspberry Pi 4, and we learned how to install Pi-hole on the Raspberry Pi 4 and access it from anywhere with other devices. This tutorial will implement a speech recognition system using the Raspberry Pi and use it in a project. First, we will learn the fundamentals of speech recognition, then we will build a game controlled by the user's voice and discover how it all works with a speech recognition package.
Here, you'll learn:
Where To Buy?
| No. | Components | Distributor | Link To Buy |
|---|---|---|---|
| 1 | Raspberry Pi 4 | Amazon | Buy Now |
Are you curious about how to incorporate speech recognition into a Python program? When it comes to doing voice recognition in Python, there are a few things you need to know first. I'm not going to overwhelm you with the technical details, because that would take up an entire book. Modern voice recognition technology has come a long way: systems can now recognize multiple speakers and handle extensive vocabularies in many languages.
Voice is the first element of speech recognition. A mic and an analog-to-digital converter are required to turn speech into an electronic signal and digital data. The audio can be converted to text using various models once it has been digitized.
Hidden Markov models (HMMs) are used in most modern voice recognition programs. The underlying assumption is that, over a sufficiently short timescale, an audio signal can reasonably be treated as a stationary process.
In a conventional HMM pipeline, the audio signal is broken into roughly 10-millisecond frames. Each frame's spectrum is mapped to a vector of real numbers called cepstral coefficients; the dimension of this vector typically ranges from about 10 to 32, depending on the desired accuracy. These vectors are what the HMM actually works with.
Training is required because the sound of a phoneme varies from speaker to speaker, and even between utterances by the same speaker. A decoding algorithm then determines the most probable word to have produced the given sequence of phonemes.
This entire process could be computationally costly, as one might expect. Before HMM recognition, feature transformations and dimension reduction methods are employed in many current speech recognition programs. It is also possible to limit an audio input to only those parts which are probable to include speech using voice detectors. As a result, the recognizer does not have to waste time studying sections of the signal that aren't relevant.
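As a rough illustration of the first step, the snippet below (an illustrative sketch with an assumed 16 kHz sample rate and random data standing in for real audio, not part of any recognizer) slices one second of audio into the 10-millisecond frames an HMM-based recognizer typically works on:

import numpy as np

sample_rate = 16000                      # assumed sample rate
signal = np.random.randn(sample_rate)    # stand-in for one second of audio
frame_len = int(0.010 * sample_rate)     # 10 ms -> 160 samples per frame

# Trim to a whole number of frames and reshape into (num_frames, frame_len).
usable = len(signal) // frame_len * frame_len
frames = signal[:usable].reshape(-1, frame_len)
print(frames.shape)  # (100, 160): one hundred 10 ms frames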
There are a handful of speech recognition packages available on PyPI.
Some of these packages use natural language processing to determine a user's intent, which goes beyond simple speech recognition. Others, such as Google Cloud Speech, focus purely on speech-to-text conversion.
SpeechRecognition is the most user-friendly of all the packages.
Voice recognition necessitates audio input, which SpeechRecognition makes a cinch. SpeechRecognition will get you up to speed in minutes rather than requiring you to write your code for connecting mics and interpreting audio files.
Since it wraps a variety of common speech application programming interfaces, this SpeechRecognition package offers a high degree of extensibility. The SpeechRecognition library is a fantastic choice for every Python project because of its flexibility and ease of usage. The APIs it encapsulates may or may not be able to support every feature. For SpeechRecognition to operate in your situation, you'll need to research the various choices.
You've decided to give SpeechRecognition a go, so now you need to get it installed in your environment.
Using pip, you may set up Speech Recognition software in the terminal:
$ pip install SpeechRecognition
When you've completed the setup, you should start a command line window and type:
import speech_recognition as sr
sr.__version__
Let's leave this window open for now. Soon enough, you'll be able to use it.
If you only need to deal with pre-existing audio recordings, Speech Recognition will work straight out of the box. A few prerequisites are required for some use cases, though. In particular, the PyAudio library must record audio from a mic.
As you continue reading, you'll discover which components you require. For the time being, let's look at the package's fundamentals.
The recognizer is at the heart of Speech Recognition's magic.
Naturally, the fundamental function of a Recognizer class is to recognize spoken words and phrases. Each instance has a wide range of options for identifying voice from the input audio.
The process of setting up a Recognizer is straightforward; just type the following in your active interpreter window:
r = sr.Recognizer()
The Recognizer class has seven methods for recognizing speech from an audio source, each one using a different speech recognition API (for example, recognize_google() and recognize_sphinx()).
Of these, only recognize_sphinx() works offline, using CMU Sphinx; the other six methods require an internet connection.
This tutorial does not cover every capability of every API in detail. SpeechRecognition comes with a default API key for the Google Web Speech API, which lets you get up and running with that service immediately, so this tutorial uses the Web Speech API extensively. The other six APIs require authentication with an API key or a username and password.
The default key is provided for testing purposes only, and Google may revoke it at any time, so using it in a production setting is not recommended. Even with your own API key there is no way to raise the daily request quota. Still, once you have learned how to use SpeechRecognition, applying it to any of your projects is straightforward.
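If you do register for your own Google Speech API key, you can pass it to recognize_google() explicitly instead of relying on the bundled default key. A minimal sketch, where the key value is a hypothetical placeholder and audio is an AudioData instance:

# The key below is a placeholder, not a real credential.
r.recognize_google(audio, key="YOUR_GOOGLE_SPEECH_API_KEY", language="en-US")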
Whenever a recognize_*() method fails, it raises an exception: a RequestError if the API is unavailable. In the case of recognize_sphinx(), a faulty Sphinx installation can cause this; for the six online methods, a RequestError is raised when quotas are exceeded, servers are unreachable, or there is no internet connection.
Let's use recognize_google() in our interpreter window and see if it works!
Exactly what has transpired?
Something like this is most likely what you've gotten.
I'm sure you could have foreseen this. How is it possible to tell something from nothing?
Each of the Recognizer's recognize_*() methods expects an audio_data argument, which must be an instance of SpeechRecognition's AudioData class.
To construct an AudioData instance, you have two options: you can either use an audio file or record your audio. We'll begin with audio files because they're simpler to work with.
To proceed, you must first obtain and save an audio file. Use the same location where your Python interpreter is running to store the file.
Speech Recognition's AudioFile interface allows us to work with audio files easily. As a context manager, this class gives the ability to access the information of an audio file by providing a path to its location.
This software supports various file formats, which include:
You'll need to get a hold of the FLAC command line and a FLAC encoding tool.
To play the "har.wav" file, enter the following commands into your interpreter window:
har = sr.AudioFile('har.wav')
with har as source:
    audio = r.record(source)
The context manager opens the file and reads its contents, storing the data in the AudioFile instance called source. The record() method then records the data from the entire file into an AudioData instance. Verify this by checking the type of audio:
type(audio)
You can now use recognize_google() to see if any voice can be found in the audio file. You might have to wait a few seconds for the output to appear, based on the speed of your broadband connection.
r.recognize_google(audio)
Congratulations! You've just finished your very first audio transcription!
Within the "har.wav" file, you'll find instances of Har Phrases if you're curious. In 1965, the IEEE issued these phrases to evaluate telephone lines for voice intelligibility. VoIP and telecom testing continue to make use of them nowadays.
Seventy-two lists of 10 phrases are included in the Har Phrases. On the Open Voice Repository webpage, you'll discover a free recording of these words and phrases. Each language has its own set of translations for the recordings. Put your code through its paces; they offer many free resources.
You may want to capture only a portion of the speech in a file. The record() method accepts a duration keyword argument that stops the recording after the specified number of seconds.
For example, the following captures the first four seconds of the file for transcription.
with har as source:
audio = r.record(source, duration=4)
r.recognize_google(audio)
When record() is used inside a with block, it keeps its position in the file stream. This means that if you record four seconds and then record another four seconds, the second call returns the four seconds of audio that follow the first four, not the same audio again.
with har as source:
audio1 = r.record(source, duration=4)
audio2 = r.record(source, duration=4)
r.recognize_google(audio1)
r.recognize_google(audio2)
As you can see, audio2 contains a portion of the third phrase. When a duration is specified, the recording can stop mid-phrase or even mid-word, which hurts the accuracy of the transcription. More on this in a moment.
In addition to a duration, the record() method accepts an offset keyword argument, which specifies how many seconds of the file to skip before recording.
with har as source:
audio = r.record(source, offset=4, duration=3)
r.recognize_google(audio)
If you know the structure of the speech in the audio beforehand, the offset and duration keyword arguments can be used to segment a recording. Used carelessly, however, they can hurt the transcription. To see this effect, try the following command in your interpreter:
with har as source:
audio = r.record(source, offset=4.7, duration=2.8)
r.recognize_google(audio)
Because the recording started in the middle of the phrase "it takes heat to bring out the odor," the API received only "akes heat," which it matched to "Mesquite."
The recording also ran into the beginning of the third phrase, capturing "a co," which the API matched to "Aiko."
Noise is another cause of inaccurate transcriptions. The examples above all worked because the audio is relatively clean; in the real world, you can't expect noise-free audio unless you can process the recordings in advance.
Noise is an unavoidable part of everyday existence. All audiotapes have some noise level, and speech recognition programs can suffer if the noise isn't properly handled.
To hear how noise can impair speech recognition, download the "jackhammer.wav" audio sample and save it to your interpreter session's working directory.
The sound of a jackhammer is heard in the background while the words "the stale scent of old beer remains" are spoken.
Try to translate this file and see what unfolds.
jackhammer = sr.AudioFile('jackhammer.wav')
with jackhammer as source:
    audio = r.record(source)
r.recognize_google(audio)
How wrong!
So, how do you deal with this? One thing you can try is the Recognizer class's adjust_for_ambient_noise() method.
with jackhammer as source:
    r.adjust_for_ambient_noise(source)
    audio = r.record(source)
r.recognize_google(audio)
You're getting closer, but it's still not quite there yet. In addition, the statement's first word is missing: "the." How come?
adjust_for_ambient_noise() reads the first second of the audio stream and calibrates the recognizer to its noise level. That portion of the stream is therefore consumed before you call record().
You can change the analysis window with the duration keyword argument of adjust_for_ambient_noise(). The default is one second; let's reduce it to half a second.
with jackhammer as source:
    r.adjust_for_ambient_noise(source, duration=0.5)
    audio = r.record(source)
r.recognize_google(audio)
Now you've got a whole new set of problems to deal with after getting "the" at the start of the sentence. There are times when the noise can't be removed from the signal because it simply has a lot of noise to cope with. That's the case in this particular file.
These problems may necessitate some sound pre-processing if you encounter them regularly. Audio editing programs, which can add filters to the audio, can be used to accomplish this. For the time being, know that background noise can cause issues and needs to be handled to improve voice recognition accuracy.
When working with noisy files, it can help to look at the full API response. Most APIs return a JSON structure containing several possible transcriptions, but recognize_google() returns only the most likely one unless you explicitly request more.
You can do this by passing the boolean show_all argument to recognize_google().
r.recognize_google(audio, show_all=True)
recognize_google() then returns a dictionary whose "alternative" key contains a list of possible transcripts. The structure of this response varies between APIs, and it is mainly useful for debugging.
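As a hedged sketch of how you might inspect those alternatives (the exact fields can vary between responses):

response = r.recognize_google(audio, show_all=True)
# If nothing was recognized the result may be empty; otherwise a dict with
# an "alternative" list of candidate transcripts is returned.
if response:
    for alternative in response.get("alternative", []):
        print(alternative.get("transcript"), alternative.get("confidence"))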
As you've seen, the Speech Recognition software has a lot to offer. Aside from gaining expertise with the offsets and duration arguments, you also learned about the harmful effects noise has on transcription accuracy.
The fun is about to begin. Make your project dynamic by using a mic instead of transcribing audio clips that don't require any input from the user.
To capture input from a microphone with SpeechRecognition, you must install the PyAudio library.
Use the command below to install pyaudio in raspberry pi:
sudo apt-get install python-pyaudio python3-pyaudio
Using the console, you can verify that PyAudio is working properly.
python -m speech_recognition
Ensure your mic is turned on and unmuted. This is what you'll see if everything went according to plan:
Let SpeechRecognition translate your voice by talking into your mic and discovering its accuracy.
The recognizer class should be created in a separate interpreter window.
import speech_recognition as sr
r = sr.Recognizer()
Instead of an audio file, you'll now use the system microphone as your input. Do this by creating an instance of the Microphone class:
mic = sr.Microphone()
On the Raspberry Pi, you may need to provide a device index to select a specific microphone. To get a list of available microphones, call the Microphone class's static method:
sr.Microphone.list_microphone_names()
Keep in mind that the results may vary from those shown in the examples.
The device index is simply the position of the microphone in the list returned by list_microphone_names(). For example, to use the "front" microphone, which appears at index 3 in that output, you would create the Microphone instance like this:
mic = sr.Microphone(device_index=3)
A Mic instance is ready, so let's get started recording.
Like AudioFile, Microphone is a context manager. Inside the with block, capture input with the Recognizer's listen() method, which takes an audio source as its first argument and records input from that source until silence is detected.
with mic as source:
audio = r.listen(source)
Try saying "hi" into your mic once you've completed the block. Please be patient as the interpreter prompts reappear. Once you hear the ">>>" prompt again, you should be able to hear the voice.
r.recognize_google(audio)
If the prompt never reappears, your mic is probably picking up too much background noise. Press Ctrl+C to interrupt execution and get your prompt back.
To deal with the noise level, use the Recognizer's adjust_for_ambient_noise() method, just as you did with the noisy audio file. It is wise to do this whenever you listen for microphone input, since that is even less predictable than an audio file source.
with mic as source:
r.adjust_for_ambient_noise(source)
audio = r.listen(source)
After running the code above, wait for adjust_for_ambient_noise() to finish, then speak "hello" into the microphone. Again, wait for the interpreter prompt to return before trying to recognize the speech.
Remember that adjust_for_ambient_noise() analyzes the audio source for one second by default. You can shorten this with the duration keyword argument if necessary.
The SpeechRecognition documentation recommends a duration of no less than 0.5 seconds. In some situations longer durations work better, and the lower the ambient noise, the smaller the value you can get away with. Unfortunately, this detail is often left out of tutorials; in my experience the default one-second duration is sufficient for most purposes.
Using your interpreter, type in the above code snippet and mutter anything nonsensical into the mic. You may expect a response such as this:
An UnknownValueError exception is thrown if the application programming interface cannot translate speech into text. You must always encapsulate application programming interface requests in try and except statements to address this problem.
Getting the exception thrown may take more effort than you imagine. The API tries very hard to transcribe any vocal sound: for me, even the tiniest noises were transcribed as words like "how," and a cough, a hand clap, or a tongue click would raise the exception.
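A minimal sketch of that try-and-except pattern, assuming sr is the imported speech_recognition module and r is the Recognizer created earlier, might look like this:

def safe_transcribe(audio):
    # Wrap the API call so failures don't crash the program.
    try:
        return r.recognize_google(audio)
    except sr.RequestError:
        return None  # API unreachable or quota exceeded
    except sr.UnknownValueError:
        return None  # speech was unintelligible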
To put what you've learned from the SpeechRecognition library into practice, develop a simple game that randomly selects a phrase from a set of words and allows the player three tries to guess it.
Listed below are all of the scripts:
import random
import time

import speech_recognition as sr


def recognize_speech_from_mic(recognizer, microphone):
    if not isinstance(recognizer, sr.Recognizer):
        raise TypeError("`recognizer` must be `Recognizer` instance")
    if not isinstance(microphone, sr.Microphone):
        raise TypeError("`microphone` must be `Microphone` instance")

    with microphone as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)

    response = {
        "success": True,
        "error": None,
        "transcription": None
    }

    try:
        response["transcription"] = recognizer.recognize_google(audio)
    except sr.RequestError:
        response["success"] = False
        response["error"] = "API unavailable"
    except sr.UnknownValueError:
        response["error"] = "Unable to recognize speech"

    return response


if __name__ == "__main__":
    WORDS = ["apple", "banana", "grape", "orange", "mango", "lemon"]
    NUM_GUESSES = 3
    PROMPT_LIMIT = 5

    recognizer = sr.Recognizer()
    microphone = sr.Microphone()

    word = random.choice(WORDS)

    instructions = (
        "I'm thinking of one of these words:\n"
        "{words}\n"
        "You have {n} tries to guess which one.\n"
    ).format(words=', '.join(WORDS), n=NUM_GUESSES)

    print(instructions)
    time.sleep(3)

    for i in range(NUM_GUESSES):
        for j in range(PROMPT_LIMIT):
            print('Guess {}. Speak!'.format(i+1))
            guess = recognize_speech_from_mic(recognizer, microphone)
            if guess["transcription"]:
                break
            if not guess["success"]:
                break
            print("I didn't catch that. What did you say?\n")

        if guess["error"]:
            print("ERROR: {}".format(guess["error"]))
            break

        print("You said: {}".format(guess["transcription"]))

        guess_is_correct = guess["transcription"].lower() == word.lower()
        user_has_more_attempts = i < NUM_GUESSES - 1

        if guess_is_correct:
            print("Correct! You win!".format(word))
            break
        elif user_has_more_attempts:
            print("Incorrect. Try again.\n")
        else:
            print("Sorry, you lose!\nI was thinking of '{}'.".format(word))
            break
Let's analyze this a little bit further.
The recognize_speech_from_mic() function takes a Recognizer and a Microphone instance as inputs and returns a dictionary with three keys. The first key, "success", indicates whether the API request succeeded. The second key, "error", is either None or a message indicating that the API was unavailable or that the speech was unintelligible. Finally, the "transcription" key contains the transcription of the audio captured by the microphone.
A TypeError is raised if the recognition system or mic parameters are invalid:
Using the listen() function, the mic's sound is recorded.
Each time recognize_speech_from_mic() is called, the recognizer is recalibrated with the adjust_for_ambient_noise() method.
After that, recognize_google() is called to transcribe any speech in the captured audio. The try and except block catches RequestError and UnknownValueError exceptions and handles them accordingly. The function then returns the dictionary containing the API request's success flag, any error message, and the transcribed speech.
In an interpreter window, execute the following code to see if the function works as expected:
import speech_recognition as sr
from guessing_game import recognize_speech_from_mic
r = sr.Recognizer()
m = sr.Microphone()
recognize_speech_from_mic(r, m)
The actual gameplay is quite simple. First, a list of words, the maximum number of allowed guesses, and a prompt limit are declared.
Next, Recognizer and Microphone instances are created, and a random word is chosen from WORDS.
After printing some instructions, a for loop handles each of the user's attempts at guessing the chosen word. Inside it, another loop prompts the user up to PROMPT_LIMIT times, trying to recognize each attempt and storing the returned dictionary in the local variable guess.
If guess["transcription"] is not empty, the user's speech was transcribed and the inner loop ends with a break. If nothing was transcribed and guess["success"] is False, an API error occurred, and the loop again terminates with a break. Otherwise, the API request succeeded but the speech was unintelligible, so the user is warned and given another chance by the for loop.
Once the inner loop ends, the guess dictionary is checked for errors. If any are present, an error message is printed and a break exits the outer for loop, ending the program.
The transcription is then compared with the randomly chosen word. The lower() string method is used on both so that capitalization doesn't matter; whether the API returns "Apple" or "apple," it still matches the word "apple."
If the user's guess was correct, the game ends and they win. If the guess was incorrect and the user has attempts remaining, the outer loop continues and a fresh guess is obtained; otherwise, the user loses the game.
This is what you'll get when you run the program:
Speech recognition in languages other than English is entirely doable and incredibly simple.
To use recognize_google() in another language, set the language keyword argument to the appropriate language tag, for example 'fr-FR' for French:
r = sr.Recognizer()
with sr.AudioFile('path/to/audiofile.wav') as source:
audio = r.record(source)
r.recognize_google(audio, language='fr-FR')
Only a few of the recognize_*() methods accept a language keyword argument.
Do you ever have second thoughts about how you're going to pay for future purchases? Has it occurred to you that, in the future, you may be able to pay for goods and services simply by speaking? There's a good chance that will happen soon! Several companies are already developing voice commands for money transfers.
This system allows you to speak a one-time passcode rather than entering a passcode before buying the product. When it comes to online security, think of captchas and other one-time passwords that are read aloud. This is a considerably better option than reusing a password every time. Soon, voice-activated mobile banking will be widely used.
When driving, you may use such Intelligent systems to get navigation, perform a Google search, start a playlist of songs, or even turn on the lights in your home without touching your gadget. These digital assistants are programmed to respond to every voice activation, regardless of the user.
There are new technologies that enable Ai applications to recognize individual users. This tech, for instance, allows it to respond to the voice of a certain person exclusively. Using an iPhone as an example, it's been around for a few years now. If you want Siri to only respond to your commands and queries when you speak to it, you can do so on your iPhone. Unauthorized access to your gadgets, information, and property is far less possible when your voice can only activate your Artificial intelligent assistant. Anyone who is not permitted to use the assistant will not be able to activate it. Other uses for this technology are almost probably on the horizon.
In a distant place, imagine attempting to check into an unfamiliar hotel. Since neither you nor the front desk employee is fluent in the other country's language, no one is available to act as a translator. You can use the translator device to talk into the microphone and have your speech processed and translated verbally or graphically to communicate with another person.
Additionally, this tech can benefit multinational enterprises, educational institutions, or other institutions. You can have a more productive conversation with anyone who doesn't speak your language, which helps break down the linguistic barrier.
You have seen how to install the SpeechRecognition package and use its Recognizer class to recognize speech from both audio files and microphone input, and how to extract segments of a recording using the offset and duration keyword arguments of record().
You have also seen how the adjust_for_ambient_noise() method helps the recognizer cope with noise, and how to handle the RequestError and UnknownValueError exceptions that Recognizer instances can raise using try and except blocks.
More can be learned about speech recognition than what you've just read. We will implement the RTC module integration in our upcoming tutorial to enable real-time control.
Hello readers, I hope you all are doing great. In this tutorial, we will learn how to interface the PIR sensor to detect motion with the Raspberry Pi Pico module and MicroPython programming language. Later in this tutorial, we will also discuss the interrupts and how to generate an external interrupt with a PIR sensor.
Before interfacing and programming, the PIR and Pico boards let’s first have a look at the quick introduction to the PIR sensor and its working.
Fig. 1 Raspberry Pi Pico and PIR sensor
PIR stands for passive infrared sensor, and the PIR module we are using is the HC-SR501. As the name suggests, a passive infrared sensor produces a TTL (transistor-transistor logic) output (either HIGH or LOW) in response to incoming infrared radiation. The HC-SR501 module features a pair of pyroelectric sensors that detect heat energy in the surrounding environment. The two sensors sit beside each other, and when motion changes the signal differential between them, the module drives its output pin HIGH. In the code, therefore, you wait for the pin to go high and then call the desired function.
In the PIR module, a fresnel lens is used to focus all the incoming infrared radiation to the PIR sensor.
Fig. 2 PIR motion sensor
The PIR motion sensor has a few setting options available to control or change its behaviour.
Two potentiometers are available in the HC-SR501 module as shown in the image attached below (Fig. 3). Sensitivity will be one of the options. So, one of the potentiometers is used to control the sensing range or sensitivity of the module. The sensitivity can be adjusted based on the installation location and project requirements. The second potentiometer (or the tuning option) is to control the delay time. Basically, this specifies how long the detection output should be active. It can be set to turn on for as little as a few seconds or as long as a few minutes.
Fig. 3 HC-SR501 PIR sensor module
Thermal sensing applications, such as security and motion detection, make use of PIR sensors. They're frequently used in security alarms, motion detection alarms, and automatic lighting applications.
Some of the basic technical specifications of HC-SR501 (PIR) sensor module are:
Table: 1 HC-SR501 technical specification
Fig. 4 Hardware components required
Table: 2 Interfacing HC-SR501 and Pico
Fig. 5 Interfacing PIR with Pico module
Before writing the MicroPython program, make sure that you have installed an integrated development environment (IDE) to program the Pico board for interfacing with the PIR sensor module.
There are multiple development environments available to program the Raspberry Pi Pico (RP2040) with MicroPython programming language like VS Code, uPyCraft IDE, Thonny IDE etc.
In this tutorial, we are using Thonny IDE with the MicroPython programming language (as mentioned earlier). We already published a tutorial on how to install the Thonny IDE for Raspberry Pi Pico Programming.
Now, let’s write the MicroPython program to interface the PIR (HC-SR501) and Pico modules and implement motion detection with Raspberry Pi Pico:
The first task is importing the necessary libraries and classes. To connect the data (OUT) pin of the PIR sensor module with Raspberry Pi Pico we can use any of the GPIO pins of the Pico module. So, here we are importing the ‘Pin’ class from the ‘machine’ library to access the GPIO pins of the Raspberry Pi Pico board.
Secondly, we are importing the ‘time’ library to access the internal clock of RP2040. This time module is used to add delay in program execution or between some events whenever required.
Fig. 6 Importing necessary libraries
Next, we are declaring some objects. The ’led’ object represents the GPIO pin to which the LED is connected (representing the status of PIR output) and the pin is configured as an output.
The ‘PirSensor’ object represents the GPIO pin to which the ‘OUT’ pin of HC-SR501 is to be connected which is GPIO_0. The pin is configured as input and pulled down.
Fig. 7 Object declaration
A ‘motion_det()’ function is defined to check the status of the PIR sensor and generate an event in response.
The status of the PIR sensor is observed using the ‘PirSensor.value()’ command. The default status of GPIO_0 is LOW or ‘0’ because it is pulled down. We are using a LED to represent the status of the PIR sensor. Whenever a motion is detected the LED will change its state and will remain in that state for a particular time interval.
If motion is detected, the GPIO_0 pin reads HIGH ('1'), the corresponding status is printed to the Shell, and the LED connected to GPIO_25 is driven HIGH for 3 seconds. Otherwise, the "no motion" status is printed on the Shell.
Fig. 8 creating a function
Here we are using the ‘while’ loop to continuously run the motion detection function. So, the PIR sensor will be responding to the infrared input continuously with the added delay.
Fig. 9 Main loop
# importing necessary libraries
from machine import Pin
import time

# Object declaration
led = Pin(25, Pin.OUT, Pin.PULL_DOWN)
PirSensor = Pin(0, Pin.IN, Pin.PULL_DOWN)

def motion_det():
    if PirSensor.value() == 1:  # status of PIR output
        print("motion detected")  # print the response
        led.value(1)
        time.sleep(3)
    else:
        print("no motion")
        led.value(0)
        time.sleep(1)

while True:
    motion_det()
Fig. 10 Enable Shell
Fig. 11 Output on Shell
Fig. 12 Motion detected with LED ‘ON’
Now let’s take another example where we will discuss the interrupts with Raspberry Pi Pico.
Interrupts are useful in two situations. In the first, a microcontroller is executing a task or a sequence of dedicated tasks while also monitoring for an event, and must run another routine when that event occurs. Instead of continuously polling for the event, the microcontroller can jump directly to the new task whenever an interrupt occurs, putting the regular task on hold. This avoids wasting memory and energy.
Fig. 13 Interrupt
In the second case, the microcontroller executes a task only when an interrupt occurs; otherwise it remains in standby or low-power mode (as instructed).
In this example we implement the second case, using the PIR sensor to generate the interrupt: the Raspberry Pi Pico executes the assigned task only after receiving an interrupt request.
Interrupts can either be external or an internal one. Internal interrupts are mostly software generated for example timer interrupts. On the other hand, external interrupts are mostly hardware generated for example using a push button, motion sensor, temperature sensor, light detector etc.
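For comparison, a minimal sketch of an internal (timer) interrupt on the Pico is shown below. This is only an illustration and not part of the project code; the 1000 ms period and the printed message are arbitrary choices.
# A minimal sketch of an internal (timer) interrupt on the Pico, for comparison
from machine import Timer

def tick(timer):
    print("timer interrupt fired")

tim = Timer(period=1000, mode=Timer.PERIODIC, callback=tick)  # fires every 1000 ms
# tim.deinit()  # stop the timer when it is no longer needed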
In this example, we are using the PIR sensor to generate an external interrupt. Whenever motion is detected, a particular group of LEDs turns ON (HIGH) while the rest of the LEDs stay OFF (LOW). A servo motor is also interfaced with the Raspberry Pi Pico board; the motor starts rotating once an interrupt is detected.
We have already published a tutorial on interfacing a servo motor with Raspberry Pi Pico. You can follow our site for more details.
Fig. 14 Schematic_2
Now let's write the MicroPython program to generate an external interrupt for the Raspberry Pi Pico with the PIR sensor.
As in our previous example, the first task is importing the necessary libraries and classes. The modules are the same as in the previous example, except for the 'PWM' one.
The 'PWM' class from the 'machine' library is used to generate the PWM signal for the servo motor interfaced with the Raspberry Pi Pico board.
Fig. 15 importing libraries
In this example, we are using three different components: a PIR sensor, a servo motor, and some LEDs. Objects are declared for each component. The object 'ex_interrupt' represents the GPIO pin to which the PIR sensor is connected; the pin is configured as an input and pulled down.
The second object represents the GPIO pin to which the servo motor is connected. The 'led_x' objects represent the GPIO pins to which the peripheral LEDs are connected. Here we are using six peripheral LEDs.
Fig. 16 Object declaration
Fig. 17 PIR output status
Next, we define an interrupt handler function. The parameter 'Pin' in the function represents the GPIO pin that caused the interrupt.
Inside the handler, the variable 'pir_output' is set to 'True'; the task in the while loop is executed only when this flag shows that an interrupt has occurred.
Fig. 18 Interrupt handling function
Interrupt is attached to GPIO_0 pin represented with ‘ex_interrupt’ variable. The interrupt will be triggered on the rising edge.
Fig. 18 Attaching interrupt
In the function defined to change the position of the servo motor, we use the pulse-width modulation technique to change the servo position/angle. The motor rotates to 180 degrees and then back to 0 degrees.
Fig. 19 defining function for servo
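For reference, at the 50 Hz frequency used here the PWM period is 20 ms, and 'duty_u16()' expects a value out of 65535. The short sketch below is only an illustration of how an angle could be converted into a duty value; the 0.5 ms to 2.5 ms pulse range is an assumption that varies between servos, which is also why the program sweeps the raw range 1000 to 9000 instead.
# Rough sketch (assumptions: 50 Hz PWM, a servo that accepts 0.5 ms - 2.5 ms pulses)
def angle_to_duty(angle):
    pulse_ms = 0.5 + (angle / 180) * 2.0  # 0 deg -> 0.5 ms, 180 deg -> 2.5 ms
    return int(pulse_ms / 20.0 * 65535)   # fraction of the 20 ms period, scaled to 16 bits

# Example: move to 90 degrees
# pwm.duty_u16(angle_to_duty(90))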
This is the function where we call all the previously defined functions; each function is executed in its assigned sequence whenever an interrupt is detected.
Fig. 20
The MicroPython code to generate an external interrupt with PIR sensor for Raspberry Pi Pico is attached below:
# importing necessary libraries
from machine import Pin, PWM
import time

# Object declaration PIR, PWM and LED
ex_interrupt = Pin(0, Pin.IN, Pin.PULL_DOWN)
pwm = PWM(Pin(1))
led1 = Pin(13, Pin.OUT)
led2 = Pin(14, Pin.OUT)
led3 = Pin(15, Pin.OUT)
led4 = Pin(16, Pin.OUT)
led5 = Pin(17, Pin.OUT)
led6 = Pin(18, Pin.OUT)

# PIR output status
pir_output = False

# setting PWM frequency at 50Hz
pwm.freq(50)

# interrupt handling function
def intr_handler(Pin):
    global pir_output
    pir_output = True

# attaching interrupt to GPIO_0
ex_interrupt.irq(trigger=Pin.IRQ_RISING, handler=intr_handler)

# defining LED blinking functions
def led_blink_1():
    led1.value(1)
    led3.value(1)
    led5.value(1)
    led2.value(0)
    led4.value(0)
    led6.value(0)
    time.sleep(0.5)

def led_blink_2():
    led1.value(0)
    led3.value(0)
    led5.value(0)
    led2.value(1)
    led4.value(1)
    led6.value(1)
    time.sleep(0.5)

def servo():
    for position in range(1000, 9000, 50):   # changing angular position
        pwm.duty_u16(position)
        time.sleep(0.00001)                  # delay
    for position in range(9000, 1000, -50):
        pwm.duty_u16(position)
        time.sleep(0.00001)                  # delay

def motion_det():
    global pir_output
    if pir_output:                  # status of PIR output
        print("motion detected")    # print the response
        led_blink_1()
        servo()                     # rotate servo motor (180 degree)
        time.sleep(0.5)
        pir_output = False          # reset the flag for the next interrupt
    else:
        print("no motion")
        led_blink_2()

while True:
    motion_det()
The results observed are attached below:
Fig. 21 Output printed on Shell
Fig. 22 Motion Detected
Fig. 23 No motion detected
In this tutorial, we discussed how to interface the HC-SR501 PIR sensor with the Raspberry Pi Pico and detect motion using the Thonny IDE and the MicroPython programming language. We also discussed interrupts and how to generate them using the HC-SR501 sensor.
This concludes the tutorial. I hope you found this of some help and also hope to see you soon with a new tutorial on Raspberry Pi Pico programming.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|---|---|---|---|
| 1 | Breadboard | Amazon | Buy Now |
| 2 | DC Motor | Amazon | Buy Now |
| 3 | Jumper Wires | Amazon | Buy Now |
| 4 | Raspberry Pi 4 | Amazon | Buy Now |
A growing number of us already use face recognition software without realizing it. Facial recognition is used in several applications, from basic Facebook tag suggestions to advanced security screening surveillance. Chinese schools have begun employing facial recognition to track students' attendance and behaviour. Retail stores use face recognition to classify their clients and identify those who have a history of crime. There's no denying that this tech will be all over soon, especially with so many other developments in the works.
When it comes to facial recognition, biometric authentication goes well beyond simply being able to identify human faces in images or videos. An additional step is taken to establish the person's identity: facial recognition software compares an image of a person's face to a database to see whether the features match a known person. The technology is built so that facial expressions and hairstyles do not affect its ability to identify matches.
How can face recognition be used when it comes to smart security systems?
The first thing you should do if you want to make your home "smart" is to focus on security. Your most prized possessions are housed at this location, and protecting them is a must. You can monitor your home security status from your computer or smartphone thanks to a smart security system when you're outdoors.
Traditionally, a security company would install a wired system in your house and sign you up for professional monitoring. The plot has been rewritten: when setting up a smart home system, you can even do it yourself. In addition, your smartphone acts as a professional monitor, providing you with real-time information and notifications.
Face recognition is the ability of a smart camera in your house to identify a person based on their face. Consequently, you will have to tell the algorithm which face goes with which name for face recognition to operate. Facial detection in security systems necessitates the creation of user profiles for family members, acquaintances, and others you want the system to identify. You will then be alerted when they arrive at your door or move around inside your house.
Face-recognition technology allows you to create specific warning conditions. For example, you can configure a camera to inform you when an intruder enters your home with a face the camera doesn't recognize.
Astonishing advancements in smart tech have been made in recent years. Companies are increasingly offering automatic locks with face recognition. You may open your doors just by smiling at a face recognition system door lock. You could, however, use a passcode or a real key to open and close the smart door. You may also configure your smart house lock to email you an emergency warning if someone on the blacklist tries to unlock your smart security door.
OpenCV, as previously stated, will be used to identify and recognize faces. So, before continuing, let's set up the OpenCV library. Your Pi 4 needs a 2A power adapter and an HDMI cable because we won't be able to access the Pi's screen through SSH. The OpenCV documentation is a good place to learn how image processing works, but I'm not going to go into it here.
pip is well-known for making it simple to add new libraries to the python language. In addition, there is a technique to install OpenCV on a Raspberry Pi via PIP, but it didn't work for me. We can't obtain complete control of the OpenCV library when using pip to install OpenCV; however, this might be worth a go if time is of the essence.
Ensure pip is set up on your Raspberry Pi. Then, one by one, execute the lines of code listed below into your terminal.
sudo apt-get install libhdf5-dev libhdf5-serial-dev
sudo apt-get install libqtwebkit4 libqt4-test
sudo pip install opencv-contrib-python
Facial recognition and face detection are not the same things, and this must be clarified before we proceed. With face detection, the program simply detects that a face is present; it has no clue who that person is. With facial recognition, the program not only detects the face but also recognizes whose face it is. At this point, it's pretty evident that face detection comes before facial recognition. To explain how OpenCV recognizes a person or other objects, I will have to go into some detail.
Essentially, a webcam feed is like a long series of continuously updating still photos. And every image is nothing more than a jumble of pixels with varying values arranged in a specific order. So, how does a computer program identify a face among all of these pixels? Describing the underlying techniques is outside the scope of this post, but since we're utilizing the OpenCV library, facial recognition is a straightforward process that doesn't necessitate a deeper understanding of the underlying principles.
We can only recognize a person after a face has been detected. For detecting an object, including a face, OpenCV provides Classifiers: pre-trained datasets that may be utilized to recognize a certain item, such as a face. Classifiers may also detect additional objects, such as the mouth, the eyebrows, the number plate of a vehicle, and smiles.
Alternatively, OpenCV allows you to design your custom Classifier for detecting any objects in images by retraining the cascade classifier. For the sake of this tutorial, we'll be using the classifier named "haarcascade_frontalface_default.xml" to identify faces from the camera. We'll learn more about image classifiers and how to apply them in code in the following sections.
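As a quick illustration of what a classifier does before we move on, the short sketch below loads the cascade, detects faces in a single still image, and draws boxes around them. It is only a sketch: the test photo name "sample.jpg" and the output file name are assumptions, and the XML file is assumed to sit in the same folder.
# A minimal face-detection sketch (detection only, no recognition)
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
img = cv2.imread('sample.jpg')                    # assumed test photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # classifiers work on grayscale
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
print("Faces found:", len(faces))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('sample_detected.jpg', img)           # save a copy with the detections drawn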
For the face training and detection, we only need the pi camera, and to install this, insert the raspberry pi camera in the pi camera slot as shown below. Then go to your terminal, open the configuration window using "sudo raspi-config", and press enter. Navigate to the interface options and activate the pi camera module. Accept the changes and finish the setup. Then reboot your RPi.
First, ensure pip is set up, and then install the following packages using it.
Install dlib: Dlib is a set of libraries for building ML and data analysis programs in the real world. To get dlib up and running, type the following command into your terminal window.
pip install dlib
If everything goes according to plan, you should see something similar after running this command.
Install pillow: The Python Image Library, generally known as PIL, is a tool for opening, manipulating, and saving images in various formats. The following command will set up PIL for you.
pip install pillow
You should receive the message below once this app has been installed.
Install face_recognition: The face recognition package is often the most straightforward tool for detecting and manipulating human faces. Face recognition will be made easier with the help of this library. Installing this library is as simple as running the provided code.
pip install face_recognition --no-cache-dir
If all goes well, you should see something similar to the output shown below after the software is installed. Due to its size, I used the "--no-cache-dir" command-line option to install the package without keeping any of its cache files.
The file named "haarcascade_frontalface_default.xml" is the Classifier used for detecting faces. The training script will also build a "face-trainner.yml" file based on the photos found in the face images directory.
The face images folder indicated above should contain subdirectories with the names of each person to be identified and several sample photographs of them. Esther and x have been identified for this tutorial. As a result, I've just generated the two sub-directories shown below, each containing a single image.
You must rename the sub-directories with the names of the people you are identifying and replace the sample photographs with theirs. It appears that a minimum of five images for each individual is optimal. However, the more people you add, the slower the software will run.
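Before training, it can help to confirm that the folder layout is what the trainer expects. The optional sketch below simply walks the Face_Images folder and prints how many usable photos each sub-directory contains; it assumes the same folder name used later in the trainer and is not part of the original project code.
# Optional check: count usable images per person in the Face_Images folder
import os

Face_Images = os.path.join(os.getcwd(), "Face_Images")
counts = {}
for root, dirs, files in os.walk(Face_Images):
    person = os.path.basename(root)
    n = sum(1 for f in files if f.lower().endswith((".jpeg", ".jpg", ".png")))
    if n:
        counts[person] = n
for person, n in counts.items():
    print(person, "->", n, "image(s)")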
Face Trainer.py is a Python program that may be used to train new faces. The purpose of the program is to access the face photographs folder and scan for faces. As soon as it detects a face, it crops it, converts it to grayscale, and saves the trained data in a file named face-trainner.yml. The information in this file is used to identify the faces later. In addition to the whole trainer program provided at the end, we'll go over some of the more critical lines.
The first step is to import the necessary modules. The cv2 package is used for image processing, the NumPy library for converting images into numerical arrays, the os package for directory navigation, and PIL for opening and manipulating the photos.
import cv2
import numpy as np
import os
from PIL import Image
Ensure that the XML file in question is located in the project directory to avoid encountering an issue. The LBPH Facial recognizer is then constructed using the recognizer parameter.
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.createLBPHFaceRecognizer()
Face_Images = os.path.join(os.getcwd(), "Face_Images")
In order to open all of the files ending in .jpeg, .jpg, or .png within every subfolder of the face images folder, we traverse the tree with for loops. In a variable named path, we record the path to every image, and in a variable named person_name, we store the folder name (the name of the person the images belong to).
for root, dirs, files in os.walk(Face_Images):
    for file in files:  # check every file in each directory
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)
Whenever the person's name changes, we increment a variable named Face_ID, which gives each individual a unique Face_ID.
if pev_person_name != person_name:
    Face_ID = Face_ID + 1  # If yes, increment the ID count
    pev_person_name = person_name
Because the BGR values may be ignored, grayscale photos are simpler for OpenCV to deal with than colourful ones. We convert the image to grayscale and then resize it so that all the pictures are the same size. To avoid having your face cut out, place it in the centre of the photo. We then convert the image into a NumPy array to get a numerical representation of it. Afterwards, the classifier identifies faces in the photo and saves the results in a variable named faces.
Gery_Image = Image.open(path).convert("L")
Crop_Image = Gery_Image.resize( (550,550) , Image.ANTIALIAS)
Final_Image = np.array(Crop_Image, "uint8")
faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)
Our region of interest (ROI) is the portion of the image where the face is found after cropping; this is what the face recognizer is trained on. Every ROI face is appended to a variable named x_train. We then feed the recognizer with our training data using the ROI values and the Face_ID data, and the trained information is saved to disk.
for (x, y, w, h) in faces:
    roi = Final_Image[y:y+h, x:x+w]
    x_train.append(roi)
    y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.save("face-trainner.yml")
You'll notice that the face-trainner.yml file is rewritten whenever you run this program. If you make any modifications to the photographs in the Face Images folder, ensure you run this code again. For debugging purposes, the Face_ID, the path, the person's name, and the NumPy array are printed out.
We can begin using our trained data to identify people now that it has been prepared. We'll use a USB webcam or pi camera to feed video into the Face recognizer application, turning it into an image. Once we've found the faces in those images, we'll find similarities to all of our previously developed Face IDs. Finally, we output the identified person’s name in boxes around their face. Afterwards, the whole program is presented, and the explanation is below.
Import the required module from the training program and use the classifier because we need to do more facial detection in this program.
import cv2
import numpy as np
import os
from time import sleep
from PIL import Image
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.createLBPHFaceRecognizer()
The people listed in the folder should be entered in the variable named labels; make sure they are listed in the same order as their Face IDs. In my case, they are "Esther" and "Unknown".
labels = ["Esther", "Unknown"]
We need the trainer file to detect faces, so we import it into our software.
recognizer.load("face-trainner.yml")
The camera provides the video stream. It's possible to access any second pi camera by replacing 0 with 1.
cap = cv2.VideoCapture(0)
In the next step, we grab a frame from the footage, transform it into grayscale, and then search for faces in it. Once a face is detected, we crop it out to obtain the grayscale region of interest.
ret, img = cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
for (x, y, w, h) in faces:
    roi_gray = gray[y:y+h, x:x+w]
    id_, conf = recognizer.predict(roi_gray)
The conf value tells us how confident the program is in its identification. We write the code below to get the person's name based on their identification number, draw a rectangle around the face, and write the name next to it.
if conf >= 80:
    font = cv2.FONT_HERSHEY_SIMPLEX
    name = labels[id_]
    cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 0, 255), 2)
We then display the frame we just evaluated and break out of the video stream when the 'q' key is pressed.
cv2.imshow('Preview', img)
if cv2.waitKey(20) & 0xFF == ord('q'):
    break
While running this application, ensure the Raspberry is linked to a display via HDMI. A display with your video stream and the name will appear when you open the application. There will be a box around the face identified in the video feed, and if your software recognizes the face, it displays that person’s name. As evidenced by the image below, we've trained our software to identify my face, which shows the recognition process in action.
import cv2
import numpy as np
import os
from PIL import Image

labels = ["Esther", "Unknown"]
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.createLBPHFaceRecognizer()
recognizer.load("face-trainner.yml")
cap = cv2.VideoCapture(0)

while(True):
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)  # Recog. faces
    for (x, y, w, h) in faces:
        roi_gray = gray[y:y+h, x:x+w]
        id_, conf = recognizer.predict(roi_gray)
        if conf >= 80:
            font = cv2.FONT_HERSHEY_SIMPLEX
            name = labels[id_]
            cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
            cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow('Preview', img)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
import cv2
import numpy as np
import os
from PIL import Image

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.createLBPHFaceRecognizer()

Face_ID = -1
pev_person_name = ""
y_ID = []
x_train = []

Face_Images = os.path.join(os.getcwd(), "Face_Images")
print(Face_Images)

for root, dirs, files in os.walk(Face_Images):
    for file in files:
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)
            print(path, person_name)
            if pev_person_name != person_name:
                Face_ID = Face_ID + 1
                pev_person_name = person_name
            Gery_Image = Image.open(path).convert("L")
            Crop_Image = Gery_Image.resize((550, 550), Image.ANTIALIAS)
            Final_Image = np.array(Crop_Image, "uint8")
            faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)
            print(Face_ID, faces)
            for (x, y, w, h) in faces:
                roi = Final_Image[y:y+h, x:x+w]
                x_train.append(roi)
                y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.save("face-trainner.yml")
Since the "How to operate DC motor in Rpi 4" guide has covered the basics of controlling a DC motor, I won't provide much detail here. Please read this topic if you haven't already. Check all the wiring before using the batteries in your circuit, as outlined in the image above. Everything must be in place before connecting your breadboard's power lines to the battery wires.
To activate the motors, open the terminal: we'll write the Python code with the command-line text editor called Nano. For those of you who aren't familiar with Nano, I'll show you some of its commands as we go.
This code will activate the motor for two seconds, so try it out.
import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BOARD)
Motor1A = 16
Motor1B = 18
Motor1E = 22

GPIO.setup(Motor1A, GPIO.OUT)
GPIO.setup(Motor1B, GPIO.OUT)
GPIO.setup(Motor1E, GPIO.OUT)

print("Turning motor on")
GPIO.output(Motor1A, GPIO.HIGH)
GPIO.output(Motor1B, GPIO.LOW)
GPIO.output(Motor1E, GPIO.HIGH)

sleep(2)

print("Stopping motor")
GPIO.output(Motor1E, GPIO.LOW)
GPIO.cleanup()
The first two lines of code tell Python what the program needs.
The RPi.GPIO package is what the first line is looking for. The RPi GPIO pins are controlled by this module, which takes care of all the grunt work.
The sleep import is used to pause the script for a few seconds, which is what lets the motor run for a while before it is stopped.
The GPIO.setmode(GPIO.BOARD) call tells the library to use the RPi's physical board pin numbering. We then tell Python that pins 16, 18 and 22 correspond to the motor driver.
Pin A is used to steer the L293D in one way, and pin B is used to direct it in the opposite direction. You can turn on the motor using an Enable pin, referred to as E, inside the test file.
Finally, use GPIO.OUT to inform the RPi that all these are outputs.
The RPi is ready to turn the motor after the software is set up. After a 2-second pause, some pins will be turned on and subsequently turned off, as seen in the code.
Save and quit by hitting CTRL-X, and a confirmation notice appears at the bottom. To acknowledge, tap Y and Return. You can now run the program in the terminal and watch as the motor begins to spin up.
sudo python motor.py
If the motor doesn't move, check the cabling or power supply. The debug process might be a pain, but it's an important phase in learning new things!
I'll teach you how to reverse a motor's rotation to spin in the opposite direction.
There's no need to touch the wiring at this point; it's all Python. To accomplish this, create a new script called motorback.py. Using Nano, type the command:
sudo nano motorback.py
Please type in the given program:
import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BOARD)
Motor1A = 16
Motor1B = 18
Motor1E = 22

GPIO.setup(Motor1A, GPIO.OUT)
GPIO.setup(Motor1B, GPIO.OUT)
GPIO.setup(Motor1E, GPIO.OUT)

print("Going forwards")
GPIO.output(Motor1A, GPIO.HIGH)
GPIO.output(Motor1B, GPIO.LOW)
GPIO.output(Motor1E, GPIO.HIGH)
sleep(2)

print("Going backwards")
GPIO.output(Motor1A, GPIO.LOW)
GPIO.output(Motor1B, GPIO.HIGH)
GPIO.output(Motor1E, GPIO.HIGH)
sleep(2)

print("Now stop")
GPIO.output(Motor1E, GPIO.LOW)
GPIO.cleanup()
Save by pressing CTRL, then X, then Y, and finally Enter key.
To make the motor run in reverse, the script sets Motor1A low and Motor1B high.
Programmers use the terms "high" and "low" to denote the state of being on or off, respectively.
Motor1E will be turned off to halt the motor.
Irrespective of what A is doing, the motor can be turned on or off using the Enable pin.
Take a peek at the Truth Table to understand better what's going on.
When Enable is high, only two input states make the motor move: either A or B is high, but not both at the same time.
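For reference, a typical L293D truth table for one channel (wired as above, with A and B as the inputs and E as Enable) looks like this:

| Enable (E) | Input A | Input B | Motor |
|---|---|---|---|
| HIGH | HIGH | LOW | Turns one way |
| HIGH | LOW | HIGH | Turns the other way |
| HIGH | LOW | LOW | Stops |
| HIGH | HIGH | HIGH | Stops |
| LOW | Any | Any | Off |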
At this point, we have designed our face detection system and the DC motor control circuit; now, we will put the two systems to work together. When the user is verified, the DC motor should run to open the CD-ROM drive and close it after a few seconds.
In our verify code, we will copy the code below to spin the motor in one direction to "open the door" when the user is verified. We will also increase the time to 5 seconds to simulate the time the door takes to open for the user to get through. This also allows the motor to spin long enough to open and close the CD-ROM tray completely. I would also recommend putting a stopper on the CD-ROM door so that it doesn't close all the way and get stuck.
if conf >= 80:
    font = cv2.FONT_HERSHEY_SIMPLEX
    name = labels[id_]  # Get the name from the List using ID number
    cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
    # place our motor code here
    GPIO.setmode(GPIO.BOARD)
    Motor1A = 16
    Motor1B = 18
    Motor1E = 22
    GPIO.setup(Motor1A, GPIO.OUT)
    GPIO.setup(Motor1B, GPIO.OUT)
    GPIO.setup(Motor1E, GPIO.OUT)
    print("Opening")
    GPIO.output(Motor1A, GPIO.HIGH)
    GPIO.output(Motor1B, GPIO.LOW)
    GPIO.output(Motor1E, GPIO.HIGH)
    sleep(5)
    print("Closing")
    GPIO.output(Motor1A, GPIO.LOW)
    GPIO.output(Motor1B, GPIO.HIGH)
    GPIO.output(Motor1E, GPIO.HIGH)
    sleep(5)
    print("stop")
    GPIO.output(Motor1E, GPIO.LOW)
    GPIO.cleanup()
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
An individual's biometric identity can be verified by looking at various physical and behavioural characteristics, such as a person's fingerprint, keystrokes, facial characteristics, and voice. Face recognition seems to be the winner because of its precision, simplicity, and contactless detection.
Face-recognition technology will continue and will get better over time. The tale has evolved, and your alternatives have grown due to smart tech.
Using an RPi as a surveillance system means you can take it with you and use it wherever you need it.
For the most part, the face-recognition software employed in security systems can reliably assess whether or not the individual attempting entry matches your record of those authorized to enter. On the other hand, certain computer programs are more precise when it comes to identifying faces from diverse angles or different countries.
Concerned users may be relieved to learn that some programs have the option of setting custom confidence criteria, which can significantly minimize the likelihood of the system giving false positives. Alternatively, 2-factor authentication can be used to secure your account.
When your smart security system discovers a match between a user and the list of persons you've given access to, it will instantly let them in. Answering the doorbell or allowing entry isn't necessary.
Face recognition solutions can be readily integrated into existing systems using an API.
A major drawback of face recognition technology is that it puts people's privacy at risk. Having one's face collected and stored in an unidentified database does not sit well with the average person.
Confidentiality is so important that several towns have prohibited law enforcement from using real-time face recognition monitoring. Rather than using live face recognition software, authorities can use records from privately-held security cameras in certain situations.
Having your face captured and stored by face recognition software might make you feel monitored and assessed for your actions. It is a form of criminal profiling since the police can use face recognition to put everybody in their databases via a virtual crime lineup.
This article walked us through creating a complete smart security system with a facial recognition program from the ground up. Our model can now recognize faces with the help of OpenCV image manipulation techniques. There are several ways to further your knowledge of supervised machine learning with the Raspberry Pi 4, including adding an alarm that rings whenever an individual's face is not recognized, or creating a database of known faces to act like a CCTV surveillance system. We'll design a security system with a motion detector and an alarm in the next session.
First, we will design a database for our website, then we will design the RFID circuit for scanning the student cards and displaying present students on the webpage, and finally, we will design the website that we will use to display the attendees of a class.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|---|---|---|---|
| 1 | Breadboard | Amazon | Buy Now |
| 2 | Jumper Wires | Amazon | Buy Now |
| 3 | LCD 16x2 | Amazon | Buy Now |
| 4 | LCD 16x2 | Amazon | Buy Now |
| 5 | Raspberry Pi 4 | Amazon | Buy Now |
The database server offers a DBMS that can be queried, connected to, and integrated with a wide range of platforms. High-volume production environments are no problem for this software. The server's connectivity, speed, and encryption make it a good choice for accessing the database.
There are clients and servers for MySQL. This system contains a SQL server with many threads that support a wide range of back ends, utility programs, and application programming interfaces.
We'll walk through the process of installing MySQL on the RPi in this part. The RFID kit's database resides on this server, and we'll utilize it to store the system's registered users.
There are a few steps before we can begin installing MySQL on a Raspberry Pi. First, update the package list and upgrade the installed packages with the two commands below.
sudo apt update
sudo apt upgrade
Installing the server software is the next step.
Here's how to get MySQL running on the RPi using the command below:
sudo apt install mariadb-server
Having installed MySQL on the Raspberry Pi, we'll need to protect it by creating a passcode for the "root" account.
If you don't specify a password for your MySQL server, you can access it without authentication.
Using this command, you may begin safeguarding MySQL.
sudo mysql_secure_installation
Follow the on-screen instructions to set a passcode for the root account and safeguard your MySQL database.
To ensure a more secured installation, select "Y" for all yes/no questions.
Remove elements that make it easy for anyone to access the database.
We may need that password to access the server and set up the database and user for applications like PHPMyAdmin.
For now, you can use this command if you wish to access the Rpi's MySQL server and begin making database modifications.
sudo mysql -u root -p
To access MySQL, you'll need to enter the root user's password, which you created in Step 3.
Note: The text will not appear as you type, just as in typical Linux password prompts.
Create, edit, and remove databases with MYSQL commands now available. Additionally, you can create, edit, and delete users from inside this interface and provide them access to various databases.
After typing "quit;" into MySQL's user interface, you can exit the command line by pressing the ESC key.
Pressing CTRL + D will also exit the MYSQL command line.
You may proceed to the next step now that you've successfully installed MySQL. In the next few sections, we'll discuss how to get the most out of our database.
Before we can create a username and database on the RPi, we need to reopen the MySQL command prompt.
The MySQL command prompt can be accessed by typing the following command. After creating the "root" user, you will be asked for the password.
To get things started, run the command to create a MySQL database.
The statement to create a database is "CREATE DATABASE", followed by the name we would like to give it.
This database will be referred to as "rfidcardsdb" in this example.
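Putting that together, the full statement to run at the MySQL prompt is:
CREATE DATABASE rfidcardsdb;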
To get started, we'll need to create a MySQL user. The command below can be used to create this new user.
"rfidreader" and "password" will be the username and password for this example. Take care to change these when making your own.
CREATE USER 'rfidreader'@'localhost' IDENTIFIED BY 'password';
Now that the user has been created, we can grant it full access to the database.
Thanks to this command, "rfidreader" will have access to all tables in our "rfidcardsdb" database.
GRANT ALL PRIVILEGES ON rfidcardsdb.* TO 'rfidreader'@'localhost' IDENTIFIED BY 'password';
To complete the database and user setup, we have to flush the privileges table one last time; the new grants will not take effect properly without it.
The command below can be used to accomplish this.
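FLUSH PRIVILEGES;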
Now we have our database configured; the next step is to set up our RFID circuit and begin authenticating users. Enter the "exit" command to close the database configuration process.
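One detail worth noting: the scripts later in this tutorial read from and write to a table named "cardtbl" with the columns card_id, user_id, serial_no and valid, but its creation is not shown in this excerpt. A minimal sketch of creating it from Python with pymysql is given below; the column types are assumptions rather than the original schema, so adjust them to your needs.
# Hedged sketch: create the "cardtbl" table the later scripts expect
import pymysql

sql_con = pymysql.connect(host='localhost', user='rfidreader', passwd='password', db='rfidcardsdb')
sqlcursor = sql_con.cursor()
sqlcursor.execute("""
    CREATE TABLE IF NOT EXISTS cardtbl (
        card_id INT AUTO_INCREMENT PRIMARY KEY,
        user_id VARCHAR(50),
        serial_no VARCHAR(50),
        valid VARCHAR(1)
    )
""")
sql_con.commit()
sql_con.close()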
An RFID reader reads the data of the tag attached to a certain object. An RFID tag communicates with the reader via radio waves.
In theory, RFID is comparable to bar codes in that it uses radio frequency identification. While a reader's line of sight to the RFID tag is preferable, it is not required to be directly scanned by the reader. You can't read an RFID tag that is more than three feet away from the reader. To quickly scan a large number of objects, the RFID tech is used, and this makes it possible to identify a specific product rapidly and effortlessly, even if it is sandwiched between several other things.
Cards and tags have two major parts: an IC that holds the unique identifier value, and a coil of copper wire that serves as the antenna:
Another coil of copper wire can be found inside the RFID card reader. When current passes through this coil, it generates a magnetic field. The magnetic flux from the reader creates a current within the wire coil whenever the card is swiped near the reader. This amount of current can power the inbuilt IC of the Card. The reader then reads the card's unique identifying number. For further processing, the card reader transmits the card's unique identification number to the controller or CPU, such as the Raspberry Pi.
Connect the reader to the Raspberry the following way:
Run the command below and check that the spi_bcm2835 module appears in the output:
lsmod | grep spi
SPI must be enabled in the Raspberry Pi configuration for spi_bcm2835 to appear (see above). Also make sure the RPi is running the most recent software.
Next, make sure Python is installed:
sudo apt-get install python
The RFID RC522 is interfaced through the SPI-Py library, which we clone and install on the RPi:
cd ~
git clone https://github.com/lthiery/SPI-Py.git
cd ~/SPI-Py
sudo python setup.py install
cd ~
git clone https://github.com/pimylifeup/MFRC522-python.git
To test if the system is functioning correctly, let's write a small program:
cd ~/
sudo nano test.py
Now copy the following code into the editor:
import RPi.GPIO as GPIO
import sys
sys.path.append('/home/pi/MFRC522-python')
from mfrc522 import SimpleMFRC522

reader = SimpleMFRC522()
print("Hold a tag near the reader")
try:
    id, text = reader.read()
    print(id)
    print(text)
finally:
    GPIO.cleanup()
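Save and exit Nano, then run the script (assuming it was saved as test.py as above) and hold a tag near the reader; its ID and any stored text should be printed:
sudo python test.py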
Here we will write a short python code to register users whenever they swipe a new card on the RFID card reader. First, create a file named addcard.py.
Then copy the following code:
import pymysql
import cv2
from mfrc522 import SimpleMFRC522
import RPi.GPIO as GPIO
import drivers

display = drivers.Lcd()
display.lcd_display_string('Scan your', 1)
display.lcd_display_string('card', 2)

reader = SimpleMFRC522()
id, text = reader.read()

display = drivers.Lcd()
display.lcd_display_string('Type your name', 1)
display.lcd_display_string('in the terminal', 2)
user_id = input("user name?")

# put serial_no uppercase just in case
serial_no = '{}'.format(id)

# open an sql session
sql_con = pymysql.connect(host='localhost', user='rfidreader', passwd='password', db='rfidcardsdb')
sqlcursor = sql_con.cursor()

# first thing is to check if the card exists
sql_request = 'SELECT card_id,user_id,serial_no,valid FROM cardtbl WHERE serial_no = "' + serial_no + '"'
count = sqlcursor.execute(sql_request)
if count > 0:
    print("Error! RFID card {} already in database".format(serial_no))
    display = drivers.Lcd()
    display.lcd_display_string('The card is', 1)
    display.lcd_display_string('already registered', 2)
    T = sqlcursor.fetchone()
    print(T)
else:
    sql_insert = 'INSERT INTO cardtbl (serial_no,user_id,valid) ' + \
                 'values("{}","{}","1")'.format(serial_no, user_id)
    count = sqlcursor.execute(sql_insert)
    if count > 0:
        sql_con.commit()
        # let's check it just in case
        count = sqlcursor.execute(sql_request)
        if count > 0:
            print("RFID card {} inserted to database".format(serial_no))
            T = sqlcursor.fetchone()
            print(T)
            display = drivers.Lcd()
            display.lcd_display_string('Congratulations', 1)
            display.lcd_display_string('You are registered', 2)

GPIO.cleanup()
The program starts by asking the user to scan the card.
Then it connects to the database using the pymysql.connect function.
If we enter our name successfully, the program inserts our details in the database, and a congratulations message is displayed to show that we are registered.
To install the LCD driver library, run:
sudo apt install git
cd /home/pi/
git clone https://github.com/the-raspberry-pi-guy/lcd.git
cd lcd/
sudo ./install.sh
After installation is complete, you can try running one of the demo program files from the library folder:
cd /home/pi/lcd/
Next, we will install the mfrc522 library, which the RFID card reader uses. This will enable us to read the card number for authentication. We will use:
pip install mfrc522
Next, we will import the RPi.GPIO library, which enables us to utilize the Raspberry Pi pins to power the RFID card reader and the LCD screen.
import RPi.GPIO as GPIO
We will also import the drivers for our LCD screen. The LCD screen used here is the I2C 16 * 2 LCD.
import drivers
Then we will import the datetime module for logging the time the user swiped the card into the system.
import datetime
In order to read the card using the RFID reader, we use the following code:
reader = SimpleMFRC522()
display = drivers.Lcd()
display.lcd_display_string('Hello Please', 1)
display.lcd_display_string('Scan Your ID', 2)
try:
    id, text = reader.read()
    print(id)
    display.lcd_clear()
finally:
    GPIO.cleanup()
The LCD is divided into two rows, 1 and 2. To display text in the first row, we use:
display.lcd_display_string("string", 1)
And 2 to display in the second row.
After scanning the card, we will connect to the database we created earlier and search whether the scanned card is in the database or not.
If the query finds the card in the database, we display a logged-in message; if not, the user needs to register the card first.
If the user is registered, the system saves the log (the username and the time the card was swiped) in a text file located in the /var/www/html root directory of the Apache server.
Note that you will need to be a superuser to create the data.txt file in the apache root directory. For this, we will use the following command in the Html folder:
sudo touch data.txt
Then we have to change the access privileges of this data.txt file so that the program can write the log data to it. For this, we will use the following command:
sudo chmod -R 777 data.txt
The next step will be to display this data on a webpage to simulate an online attendance register. The code for the RFID card can be found below.
#! /usr/bin/env python
# Import necessary libraries for communication and display use
import RPi.GPIO as GPIO
from mfrc522 import SimpleMFRC522
import pymysql
import drivers
import os
import numpy as np
import datetime

# read the card using the rfid reader
reader = SimpleMFRC522()
display = drivers.Lcd()
display.lcd_display_string('Hello Please', 1)
display.lcd_display_string('Scan Your ID', 2)
try:
    id, text = reader.read()
    print(id)
    display.lcd_clear()
    # Load the driver and set it to "display"
    # If you use something from the driver library use the "display." prefix first
    try:
        sql_con = pymysql.connect(host='localhost', user='rfidreader', passwd='password', db='rfidcardsdb')
        sqlcursor = sql_con.cursor()
        # first thing is to check if the card exists
        cardnumber = '{}'.format(id)
        sql_request = 'SELECT user_id FROM cardtbl WHERE serial_no = "' + cardnumber + '"'
        now = datetime.datetime.now()
        print("Current date and time: ")
        print(str(now))
        count = sqlcursor.execute(sql_request)
        if count > 0:
            print("already in database")
            T = sqlcursor.fetchone()
            print(T)
            for i in T:
                print(i)
                file = open("/var/www/html/data.txt", "a")
                file.write(i + " Logged at " + str(now) + "\n")
                file.close()
                display.lcd_display_string(i, 1)
                display.lcd_display_string('Logged In', 2)
        else:
            display.lcd_clear()
            display.lcd_display_string("Please register", 1)
            display.lcd_display_string(cardnumber, 2)
    except KeyboardInterrupt:
        # If there is a KeyboardInterrupt (when you press ctrl+c), exit the program and cleanup
        print("Cleaning up!")
        display.lcd_clear()
finally:
    GPIO.cleanup()
Now we are going to design a simple website with HTML on which we will display the information of the attending students of a class; to do this, we have to install a local server on our Raspberry Pi.
Web, database, and mail servers all run on various server software. Each of these programs can access and utilize files located on a physical server.
A web server's main responsibility is to provide internet users access to various websites. It serves as a bridge between a server and a client machine to accomplish this. Each time a user makes a request, it retrieves data from the server and posts it to the web.
A web server's largest issue is to simultaneously serve many web users, each of whom requests a separate page.
For internet users, it converts the requested files to HTML pages and serves them to the browser. Whenever you hear the term "web server", think of the device in charge of ensuring successful communication in a network of computers.
Among its responsibilities is establishing a link between a server and a client's web browser (such as Chrome) to send and receive data (the client-server structure). As a result, the Apache software can be used on any platform, from Microsoft Windows to Unix.
Visitors to your website, such as those who wish to view your homepage or "About Us" page, request files from your server via their browser, and Apache returns the required files in a response (text, images, etc.).
Using HTTP, the client and server exchange data with the Apache webserver, ensuring that the connection is safe and stable.
Because of its open-source foundation, Apache promotes a great deal of customization. As a result, web developers and end-users can customize the source code to fit the needs of their respective websites.
Additional server-side functionality can be enabled or disabled using Apache's numerous modules. Encryption, password authentication, and other capabilities are all available as Apache modules.
To begin, use the following code to upgrade the Pi package list.
sudo apt-get update
sudo apt-get upgrade
After that, set up the Apache2 package.
sudo apt install apache2 -y
That concludes our discussion. You can get your Raspberry Pi configured with a server in just two easy steps.
Type the code below to see if the server is up and functioning.
sudo service apache2 status
You can now verify that Apache is operating by entering your Raspberry Pi's IP address into an internet browser and seeing a simple page like this.
Use the following command in the console of your Raspberry Pi to discover your IP.
hostname -I
Only your home network and not the internet can access the server. You'll need to configure your router's port forwarding to allow this server to be accessed from any location. Our blog will not be discussing this topic.
The standard web page on the Raspberry Pi, as depicted above, is nothing more than an HTML file. First, we will generate our first Html document and develop a website.
Let's start by locating the Html document on the Raspbian system. You can do this by typing the following code in the command line.
cd /var/www/html
To see a complete listing of the items in this folder, run the following command.
ls -al
You'll see every file in the folder together with its owner; note that the index.html file is owned by the root account.
As a result, to make changes to this file, you must first change the file's ownership to your own. The username "pi" is indeed the default for the Raspberry Pi.
sudo chown pi: index.html
To view the changes you've made, all you have to do is reload your browser after saving the file.
Here, we'll begin to teach you the fundamentals of HTML.
To begin a new page, edit the index.html file and remove everything inside it using the command below.
sudo nano index.html
Alternatively, we can use a code editor to open the index.html file and edit it. We will use the VS Code editor, which you can easily install on the Raspberry Pi via Preferences > Recommended Software.
You must first learn about HTML tags, which are a fundamental part of HTML. A web page's content can be formatted in various ways by using tags.
There are often two tags used for this purpose: the opening and closing tags. The material inside these tags behaves according to what these tags say.
The <p> tag, for example, is used to add paragraphs of text to the website.
<p>The engineering projects</p>
Web pages can be made more user-friendly by using buttons, which can be activated anytime a user clicks on them.
<button>Manual Refresh</button>
<button>Sort By First Name</button>
<button>Sort By last Name</button>
A typical HTML document is organized as follows:
Let us create the page that we will use in this project.
<html>
<head>
</head>
<body>
<div id="pageDiv">
<p> The engineering projects</p>
<button type="button" id="refreshNames">Manual Refresh</button><br/>
<button type="button" id="firstSort">Sort By First Name</button><br/>
<button type="button" id="lastSort">Sort By Last Name</button>
<div id="namesFromFile">
</div>
</div>
</body>
</html>
<!DOCTYPE html>: HTML documents are identified by this tag. This does not necessitate the use of a closing tag.
<html>: This tag ensures that the material inside will meet all of the requirements for HTML. It is closed with a </html> tag at the end.
<head>: It contains data about the website, but when you view the page in a browser, you won't be able to see any of it.
A metadata tag inside the head tag can be used to set the default character encoding of your website, for instance. This section is closed with a </head> tag.
<head>
<meta charset="utf-8">
</head>
Also, you can have a title tag inside the head tag. This tag sets the title of your web page and has a closing </title> tag.
<head>
<meta charset="utf-8">
<title> My website </title>
</head>
<body>: The primary content of the web page is included within this tag. Everything on a web page is usually contained within body tags once you've opened it. It is closed with a </body> tag at the end. Many other tags can be found in this body tag, but we'll focus on the ones you need to get started with your first web page.
We will go ahead and style our webpage using CSS with the lines of code below:
<head>
<style>
<!--
body {
    width:100%;
    background:#ccc;
    color:#000;
    text-align:left;
    margin:0;
    padding:10px;
    font:16px/18px Arial;
}
button {
    width:160px;
    margin:0 0 10px;
}
#pageDiv {
    width:160px;
    margin:20px auto;
    padding:20px;
    background:#ddd;
    color:#000;
}
#namesFromFile {
    margin:20px 0 0;
    padding:10px;
    background:#fff;
    color:#000;
    border:1px solid #000;
    border-radius:10px;
}
-->
</style>
</head>
The style tags contain Cascading Style Sheets (CSS) rules that let developers style the webpage however they prefer.
You can add images to your web page by using the <img> tag. It is also a void element and doesn’t have a closing tag. It takes the following format
<img src="URL of image location">
For example, let's add the TheEngineeringProjects logo:
<p>The Engineering projects</p>
<img src="https://www.theengineeringprojects.com/wp-content/uploads/2022/04/TEP-Logo.png">
Reload the browser to see the changes
This is the last step of this project: we will implement a program that reads our data.txt file from the Apache root directory and displays it on the webpage that we designed. Since we already have our webpage up and running, we will use the JavaScript programming language to implement this function of displaying the log list on the webpage. All changes that we are about to implement will be done in the index.html file; therefore, open it in the VS Code editor.
JavaScript is a dynamic computer programming language. It is lightweight and most commonly used as a part of web pages, whose implementations allow client-side scripts to interact with the user and make dynamic pages. It is an interpreted programming language with object-oriented capabilities.
One of the major strengths of JavaScript is that it does not require expensive development tools. You can start with a simple text editor such as Notepad.
JavaScript, as mentioned earlier, is a very easy language to use; it simply requires us to put script tags inside the HTML document.
<script> script program </script>
<head>
<script>
// our JavaScript program goes here
</script>
</head>
The JavaScript code first opens the data.txt file and reads all the contents from that file. It then uses an XMLHttpRequest to display the contents on the webpage. The buttons on the webpage activate different functions in the code. For instance, Manual Refresh activates:
function refreshNamesFromFile(){
    var namesNode=document.getElementById("namesFromFile");
    while(namesNode.firstChild){
        namesNode.removeChild(namesNode.firstChild);
    }
    getNameFile();
}
This function clears the list currently shown and then re-reads the content of data.txt.
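The getNameFile() helper it calls is not shown in this excerpt. A minimal sketch of how it could work is given below; the cache-busting query string and the one-paragraph-per-line handling are illustrative choices, not the original implementation, and it assumes data.txt sits next to index.html in the Apache root.
function getNameFile(){
    // fetch data.txt and append one <p> per logged line
    var request=new XMLHttpRequest();
    request.open("GET","data.txt?nocache="+Date.now(),true);
    request.onreadystatechange=function(){
        if(request.readyState===4&&request.status===200){
            var namesNode=document.getElementById("namesFromFile");
            var lines=request.responseText.split("\n");
            for(var i=0;i<lines.length;i++){
                if(lines[i].trim()!==""){
                    var oP=document.createElement("P");
                    namesNode.appendChild(oP).appendChild(document.createTextNode(lines[i]));
                }
            }
        }
    };
    request.send();
}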
The sort by buttons activate the sort function to sort the logged users either by first name or last name. The function that gets activated by these buttons is:
function sortByName(e)
{ var i=0, el, sortEl=[], namesNode=document.getElementById("namesFromFile"), sortMethod, evt, evtSrc, oP;
evt=e||event;
evtSrc=evt.target||evt.srcElement;
sortMethod=(evtSrc.id==="firstSort")?"first":"last";
while(el=namesNode.getElementsByTagName("P").item(i++)){
sortEl[i-1]=[el.innerHTML.split(" ")[0],el.innerHTML.split(" ")[1]];
}
sortEl.sort(function(a,b){
var x=a[0].toLowerCase(), y=b[0].toLowerCase(), s=a[1].toLowerCase(), t=b[1].toLowerCase();
if(sortMethod==="first"){
return x<y?-1:x>y?1:s<t?-1:s>t?1:0;
}
else{
return s<t?-1:s>t?1:x<y?-1:x>y?1:0;
}
});
while(namesNode.firstChild){
namesNode.removeChild(namesNode.firstChild);
}
for(i=0;i<sortEl.length;i++){
oP=document.createElement("P");
namesNode.appendChild(oP).appendChild(document.createTextNode(sortEl[i][0]+" "+sortEl[i][1]));
namesNode.appendChild(document.createTextNode("\r\n"));
//insert tests -> for style/format
if(sortEl[i][0]==="John"){
oP.style.color="#f00";
}
if(sortEl[i][0]==="Sue")
{ oP.style.color="#0c0";
oP.style.fontWeight="bold";
}
}
}
Traditional attendance-taking is excessively time-consuming and complicated in the current environment. It is possible to strengthen company ethics and work culture by using an effective smart attendance management system. Employees will only have to complete the registration process once, and their images get saved in the system's database. The automated attendance system uses a computerized real-time image of a person's face to identify them. The database is updated frequently, and its findings are accurate because each employee's presence is recorded as it happens.
Smart attendance systems have several advantages, including the following:
Students in elementary, secondary, and postsecondary institutions can utilize this system to keep track of their attendance. It can also keep track of workers' schedules in the workplace. Instead of using a traditional method, it uses RFID tags on ID cards to quickly and securely track each person.
1) Real-time tracking – Keeping track of staff attendance using mobile devices and desktops is possible.
2) Decreased errors – A computerized attendance system can provide reliable information with minimal human intervention, reducing the likelihood of human error and freeing up staff time.
3) Management of enormous data – It is possible to manage and organize enormous amounts of data precisely in the db.
4) Improve authentications and security – A smart system has been implemented to protect the privacy and security of the user's data.
5) Reports – Employee log-ins and log-outs can be tracked, attendance-based compensation calculated, the absent list may be viewed and required actions are taken, and employee personal information can be accessed.
This tutorial taught us to build a smart RFID card authentication project from scratch. We also learned how to set up an Apache server and design a circuit for the RFID reader and the LCD screen. To increase your Raspberry Pi programming skills, you can proceed to build a more complex system on top of this code, for example implementing face detection that automatically starts the authentication process once the student faces the camera, or implementing a student log-out whenever the student leaves the system. In the following tutorial, we will learn how to build a smart security system using facial recognition.