Getting Past Legacy Software Pains in Requirements Management

Legacy software is business software that has been in use for a long time: older system software that is still relevant and still fulfills the business's needs. The software is critical to the business mission, and for this reason it runs on specific hardware.

Generally, the hardware in this situation has a shorter lifespan than the software and becomes harder to maintain with time. Such a system is usually too complex or too expensive to replace, so it continues operating.

What is a Legacy System?

Legacy software and legacy hardware are often installed and maintained together within an organization. Changes to the legacy system typically replace only the hardware, which avoids the onerous requirements management of the software certification process.

Which Businesses Have Legacy Systems?

Legacy systems operate in a wide range of business organizations, such as banks, manufacturing, energy companies, hospitals, and insurance companies. You can also find them in the defense industry, among other multifaceted business organizations.

Legacy Software Pains in Requirements Management

Only companies born in the digital age escape chronic legacy system pains, that is, the distress of a lack of digital transformation. Legacy systems can involve unbearable complexity, mismatched skills, a lack of innovation, bureaucracy, and more.

Legacy systems persist in organizations for many reasons. Compliance work and a full rollout are often too challenging to carry out all at once. An ongoing project may still depend on the old system. And in some instances, decision-makers simply don't like change.

However, the shortcomings of operating an old system are annoying and can also cause severe damage to the company. The pains of operating an old system include:

Legacy Systems Strategies are Not Prepared For Change

Legacy systems often follow a "stop and start" strategy: long static periods of unchanging business, which was the mainstream way of running operations throughout the industrial age, punctuated by short windows in which to make and adapt to necessary changes while the business stops and waits for the wave of changes to finish.

The world doesn't work like this anymore. Intermittent change merely lets organizations with old systems limp along until the next phase of evolution.

Fortunately, there is an alternate option called lean IT. The model advocates for making positive changes and continuous improvements and is aimed at avoiding getting stuck in waiting mode.

The lean IT model is well suited to data-oriented and digital systems, and helps discourage the myopic views that legacy systems foster in the first place.

Legacy Systems Create Security Problems

Legacy systems can pose several data security problems in an organization. Security is a prominent feature of the lean IT model. Continuous improvements and positive changes help to curb the latest threats. Old systems, because of their age, struggle with this.

Legacy systems pose various challenges when fixing specific vulnerabilities because of their large, inflexible nature. Fixes are often delayed because developers find them challenging to create, and writing a repair is rarely at the top of the development team's priority list. As a result, the fix ends up being very expensive.

Because of their outdated security measures, old systems can enter a period in which they pose a real danger to the organization.

Inability to Meet Customers on Their Terms

The digital age has created tremendous opportunities, including those related to changing a company's way of operations to benefit its users. Businesses that don't have legacy systems find that when technology moves, the industry can move with it. They are ready to use any new generation that comes out and are prepared to download and install any new application that becomes popular.

Under these conditions, challenges can mount for a company stuck with an old system. Legacy systems have restrictions on using new applications. Businesses that have many customer interactions can encounter serious challenges.

Customers often expect the features of the latest applications in this digital era, such as Instagram or Windows 10 updates, both of which have chat options that legacy systems can't support. This is very much a missed opportunity.

Legacy Systems are Not Cost Effective

It may seem like legacy systems would be less expensive to maintain. However, that calculation changes over time, and circumstances often prove cost to be a pain point. Support and software updates for legacy systems are often much more expensive than for current systems, whose support and updates are ready for seamless implementation.

The additional maintenance cost arises because software developers who know the system are hard to find, and it takes far more work to provide continued care and updates for a legacy system than for a current one.

Compatibility Issues Threaten Business Interaction

A legacy system's compatibility issues affect all users: customers, business partners, suppliers, team members, and everyone else associated with the system.

The legacy system supports file and data formats only up to a certain point. Over time, these formats advance beyond what the legacy system can handle.

This evolution of supported formats takes only a couple of years, at which point the business is stuck, suffering the pain of using formats its customers and partners are no longer willing to use.

A company without legacy problems adapts quickly, aiming for better collaboration among users and team members while avoiding waste in IT operations. The future of the company's business therefore remains adaptable.

Lack of Storage Availability and Budget

Legacy systems are often full of lurking, untested problems. Support is often difficult to come by, leading to frustrating support interactions.

Customer support is critical, especially when you have large data sets or tight deadlines. Modern software development techniques make it easy to release updates and access track records. Data storage matters a great deal to users, so storage and accessibility are key features of new systems, and they also come in handy when support is needed.

Unhealthy for Employee Training

Consider the psychological effect on IT team members for a moment. What does operating a legacy system signal to the workforce? That it's acceptable to keep working with an old system while putting off worries until later, because the system solutions of the past are still working.

But an organization should not encourage this view in their employees, especially when training employees in new skills.

The old method may still function, but it becomes a massive liability for connectivity and security. Legacy systems also reduce productivity, lower team members' morale, and repel some of the best talent. Employees with first-hand experience in new technology want to build on it and have no interest in learning old systems.

"If it's not broken, don't fix it" is the IT professional's general attitude. Though it does not stem from any bad intentions, it can cause a company severe problems down the road.

Proprietary Tools are Not Fun

Legacy systems tend to be clunky, extensive, and very proprietary. Changing or customizing them poses a serious challenge. But modern IT professionals prefer to use the latest techniques and have no interest in mastering old systems.

Some organizations have software and hardware needs that differ from other companies', so they build custom software and hardware. New systems are composed of smaller parts that make them flexible and easy to adapt. Specialized needs, however, are no excuse for retaining legacy systems.

Getting Past Legacy Software Pains In Requirements Management

Highly regulated industries have difficulty catching up with technology because of their complex systems. If you feel your company has outgrown its requirements management software, you're not alone.

The line between software and hardware is becoming more and more blurred, and innovation is happening faster than ever. Requirements management providers may not supply software that matches users' goals, regardless of the provider's reputation or the software's sophistication. That can create severe problems that affect productivity.

Here are some common methods for how to work your way out from a legacy system into something much more helpful for yourself and your customers:

Working with Multiple Stakeholders in Mind

Highly regulated industries interact with many different stakeholders and players, which is good for their business. It is essential to value the input of the various roles and skill sets.

But problems can arise if one of these users doesn't know how to use your requirements management software. Stakeholders bring value to the company, but mishandling the system can be a recipe for compliance disaster.

To avoid such problems, look for software that works seamlessly across several roles. Also, make sure your software integrates user-friendly traceability: every user on the project needs to see the progress from beginning to end. This will prevent usability problems from becoming lasting ones and hindering productivity.

Timely Notifications to Help Meet Deadlines

Missed deadlines happen in many organizations. A team member needs to provide feedback but fails to do so on time. The request may have been sent via email, a Google document, or a Word document that someone didn't know to monitor. Whichever channel was used, the collaboration model in place failed.

Review processes are quite complex these days, and you need collaboration software that sets clear intentions for its users. Real-time notifications and editing will help keep team members on track.

Opt for a requirements management tool that prompts the next step, so nothing falls through the cracks. Good requirements management software helps prevent blame games - and it's an excellent reason to suggest upgrading away from the legacy system altogether.

Accessibility and Intuitiveness

There may be instances where you want to carry out an essential process through the legacy system - but you are somehow locked out. You may have challenges finding the person who manages the system access rights and can get you back in the loop.

The situation can be frustrating and pose significant risks to the company: you can spend hours trying to get back into the system, wasting time. The desire to provide timely feedback with confidence is thwarted, which defeats the original purpose of collecting the data.

The right requirements management tool should support continuous data collection and growth. To achieve this, it should be open, accessible, and intuitive. Stakeholders will stay motivated and provide constant input and collaboration, which is vital for keeping up with breakneck innovation.

An Upgrade Doesn’t Have to Spell Disaster

When an upgrade notification pops up on our screen, it’s normal to get skittish. There was once a very real fear of losing essential data and vital information with any kind of system update.

But this fear has dissipated with the advent of the cloud. A company no longer has to fear upgrading and fixing its requirements software; upgrading is necessary to improve security and access the latest features.

It's crucial to have a system that adapts quickly to new requirements. When weighing whether to keep legacy software, consider the opportunity costs of not upgrading and the headache of being locked out of various unsupported platforms.

Data Storage, Security, and Availability

As part of evaluating your current system, investigate how safely data is stored during software development. Ask existing customers how reliably the system delivers accurate documents.

Whatever system you choose will be around for a long time, so measure your predicted needs against what the vendor offers. You will be putting some of your crown jewels into the system you buy; they need to be safe.

Conclusion

We live in an age with the most innovative and disruptive products available to more people than ever. We have ultra-fast electric cars, self-piloted spaceships, and lifelike prosthetics. We also have some of the brightest minds toiling to help propel us into the future.

It also means the regulatory environment is more stringent than ever, especially around public safety and marketplace demands. You need a team ready to meet the ever-increasing demands of compliance.

To be on the safe side, put in place a collaborative infrastructure that keeps the team organized and catches the mistakes of disconnected team members in real time. The future of your company depends on it.

Frequently Asked Questions

In what ways can one modernize a legacy system?

One option is to migrate. Migration allows the business to keep performing critical processes immediately. Legacy software may not be flexible enough to allow the expected modifications, or the old system may no longer give users the right results; in such cases, legacy migration can be the better choice.

The other option is extension. You may not need to replace a legacy system that still performs its core functions (especially if it has some years of warranty remaining); instead, you can modernize it by extending its capabilities.

How do you handle management pushing legacy systems?

Try to make concrete and practical recommendations on how to make the legacy system better. Provide evidence on how the improvements will lead to better performance. You may change minds by presenting realistic situations where the legacy system can still be helpful in the future with just a few additions.

Reasons Why 3D Printing Technology is Underrated

3D printing is one of the most progressive methods of creating objects on the market today. There are four main production techniques in manufacturing: subtractive manufacturing, casting, forming, and additive manufacturing.

Reasons Why Additive Manufacturing is a Superior Form of Manufacturing

Additive manufacturing is the production technique that 3D printers use to create physical objects: an object is built from scratch by laying down the raw material directly, layer by layer.

This technique differs from the others because it does not create waste like subtractive manufacturing, it does not require the manual effort and time of forming, where you apply force to a material to shape it, and it does not require the tools and molds of casting to produce a functional product.

In short, additive manufacturing requires no tooling, creates no waste, and needs no cutting or complex procedures.

How Does Additive Manufacturing Work?

There are various types of additive manufacturing in the market. All of these processes are suitable for their respective fields and functions. A production house can use one of these techniques to create its design.

Direct Casting

A large-scale 3D printer with a dual head lets a production company create a mold and cast in one motion. This cast-in-motion technique cuts many steps and a lot of time from the traditional molding and casting procedure. The procedure has four steps. The first step is to feed the design data into the printer and print the design.

After the design is printed, cure the product in an oven to enhance its mechanical properties. The mold comprises a water-breakable mixture, so in the third step, soak it in water until it disintegrates. In the last step, finish the product for further use.

This process is rapid: production that once took weeks now takes only a few days.

FDM

Fused deposition modeling is the typical additive manufacturing process. Domestic printers use this process to create designs, and they are easy to use: feed filament into the printer at one end, and it produces your design at the other.

The process starts with a design; slicing software cuts the design into thin digital slices, and once the data is in the printer, it creates the product automatically. FDM may be easy to use, but large-scale industries avoid it because of its poor finish and fragile material.

SLA Printing

Stereolithography is unique in that it uses photosensitive resin to solidify a design out of liquid polymer. It is a precise way of creating objects from resin, and it requires less time than many other printing methods. UV rays cure the liquid polymer layer upon layer, creating the illusion of an object being born from the liquid.

Laser Sintering

The laser sintering method is dangerous in public settings, but it is practical for large industries. It uses refined metal powder as its raw material: a laser fuses the metal dust into a solid object. The printer does keep the powder in a container box, but inhaling the powder can be dangerous.

NASA and other engineering corporations use this technique to build parts for aircraft, cars, and other vehicles. If the production house follows all the SOPs, this is one of the fastest methods of creating parts.

Poly Jet Method

The PolyJet method is a mixture of FDM and SLA printing. The machine jets thin layers of resin, much like laying filament, and UV light instantly cures the resin in place. This technique is valuable for rapid prototyping, fixtures, and creating functional moving parts.

The Revolutionary Implications of 3D Printing

The many applications and implications of 3D printing suggest that it has revolutionized the world. 3D printing has shortened the path between production and consumption: a consumer can produce a design directly in a short time, and the time for a product to reach the consumer has dropped drastically.

Instant 3D printing makes emergency fixtures possible in disaster zones. It also inspires the younger generation to create and put their theoretical knowledge to practical use, and it has inspired the world to connect virtual reality to physical reality. All of this is just the beginning of what 3D technology has to offer.

JMP & LBL Instructions in Ladder Logic Programming

Hi friends, I hope you are very well. Today in this tutorial, we will practice conditional jumping to execute certain code when specific conditions occur. As in any other programming language, jumping is one of the most common ways to transfer execution out of its sequential mode: the program moves to an instruction marked by a label, bypassing the lines of code between the last executed instruction before the jump and the labeled instruction the program moves to. The good thing about this technique is that it shortens the program's scan cycle, because the whole program is not executed.

However, using jumping techniques in code is also risky: you must be careful not to leave open cases behind before going anywhere else in the code. For example, say we start one process, and in the middle of the process's sequence a jump instruction runs some logic under certain conditions. Jumping means leaving, or skipping, part of the program for some reason to go run other logic, so we should consider the skipped instructions' effects and consequences on the whole process and program logic.

In addition, because any jump instruction can target any label, and many jump instructions can target the same label, pay close attention that you name the correct label according to the designed logic; otherwise your program will not do what you planned. You might take this lightly now and find it hard to imagine someone writing the wrong label by mistake. But when you write a large-scale project containing hundreds or even thousands of rungs and a couple dozen labels, it becomes easy to write the wrong label by mistake.

JMP and LBL instructions

To perform jumping in a ladder logic program, two instructions work together, as shown in Fig. 1. The first is the JMP instruction, which tells the PLC to jump from where the JMP instruction is to wherever it finds the matching LBL instruction.

Fig. 1: JMP and LBL instructions

Now, how does the program know which label execution should jump to? That is an excellent question, and the answer is that you specify the label name as a parameter of both the jump instruction and the label instruction. Notice, my friends, in Fig. 2 that the JMP and LBL instructions show a question mark denoting where you should specify the label name, the next station for program execution.

Fig. 2: Jump and Label instructions showing the label name above them

Jump instruction ladder logic example

Now, friends, let us see how the jump and label instructions work together. Fig. 3 depicts a straightforward ladder logic example in which the JMP and LBL instructions work together, referring to the same label, Q2:0. In this example, if input contact I1/0 is activated, the JMP instruction takes execution to where the Q2:0 label ("LBL") instruction is. As a result, rung 001 is bypassed.

Fig. 3: ladder logic example for jump and label instructions

How about a situation where we actually need to employ the JMP and LBL instructions? In the example shown in Fig. 4, there are a couple of motors, and for these motors we use a series of jump commands in combination with the label instruction to let some motors work in one scenario, while in another scenario some of the motors work and the others do not. Imagine you are working with a group of pumps: to run all pumps together in one scenario, you do not activate the JMP command. In another scenario, to run only some of these pumps, you activate the JMP command to bypass the pumps whose rungs lie between the JMP and LBL commands.

Fig. 4: a real example of jump instruction

Figure 5 depicts the execution of the sample program demonstrating the JMP instruction, showing the first scenario. If the jump instruction on rung 3 is not activated, then all motors 1 through 7 run based on the status of their firing contacts, I:0/0 through I:0/7, respectively. Program execution continues through the motors after the JMP because the jump is not activated by contact I:0/3.

Fig. 5: test when JMP is not activated

On the other hand, Fig. 6 depicts the second scenario, in which the JMP instruction is activated by contact I:0/3 at rung 3. With the jump command active, only the motors before the jump instruction run based on their command contacts; the pumps between the jump and label commands are bypassed, while the motors after the label instruction still run. So motors 1, 2, and 3 are running, motors 4, 5, and 6 are ignored because the jump command is active, and motor 7, which sits at the label command, is running.

Fig. 6: test when JMP is activated
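To make the scan behavior concrete, here is a minimal Python sketch (an illustration only, not vendor PLC code) of how a scan cycle honors JMP and LBL, using the rung layout assumed from Figs. 5 and 6: rungs 0 through 2 drive motors 1 to 3, rung 3 holds JMP Q2:0, rungs 4 through 6 drive motors 4 to 6, and rung 7 carries LBL Q2:0 along with motor 7.

def scan(contacts, jmp_active):
    """contacts[n] is the firing contact for motor n+1 (motors 1-7)."""
    motors = [False] * 7
    rung = 0
    while rung < 8:                      # 7 motor rungs plus 1 JMP rung
        if rung == 3:                    # rung 3 holds JMP Q2:0
            if jmp_active:
                rung = 7                 # execution resumes at the LBL Q2:0 rung
                continue
        else:
            motor = rung if rung < 3 else rung - 1   # map rung to motor index
            motors[motor] = contacts[motor]
        rung += 1
    return motors

# Scenario 1: JMP not activated, so every motor follows its firing contact.
print(scan([True] * 7, jmp_active=False))   # all seven motors True
# Scenario 2: JMP activated, so the rungs for motors 4-6 are never scanned.
print(scan([True] * 7, jmp_active=True))    # motors 4-6 are never evaluated

Note that in a real PLC, outputs on skipped rungs are not switched off; they simply hold their last state because their rungs are not evaluated, which is exactly why careless jumping is dangerous.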

What's next?

I want to thank you all for following this short but essential tutorial, which should help you use the jump and label instructions to control the logic flow of your program based on its situations and logic design. You can now implement safety controls that avoid running certain equipment under certain circumstances to protect the operator and the equipment, and you can run many logic scenarios in one program based on the status of input and output devices. In the following tutorial, we will take on something near to jumping: subroutines. We will show how to make your program modular by dividing it into subroutines so it stays organized, readable, and easy to maintain, follow, and modify. So be ready, and let us meet soon to learn and practice the next ladder logic programming tutorial.

LinkedIn marketing strategies that can help expand your small business

Are you looking for ways to boost your small business? LinkedIn may be the answer. LinkedIn is a powerful platform that can help you reach a larger audience than ever before.

LinkedIn is a powerful social media platform for businesses of all sizes. It provides an opportunity to connect with a larger audience, build relationships with potential customers, and create brand awareness. But promoting your business on LinkedIn is not as easy as you might think. For one, LinkedIn is a professional network, which means that users are not always looking to be sold to. Secondly, LinkedIn’s algorithm favors content that is educational and informative over content that is promotional.

So how can you promote your small business on LinkedIn in a way that will reach your target audience and help you achieve your business goals? Here are the top LinkedIn marketing strategies that you can use to expand your small business.

Share engaging content

If you want to reach your target audience on LinkedIn, you need to share content that is interesting and engaging. This means creating content that educates, informs, or entertains your readers. For example, if you're a construction company, you could share articles about the latest industry trends, tips for remodeling your home, or interesting case studies.

The key is to make sure that your content is relevant to your target audience and provides value. If you're not sure what type of content to create, take a look at the content that your competitors are sharing. Chances are, their content will give you some ideas.

However, that doesn't mean you should write walls of words - avoid that at all costs. No one wants to read an essay on LinkedIn. Keep your posts short and to the point. Use images, infographics, videos, and LinkedIn banners to break up your text and make your content more visually appealing.

Know your target audience

You can't just start posting on LinkedIn with a blindfold on now, can you? You need to know who you're targeting first.

Creating personas for your target audience is a great way to get to know them better. Once you've created personas, take a look at where your target audience hangs out online. What type of content do they consume? What are their interests?

You can use this information to create content that appeals to your target audience. For example, if you're targeting millennials, you might want to create content that is relevant to their interests, such as entrepreneurship, travel, or personal development.

Create a company page

One of the best ways to promote your small business on LinkedIn is by creating a company page. Your company page is like a mini-website on LinkedIn, and it's a great way to showcase your products or services.

When creating your company page, make sure to include a strong headline, an engaging description, and relevant images. You should also take advantage of LinkedIn's SEO features by including keywords in your page content.

Once you've created your company page, start sharing content that will interest your target audience. This could include blog posts, product information, case studies, or even company news. You can also use your company page to run LinkedIn ads. LinkedIn offers several ad formats that you can use to promote your business, and they're a great way to reach a larger audience.

Optimize your LinkedIn profile

Your LinkedIn profile is one of the most important tools in your LinkedIn marketing arsenal, so optimizing your LinkedIn Profile is essential. It's your chance to make a good first impression, so make sure you're putting your best foot forward.

Start by optimizing your headline and summary. These are the first things people will see when they visit your profile, so make sure they're attention-grabbing and relevant. Include keywords that describe your business or industry, and make sure to mention your most important selling points. Your headline and summary are also a great place to showcase your personality.

Next, take a look at your profile photo. Is it professional and polished? If not, consider changing it to something that presents you in a positive light. Finally, take some time to update your work experience and education section. Include any relevant information that will help you stand out, such as awards or publications. By optimizing your LinkedIn profile, you'll be sure to make a good impression on potential customers and clients.

Join relevant groups

Another great way to reach your target audience on LinkedIn is by joining relevant groups. There are thousands of groups on LinkedIn, covering just about every topic imaginable. And chances are, there are several groups that would be a good fit for your business.

For example, if you're an apparel retailer, you could join relevant groups on LinkedIn to promote your business. These groups could include fashion professionals, small business owners, or even general interest groups.

When you join a group, make sure to participate in the discussion and add value to the conversation. This will help you build relationships with other members and position yourself as an expert in your field.

You can also use LinkedIn groups to collect leads. Many groups allow members to post their contact information in the group description. This makes it easy for you to get in touch with potential customers.

Generate leads with InMail

Did you know that you can use LinkedIn to generate leads? It's true! LinkedIn offers a feature called Sponsored InMail, which allows you to send messages directly to your target audience; on the free plan, you normally can't send such direct messages to just anyone. That makes InMail a great way to promote your products or services, and an especially effective lead generation tool.

Sponsored InMail is a great way to reach out to potential customers and promote your products or services. You can use it to offer discounts, announce new products, or even invite people to events.

To get started, simply create a Sponsored InMail campaign and target your ideal customer. LinkedIn will then match your message with the right people, and you'll start seeing results in no time. Just make sure your message resonates with your target audience. Otherwise, you risk having it perceived as spam.

A final piece of advice

While the above marketing strategies will work like a charm when it comes to promoting your business and gaining new customers, bear in mind that every business is different and has a different target audience. Therefore, it's important to experiment with different strategies and find the ones that work best for you. Identifying the right marketing mix for your business is what you should be aiming for.

For example, you could use LinkedIn ads to reach out to your target audience if you're looking for immediate results. Or, if you're trying to build long-term relationships with potential customers, focus on creating a strong company page and sharing high-quality content.

Also, don't forget to harness the power of SEO! In simple words, SEO makes your content more visible in LinkedIn's search results, which means more people will see it. You can do this by optimizing your LinkedIn profile and company page for keywords, and you'll surely gain more traction on your LinkedIn profile.

Speech Recognition System Using Raspberry Pi 4

Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the preceding tutorial, we created a Pi-hole ad blocker for our home network using a Raspberry Pi 4, learning how to install Pi-hole on the Raspberry Pi 4 and how to access it from other devices. This tutorial will implement a speech recognition system on the Raspberry Pi and use it in our project. First, we will learn the fundamentals of speech recognition, and then we will build a game the user plays with their voice, discovering how it all works with a speech recognition package.

Here, you'll learn:

  • The basics of voice recognition
  • Which speech recognition packages are available on PyPI
  • How to use the SpeechRecognition package and its wide range of useful features
Where To Buy?

No.  Components      Distributor  Link To Buy
1    Raspberry Pi 4  Amazon       Buy Now

Components

  • Raspberry Pi 4
  • Microphone

A Brief Overview of Speech Recognition

Are you curious about how to incorporate speech recognition into a Python program? When it comes to performing voice recognition in Python, there are a few things you need to know first. I'm not going to overwhelm you with the technical details, because they would fill an entire book. Modern voice recognition technologies have come a long way: they can recognize multiple speakers and have extensive vocabularies in many languages.

Voice is the first element of speech recognition. A microphone and an analog-to-digital converter are required to turn speech into an electrical signal and then into digital data. Once the audio has been digitized, various models can convert it to text.

Most modern voice recognition programs use hidden Markov models (HMMs). The underlying assumption is that an audio signal can be reasonably treated as stationary when viewed over a short timescale.

In a conventional HMM, the audio signal is broken into roughly 10-millisecond chunks. Each fragment's spectrum is mapped to a vector of real numbers called cepstral coefficients. The dimension of this vector might range from 10 to 32, depending on the device's accuracy. The HMM then operates on the sequence of these vectors.

Training is required for this computation because the sound of a phoneme varies from speaker to speaker, and even varies between utterances by the same speaker. A special algorithm then determines the most likely word to have produced the given sequence of phonemes.

As one might expect, this entire process can be computationally costly. Many modern speech recognition programs apply feature transformations and dimension-reduction techniques before HMM recognition. Voice activity detectors can also limit the audio input to the portions likely to contain speech, so the recognizer does not waste time analyzing irrelevant parts of the signal.
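As a rough, hands-on illustration of the feature-extraction step described above, the snippet below computes cepstral coefficients (MFCCs) for an audio file with the third-party librosa library. This is purely illustrative and an assumption on tooling: SpeechRecognition's engines perform this step internally, and the file name here is a placeholder.

import librosa

# Digitize: load the samples at a 16 kHz rate.
signal, rate = librosa.load("speech.wav", sr=16000)

# Map each short analysis frame to a vector of cepstral coefficients.
mfccs = librosa.feature.mfcc(y=signal, sr=rate, n_mfcc=13)

print(mfccs.shape)  # (13, n_frames): one 13-dimensional vector per frame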

Choosing a Speech Recognition Tool

There are a few speech recognition packages available on PyPI, including:

  • apiai
  • assemblyai
  • google-cloud-speech
  • pocketsphinx
  • SpeechRecognition
  • watson-developer-cloud
  • wit

Some of these packages, such as the ones with natural-language-processing features, can discern a user's intent and go beyond simple speech recognition. Others, such as google-cloud-speech, focus on speech-to-text conversion alone.

SpeechRecognition is the most user-friendly of all the packages.

Voice recognition requires audio input, and SpeechRecognition makes capturing it a cinch. Rather than requiring you to write your own code for connecting to microphones and interpreting audio files, SpeechRecognition gets you up and running in minutes.

Because it wraps a variety of common speech application programming interfaces, the SpeechRecognition package offers a high degree of extensibility. That flexibility and ease of use make it a fantastic choice for any Python project. The APIs it wraps, however, may not support every feature, so you'll need to research the available options to confirm that SpeechRecognition works in your situation.

You've decided to give SpeechRecognition a go, and now you need to install it in your environment.

Speech Recognition Software Installation

Using pip, you can install SpeechRecognition from the terminal:

$ pip install SpeechRecognition

When the installation completes, open an interpreter session and verify the install:

import speech_recognition as sr

sr.__version__

Let's leave this window open for now. Soon enough, you'll be able to use it.

If you only need to deal with pre-existing audio recordings, SpeechRecognition will work straight out of the box. Some use cases have a few prerequisites, though. In particular, the PyAudio library is required to record audio from a microphone.

As you continue reading, you'll discover which components you require. For the time being, let's look at the package's fundamentals.

Recognizer Class

The recognizer is at the heart of Speech Recognition's magic.

Naturally, the fundamental function of a Recognizer class is to recognize spoken words and phrases. Each instance has a wide range of options for identifying voice from the input audio.

Setting up a Recognizer is straightforward. In your active interpreter window, just type:

r = sr.Recognizer()

Each Recognizer instance has seven methods for recognizing speech from an audio source, each using a different speech recognition application programming interface:

  • recognize_bing()
  • recognize_google()
  • recognize_google_cloud()
  • recognize_houndify()
  • recognize_ibm()
  • recognize_sphinx()
  • recognize_wit()

Of these seven, only recognize_sphinx() works offline, using CMU Sphinx. The other six require an internet connection.

This tutorial does not cover every capability and feature of each application programming interface in detail. SpeechRecognition ships with a default API key for the Google Web Speech API, so you can get up and running with that service immediately; this tutorial therefore uses the Web Speech API extensively. The other six APIs all require authentication with an API key or a username and password.

SpeechRecognition's default API key is provided for testing only, and Google may revoke it at any time, so using the Google Web Speech API in production is not recommended: even with a valid API key, there is no way to raise the daily request quota. Still, once you learn to use SpeechRecognition, applying it to any of your projects will be straightforward.

Whenever a recognize method fails to transcribe the audio, it raises an exception. A RequestError is raised if the API is unavailable; for recognize_sphinx(), a faulty Sphinx installation can cause this. For the other six methods, a RequestError is raised when quotas are exceeded, servers are unreachable, or there is no internet connection.
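Here is a minimal sketch of that error handling, assuming the "har.wav" file used in the next section:

import speech_recognition as sr

r = sr.Recognizer()
try:
    with sr.AudioFile("har.wav") as source:   # the sample file used below
        audio = r.record(source)
    print(r.recognize_google(audio))
except sr.RequestError:
    # API unreachable, quota exceeded, or no internet connection
    print("API unavailable")
except sr.UnknownValueError:
    # audio was captured but could not be transcribed
    print("Unable to recognize speech")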

Let us try recognize_google() in our interpreter window and see what happens!

What just happened? You most likely got an error complaining about a missing argument. You could have foreseen this: how can the recognizer tell something from nothing?

All seven recognize methods of the Recognizer class require an audio_data argument, which, when using SpeechRecognition, must be an instance of the AudioData class.

To construct an AudioData instance, you have two options: you can either use an audio file or record your audio. We'll begin with audio files because they're simpler to work with.

Using Audio Files

To proceed, you must first obtain an audio file and save it in the directory where your Python interpreter session is running.

SpeechRecognition's AudioFile class makes working with audio files easy. It is a context manager that provides access to an audio file's contents given the path to its location.

Supported File Formats

This software supports various file formats, which include:

  • WAV
  • AIFF
  • FLAC

To work with FLAC files, you'll need to get hold of a FLAC command-line encoding tool.

Recording data using the record() Function

To play the "har.wav" file, enter the following commands into your interpreter window:

har = sr.AudioFile('har.wav')

with harvard as source:

audio = r.record(source)

Using the AudioFile class source, the context manager stores the data read from the file. Then, using the record() function, the full file's data is saved to an AudioData class. Verify this by looking at the format of the audio:

type(audio)

You can now use recognize_google() to see if any voice can be found in the audio file. You might have to wait a few seconds for the output to appear, based on the speed of your broadband connection.

r.recognize_google(audio)

Congratulations! You've just finished your very first audio transcription!

Within the "har.wav" file, you'll find instances of Har Phrases if you're curious. In 1965, the IEEE issued these phrases to evaluate telephone lines for voice intelligibility. VoIP and telecom testing continue to make use of them nowadays.

The Harvard Sentences comprise seventy-two lists of ten phrases. You'll find free recordings of these phrases on the Open Speech Repository webpage, with recordings available in several languages. Put your code through its paces; they offer many free resources.

Segments with a start and end time

You may want to capture only a segment of the speech in a file. The record() method accepts a duration keyword argument that stops the recording after the specified number of seconds. For example, the following captures the first four seconds of the file as a transcript:

with har as source:
    audio = r.record(source, duration=4)

r.recognize_google(audio)

When used inside a with block, the record() method always moves forward in the file stream. As a result, if you record four seconds and then record four seconds again, the second call returns the four seconds of audio after the first.

with har as source:
    audio1 = r.record(source, duration=4)
    audio2 = r.record(source, duration=4)

r.recognize_google(audio1)
r.recognize_google(audio2)

As you can see, audio2 contains part of the third phrase. When a duration is specified, the recording can stop mid-phrase, which can harm the transcription. More on this shortly.

In addition to a recording duration, the record() method accepts an offset keyword argument that specifies how many seconds of the file to skip before recording.

with har as source:
    audio = r.record(source, offset=4, duration=3)

r.recognize_google(audio)

If you know the structure of the speech beforehand, the duration and offset keyword arguments can help you segment an audio file. Used hastily, however, they produce poor transcriptions. Try the following command in your interpreter:

with har as source:
    audio = r.record(source, offset=4.7, duration=2.8)

r.recognize_google(audio)

Because the recording starts at 4.7 seconds, the "it t" portion of the phrase is missed, so the API received only "akes heat," which it matched to "Mesquite."

Likewise, the end of the recording caught "a co," the first sounds of the third phrase, which the API matched to "Aiko."

There is another possible explanation for inaccurate transcriptions: noise! The examples above all worked because the audio file is relatively clean. In the real world, you cannot expect noise-free audio unless you can process the soundtracks in advance.

Noise Can Affect Speech Recognition

Noise is an unavoidable part of everyday life. All audio recordings contain some level of noise, and unhandled noise can wreck the accuracy of speech recognition programs.

I listened to the "jackhammer" audio sample to understand how noise can impair speech recognition. Ensure to save it to the root folder of your interpreter session.

The sound of a jackhammer is heard in the background while the words "the stale scent of old beer remains" are spoken.

Try to translate this file and see what unfolds.

jackhammer = sr.AudioFile('jackhammer.wav')
with jackhammer as source:
    audio = r.record(source)

r.recognize_google(audio)

How wrong!

So, how do you deal with this? You can try the Recognizer class's adjust_for_ambient_noise() method.

with jackhammer as source:
    r.adjust_for_ambient_noise(source)
    audio = r.record(source)

r.recognize_google(audio)

You're getting closer, but it's still not quite there yet. In addition, the statement's first word is missing: "the." How come?

The Recognizer calibrates itself by reading the first second of the file stream and adjusting to its noise level. That portion of the stream is therefore already consumed by the time you call record() to capture the data.

The adjust_for_ambient_noise() method accepts a duration keyword argument that changes the length of the analysis window. The default value is 1 second; let's reduce it to half a second.

with jackhammer as source:
    r.adjust_for_ambient_noise(source, duration=0.5)
    audio = r.record(source)

r.recognize_google(audio)

Now you get "the" at the start of the phrase, but a new problem appears: sometimes a signal is simply too noisy for the noise to be removed. That is the case with this particular file.

If you encounter these problems regularly, some audio pre-processing may be necessary, for example with audio editing programs that can apply filters to the audio. For now, just be aware that background noise causes issues and must be handled to improve voice recognition accuracy.

API responses can be useful when working with noisy files. Most APIs return a JSON string containing several possible transcriptions, but recognize_google() returns only the most likely transcription unless you explicitly request the full response.

You can do this with recognize_google()'s show_all keyword argument:

r.recognize_google(audio, show_all=True)

With show_all=True, recognize_google() returns a dictionary whose 'alternative' key holds a list of possible transcriptions. The response format varies between APIs, but it is primarily useful for debugging.
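For example, here is a quick way to inspect those alternatives (treating the exact layout of the 'alternative' entries as typical of the Google Web Speech API rather than guaranteed):

result = r.recognize_google(audio, show_all=True)

# Each entry under 'alternative' holds one candidate transcript; the top
# candidate usually also carries a confidence score.
for alternative in result["alternative"]:
    print(alternative["transcript"], alternative.get("confidence"))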

As you've seen, the SpeechRecognition package has a lot to offer. Aside from gaining experience with the offset and duration keyword arguments, you also learned how harmful noise is to transcription accuracy.

Now the fun is about to begin! Make your project dynamic by taking input from a mic, instead of transcribing canned audio clips.

Using Microphone

For SpeechRecognition to capture microphone input, you must obtain the PyAudio library.

Install PyAudio

Use the command below to install PyAudio on the Raspberry Pi:

sudo apt-get install python-pyaudio python3-pyaudio

Confirmation of Successful Setup

Using the console, you can verify that PyAudio is working properly.

python -m speech_recognition

Ensure your mic is turned on and unmuted. If everything went according to plan, you'll be prompted to speak. Talk into your mic and discover how accurately SpeechRecognition transcribes your speech.

Microphone instance

Create a Recognizer instance in a new interpreter session:

import speech_recognition as sr

r = sr.Recognizer()

Having worked with an audio recording, you'll now use the system microphone as your input. Get at it by instantiating the Microphone class:

mic = sr.Microphone()

On the Raspberry Pi, you must provide a device index to use a particular mic. To list the microphone names, call the Microphone class's list_microphone_names() static method:

sr.Microphone.list_microphone_names()

Keep in mind that the results may vary from those shown in the examples.

The device index of the mic you want appears in the list returned by list_microphone_names(). For example, if you wanted to use the "front" mic, which has index 3 in the output above, you would create the Microphone instance like this:

mic = sr.Microphone(device_index=3)

Use listen() to record the audio from the mic

A Mic instance is ready, so let's get started recording.

Like AudioFile, Microphone serves as a context manager. Use the Recognizer's listen() method inside the with block to capture microphone input. The method takes an audio source as its first argument and records input until silence is detected.

with mic as source:
    audio = r.listen(source)

Try saying "hi" into your mic once you've completed the block. Please be patient as the interpreter prompts reappear. Once you hear the ">>>" prompt again, you should be able to hear the voice.

r.recognize_google(audio)

If the prompt never reappears, your mic is probably picking up too much background noise. Press Ctrl+C to halt execution and get your prompt back.

To handle the noise, you must use the Recognizer class's adjust_for_ambient_noise() method, just as you did when deciphering the noisy audio file. Since mic input is far less predictable than an audio file source, it's wise to do this every time you listen for mic input.

with mic as source:
    r.adjust_for_ambient_noise(source)
    audio = r.listen(source)

After running the code above, wait for adjust_for_ambient_noise() to finish before speaking "hello" into the mic. Again, be patient while the interpreter prompt reappears before trying to recognize the speech.

Keep in mind that adjust_for_ambient_noise() analyzes the audio source for one second by default. You can shorten that with the duration keyword argument if necessary.

The SpeechRecognition documentation recommends a duration of no less than 0.5 seconds. Sometimes longer durations work better: the lower the ambient noise, the lower the value you can use. Sadly, this detail is often overlooked during development. In my experience, the default one-second duration is sufficient for most purposes.
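For instance, in a quiet room you might halve the calibration window; this sketch reuses the r and mic instances from above:

with mic as source:
    # calibrate on only the first 0.5 s of input, the documented minimum
    r.adjust_for_ambient_noise(source, duration=0.5)
    audio = r.listen(source)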

How to handle speech that isn't recognizable?

Type the previous code snippet into your interpreter and mutter something unintelligible into the mic. You can expect a response like this:

An UnknownValueError exception is thrown if the API cannot translate the speech into text. You should always wrap API calls in try and except blocks to handle this problem.

Triggering the exception may take more effort than you imagine: the API works hard to transcribe any vocal sound. For me, even the tiniest noises were translated into words like "how," and a cough, a clap of the hands, or a click of the tongue would all raise the exception.

A "Guess the Word" game to Put everything together

To put what you've learned from the SpeechRecognition package into practice, let's develop a simple game that randomly selects a word from a list and gives the player three attempts to guess it.

Here is the full script:

import random
import time

import speech_recognition as sr


def recognize_speech_from_mic(recognizer, microphone):
    """Transcribe speech from `microphone` and report the outcome."""
    if not isinstance(recognizer, sr.Recognizer):
        raise TypeError("`recognizer` must be `Recognizer` instance")
    if not isinstance(microphone, sr.Microphone):
        raise TypeError("`microphone` must be `Microphone` instance")

    # Adjust for ambient noise, then record a phrase from the mic.
    with microphone as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)

    # The response carries the API outcome and any error encountered.
    response = {
        "success": True,
        "error": None,
        "transcription": None
    }

    try:
        response["transcription"] = recognizer.recognize_google(audio)
    except sr.RequestError:
        # API was unreachable or unresponsive
        response["success"] = False
        response["error"] = "API unavailable"
    except sr.UnknownValueError:
        # speech was unintelligible
        response["error"] = "Unable to recognize speech"

    return response


if __name__ == "__main__":
    WORDS = ["apple", "banana", "grape", "orange", "mango", "lemon"]
    NUM_GUESSES = 3
    PROMPT_LIMIT = 5

    recognizer = sr.Recognizer()
    microphone = sr.Microphone()

    word = random.choice(WORDS)

    instructions = (
        "I'm thinking of one of these words:\n"
        "{words}\n"
        "You have {n} tries to guess which one.\n"
    ).format(words=', '.join(WORDS), n=NUM_GUESSES)

    print(instructions)
    time.sleep(3)

    for i in range(NUM_GUESSES):
        # Prompt the user until a transcription arrives or the limit is hit.
        for j in range(PROMPT_LIMIT):
            print('Guess {}. Speak!'.format(i + 1))
            guess = recognize_speech_from_mic(recognizer, microphone)
            if guess["transcription"]:
                break
            if not guess["success"]:
                break
            print("I didn't catch that. What did you say?\n")

        # Stop the game if there was an API error.
        if guess["error"]:
            print("ERROR: {}".format(guess["error"]))
            break

        print("You said: {}".format(guess["transcription"]))

        guess_is_correct = guess["transcription"].lower() == word.lower()
        user_has_more_attempts = i < NUM_GUESSES - 1

        if guess_is_correct:
            print("Correct! You win!")
            break
        elif user_has_more_attempts:
            print("Incorrect. Try again.\n")
        else:
            print("Sorry, you lose!\nI was thinking of '{}'.".format(word))
            break

Let's analyze this a little bit further.

The recognize_speech_from_mic() function takes a Recognizer and a Microphone instance as inputs and returns a dictionary with three keys. The first, "success", indicates whether the API request succeeded. The second, "error", is either None or a message indicating that the API is unavailable or that the speech was unintelligible. Finally, the "transcription" key contains the transcription of the captured audio.

A TypeError is raised if the recognizer or microphone argument is invalid:

The listen() method is then used to record the mic's input:

The recognizer is recalibrated with the adjust_for_ambient_noise() method on every call to recognize_speech_from_mic():

After that, recognize_google() is called to transcribe any speech in the audio. The try and except block catches the RequestError and UnknownValueError exceptions and deals with them accordingly. Finally, recognize_speech_from_mic() returns the dictionary with the request's success flag, any error message, and the transcribed speech.

In an interpreter window, execute the following code to see if the function works as expected:

import speech_recognition as sr

from guessing_game import recognize_speech_from_mic

r = sr.Recognizer()

m = sr.Microphone()

recognize_speech_from_mic(r, m)

The actual gameplay is quite basic. First, a list of words, the maximum number of allowed guesses, and a prompt limit are declared:

Next, Recognizer and Microphone instances are created, and a random word is selected from WORDS:

After the instructions are displayed, a for loop handles each of the user's attempts at guessing the chosen word. Inside this outer loop, a second loop prompts the user at most PROMPT_LIMIT times to recognize a guess, storing the dictionary returned by recognize_speech_from_mic() in the local variable guess.

If the "transcription" value of guess is not None, the user's speech was transcribed and the inner loop ends with a break. If the speech was not transcribed and the "success" key is False, an API error occurred, and the loop is again exited with a break. Otherwise, the API request succeeded but the speech was unintelligible, so the for loop warns the user and gives them another chance.

If any error remains in the guess dictionary after the inner loop, an error notice is printed and a break exits the outer for loop, which ends the program.

The transcription is then checked for accuracy by comparing the entered text to the randomly drawn word. The lower() method of string objects is used to make the comparison case-insensitive: whether the API returns "Apple" or "apple" for the spoken word "apple," the guess still matches.

If the user's guess was correct, the game is over and they have won. If the user guessed incorrectly and has attempts remaining, the outer loop restarts and a fresh guess is retrieved. Otherwise, the user loses the game.

This is what you'll get when you run the program:

Recognition of Other Languages

The examples so far have all been in English, but speech recognition in other languages is entirely doable and incredibly simple.

To use a recognize method in a language other than English, set the language keyword argument to the required string:

r = sr.Recognizer()

with sr.AudioFile('path/to/audiofile.wav') as source:
    audio = r.record(source)

r.recognize_google(audio, language='fr-FR')

Only a few Recognizer methods, such as recognize_google() shown above, accept a language keyword argument.

What are the applications of speech recognition software?

  1. Mobile Payment with Voice Command

Do you ever have second thoughts about how you're going to pay for future purchases? Has it occurred to you that, in the future, you may be able to pay for goods and services simply by speaking? There's a good chance that will happen soon! Several companies are already developing voice commands for money transfers.

This system allows you to speak a one-time passcode rather than entering a passcode before buying the product. When it comes to online security, think of captchas and other one-time passwords that are read aloud. This is a considerably better option than reusing a password every time. Soon, voice-activated mobile banking will be widely used.

  2. AI Assistants

When driving, you may use such intelligent systems to get navigation, perform a Google search, start a playlist of songs, or even turn on the lights in your home without touching your gadget. These digital assistants are programmed to respond to every voice activation, regardless of the user.

There are new technologies that enable AI applications to recognize individual users, allowing an assistant to respond exclusively to the voice of one certain person. On the iPhone, for example, this has been around for a few years: you can set Siri to respond to your commands and queries only when you speak to it. Unauthorized access to your gadgets, information, and property is far less likely when only your voice can activate your artificial intelligence assistant; anyone who is not permitted to use the assistant cannot activate it. Other uses for this technology are almost certainly on the horizon.

  3. Translation Application

Imagine attempting to check into an unfamiliar hotel in a distant country. Neither you nor the front desk employee is fluent in the other's language, and no one is available to act as a translator. With a translator device, you can talk into the microphone and have your speech processed and translated, verbally or graphically, to communicate with the other person.

Additionally, this tech can benefit multinational enterprises, educational institutions, or other institutions. You can have a more productive conversation with anyone who doesn't speak your language, which helps break down the linguistic barrier.

Conclusion

There are many ways to use the SpeechRecognition program, including installing it and utilizing its Recognizer interface, which may be used to recognize audio from both files and the mic. You learned how to use the record offset and the duration keywords to extract segments from an audio recording.

The recognizer's tolerance to noise level can be adjusted using the adjust for the ambient noise function, which you've seen in action. Recognizer instances can throw RequestErrors and UnknownValueErrors, and you've learned how to manage them with try and except block.

More can be learned about speech recognition than what you've just read. We will implement the RTC module integration in our upcoming tutorial to enable real-time control.

Get To Know About Bluetooth Beacons For Indoor Positioning

Indoor positioning technology has become widely available in a variety of configurations and quality levels. It's a jungle out there: unlike outdoor location using GPS satellite technology, no single established solution has proven adequate for all needs.

Since GPS satellite technology became widely available in the late 1990s, positioning systems have played an increasingly important role in people's lives. Almost everyone now owns a device with positioning capabilities, whether it's a mobile phone, tablet, GPS tracker, or smartwatch with built-in GPS.

Though GPS transformed outdoor positioning, we're now moving on to indoor positioning, which requires new technologies. Because the signal is attenuated and scattered by roofs and walls, satellite-based location does not function indoors or on narrow streets. Thankfully, other technology standards have arisen that enable indoor positioning, albeit with a new form of infrastructure.

Indoor positioning is useful for a variety of purposes for individuals and organizations: making travel easier, locating what you're looking for, delivering or receiving targeted location-based information, enhancing accessibility, gaining useful data insights, and a lot more.

What Is BLE Indoor Positioning And How Does It Function Properly?

The User's Position

Indoor location relies heavily on BLE beacons. Using this technology, a device can detect when it is in range of a Bluetooth beacon and can even determine its own position when it is in reach of more than two beacons.

The original BLE-based positioning prototypes could only detect which beacon was closest to the user. Today, however, we can combine proximity data from multiple beacons to place the consumer in 2D space on an indoor map. The accuracy varies depending on the situation, but it can be as fine as 1.5 meters.
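As an illustration of combining proximity data from several beacons, here is a minimal weighted-centroid sketch; the beacon coordinates and distance estimates are invented for the example, and real systems add filtering and calibration on top of this idea:

# Minimal weighted-centroid positioning from estimated beacon distances.
# Beacon map coordinates (metres) and distances below are illustrative only.

def estimate_position(beacons):
    # beacons: list of ((x, y), distance_m) tuples for beacons in range
    weights = [1.0 / max(d, 0.1) for (_, d) in beacons]  # closer beacons weigh more
    total = sum(weights)
    x = sum(w * bx for w, ((bx, _), _) in zip(weights, beacons)) / total
    y = sum(w * by for w, ((_, by), _) in zip(weights, beacons)) / total
    return x, y

# three beacons in range, each with an estimated distance
print(estimate_position([((0, 0), 2.0), ((10, 0), 4.0), ((5, 8), 6.0)]))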

This technology keeps improving, and systems now combine magnetic field sensing, gyroscopes, accelerometers, and Near Field Communication circuits to provide more exact positioning.

Apps From The Customer/Visitor Standpoint

This technology is used by customers and visitors for navigation and receiving location-based content. They do it by installing an app on their smartphone, tablet, or watch. Indoor mapping and location-specific content distribution are common features of the app.

The Viewpoint Of The Organization - Content Management System (CMS)

BLE positioning systems are used by businesses to deliver a better experience for their visitors or customers. Almost any form of organization can profit from location-based technologies. For instance:

  • Museums can provide visitors with location-based narrations accompanied by a map of the venue, allowing for navigation and participatory learning.
  • Indoor positioning systems can be utilized in retail to provide customers with location-based marketing, navigation, and other location-based content.
  • Location-based data, such as turn-by-turn navigation, could be beneficial to airports and hospitals.

Organizations use the CMS online platform to manage their content, floor maps, and Bluetooth beacon positions. A content management system (CMS) is often a hosted software system that keeps track of every piece of material in the app that users or customers access. Organizations need a fully working CMS because it gives them full control over the material that consumers see.

UWB Vs. BLE:

Low power, low cost, and effectiveness as asset-tracking systems are characteristics shared by BLE and UWB. UWB, however, has significantly more precision than Bluetooth, owing in part to UWB's exact, distance-based method of location determination.

BLE commonly locates devices using RSSI (received signal strength indication), which is much less precise because it depends on how weak or strong a signal the device transmits relative to a Bluetooth beacon or sensor.
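To get a feel for why RSSI-based ranging is imprecise, here is a rough sketch of the log-distance path-loss model commonly used to turn RSSI into a distance estimate; the measured power and path-loss exponent are assumptions that must be calibrated per environment:

# Rough RSSI-to-distance conversion using a log-distance path-loss model.
# measured_power: expected RSSI at 1 m; n: path-loss exponent (environment-specific).

def rssi_to_distance(rssi_dbm, measured_power=-59.0, n=2.0):
    return 10 ** ((measured_power - rssi_dbm) / (10.0 * n))

print(round(rssi_to_distance(-75), 2))  # an RSSI of -75 dBm -> roughly 6.3 m

Small changes in n or in the measured signal swing the estimate by metres, which is part of why UWB's distance-based ranging is so much more precise.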

In comparison to UWB, BLE has a substantially shorter range and a lower data rate. On the other hand, Bluetooth is a widely used RF technology that can be integrated into a variety of indoor settings using flexible hardware, such as BLE beacons, sensors, and asset tags.

Benefits:

Low-Cost, Low-Power

  • BLE is an appropriate RF standard for BLE sensors, Bluetooth beacons, and asset or personnel tags because of its low energy consumption and cost-effective technology.

Ease of Deployment

  • BLE offers simple, easy-to-deploy solutions and versatile hardware alternatives that can be used on or off the network and seamlessly incorporate into your Bluetooth ecosystem.

Adaptable Technology

  • Extend the technology's capabilities to support a variety of location-aware applications, including asset monitoring, Bluetooth device identification, indoor positioning and navigation, proximity applications, and more.

Some Ways Of Using Bluetooth Beacons:

  • Stationary Beacons, Roaming Mobile Devices, And Traveling Asset Beacons Are Used In Tandem.

In some cases, tracking the position of assets within a workspace is desirable, yet mounting permanent BLE receivers is impractical. Without a device to detect asset location and communicate data back to a cloud service, asset monitoring becomes difficult. This can be avoided by piggybacking on a mobile device's location.

Bluetooth beacons are placed throughout a facility, and a mobile app is installed to track where each device is at all times, similar to the previous strategy. By marking assets with beacons, the app can detect adjacent assets and assign them the same position as the device, based on the nearby fixed beacons.

  • Combining A BLE Technology With GPS, Wi-Fi, Or Geo-Fencing Is The Preferred Technology.

Indoors, BLE beacons offer substantial advantages for tracking people and assets. Even so, integrating this technique with more conventional location services, like GPS or Wi-Fi, still has advantages: assets with embedded beacons can be identified, and further mobile location technologies can then be utilized to give context.

For example, attaching a Bluetooth beacon to the inside of a vehicle can be used to track the whereabouts of mobile workers while they drive, or to track asset locations within an office building using Wi-Fi-enabled client tracking and asset-tagging beacons.

Summing Up!

In a nutshell, Bluetooth will remain a popular RF technology for wireless devices, short-range communication, and indoor positioning. The proliferation of access points with Bluetooth Low Energy beacon and sensor systems incorporated out of the box, along with increasingly well-equipped consumer wearables, IoT devices, asset-tracking tags, employee badges, and customer Bluetooth trackers, will almost certainly continue to grow.

Analog Input Scaling in Ladder Logic Programming

Hi friends, and hope you are doing very well. Today we would like to take on a tutorial that is essential in industry: analog input processing for handling analog measurements of physical signals like temperature, humidity, pressure, distance, flow, and liquid level. Typically, sensors produce one of two types of analog signals to represent the measured quantity: current or voltage. Current signals are within the range of 4-20 mA, while voltage signals are in the range of 0-10 V. Because these output signals represent physical quantities, and their limits are 0 to 10 V for voltage-based sensors and 4 to 20 mA for current-based sensors, the values should be scaled to represent the equivalent physical signal. By completing this tutorial, we will have elaborated the concept of handling analog signals and how we scale them to represent physical signals. In addition, we will go through practical examples of handling analog inputs in ladder programming, and then we will enjoy practicing these examples on the simulator to validate the written ladder code and show analog processing in a PLC in action.

In computer systems, PLCs, and microcontrollers, all internal processing is done digitally, using a representation of 0s and 1s. So, one may ask how analog signals, which change continuously within a specific range, can be processed by computers, PLCs, or microcontrollers. Well, that is an exciting question, and its answer opens the door to the aim of this tutorial. Figure 1 shows an example of an analog voltage signal representing a temperature sensor reading.

Fig. 1: Output voltage of temperature sensor

As you can see in Fig. 1, the output of the temperature sensor is either a voltage signal or a current signal, based on the type of sensor. The sensor output is applied to the analog input module, which converts the analog signal to a digital value that the PLC can process.

Scaling analog inputs

The output signals of sensors, either voltage or current, should be scaled so that they represent the physical signal while having an equivalent digital value. For example, suppose the temperature being measured has a range of 0 to 100 °C, and the sensor we use produces a voltage output from 0 to 10 V. Each 0.1 V change in voltage is then equivalent to a change of 1 °C. Also, assume the maximum digital value that can be received is 4000: 0 V is equivalent to a digital value of 0, and 10 V is equivalent to a digital value of 4000. It is therefore crucial to scale the output voltage so we can accurately determine the equivalent change in the sensor's reading, representing the temperature both as a digital value and within the real range of the physical signal.
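The arithmetic behind this is plain linear interpolation. Here is a small Python sketch of what a PLC scaling block computes, using the 0-10 V and 0-4000-count figures from this example:

# Linear scaling between an input range and a scaled range,
# the same interpolation a PLC scaling block performs.

def scale(value, in_min, in_max, out_min, out_max):
    return (value - in_min) * (out_max - out_min) / (in_max - in_min) + out_min

print(scale(5.0, 0.0, 10.0, 0, 4000))    # 5 V of a 0-10 V sensor -> 2000.0 counts
print(scale(2000, 0, 4000, 0.0, 100.0))  # 2000 counts -> 50.0 degC on a 0-100 degC range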

Scaling with parameters

Again, we have two sides to be scaled from one to the other. Therefore, the parameters that are needed for achieving such scaling are as follows:

  • The input value: represents the physical signal that comes from a sensor. It is typically 0-10 V for voltage-based sensors or 4-20 mA for current-based sensors.
  • The scaled value: represents the digital equivalent of that value. It depends on the word size, i.e., the number of bits used for the digital representation. For example, for a 16-bit word, the scaled value ranges from -32768 to +32767.

So we can now picture the scaling function block in ladder logic, as shown in Fig. 2. It takes an input minimum and maximum, which could be 0 and 10 V or 4 and 20 mA. In addition, there are a scaled minimum and maximum, which could be 0 to 32767 for a 16-bit conversion.

Fig.2: the SCP block in Allen Bradley

Analog input processing in Ladder logic

We are going to show a complete example of analog input processing in both Siemens and Allen-Bradley PLCs to present the merits of analog conversion in both brands. Figure 3 shows a primitive rung of a ladder logic program that processes an analog input reading. The rung uses a scale-with-parameters (SCP) block with an input minimum of 4 mA and an input maximum of 20 mA, and a scaled range from 0 to 32767 because it utilizes a 16-bit word for representing the digital data.

Fig. 3: Ladder logic rung for analog input processing

Figure 4 shows a run of this simple example, demonstrating the processing of analog inputs. When the measured input value was 12 mA, the output was 16884, which is pretty accurate. It is worth mentioning that the output is limited to the range between the scaled minimum and maximum, meaning it stays between 0 and 32767.

Fig. 4: Testing analog processing by the simulator

Let us give another example, this time on the Siemens S7-1200, shown in Fig. 5. The ladder code is very simple and consists of two rungs. The first validates that the reading is within the range 0 to 27648; the second, main rung performs the analog processing in two steps. First it normalizes the input based on the aforementioned range, and then it scales the result to the output signal format. In this example, we measure a battery voltage that typically lies in the range of 0 to 12 V. Therefore, the min and max parameters of the SCALE_X block should be 0 and 12 V, respectively.

Fig. 5: Example of processing analog inputs in Siemens S7-1200

Figure 6 demonstrates the test of the analog processing: the output reported 5.858941 when the raw reading was 13499, which is highly accurate.

Fig. 6: Simulation of processing analog inputs in Siemens S7-1200
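That two-step computation can be checked in a few lines of Python, assuming the 0-27648 raw range and 0-12 V output range stated above:

# Reproducing the normalize-then-scale steps from the S7-1200 example.
raw = 13499                        # raw analog reading from the simulator run
norm = (raw - 0) / (27648 - 0)     # step 1: normalize the raw value to 0.0-1.0
volts = norm * (12.0 - 0.0) + 0.0  # step 2: scale to the 0-12 V battery range
print(round(volts, 6))             # -> 5.858941, matching the reported output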

What’s next

I am truly thrilled to have you share with me this very important tutorial about analog input processing and scaling with parameters, because this is a very common operation in industry: there are hundreds of sensors reading analog physical signals that are processed by the controller to decide the next stage of the operation. Next time we are going to talk about jumping and branching techniques in ladder logic programs. So please be ready to meet again very soon to learn together and enjoy practicing PLC ladder programming.

Interfacing PIR Motion Sensor and Raspberry Pi Pico Module with MicroPython

Hello readers, I hope you all are doing great. In this tutorial, we will learn how to interface the PIR sensor to detect motion with the Raspberry Pi Pico module and MicroPython programming language. Later in this tutorial, we will also discuss the interrupts and how to generate an external interrupt with a PIR sensor.

Before interfacing and programming the PIR and Pico boards, let's first have a quick look at the PIR sensor and how it works.

Fig. 1 Raspberry Pi Pico and PIR sensor

PIR motion sensor and its working

PIR stands for Passive Infrared, and the PIR module we are using is the HC-SR501. As the name suggests, the PIR or passive infrared sensor produces a TTL (transistor-transistor logic) output (either HIGH or LOW) in response to the incoming infrared radiation. The HC-SR501 (PIR) module features a pair of pyroelectric sensors that detect heat energy in the surrounding environment. The two sensors sit beside each other, and when motion changes the signal differential between them, the module's output pin goes HIGH. In code, this means you wait for the pin to change state; when it does, the desired function can be called.

In the PIR module, a Fresnel lens is used to focus the incoming infrared radiation onto the PIR sensor.

Fig. 2 PIR motion sensor

The PIR motion sensor has a few setting options available to control or change its behaviour.

Two potentiometers are available on the HC-SR501 module, as shown in the image attached below (Fig. 3). One of them controls the sensing range, or sensitivity, of the module, which can be adjusted based on the installation location and project requirements. The second potentiometer controls the delay time, i.e., how long the detection output stays active; it can be set from as little as a few seconds to as long as a few minutes.

Fig. 3 HC-SR501 PIR sensor module

Thermal sensing applications, such as security and motion detection, make use of PIR sensors. They're frequently used in security alarms, motion detection alarms, and automatic lighting applications.

Some of the basic technical specifications of HC-SR501 (PIR) sensor module are:

Table: 1 HC-SR501 technical specification

Hardware and software components required

  1. Raspberry Pi Pico development board
  2. PIR motion sensor (HC-SR501)
  3. Jumper wires
  4. Breadboard
  5. Thonny IDE (installed)
  6. USB cable

Fig. 4 Hardware components required

Interfacing the PIR sensor module with Raspberry Pi Pico

The HC-SR501 module has three interfacing pins: VCC, GND, and OUT. The VCC pin powers the board and should be connected to the 3.3V pin of the Raspberry Pi Pico board. The 'OUT' pin provides the TTL (transistor-transistor logic) output, which is either HIGH or LOW depending upon the infrared input received. The Raspberry Pi Pico module has 26 multifunctional GPIO pins, and the 'OUT' pin of the PIR sensor can be connected to any of them.

Table: 2 Interfacing HC-SR501 and Pico

HC-SR501 pin | Raspberry Pi Pico pin
VCC | 3.3V (3V3)
OUT | GPIO 0 (any GPIO can be used)
GND | GND

Fig. 5 Interfacing PIR with Pico module

Programming with Thonny IDE and MicroPython

Before writing the MicroPython program, make sure that you have installed an integrated development environment (IDE) to program the Pico board for interfacing the PIR sensor module.

There are multiple development environments available to program the Raspberry Pi Pico (RP2040) with MicroPython programming language like VS Code, uPyCraft IDE, Thonny IDE etc.

In this tutorial, we are using Thonny IDE with the MicroPython programming language (as mentioned earlier). We already published a tutorial on how to install the Thonny IDE for Raspberry Pi Pico Programming.

Now, let’s write the MicroPython program to interface the PIR (HC-SR501) and Pico modules and implement motion detection with Raspberry Pi Pico:

Importing Necessary Libraries

The first task is importing the necessary libraries and classes. To connect the data (OUT) pin of the PIR sensor module with Raspberry Pi Pico we can use any of the GPIO pins of the Pico module. So, here we are importing the ‘Pin’ class from the ‘machine’ library to access the GPIO pins of the Raspberry Pi Pico board.

Secondly, we are importing the ‘time’ library to access the internal clock of RP2040. This time module is used to add delay in program execution or between some events whenever required.

Fig. 6 Importing necessary libraries

Object Declaration

Next, we are declaring some objects. The ’led’ object represents the GPIO pin to which the LED is connected (representing the status of PIR output) and the pin is configured as an output.

The ‘PirSensor’ object represents the GPIO pin to which the ‘OUT’ pin of HC-SR501 is to be connected which is GPIO_0. The pin is configured as input and pulled down.

Fig. 7 Object declaration

Creating a function to detect motion

A ‘motion_det()’ function is defined to check the status of the PIR sensor and generate an event in response.

The status of the PIR sensor is read using the ‘PirSensor.value()’ command. The default status of GPIO_0 is LOW or ‘0’ because it is pulled down. We are using an LED to represent the status of the PIR sensor: whenever motion is detected, the LED changes its state and remains in that state for a particular time interval.

If motion is detected, the status of the GPIO_0 pin turns HIGH or ‘1’, the respective status is printed on the ‘Shell’, and the LED connected to GPIO_25 simultaneously goes HIGH for 3 seconds. Otherwise, the “no motion” status is printed on the Shell.

Fig. 8 creating a function

Running the motion detection function

Here we are using the ‘while’ loop to continuously run the motion detection function. So, the PIR sensor will be responding to the infrared input continuously with the added delay.

Fig. 9 Main loop

Code

# importing necessary libraries
from machine import Pin
import time

# Object declaration
led = Pin(25, Pin.OUT, Pin.PULL_DOWN)
PirSensor = Pin(0, Pin.IN, Pin.PULL_DOWN)

def motion_det():
    if PirSensor.value() == 1:  # status of PIR output
        print("motion detected")  # print the response
        led.value(1)
        time.sleep(3)
    else:
        print("no motion")
        led.value(0)
        time.sleep(1)

while True:
    motion_det()

Results

    • Connect your Raspberry Pi Pico board to your system and select the MicroPython interpreter.
    • Create a new program in Thonny IDE and paste the above code.
    • Save the program either on your system or on the Raspberry Pi Pico.
    • To see the printed output, enable the ‘Shell’.
    • To enable the Shell, go to View >> Shell.

Fig. 10 Enable Shell

  • Run the program by clicking on the ‘Run’ button.

Fig. 11 Output on Shell

Fig. 12 Motion detected with LED ‘ON’

Generating External interrupt with Raspberry Pi Pico and PIR sensor modules

Now let’s take another example where we will discuss the interrupts with Raspberry Pi Pico.

Interrupts come into play in two situations. In the first, a microcontroller is executing a task or a sequence of dedicated tasks while continuously monitoring for an event to occur, so that it can execute the task that arrives with that event. Instead of continuously polling for the event, the microcontroller can jump directly to a new task whenever an interrupt occurs, meanwhile keeping the regular task on hold. Thus we avoid wasting memory and energy.

Fig. 13 Interrupt

In the second case, a microcontroller starts executing the task only when an interrupt occurs. Otherwise, the microcontroller remains in standby or low-power mode (as per the instructions provided).

In this example, we are going to implement the second case, using the PIR sensor to generate the interrupt. The Raspberry Pi Pico will execute the assigned task only after receiving an interrupt request.

Interrupts can be either external or internal. Internal interrupts are mostly software-generated, for example timer interrupts. External interrupts, on the other hand, are mostly hardware-generated, for example from a push button, motion sensor, temperature sensor, or light detector.

In this example, we are using the PIR sensor to generate an external interrupt. Whenever motion is detected, a particular group of LEDs will turn ON (HIGH) while the rest of the LEDs stay OFF (LOW). A servo motor is also interfaced with the Raspberry Pi Pico board; the motor will start rotating once an interrupt is detected.

We already published a tutorial on interfacing a servo motor with Raspberry Pi Pico. You can follow our site for more details.

Fig. 14 Schematic_2

Now let’s write the MicroPython program to generate an external interrupt for the Raspberry Pi Pico with the PIR sensor.

Importing libraries

As discussed in our previous example, the first task is importing the necessary libraries and classes. The modules are the same as in the previous example, except for the ‘PWM’ one.

The ‘PWM’ class from the ‘machine’ library is used to implement PWM on the servo motor interfaced with the Raspberry Pi Pico board.

Fig. 15 importing libraries

Object Declaration

In this example, we are using three different components: a PIR sensor, a servo motor, and some LEDs. Objects are declared for each component. The object ‘ex_interrupt’ represents the GPIO pin to which the PIR sensor is connected; the pin is configured as an input and pulled down.

The second object represents the GPIO pin to which the servo motor is connected. The ‘led_x’ objects represent the GPIO pins to which the peripheral LEDs are connected. Here we are using six peripheral LEDs.

Fig. 16 Object declaration

  • We declare a global variable ‘pir_output’ to represent whether motion has been detected. When no motion is detected, the variable remains in its default state, ‘False’. Whenever motion is detected, ‘pir_output’ changes to ‘True’ and an interrupt is generated.

Fig. 17 PIR output status

Interrupt handler

Next, we define an interrupt handler function. The parameter ‘Pin’ in the function represents the GPIO pin that caused the interrupt.

Inside the handler, the variable ‘pir_output’ is set to ‘True’; the while loop then acts on this state the next time it runs.

Fig. 18 Interrupt handling function

Attaching interrupt to GPIO_0

The interrupt is attached to the GPIO_0 pin, represented by the ‘ex_interrupt’ variable, and will be triggered on the rising edge.

Fig. 18 Attaching interrupt

Setting Servo motor position

In the function defined to change the position of the servo motor, we use the pulse-width modulation technique to change the servo position/angle. The motor rotates to 180 degrees and then back to 0 degrees.

Fig. 19 defining function for servo

Motion_det()

This is the function from which we call all the previously defined functions; each one is executed in its assigned sequence whenever an interrupt is detected.

Fig. 20

Code

The MicroPython code to generate an external interrupt with PIR sensor for Raspberry Pi Pico is attached below:

# importing necessary libraries
from machine import Pin, PWM
import time

# Object declaration: PIR, PWM and LEDs
ex_interrupt = Pin(0, Pin.IN, Pin.PULL_DOWN)
pwm = PWM(Pin(1))
led1 = Pin(13, Pin.OUT)
led2 = Pin(14, Pin.OUT)
led3 = Pin(15, Pin.OUT)
led4 = Pin(16, Pin.OUT)
led5 = Pin(17, Pin.OUT)
led6 = Pin(18, Pin.OUT)

# PIR output status
pir_output = False

# setting PWM frequency at 50Hz
pwm.freq(50)

# interrupt handling function
def intr_handler(Pin):
    global pir_output
    pir_output = True

# attaching interrupt to GPIO_0
ex_interrupt.irq(trigger=Pin.IRQ_RISING, handler=intr_handler)

# defining LED blinking functions
def led_blink_1():
    led1.value(1)
    led3.value(1)
    led5.value(1)
    led2.value(0)
    led4.value(0)
    led6.value(0)
    time.sleep(0.5)

def led_blink_2():
    led1.value(0)
    led3.value(0)
    led5.value(0)
    led2.value(1)
    led4.value(1)
    led6.value(1)
    time.sleep(0.5)

def servo():
    for position in range(1000, 9000, 50):  # sweep to 180 degrees
        pwm.duty_u16(position)
        time.sleep(0.00001)  # delay
    for position in range(9000, 1000, -50):  # sweep back to 0 degrees
        pwm.duty_u16(position)
        time.sleep(0.00001)  # delay

def motion_det():
    global pir_output  # needed so the interrupt flag can be reset below
    if pir_output:  # status of PIR output
        print("motion detected")  # print the response
        led_blink_1()
        servo()  # rotate servo motor (180 degrees and back)
        time.sleep(0.5)
        pir_output = False  # reset the interrupt flag
    else:
        print("no motion")
        led_blink_2()

while True:
    motion_det()

Result

The results observed are attached below:

Fig. 21 Output printed on Shell

  • For the hardware demonstration, we are using six LEDs (3 red and 3 green) and a servo motor.
  • The red LEDs represent the state whenever motion is detected.

Fig. 22 Motion Detected

  • The green LEDs represent the ‘No motion detected’ state.

Fig. 23 No motion detected

Conclusion

In this tutorial, we discussed how to interface the HC-SR501 PIR sensor with the Raspberry Pi Pico and detect motion, using the Thonny IDE and the MicroPython programming language. We also discussed interrupts and how to generate them using the HC-SR501 sensor.

This concludes the tutorial. I hope you found this of some help and also hope to see you soon with a new tutorial on Raspberry Pi Pico programming.

Smart Security System using Facial Recognition with Raspberry Pi 4

Greetings, and welcome to the next tutorial of our Raspberry Pi programming series. In the previous tutorial, we learned how to build a smart attendance system using an RFID card reader, which we used to sign students into a class attendance record. This tutorial will show you how to build a face-recognition program on a Raspberry Pi. Two Python programs will be used in the lesson: a Training program that analyzes a collection of photographs of a certain individual and generates a dataset (a YML file), and a Recognizer application that uses the YML file to detect a face and then display the person's name when the face is recognized.

Where To Buy?
No. | Components | Distributor | Link To Buy
1 | Breadboard | Amazon | Buy Now
2 | DC Motor | Amazon | Buy Now
3 | Jumper Wires | Amazon | Buy Now
4 | Raspberry Pi 4 | Amazon | Buy Now

Components

  • Raspberry Pi
  • Breadboard
  • L293 or SN755410 motor driver chip
  • Jumper wires
  • DC motor
  • 5v power supply

A growing number of us already use face recognition software without realizing it. Facial recognition is used in several applications, from basic Facebook tag suggestions to advanced security screening surveillance. Chinese schools employ facial recognition to track students' attendance and behaviour. Retail stores use face recognition to classify their clients and identify those who have a history of crime. There's no denying that this tech will be all over soon, especially with so many other developments in the works.

How does facial recognition work?

When it comes to facial recognition, biometric authentication goes well beyond simply detecting human faces in images or videos; an additional step identifies the person. Facial recognition software compares an image of a person's face against a database to see whether the features match. The technology is built so that facial expressions and hair do not affect its ability to identify matches.

How can face recognition be used when it comes to smart security systems?

The first thing you should do if you want to make your home "smart" is to focus on security. Your most prized possessions are housed at this location, and protecting them is a must. Thanks to a smart security system, you can monitor your home security status from your computer or smartphone when you're outdoors.

Traditionally, a security company would install a wired system in your house and sign you up for professional monitoring. The plot has been rewritten: when setting up a smart home system, you can now do it yourself. In addition, your smartphone acts as a professional monitor, providing you with real-time information and notifications.

Face recognition is the ability of a smart camera in your house to identify a person based on their face. Consequently, you will have to tell the algorithm which face goes with which name for face recognition to operate. Facial detection in security systems necessitates creating user profiles for family members, acquaintances, and others you want to be identified by the system, so you can be alerted when they arrive at your doors or inside your house.

Face-recognition technology allows you to create specific warning conditions. For example, you can configure a camera to inform you when an intruder enters your home with a face the camera doesn't recognize.

Astonishing advancements in smart tech have been made in recent years. Companies are increasingly offering automatic locks with face recognition. You may open your doors just by smiling at a face recognition system door lock. You could, however, use a passcode or a real key to open and close the smart door. You may also configure your smart house lock to email you an emergency warning if someone on the blacklist tries to unlock your smart security door.

How to install OpenCV for Raspberry Pi 4.

OpenCV, as previously stated, will be used to detect and recognize faces. So, before continuing, let's set up the OpenCV library. Your Pi 4 needs a 2A power adapter and an HDMI cable, because we'll work on the Pi's own screen rather than access it through SSH. The OpenCV documentation is a good place to learn how image processing works, but I'm not going to go into it here.

Installing OpenCV using pip

pip is well known for making it simple to add new libraries to Python, and there is a way to install OpenCV on a Raspberry Pi via pip, though it didn't work for me. We can't obtain complete control of the OpenCV library when using pip to install it; however, this might be worth a go if time is of the essence.

Ensure pip is set up on your Raspberry Pi. Then, one by one, execute the lines of code listed below into your terminal.

sudo apt-get install libhdf5-dev libhdf5-serial-dev

sudo apt-get install libqtwebkit4 libqt4-test

sudo pip install opencv-contrib-python

How OpenCV Recognizes Face

Facial recognition and face detection are not the same thing, and this must be clarified before we proceed. With face detection, a face is simply found in the frame; the program has no clue who that person is. Facial recognition software not only detects the face but also recognizes whose it is. At this point, it's pretty evident that facial detection comes before facial recognition. To explain how OpenCV recognizes a person or other objects, I will have to go into a little detail.

Essentially, a webcam feed is like a long series of continuously updating still photos, and every image is nothing more than a jumble of pixels with varying values arranged in a specific order. So, how does computer software identify a face among all of these random pixels? Describing the underlying techniques is outside the scope of this post, but since we're utilizing the OpenCV library, facial recognition is a straightforward process that doesn't necessitate a deeper understanding of the underlying principles.

Using Cascade Classifiers for Face Detection

We can only recognize a face once we have detected it. For detecting an item, including a face, OpenCV provides Classifiers: pre-trained datasets that may be utilized to recognize a certain object. Classifiers can also detect other things, such as a mouth, eyebrows, the number plate of a vehicle, and smiles.

Alternatively, OpenCV allows you to design your custom Classifier for detecting any objects in images by retraining the cascade classifier. For the sake of this tutorial, we'll be using the classifier named "haarcascade_frontalface_default.xml" to identify faces from the camera. We'll learn more about image classifiers and how to apply them in code in the following sections.

Setup the raspberry pi camera

For the face training and detection, we only need the pi camera, and to install this, insert the raspberry pi camera in the pi camera slot as shown below. Then go to your terminal, open the configuration window using "sudo raspi-config", and press enter. Navigate to the interface options and activate the pi camera module. Accept the changes and finish the setup. Then reboot your RPi.

How to Setup the Necessary Software

First, ensure pip is set up, and then install the following packages using it.

Install dlib: Dlib is a set of libraries for building ML and data analysis programs in the real world. To get dlib up and running, type the following command into your terminal window.

pip install dlib

If everything goes according to plan, you should see something similar after running this command.

Install pillow: The Python Image Library, generally known as PIL, is a tool for opening, manipulating, and saving images in various formats. The following command will set up PIL for you.

pip install pillow

You should receive the message below once this app has been installed.

Install face_recognition: The face recognition package is often the most straightforward tool for detecting and manipulating human faces. Face recognition will be made easier with the help of this library. Installing this library is as simple as running the provided code.

pip install face_recognition --no-cache-dir

If all goes well, you should see something similar to what is shown below after the software is installed. Due to its size, I used the "--no-cache-dir" command-line option to install the package without keeping any of its cache files.

Face Recognition Folders

The file named "haarcascade_frontalface_default.xml" is the Classifier used for detecting faces. The training script will also build a "face-trainner.yml" file based on the photos found in the face images directory.

Create the face images folder and fill it with a collection of face images.

The face images folder indicated above should contain subdirectories named after each person to be identified, each holding several sample photographs of them. Esther and x have been identified for this tutorial. As a result, I've generated just the two sub-directories shown below, each containing a single image.

You must rename the directories and replace the photographs with those of the people you are identifying. A minimum of five images for each individual appears optimal; however, the more people you add, the slower the software will run.

Face trainer program

Face_Trainer.py is a Python program that may be used to train a new face. The purpose of the program is to open the face photographs folder and scan for faces. As soon as it detects a face, it crops it, converts it to grayscale, and saves the training data to a file named face-trainner.yml using the face recognition package we loaded previously. The information in this file is used to identify the faces later. In addition to the whole Trainer program provided at the conclusion, we'll go over some of the more critical lines.

The first step is to import the necessary modules. The cv2 package is used to process images, the NumPy library for image conversion, the os package for directory navigation, and PIL for handling photos.

import cv2

import numpy as np

import os

from PIL import Image

Ensure that the XML file in question is located in the project directory to avoid encountering an issue. The LBPH Facial recognizer is then constructed using the recognizer parameter.

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

recognizer = cv2.createLBPHFaceRecognizer()

Face_Images = os.path.join(os.getcwd(), "Face_Images")

In order to open all of the files ending in .jpeg, .jpg, or .png within every subfolder of the face images folder, we traverse the tree with for loops. We record the path to every image in a variable named path, and the directory name (the name of the person in the photos) in a variable named person_name.

for root, dirs, files in os.walk(Face_Images):
    for file in files:  # check every file in every directory
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)

Whenever the name of the person changes, we increment a variable named Face_ID, which gives us a unique Face_ID for each individual.

if pev_person_name != person_name:
    Face_ID = Face_ID + 1  # if yes, increment the ID count
    pev_person_name = person_name

Because the BGR values may be ignored, grayscale photos are simpler for OpenCV to deal with than colourful ones. We therefore convert the image to grayscale and then resize it so that all the pictures are the same size. To avoid having the face cut out, keep it in the centre of the photo. These images are then converted into NumPy arrays to obtain numerical values. Afterwards, the classifier detects a face in the photo and saves the result in a variable named faces.

Gery_Image = Image.open(path).convert("L")

Crop_Image = Gery_Image.resize( (550,550) , Image.ANTIALIAS)

Final_Image = np.array(Crop_Image, "uint8")

faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)

The region of interest is the portion of the image where the face is found after cropping; it is what trains the face-recognition system. Every face region of interest is appended to a variable named x_train. We then feed the recognizer our training data, using the region-of-interest values together with the Face_ID data, and the gathered information is archived.

for (x, y, w, h) in faces:
    roi = Final_Image[y:y+h, x:x+w]
    x_train.append(roi)
    y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.save("face-trainner.yml")

You'll notice that the face-trainner.yml file is updated whenever you run this program, so make sure to rerun this code whenever you modify the photographs in the Face Images folder. For debugging, the program prints the Face_ID, the path, the person_name, and the NumPy arrays.

Face recognition program

Now that our trained data has been prepared, we can begin using it to identify people. We'll use a USB webcam or Pi camera to feed video into the face recognizer application, frame by frame. Once we've found the faces in those frames, we'll match them against all of our previously created Face IDs. Finally, we output the identified person's name in a box around their face. The whole program is presented afterwards, with the explanation below.

Import the required modules from the training program, and load the classifier again, because we also need to do face detection in this program.

import cv2

import numpy as np

import os

from time import sleep

from PIL import Image

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

recognizer = cv2.createLBPHFaceRecognizer()

The people in the folder should be listed in the variable named labels; make sure they appear in the same order as their Face IDs. In my case, the labels are "Esther" and "Unknown".

labels = ["Esther", "Unknown"]

We need the trainer file to detect faces, so we import it into our software.

recognizer.load("face-trainner.yml")

The camera provides the video stream. It's possible to access a second Pi camera by replacing 0 with 1.

cap = cv2.VideoCapture(0)

In the next step, we split the footage into frames, transform each frame into grayscale, and then search for a face in it. When a face is detected, we crop it out and save the region-of-interest grayscale image.

ret, img = cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)

for (x, y, w, h) in faces:
    roi_gray = gray[y:y+h, x:x+w]
    id_, conf = recognizer.predict(roi_gray)

The conf value tells us how sure the program is in its identification of the person. We write the code below to get the person's name based on their identification number; a rectangle is drawn around the face, with the name written above it.

if conf >= 80:
    font = cv2.FONT_HERSHEY_SIMPLEX
    name = labels[id_]
    cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 0, 255), 2)

Finally, we display the frame we just evaluated and break out of the video loop when the wait key (q) is pressed.

cv2.imshow('Preview', img)

if cv2.waitKey(20) & 0xFF == ord('q'):
    break

While running this application, ensure the Raspberry Pi is linked to a display via HDMI. A window with your video stream will appear when you open the application. There will be a box around each face identified in the video feed, and if the software recognizes a face, it displays that person's name. As evidenced by the image below, we've trained our software to identify my face, which shows the recognition process in action.

The face recognition code

import cv2
import numpy as np
import os
from PIL import Image

labels = ["Esther", "Unknown"]
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.createLBPHFaceRecognizer()
recognizer.load("face-trainner.yml")
cap = cv2.VideoCapture(0)

while(True):
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)  # detect faces
    for (x, y, w, h) in faces:
        roi_gray = gray[y:y+h, x:x+w]
        id_, conf = recognizer.predict(roi_gray)
        if conf >= 80:
            font = cv2.FONT_HERSHEY_SIMPLEX
            name = labels[id_]
            cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)
            cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow('Preview', img)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

The face trainer code

import cv2
import numpy as np
import os
from PIL import Image

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.createLBPHFaceRecognizer()

Face_ID = -1
pev_person_name = ""
y_ID = []
x_train = []

Face_Images = os.path.join(os.getcwd(), "Face_Images")
print(Face_Images)

for root, dirs, files in os.walk(Face_Images):
    for file in files:
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):
            path = os.path.join(root, file)
            person_name = os.path.basename(root)
            print(path, person_name)

            if pev_person_name != person_name:
                Face_ID = Face_ID + 1
                pev_person_name = person_name

            Gery_Image = Image.open(path).convert("L")
            Crop_Image = Gery_Image.resize((550, 550), Image.ANTIALIAS)
            Final_Image = np.array(Crop_Image, "uint8")
            faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)
            print(Face_ID, faces)

            for (x, y, w, h) in faces:
                roi = Final_Image[y:y+h, x:x+w]
                x_train.append(roi)
                y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID))
recognizer.save("face-trainner.yml")

DC motor circuit

Since the "How to operate DC motor in Rpi 4" guide has covered the basics of controlling a DC motor, I won't provide much detail here. Please read this topic if you haven't already. Check all the wiring before using the batteries in your circuit, as outlined in the image above. Everything must be in place before connecting your breadboard's power lines to the battery wires.

Testing

To activate the motors, open the terminal: we'll write the Python code with the command-line text editor called Nano. For those of you who aren't familiar with Nano, I'll show you how to use some of its commands as we go.

This code will activate the motor for two seconds, so try it out.

import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BOARD)

Motor1A = 16
Motor1B = 18
Motor1E = 22

GPIO.setup(Motor1A, GPIO.OUT)
GPIO.setup(Motor1B, GPIO.OUT)
GPIO.setup(Motor1E, GPIO.OUT)

print("Turning motor on")
GPIO.output(Motor1A, GPIO.HIGH)
GPIO.output(Motor1B, GPIO.LOW)
GPIO.output(Motor1E, GPIO.HIGH)

sleep(2)

print("Stopping motor")
GPIO.output(Motor1E, GPIO.LOW)
GPIO.cleanup()

The very first two lines of code tell Python what the program needs. The first line imports the RPi.GPIO package; the RPi's GPIO pins are controlled by this module, which takes care of all the grunt work.

The sleep import is needed to pause the script for a few seconds, giving the motor time to run.

The setmode method tells the library to use the RPi's physical board pin numbers. We'll tell Python that pins 16 through 22 correspond to the motor.

Pin A is used to steer the L293D in one direction, and pin B is used to drive it in the opposite direction. You can turn the motor on and off using an Enable pin, referred to as E, in the test file.

Finally, use GPIO.OUT to inform the RPi that all these are outputs.

The RPi is ready to turn the motor once the software is set up. As seen in the code, some pins are turned on and, after a 2-second pause, turned off again.

Save and quit by hitting CTRL-X, and a confirmation notice appears at the bottom. To acknowledge, tap Y and Return. You can now run the program in the terminal and watch as the motor begins to spin up.

sudo python motor.py

If the motor doesn't move, check the cabling or power supply. The debug process might be a pain, but it's an important phase in learning new things!

Now let's turn the motor in the other direction.

I'll teach you how to reverse a motor's rotation to spin in the opposite direction.

There's no need to touch the wiring at this point; it's all Python. Create a new script called motorback.py to accomplish this. Open it in Nano with the command:

nano motorback.py

Please type in the given program:

import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BOARD)

Motor1A = 16
Motor1B = 18
Motor1E = 22

GPIO.setup(Motor1A, GPIO.OUT)
GPIO.setup(Motor1B, GPIO.OUT)
GPIO.setup(Motor1E, GPIO.OUT)

print("Going forwards")
GPIO.output(Motor1A, GPIO.HIGH)
GPIO.output(Motor1B, GPIO.LOW)
GPIO.output(Motor1E, GPIO.HIGH)

sleep(2)

print("Going backwards")
GPIO.output(Motor1A, GPIO.LOW)
GPIO.output(Motor1B, GPIO.HIGH)
GPIO.output(Motor1E, GPIO.HIGH)

sleep(2)

print("Now stop")
GPIO.output(Motor1E, GPIO.LOW)
GPIO.cleanup()

Save by pressing CTRL-X, then Y, and finally the Enter key.

To make the motor reverse, we've set Motor1A low in this script.

Programmers use the terms "high" and "low" to denote the state of being on or off, respectively.

Motor1E will be turned off to halt the motor.

Irrespective of what A is doing, the motor can be turned on or off using the Enable pin.

Take a peek at the Truth Table to understand better what's going on.

When Enable is high, only two states allow the motor to move: A high with B low, or B high with A low, never both high at the same time.
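For reference, here is the truth table for one channel, assuming the standard L293D datasheet behaviour:

Enable (E) | Input A | Input B | Motor
HIGH | HIGH | LOW | Turns one way
HIGH | LOW | HIGH | Turns the other way
HIGH | LOW | LOW | Stopped
HIGH | HIGH | HIGH | Stopped
LOW | any | any | Off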

Putting it all together

At this point, we have designed our face detection system and the DC motor control circuit; now we will put the two systems to work together. When the user is verified, the DC motor should run to open the CD-ROM drive and close it after a few seconds.

In our verify code, we will copy the code below to spin the motor in one direction, "opening the door", when the user is verified. We will also increase the run time to 5 seconds to simulate the time the door stays open for the user to get through. This also allows the motor to spin long enough to open and close the CD-ROM tray completely. I would also recommend putting a stopper on the CD-ROM door so that it doesn't close all the way and get stuck.

if conf >= 80:
    font = cv2.FONT_HERSHEY_SIMPLEX
    name = labels[id_]  # get the name from the list using the ID number
    cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2)

    # place our motor code here
    GPIO.setmode(GPIO.BOARD)
    Motor1A = 16
    Motor1B = 18
    Motor1E = 22

    GPIO.setup(Motor1A, GPIO.OUT)
    GPIO.setup(Motor1B, GPIO.OUT)
    GPIO.setup(Motor1E, GPIO.OUT)

    print("Opening")
    GPIO.output(Motor1A, GPIO.HIGH)
    GPIO.output(Motor1B, GPIO.LOW)
    GPIO.output(Motor1E, GPIO.HIGH)
    sleep(5)

    print("Closing")
    GPIO.output(Motor1A, GPIO.LOW)
    GPIO.output(Motor1B, GPIO.HIGH)
    GPIO.output(Motor1E, GPIO.HIGH)
    sleep(5)

    print("stop")
    GPIO.output(Motor1E, GPIO.LOW)
    GPIO.cleanup()

    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)

Output

The advantages of face recognition over alternative biometric verification methods for home security

An individual's biometric identity can be verified by looking at various physical and behavioural characteristics, such as a person's fingerprint, keystrokes, facial characteristics, and voice. Face recognition seems to be the winner because of its precision, simplicity, and contactless detection.

Face-recognition technology is here to stay and will get better over time. The tale has evolved, and your alternatives have grown thanks to smart tech.

What are the advantages of employing Facial Recognition when it comes to smart home security?

Using an RPi as a surveillance system means you can take it with you and use it wherever you need it.

  1. High accuracy rate

For the most part, the face-recognition software employed in security systems can reliably assess whether or not the individual attempting entry matches your record of those authorized to enter. That said, certain programs are more precise than others when it comes to identifying faces from diverse angles or from different countries.

Concerned users may be relieved to learn that some programs offer the option of setting custom confidence thresholds, which can significantly minimize the likelihood of the system giving false positives. Alternatively, two-factor authentication can be used to secure your system.

  2. Automation

When your smart security system discovers a match between a user and the list of persons you've given access to, it will instantly let them in. Answering the doorbell or allowing entry isn't necessary.

  3. Smart integration

Face recognition solutions can be readily integrated into existing systems using an API.

Cons of Facial Recognition

  1. Privacy of individuals and society as a whole is more at risk

A major drawback of face recognition technology is that it puts people's privacy at risk. Having one's face collected and stored in an unidentified database does not sit well with the average person.

Confidentiality is so important that several towns have prohibited law enforcement from using real-time face recognition monitoring. Rather than using live face recognition software, authorities can in certain situations use recordings from privately-held security cameras.

  2. Can infringe on personal liberties

Having your face captured and stored by face recognition software might make you feel monitored and assessed for your actions. It amounts to a form of criminal profiling, since the police can use face recognition to put everybody in their databases into a virtual crime lineup.

  3. It's possible to deceive the technology

Face recognition can be affected by various elements, including camera angle, illumination, and other aspects of a picture or video. Facial recognition software can also be fooled by people who wear disguises or alter their appearance.

Conclusion

This article walked us through creating a complete smart security system with a facial recognition program from the ground up. Our model can now recognize faces with the help of OpenCV image manipulation techniques. There are several ways to further your knowledge of supervised machine learning programming with the Raspberry Pi 4, including adding an alarm that rings whenever an individual's face is not recognized, or creating a database of known faces to act like a CCTV surveillance system. We'll design a security system with a motion detector and an alarm in the next session.

Smart Attendance System using RFID with Raspberry Pi 4

Greetings! This is the next project in our Raspberry Pi 4 tutorial series. In our previous tutorial, we learned to set up our Raspberry Pi as a virtual private network server. In this tutorial, we will design a smart attendance system using an RFID card reader, which we will use to sign students into a class attendance record.

First, we will design a database for our website, then we will design the RFID circuit for scanning the student cards and displaying present students on the webpage, and finally, we will design the website that we will use to display the attendees of a class.

Where To Buy?
No. | Component      | Distributor | Link To Buy
1   | Breadboard     | Amazon      | Buy Now
2   | Jumper Wires   | Amazon      | Buy Now
3   | LCD 16x2       | Amazon      | Buy Now
4   | Raspberry Pi 4 | Amazon      | Buy Now

Components

  • RFID card kit
  • Breadboard
  • Jumper wires
  • Raspberry pi 4
  • I2C LCD screen

Design a database in MySQL server

A database server offers a DBMS that can be queried, connected to, and integrated with a wide range of platforms. High-volume production environments are no problem for this software. The server's connectivity, speed, and encryption make it a good choice for accessing the database.

MySQL follows a client-server model: a multithreaded SQL server that supports a wide range of back ends, utility programs, and application programming interfaces.

We'll walk through the process of installing MySQL on the RPi in this part. The RFID system's database resides on this server, and we'll use it to store the registered users.

A few steps are needed before we can begin installing MySQL on the Raspberry Pi. First, update the package list and upgrade the installed packages:

sudo apt update

sudo apt upgrade

Installing the server software is the next step.

Here's how to get a MySQL-compatible server running on the RPi; on Raspberry Pi OS this is provided by the MariaDB package:

sudo apt install mariadb-server

Having installed MySQL on the Raspberry Pi, we'll need to protect it by creating a passcode for the "root" account.

If you don't specify a password for your MySQL server, you can access it without authentication.

Using this command, you may begin safeguarding MySQL.

sudo mysql_secure_installation

Follow the on-screen instructions to set a passcode for the root account and safeguard your MySQL database.

To ensure a more secure installation, select "Y" for all yes/no questions.

These prompts remove elements that make it easy for anyone to access the database, such as anonymous users, remote root login, and the test database.

We may need that password to access the server and set up the database and user for applications like PHPMyAdmin.

For now, you can use this command to access the RPi's MySQL server and begin making database modifications.

sudo mysql -u root -p

To access MySQL, you'll need to enter the root password you created a moment ago.

Note: the text you type will not appear on screen, as with typical Linux password prompts.

With the MySQL prompt now available, you can create, edit, and remove databases. You can also create, edit, and delete users from this interface and grant them access to various databases.

After typing "quit;" into MySQL's user interface, you can exit the command line by pressing the ESC key.

Pressing CTRL + D will also exit the MySQL prompt.

You may proceed to the next step now that you've successfully installed MySQL. In the next few sections, we'll discuss how to get the most out of our database.

Creating database and user

Before we can create a username and database on the RPi, we must reopen the MySQL command prompt.

The MySQL prompt can be accessed by typing the following command; you will be asked for the "root" password you set earlier.

sudo mysql -u root -p

To get things started, create the MySQL database. The statement is "CREATE DATABASE" followed by the name we'd like to give it; this database is called "rfidcardsdb" in this example:

CREATE DATABASE rfidcardsdb;

To get started, we'll need to create a MySQL user. The command below can be used to create this new user.

"rfidreader" and "password" will be the username and password for this example. Take care to change these when making your own.

CREATE USER 'rfidreader'@'localhost' IDENTIFIED BY 'password';

Now that the database exists, we can grant the user full access to it.

Thanks to this command, "rfidreader" will have access to all tables in our "rfidcardsdb" database:

GRANT ALL ON rfidcardsdb.* TO 'rfidreader'@'localhost' IDENTIFIED BY 'password';

Finally, to complete the database and user setup, we must flush the privileges table; without this, the new grants will not take effect. The command below accomplishes this:

FLUSH PRIVILEGES;

Our database is now configured. The next step is to set up the RFID circuit and begin authenticating users. Enter the "quit;" command to leave the MySQL prompt.
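The scripts later in this tutorial read from and write to a table named cardtbl, whose creation isn't shown. The following sketch creates a schema consistent with the columns those scripts use; the exact column types are assumptions:

import pymysql

# Connect with the user and database created above
conn = pymysql.connect(host='localhost', user='rfidreader',
                       passwd='password', db='rfidcardsdb')
cur = conn.cursor()

# Columns inferred from the SELECT/INSERT statements used later in this tutorial
cur.execute("""
    CREATE TABLE IF NOT EXISTS cardtbl (
        card_id INT AUTO_INCREMENT PRIMARY KEY,
        serial_no VARCHAR(32) NOT NULL,
        user_id VARCHAR(64) NOT NULL,
        valid TINYINT(1) DEFAULT 1
    )
""")
conn.commit()
conn.close()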

The RFID card circuit

An RFID reader reads a tag's data when the tag, attached to an object, is brought near the reader. The tag communicates with the reader via radio waves.

In principle, RFID is comparable to bar codes. While a clear line of sight between reader and tag is preferable, the tag does not have to be directly scanned, although a typical tag cannot be read from more than about three feet away. RFID is used to scan large numbers of objects quickly, making it possible to identify a specific product rapidly and effortlessly, even when it is sandwiched between several other items.

How RFID Card Readers and Writers Work

Cards and tags have two major parts: an IC that holds the unique identifier value and a coil of copper wire that serves as the antenna:

Another coil of copper wire can be found inside the RFID card reader. When current passes through this coil, it generates a magnetic field. Whenever the card is brought near the reader, the reader's magnetic flux induces a current in the card's coil, which is enough to power the card's built-in IC. The reader then reads the card's unique identifying number and transmits it to a controller or CPU, such as the Raspberry Pi, for further processing.

RFID card reader circuit

Connect the reader to the Raspberry Pi as follows (a common wiring for the RC522 module that matches the library defaults used below; check your module's pinout):

  • SDA → Pin 24 (GPIO8/CE0)
  • SCK → Pin 23 (GPIO11/SCLK)
  • MOSI → Pin 19 (GPIO10)
  • MISO → Pin 21 (GPIO9)
  • GND → Pin 6
  • RST → Pin 22 (GPIO25)
  • 3.3V → Pin 1

Run the command below and check that spi_bcm2835 is displayed in the terminal:

lsmod | grep spi

SPI must be enabled in raspi-config for spi_bcm2835 to appear (see above). Also make sure the RPi is running the most recent software.

Next, install Python:

sudo apt-get install python

The RFID RC522 is accessed through the SPI-Py library, which we will clone and install on the RPi, followed by the MFRC522-python library:

cd ~

git clone https://github.com/lthiery/SPI-Py.git

cd ~/SPI-Py

sudo python setup.py install

cd ~

git clone https://github.com/pimylifeup/MFRC522-python.git

To test if the system is functioning correctly, let's write a small program:

cd ~/

sudo nano test.py

Now copy the following code into the editor:

import RPi.GPIO as GPIO
import sys

# Make the cloned MFRC522-python library importable
sys.path.append('/home/pi/MFRC522-python')
from mfrc522 import SimpleMFRC522

reader = SimpleMFRC522()
print("Hold a tag near the reader")
try:
    # Blocks until a tag is presented, then returns its ID and stored text
    id, text = reader.read()
    print(id)
    print(text)
finally:
    # Release the GPIO pins claimed by the reader
    GPIO.cleanup()
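Save the file and run it with the command below; hold a tag near the reader and the program should print the tag's ID and any text stored on it:

sudo python test.py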

Register card

Here we will write a short Python program to register users whenever they swipe a new card on the RFID card reader. First, create a file named addcard.py and copy the following code into it.

import pymysql
import RPi.GPIO as GPIO
from mfrc522 import SimpleMFRC522
import drivers

# Prompt the user on the LCD to scan a card
display = drivers.Lcd()
display.lcd_display_string('Scan your', 1)
display.lcd_display_string('card', 2)

reader = SimpleMFRC522()
id, text = reader.read()

display.lcd_clear()
display.lcd_display_string('Type your name', 1)
display.lcd_display_string('in the terminal', 2)
user_id = input("user name?")

serial_no = '{}'.format(id)

# open an SQL session
sql_con = pymysql.connect(host='localhost', user='rfidreader',
                          passwd='password', db='rfidcardsdb')
sqlcursor = sql_con.cursor()

# first thing is to check if the card exists
sql_request = 'SELECT card_id,user_id,serial_no,valid FROM cardtbl WHERE serial_no = "' + serial_no + '"'
count = sqlcursor.execute(sql_request)

if count > 0:
    print("Error! RFID card {} already in database".format(serial_no))
    display.lcd_clear()
    display.lcd_display_string('The card is', 1)
    display.lcd_display_string('already registered', 2)
    T = sqlcursor.fetchone()
    print(T)
else:
    sql_insert = 'INSERT INTO cardtbl (serial_no,user_id,valid) ' + \
                 'values("{}","{}","1")'.format(serial_no, user_id)
    count = sqlcursor.execute(sql_insert)
    if count > 0:
        sql_con.commit()
        # read the row back to confirm the insert
        count = sqlcursor.execute(sql_request)
        if count > 0:
            print("RFID card {} inserted to database".format(serial_no))
            T = sqlcursor.fetchone()
            print(T)
            display.lcd_clear()
            display.lcd_display_string('Congratulations', 1)
            display.lcd_display_string('You are registered', 2)

GPIO.cleanup()

The program starts by asking the user to scan a card. It then connects to the database using the pymysql.connect function. If we enter our name successfully, the program inserts our details into the database and displays a congratulations message to show that we are registered.
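To try the registration flow now, run the script; the file name addcard.py comes from the step above:

sudo python addcard.py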

Creating the main system

To drive the I2C LCD, we will use an LCD driver library. Install it as follows:

  1. Install git

sudo apt install git

  2. Download and install the repo on your Raspberry Pi.

cd /home/pi/

git clone https://github.com/the-raspberry-pi-guy/lcd.git

cd lcd/

  3. Then begin the installation with the following:

sudo ./install.sh

After installation is complete, try running one of the demo program files included in the repo:

cd /home/pi/lcd/

Next, we will install the mfrc522 library, which the RFID card reader uses; this enables us to read the card number for authentication. We will use:

pip install mfrc522

Next, we will import the RPi.GPIO library, which enables us to use the Raspberry Pi pins that power the RFID card reader and the LCD screen:

import RPi.GPIO as GPIO

We will also import the drivers for our LCD screen. The LCD used here is the I2C 16x2 LCD:

import drivers

Then we will import datetime for logging the time the user swiped the card into the system:

import datetime

To read the card with the RFID reader, we will use the following code:

reader = SimpleMFRC522()
display = drivers.Lcd()
display.lcd_display_string('Hello Please', 1)
display.lcd_display_string('Scan Your ID', 2)
try:
    id, text = reader.read()
    print(id)
    display.lcd_clear()
finally:
    GPIO.cleanup()

The LCD is divided into two rows, 1 and 2. To display text in the first row, we use:

display.lcd_display_string("string", 1)

and pass 2 to display in the second row.
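For instance, the following writes one line to each row; the strings are arbitrary:

display.lcd_clear()                     # wipe any leftover characters
display.lcd_display_string('Hello', 1)  # first row
display.lcd_display_string('World', 2)  # second row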

After scanning the card, we connect to the database we created earlier and check whether the scanned card is registered. If the query finds the card, the user is logged in; if not, the user is prompted to register the card.

If the user is registered, the system saves the log, the username and the time the card was swiped, in a text file located in the /var/www/html root directory of the Apache server.
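Given the write format used in the code below, each swipe appends a line like this to data.txt (the name and timestamp are illustrative):

John Smith Logged at 2022-05-12 09:15:03.123456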

Note that you will need superuser privileges to create the data.txt file in the Apache root directory. For this, run the following command in the html folder:

sudo touch data.txt

Then we have to change the access privileges of this data.txt file so the program can write the log data. For this, we will use the following command:

sudo chmod 777 data.txt

The next step will be to display this data on a webpage to simulate an online attendance register. The code for the RFID card can be found below.

#! /usr/bin/env python
# Import necessary libraries for communication and display use
import RPi.GPIO as GPIO
from mfrc522 import SimpleMFRC522
import pymysql
import drivers
import datetime

# Load the LCD driver and set it to "display";
# anything from the driver library is used with the "display." prefix
reader = SimpleMFRC522()
display = drivers.Lcd()
display.lcd_display_string('Hello Please', 1)
display.lcd_display_string('Scan Your ID', 2)

try:
    # read the card using the RFID reader
    id, text = reader.read()
    print(id)
    display.lcd_clear()
    try:
        sql_con = pymysql.connect(host='localhost', user='rfidreader',
                                  passwd='password', db='rfidcardsdb')
        sqlcursor = sql_con.cursor()
        # first thing is to check if the card exists
        cardnumber = '{}'.format(id)
        sql_request = 'SELECT user_id FROM cardtbl WHERE serial_no = "' + cardnumber + '"'
        now = datetime.datetime.now()
        print("Current date and time: ")
        print(str(now))
        count = sqlcursor.execute(sql_request)
        if count > 0:
            print("already in database")
            T = sqlcursor.fetchone()
            print(T)
            for i in T:
                print(i)
                # append the visit to the log file served by Apache
                file = open("/var/www/html/data.txt", "a")
                file.write(i + " Logged at " + str(now) + "\n")
                file.close()
                display.lcd_display_string(i, 1)
                display.lcd_display_string('Logged In', 2)
        else:
            display.lcd_clear()
            display.lcd_display_string("Please register", 1)
            display.lcd_display_string(cardnumber, 2)
    except KeyboardInterrupt:
        # If there is a KeyboardInterrupt (when you press ctrl+c), exit and clean up
        print("Cleaning up!")
        display.lcd_clear()
finally:
    GPIO.cleanup()
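To run the full loop, save the program, for example as attendance.py (a placeholder name), and launch it with superuser rights so it can drive the GPIO pins and write to the Apache directory:

sudo python attendance.py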

Building the website

Now we are going to design a simple HTML website that displays the information of the attending students of a class. To do this, we must install a local web server on our Raspberry Pi.

Installing Apache Web Server

Web, database, and mail servers all run on different server software, and each of these programs can access and use files stored on a physical server.

A web server's main responsibility is to provide internet users access to websites; it serves as a bridge between the server and a client machine. Each time a user makes a request, it retrieves the data from the server and delivers it to the browser.

A web server's biggest challenge is serving many web users at the same time, each requesting a different page.

It converts content into HTML pages and offers them to users in the browser. Whenever you hear the term "web server," think of the device in charge of ensuring successful communication in a network of computers.

How Does Apache Work?

Among Apache's responsibilities is establishing a link between the server and a client's web browser (such as Chrome) to send and receive data (the client-server structure). The Apache software runs on almost any platform, from Microsoft Windows to Unix.

Visitors to your website, such as those who wish to view your homepage or "About Us" page, request files from your server via their browser, and Apache returns the required files in a response (text, images, etc.).

Using HTTP, the client and server exchange data with the Apache webserver, ensuring that the connection is safe and stable.

Because of its open-source foundation, Apache promotes a great deal of customization. As a result, web developers and end-users can customize the source code to fit the needs of their respective websites.

Additional server-side functionality can be enabled or disabled using Apache's numerous modules. Encryption, password authentication, and other capabilities are all available as Apache modules.

Step 1

To begin, use the following code to upgrade the Pi package list.

sudo apt-get update

sudo apt-get upgrade

Step 2

After that, set up the Apache2 package.

sudo apt install apache2 -y

That's all there is to it: your Raspberry Pi is configured as a server in just two easy steps.

Type the code below to see if the server is up and functioning.

sudo service apache2 status

You can now verify that Apache is operating by entering your Raspberry Pi's IP address into an internet browser and seeing a simple page like this.

Use the following command in the console of your Raspberry Pi to discover your IP.

hostname -I

The server is accessible only from your home network, not from the internet. To reach it from anywhere, you would need to configure port forwarding on your router, which this blog will not cover.

Setting up HTML page for editing.

The default web page on the Raspberry Pi, as depicted above, is nothing more than an HTML file. Next, we will create our first HTML document and develop a website.

Step 1

Let's start by locating the Html document on the Raspbian system. You can do this by typing the following code in the command line.

cd /var/www/html

Step 2

To see a complete listing of the items in this folder, run the following command.

ls -al

You'll see every file in the folder; note that the index.html file is owned by the root account.

To make changes to this file, you must first take ownership of it. The username "pi" is the default on the Raspberry Pi.

sudo chown pi: index.html

To view the changes you've made, all you have to do is reload your browser after saving the file.

Building your first HTML page

Here, we'll begin to teach you the fundamentals of HTML.

To begin a new page, edit the index.html file and remove everything inside it using the command below.

sudo nano index.html

Alternatively, we can open and edit the index.html file in a code editor. We will use the VS Code editor, which you can install on the Raspberry Pi via Preferences > Recommended Software.

HTML Tags

You must first learn about HTML tags, which are a fundamental part of HTML. A web page's content can be formatted in various ways by using tags.

There are usually two tags: an opening and a closing tag. The material between them behaves according to what the tags say.

The <p> tag, for example, is used to add paragraphs of text to the website.

<p>The engineering projects</p>

Web pages can be made more user-friendly by using buttons, which can be activated anytime a user clicks on them.

<button>Manual Refresh</button>

<button>Sort By First Name</button>

<button>Sort By last Name</button>

The basic format of an HTML document

A typical HTML document is organized as follows:

Let us create the page that we will use in this project.

<!DOCTYPE html>
<html>
<head>
</head>
<body>
    <div id="pageDiv">
        <p> The engineering projects</p>
        <button type="button" id="refreshNames">Manual Refresh</button><br/>
        <button type="button" id="firstSort">Sort By First Name</button><br/>
        <button type="button" id="lastSort">Sort By Last Name</button>
        <div id="namesFromFile">
        </div>
    </div>
</body>
</html>

<!DOCTYPE html>: This tag identifies the document as HTML. It does not require a closing tag.

<html>: This tag ensures that the material inside will meet all of the requirements for HTML. It is closed by an </html> tag at the end.

<head>: It contains data about the website, but when you view the page in a browser, you won't see any of it. For instance, a metadata tag inside the head can set your website's default character encoding. It is closed by a </head> tag.

<head>

<meta charset="utf-8">

</head>

Also, you can have a title tag inside the head tag. This tag sets the title of your web page and has a closing </title> tag.

<head>

<meta charset="utf-8">

<title> My website </title>

</head>

<body>: The primary content of the web page is contained within this tag. Everything visible on a web page usually sits inside the body tags, which are closed by a </body> tag. Many other tags can be found in the body, but we'll focus on the ones you need to get started with your first web page.

We will go ahead and style our webpage using CSS with the lines of code below; the rules go inside a <style> tag in the head:

<head>
<style>
<!--
body {
    width:100%;
    background:#ccc;
    color:#000;
    text-align:left;
    margin:0;
    padding:10px;
    font:16px/18px Arial;
}
button {
    width:160px;
    margin:0 0 10px;
}
#pageDiv {
    width:160px;
    margin:20px auto;
    padding:20px;
    background:#ddd;
    color:#000;
}
#namesFromFile {
    margin:20px 0 0;
    padding:10px;
    background:#fff;
    color:#000;
    border:1px solid #000;
    border-radius:10px;
}
-->
</style>
</head>

The style tag contains Cascading Style Sheets (CSS) syntax, which lets developers style web pages however they prefer.

Adding images to your web page

You can add images to your web page using the <img> tag. It is a void element and doesn't have a closing tag. It takes the following format:

<img src="URL of image location">

For example, let's add The Engineering Projects logo:

<p>The Engineering projects</p>

<img src="https://www.theengineeringprojects.com/wp-content/uploads/2022/04/TEP-Logo.png">

Reload the browser to see the changes.

How to display the attendance list on the webpage

This is the last step of this project: we will implement a program that reads our data.txt file from the Apache root directory and displays it on the webpage we designed. Since our webpage is already up and running, we will use the JavaScript programming language to implement this display of the log list. All the changes we are about to make go in the index.html file, so open it in the Visual Studio Code editor.

Javascript – What is it?

JavaScript is a dynamic computer programming language. It is lightweight and most commonly used as part of web pages, where it allows client-side scripts to interact with the user and build dynamic pages. It is an interpreted language with object-oriented capabilities.

Advantages of JavaScript

One of the major strengths of JavaScript is that it does not require expensive development tools. You can start with a simple text editor such as Notepad.

How to use javascript with this program

As mentioned earlier, JavaScript is a very easy language to use; it simply requires us to put script tags inside the HTML tags:

<script> script program </script>

<head>

<script>

Here goes our JavaScript program

</script>

</head>

The JavaScript code first opens the data.txt file and reads all the contents from that file, using an XMLHttpRequest to fetch it and display the contents on the webpage. The buttons on the webpage activate different functions in the code. For instance, Manual Refresh activates:

function refreshNamesFromFile(){
    var namesNode=document.getElementById("namesFromFile");
    while(namesNode.firstChild){
        namesNode.removeChild(namesNode.firstChild);
    }
    getNameFile();
}

This function clears the displayed list and re-reads the content of data.txt via getNameFile().

The sort buttons activate the sort function to order the logged users by either first or last name. The function activated by these buttons is:

function sortByName(e){
    var i=0, el, sortEl=[], namesNode=document.getElementById("namesFromFile"),
        sortMethod, evt, evtSrc, oP;
    evt=e||event;
    evtSrc=evt.target||evt.srcElement;
    sortMethod=(evtSrc.id==="firstSort")?"first":"last";
    // Collect [first, last] name pairs from the displayed <p> elements
    while(el=namesNode.getElementsByTagName("P").item(i++)){
        sortEl[i-1]=[el.innerHTML.split(" ")[0],el.innerHTML.split(" ")[1]];
    }
    sortEl.sort(function(a,b){
        var x=a[0].toLowerCase(), y=b[0].toLowerCase(),
            s=a[1].toLowerCase(), t=b[1].toLowerCase();
        if(sortMethod==="first"){
            return x<y?-1:x>y?1:s<t?-1:s>t?1:0;
        }else{
            return s<t?-1:s>t?1:x<y?-1:x>y?1:0;
        }
    });
    // Rebuild the list in sorted order
    while(namesNode.firstChild){
        namesNode.removeChild(namesNode.firstChild);
    }
    for(i=0;i<sortEl.length;i++){
        oP=document.createElement("P");
        namesNode.appendChild(oP).appendChild(document.createTextNode(sortEl[i][0]+" "+sortEl[i][1]));
        namesNode.appendChild(document.createTextNode("\r\n"));
        // style tests for specific names (format demo)
        if(sortEl[i][0]==="John"){
            oP.style.color="#f00";
        }
        if(sortEl[i][0]==="Sue"){
            oP.style.color="#0c0";
            oP.style.fontWeight="bold";
        }
    }
}

Output

With no logged-in users

With one logged-in user

With two users logged in

Sort by the first name

Sort by last name

Applications of RFID

Benefits of Smart Attendance System

Manual attendance systems are excessively time-consuming and cumbersome in the current environment. An effective smart attendance management system can strengthen company ethics and work culture. Employees complete the registration process only once, and their images are saved in the system's database. The automated attendance system then uses a computerized real-time image of a person's face to identify them. The database is updated frequently, and its findings remain accurate because each employee's presence is recorded.

Smart attendance systems have several advantages, including the following:

Students in elementary, secondary, and postsecondary institutions can use this system to keep track of their attendance. It can also track workers' schedules in the workplace. Instead of a traditional method, it uses RFID tags on ID cards to identify each person quickly and securely.

What are the applications of a smart attendance system?

Computerized smart attendance can be applied in many areas of our day-to-day activities, including the following:

1) Real-time tracking – Staff attendance can be tracked from mobile devices and desktops.

2) Decreased errors – A computerized attendance system can provide reliable information with minimal human intervention, reducing the likelihood of human error and freeing up staff time.

3) Management of enormous data – Enormous amounts of data can be managed and organized precisely in the database.

4) Improved authentication and security – A smart system protects the privacy and security of users' data.

5) Reports – Employee log-ins and log-outs can be tracked, attendance-based compensation calculated, absentee lists viewed so required actions can be taken, and employee personal information accessed.

Conclusion

This tutorial taught us to build a smart RFID card authentication project from scratch. We also learned how to set up an Apache server and design a circuit for the RFID reader and the LCD screen. To grow your Raspberry Pi programming skills, you can extend this code into a more complex system, for example, implementing face detection that automatically starts the authentication process once a student faces the camera, or logging a student out whenever they leave the system. In the following tutorial, we will learn how to build a smart security system using facial recognition.

Syed Zain Nasir

I am Syed Zain Nasir, the founder of The Engineering Projects (TEP, https://www.TheEngineeringProjects.com/). I have been a programmer since 2009; before that, I just searched for things and made small projects. Now I share my knowledge through this platform. I also work as a freelancer and have done many projects related to programming and electrical circuitry.
