Hello friends, I hope you all are having fun. Welcome to the 10th tutorial of our Raspberry Pi programming course. In the last chapter, we used PWM to regulate a DC motor's speed and direction with the L293D motor driver. In this chapter, we'll advance our PWM skills and use the same L293D driver to control a stepper motor.
Here's the video demonstration of this project:
Let's get started:
Here's the list of components we will use to control the speed and direction of a stepper motor with the Raspberry Pi 4:
A Raspberry Pi with its desktop environment is required for this project. You can connect to it over SSH, or display it on an LCD screen with a keyboard and mouse. (We discussed this in previous chapters.)
We will use an L293D motor driver to control the direction and speed of the stepper motor. In our last lecture, we controlled a DC motor with the same L293D driver, and I explained in detail how it works and why we use it. So please check that tutorial out if you are new to this motor driver.
The figure below shows the circuit diagram for interfacing a stepper motor with the Raspberry Pi 4:
The wire mappings from my Raspberry Pi 4 to the stepper motor driver are shown in the diagrams below:
Open the Thonny IDE. The first step is to import the RPi.GPIO and time modules. Make sure you type the GPIO module's name exactly on the first line, as it is case-sensitive.
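Since the listing itself is not reproduced in this excerpt, here is a minimal, hedged sketch of the idea: the L293D inputs are stepped through a four-state full-step sequence. The pin numbers and step delay are assumptions for illustration, not values from the original code, and the import guard lets the sequence logic run even off the Pi.

```python
# Minimal full-step control sketch for a stepper motor via the L293D.
# Pin numbers and step delay are ASSUMED values for illustration.
import time

try:
    import RPi.GPIO as GPIO   # note the exact, case-sensitive module name
    ON_PI = True
except ImportError:           # lets the sequence logic run off the Pi too
    ON_PI = False

COIL_PINS = [7, 11, 13, 15]   # assumed BOARD-numbered pins to the L293D inputs

# Full-step sequence: two adjacent coils are energized at a time.
FULL_STEP = [
    (1, 1, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 1),
    (1, 0, 0, 1),
]

def step_pattern(i, reverse=False):
    """Return the four coil states for step index i (reversed order runs backward)."""
    seq = FULL_STEP[::-1] if reverse else FULL_STEP
    return seq[i % len(seq)]

def rotate(steps, delay=0.01, reverse=False):
    """Walk the motor through the given number of steps."""
    for i in range(steps):
        states = step_pattern(i, reverse)
        if ON_PI:
            for pin, state in zip(COIL_PINS, states):
                GPIO.output(pin, state)
        time.sleep(delay)

if ON_PI:
    GPIO.setmode(GPIO.BOARD)
    for pin in COIL_PINS:
        GPIO.setup(pin, GPIO.OUT)
    rotate(200)               # one revolution for a 1.8-degree/step motor
    GPIO.cleanup()
```

Reversing the sequence order reverses the rotation direction, which is how direction control works for a stepper: there is no separate direction pin, only the order in which the coils are energized.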
Congratulations! You have made it to the end of this tutorial. We have seen how PWM is used with a motor driver IC to control a stepper motor. We have also seen different stepper motor control techniques, how to set up our circuit diagram, and how to write a Python program that controls the steps for our motor. In the next tutorial, we will have a look at how to control a Servo Motor with Raspberry Pi 4 using Python. Till then, take care and have fun !!!
Data loss or inaccessibility after a natural disaster is a significant concern. After Hurricane Sandy in 2012, data centers in Manhattan had to extract water from their generator rooms and restore switchgear to become operational. In the U.K., flooding in Leeds caused such immense water damage to a Vodafone facility that it had to close for several days.
According to the Insurance Information Institute, over 25% of businesses never reopen after an extreme weather event. Fortunately, preventing disaster-related downtime is possible through proper monitoring systems and a disaster recovery plan.
Knowing which natural disasters to expect based on your server room and data center location can make the difference. Here’s how to protect your facility from the most common natural disasters.
Three levels of fire protection exist.
Your country will have specific fire suppression system standards. Typically, data centers choose between two sprinkler systems: wet-pipe or pre-action. The former keeps water in its pipes, which flows automatically once the fire alarm is triggered. The one drawback is that wet-pipe sprinklers can leak, damaging the servers.
Pre-action sprinklers, on the other hand, require two-point activation before they disperse water, which makes them the preferred choice for many businesses. Depending on the model, some pre-action sprinklers operate on a quadrant level, so once activated they will only disperse water in that specific area. Like wet piping, this system still poses a risk of water damage, so you should consider installing a gaseous system instead.
Gaseous systems employ a clean agent or inert gas. The latter uses nitrogen and argon to reduce the oxygen in the server room, thereby putting out the fire. Note that you will need to install sound muffling equipment to prevent damage to hard drives.
Clean agent systems like FM-200 are a better option. They eliminate the fire through absorption. Also, they have low emissions and are non-conductive and non-corrosive, making them environmentally friendly.
Regular inspections ensure you stay compliant. Typically, the expert will confirm that the suppression systems and the fire alarm are in good condition. More importantly, they'll inspect whether the fire protection interface meets the sensitivity prerequisites.
Flooding can cause grave consequences, from short circuits to corrosion. Besides rainfall-related flooding, several water sources can threaten the data security of your server room. These include:
Before taking action, you’ll need to perform a risk assessment to determine areas that require water leakage detection.
Monitoring systems are the simplest way to detect water leakage before it causes damage. Various systems are available on the market; typically, businesses choose between zone leak and distance-read leak monitoring systems. Zone leak detection is the ideal choice for small server rooms, while distance-read systems suit large server rooms precisely because they can pinpoint a leak's exact location.
Which system should you choose? We recommend a centralized one that detects water leakage as well as humidity, motion, and ambient temperature. A vital aspect of this system is a distribution list for alerts. Email, SNMP, and SMS are excellent circulation, monitoring, and reporting channels.
Leak detection cables come in runs ranging from two to fifty meters. These cables can run under power cables, and if water starts leaking from the air conditioning systems or backup drains, they can detect it with pinpoint precision and tell you the exact floor tile.
In case of leakage, swift action is paramount to save equipment and other items in the server room. An experienced water damage remediation company will perform immediate water extraction and contents restoration.
Earthquakes inflict more damage on server rooms and data centers than any other natural disaster, with approximately 500,000 incidents occurring globally each year. The double blow of IT equipment damage and downtime can result in business closure. And although the world has yet to develop technology that can predict the exact time and location of an earthquake, there are seismic planning activities you can undertake to protect your servers.
Rigid bolting is the most common server protection approach: equipment racks are secured to the floor, which keeps them from shifting or toppling during an earthquake. You may be tempted to perform cabinet bolting instead, but this method mainly protects employees, and servers will only escape damage in a mild earthquake.
Base isolation technology is a more effective earthquake protection method. It works by decoupling the racks from the floor, channeling seismic motion away from your servers. If your data center is located in an earthquake-prone area, base isolation systems help your business achieve Tier 4 classification, i.e., zero disruption to the critical load.
Preventing power outages in your server room is perhaps the primary focus when preparing for hurricanes.
Here’s what you can do.
Natural disasters have proven to be a significant threat to data centers. For some businesses, the equipment damage is beyond repair; for others, the downtime results in a loss of customer trust. Having robust monitoring and reporting systems can mitigate disaster-related damage, thereby ensuring business continuity. Preparedness always pays off. Ultimately, leaving your servers unprotected with such high stakes would be a miscalculation.
The kind of data generated in every business environment varies, and these data sets only become useful once they are harnessed to give useful insights. Data engineers are the professionals often tasked with building and maintaining key systems that collect, manage and convert these data sets.
The huge amount of data generated in different industries has expanded the data engineering profession to cover a wide range of skills, including web-crawling, distributed computing, data cleansing, data storage, and data retrieval.
Over the years, data storage has become a subject of interest in the data engineering field, thanks to the rise of modern data storage options. Most data engineers and scientists are familiar with SQL databases such as MSSQL, PostgreSQL, and MySQL, but the shift in preference is slowly changing this narrative.
The need for speed, flexibility, and adaptability has also become apparent in data handling, and non-conventional data storage technologies are now coming to market. Several businesses are also embracing storage-as-a-service solutions, and the trend is only growing. Below, we discuss three data stores that are increasingly popular among data engineers.
Search engines, document stores, and columnar stores are the three technologies seeing wider adoption in the data handling field. Here's a quick overview of how they operate and why they are becoming storage options of choice.
When defining data storage in the data engineering field, three critical aspects are used to score the best storage solutions. These are data indexing, data sharding, and data aggregation.
Ideally, each data indexing technique improves specific queries but undermines others. So knowing the kind of queries used can often help you choose the right data storage option.
Data sharding is a process in which a single dataset is split and distributed across multiple databases so they can be stored in various data nodes. The goal is often to increase the total storage capacity of a given system. Sharding determines how the data infrastructure will grow as more data is stored in the system.
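The routing idea behind sharding can be sketched in a few lines of Python. The shard count, document ids, and hashing choice below are illustrative assumptions, not any particular database's implementation:

```python
# Toy illustration of hash-based sharding: route each record to one of
# N shards by hashing its id, so each shard holds a subset of the data.
import hashlib

NUM_SHARDS = 4

def shard_for(doc_id: str) -> int:
    """Deterministically map a document id to a shard index."""
    digest = hashlib.md5(doc_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Distribute a handful of (assumed) document ids across the shards.
shards = {i: [] for i in range(NUM_SHARDS)}
for doc_id in ("user-1", "user-2", "user-3", "user-4"):
    shards[shard_for(doc_id)].append(doc_id)
```

Because the routing is deterministic, a lookup for a given id only has to touch one shard, which is what lets total capacity grow by adding nodes.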
On the other hand, data aggregation is the process where data is collected and expressed in a more summarized form before it is ready for statistical analysis. The wrong data aggregation strategy can limit performance and the types of reports generated. Below, we've broken down the three data storage types based on their data indexing, sharding, and aggregation capabilities.
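A toy sketch of aggregation, with made-up field names and data, shows the basic idea of collapsing raw rows into per-group summaries before analysis:

```python
# Toy aggregation: collapse raw event rows into per-category summaries.
# The field names and values here are illustrative assumptions.
from collections import defaultdict

events = [
    {"category": "mobile", "bytes": 120},
    {"category": "mobile", "bytes": 80},
    {"category": "desktop", "bytes": 300},
]

# Accumulate a row count and byte total for each category.
totals = defaultdict(lambda: {"count": 0, "bytes": 0})
for e in events:
    agg = totals[e["category"]]
    agg["count"] += 1
    agg["bytes"] += e["bytes"]
```

A reporting layer would then read the small `totals` summary instead of re-scanning every raw event, which is exactly the trade-off an aggregation strategy fixes in advance.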
Elasticsearch is a search-engine data store that specializes in indexing text. Unlike traditional data stores that create indices based on whole field values, it allows data retrieval with only a fragment of a text field. This is done automatically through analyzers, modules that create multiple index keys by evaluating the field values and breaking them into smaller tokens.
Elasticsearch is built on top of Apache Lucene and provides a JSON-based REST API over Lucene's features. Scaling is done by creating several Lucene shards and distributing them across multiple servers (nodes) in a cluster; each document is routed to its shard through its id field. When retrieving data, the coordinating server sends each shard (Lucene instance) a copy of the query, then aggregates and ranks the results for output.
Elasticsearch is document-based storage whose content can be bucketed by range, exact, or geolocation values. Buckets can also be broken down further through nested aggregations. Metrics such as the mean and standard deviation can be calculated easily for every layer, making it possible to analyze several parameters in a single query. However, it suffers from a lack of intra-document field comparisons. A common workaround is to inject scripts as custom predicates, a feature that works for one-off analysis but is often unsustainable in production due to degraded performance.
MongoDB is a generic data store with lots of flexibility for indexing a wide range of data. However, unlike Elasticsearch, it only indexes the id field by default, so you'll need to manually create indices for commonly queried fields. MongoDB's text analyzer is also less powerful than Elasticsearch's.
MongoDB's cluster contains three types of servers: shard, config, and router. The cluster will accept more requests when you scale the routers, but most of the workload falls on the shard servers. Like Elasticsearch, MongoDB routes documents to their specific shards by default. When you execute a query, the router consults the config server, distributes the query to the relevant shards, and then gathers the results.
MongoDB's Aggregation Pipeline is fast and very powerful. It operates on matched data in stages, where each stage can filter, transform, and combine documents, or unwind previously aggregated groups. Because the operations are done step by step, documents are filtered early, which minimizes memory cost. Like Elasticsearch, MongoDB lacks intra-document field comparison in its standard query operators.
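MongoDB's stage-wise flow can be emulated in plain Python. The documents and stages below are illustrative assumptions; with pymongo you would pass analogous stage dictionaries ($match, $group) to collection.aggregate():

```python
# Pure-Python emulation of a MongoDB-style $match -> $group pipeline,
# showing the stage-wise flow described above. Documents are made up.
docs = [
    {"status": "ok", "ms": 12},
    {"status": "ok", "ms": 30},
    {"status": "error", "ms": 55},
]

# Stage 1: $match equivalent. Filtering early means later stages see
# fewer documents, which is what keeps memory cost down.
matched = [d for d in docs if d["status"] == "ok"]

# Stage 2: $group equivalent. Accumulate a count and an average over
# only the documents that survived the match stage.
count = len(matched)
avg_ms = sum(d["ms"] for d in matched) / count
```

The key property the text describes, filtering before grouping so the expensive stage touches a minimal working set, is visible in the two-step structure.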
Unlike MongoDB, Elasticsearch, and even traditional SQL databases, Amazon Redshift doesn't support data indexing. Instead, it reduces query time by keeping data consistently sorted on disk: each table has a sort key that determines the order in which rows are stored as data is loaded.
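Why sorted storage helps can be sketched with a toy range scan. This is a stand-in illustration of the principle, not Redshift's actual machinery: with rows kept in sort-key order, a range query can binary-search to the range bounds instead of scanning every row.

```python
# Toy illustration of the sort-key idea: rows are pre-sorted by their
# sort key (say, an event timestamp), so a range query can jump
# straight to its bounds with binary search instead of a full scan.
import bisect

sort_keys = [3, 7, 7, 12, 19, 25, 31]   # assumed, already sorted on "disk"

def range_scan(lo, hi):
    """Return the rows whose sort key falls in [lo, hi]."""
    left = bisect.bisect_left(sort_keys, lo)
    right = bisect.bisect_right(sort_keys, hi)
    return sort_keys[left:right]
```

Two binary searches cost O(log n) rather than the O(n) of scanning every row, which is the same reason Redshift can skip large runs of disk blocks whose sort-key ranges fall outside the query.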
Amazon Redshift's cluster has one leader node and multiple compute nodes. The leader node compiles and distributes queries, then assembles the intermediate results. Unlike MongoDB's router servers, this leader node is fixed and cannot be scaled horizontally. This creates some limitations but allows efficient caching of specific execution plans.
Since Amazon Redshift is a relational database that supports SQL, it's quite popular among traditional database engineers. It also avoids the slow aggregations common with MongoDB when analyzing, for example, mobile traffic. However, it doesn't have the schema flexibility that Elasticsearch and MongoDB have. It's also optimized for read operations and hence suffers performance issues during updates.
Of the three alternative storage options above, choosing the single best isn't as obvious as it may seem. Depending on your unique data storage needs, one option will suit you better than the others. So instead of hunting for the ultimate best, compare the different features and capabilities against your needs, and choose the one that works best for you.
Hello friends, I hope you all are doing well. Welcome to the 9th tutorial of our Raspberry Pi programming course. In the last chapter, we generated a PWM signal from our Raspberry Pi to control the brightness of an LED. We also studied different functions used in Python to perform PWM. In this chapter, we'll get a bit advanced with PWM and use it to control the speed and direction of a DC motor with the help of a motor driver IC.
To control the speed & direction of the DC Motor, we will:
We will use the following components to control the DC motor speed:
Pulse-width modulation (which we studied in the previous tutorial) will be used to regulate the speed of the DC motor. As a quick recap: a PWM signal generates a variable average voltage at the output depending on its duty cycle. The duty cycle refers to the fraction of time the signal is held high, and it determines how much power the signal delivers.
As a result, a PWM signal lets us control the speed of a DC motor in a non-resistive, non-dissipative manner.
The L293D pinout is shown in the following diagram.
Microcontrollers provide either 5V or 3.3V at their GPIO pins; in the case of the RPi4, it's 3.3V. The current rating of these GPIO pins is normally 10-50mA, which is quite low, and justifiably so, as their sole purpose is to send signals.
DC motors, on the other hand, normally operate at 5V-48V and draw anywhere from 100mA to 10A. So we can't connect a DC motor directly to a microcontroller's pin; we need a motor driver to supply the required voltage and current. Moreover, DC motors also produce back EMF, which can burn out the GPIO, so to protect the board we should place a motor driver in between.
We have designed the circuit in the above section and now it's time to get our hands on Python code. We will be using the Thonny IDE in Raspberry Pi 4.
Next, we will write a simple program to drive the motor forward for 5 seconds, then backward for another 5 seconds, at a 50% duty cycle. You can alter any of these values as you see fit.
I will explain the code line by line for better understanding:
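Since the listing itself is not reproduced in this excerpt, here is a minimal sketch of the program just described. The pin numbers are assumptions for illustration, and the import guard keeps the direction helper testable off the Pi:

```python
# Sketch of the DC-motor demo described above: forward for 5 s, then
# backward for 5 s, at a 50% duty cycle. Pin choices are ASSUMED.
import time

try:
    import RPi.GPIO as GPIO
    ON_PI = True
except ImportError:
    ON_PI = False

IN1, IN2, EN1 = 16, 18, 22     # assumed BOARD pins to L293D IN1 / IN2 / Enable 1

def direction_states(forward: bool):
    """L293D input states: driving one input high and the other low sets direction."""
    return (1, 0) if forward else (0, 1)

if ON_PI:
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup([IN1, IN2, EN1], GPIO.OUT)
    pwm = GPIO.PWM(EN1, 1000)  # 1 kHz PWM on the enable pin sets the speed
    pwm.start(50)              # 50% duty cycle: roughly half speed
    for forward in (True, False):
        a, b = direction_states(forward)
        GPIO.output(IN1, a)
        GPIO.output(IN2, b)
        time.sleep(5)          # run 5 seconds in each direction
    pwm.stop()
    GPIO.cleanup()
```

The design point worth noting: speed and direction are controlled independently, PWM on the enable pin for speed, the two input pins for direction.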
DC series motors are commonly employed in electric locomotives and rapid transit systems, as well as trolley vehicles. Because of their high starting torque, they're also found in cranes, hoists, and conveyors.
DC shunt motors are used in rolling mills thanks to their ability to accurately manage speed. They're used for driving lathes at a fixed speed, in reciprocating and centrifugal pump drives, and in blowers and other machinery.
They can be found in a wide variety of machinery, including elevators, conveyors, heavy planers, shears, and punches, as well as intermittently high torque loads and air compressors.
Congratulations! You have made it to the end of this tutorial. We have seen how PWM is used with a motor driver IC to control a DC motor's speed and direction. In the next tutorial, we will have a look at how to Control a Stepper Motor with Raspberry Pi 4 using Python. Till then, take care. Have fun !!!
Hello friends, I hope you all are doing great. It's the 8th tutorial in our Raspberry Pi programming course. In the previous lectures, we interfaced a 16x2 LCD and a 4x4 keypad with Raspberry Pi 4. In this chapter, we are not going to interface any external module with the Pi; instead, we'll create a PWM signal on the Raspberry Pi using Python. Let's get started:
We are going to use the below components in today's PWM project:
Before going forward, let's first understand what is PWM:
Let's understand the working of PWM with an LED example. We can change the brightness of an LED using PWM. If we provide +5V, the LED shines at full brightness, but if we provide +2.5V, its brightness fades. We achieve an effective +2.5V from a +5V signal by turning it ON and OFF continually: if, over each cycle, the signal spends 50% of the time OFF, the overall power delivered is halved. This process is called pulse-width modulation (PWM).
The percentage for which the signal remains in the ON state during one cycle is called the duty cycle.
A 50% duty cycle gives an ideal square wave. The signal is always on (full scale) at a 100% duty cycle, and always off (ground) at a 0% duty cycle.
The frequency of the signal is the inverse of its period, i.e., the number of times the periodic cycle completes per unit of time. It determines how quickly the PWM signal goes from high to low and back, in other words, how quickly it completes a cycle. A near-constant voltage output is achieved by turning the digital signal on and off at a sufficiently high frequency.
The 'PWM resolution' refers to the degree of control over the duty cycle. The more brightness levels we want to display, the greater our PWM resolution needs to be. Precise microcontroller timing is required because the PWM frequency is normally around 50Hz. The more powerful the microcontroller, the shorter the time intervals it can keep track of. The microcontroller must not only time the interrupt that generates the pulse but also run the code that controls the LED output, which must complete before the next interrupt is called, another limiting factor. You'll also likely want your microcontroller to accomplish tasks other than controlling LED brightness, so you'll need some spare execution time between interrupts.
The fundamental benefit of a higher PWM resolution for LED control is that it reduces the jump between 'off' and the LED's lowest achievable brightness. Suppose we have a period of 20,000 microseconds and a resolution of 10,000 microseconds; then the difference between 'off' and the lowest possible brightness is 50 percent of full brightness. At a resolution of 2,000 microseconds, the difference would be 10%. The PWM resolution thus determines the number of brightness levels we can support between 0% and 100%. Again, the better the resolution, the more precise the timing must be, and the more computing power is needed.
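The arithmetic in the last few paragraphs can be checked with a few small helper functions; the numbers used below are the article's own examples:

```python
# The arithmetic behind duty cycle, frequency, and PWM resolution.

def average_voltage(vcc, duty_pct):
    """Effective DC level of a PWM signal at the given duty cycle (%)."""
    return vcc * duty_pct / 100.0

def frequency_hz(period_s):
    """Frequency is the inverse of the period."""
    return 1.0 / period_s

def brightness_levels(period_us, resolution_us):
    """Number of distinct duty-cycle steps the resolution allows per period."""
    return period_us // resolution_us

# 50% duty on a 5 V supply gives the 2.5 V level from the LED example;
# a 20 ms period corresponds to 50 Hz; and a 20,000 us period with a
# 2,000 us resolution yields 10 brightness levels (steps of 10%).
```

This makes the trade-off concrete: halving the resolution interval doubles the number of brightness levels but also doubles the timing precision the microcontroller must sustain.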
The above diagram shows a PWM resolution of 10%.
Depending on the nature of your application, the resolution and overall duty cycle requirements may be different. There is no need for precision control for simple displays; nevertheless, the ability to manage the brightness level may be crucial (think of the issue of mixing colors using an RGB LED, for example). More control and accuracy necessitate more microcontroller resources; thus, the trade-off is straightforward.
Even though hardware PWM is the preferred approach for generating PWM from the Raspberry Pi, we will use software PWM in this article.
Pins 2 and 6 of the Pi board can be used to supply the circuit with Vcc and ground.
The Thonny Python IDE on the Raspberry Pi will be used here to write our Python script. If you haven't already done so, please go back to Chapter 4 and read about getting started with this IDE before reading on.
To keep things simple, we'll create a file called PWM.py and save it to our desktop.
We're using a 50 Hz software PWM signal to generate a customized wave with the RPi. At this frequency, the signal has a 20-millisecond period. The frequency does not fluctuate while the application runs.
To produce the rising half of the wave, the software PWM duty cycle is increased from 0 to 100. At each duty-cycle value, the PWM signal is applied to the LED as a train of five pulses lasting 0.1 seconds.
The duty cycle is then lowered from 100 back down in steps of one, with five PWM pulses lasting 0.1 seconds applied at each increment. The iteration continues indefinitely until a keyboard interrupt is received, at which point the program terminates.
Import the RPi.GPIO and time libraries; then a simple script begins. The GPIO.setwarnings() method is used to disable warnings.
Use the GPIO.setmode() function to set the pin numbering to the board scheme. The GPIO.setup() method configures board pin 40 as an output, and the GPIO.PWM() method instantiates board pin 40 as a software PWM channel.
A user-defined setup() function ensures that the software PWM starts with a duty cycle of zero. This function is only ever called once.
In a user-defined loop() function, the duty cycle of the PWM signal is swept from 0 to 100 and back to 0, in increments of one with a 0.1-second gap between each. The LED thus brightens and fades back down endlessly.
When a keyboard interrupt is received, the endprogram() method turns the PWM signal off and the Raspberry Pi's GPIO pins are cleaned up.
The setup() and loop() functions are called inside a try-except statement: setup() once, then loop() until the interrupt arrives.
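Putting the pieces described above together, here is a sketch of the fading-LED program. Board pin 40 and the 50 Hz frequency follow the text; the exact structure is a reconstruction, not the original listing:

```python
# Sketch of the fading-LED program: board pin 40, 50 Hz software PWM,
# duty cycle ramped 0 -> 100 -> 0 in steps of one with 0.1 s between
# steps, until a keyboard interrupt ends the program.
import time

try:
    import RPi.GPIO as GPIO
    ON_PI = True
except ImportError:            # lets the ramp logic be exercised off the Pi
    ON_PI = False

def ramp_values():
    """Duty-cycle sequence for one full brighten-then-fade cycle."""
    return list(range(0, 101)) + list(range(100, -1, -1))

if ON_PI:
    GPIO.setwarnings(False)
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(40, GPIO.OUT)
    pwm = GPIO.PWM(40, 50)     # software PWM at 50 Hz on board pin 40

    def setup():
        pwm.start(0)           # begin with a 0% duty cycle (LED off)

    def endprogram():
        pwm.stop()
        GPIO.cleanup()

    try:
        setup()
        while True:            # loop(): fade up and back down forever
            for duty in ramp_values():
                pwm.ChangeDutyCycle(duty)
                time.sleep(0.1)
    except KeyboardInterrupt:
        endprogram()
```

Keeping the ramp sequence in its own function makes the fade pattern easy to change (or test) without touching the GPIO plumbing.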
A PWM instance can be created with the help of this function. This is a two-step process:
The syntax for this method is:
The channel number must be given in accordance with the numbering scheme set in your program (BOARD or BCM).
This method is used on a software PWM instance, and the initial duty cycle is its only argument. Calling it from a Python program starts a software PWM signal with the specified duty cycle on the configured channel.
This method is used on a software PWM instance. It needs only one argument: the new frequency of the PWM signal, in hertz. Calling it on a PWM object in Python changes the frequency of the PWM output.
The syntax is as follows:
This method is used on a software PWM instance. Only one argument is required: the new duty cycle, which ranges from 0.0 to 100.0. Calling it on a PWM instance in Python changes the duty cycle of the PWM signal.
Here is the syntax of the method:
This method is used on a software PWM instance and takes no arguments. Calling it stops the instance's PWM signal.
The syntax for this method is:
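As a recap of the methods above, here they are in the order you would call them. The pin and values are illustrative; off the Pi, a small stand-in class merely records the calls so the sequence can still be inspected (the stand-in is our own illustration, not part of RPi.GPIO):

```python
# The RPi.GPIO software-PWM calls covered above, collected in one place.
try:
    import RPi.GPIO as GPIO
except ImportError:
    # Not on a Pi: a tiny stand-in that records each call it receives.
    class _StubPWM:
        def __init__(self, channel, freq): self.calls = [("create", channel, freq)]
        def start(self, dc): self.calls.append(("start", dc))
        def ChangeFrequency(self, hz): self.calls.append(("freq", hz))
        def ChangeDutyCycle(self, dc): self.calls.append(("duty", dc))
        def stop(self): self.calls.append(("stop",))

    class GPIO:                # minimal stand-in for the RPi.GPIO module
        BOARD = OUT = None
        setmode = setup = cleanup = staticmethod(lambda *a: None)
        PWM = _StubPWM

GPIO.setmode(GPIO.BOARD)
GPIO.setup(40, GPIO.OUT)

p = GPIO.PWM(40, 50)        # create a PWM instance: channel, frequency (Hz)
p.start(25)                 # start output at a 25% duty cycle
p.ChangeFrequency(100)      # change the output frequency to 100 Hz
p.ChangeDutyCycle(75)       # change to a 75% duty cycle (0.0 to 100.0)
p.stop()                    # stop the PWM output on this instance
GPIO.cleanup()
```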
Congratulations! You have made it to the end of this tutorial. We have seen how PWM is generated on the Raspberry Pi, how to set up the Raspberry Pi pins with the LEDs to be controlled, and how to write a Python program that controls the output of those pins. In the following tutorial, we will learn how to control a DC motor with Raspberry Pi 4 using Python.
Hi Friends! Hope you’re well today. I welcome you on board. In this post today, I’ll walk you through Cloud Computing Services.
The requirement to process and store data varies from business to business. Some organizations can handle data in on-site data centers: they have a team of experts who handle the IT infrastructure and install, maintain, and upgrade hardware as data needs grow. This approach is expensive, no doubt. Other companies, however, forgo this model. They prefer cloud computing, which is the availability of on-demand IT infrastructure over the internet. This model sets them free from handling and managing on-site data centers; instead, everything is managed and controlled by the cloud service providers, and end users only pay for the computing services they use. This IT solution is not only cost-effective but also reliable and secure, as your data is managed and stored in the cloud across globally managed data centers.
I suggest you read this entire post as I’ll cover cloud computing services and how they can improve the efficiency of any business.
Scroll on.
Cloud services are the availability of software, platform, and infrastructure by the cloud service providers over the internet. Cloud computing services come with the following features:
Cloud computing services are maintained and hosted by cloud service providers. The end users don’t have to purchase or install software or hardware on-site since the service providers host, maintain and purchase the necessary IT infrastructure on their premises.
Service providers offer these services with the pay-as-you-go model which means the end-users only pay for the services and computing resources they use. This is the most economical approach for businesses since they don’t have to install and maintain the entire hardware and software system instead they only pay for the computing resources they use.
Cloud computing offers virtually unlimited storage capacity. The virtual office you create with cloud computing lets you store and manage an almost limitless amount of data. This is very difficult to achieve with traditional data centers, since the more storage capacity and bandwidth you need, the more hardware and software you have to install.
Cloud computing services are mainly divided into three types:
No matter which service model a business opts for, the cloud service providers host and manage the entire IT infrastructure in their own facilities. The end users simply get IT resources as a service instead of owning and operating them directly.
All three services differ in terms of resource pooling and storage; even so, they can interact with each other to form a comprehensive cloud computing model.
In the following, we’ll discuss these services one by one.
In this service model, the service providers host the software on their own IT systems and offer it to organizations for a subscription fee. The software is not installed on an individual's system; instead, users access the software hosted in the cloud data centers over the internet with log-in usernames and passwords.
The services in this SaaS model include calendaring, email, and collaboration. Other business applications that enterprises can get on rent from the service providers include document management, ERP (enterprise resource planning), and CRM (customer relationship management).
Know that cloud software, or SaaS, is a full web application that requires huge capital investment on the provider's side, since cloud service providers offer a full-fledged online app to an enterprise's customers. Organizations get these services on a pay-as-you-go plan, and most often this type of cloud software can be accessed directly from a web browser without any installation or downloads. For this reason, it is commonly called on-demand software, web-based software, or hosted software.
Economical: It works on the pay-as-you-go model which means you only pay for the computing resources you use.
Reduced time: Most SaaS apps can be accessed directly from the web browser, with no downloads or installations required. This means less time spent getting up and running than you would otherwise spend installing and configuring apps on individual systems.
Mobility: You can access this cloud software from anywhere in the world.
Automatic Updates: You don't purchase the entire software; you only rent its services. This frees you from manual updates, as the service providers automatically update the software to avoid potential threats.
IaaS service is the availability of on-demand IT infrastructure to businesses over the internet. This infrastructure includes operating systems, networks, storage, virtual machines, and servers. The cloud service provider offers this service to the organizations on a pay-as-you-go model.
IaaS is an ideal solution for small and medium-sized businesses looking for an economical approach to growth. It gives them better control over their computing services and removes the need for intricate hardware installation, since companies access the model over the internet.
PaaS is the availability of on-demand IT platforms to businesses over the internet. With PaaS, cloud service providers create an online environment by incorporating multiple technologies including orchestration, containerization, security, routing, management, automation, and application programming interfaces.
Using this service, developers can develop, test, manage, and deploy software applications without needing the underlying infrastructure (network, storage, servers, and databases) required for development.
We already use many cloud computing services regularly. Common PaaS offerings include OpenShift, Apache Stratos, and Google App Engine. Similarly, Cisco WebEx, Dropbox, and Salesforce fall under the SaaS model, while Microsoft Azure, Amazon Web Services, and Cisco Metapod belong to the IaaS model. End users don't need on-premises hardware or software installations; they can access these services with a computer and a reliable internet connection.
Know that cloud computing service models are different from cloud computing types. The four cloud computing types are Public Cloud, Private Cloud, Hybrid Cloud, and Community Cloud, while the service models are IaaS, PaaS, and SaaS. In a public cloud, services are delivered to the general public; several organizations can use the same public cloud. In a private cloud, services are delivered to a single organization.
That’s all for today. Hope you’ve enjoyed reading this article. Feel free to reach out in the section below about any questions regarding cloud computing. I’m willing to help in the best way possible. Thank you for reading this post.
We have already mentioned in our previous tutorials that the RP2040-based Raspberry Pi Pico supports multiple programming languages, such as C/C++, CircuitPython, and MicroPython, across cross-platform development environments. The Raspberry Pi Pico module has a built-in UF2 bootloader that lets programs be loaded by drag and drop, and floating-point routines are baked into the chip to achieve ultra-fast performance.
There are multiple development environments for programming a Raspberry Pi Pico board, such as Visual Studio Code, Thonny Python IDE, and the Arduino IDE.
So, in this tutorial, we will learn how to install the Thonny Python IDE to program the Raspberry Pi Pico board using the MicroPython programming language.
Thonny Python IDE (Integrated Development Environment) is a development tool designed for beginners. The major advantage of Thonny is that it is easy to use, and it also provides a faithful representation of function calls. The Thonny IDE is compatible with Linux, macOS, and Windows.
Fig. 1 Download Thonny
Fig. 2 Choose Installation Mode
Fig. 3 Accept the agreement
Fig. 4 Installation location
Fig. 5 Desktop icon
MicroPython is a programming language that runs directly on embedded hardware such as ESP boards and the Raspberry Pi Pico. It is a lean and efficient implementation of the Python 3 programming language.
Programming the Raspberry Pi Pico is a very easy process. Users can program the board by holding the BOOTSEL button while connecting it via USB, and then simply dragging and dropping the UF2 file onto the Raspberry Pi Pico drive.
https://www.raspberrypi.com/documentation/microcontrollers/micropython.html
Fig. 6 Download MicroPython UF2 file
Fig. 7 Select Interpreter
Fig. 8 MicroPython for Raspberry Pi Pico
Fig. 9 Printing a message with MicroPython (Pi Pico)
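The message-printing step shown in Fig. 9 can be sketched as a minimal script typed into Thonny's shell or saved to the board. As an illustration, the sketch below deliberately uses only `print()` and `time.sleep()`, so the very same code runs under MicroPython on the Pico and under desktop Python; the message text and loop count are just examples:

```python
import time

# Illustrative message; on a Pico this prints to Thonny's shell
# over the USB serial connection.
MESSAGE = "Hello from Raspberry Pi Pico!"

# Print the greeting a few times with a short pause in between.
for i in range(3):
    print(MESSAGE, i)
    time.sleep(0.1)
```

In Thonny, selecting the "MicroPython (Raspberry Pi Pico)" interpreter (Fig. 7) routes this output from the board back into the shell pane.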
So, this concludes the tutorial and the installation procedure of the Thonny Python IDE (Windows) for Raspberry Pi Pico programming.
In our next tutorial, we will discuss how to write a program for the Raspberry Pi Pico to control GPIOs using the MicroPython programming language.
The majority of these companies need skilled developers and engineers to build safe and robust eCommerce sites to house their businesses. If you are interested in specializing in eCommerce development, you would be remiss to ignore the advantages and disadvantages of each payment gateway option.
Payment gateways allow online customers to purchase products seamlessly and securely. However, they are not all created equal. As an engineer or site developer, you should understand the technical and practical implications of each payment gateway type.
All businesses need a way to collect money from their customers. While a brick-and-mortar shop uses a cash register and payment terminals to manage its transactions, online retailers must use web-based options.
To protect customer information from being hacked during the transaction, eCommerce shops use payment gateways to encrypt user data and authorize the transaction.
Gateways can also perform functions that you may have encountered when paying with a credit card. For example, gateways can automatically calculate tax, shipping costs, and custom fees based on the customer’s location and accept payments in multiple currencies.
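The per-location math described above can be sketched as follows. This is a toy illustration, not a real gateway integration: the region codes, tax rates, and shipping fees are made-up assumptions for demonstration only.

```python
# Illustrative (hypothetical) per-region tax and shipping tables.
TAX_RATES = {"US-CA": 0.0725, "UK": 0.20, "DE": 0.19}
SHIPPING = {"US-CA": 5.00, "UK": 4.00, "DE": 6.50}

def checkout_total(subtotal, region):
    """Return (tax, shipping, total) for a given region code."""
    tax = round(subtotal * TAX_RATES[region], 2)
    shipping = SHIPPING[region]
    return tax, shipping, round(subtotal + tax + shipping, 2)

# A 100.00 order shipped to the UK under these example rates:
print(checkout_total(100.00, "UK"))  # (20.0, 4.0, 124.0)
```

A production gateway would additionally convert currencies at a live exchange rate and apply customs fees, but the lookup-and-sum structure is the same idea.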
Since the first payment gateway came online in 1996, there have been numerous innovations in technology and software. Today, customers and retailers can choose from dozens of options, including providers that operate in specific regions of the world. Many gateways do not even interact with banks, and instead, draw and deposit money from virtual wallets or accounts.
When integrating a payment system for a client, you must consider how each gateway type will impact the customer experience and the retailer’s bottom line.
Systems that are flawed, appear unprofessional, or constantly crash can put off customers and lower sales. Relatedly, while customers prefer to select from multiple payment options, having too many integrated into one shop can also make customers wary. It is essential to understand your client’s business and end clients to select the best class of payment gateway for their eCommerce site.
This payment gateway moves customers from the eCommerce site to the payment service provider’s web page to complete the transaction. If the provider is widely-known and trusted, such as PayPal, this can increase customer confidence. However, this will have the opposite effect if the provider is not a household name.
Further, while leveraging the name recognition and secure infrastructure of a large payment service provider can help boost sales, retailers are reliant on a third party to handle transactions. Customers will have to go through the payment service provider to handle issues with payment processing, refunds, and other transactions. If the third party does a poor job, it can affect your clients’ businesses.
Clients can also maintain a payment gateway directly on their website. When a customer pays, the transaction through an embedded payment gateway is connected directly to the retailer’s account.
Many invoicing and bookkeeping software offer this type of payment gateway. Onsite providers give retailers more control over the customer’s experience, but there is no outside support for handling issues.
Retailers who want complete oversight of their payment gateway may prefer an API-hosted (Application Programming Interface) system. The look and feel of the system can be designed to fit the company’s branding and culture.
However, if you build this type of gateway, you are also responsible for ensuring it meets all of the security requirements for handling customer financial data. You can ensure compliance under the Payment Card Industry Data Security Standard by following a PCI DSS compliance checklist.
Finally, small-scale vendors may opt for the security, ease, and reputation of a bank-integrated payment gateway. These systems are integrated within the banking system to facilitate virtual bank-to-bank transactions. Zelle, one of the largest such gateways in the U.S., is compatible with more than 30 national banks, including Bank of America and Chase.
While bank integrated payments are instant and often incur no fees, they are only accessible to customers with an account at a participating bank. This can greatly reduce accessibility, especially on the international market. Also, many of these gateways cannot handle high-volume transactions.
If you are working with an established payment service provider like PayPal, Apple Pay, or a bank integrated gateway, you can rest assured that the system is compliant and secure.
However, if your client is interested in an API-hosted gateway, you will need to be much more diligent. In addition to adhering to the PCI DSS, you will need to install a Secure Sockets Layer (SSL) certificate to ensure the website can transmit and receive encrypted data securely. The highest quality SSL certificate runs about $1,000 per year, but affordable and secure options cost around $60 per year.
Hi Guys! Glad to have you on board. Thank you for clicking this read. In this post today, I’ll walk you through the Types of Internet of Things (IoT).
IoT has been around for a while and has started making headlines over the past couple of years. Some people experience IoT in their everyday lives but are not aware of what it actually is. When physical objects (“things”) interact with the digital world, IoT is born. In simple words, it’s the network of connected devices, integrated with sensors, that exchange and share data over the internet. It is a rapidly growing technology, with more than 18 billion connected IoT devices today, and with the inception and boost of 5G technology, this figure is expected to touch 125 billion by 2030. Experts say we may witness a stage when everything around us is a thing in IoT. This is crazy.
I suggest you read this post all the way through as it aims to cover the types of Internet of Things.
Scroll on.
IoT is used to improve efficiency and services, making humans’ lives easy and productive. The connected IoT devices range from simple kitchen appliances and thermostats to heart monitors and cooling systems. And when used in sophisticated industrial tools, IoT can enhance the productivity of the manufacturing and production processes.
The Internet of Things is commonly divided into eight major types:
1: Internet of Things (IoT)
2: Internet of Everything (IoE)
3: Internet of Nano Things (IoNT)
4: Internet of Mobile Things (IoMT)
5: Internet of Mission-Critical Things (IoMT)
6: Industrial Internet of Things (IIoT)
7: Infrastructure Internet of Things
8: Commercial Internet of Things
We’ll discuss each one in the section below.
IoT is a network of things embedded with sensors that connect to the internet to acquire and share data with other connected devices. IoT is used to determine which data is important and which is not, to monitor patterns, and to spot issues even before they occur. The main purpose of IoT technology is to automate processes, especially those that are time-consuming, repetitive, or dangerous.
You might have heard the term “smart home” that has recently soared to popularity and is the main application of the IoT. A smart home is a home with a smart system that is mainly connected with the home appliances to automate certain tasks and can be remotely controlled. From commercial and domestic purposes to industrial and military use, you’ll find IoT everywhere.
Internet of Everything (IoE) is the extension of the Internet of Things. The Internet of Things is a connected network of things (physical objects), while the Internet of Everything covers things, processes, data, and people. In short, it is the internet for everyone and everything.
The IoE plays a key part in monitoring and analyzing real-time data obtained from a network of thousands of connected sensors. This data is then applied to accelerate people-based and automated processes. The IoE is beneficial for supporting modern business trends and can be incorporated into programs like m-learning and e-learning to allow students to learn new technologies.
Nanotechnology is on the rise. Big tech industries strive to make new devices compact, precise and small that can perform similar tasks to regular electronic devices.
The Internet of Nano Things is a network of nanodevices connected to the internet to share and acquire information. The presence of nano-scale components is what distinguishes this technology from the general Internet of Things.
We are connected through smartphones. We use these devices to ease our lives and improve the way we communicate with each other. The IoMT is mainly geared towards the mobility of things where change occurs in connectivity, context, privacy, and energy availability.
Connectivity identifies how a mobile device is connected and over which protocol: WiFi, 3G/4G, or a wired network. Context refers to the device’s current location. Energy availability describes how the device is powered and charged. Privacy issues can arise because mobile devices are so widely used that a device’s location is effectively linked to its human owner.
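The four attributes described for a mobile thing can be pictured as a simple record. The class and field names below are hypothetical, chosen only to illustrate how connectivity, context, energy, and privacy might be tracked per device:

```python
from dataclasses import dataclass

# Hypothetical record of the four IoMT attributes discussed above.
@dataclass
class MobileThing:
    connectivity: str        # e.g. "WiFi", "4G", or "wired"
    context: tuple           # current location as (latitude, longitude)
    battery_percent: int     # energy availability
    share_location: bool     # owner's privacy preference

# Example: a phone on WiFi in London with location sharing disabled.
phone = MobileThing("WiFi", (51.5, -0.12), 80, share_location=False)
print(phone.connectivity)  # WiFi
```

Every one of these fields can change as the device moves, which is exactly the mobility challenge the Internet of Mobile Things is concerned with.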
The Internet of Mission-Critical Things (IoMT) is used in critical missions like surveillance, critical structure monitoring, search and rescue, border patrol, battlefield operations, etc. In simple words, it’s the use of IoT in battlefield situations or military settings. The main aim of this technology is to accelerate situational awareness, monitor surrounding risks, and improve response time. The main IoMT applications include tanks, planes, connected ships, drones, and soldiers.
The IoT plays a vital role in industries. This technology is commonly used in industries to automate production and manufacturing processes.
Automation guarantees the accuracy of the processes and removes the possibility of error which is very difficult to attain by using traditional processes and human workforce. Common industries that deploy IIoT include automotive, agriculture, logistics, and healthcare.
Infrastructure IoT is focused on the development of modern infrastructure that uses IoT for maintenance, cost savings, and operational efficiency. It analyzes and monitors operations in rural and urban infrastructure, including railway tracks, bridges, and wind farms.
The Commercial Internet of Things mainly focuses on the use of IoT in commercial settings, including stores, supermarkets, buildings, entertainment venues, healthcare facilities, and hotels. The main purpose of this technology is to improve business conditions, boost customer experience, and monitor environmental conditions.
Developing your own IoT device is easier than you might think. There are open-source platforms out there that offer you the opportunity to create your own IoT devices. One common platform is Arduino.cc, which is open source, meaning the code is accessible to the general public: anyone can edit, modify, and distribute it as they see fit. Another is the Raspberry Pi, which comes with a built-in Ethernet port, making network communication a walk in the park.
That’s all for today. Hope you’ve enjoyed reading this article. If you’re unsure or have any questions regarding IoT, ask me in the section below. I’d love to assist you the best way I can. Thank you for reading this post.
As you'll learn in this article, there are various benefits of automating the workflows that can help you achieve higher levels of success!
Workflow automation uses technology to improve or replace manual work tasks. Automation can save time and money by reducing or eliminating the need for human intervention in repetitive or time-consuming tasks, thus easing the burden of workforce management.
Most businesses can benefit from workflow automation in some way. Some common areas where companies typically automate their workflows include:
There are many ways in which automation can help employees work more productively. Here are just a few:
One of the most important benefits of automation is saving employees time. When tasks are automated, employees no longer have to complete those tasks manually. This allows employees to focus on more critical tasks and projects, leading to higher productivity levels.
By automating their workflows, businesses can help their employees to become more productive. When employees can focus on essential and relevant tasks, they can produce better results. Additionally, employees who can complete tasks quickly and efficiently will feel more satisfied with their job, leading to increased productivity in the long run.
Businesses that automate their workflows can save money by reducing or eliminating the need for human intervention in repetitive or time-consuming tasks. Automation can also help businesses to become more efficient, which can lead to cost savings in the long run.
When businesses can reduce the time to complete tasks, they can do more in less time. This can decrease labor costs, as companies no longer need as many employees to complete the same number of tasks.
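The labor-cost argument is easy to put into numbers. The figures below are illustrative assumptions, not data from any real business: a 15-minute task done 40 times a day, at a 25.00/hour wage, with 80% of the task time automated away:

```python
# Back-of-the-envelope sketch of labor savings from automating
# one repetitive task. All inputs are hypothetical assumptions.
minutes_per_task = 15
tasks_per_day = 40
hourly_wage = 25.0
automation_rate = 0.8  # assume 80% of the task time is eliminated

hours_saved_per_day = minutes_per_task * tasks_per_day * automation_rate / 60
daily_savings = hours_saved_per_day * hourly_wage
print(f"{hours_saved_per_day:.1f} hours/day -> ${daily_savings:.2f}/day saved")
# 8.0 hours/day -> $200.00/day saved
```

Even under modest assumptions like these, the freed-up hours compound quickly over a month or a year.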
Additionally, companies that automate their workflows can often improve their efficiency, leading to a decrease in overhead costs.
Overall, many benefits of automation can help businesses save money and become more productive.
Workflow automation can help improve communication between employees. When tasks are automated, employees no longer have to rely on email or other forms of communication to share information. This can lead to a more efficient and productive workplace, as employees will communicate more easily and quickly.
Additionally, businesses that automate their workflows can improve communication between departments. When departments can work together more effectively, they can achieve better results. Automation can help to break down the barriers between departments and allow them to work together more efficiently.
Furthermore, organizations that automate their processes can enhance communication between employees. Employees are less likely to make errors when they can communicate more effectively.
Automation might assist staff in breaking down barriers and working together more effectively. This may lead to a drop in the number of workplace administrative mistakes.
Overall, automation can help businesses reduce or eliminate administrative errors made in the workplace. This can lead to a more efficient and productive workplace.
When businesses automate their workflows, they are often able to collect data that is actionable. This means that the data can be used to make decisions and take action. When businesses have actionable data, they can improve their operations and become more successful.
Businesses that automate their workflows can collect data from a variety of sources. This data can include information from sensors, social media, or financial data. When this data is processed and organized, businesses can make better decisions about running their company.
Actionable data can help businesses improve several areas, including sales, marketing, and operations. By having access to actionable data, companies can make changes that will enhance their bottom line. Additionally, businesses can use actionable data to create a more successful long-term strategy.
Overall, businesses that automate their workflows can collect data that is actionable. This data can be used to make decisions and take action, which can help companies to improve their operations and become more successful.
Task management is an integral part of any business. When tasks are managed efficiently, companies can achieve better results. Automation can help improve task and project management workflows in several ways.
When tasks are automated, employees can complete them more quickly and efficiently. This can decrease the time it takes to complete tasks, which can free up employees' time to work on other projects.
When it comes to getting started with workflow automation, there are a few things you need to consider. Here are a few tips:
The future of workflow automation is very promising. With the help of technology, businesses will automate more and more processes, making the workplace more efficient and productive. Additionally, automating repetitive and time-consuming tasks will free employees to focus on higher-level work requiring creative thinking and problem-solving skills.
As workflow automation becomes more commonplace, organizations will need to invest in tools and training to ensure that they effectively use these new technologies.
In conclusion, workflow automation offers many benefits for employee productivity. Automation can speed up tasks, eliminate errors, and improve communication between employees. By automating your workflows, you can help your team work more efficiently and productively, leading to a more successful business!