Hi pals! Welcome to the next deep learning tutorial, where we are at the exciting stage of TensorFlow. In the last tutorial, we installed the TensorFlow library with the help of Anaconda, walking through the whole procedure step by step; we covered all the prerequisites and the best way to download and install TensorFlow without any trouble. If you have completed those steps, then you might be interested in knowing the basics of TensorFlow. Whether you are a beginner or already have some knowledge of TensorFlow, this lecture will be equally beneficial for you, because it contains some important and interesting information that not everyone knows. So, have a look at the topics that will be discussed in just a bit.
What is a tensor?
What are some important types of tensors that we use all the time while using TensorFlow?
How can we start programming in TensorFlow?
What are the different operations in TensorFlow?
How can you print a multi-dimensional array in TensorFlow?
Moreover, you will see some important notes related to the practice we are going to perform, so no point will be missed; we will recall our previous concepts whenever we need them, so that beginners can also get a clear picture of what is going on in the course.
There are different meanings of tensors in different fields of study, but we are ignoring others and focusing on the field with which we are most concerned: the mathematical definition of the tensor. This term is used most of the time when dealing with the data structure. We believe you have a background in programming languages and know the basics, such as what a data structure is, so I will just discuss the basic definition of tensors.
"The term "tensor" is the generalization of the metrics of nth dimensions and is the mathematical representation of a scaler, vector, dyad, triad, and other dimensions."
Keep in mind that, in a tensor, all values share the same data type, and the shape is usually known; in some cases the shape can also be unknown. There are different ways to create a new tensor while programming in TensorFlow. If this is not clear at the moment, leave it, because you will see the practical implementation in the next section. By the same token, you will see more of the additional concepts in just a bit, but before this, let me remind you of something:
| Type of Data Structure | Rank | Description | No. of Components |
| --- | --- | --- | --- |
| Scalar | 0 | It has only a magnitude but no direction. | 1 |
| Vector | 1 | It has both magnitude and direction. | 3 |
| Dyad | 2 | It has both magnitude and direction. If x, y, and z are the components of the directions, then the overall direction is expressed by applying the sum operation over all these components. | 9 |
| Triad | 3 | It has a magnitude, and its direction is obtained from its 3 x 3 x 3 components. | 27 |
Before going deep into the practical work, I want to clarify some important points about learning TensorFlow. In this tutorial, we will divide the lecture into a practical part and a theoretical part. Some terms will appear often in this tutorial that I have not discussed before, so have a look at the following descriptions.
The shape of a tensor describes the number of elements along each of its dimensions; for a two-dimensional tensor, you can think of it as the number of rows and columns. When declaring a tensor, we can provide its shape, although TensorFlow can often infer it from the data.
The rank is the number of dimensions of the tensor. A scalar has rank 0, a vector rank 1, a matrix rank 2, and so on up to the nth dimension.
When talking about the tensor, the “type” means the data type of that particular tensor. Just as we consider the data type in other programming languages, when talking about the language of TensorFlow, the type provides the same information about the tensor.
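To make shape, rank, and type concrete, here is a minimal sketch (assuming TensorFlow 2.x, as installed in the previous tutorial):
import tensorflow as tf
t = tf.constant([[1, 2, 3], [4, 5, 6]])
print(t.shape)     # (2, 3): two rows and three columns
print(tf.rank(t))  # rank 2: the tensor has two dimensions
print(t.dtype)     # int32: the data type shared by every element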
Moreover, in order to learn more about the types of tensors, we can examine them with respect to the operations they support. Doing so gives us the following types of tensors (a short sketch follows the list):
tf.Variable
tf.constant
tf.placeholder
tf.SparseTensor
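As a quick, hedged sketch of these types (note that tf.placeholder belongs to the TensorFlow 1.x API, so in TensorFlow 2.x it is only available under tf.compat.v1):
import tensorflow as tf
v = tf.Variable([1.0, 2.0])        # a mutable tensor, typically used for trainable weights
c = tf.constant([[1, 2], [3, 4]])  # an immutable tensor
s = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[10, 20], dense_shape=[3, 4])
print(v, c, s, sep="\n")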
Before we get into the practical application of the information we discussed above, we'll go over the fundamentals of programming with TensorFlow. These are not the only concepts, but how you apply them in TensorFlow may be unfamiliar to you. So, have a look at the information given below:
When you want the interpreter to ignore some lines that you have written as notes, you use the hash sign (#) before those lines. In other words, the interpreter ignores every line that starts with this sign. Other programming languages use different signs for the same purpose; for example, // starts a comment line in C++ under compilers such as Visual Studio and Dev C++.
When we want to print the results or the input to show on the screen, we use the following command:
print(message)
Here, the message may be an input, an output, or any other value that we want to display, and we have to follow the proper syntax.
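As a tiny illustrative sketch combining both points (hypothetical, not from the original lecture):
# the interpreter ignores this line because it starts with a hash sign
print("Hello, TensorFlow!")  # prints the message on the screen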
To apply the operations discussed above, we first have to launch TensorFlow. We covered this in our previous session, but you have to follow these specific steps every time. Have a look at the details:
Fire up your Anaconda navigator from the search bar.
Go to the Home tab, where you can find the “Jupyter notebook."
Click on the launch button.
Select “Python” from the drop-down menu on the right side of the screen.
A new localhost tab will appear in your browser, where you have to write the following commands:
import tensorflow as tf
from tensorflow import keras
Make sure you follow the same pattern and match the letter casing of each word exactly.
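The screenshot from the original lecture is not reproduced here, but a minimal sketch of the kind of cell being described (assuming TensorFlow 2.x) would be:
a = tf.constant(5, dtype=tf.int16)
print(a)
# tf.Tensor(5, shape=(), dtype=int16)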
Here, you can see that we provided the information about the type of tensor we wanted, and the output reports that information back to us. There is no need to explain int16 in depth here; we all know there are different integer types, and we have used the one that occupies 16 bits in memory. You can change the data type for practice. You are simply feeding in the input value, and the interpreter shows output of the kind you were expecting. Here, the shape was empty because the tensor is a scalar with no dimensions, so we have understood that there is no need to provide the shape all the time. In the next programs, however, we will use the shape to show you its importance.
Before the practical implementation, you saw information about the dimensions of tensors. Now we are moving forward with those types, and here is the practical way to produce such tensors in TensorFlow.
Here, you can print a two-dimensional array with the help of some additional commands, and I'll go over everything in detail one by one. In the case discussed before, we provided information about a tensor without any shape value. Now, have a look at the next case:
Copy the following code and insert it into your TensorFlow compiler:
a=tf.constant([[3,5,7],[3,9,1]])
print(a)
Here comes the interesting point. In this case, we are just declaring an array with two dimensions, and TensorFlow will report its shape (the dimension information) and the data type of the tensor we named "a". As a result, you now know that this array has two rows and three columns.
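Assuming TensorFlow 2.x, the printed result should resemble the following (the exact dtype may vary with your platform):
# tf.Tensor(
# [[3 5 7]
#  [3 9 1]], shape=(2, 3), dtype=int32)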
By the same token, you can use any number of dimensions, and the information will be provided to you without any issues. Let us add the other dimensions.
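For instance, a hypothetical three-dimensional tensor (not from the original lecture) could be declared as:
b = tf.constant([[[3, 5], [7, 9]], [[1, 2], [4, 6]]])
print(b)  # the reported shape is (2, 2, 2): three dimensions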
Let’s have another example of initializing a matrix in TensorFlow, where you will see a shortcut for declaring a matrix of ones of any size you choose, in which every element equals one. You can create it by writing the following code:
a=tf.ones((3,3))
print(a)
The result should look like this:
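The original screenshot is not reproduced here; assuming TensorFlow 2.x, the output should resemble:
# tf.Tensor(
# [[1. 1. 1.]
#  [1. 1. 1.]
#  [1. 1. 1.]], shape=(3, 3), dtype=float32)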
Other matrices that can be generated in the same way are the zero matrix and the identity matrix. For these, we use "zeros" and "eye" respectively in place of "ones" in the code given above, as sketched below.
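A minimal sketch of both (note that tf.eye takes the number of rows directly rather than a shape tuple):
a = tf.zeros((3, 3))  # 3x3 matrix of zeros
b = tf.eye(3)         # 3x3 identity matrix
print(a)
print(b)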
The next step is to practice creating a matrix containing random numbers within a range specified by us. For this, we use the random operation, and the code is given in the next lines:
a=tf.random.uniform((1,5),minval=1, maxval=3)
print(a)
When we observe these lines, we can see that we are generating random numbers in a matrix whose numbers of rows and columns are given by us. I have specified a single-row, five-column matrix and provided the minimum and maximum values. TensorFlow will generate random values between these bounds and display the matrix in the order you specified.
So, from the programs above, we have learned some important points:
Just like in the formation of matrices in MATLAB, you have to put the values of rows in square brackets.
Each element is separated from the others with the help of a comma.
Each row is separated by applying the comma between the rows and enclosing the rows in the brackets.
All the rows are enclosed in the additional square bracket.
You have to place the commas properly; a missing or extra comma or an unmatched bracket will produce an error from the interpreter when you run the code.
It is convenient to give the name of the matrix you are declaring so that you can feed the name into the print operation.
If you do not name the matrix, you can also use the whole matrix in the print operation, but it is confusing and can cause errors.
For the special kinds of matrices that we learned about in our early study of matrices, we do not use square brackets; instead, we pass parentheses containing the numbers of rows and columns so that TensorFlow knows the size, and, from the name of the specific function, it automatically generates special matrices such as the ones matrix, the null matrix, etc.
These special kinds of matrices can also be produced in TensorFlow, but you have to follow the syntax and have a clear concept of how they work.
So, in this tutorial, we have started using the TensorFlow installation from the previous lecture. Some of the steps to launch TensorFlow are always the same, and you will practice them every day in this course. We have seen how to apply basic matrix-related functions in TensorFlow and looked at tensors, which are, more or less, similar to matrices. You will see more advanced information about these same concepts in the upcoming tutorials as we move from the basics to advanced material, so stay with us for more tutorials.
Hello Peeps! Welcome to the next lecture on deep learning, where we are discussing TensorFlow in detail. You have seen why we chose TensorFlow for this course, and we have read a lot about its working mechanism, programming languages, and the advantages of using TensorFlow instead of other libraries. Now it's time to learn the specifics of TensorFlow installation. But before this, you should check the list of the concepts that will be cleared today:
What are the minimum requirements for TensorFlow to be installed on your PC?
How can we choose the best method for the installation of TensorFlow?
How can we install TensorFlow with Jupyter?
What is the process for installing Keras?
How can you launch TensorFlow and Keras with the help of Jupyter Notebook?
What is the significance of using Jupyter, Keras, and TensorFlow together?
How difficult is the installation? The simple and to-the-point answer is that it is easy and usually does not require any prior practice. If you are new to the technical world or have never installed software like this, do not worry, because we will not skip any steps. Moreover, we have chosen a dependable way to install TensorFlow and will tell you all the necessary information about the installation process, so you start only once all the prerequisites are complete. So, first of all, let us share the prerequisites with you.
To install TensorFlow without difficulty, you must keep all the requirements in mind. We have categorised each type of requirement, and you just have to check whether your system is ready for the download or not.
System Requirements

| Operating System | Minimum Version | Architecture |
| --- | --- | --- |
| Ubuntu | 16.04 or higher | 64-bit |
| macOS | 10.12.6 (Sierra) or higher; GPU is not supported | N/A |
| Windows Native | Windows 7 (higher is recommended) | 64-bit |
| Windows WSL | Windows 10 | 64-bit |
By the same token, there are some hardware requirements; below these values, the hardware does not support TensorFlow.
Hardware Requirements

| Hardware | Minimum |
| --- | --- |
| GPU | NVIDIA® GPU card with CUDA® architectures 3.5, 5.0, 6.0, 7.0, 7.5, 8.0 |
Here, it is important to note that the requirements given in all the tables are the minimum requirements; you can go for higher versions of any of them for better results and quality work.
Software Requirements

| Software | Minimum Version |
| --- | --- |
| Python | 3.7-3.10 |
| pip | 19.0 (for Windows and Linux), 20.3 (for macOS) |

NVIDIA® software for GPU support

| Software | Minimum Version |
| --- | --- |
| NVIDIA® GPU driver | 450.80.02 |
| CUDA® Toolkit | 11.2 |
| cuDNN SDK | 8.1.0 |
Moreover, to reduce latency and improve performance, TensorRT is recommended. All the requirements given above are authentic, and you must not skip any of them if you want full efficiency. That said, for the course we are working on, there is no need for a GPU, as we are covering the basics; GPU support can be added in the future if required.
Now it's time to decide on the type of installation you want. This is the step that makes TensorFlow different from other, simpler installations. To install it on your PC, you need the help of another program; here, we will install the library with the help of the Anaconda software. For this, we go to the official website and download Anaconda.
As soon as you click on the download option, the Anaconda installer, around 600 MB in size, will start downloading. It will take a few moments.
Once the download is complete, click on the installer, and a window will pop up on your screen where you complete the installation steps.
The installation process is so simple that many of you will have done something like it before, but the purpose of describing every step is that some people do not know much about installation, or like to match their steps against a tutorial so they know they are on the right path.
In the next step, you have to provide the installation path for Anaconda. By default, as you would expect, the C drive is set, but I am going to change the directory; you can choose any path you like.
Next, it asks for the setup options. By default, the second option is ticked; I am installing Anaconda as it is and clicking to proceed with the installation.
Now, the installation process is starting and it will take some time.
While this step is taking a little time, you can read about the documentation of the TensorFlow.
Once the installation is complete, the next button will direct you towards this window:
In this way, Anaconda will be installed on your PC. You should know that it bundles multiple libraries, functions, and programs, and there is no need to check them all. For our practice, we only need the Jupyter Notebook; how to start and work with this notebook will be cleared up in just a bit.
It seems that you have successfully installed Anaconda and are now moving towards the installation of your required library. It is a simple and interesting process that requires no technical skills. You just have to follow the simple steps given next:
Go to the Windows Start menu.
Search for the “Anaconda command prompt."
Click on it, and a command prompt window will appear on your screen.
You just have to write the following command, and Anaconda will automatically install this amazing library for you.
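The exact command from the original screenshot was not preserved in this text; a typical choice that matches the Anaconda workflow described here (and the ~266 MB download mentioned below) is:
pip install tensorflow
Alternatively, conda install tensorflow pulls the package from Anaconda's own channels; either should work inside the Anaconda command prompt.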
As you can see, it mentions that the download of TensorFlow requires 266.3 MB. Once this command is entered, the installation of TensorFlow will be carried out, and you have to wait a few moments.
To confirm the installation, here are some important commands. You just have to type "python" in the command prompt, and Anaconda will open a Python session, confirming that Python is present on your PC.
In the next step, to ensure that you have installed Tensorflow successfully, you can write the following command:
import tensorflow as tf
If nothing happens in the command prompt, it means your library was successfully installed; otherwise, it throws an error.
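If you want a more explicit confirmation than "nothing happens," you can also print the installed version (a quick optional check):
import tensorflow as tf
print(tf.__version__)  # e.g. 2.9.1, depending on what was installed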
Hence, the TensorFlow library is successfully installed on our PCs. The same task can also be completed with the help of the Anaconda Navigator, and you will now see that in detail.
Follow the path Home > Search > Anaconda Navigator and press enter. The following screen will appear:
You have to choose the “Environment” button and click on the “Create” button to create a new environment. A small window will appear, and you have to name your environment. I am going to name it "TensorFlow.”
There is a possibility that it recommends the updated version if it is available. We recommend you have the latest version, but it is not necessary. As soon as you click on the "Create" button, in the lower right corner, you will see that your project is being loaded.
This step takes some time; in the meantime, you can check the other packages in the Anaconda software.
There is a need for the Keras API, as you have seen in our previous lectures. As a reminder, Keras is a high-level application programming interface designed by Google especially for deep learning and machine learning, and with the help of this API, TensorFlow gives you excellent performance and efficient work. So, here are the steps to install Keras on your PC.
Open the Anaconda Navigator.
Click on the "create" button.
Write the name of your new environment, I am giving it the name "Keras,” as you were expecting.
The next step is to load the environment, as you have seen in the case of TensorFlow as well.
These steps are identical to the creation of an environment for TensorFlow. It is not necessary to discuss why we are doing this and what the alternatives are. For now, you have to understand the straightforward procedure and follow it for practice.
Keep in mind that, so far, you have just installed the library and the API; to make both of them work, you have to run them, and we will learn this in just a bit.
The installation process does not end here. After the installation process, you have to check if everything is working properly or not. For this, go to the home page and then search for “Jupyter notebook." You must notice that there is a launch button at the bottom of this notebook’s section. If you found something else here, such as "Install,” then you have to first install the notebook by simply clicking on it, and then launch the notebook.
As soon as you launch the Jupyter notebook, you will be directed to your browser, where the notebook is launched on the local host of your computer. Here, it's time to write the commands to check the presence and working of the TensorFlow. You have to go to the upper right side of the screen and choose the Python3 (ipykernel) mode.
Now, as you can see, you are directed towards the screen where a code may be run. So you have to write the following command here:
import tensorflow as tf
from tensorflow import keras
These may look like the same lines we ran earlier in the command prompt, but the workflow is a little different, because in Jupyter you can easily run your code and keep building on it. It is more user-friendly and provides an ideal working environment for students and learners, with efficient results all the time.
Keras is imported along with TensorFlow, and it is so easy to deal with deep learning with the help of this library and API.
If you do not remember these steps, do not worry because you will practice them again and again in this course, and after that, you will become an expert in TensorFlow. Another thing to mention is that you can easily launch Keras and TensorFlow together; you do not have to do them one after the other. But sometimes, it shows an error because of the difference in the Python version or other related issues. So it is a good practice to install them one after the other because, for both, the procedure is identical and is not long.
So, it was an informative and interesting lecture today. We have utilized the information from the previous lectures and tried to install and understand TensorFlow in detail. Not only this, but we also discussed the installation process of Keras, which has a helpful API, and understood the importance of using them together. Once you have started TensorFlow, you are now ready to use and work with it within Jupyter. Obviously, there are also other ways to do the same work as we have done here, but all of them are a little bit complex, and I found these procedures to be the best. If you have better options, let us know in the comment section or contact us directly through the website.
In the next session, we will work on TensorFlow and learn the basics of this amazing library. We will start from the basics and understand the workings and other procedures of this library. Till then, stay with us.
Hey learners! Welcome to the new tutorial on deep learning, where we are going deep into the learning of the best platform for deep learning, which is TensorFlow. Let me give you a reminder that we have studied the need for libraries of deep learning. There are several that work well when we want to work with amazing deep-learning procedures. In today’s lecture, you are going to know the exact reasons why we chose TensorFlow for our tutorial. Yet, first of all, it is better to present the list of topics that you will learn today:
Why do we use TensorFlow with deep learning?
What are some helpful features of this library?
How can you understand the mechanism of TensorFlow?
Shed light on the architecture and components of TensorFlow.
In how many phases can you complete the work in TensorFlow, and what are the details of each phase?
How is the data represented in TensorFlow?
In this era of technology, where artificial intelligence has taken charge of many industries, there is a high demand for platforms that, with the help of their fantastic features, can make deep learning easier and more effective. We have seen many libraries for deep learning and tested them personally. As a result of our research, we found TensorFlow the best among them according to the requirements of this course.
There are many reasons behind this choice that we have already discussed in our previous sessions, but as a reminder, here is a small summary of the features of TensorFlow:
Flexibility
Easy to train
Ability to train neural networks in parallel
Modular nature
Best match with the Python programming language
As we have chosen Python for the training process, we are comfortable with TensorFlow. It also works with traditional machine learning and specializes in solving complex numerical computations easily without requiring you to manage minor details. TensorFlow proved itself one of the best ways to learn deep learning; therefore, Google open-sourced it for all types of users, especially students and learners.
The features that we have discussed so far were very general, and you should know more about the specific features that matter before getting started with TensorFlow.
Before you take an interest in any software or library, you must know which programming languages you can operate it in. Not all programmers are experts in all coding languages; therefore, they go with the libraries that match their preferred APIs. TensorFlow is operated via APIs in two main programming languages, with integrations for two more:
C++
Python
Java (Integration)
R (Integration)
The reason we love TensorFlow is that coding deep learning from scratch is much more complicated; it is a difficult job to learn and then work with those low-level mechanisms directly. TensorFlow provides APIs in comparatively simple and easy-to-understand programming languages. So, with the help of C++ or Python, you can do the following jobs in TensorFlow:
To configure the neuron
Work with the neuron
Prepare the neural network
As we have said multiple times, deep learning is a complex field with applications in several forms. Training a neural network with deep learning is not a piece of cake: it requires a lot of patience, and the computations, matrix multiplications, and complex mathematical functions consume a lot of time, even with experience and perfect preparation. At this point, you must clearly understand two types of processing units:
Central processing unit
Graphical processing unit
The central processing units are the normal computer units that we use in our daily lives. We've all heard of them. There are several types of CPUs, but we'll start with the most basic to highlight the differences between the other types of processing units. The GPUs, on the other hand, are better than the CPUs. Here is a comparison between these two:
| CPU | GPU |
| --- | --- |
| Consumes less memory | Consumes more memory |
| Works at a slower speed | Works at a higher speed |
| Has fewer but more powerful cores | Contains many relatively less powerful cores |
| Specialized for serial instruction processing | Specialized for parallel processing |
| Lower latency | Higher latency |
The good thing about TensorFlow is that it can work with both of them; the main purpose of mentioning the difference between CPU and GPU was to help you pick the right match for the type of neural network you are using. TensorFlow can use either to run deep learning algorithms, and its GPU support gives it a compilation advantage over some other libraries, such as Torch and Keras. A quick way to see which devices TensorFlow can use is sketched below.
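As a quick sketch (assuming TensorFlow 2.x), you can list the devices TensorFlow sees on your machine:
import tensorflow as tf
print(tf.config.list_physical_devices('CPU'))  # always at least one CPU
print(tf.config.list_physical_devices('GPU'))  # an empty list if no usable GPU is found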
It is interesting to note that Python has made the workings of TensorFlow easier and more efficient. This easy-to-learn programming language has made high-level abstraction easier. It makes the working relationship between the nodes and tensors more efficient.
The versatility of TensorFlow makes the work easy and effective. TensorFlow modules can be used in a variety of applications, including
Android apps
iOS
Cluster
Local machines
Hence, you can run the modules on different types of devices, and there is no need to design or develop the same application for different devices.
The history of deep learning is not unknown to us. We have seen the relationship between artificial intelligence and machine learning. Usually, the libraries are limited to specific fields, and for all of them, you have to install and learn different types of software. But TensorFlow makes your work easy, and in this way, you can run conventional neural networks and the fields of AI, ML, and deep learning on the same library if you want.
The architecture of the TensorFlow depends upon the working of the library. You can divide the whole architecture into the three main parts given next:
Data Processing
Model Building
Training of the data
The data processing involves structuring the data in a uniform manner to perform different operations on it. In this way, it becomes easy to group the data under one limiting value. The data is then fed into different levels of models to make the work clear and clean.
In the third part, you will see that the models created are now ready to be trained, and this training process is done in different phases depending on the complexity of the project.
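As a minimal, hedged sketch of these three parts using the Keras API bundled with TensorFlow (the data here is random toy data, purely for illustration):
import tensorflow as tf

# 1. Data processing: structure the data uniformly and batch it
features = tf.random.uniform((100, 4))
labels = tf.random.uniform((100, 1))
data = tf.data.Dataset.from_tensor_slices((features, labels)).batch(10)

# 2. Model building: stack the layers of the network
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# 3. Training: fit the model to the processed data
model.compile(optimizer='adam', loss='mse')
model.fit(data, epochs=2)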
While you are running your project on TensorFlow, you will be required to pass it through different phases. The details of each phase will be discussed in the coming lectures, but for now, you must have an overview of each phase to understand the information shared with you.
The development phase is done on the PC or other types of a computer when the models are trained in different ways. The neural networks vary in the number of layers, and in return, the development phase also depends upon the complexity of the model.
The run phase is also sometimes referred to as the inference phase. In this phase, you will test the training results or the models by running them on different machines. There are multiple options for a user to run the model for this purpose. One of them is the desktop, which may contain any operating system, whether it is Windows, macOS, or Linux. No matter which of the options you choose, it does not affect the running procedure.
Moreover, the ability of TensorFlow to be run on the CPU and GPU helps you test your model according to your resources. People usually prefer GPU because it produces better results in less time; however, if you don't have a GPU, you can do the same task with a CPU, which is obviously slower; however, people who are just getting started with deep learning training prefer CPU because it avoids complexities and is less expensive.
Finally, we are at the part where we can learn a lot about the TensorFlow components. In this part, you are going to learn some very basic but important definitions of the components that work magically in the TensorFlow library.
Have you ever considered the significance of this library's name? If not, then think again, because the process of performance is hidden in the name of this library. The tensor is defined as:
"A tensor in TensorFlow is the vector or the n-dimensional data matrix that is used to transfer data from one place to another during TensorFlow procedures."
The tensor may be formed as a result of the computation during these procedures. You must also know that these tensors contain identical datatypes, and the number of dimensions in these matrices is known as the shape.
During the process of training, the operations taking place in the network are called graphs. These operations are connected with each other, and individually, you can call them "ops nodes." The point to notice here is that the graphs do not show the value of the data fed into them; they just show the connections between the nodes. There are certain reasons why I found the graphs useful. Some of them are written next:
These can be run or tested on any type of device or operating system. You have the versatility to run them on the GPU, OS, or mobile devices according to your resources.
The graphs can be saved for future use if you do not want to use them at the current time or want to reuse them in the future for other projects or for the same project at any other time, just like a simple file or folder. This portable nature allows different people sharing the same project to use the same graph without having any issues.
TensorFlow works differently than many programming languages because the flow of data is organized as nodes in a graph. In traditional programming languages, code is executed as a sequence of statements, but in TensorFlow, the data is executed within sessions. When the graph is created, no code is executed; the graph is just saved in its place. The only way to execute the data is to create a session. You will see this in action in our coming lectures; a minimal sketch is also given below.
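A minimal sketch of this graph-and-session behaviour, using the TensorFlow 1.x-style API (exposed as tf.compat.v1 in TensorFlow 2, where eager execution is otherwise the default):
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # switch back to graph mode

a = tf.constant(2)
b = tf.constant(3)
c = a + b  # nothing is computed yet; this only adds nodes to the graph

with tf.compat.v1.Session() as sess:
    print(sess.run(c))  # 5: only the session actually executes the graph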
In a TensorFlow graph, each mathematical operation, such as addition, subtraction, or multiplication, is represented as a node, while the multidimensional arrays (tensors) flow along the edges that connect those nodes.
In the memory of TensorFlow, the graph of programming languages is known as a "computational graph."
With the help of CPUs and GPUs, large-scale neural networks are easy to create and use in TensorFlow.
By default, a graph is made when you start the Tensorflow object. When you move forward, you can create your own graphs that work according to your requirements. These external data sets are fed into the graph in the form of placeholders, variables, and constants. Once these graphs are made and you want to run your project, the CPUs and GPUs of TensorFlow make it easy to run and execute efficiently.
Hence, our discussion of TensorFlow ends here. We have read a lot about TensorFlow today, and we hope it is enough for you to understand its importance and to know why we chose it from among the several options. In the beginning, we read about what TensorFlow is and some of its helpful features. In addition, we saw the important APIs and programming languages of this library. Moreover, we discussed the working mechanism and the architecture of TensorFlow, along with its phases and components. We hope you found this article useful; stay with us for more tutorials.
Hey buddies! Welcome to the next tutorial on deep learning, in which you are about to acquire knowledge related to Python. This is going to be very interesting because the connection between these two is easy and useful. In the last lecture, we had an eye on the latest and trendiest deep learning algorithms, and therefore, I think you are ready to take the next step towards the implementation of the information that I shared with you. To help you make up your mind about the topics of today, I have made a list for you that will surely be useful for you to understand what we are going to do today.
How do you introduce the Python programming language to a deep learning developer?
How is Python useful for deep learning training in different ways?
Does Python provide useful frameworks for deep learning?
What are some Python libraries that are useful for deep learning?
Why do programmers prefer Python over other options when working with deep learning?
What are some other options besides Python to be used with deep learning?
Over the years, the hot topic in the world of programming languages has been Python because of many reasons that you will learn soon. It is critical to understand that when selecting a coding language, you must always be confident in its efficiency and functionality. Python is the most popular because of its fantastic performance, and therefore, I have chosen it for this course. From 2017 to the present, calculations and estimations of popularity show that Python is in the top ten in the interests of both common users and professionals due to its ease of installation and unrivaled efficiency.
Now, recall that deep learning is a popular topic in the industry of science and technology, and people are working hard to achieve their goals with the help of deep learning because of its jaw-dropping results. When talking about complexity, you will find that deep learning is a difficult yet useful field, and therefore, to minimize the complexity, experts recommend python as the programming language. All the points discussed below are an extraction of my personal experience, and I chose the best points that every developer must know. The following is a list of the points that will be discussed next:
I am discussing this point at the start because I think it is one of the most important factors that makes programming better and more effective. If the code is clean and easy to read, you will definitely be able to pay attention to the program in a better way. Programming is usually done in teams, and for testing and the other phases of successful development, it is important to understand code written by others. Python code is easy to read and understand, and by the same token, you will be able to share and practice more and more with this interesting coding language.
The syntax and rules of the Python programming language allow you to express your code without mentioning many details. People find that it is very close to human language, and therefore there is no need for a lot of practice or prior knowledge to start practising. These points prove the value of the Python language for writing useful code. As a result, you can conclude that for complex and time-consuming processes such as deep learning, Python is one of the ideal languages, because you do not have to spend a lot of time coding and can spend that energy understanding the concepts of deep learning and its applications.
Python, like other modern programming languages, supports a variety of programming paradigms. It fully supports:
Object-oriented Programming
Structured programming
Furthermore, its language features support a wide range of concepts in functional and aspect-oriented programming. Another point that is important to notice is that Python also includes a dynamic type system and automatic memory management.
Python's programming paradigms and language features enable you to create large and complex software applications. Therefore, it is a great language to use with deep learning.
If you are a programmer, you will have the idea that for different programming languages, you have to download and install other platforms for proper working. It becomes hectic to learn, buy, and use other platforms for the working of a single language. But when talking about Python, the flexibility can be proven by looking at the following points:
It supports multiple operating systems.
It is an interpreted programming language. That means you can run the Python code on several platforms without the headache of recompilation for the other platforms.
The testing of the Python code is easier than in some other programming languages.
All these points are enough to understand the best combination of deep learning with the Python programming language because deep learning requires the training and testing process, and there may be a need to test the same code or the network on different platforms.
Want to know why Python is better than other programming languages? One of the major reasons is the fantastic and gigantic library of the Python language. It is a programming tip that programmers should always check the programming language's library if they want to know its efficiency and ability to work instantly. One thing to notice is that you will get a large number of modules, and it allows you to choose the modules of your choice. So, you can ignore the unnecessary modules. This feature is also present in other popular programming languages. Moreover, you can also add more code according to your needs. For the experts, it is a blessing because they can use their creativity by using the already-saved modules.
Deep learning mainly consists of algorithms, and it requires a programming language that allows for simple and quick module creation. Python is therefore ideal for deep learning in this context.
In the past lectures, we have seen the frameworks of deep learning, and therefore, for the best compatibility, the programming language in which the deep learning is being processed must also have open-source frameworks; otherwise, this advantage of deep learning will not be useful. Most of the time, the tools and frameworks are not only open source but also easily accessible, which makes your work easier. I believe that having more coding options makes your work easier because coding is a time-consuming process that requires you to have as much ease as possible for better practice. Here is the list of some frameworks that are used with the Python programming language:
Django
Flask
Pyramid
Bottle
CherryPy
Another reason why experts recommend Python for deep learning is the Python frameworks related to graphical user interfaces. In the previous lectures, you have seen that deep learning has a major application in image and video processing, and therefore, it is a good match for deep learning with Python coding. The GUI frameworks of Python include:
PyQT
PyJs
PyGUI
Kivy
PyGTK
WxPython
Observe that the keyword "Py" with all these frameworks indicates the specification of the Python programming language with these frameworks. At this point, it is not important to understand all of them. But as an example, I want to tell you that Kivy is used for the front end of Android apps with the help of Python.
This category makes it important to notice the connection between the Python programming language and deep learning because, when working with deep learning, a greater variety of frameworks results in an easier working and better training process.
If you are following our previous tutorials, you will be aware of the importance of testing in deep learning. But allow me to tell you the connection between Python and the test-driven approach. In deep learning, all efficiency depends upon the testing process. More and more training and testing means better performance, which the network can recognize better. Python provides for the rapid creation of prototype applications, and similarly, it also provides the best test driven approach when connected to networks.
The first rule to learning programming languages is to have consistency in your nature. Yet, for the more difficult programming languages, where the absence of a single semicolon can be confusing for the compiler, consistency is difficult to attain. On the contrary, an easier and more readable programming language, such as Python, helps to pay more attention to the code, and thus the user is more drawn to its work. Deep learning can only be performed in such an environment. So, for peace of mind, always choose Python.
Have you ever been stuck in a problem while coding and could not find the help you needed? I've seen this many times, and it's a miserable situation because the code contains your hard work from hours or even days, but you still have to leave it. Yet, because of the popularity and saturation of this field, Python developers are not alone. Python is a comparatively easy language, and normally people do not face any major issues. Yet, for the help of the developers, there is a large community related to Python where you can find the solution of your problems, check the trends, have a chit chat with other developers, etc.
When working on deep learning projects, it's fun to be a part of a community with other people who are working on similar projects. It is the perfect way to learn from the seniors and grow in a productive environment. Moreover, while you are solving the problems of the juniors, you will cultivate creativity in your mind, and deep learning will become interesting for you.
At this point, where I am discussing a lot about Python, it must be clarified that it is not the only option for deep learning. The field is vast, and users always have more than one option. However, we prefer Python for a variety of reasons, and now I'd like to tell you about some other options that appear useful but are, in practice, less convenient than Python. The other programming languages are:
JavaScript
Swift
Ruby
R
C
C++
Julia
PHP
No doubt, people are showing amazing results when they combine one or more of these programming languages with deep learning, but usually, I prefer to work more with Python. It totally depends on the type of project you have or other parameters such as the algorithm, frameworks, hardware the user has, etc. to effectively choose the best programming language for deep learning. An expert always has an eye on all the parameters and then chooses the perfect way to solve the deep learning problems, no matter what the difficulty level of the language is.
Hence, we have discussed a lot about Python today. Before all this discussion, our focus was on deep learning and how it works, so you may have an idea of what is actually going on. In this article, we have seen the compatibility of the Python programming language with deep learning. We knew the parameters of deep learning and were therefore able to understand the reasons for choosing Python for our work. Throughout this article, we have seen different reasons why we chose TensorFlow and related libraries. It is important to notice that Python works best with the TensorFlow and Keras APIs, and therefore, from day one, we have focused on both of these. In the next lecture, you will see some more important information about deep learning as we move towards the practical implementation of this information. Once we have performed the experiments, all the points will be crystal clear in your mind. So until then, learn with us and grow your knowledge.
Hello pupils! Welcome to the following lecture on deep learning. As we move forward, we are learning about many of the latest and trendiest tools and techniques, and this course is becoming more interesting. In the previous lecture, you saw some important frameworks in deep learning, and this time, I am here to introduce you to some fantastic algorithms of deep learning that are not only important to understand before going into the practical implementation of the deep learning frameworks but are also interesting to understand the applications of deep learning and related fields. So, get ready to learn the magical algorithms that are making deep learning so effective and cool. Yet before going into details, let me discuss the questions for which we are trying to find answers.
How are deep learning algorithms introduced?
How do deep learning algorithms work?
What are some types of DL algorithms?
How are these algorithms different from each other?
Deep learning plays an important role in the recognition of objects, and therefore people use this feature in image, video, and voice recognition, where objects are not only detected but can also be changed, removed, edited, or altered using different techniques. The purpose of discussing these algorithms with you is to build up the knowledge and practice needed to choose the perfect algorithm for your task and to give you a sense of the efficiency and working of each algorithm. Moreover, we will discuss applications to give you ideas for new projects, made by merging two or more algorithms or by creating your own.
Throughout this course, you are learning that with the help of the implementation of deep learning, computers are trained in such a way that they can take human-like decisions and can have the ability to act like humans with the help of their own intelligence. Yet, it is time to learn about how they are doing this and what the core reason is behind the success of these intelligent computers.
First of all, keep in mind that deep learning is done in different layers, and these layers are run with the help of the algorithm. We introduce the deep learning algorithm as:
“Deep learning algorithms are the set of instructions that are designed dynamically to run on the several layers of neural networks (depending upon the complexity of the neural networks) and are responsible for running all the data on the pre-trained decision-making neural networks.”
One must know that, usually, in machine learning, tough training is needed to work with complex datasets that have hundreds of columns or features. This becomes difficult with classic algorithms, so developers are constantly designing more powerful algorithms through experimentation and research.
When people are using different types of neural networks with the help of deep learning, they have to learn several algorithms to understand the working of each layer of the algorithm. Basically, these algorithms depend upon the ANNs (artificial neural networks) that follow the principles of human brains to train the network.
While the training of the neural network is carried out, these algorithms take the unknown data as input and use it for the following purposes:
To group the objects
To extract the required features
To find out the usage patterns of data
The basic purpose of these algorithms is to build different types of models. There are several algorithms for neural networks, and it is considered that no algorithm is perfect for all types of tasks. All of them have their own pros and cons, and to have mastery over the deep learning algorithm, you have to study more and more and test several algorithms in different ways.
Do you remember that in the previous lectures we discussed the types of deep learning networks? Now you will observe that, while discussing the deep learning algorithms, you will utilize your concepts of neural networks. With the advancement of deep learning concepts, several algorithms are being introduced every year. So, have a look at the list of algorithms.
Convolutional Neural Networks (CNNs)
Long Short-Term Memory Networks (LSTMs)
Deep Belief Networks (DBNs)
Generative Adversarial Networks (GANs)
Autoencoders
Radial Basis Function Networks (RBFNs)
Multilayer Perceptrons (MLPs)
Restricted Boltzmann Machines (RBMs)
Recurrent Neural Networks (RNNs)
Self-Organizing Maps (SOMs)
Do not worry because we are not going to discuss all of them at a time but will discuss only the important ones to give you an overview of the networks.
Convolutional neural networks are also known as "ConvNets," and their main applications are in image processing and related fields. If we look back at their history, we find that the architecture was first introduced in 1998, when Yann LeCun initially referred to it as LeNet. At that time, it was introduced to recognize ZIP codes and other such characters.
We know that neural networks have many layers, and similar is the case with CNN. We observe different layers in this type of network, and these are described below:
| Sr # | Name of the Layer | Description of the Layer |
| --- | --- | --- |
| 1 | Convolution layer | The convolution layer contains many filters and is used to perform the convolution operations. |
| 2 | Rectified linear unit | The short form of this layer is ReLU, and it applies an element-wise operation; it is called "rectified" because its output is a rectified feature map. |
| 3 | Pooling layer | The rectified feature map from the ReLU is fed in as the input. Pooling is a downsampling operation that reduces the dimensions of the feature map. The resulting two-dimensional array is then flattened into a single flat, continuous vector. |
| 4 | Fully connected layer | The flattened vector from the pooling stage is fed into this last layer. At the end, classification of the image is performed to identify it. |
As a reminder, you must know that neural networks have many layers, and the output of one layer becomes the input for the next layer. In this way, we get refined and better results in every layer.
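As a hedged sketch, the four layers in the table map naturally onto a tiny Keras model (the layer sizes here are arbitrary illustrative choices, not from the original lecture):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), input_shape=(28, 28, 1)),  # convolution layer with 16 filters
    tf.keras.layers.ReLU(),                                       # rectified linear unit
    tf.keras.layers.MaxPooling2D((2, 2)),                         # pooling layer (downsampling)
    tf.keras.layers.Flatten(),                                    # flatten the feature map into one vector
    tf.keras.layers.Dense(10, activation='softmax'),              # fully connected layer for classification
])
model.summary()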
This is a type of RNN (recurrent neural network) with a good memory that experts use to capture long-term dependencies. By default, it has the ability to recall past information over a long period of time. Because of this ability, LSTMs are used in time-series prediction. An LSTM is not a single layer but a combination of four layers that communicate with each other in a unique way. Some very typical uses of LSTMs are given below:
Speech recognition
Development in pharmaceutical operations
Different compositions in music
If you are familiar with the fundamentals of programming, you will know that when we want to repeat a process, loops, or recurrent processes, are the solution. Similarly, a recurrent neural network is one that forms directed cycles. The unique thing about it is that the output from the LSTM is fed as an input to the current phase; the steps are connected in a sequence, and the network can memorize previous inputs thanks to its internal memory.
The main reason why this connection is magical is that you can utilize the feature of memory storage in LSTM and the ability of RNNs to work in a cyclic way. Some uses of RNN are given next:
Recognition of handwriting
Time series analysis
Translation by the machine
Natural language processing
captioning the images
The output of the RNN follows the recurrence given next:

output(t) = f(input(t), output(t-1))

That is, the output of time step t-1 is fed back in alongside the input at time step t, the result feeds step t+1, and this series goes on.
Moreover, RNN can be used with any length of the input, but the size of the model does not increase when the input size is increased.
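To make the recurrence concrete, here is a toy sketch in plain NumPy (the weight shapes are arbitrary; this is an illustration, not a trained network):
import numpy as np

rng = np.random.default_rng(0)
Wx = rng.standard_normal((4, 3))  # input-to-state weights
Wh = rng.standard_normal((4, 4))  # state-to-state (recurrent) weights
h = np.zeros(4)                   # initial state

for x_t in rng.standard_normal((5, 3)):  # five time steps of 3-feature input
    h = np.tanh(Wx @ x_t + Wh @ h)       # h_t depends on x_t and h_(t-1)
print(h)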
Next on the list is the GAN or the generative adversarial network. These are known as “adversarial networks" because they use two networks that compete with each other to generate real-time synthesized data. It is one of the major reasons why we found applications of the generative adversarial network in video, image, and voice generation.
GANs were first described in a paper published in 2014 by Ian Goodfellow and other researchers at the University of Montreal, including Yoshua Bengio. Yann LeCun, Facebook's AI research director, referred to GANs as "the most interesting idea in ML in the last 10 years." This made GANs a popular and interesting neural network. Another reason why I like this network is the fantastic feature of mimicking. You can create music, voice, video, or any related application that is difficult to recognize as being made by a machine. The impressive results are making this network more and more popular every day, but the evil of this network is equal. As with all technologies, people can use them for negative purposes, so check and balance are applied to such techniques. Moreover, GAN can generate realistic images and cartoons with high-quality results and render 3D objects.
At first, the network learns to distinguish between generated fake data and sampled real data: the generator produces fake data, and the discriminator learns to recognize whether it is real or fake. After that, the GAN sends the results back to the generator, which learns from them and continues training.
If it seems a simple and easy task, then think again because the recognition part is a tough job and you have to feed the perfect data in the perfect way so you may have accurate results every time.
For the problems in the function approximation, we use an artificial intelligence technique called the radial basis function network. It is somehow a little bit different from the previous ones. These are the types of forward-feed neural networks, and the speed and performance of these networks make them better than the other neural networks. These are highly efficient and have a better learning speed than others, but they require experience and hard work. Another reason to call them better is the presence of only one hidden layer and one radial basis function that is used as an activation function. Keep in mind that the activation function is highly efficient in its approximation of the results.
It takes the data from a training set and measures the similarities in the input. In this way, it classifies the data.
In the layer of RBF neurons, the input vector is then fed into the input layer.
After finding the weighted sum of the inputs, we obtain the output. Each category or class of data has one node.
The difference from the other networks is that the neurons contain a Gaussian transfer function, whose output falls off as the input moves away from the neuron's centre.
In the end, we get the output, which is a combination of both, the input of the radial basis function and the neuron parameters.
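A one-function sketch of the Gaussian transfer function described above (the centre and width are hypothetical values chosen for illustration):
import numpy as np

def rbf(x, center, sigma=1.0):
    # output decays with the squared distance from the neuron's centre
    return np.exp(-np.linalg.norm(x - center) ** 2 / (2 * sigma ** 2))

print(rbf(np.array([1.0, 2.0]), np.array([1.0, 2.5])))  # close to the centre, so near 1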
So, it seems that these networks are enough for today. There are other types of neural networks as well; as we said earlier, with the advancement of deep learning, more and more neural-network algorithms are being introduced, each with its own specifications, but at this level we just wanted to give you an overview. At the start of this article, we saw what deep learning algorithms are and how they differ from other types of algorithms. We then covered several network types, including CNNs, LSTMs, RNNs, GANs, and RBFNs.
Greetings, and welcome to today's tutorial. In the last tutorial, we learned how to construct a people-counting system using a Raspberry Pi, background subtraction, and blob tracking, and we displayed the total number of building entrances and exits. Feature computation and HOG theory were also discussed. The tests proved that a device based on the Raspberry Pi can effectively function as a people-counting station. One of the many benefits of the Pi 4 is its internet connectivity, which is especially useful for home automation projects given its low price and ease of use. Today we're going to see if we can use a web page's buttons to control our AC home appliances. With this Internet of Things (IoT) based home automation, you can command your home gadgets from the comfort of your couch. The user can access this web server from any gadget capable of loading HTML apps, such as a smartphone, tablet, or computer.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|-----|------------|-------------|-------------|
| 1 | Breadboard | Amazon | Buy Now |
| 2 | Diodes | Amazon | Buy Now |
| 3 | Jumper Wires | Amazon | Buy Now |
| 4 | LEDs | Amazon | Buy Now |
| 5 | Resistor | Amazon | Buy Now |
| 6 | Transistor | Amazon | Buy Now |
| 7 | Raspberry Pi 4 | Amazon | Buy Now |
The needs of this project can be broken down into two broad classes: hardware and software.
Raspberry Pi 4
8GB or 16GB memory card running Raspbian Jessie
5V relays
2N2222 transistors
Diodes
Jumper wires
Connection blocks
LEDs for testing
AC lamp for testing
Breadboard and jumper cables
220 or 100 ohm resistors
We'll be using the WebIOPi framework, Notepad++ on your PC, FileZilla to transfer files (particularly the web app files) from your computer to the Raspberry Pi, and the Raspbian operating system.
As a good habit, I always update the Raspberry Pi before using it for the first time. In this phase of the project, we will handle the web-to-Raspberry-Pi connection by updating the Pi and setting up the WebIOPi framework. The Python Flask framework offers a potentially more straightforward alternative, but getting your hands dirty and seeing how things work underneath is what makes DIY appealing; that is where the fun begins. Use the commands below to update your Raspberry Pi and restart it.
sudo apt-get update
sudo apt-get upgrade
sudo reboot
After this is finished, we can set up the WebIOPi framework. Verify that you are in your home directory using:
cd ~
To download the files from SourceForge, use wget:
wget http://sourceforge.net/projects/webiopi/files/WebIOPi-0.7.1.tar.gz
Then, once the download is complete, extract the archive and enter the directory:
tar xvzf WebIOPi-0.7.1.tar.gz
cd WebIOPi-0.7.1/
Unfortunately, I could not locate a version of WebIOPi that is compatible with the Pi 4; thus, we have to download a patch before proceeding with the setup. Run the instructions below from within the WebIOPi directory to apply the patch.
wget https://raw.githubusercontent.com/doublebind/raspi/master/webiopi-pi2bplus.patch
patch -p1 -i webiopi-pi2bplus.patch
Once we have those things, we can begin the WebIOPi installation by running:
sudo ./setup.sh
Just click "Yes" when prompted to install more components during setup. Upon completion, restart your Pi.
sudo reboot
Before diving into the schematics and programs, we should power on the Raspberry Pi and ensure our WebIOPi installation is functioning as expected. Execute the command below:
sudo webiopi -d -c /etc/webiopi/config
After running the above command on the Pi, open a web browser on a computer connected to the same network and navigate to http://raspberrypi.mshome.net:8000 (or http://<the Pi's IP address>:8000). When logging in, you'll be asked for a username and password.
Username is webiopi
Password is raspberry
You may permanently disable this login if you no longer need it. Still, it's important to keep unauthorized users from taking control of your home's appliances and Internet of Things (IoT) components. After you've logged in, go to the GPIO header link.
Make GPIO 17 an output; we'll use it to power an LED in this test.
Following this, attach the LED to the Pi 4 as depicted in the schematics.
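If you'd like to sanity-check the wiring from the terminal before using the browser, a short RPi.GPIO sketch like this one (an optional aside of mine, not part of the WebIOPi flow) will blink the same LED:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)       # BCM numbering, matching WebIOPi's GPIO 17
GPIO.setup(17, GPIO.OUT)
try:
    for _ in range(5):
        GPIO.output(17, GPIO.HIGH)   # LED on
        time.sleep(0.5)
        GPIO.output(17, GPIO.LOW)    # LED off
        time.sleep(0.5)
finally:
    GPIO.cleanup()           # release the pin when done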
When you're ready to turn the LED on or off, return to the web page and select the pin 11 button. This confirms we can use WebIOPi to manage the Raspberry Pi's GPIO pins. If the test is successful, we can return to the console and exit the program by pressing CTRL + C. Please let me know in the comments if this arrangement gives you any problems. Once this pilot test is finished, we can begin the actual project.
In this section, we will alter the WebIOPi service's standard setup and inject our own code to be executed on demand. The first tool to install on our computer is FileZilla or another FTP/SCP copy program; you'll agree that using the terminal to write code on the Pi is a stressful experience, so having an SCP program will be helpful. Before we begin writing the HTML, CSS, and JavaScript for this Internet of Things home automation web app and transferring the files to the RPi, let's make a project directory in which all our web scripts will be stored.
First, make sure you're in your home directory; next, create the project folder; finally, enter the newly created folder and make an html folder inside it.
cd ~
mkdir webapp
cd webapp
mkdir html
Make subfolders inside the html folder for scripts, styles, and images.
mkdir html/styles
mkdir html/img
mkdir html/scripts
Now that we have our folders prepared, we can start coding on the computer and transfer our work to the Pi using FileZilla.
Writing the JavaScript will be our first order of business: an easy-to-use script for interacting with the WebIOPi server. Our four-button web app will only use two relays in the demonstration, and we only intend to control four GPIO pins in this project.
webiopi().ready(function() {
webiopi().setFunction(17,"out");
webiopi().setFunction(18,"out");
webiopi().setFunction(22,"out");
webiopi().setFunction(23,"out");
var content, button;
content = $("#content");
button = webiopi().createGPIOButton(17,"Relay 1");
content.append(button);
button = webiopi().createGPIOButton(18,"Relay 2");
content.append(button);
button = webiopi().createGPIOButton(22,"Relay 3");
content.append(button);
button = webiopi().createGPIOButton(23,"Relay 4");
content.append(button);
});
The preceding code is executed once WebIOPi is ready. To help you understand the JavaScript, the key lines are explained below:
webiopi().ready(function()
This tells the system to define this function and call it once WebIOPi is ready.
webiopi().setFunction(23,"out")
This instructs the WebIOPi program to use GPIO 23 as an output. Four pins are set up this way, but you may add more if necessary.
var content, button
With this line, we declare two new variables, content and button.
content = $("#content")
We will keep using the content variable in our HTML and CSS: whenever #content is mentioned, the WebIOPi framework attaches everything the script generates to that element.
button = webiopi().createGPIOButton(17,"Relay 1")
WebIOPi can make several distinct types of push buttons. This line instructs WebIOPi to generate a GPIO button that operates GPIO pin 17 and is labelled "Relay 1". The other buttons follow the same pattern.
content.append(button)
This appends the newly created button to the content element of the HTML. Each new button is added the same way, which keeps the markup and CSS consistent.
If you made your JS file the same way I did, save it as smarthome.js and move it with FileZilla to webapp/html/scripts. Now we can move on to developing the CSS.
With the aid of CSS, our Internet of Things (IoT) RPi 4 home automation website now looks fantastic. So that the website will look like the one in the picture below, I built a custom style sheet called smarthome.css.
I don't want to paste the entire CSS script here, so I'll use a subset for the explanation. If you want to learn CSS, all you have to do is read the code. You can skip this and use our CSS code if you want to.
The first section of the script, displayed below, represents the web application's main stylesheet.
body {
background-color:#ffffff;
background-image: url('/img/smart.png');
background-repeat:no-repeat;
background-position:center;
background-size:cover;
font: bold 18px/25px Arial, sans-serif;
color:LightGray;
}
The above code, which I hope needs little explanation, begins by setting the background colour to white (#ffffff), adds a background image from the img folder we created earlier, makes sure the picture doesn't repeat by setting background-repeat to no-repeat, and tells the CSS to center the background and scale it to cover the page. Finally, we set the text's font, size, and colour.
After finishing the main content, we styled the buttons with CSS.
button {
display: block;
position: relative;
margin: 10px;
padding: 0 10px;
text-align: center;
text-decoration: none;
width: 130px;
height: 40px;
font: bold 18px/25px Arial, sans-serif; color: black;
text-shadow: 1px 1px 1px rgba(255,255,255, .22);
-webkit-border-radius: 30px;
-moz-border-radius: 30px;
border-radius: 30px;
}
Everything else in the script is similarly optimized for readability and brevity. You can play with the values and see what happens; this kind of learning is known as "learning by doing," I believe. CSS's strength lies in its simplicity, and its rules read like plain English. The rest of the block adds a few supplementary touches, such as the button's text shadow and box shadow. To top it all off, pressing a button triggers a subtle transition effect, making it look polished and lifelike. To guarantee consistent behaviour in all browsers, these are defined separately for WebKit, Firefox, Opera, and so on.
The following snippet styles the range (slider) inputs that can be used to send data to the WebIOPi service.
input[type="range"] {
display: block;
width: 160px;
height: 45px;
}
The last element we want to implement is feedback on when a button is pressed, so the button hues provide a quick status indicator. To accomplish this, the following rules are added for each button.
#gpio17.LOW {
background-color: Gray;
color: Black;
}
#gpio17.HIGH {
background-color: Red;
color: LightGray;
}
The code snippets above alter the button's colour depending on its state: the background is gray when the pin is inactive (LOW) and red when it is active (HIGH). Now that we have our CSS under control, let's save it as smarthome.css, upload it to the styles folder on the Raspberry Pi using FileZilla (or another SCP client of your choosing), and finish the remaining HTML code.
The HTML code ties the style sheets and JavaScript together.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="mobile-web-app-capable" content="yes">
<meta name="viewport" content="height=device-height, width=device-width, user-scalable=no" />
<title>Smart Home</title>
<script type="text/javascript" src="/webiopi.js"></script>
<script type="text/javascript" src="/scripts/smarthome.js"></script>
<link rel="stylesheet" type="text/css" href="/styles/smarthome.css">
<link rel="shortcut icon" sizes="196x196" href="/img/smart.png" />
</head>
<body>
<br>
<br>
<div id="content" align="center"></div>
<br>
<br>
<br>
<p align="center">Push button; receive bacon</p>
<br>
<br>
</body>
</html>
The head tag contains several crucial elements.
<meta name="mobile-web-app-capable" content="yes">
The code line above makes it possible to add the web app to the mobile device's home screen when using Chrome or Safari. You can access this function using the Chrome menu. This makes it so the app may be quickly launched on any mobile device or desktop computer.
The following line of code provides a measure of responsiveness for the web app. Because of this, it can take up the entire display of any gadget on which it is run.
<meta name="viewport" content="height=device-height, width=device-width, user-scalable=no" />
The web page's title is defined in the following line of code.
<title>Smart Home</title>
The following four lines of code connect the HTML file to the various resources it needs to function as intended.
<script type="text/javascript" src="/webiopi.js"></script>
<script type="text/javascript" src="/scripts/smarthome.js"></script>
<link rel="stylesheet" type="text/css" href="/styles/smarthome.css">
<link rel="shortcut icon" sizes="196x196" href="/img/smart.png" />
The first line links to the WebIOPi framework's JavaScript, which is served from the server's root directory. This script must be included whenever WebIOPi is used.
The second line tells the HTML document where to find our smarthome.js script, and the third tells it where to get our style sheet. The last line sets an icon for the mobile desktop, which is useful if we use the website as an app or a favicon.
To ensure that our HTML code displays whatever the JavaScript file creates, we include a content div in the body of the code (the break tags just add spacing). Its id="content" should remind you of where our buttons were defined earlier in the JavaScript.
<div id="content" align="center"></div>
As usual, save the HTML file as index.html and transfer it to the Pi's html folder via FileZilla.
Before we can begin sketching out circuit diagrams and running tests on our web app, we need to make a few adjustments to the webiopi service's configuration file, instructing it to look for configuration information in our HTML folder rather than the default location.
Edit the configuration by executing the following commands as root:
sudo nano /etc/webiopi/config
Find the section of the configuration file labelled [HTTP] and look for the commented line beginning with #doc-root, which sets the default directory for HTML and resources.
Uncomment it, and if your folders are organized like mine, set doc-root to the location of your project's html folder:
doc-root = /home/pi/webapp/html
Lastly, save your work and exit. If you already have another server on the Pi using port 8000, you can easily change the port here; if not, just save and call it a day.
It's worth noting that the WebIOPi service password can be changed using the command:
sudo webiopi-passwd
A new login name and password will be required. Getting rid of this entirely is possible, but safety comes first.
Finally, issue the following command to start the WebIOPi service.
sudo /etc/init.d/webiopi start
If you want to see how the server is doing, check its status with:
sudo /etc/init.d/webiopi status
And to stop it:
sudo /etc/init.d/webiopi stop
Set up WebIOPi to start automatically at boot with:
sudo update-rc.d webiopi defaults
To do the opposite and prevent it from starting automatically, use:
sudo update-rc.d webiopi remove
Now that we have everything set up, we can begin developing the schematics for our Web-controlled home appliance.
I could not procure relay modules, which in my experience make electronics projects simpler for do-it-yourselfers, so I'm going to draw diagrams for regular, standalone single relays powered at 5V.
Join the components as shown in the Fritzing diagram. It's important to remember that your relay's COM, NO (normally open), and NC (normally closed) contacts could be on opposite sides. Please verify this with a multimeter.
Relays can be found anywhere that electricity is being switched, from a simple traffic light controller to a high-voltage switchyard. In the broadest sense, a relay is equivalent to any other switch: it can connect or disconnect a circuit and is frequently employed to activate or deactivate an electrical load. That is a broad statement, though; there are many kinds of relays, and each behaves slightly differently depending on the task at hand. As the electromechanical relay is one of the most widely used, we will devote most of this discussion to it. In spite of variations in design, all relays work according to the same fundamental concept, so let's dive into the nuts and bolts of relays and talk about how they function.
A relay is an electromechanical switch that can either establish or break an electrical connection. It is like a mechanical switch, except that it is activated and deactivated by an electronic signal rather than by physically flipping a lever. It comprises a flexible, movable mechanical part controlled electrically through an electromagnet. Once again, this operating principle applies only to electromechanical relays.
A common, widely used relay consists of an electromagnet employed as a switch, though there are many kinds of relays, each with its own purpose. When a signal is received on one side of the device, it controls the switching activity on the other side, much like the dictionary definition of "relay". The device's primary function is to make or break contact with the aid of a signal, turning a circuit ON or OFF automatically and without human intervention. Its primary use is to allow a low-power signal to control a high-power circuit; typically, a direct current (DC) signal controls the high-voltage circuit.
The following diagram depicts the internal structure and design of a Relay.
A coil of copper wire is wound around a core, which is then placed inside a housing. When the coil is energized, it attracts the movable armature, which is supported by a spring or stand, has a metal contact attached to one end, and is positioned over the core. In most cases, the movable armature serves as the common connection point between the internal contacts and the external wiring. At rest, the common terminal touches the normally closed (NC) pin, while the normally open (NO) pin stays disconnected. When the coil is energized, the armature moves to the normally open contact, and current flows uninterrupted through the armature. When the power is turned off, it returns to its starting position.
The picture below shows a schematic of the Relay's circuit in its most basic form.
In the images below, you can see the main components of an electromechanical relay—an electromagnet, a flexible armature, contacts, a yoke, and a spring/frame/stand. They have been thoughtfully placed into a relay.
The workings of a Relay's mechanical components have been outlined below.
Electromagnet
An electromagnet is crucial to the operation of a relay. Its core metal lacks magnetic properties on its own but can be transformed into a magnet by an electric current. It is common knowledge that a conductor acquires magnetic characteristics from the current flowing through it. Thus, a metal wound with a conductive coil and powered by an adequate source can operate as a magnet and attract magnetic objects within its range.
Movable Armature
The movable armature is a simple piece of metal that can pivot or is supported on a stand. It facilitates making and breaking the connection with the contacts attached to it.
Contacts
Internal conductors are the wires that run through a device and hook up to its terminals.
Yoke
It's a tiny metal piece attached to a core that attracts and retains the armature whenever the coil is activated.
Spring (optional)
While some relays can function without a spring, those that do have one attach it to the armature at one end to prevent any snagging or binding. One can use a metal "stand" in place of a spring.
Let's examine the differences between a relay's normally closed and normally open states.
If no current flows through the coil, there is no magnetic field, and the device does not act as a magnet; as a result, it cannot attract the flexible armature. So, the armature starts in the normally closed (NC) position.
When a high enough voltage is applied to the coil, the core develops a strong magnetic field around itself, allowing it to function as a magnet. This field attracts the movable armature whenever it comes within range, changing the armature's position: it moves to the normally open (NO) pin, so any external circuit attached to those contacts switches over.
It is important to connect the relay pins correctly so that the external circuit can do its job. When the coil is powered, the armature is drawn toward it, producing the switching action; when the power is cut, the coil loses its magnetism, and the armature returns to its original position. The animation provided below shows the relay in action.
There is nothing complicated about a transistor, yet there is a lot going on inside it. Okay, so first, we'll tackle the easy stuff. An electronic transistor is a small component that can switch between two functions. It's a switch that can also act as an amplifier.
An amplifier is a device that takes in a little electric current and outputs a significantly larger electric current (called an output current). It can be thought of as a current booster. One of the earliest applications for transistors, this is particularly helpful in devices like hearing aids. A hearing aid contains a microscopic microphone that converts ambient sound into electrical signals. These are then amplified by a transistor and used to power a miniature loudspeaker, which reproduces the ambient noise at a much higher volume.
It is possible to use a transistor as a switch. A transistor is a device that allows for the passage of one electrical current to induce a much larger current to flow through the next part of the device. What this means is that a relatively small current can activate a much larger one. All computer chips function in this general way. As an illustration, a memory chip may have as many as a billion individually controllable transistors. Due to the fact that each transistor can exist in either of two states, it is capable of storing either a zero or a one. A chip's ability to hold billions of zeroes and ones, as well as almost as many regular numbers and letters, is made possible by its billions of transistors.
Diodes can range in size from what's shown in the image up top. They feature a cylindrical body, usually black, with a stripe at one end and leads that protrude so that we can plug it into a circuit. The striped end is the cathode, and the opposite terminal is the anode.
A diode is an electrical component that restricts current flow in one direction.
To illustrate, picture a swing valve fitted in a water line. The water pressure inside the pipe will force open the swing gate, allowing the water to flow uninterrupted. In contrast, the gate will be forced shut, and water flow will stop if the river alters its course. As a result, there is only one direction for water to flow.
A diode works much the same way: we use it to control the current flow through a circuit, letting it pass in one direction and blocking it in the other.
We have now animated this process using electron flow, in which electrons move from negative to positive. However, traditional flow, positive to negative, is the norm in electronics engineering. It's usually best to start with the conventional current because it's more familiar to most people, but feel free to use either one; we'll assume you're aware of the difference.
It's important to remember that a light-emitting diode will only light up if it is connected to the circuit in the correct orientation, as in the simple LED circuit shown above. Only one direction of current can travel through it; accordingly, whether it conducts or insulates is determined by the orientation in which it is mounted.
For it to conduct electricity, you must join the striped end (the cathode) to the negative and the other end (the anode) to the positive. This condition, in which current can flow, is called forward bias. If we invert the diode, it becomes an insulator and stops the passage of electricity; the term for this is reverse bias.
You probably know that electricity is the transfer of electrons between atoms that are not bound. Because of its high number of unpaired electrons, copper is widely used for electrical wiring. Since rubber is an insulator—its electrons are kept very securely, so they cannot flow between atoms—it is used to wrap around the copper wires for our protection.
In a simplified model of a conducting metal atom, the nucleus is at the center, and the electrons are housed in a series of shells around it. It takes a specific amount of energy for an electron to occupy each shell, and each shell has a maximum number of electrons it can hold. The electrons furthest from the nucleus are the most energetic. Conductors have between one and three electrons in their outermost "valence" shell.
The nucleus acts as a magnet, keeping the electrons in place. However, there is yet another layer, the conduction band. If an electron gets here, it can leave its atom and travel to another. Because the valence shell and conduction band of a metal atom overlap, the electron can move quickly and easily between the two.
An insulator has a tightly packed outer shell with no free space for electrons to occupy. Because of the strong attraction between the nucleus and the electrons, and the great distance between the valence shell and the conduction band, the electrons are held tightly to the atom and cannot leave. Because of this, electricity cannot travel through it.
Of course, a semiconductor is also a different type of material. A semiconductor might be silicon, for instance. This material behaves as an insulator because it has one more electron than is necessary in its outermost shell to be a conductor. However, with enough external energy, a few valence electrons can generate enough momentum to hop across to the conduction band, where they can finally break free. Consequently, this substance can perform the roles of both an insulator and a conductor.
Due to the lack of free electrons in pure silicon, engineers must add a small number of materials (called "doping") to the silicon to alter its electrical properties.
This process gives rise to P-type and N-type doping, respectively. The diode itself is a combination of these doped materials.
Two leads connect the anode and cathode to various thin plates inside the diode. P-Type doped silicon is on the anode side of these plates, and the cathode side is N-Type doped silicon—an insulating and protective resin that coats the entire structure.
Consider the material to be pure silicon before it has been doped. There are four silicon atoms surrounding each one. Because silicon atoms need eight electrons to fill their valence shells but only have four available, they share one with their neighbours. Covalent bonding describes this type of interaction.
Phosphorus, an N-type dopant, can be substituted for a number of silicon atoms in the crystal. Phosphorus has five electrons in its valence shell. Since the atoms share electrons to reach the magic number of eight, this fifth electron isn't needed, which means there's an extra electron in the material, free to go wherever it wants.
In P-type doping, a substance like aluminium is introduced. With only three valence electrons, this atom can bond with just three of its four silicon neighbours, so an electron-sized hole is made available.
We now have silicon with either too many or too few electrons, depending on the doping method.
Upon joining, the two substances form a p-n junction, and a depletion region develops at the boundary. Here, some of the surplus electrons on the N-type side migrate over to fill the vacancies on the P-type side, so electrons and holes accumulate on either side of the junction. Holes are treated as positively charged since they are the opposite of electrons, which are negatively charged. The resulting accumulation produces two distinct regions, one slightly negatively charged and the other slightly positively charged, forming an electric field that blocks the path of any more electrons. In regular diodes, the voltage drop across this region is only about 0.7V.
By applying a voltage across the diode with the P-Type anode linked to the positive and the N-Type cathode attached to the negative, a forward bias is established, and current can flow. The electrons can't get over the 0.7V barrier unless the voltage source is higher.
Reversing this, connecting the positive terminal of the supply to the N-type cathode and the negative terminal to the P-type anode, creates a reverse bias. The barrier expands as holes are drawn toward the negative terminal and electrons toward the positive, so the diode acts as an insulator and blocks current.
A resistor is a two-terminal, passive electrical component that reduces the current in electric and electronic circuits. By strategically placing a resistor, the current in a circuit can be lowered by a set amount. From the outside, most resistors look identical, but if you crack one open, you'll find a ceramic rod used for insulation, with copper wire wound around it. Those copper turns are crucial to the resistance: when the copper is made thinner, resistance rises because electrons have more difficulty passing through. We already know that electrons move more freely through conductors than through insulators.
George Ohm investigated the relationship between a resistor's dimensions and its resistance. He showed that an object's resistance (R) grows in proportion to its length, which is why longer, thinner wires offer greater resistance. Wire thickness has the opposite effect: the thicker the wire, the lower the resistance.
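In symbols, this is the familiar resistance relationship:

$R = \rho \dfrac{L}{A}$

where $R$ is the resistance, $\rho$ the material's resistivity, $L$ the conductor's length, and $A$ its cross-sectional area; doubling the length doubles the resistance, while doubling the cross-section halves it.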
Once everything is hooked up, start your server by browsing to the IP address of your RPi with the port you chose earlier (as mentioned in the previous section), then enter your username and password; you should see a page that looks like the one below.
All it takes is a few clicks of your mouse to operate four AC home appliances from afar. This can be controlled from a mobile device (phone, tablet, etc.) and expanded with additional switches and relays. Thank you all for reading to the end.
This guide showed us how to set up a web-based control system for a Raspberry Pi 4 home automation project. We learned how to use the WebIOPi framework to manage, debug, and use the Raspberry Pi's GPIO, sensors, and adapters from an internet browser or any application, and we implemented the JavaScript, CSS, and HTML code for the web application. For those who thrive on difficulty, feel free to build on this base and add whatever demanding module you can think of. The following tutorial will teach you how to use a Raspberry Pi 4 to create a line-follower robot that can navigate obstacles and drive itself.
Metal fabrication refers to the manufacturing of sheet metal and other types of metal to fit different shapes.
The metal fabrication industry is vital to a wide range of industries. That’s due to the reliance on metal fabrication for vehicle parts, train tracks, building equipment, electrical devices, etc. A metal fabrication shop has various pieces of equipment needed to design and fabricate a myriad of metals. So, what, then, is custom fabrication?
Custom metal fabrication refers to the production or fabrication of a range of metals to meet unique specifications. Custom metal fabrication projects are carried out for a specific purpose. It involves cutting, bending, rolling, or joining metal to create custom complex shapes based on specific requirements.
Unlike traditional metal fabrication, custom metal fabrication offers more flexibility to meet the product specifications of various industries. It can be used to create tools for construction, mining, aerospace, and energy delivery systems.
Custom metal fabrication offers certain benefits over stock metal fabrication. It can be used to create precise fabrications for use in rail systems, building equipment, automobiles, and many other purposes.
Here are three major benefits of custom metal fabrication:
Improves Product Quality: You can produce durable metal products through custom fabrication. Metal fabricators consider the best materials that can suit the desired application.
Factors such as external forces, stress, and possible strains are also considered during the production process. For custom metal fabrication, the design is application-centered. The custom metal fabricator uses materials that can ensure increased product quality.
Increases Metal Fabrication Efficiency: Instead of using mass-produced metal works for certain services, custom fabrication drives efficiency by producing specialized designs. It allows for the creation of metal structures using a process that suits the requirements the most.
This helps increase efficiency and reduce the time spent on metal fabrication. Also, custom fabrication allows for optimized product processing and product flow. It makes the best use of metal fabrication materials in the production line.
Offers High Component Compatibility: Another major benefit of custom metal fabrication is the high component compatibility it offers. In custom fabrication, the metal works are designed to meet the required specifications, making them compatible with each other. With custom metal fabrication, you can produce custom shapes that work with the hardware you are using.
There are different types of metal fabrication processes. These processes are unique and are utilized based on part geometry, the metal fabrication materials, and the purpose of the product. Below are the common types of metal fabrication processes.
Casting: Casting involves pouring molten metal into a die or mold. The molten metal then cools and hardens into the desired shape. Casting is ideal for the mass production of metal parts with the same shape, for example, in the production of train tracks.
The mold can be reused multiple times to create metal works that are of the same shape. There are different types of casting processes, such as die casting, permanent mold casting, and sand casting.
Cutting: Cutting is one of the most common metal fabrication processes. It involves cutting a large piece into smaller pieces of metal. While sawing remains the traditional and earliest method of cutting, there are other methods, such as laser cutting, waterjet cutting, and plasma arc cutting, among other modern cutting techniques.
Cutting can be done through manual and power tools or through computer-controlled machines. Usually, cutting is the first stage in a metal fabrication process.
Drawing: In drawing, a piece of metal is pulled through a die by tensile force and stretched into a thinner shape. Drawing can be performed at room temperature or at elevated temperature; it is known as cold drawing when done at room temperature.
However, the metal can be heated to reduce the tensile force needed. Drawing is usually combined with sheet metal fabrication to create box-shaped or cylindrical vessels.
Drilling: In drilling, a rotary cutting tool is used to cut holes in a piece of metal. The drill bit rotates very fast to make a hole in the metal.
Extrusion: In extrusion, the piece of metal is forced between or around a closed or open die. The diameter of the metal will be reduced to the cross-section of the closed or open die if it is forced through.
However, if the metal is forced around a die, it will result in a cavity. The two methods involve a metal slug and a ram to perform the process. Usually, the resulting product is used for piping or wiring. For example, it can be used to create splice kits needed to splice cable.
Extrusion can also be used to create short or long metal pieces. Also, it can be performed at room temperature or at increased temperatures. Cold extrusion is typically used for steel metal fabrication, while hot extrusion is commonly used for copper fabrication.
Forging: This involves the use of compressive force to create metal pieces. The metal fabricator uses a hammer or die to hit the workpiece until it forms the desired shape. It can also be done using high-pressure machinery.
Milling: Milling involves using rotating cutting tools to make perforations into the metal until it forms the desired shape. Usually, milling is done as a finishing or secondary process. It can be done through a CNC machine or manually. There are different types of milling, such as climb milling, form milling, angular milling, and face milling.
Punching: Turrets with unique shapes hit the metal to create metal pieces with holes or shaped metal pieces. It can be used to create delicate metal decorations or for other purposes.
Turning: In turning, a lathe is used to rotate the metal, while a cutting tool is used to shape the metal radially while it spins. The angle of the cutting tool can be adjusted to make different shapes.
The metal fabrication industry caters to many other industries. Metal works are used for different applications, such as rail works, buildings, and other projects. Custom metal fabrication involves applying metal fabrication processes to make specific metal works.
The major benefits of custom metal fabrication are improved product quality, increased metal fabrication efficiency, and high component compatibility. The unique metal fabrication processes are used for different purposes to create specific products. It is important to look for experienced companies with specialized equipment when looking for a metal fabrication service.
A lot of people took up engineering for the love of math and machines. Most are introverts as well, diligently working on their projects in workshops, plants, or on computers. Unless you are in sales, academia, or a managerial position, you don't meet as many people as a doctor or a lawyer does.
However, this should not stop you from establishing your network. A people network may not be the most popular engineering tool, but you might be surprised by how useful and powerful it can be.
The good thing is that there are contact management apps nowadays. You can compartmentalize your contacts: suppliers, specialists, laborers, and so on. With the right contact management app, you can reap the following benefits.
Referral fees are sweet. Receiving a check from an old acquaintance because you recommended them for the project you are currently working on may not be standard procedure, but if it comes, you have the budget for a fancy dinner or a family vacation, depending on how much you receive, of course.
It is ideal to establish your network early, like during your university years. That is when you meet other students taking up engineering courses. For example, you took civil engineering and later, got into residential projects. Someone from the electrical engineering class may be into developing housing units as well. You can easily refer him to your developer because you have an idea of how he performed way back in the undergraduate days and most likely carries a good professional portfolio as well.
Deep but narrow or wide but shallow, that is the question. However, this is the age of information and we can easily swim to different waters. There might be electrical theories that can be tried on fluid flow, especially when you understand voltage deeply. And vice versa.
Knowledge of chemistry may benefit materials engineers. Marine engineers may take notes from a marine biologist about how different aquatic life functions so that the former can improve their work.
It’s not imperative that you are in-depth with other fields. The key is knowing who is an authority in those other fields. You just have to be able to ask the relevant questions and understand the basic terminologies in order to sustain a professional collaboration.
It is nice to have a reliable circle that can give technical opinions about your innovations. Peer review helps improve your new machine or novel methodology. After you have performed your calculations and achieved repeatable results, a review not only from your mentors and colleagues but also from other stakeholders or potential users would be beneficial.
If you have a good network, getting insightful feedback would be easy to come by. This would be great especially on product development as usually, you need fresh eyes on something you have been working on for a long while.
When you are operating in the same environment for a long time, it’s difficult to scale how your practice and proficiency translate to the outside world. Connecting with other professionals allows you to get a better understanding of how other people in similar positions go on to have more successful careers. This is called upward comparison. This psychological theory implies that when people compare themselves to someone they perceive to be superior, it motivates them to gain similar achievements.
If you're eyeing a C-level post on the career ladder, it is not enough to know only what your institution demands for such a position. Your fellow engineers will be aware of the needed upgrades as well. Connections with engineering managers, even from other industries, can also be beneficial: they give you more insight into what aspiring principal engineers are doing than your immediate colleagues can. When someone in your network gets promoted, you can directly ask what he did, or simply note the certifications he took and the achievements he earned.
Engineers in the technical field don't consciously need to accumulate contacts for their work. However, if you compile all the various suppliers showcasing their innovative designs, the numerous specialists servicing your instruments, and the dozen or more consultants you need signatures from, you might be surprised to find you could already fill an A0 sheet with their contact information.
You don't necessarily need to be friends with all of them. Just keep their calling cards instead of absent-mindedly handing them to the document controller. Better yet, use a contact management app that can store not only the typical contact details but also some notes about the person.
Let's say you're a building engineer, and a salesman walks into your office to give a product demo of a new waterproofing technology. Instead of focusing solely on the technicalities of the product, you can be a little more personable with the salesperson, and maybe you'll get a better deal than the ordinary sales pitch offers.
Network, network, network. This is not exactly taught in engineering economics but business graduates already know. You can set up your consultancy in a prime location. However, if your brand is unpopular, you’re just making one person rich: the landlord.
You may have the most efficient system design, but nobody will hear about it if you can’t market it properly. Hence, you will hire a marketing officer and what does he have to do? Build a database of people where he can sell your product.
If you already have accumulated and organized contacts, you don’t have to put up much effort to find initial clients. You save the budget to hire a marketing officer for later when you want an expansion.
It may be HR’s job to hire and fire technicians and laborers. However, keeping a record of all the workers you’ve handled may come in handy.
If you start your own company, it will be easier to have a ready list of employees you want to hire. Pulling up a list of personnel from your phone is more convenient than tracking them one by one after a long time.
Another case in point is when a project nears its end. It is normal to reduce manpower, so some of your team members will have to be let go. Your list of good workers can easily be shared with fellow engineers who are at the peak of their work at another construction site. You share your manpower the way you (sometimes) lend your tools to other engineers.
Later on, when you are the one needing extra manpower to recover hours lost to force majeure, you can easily pull in crew members whose quality of work you never have to question. This mitigates the risks of poor workmanship and unprofessional conduct, as you have worked with these crews before.
It’s never too late to build your people network. You can gain this new skill set or upgrade your basic knowledge with the help of some contact management tools. Do so and reap more of the benefits of human connections.
Welcome to the next tutorial in our Raspberry Pi 4 Python programming series. In the previous article, we built a system that recognizes when two people are in physical contact, using OpenCV and a Raspberry Pi 4, and we used the weights of the YOLOv3 object recognition algorithm to implement the deep neural network part. When it comes to image processing, the Raspberry Pi consistently comes out on top compared to other controllers; a facial recognition program was among the earlier attempts to use the Pi for sophisticated image processing. In today's world of cutting-edge technology, digital image processing has expanded rapidly to become an integral feature of many portable electronic gadgets.
Digital image processing is widely used for tasks such as object detection, facial recognition, and people counting. This guide will use a Raspberry Pi 4 and ThingSpeak to create a crowd-counting system based on OpenCV. We will use the Pi camera module to take pictures in a continuous loop and run the images through the Histogram of Oriented Gradients (HOG) descriptor to find the people in them, comparing them against OpenCV's pre-trained people-detection model. The headcount can be seen by anybody, anywhere in the world, because the ThingSpeak channel is public.
Knowing how many people show up to an event or purchase a newly released product is vital for event management and retail shop owners. Still, it's even more critical that they can use that information to improve future events. To their relief, modern crowd-counting technology has made it simpler for event planners and business owners to acquire actionable data on event attendance that can be used to improve ROI.
Where To Buy?

| No. | Components | Distributor | Link To Buy |
|-----|------------|-------------|-------------|
| 1 | Raspberry Pi 4 | Amazon | Buy Now |
Raspberry Pi 4
Pi Camera
ThingSpeak
Python3
OpenCV3
In this project, the OpenCV framework will do the people counting. You must first update your Raspberry Pi before you can install OpenCV:
sudo apt-get update
Then, get OpenCV ready for your Raspberry Pi by installing its prerequisites.
sudo apt-get install libhdf5-dev -y
sudo apt-get install libhdf5-serial-dev -y
sudo apt-get install libatlas-base-dev -y
sudo apt-get install libjasper-dev -y
sudo apt-get install libqtgui4 -y
sudo apt-get install libqt4-test -y
Once that is done, use the following command to install OpenCV on your Raspberry Pi.
pip3 install opencv-contrib-python==4.1.0.25
We need to get some additional packages on the Raspberry Pi before we can begin writing the code for the Crowd Counting app.
Installing imutils: imutils makes basic image processing tasks such as translating, rotating, resizing, skeletonizing, and displaying Matplotlib images easier in OpenCV. Run the following command to set up imutils:
pip3 install imutils
matplotlib: The matplotlib library should then be installed. When it comes to Python visualizations, Matplotlib is your one-stop shop for everything from static to animated to interactive.
pip3 install matplotlib
One of the most widely used IoT platforms, ThingSpeak allows us to keep tabs on our data from any location with an Internet connection. The system can also be controlled remotely by using the Channels and web pages provided by ThingSpeak. You must first register for an account on ThingSpeak to create a channel. If you have a ThingSpeak account, please log in with your username and password.
Select Sign up and fill out the required fields.
Double-check your email address and press the "Next" button when you're done. Now that you're logged in, click the "New Channel" button to make a brand-new channel.
When you're ready to begin uploading information, select "New Channel" and give it a descriptive name and brief explanation. One new field, "People," has been added. Any number of areas may be made, as needed. Then, click the "Save Channel" button after entering the necessary information. You'll need to pass your API and channel ID into a Python script whenever you want to submit data to ThingSpeak.
For this OpenCV people-counting project, all you need is a Raspberry Pi and a Pi camera; to get started, plug the camera's ribbon connector into the Raspberry Pi's designated camera slot.
The Pi 4 Camera board is a purpose-built expansion board for the Raspberry Pi computer. The Raspberry Pi hardware is connected via a specialized CSI interface. In its native still-capture mode, the sensor's resolution is 5 megapixels. Capturing at up to 1080p and 30 frames/second in video mode is possible. Because of its portability and compact size, this camera module is fantastic for handheld applications.
A ribbon cable connects the camera board to the Raspberry Pi. If you attach the ribbon cable correctly, the camera will work: the blue backing on the camera end must face away from the camera PCB, while the blue backing on the Pi end must face the Ethernet port.
The HOG is one example of a feature descriptor, similar to the Canny edge detector. Object detection is a typical application of this technique in image processing and computer vision. The method counts occurrences of gradient orientations in localized regions of an image and has much in common with the Scale-Invariant Feature Transform (SIFT). The HOG descriptor highlights object structure and form, and it is superior to plain edge descriptors because it considers both the magnitude and the angle of the gradient: histograms are created for regions of the image based on the gradient's intensity and direction.
First, load the image that will serve as the basis for the HOG feature calculation into the system. Reduce the size of the image to 128 by 64 pixels. The research authors utilized and recommended this dimension because improving detection outcomes for pedestrians was their primary goal. After achieving near-perfect scores on the MIT pedestrian's database, the authors of this study opted to create a new, more difficult dataset: the 'INRIA' dataset (http://pascal.inrialpes.fr/data/human/), which includes 1805 (128x64) photographs of individuals cut from a wide range of personal photos.
In this step, we compute the image's gradient, which has a magnitude and an angle at every pixel. Considering a 3x3 neighbourhood around each pixel, we first determine $G_x$ and $G_y$ as central differences of the pixel intensities $I(r,c)$:

$G_x(r,c) = I(r,c+1) - I(r,c-1)$
$G_y(r,c) = I(r-1,c) - I(r+1,c)$
After $G_x$ and $G_y$ are determined, each pixel's magnitude and angle are computed as:

$\mu = \sqrt{G_x^2 + G_y^2}$
$\theta = \left| \tan^{-1}(G_y / G_x) \right|$
Once the gradient for each pixel has been calculated, the gradient matrices (magnitude and angle) are each divided into 8x8 cells. For each cell, a 9-point histogram is computed: each bin covers a range of 20 degrees, so the histogram has nine bins in total, with bin width $\Delta\theta = 180/9 = 20$. The histogram can be drawn so that each bin's height shows the relative strength of the gradients falling in its angular interval. Since each 8x8 cell contains 64 magnitude and 64 angle values, the calculation below is carried out for all 64 combinations of magnitude and gradient direction. Because 9-point histograms are used:

The boundaries of the $j$th bin are:

$\left[\, \Delta\theta \cdot j,\; \Delta\theta \cdot (j+1) \,\right], \quad j = 0, 1, \ldots, 8$

and the centre value of each bin is:

$C_j = \Delta\theta \cdot (j + 0.5)$
Illustration of a histogram with nine discrete bins: for a particular 8x8 cell of 64 pixels, there is a single histogram, and each of the sixty-four pixels contributes its $V_j$ and $V_{j+1}$ values to the array at the $j$th and $(j+1)$th indices.
When determining the value a pixel contributes, we first determine which bin centres its angle $\theta$ falls between. The pixel's vote is then split between the two adjacent bins in proportion to how close $\theta$ is to each centre:

$V_j = \mu \cdot \dfrac{C_{j+1} - \theta}{\Delta\theta}$
$V_{j+1} = \mu \cdot \dfrac{\theta - C_j}{\Delta\theta}$
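As a quick worked check of the split-vote rule (with made-up numbers, purely illustrative):

delta = 20.0                # bin width: 180 degrees / 9 bins
mu, theta = 13.6, 36.0      # hypothetical gradient magnitude and angle
c_j, c_j1 = 30.0, 50.0      # centres of the two bins straddling theta
v_j = mu * (c_j1 - theta) / delta    # vote for the lower bin  -> 9.52
v_j1 = mu * (theta - c_j) / delta    # vote for the upper bin  -> 4.08
print(v_j, v_j1)            # the two votes always sum to mu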
Each pixel's values $V_j$ and $V_{j+1}$ are calculated and added to its cell's histogram at the $j$th and $(j+1)$th indices. After the preceding steps, the resulting histogram matrix has dimensions 16 x 8 x 9 (the 128 x 64 image yields 16 x 8 cells, each with a 9-bin histogram). Once the histograms of all cells have been computed, blocks are formed by joining 2x2 neighbouring cells of the histogram matrix. This grouping is done in an overlapping fashion with a stride of 8 pixels (one cell). For each block, we create a 36-point feature vector by concatenating the 9-point histograms of its four cells.
A combined block feature vector $F_b$ is thus created from four cells as a 2x2 window traverses the image.
The values of $F_b$ are then normalized across each block using the L2 norm. The normalization constant is:

$k = \sqrt{V_1^2 + V_2^2 + \cdots + V_{36}^2}$

and each element of $F_b$ is divided by $k$.
Normalization is performed to lessen the effect of contrast variations between images of the same object. Each block yields a 36-point feature vector, and there are 7 block positions horizontally and 15 vertically, so the total length of all histogram-of-oriented-gradients features is 7 x 15 x 36 = 3780. These are the HOG characteristics extracted from the image.
The extracted HOG features can be visualized side by side with the original image using an image library.
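If you'd like to reproduce that side-by-side view yourself, here is a short sketch using scikit-image (an extra dependency, assumed to be version 0.19 or newer; the built-in sample image is a stand-in for your own):

import matplotlib.pyplot as plt
from skimage import data, exposure
from skimage.feature import hog

image = data.astronaut()    # built-in sample image; substitute your own
features, hog_image = hog(image, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2), visualize=True, channel_axis=-1)
# Rescale intensities so the faint gradient lines become visible
hog_image = exposure.rescale_intensity(hog_image, in_range=(0, 10))
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(image); ax1.set_title("Input")
ax2.imshow(hog_image, cmap="gray"); ax2.set_title("HOG features")
plt.show()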
Below is the complete Python code for an OpenCV project that counts the people in a crowd; here we break down the code's crucial parts so you can understand them better. First, import all the necessary libraries that will be used later in the code.
import cv2
import imutils
from imutils.object_detection import non_max_suppression
import numpy as np
import requests
import time
import base64
from matplotlib import pyplot as plt
from urllib.request import urlopen
Imutils:
For use with OpenCV and either version of Python, this package provides a set of helper functions for everyday image processing tasks such as scaling, cropping, skeletonizing, showing Matplotlib pictures, grouping contours, identifying edges, and more.
Numpy:
You can manipulate arrays in Python with the help of the NumPy library. Matrix operations, the Fourier transform, and linear algebra are all within their purview. Because it is freely available to the public, anyone can use it. That's why it's called "Numerical Python," or "NumPy" for short.
Python's list data structure can serve as an array, but it is slow. NumPy aims to provide an array object up to 50 times faster than standard Python lists. To make working with NumPy's array object, ndarray, as simple as possible, the library provides several helpful utilities. Data science makes heavy use of arrays because of the importance placed on speed and efficiency.
Requests:
You should use the requests package if you need to send an HTTP request from Python. It hides the difficulties of requests making behind a lovely, straightforward API, freeing you to focus on the application's interactions with services and data consumption.
Time:
In Python, the time module has a built-in function called localtime() that determines the current local time based on the number of seconds that have passed since the epoch. The tm_isdst field of the result ranges from 0 to 1 to indicate whether daylight saving time applies to the current local time.
Base64:
If you need to store or transmit binary data over a medium better suited to text, you should look into Base64 encoding. This encoding method lowers the risk of data corruption or loss. Base64 is widely used for many purposes, such as MIME email encoding and storing complicated data in XML and JSON.
Matplotlib:
When it comes to Python visualizations, Matplotlib is your one-stop shop for everything from static to animated to interactive plots. Matplotlib handles both straightforward and challenging tasks: you can design graphs worthy of publication and create movable, updatable, and zoomable figures.
urllib.request:
If you need to make HTTP requests with Python, you may be directed to the brilliant requests library. Though it's a great library, you may have noticed that it is not a built-in part of Python. If you prefer, for whatever reason, to limit your dependencies and stick to standard-library Python, then you can reach for urllib.request instead.
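Here is a minimal side-by-side sketch of the two approaches; the URL is only a placeholder:
import requests
from urllib.request import urlopen

url = "https://api.thingspeak.com"            # placeholder endpoint

body_requests = requests.get(url).text        # third-party, high-level API
body_urllib = urlopen(url).read().decode()    # standard library, no extra install
print(body_requests[:60])
print(body_urllib[:60])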
Then, after the libraries have been imported, you can paste in the channel ID and API key for the ThingSpeak account you previously copied.
channel_id = 812060 # PUT CHANNEL ID HERE
WRITE_API = 'X5AQ3EGIKMBYW31H' # PUT YOUR WRITE KEY HERE
BASE_URL = "https://api.thingspeak.com/update?api_key={}".format(WRITE_API)
Set the default values for the HOG descriptor. HOG has found many uses, making it one of the most frequently implemented methods for object detection. OpenCV ships a pre-trained model for people detection, accessed through cv2.HOGDescriptor_getDefaultPeopleDetector().
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
Inside the detector() function, the Raspberry Pi supplies a three-channel color image. The function uses imutils to scale the image down to an appropriate size, and the detectMultiScale() method then scans the image, using the SVM classifier to determine the presence or absence of a human in each window.
def detector(image):
    image = imutils.resize(image, width=min(400, image.shape[1]))
    clone = image.copy()
    rects, weights = hog.detectMultiScale(image, winStride=(4, 4), padding=(8, 8), scale=1.05)
If you're getting false positives or detection failures due to overlapping capture boxes, try the code below, which uses the non-max suppression capability from imutils to merge overlapping regions.
    for (x, y, w, h) in rects:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    result = non_max_suppression(rects, probs=None, overlapThresh=0.7)
    return result
With the help of OpenCV's VideoCapture() method, the image is retrieved from the Pi camera within the record() function, where it is resized with imutils before the detection result is sent to ThingSpeak.
def record(sample_time=5):
    camera = cv2.VideoCapture(0)
    ret, frame = camera.read()
    frame = imutils.resize(frame, width=min(400, frame.shape[1]))
    result = detector(frame.copy())
    result1 = len(result)   # number of people detected
    thingspeakHttp = BASE_URL + "&field1={}".format(result1)
Now that everything is hooked up and ready to go, let's put it through its paces. Extract the program to a new folder and make sure your Raspberry Pi camera is operational before launching the Python script. Give Python a few seconds to load all the necessary modules; after that, a new window will appear with your live video feed inside it. OpenCV will count the number of people in each frame the Pi processes, and a box drawn around each person indicates a detection.
With the counts flowing in, you can check the crowd size from the comfort of your own home via your ThingSpeak channel.
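As a hedged sketch of that remote check, using ThingSpeak's standard read endpoint (a private channel would additionally need a read key appended as &api_key=...):
import requests

CHANNEL_ID = 812060   # the channel written to above
url = "https://api.thingspeak.com/channels/{}/fields/1.json?results=10".format(CHANNEL_ID)

feed = requests.get(url).json()
for entry in feed["feeds"]:
    print(entry["created_at"], entry["field1"])   # timestamped people counts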
You can now efficiently conduct crowd counts with OpenCV and a Raspberry Pi. This technology helps guarantee the safety of attendees at large-scale events, a top priority for event planners. Knowing how people flow through a venue or store is crucial for offering effective crowd management services. It also improves efficiency and customer service, since event and store managers can track how many people enter and leave their establishments at any one time. Additionally, understanding dwell time helps event planners ascertain which parts of a venue are popular with attendees and which are bypassed entirely. That tells them how guests felt and lets them make better use of the space they have.
import cv2
import imutils
from imutils.object_detection import non_max_suppression
import numpy as np
import requests
import time
import base64
from matplotlib import pyplot as plt
from urllib.request import urlopen
channel_id = 812060 # PUT CHANNEL ID HERE
WRITE_API = 'X5AQ3EGIKMBYW31H' # PUT YOUR WRITE KEY HERE
BASE_URL = "https://api.thingspeak.com/update?api_key={}".format(WRITE_API)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
def detector(image):
    image = imutils.resize(image, width=min(400, image.shape[1]))
    clone = image.copy()
    rects, weights = hog.detectMultiScale(image, winStride=(4, 4), padding=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    result = non_max_suppression(rects, probs=None, overlapThresh=0.7)
    return result

def record(sample_time=5):
    print("recording")
    camera = cv2.VideoCapture(0)
    init = time.time()
    # enforce a minimum sampling interval
    if sample_time < 3:
        sample_time = 1
    while True:
        print("cap frames")
        ret, frame = camera.read()
        frame = imutils.resize(frame, width=min(400, frame.shape[1]))
        result = detector(frame.copy())
        result1 = len(result)
        print(result1)
        # draw the suppressed boxes on the frame and display it
        for (xA, yA, xB, yB) in result:
            cv2.rectangle(frame, (xA, yA), (xB, yB), (0, 255, 0), 2)
        plt.imshow(frame)
        plt.show()
        # send the current people count to ThingSpeak
        if time.time() - init >= sample_time:
            thingspeakHttp = BASE_URL + "&field1={}".format(result1)
            print(thingspeakHttp)
            conn = urlopen(thingspeakHttp)
            print("sending result")
            init = time.time()
    camera.release()
    cv2.destroyAllWindows()

def main():
    record()

if __name__ == '__main__':
    main()
Crowd dynamics can be affected by several things, such as the passage of time, the layout of the venue, the amount of information provided to visitors, and the overall enthusiasm of the gathering. Managers of large crowds need to be flexible and responsive in case of sudden changes in the environment that affect the situation's dynamics in real-time. Trampling events, mob crushes, and acts of violence can break out without proper crowd management.
The complexity and uncertainty of large-scale events emphasize the importance of providing timely, relevant information to crowd managers. Occupancy control technology helps event planners anticipate how many people will show up to their event, so they can prepare appropriately by ensuring adequate security guards, exits, etc.
Using a Raspberry Pi and HOG-based person detection, this article described a system for counting individuals, showing how many people enter and leave a monitored area. The principles of HOG and the calculation of its features have also been covered. The testing outcomes demonstrate the viability of using this Raspberry Pi-based device as an essential people-counting station. In the following tutorial, we'll learn how to assemble an intelligent energy monitor based on the Internet of Things and a Raspberry Pi 4.
Hello peeps! Welcome to the next tutorial on deep learning. You have already learned about neural networks, and it was interesting to compare their different types. Now we are talking about deep learning frameworks. In the previous sessions, we introduced some important frameworks to show you how different entities connect, but at this level, that is not enough. Today we describe in detail the frameworks that are in fashion because of their latest features. So before we start, have a look at the list of concepts that will be covered today:
Introduction to the frameworks of deep learning.
Why do we require frameworks in deep learning?
What are some important deep learning frameworks?
What are TensorFlow and TensorBoard, and what is each used for?
Why is Keras famous?
What is the relationship between Python and PyTorch?
How can we choose the best framework?
Deep learning is a complex field of machine learning, and it is important to have command of different tools and tricks so that you can design, train, and understand several types of neural networks efficiently and in minimal time. Frameworks exist for many programming languages; a framework is software that combines different tools to simplify and streamline development in that language.
The best thing about frameworks is that they allow you to train your models without knowing or worrying about the algorithms running behind the scenes. Isn't it amazing that you get a helping hand to understand and train your model without any worries? Once you know more about the different frameworks, it will become clear how each one handles specific types of tasks to make your training process easy and interesting.
In the beginning, when you program the deep learning process by hand, you will see some interesting results for your task. Yet when you move on to complex tasks, or when you reach the intermediate level, you will realize that performing even a simple task at a higher level is strenuous and time-consuming. Moreover, repeating the same code over and over can become tiresome.
Usually, the need for a framework arises when you start working with advanced neural networks such as convolutional neural networks, or simply CNNs, where the involvement of images and video makes the task difficult and time-consuming. These frameworks provide pre-defined types of networks and an easy way to access a great deal of information, as the short sketch below shows.
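For instance, here is a minimal sketch (assuming TensorFlow is installed; the first call downloads pre-trained weights) of how a framework hands you a complete, pre-defined CNN:
import tensorflow as tf

# a ready-made CNN with pre-trained ImageNet weights, one call away
model = tf.keras.applications.MobileNetV2(weights="imagenet")
model.summary()   # prints the pre-defined architecture layer by layer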
With the advancement of deep learning, many organizations are working to make it more user-friendly so that more people can use it for advanced technologies. One reason behind the popularity of deep learning is that many new frameworks are introduced every year. We have analyzed different platforms, researched different reports, and found some amazing frameworks, which our experts have been evaluating for a long time to pick the best ones for your learning. Here is the list of the frameworks that we will discuss in detail, along with the pros and cons of each.
TensorFlow
Keras
PyTorch
Theano
DL4J
Lasagne
Caffe
Chainer
We are not going to discuss all of them because it may be confusing for you to understand all the frameworks. Moreover, we believe in smart working, and therefore, we are simply discussing the most popular frameworks so that you may learn the way to compare different parameters, and after that, you will get the perfect way to make modules, train, and test the projects in different ways for smart working.
The first framework to be discussed here is TensorFlow, which is undoubtedly the most popular framework for deep learning because of its easy availability and great performance. The backbone of this platform is Google's Brain team, which developed and released it for deep learning and made it accessible to almost all types of users. It supports Python and some other programming languages, and a good thing about it is that it is built around dataflow graphs. This makes it especially handy because, when dealing with different types of neural networks, it helps you understand the progress and efficiency of your model.
Another important point to notice about TensorFlow is that the models it creates are undemanding to build yet robust.
A plus point of this framework is its companion package, TensorBoard. This fabulous package has several advantages, some of which are listed below:
Its basic job is to provide data visualization to the user, which is a great convenience, although unfortunately fewer people are aware of this useful feature than should be.
Another advantage of TensorBoard is that its fantastic data display makes sharing results with stakeholders easy and comfortable.
TensorBoard also works alongside other packages in the TensorFlow ecosystem; a minimal logging sketch follows this list.
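Here is the minimal logging sketch promised above, assuming TensorFlow 2.x; after running it, "tensorboard --logdir logs" serves the visualization in your browser (the log directory name is our own choice):
import tensorflow as tf

writer = tf.summary.create_file_writer("logs/demo")   # hypothetical log directory
with writer.as_default():
    for step in range(100):
        fake_loss = 1.0 / (step + 1)                  # stand-in for a real training loss
        tf.summary.scalar("loss", fake_loss, step=step)
writer.flush()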
You can get other basic information about TensorFlow from the following table:
TensorFlow
Release Dates: November 9, 2015, and January 21, 2021
Programming Languages: Python, C++, CUDA
Category: Machine learning library
Platforms: JavaScript, Linux, Windows, Android, macOS
License: Apache License 2.0
Website: https://www.tensorflow.org
Next on the list is another famous and useful library for deep learning that most of you may know about. Keras is a favourite framework of deep learning developers because of its popularity and open-source contributors; one estimate puts the community at 35,000+ users, who keep making the platform more and more popular.
Keras is written in the Python programming language, and it supports high-level neural networks. You must keep in mind that Keras is an API, and it runs on top of highly popular libraries such as TensorFlow and Theano. You will see this in action in our coming lectures. Because of its user-friendly features, Keras is used as a starting point by a large number of companies, and it is a great tool for researchers and students.
The most prominent feature of Keras is its user-friendly nature. The developers seem to have designed this framework for all types of users, whether professionals or learners. When users encounter an error or issue, they receive transparent and actionable feedback.
For me, modularity is a particularly useful feature because it makes tasks easier and faster. Moreover, errors are easily detectable, which is a big relief. Modules are presented graphically or as a sequence of information so that the user can understand them well.
Here's some good news for researchers and students. Keras is one of the best options for researchers because it allows them to build their own modules and test them however they choose. Adding modules to your project is super easy in Keras, so you can do advanced research without any issues. A minimal sketch follows.
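The sketch below shows that modular, layer-by-layer style (assuming the tf.keras API; the layer sizes are illustrative only):
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),   # module 1
    keras.layers.Dense(64, activation="relu"),                      # module 2
    keras.layers.Dense(1, activation="sigmoid"),                    # output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()   # a readable summary of the stacked modules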
Keras
Release Dates: March 27, 2015, and June 17, 2020
Programming Languages: Python
Category: Almost all types of neural networks
Platforms: Cross-platform
License: Massachusetts Institute of Technology (MIT)
Website: https://keras.io
Our next topic of discussion is PyTorch. It is another open-source library for deep learning, used to build complex neural networks with ease. What attracted me to this library is the platform that introduced it: it was developed under the umbrella of Facebook's AI Research lab. I'm curious about how powerful it is, because every time I open my Facebook app, I find the content I've chosen and wished for. People have been using it since 2016 for deep learning, computer vision, and related purposes, as it is free and open source. By using PyTorch together with powerful libraries such as NumPy, you can build, train, and test complex neural networks. Because of its easy accessibility, PyTorch is popular, and the range of programming languages and libraries that work with it is another reason for its success.
A feature that makes it easy to use is its hybrid front end, which makes it faster and more flexible. The user-friendly nature of this library also makes it a good choice for people who are not professional programmers.
With the help of its torch.distributed backend, you can get optimal performance and keep an eye on the training and operation of the network you are using. It has a powerful architecture, and at an advanced level you can use it for complex neural networks.
As you can guess, PyTorch is driven by Python, one of the most popular and trending programming languages, and the plus point is that it lets many other libraries work alongside it on neural networks, as the minimal sketch below suggests.
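Here is that minimal sketch (assuming PyTorch is installed; the layer sizes are illustrative): a network is just a Python class, and NumPy arrays convert to tensors in one call:
import numpy as np
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(20, 64)
        self.fc2 = nn.Linear(64, 1)

    def forward(self, x):
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(x))))

net = TinyNet()
x = torch.from_numpy(np.random.rand(8, 20).astype(np.float32))   # NumPy interop
print(net(x).shape)                                              # torch.Size([8, 1])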
PyTorch
Release Dates: September 2016, and December 10, 2020
Category: Machine learning library, deep learning library
Platforms: Cross-platform
License: Berkeley Software Distribution (BSD)
Website: https://pytorch.org
Up to now, we have been talking about frameworks, and the basic purpose of discussing their different features was to show you the differences between them. A beginner may believe that all frameworks are the same, but that is incorrect: each framework has its own specialities, and the difficulty of using them varies. So, if you want to work effectively in your field, you must first learn how to choose the best framework for your task. Keep in mind, these are not the only points you need to know; the relevant parameters change with the complexity of your project.
Not all projects are the same. You do not have to use the same framework every time. You must know more than one framework and choose one according to your needs. For example, for simple tasks, there is no need to use a complex framework or a higher-level neural network. There is versatility in the projects in deep learning, and you have to understand the needs of your project every time before choosing your required framework. As a result, before you begin, you should ask yourself the following questions about the project:
Are you going to use a modern deep learning framework, or are you interested in classic ML algorithms?
What is your preferred programming language for the AI modules?
What hardware and software do you have available for scaling?
Once you know the different features of the frameworks, you may get the answers to all the questions given above.
Machine learning is a vast field, and with the advancement of different techniques, there is a constant need to compare parameters. Different algorithms rely on different parameters, and you must know them all while choosing your framework. Moreover, you must decide whether you are going with classic built-in machine-learning algorithms or want to create your own.
Hence, we learned a lot about deep learning frameworks today. It was an interesting lecture in which we saw a detailed introduction to frameworks and compared TensorFlow, PyTorch, and Keras by discussing several of their features and requirements. We will see all of this discussion in action in the coming lectures. The purpose of this session was to clarify how frameworks work and how they vary, and, in this way, to give you an idea of how deep learning is useful in different ways. Researchers are working actively in deep learning, and that is one of the basic reasons behind the development of different frameworks.