Deep Learning Using Google’s TensorFlow Keras | by Khyati Bareja | Aug, 2024


A computer would deserve to be called intelligent if it could deceive a human into believing that it was human. — Alan Turing

In this section we discuss the basics of neural networks and how deep learning can be done using TensorFlow.

Neural networks are the structures that allow us to perform deep learning, and it is certainly possible for anyone to build and train their own deep learning models to solve real-world problems. This article covers TensorFlow at an introductory level and also introduces Keras, which makes working with TensorFlow considerably easier.

TensorFlow

TensorFlow is Google’s open-source, end-to-end machine learning framework. It can be used to create many different types of machine learning models, but it is particularly powerful when working with neural networks. Originally developed by the Google Brain team, it lets you build state-of-the-art machine learning applications.

TensorFlow is compatible with a wide range of hardware and devices: a TensorFlow model can be trained on and deployed to CPUs, GPUs and TPUs, and training can run on a single machine or on a cluster of machines.
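As a quick illustration of this (a minimal sketch, assuming a standard TensorFlow 2.x install), you can ask TensorFlow which devices it can see on the current machine:

```python
import tensorflow as tf

# List the hardware TensorFlow has detected on this machine.
cpus = tf.config.list_physical_devices("CPU")
gpus = tf.config.list_physical_devices("GPU")

print(f"CPUs: {len(cpus)}, GPUs: {len(gpus)}")
```

If a GPU is available and the GPU build of TensorFlow is installed, operations are placed on it automatically; no code changes are needed.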

TensorFlow Lite: a lightweight version of TensorFlow designed specifically for deploying models to mobile and embedded devices.

Architecture of TensorFlow Lite. Source: “TensorFlow Lite architecture”, available at https://www.researchgate.net/figure/TensorFlow-Lite-architecture-19_fig5_337827030 (accessed 20 August 2024).
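As a rough sketch of the workflow the architecture describes (assuming TensorFlow 2.x; the one-neuron model here is just a placeholder standing in for any trained tf.keras model), converting a model to the TensorFlow Lite format looks like this:

```python
import tensorflow as tf

# Placeholder model standing in for any trained tf.keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1]),
])

# Convert the Keras model into the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# The result is raw bytes, ready to be written to a .tflite file
# and shipped to a mobile or embedded device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The `.tflite` file is then loaded on-device by the TensorFlow Lite interpreter shown in the architecture diagram.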

TensorFlow.js: a JavaScript library for training and deploying models in the browser and on Node.js.

TensorFlow Extended (TFX): an end-to-end platform for deploying machine learning pipelines. TFX can be used to manage machine learning pipelines throughout their lifecycle.

Keras

Keras is an open-source neural network library written in Python. It runs on top of other machine learning frameworks, such as TensorFlow, the Microsoft Cognitive Toolkit (CNTK) and Theano. Keras is a high-level API designed to facilitate fast experimentation with deep neural networks: it is easy to use, modular and extensible. It was designed to be an interface rather than a standalone machine learning framework, and it can run on both CPUs and GPUs.

In 2017, Google’s TensorFlow team decided to support Keras in TensorFlow’s core library. TensorFlow Keras is TensorFlow’s implementation of the Keras API specification (tf.keras). It is a high-level API for building and training models that includes first-class support for TensorFlow-specific functionality, making TensorFlow easier to use without sacrificing flexibility or performance.

Relationship between TensorFlow and Keras

Example of ML using TensorFlow Keras:

Let’s start with a simple machine learning example. Suppose there is a relationship between a set of X and Y values,

which is: Y = 2X - 1

If you spotted it, how did you get there? Maybe you noticed that Y increases by 2 whenever X increases by 1, and that when X was zero, Y was -1, so you figured out that Y = 2X - 1. That is exactly the principle all machine learning algorithms work on. Diving in further, this is the entire code you can use to create a machine learning model that figures out how these numbers match each other.

Sample code illustrating the Machine learning model
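The original code listing did not survive extraction, so here is a minimal sketch of the model the text below describes, assuming tf.keras and six illustrative (X, Y) pairs that satisfy Y = 2X - 1 (the exact training values in the original are not recoverable):

```python
import numpy as np
import tensorflow as tf

# The simplest possible neural network: a single Dense layer with one
# neuron (units=1), taking a single value (X) as its input.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1]),
])

# The loss scores how wrong a guess is; the optimizer makes the next
# guess better.
model.compile(optimizer="sgd", loss="mean_squared_error")

# Six illustrative (X, Y) pairs satisfying Y = 2X - 1.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float).reshape(-1, 1)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

# "Fit the Xs to the Ys, and try this 500 times."
model.fit(xs, ys, epochs=500, verbose=0)

# Ask the model for Y when X = 10; expect something close to,
# but not exactly, 19.
print(model.predict(np.array([[10.0]]), verbose=0))
```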

The first line defines the model itself. A model is a trained neural network, and this example illustrates the simplest possible neural network: a single layer, indicated by the keras.layers.Dense call, and that layer has a single neuron in it, indicated by units=1. We also feed a single value into the neural network, the X value, and have the network predict the Y value for that X; that is why input_shape=[1].

When we compile the model, we specify two functions: the loss and the optimizer.

Functions during compilation of the model
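As a sketch of that step (stochastic gradient descent and mean squared error are common choices for a toy regression like this one, not the only options):

```python
import tensorflow as tf

# One-neuron model from the example.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1]),
])

# loss: scores how far the current guess is from the true Y values.
# optimizer: uses that score to produce a better guess on the next pass.
model.compile(optimizer="sgd", loss="mean_squared_error")
```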

These two functions are the key to the machine learning. The model starts by making a guess about the relationship between the numbers; for example, it might guess that Y = 5X + 5. During training it calculates how good or bad that guess is using the loss function, then uses the optimizer to generate another guess. The logic is that the combination of these two functions will get closer and closer to the correct formula.

In our case, as illustrated above, it goes through this loop 500 times (set by the epochs parameter): making a guess, calculating how accurate that guess is with the loss, and then using the optimizer to improve the guess. The data itself is set up as an array of Xs and Ys, and the process of matching them to each other happens in the model’s fit method. In plain words, we are telling the model: “fit the Xs to the Ys, and try this 500 times.” Once that is done, we have a trained model.

In the last line of code we ask the model to predict the value of Y when X = 10. Given the formula, you might expect the answer to be 19, but it is actually something like 18.9998. Why might that be? Because the model was trained on only six pairs of numbers. Those six look like a straight-line relationship, but the relationship may not be a straight line for values outside them. There is a very high probability that it is a straight line, but it is not certain, and that uncertainty is built into the prediction, so the model returns a value very close to 19 rather than exactly 19.


