Deploying a Machine Learning Model using TensorFlow Serving and Docker


In this tutorial, deploying a machine learning model with TensorFlow Serving is demonstrated. TensorFlow Serving is developed by Google and provides the functionality to serve a trained ML model. For this tutorial, you need some basic knowledge of Docker and to have it installed locally. For this article, I use a trained model in the Protocol Buffers (Protobuf) format, a free and open-source, cross-platform data format used to serialize structured data.

This post is also explained in my YouTube videos: Part 1 and Part 2. Please subscribe to the YouTube channel through this link.



First, we need to pull the TensorFlow Serving Docker image. This requires that you have Docker installed beforehand and that you have a Docker account to pull images from Docker Hub. You can open CMD on your local machine (if you are on Windows), run “docker login”, and enter your credentials.

For serving our trained machine learning model, i.e., here the model for predicting fuel efficiency, we need to pull the image from Docker Hub. You can find the official TensorFlow Serving image here, and you can use the latest tag (the default tag). So, let us write in our console:
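docker pull tensorflow/serving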

You can check the list of your images by writing in your console:
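docker images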

As a result, you should see tensorflow/serving listed as an image.

Now that the Docker image is pulled, let us prepare the materials and the config file locally. Create an ML folder, and inside it create a models folder as well as your models.config file. Inside…
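For reference, models.config is a plain-text protobuf file with one config entry per served model. A minimal sketch is shown below; the model name fuel_efficiency and the paths are illustrative placeholders, not values taken from this article:

model_config_list {
  config {
    name: "fuel_efficiency"
    base_path: "/models/fuel_efficiency"
    model_platform: "tensorflow"
  }
}

Here base_path points at a folder holding one numbered version subfolder (for example, 1/) that contains the exported saved_model.pb. With that layout, the server can be started roughly as follows (Unix-style shell; the source paths are placeholders for your local ML folder):

docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/ML/models,target=/models \
  --mount type=bind,source=/path/to/ML/models.config,target=/models/models.config \
  -t tensorflow/serving \
  --model_config_file=/models/models.config

Port 8501 exposes the REST API, so once the container is running, the model status is reachable at http://localhost:8501/v1/models/fuel_efficiency.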


