
Deploying ML models straight from Jupyter Notebooks

Winter is a time of magic. Everyone is waiting for something special at this time of year, and Data Scientists aren't different. It is not in the power of a software developer to be a magician, but I can help you deploy your models with literally a single command right from your Jupyter notebook (and basically from any place, like your command line or a Python script - check out the deployment).

Sounds like magic? It is!

Streamlit

To get some winter season vibes, let's do some magic ourselves first. Let's do something that will help us prepare some fun for our friends for the weekend.

To do so, we'll create a model that translates lyrics into emojis. With all due respect to recent advances in NLP and LLM algorithms, it's still both easier and more fun to convince your friends to do the backward translation:

[Screenshot: ChatGPT attempting the backward translation]

Ok, I'm sure humans are up to the challenge!

Alright, just before we get into the actual coding: everything described in this blog post is available in this Google Colab notebook. Now, let's get to it!

First, let's load an emoji dataset. We need something to base our model on, right?

Load emoji dataset
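(The actual cell lives in the Colab notebook; as a rough stand-in, here is a minimal sketch that assumes a CSV with an emoji character and a short text description per row - the file name and column names are hypothetical, so adjust them to whatever source the notebook actually uses.)

```python
import pandas as pd

# Hypothetical file and column names -- the Colab notebook may pull the data
# from a different source; all we need is an emoji plus a short description.
emoji_df = pd.read_csv("emoji_dataset.csv")          # e.g. columns: "emoji", "name"
emoji_df = emoji_df[["emoji", "name"]].dropna().reset_index(drop=True)
print(f"{len(emoji_df)} emojis loaded")
```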

The secret sauce to creating our emoji language is using a pre-trained DistilBERT model to tokenize the emoji descriptions and create embeddings that represent our emoji dictionary:

Turn emojis into embeddings
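(A sketch of the idea rather than the notebook's exact cell: with Hugging Face transformers, we tokenize each emoji's description with distilbert-base-uncased and mean-pool the last hidden state into a single vector.)

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
bert = AutoModel.from_pretrained("distilbert-base-uncased")

def embed(texts):
    """Mean-pooled DistilBERT embeddings for a list of strings."""
    tokens = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**tokens).last_hidden_state        # (batch, seq_len, dim)
    mask = tokens["attention_mask"].unsqueeze(-1)        # zero out padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, dim)

# One embedding per emoji, based on its text description
emoji_embeddings = embed(emoji_df["name"].tolist())
```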

We can now embed any word in the same way and replace it with its closest emoji embedding to create our text-to-emoji translator. Using that, "Jingle Bells" should become something like:

Find the closest emoji for each word
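(Again a sketch under the assumptions above: embed every word of the lyrics with the same embed helper and pick the emoji whose description embedding has the highest cosine similarity.)

```python
import torch.nn.functional as F

def lyrics_to_emoji(lyrics: str) -> str:
    """Replace each word with the emoji whose description is closest in embedding space."""
    words = lyrics.lower().split()
    word_embeddings = embed(words)
    # Cosine similarity between each word (rows) and each emoji (columns)
    sims = F.cosine_similarity(
        word_embeddings.unsqueeze(1),   # (n_words, 1, dim)
        emoji_embeddings.unsqueeze(0),  # (1, n_emojis, dim)
        dim=-1,
    )
    closest = sims.argmax(dim=1)        # index of the nearest emoji per word
    return " ".join(emoji_df["emoji"].iloc[int(i)] for i in closest)

print(lyrics_to_emoji("jingle bells jingle bells jingle all the way"))
```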

Good start - it guessed half of the emojis correctly!

Our part of the magic is done; now to the single-command deployment I promised in the beginning. Before we go rogue and deploy it to the cloud, let's run a Streamlit app locally to test things out:

Serving with Streamlit
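(Roughly, this boils down to the mlem.api.save call described below plus a serve command. Whether MLEM accepts a bare Python callable and the exact serve syntax both depend on the MLEM version, so treat this as an approximation and check `mlem serve --help` in your environment.)

```python
from mlem.api import save

# Persist the translator. sample_data lets MLEM infer the input/output schema
# and record the Python packages needed to run the model.
save(lyrics_to_emoji, "lyrics2emoji", sample_data="jingle bells jingle bells")

# Then serve it locally from a notebook cell (or a terminal, without the "!"):
#   !mlem serve streamlit --model lyrics2emoji
# Flag layout is version-dependent; `mlem serve --help` shows the exact form.
```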

What happened here? That innocent-looking mlem.api.save method did the magic of preparing the model to be used: it inspected the model object and found all the Python packages that need to be installed to run it.

Now you should have a Streamlit app at localhost:80 that looks just like this:

Streamlit app

Once we've finished playing around with the model locally, let's cast our final spell for the day and deploy the model to fly.io:

Deployment to fly.io
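(The post doesn't reproduce the command inline; based on MLEM's deployment interface it is roughly of the shape below. The subcommand, flags, and deployment name are assumptions to verify against the MLEM docs, and fly.io deployments need an installed, authenticated flyctl CLI first.)

```python
# Jupyter cell using the "!" shell escape; flags are approximate and
# version-dependent -- check `mlem deployment run --help` before running.
!mlem deployment run flyio lyrics2emoji_deploy --model lyrics2emoji --app_name lyrics2emoji
```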

Some elvish gibberish is printed to the command line, and you get a deployment up and ready. If you didn't follow along, you can just browse to this example deployment: https://lyrics2emoji.fly.dev.

Now, before you go, remember that these powers extend to serving models as REST API applications and Streamlit apps, building Docker images and Python packages, and deploying them to Heroku, Fly.io, Kubernetes, and AWS SageMaker.

Or just go here to get a crash course :)


Original Link: https://dev.to/aguschin/deploying-ml-models-straight-from-jupyter-notebooks-12bh
