April 14, 2022 03:47 pm GMT

Serve machine learning models with Pinferencia

By the time you read this post, you may already know of, or have tried, TorchServe, Triton, Seldon Core, TF Serving, or even KServe. They are good products. However, if your model is not trivially simple, or if it is just one part of a larger codebase, it is not that easy to integrate your code with them.
Here, you have an alternative: Pinferencia (for more tutorials, visit https://pinferencia.underneathall.app/).

GitHub: Pinferencia. If you like it, give it a star.

Install

pip install "pinferencia[uvicorn]"

Quick Start

Serve Any Model

app.py

```python
from pinferencia import Server


class MyModel:
    def predict(self, data):
        return sum(data)


model = MyModel()

service = Server()
service.register(
    model_name="mymodel",
    model=model,
    entrypoint="predict",
)
```
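Before wiring it into the server, you can sanity-check the model class on its own; `predict` simply sums whatever list it receives:

```python
# Same model class as in app.py above
class MyModel:
    def predict(self, data):
        return sum(data)


model = MyModel()
print(model.predict([1, 2, 3]))  # 6
```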

Just run:

uvicorn app:service --reload

Hooray, your service is alive. Go to http://127.0.0.1:8000/ and have fun.
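Behind that page, the server exposes a REST endpoint per registered model. Here is a sketch of the round trip; the `/v1/models/mymodel/predict` path and the `{"data": ...}` payload shape are assumptions based on Pinferencia's default API, so check the generated docs page for the exact schema:

```python
import json

# What a client would POST to
# http://127.0.0.1:8000/v1/models/mymodel/predict
# (path and payload format assumed; no live server needed for this sketch):
request_body = json.dumps({"data": [1, 2, 3]})


# What the registered entrypoint computes server-side:
class MyModel:
    def predict(self, data):
        return sum(data)


result = MyModel().predict(json.loads(request_body)["data"])
print(result)  # 6
```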

You will have a full API documentation page (Swagger UI) to play with, and you can test your model right there via its "Try it out" feature.

Any deep learning models? Just as easy. Simply train or load your model and register it with the service, and it goes live immediately.

Pytorch

```python
import torch

from pinferencia import Server

# train your models
model = "..."

# or load your models (1)
# from state_dict
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))

# entire model
model = torch.load(PATH)

# torchscript
model = torch.jit.load('model_scripted.pt')
model.eval()

service = Server()
service.register(
    model_name="mymodel",
    model=model,
)
```
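Note that the PyTorch registration passes no `entrypoint`. Presumably the server then invokes the registered object itself (this default is an assumption), which suits `nn.Module` instances, since calling one runs its forward pass. A framework-free sketch:

```python
class FakeModule:
    """Stand-in for a torch.nn.Module: calling it runs the forward pass."""

    def __call__(self, data):
        return [x * 2 for x in data]


model = FakeModule()
# Without an entrypoint, the server would invoke model(data) directly
# (assumed default behavior):
print(model([1, 2, 3]))  # [2, 4, 6]
```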

Tensorflow

```python
import tensorflow as tf

from pinferencia import Server

# train your models
model = "..."

# or load your models (1)
# saved_model
model = tf.keras.models.load_model('saved_model/model')

# HDF5
model = tf.keras.models.load_model('model.h5')

# from weights
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')
loss, acc = model.evaluate(test_images, test_labels, verbose=2)

service = Server()
service.register(
    model_name="mymodel",
    model=model,
    entrypoint="predict",
)
```
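The TensorFlow registration, by contrast, names `entrypoint="predict"`, so requests are dispatched to the Keras model's `predict` method. A plausible attribute-lookup dispatch (how Pinferencia resolves the entrypoint internally is an assumption here):

```python
class KerasLikeModel:
    """Stand-in for a Keras model exposing a predict() method."""

    def predict(self, data):
        return [sum(data)]


model = KerasLikeModel()
entrypoint = "predict"

# Resolve the entrypoint by attribute name, then call it:
fn = getattr(model, entrypoint)
print(fn([1, 2, 3]))  # [6]
```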

Models from any framework work the same way. Now run uvicorn app:service --reload and enjoy!


Original Link: https://dev.to/wjiuhe/serve-machine-learning-models-with-pinferencia-25mo
