Serverless deployment
Serve & deploy with Text Generation Inference (TGI)
By using the serverless mode of VESSL Serve, you can launch a fully operational inference server within minutes using a simple configuration file and instances from the VESSL-managed cloud. These instances are billed on an on-demand basis.
In this example, we will use the serverless mode of VESSL Serve to quickly launch a Llama 3 inference server with Text Generation Inference (TGI). This example can be easily adapted to deploy your own model for inference.
What you will do
- Create a new service in serverless mode
- Create a new service revision
- Send an HTTP request to the service
- Get the queued result
Create a new service in serverless mode
Select your organization and click the “Services” tab. Click the “New Service” button on the right side of the “Services” page. This will allow you to set your first service information:
- Name: Set your service name
- Description: You can add an optional description for your service.
- Cluster: The cluster in which your service is physically located.
Then, toggle “Serverless” to enable serverless mode, and click “Create”. Your new service is created, and you are automatically guided to create your first “Revision”, where you set up your container environment.
Create a new service revision
You can configure your container settings on this page.
- Resources: Select the GPU resource (GPU) gpu-l4-small. This means we will be using one NVIDIA L4 GPU and a 42GB RAM instance.
- Container image: We will use a pre-built TGI Docker image. Click the “Custom” button and type ghcr.io/huggingface/text-generation-inference:2.0.2.
- Commands: This is the bash command that runs in the container. Use the command shown below this list.
- Port: This is the open HTTP port for the container. Set the port to HTTP, 8000, and name it default.
- Advanced Options:
  - Variable: You can set environment variables and secret values to be used in the container. Click “Add variables or secrets”, and add the following name/value pair:
    - Name: MODEL_ID
    - Value: casperhansen/llama-3-8b-instruct-awq
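As a reasonable sketch of the launch command (the exact flags in your setup may differ), the TGI launcher picks up MODEL_ID from the environment variable configured above, and --quantize awq matches the AWQ checkpoint:

```bash
# Sketch of a typical TGI launch command; the exact flags may differ.
# MODEL_ID is read from the environment variable configured above.
text-generation-launcher --port 8000 --quantize awq
```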
Click the “Create” button in the right corner. Your first VESSL Serve revision is now created!
Once the revision update is complete, your inference server will be ready to go.
Send an HTTP request to service
You can find your Service overview in the “Overview” Tab.
Click the “Request” button in the upper right to find information on how to send an inference request to your service.
There you can find HTTP request details such as the inference endpoint, the authorization token, and a sample request to the inference server. For more information, please refer to the Serverless API documentation.
Below is an example of Python code for requesting your inference results.
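The endpoint and token below are placeholders; substitute the values shown in the “Request” dialog of your service.

```python
import requests

# Placeholders -- replace with the endpoint and token from the “Request” dialog.
ENDPOINT = "https://your-service.example.com"
TOKEN = "YOUR_AUTH_TOKEN"

# TGI's native /generate API: send a prompt with generation parameters.
response = requests.post(
    f"{ENDPOINT}/generate",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "inputs": "What is machine learning?",
        "parameters": {"max_new_tokens": 128},
    },
)
response.raise_for_status()
print(response.json()["generated_text"])
```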
Since TGI exposes an OpenAI-compatible interface, you can also use the OpenAI Python client to access the server, as shown below.
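A minimal sketch using the official openai package; the base URL and API key are placeholders for your service’s endpoint and token:

```python
from openai import OpenAI

# Placeholders -- point the client at your service endpoint (with /v1 suffix)
# and pass your authorization token as the API key.
client = OpenAI(
    base_url="https://your-service.example.com/v1",
    api_key="YOUR_AUTH_TOKEN",
)

completion = client.chat.completions.create(
    model="tgi",  # TGI serves a single model; the name here is informational
    messages=[{"role": "user", "content": "What is machine learning?"}],
)
print(completion.choices[0].message.content)
```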
When the service is in a cold state (i.e. there are no running replicas because the service has been idle) and a new request is made, a new replica is started immediately.
In that case, the first few requests may be aborted by timeouts until the replica is up and running, so please consult your HTTP client’s timeout configuration.
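For example, a minimal retry sketch with a generous timeout (the endpoint and token are placeholders, as above):

```python
import requests

# Placeholders -- substitute your service endpoint and token.
ENDPOINT = "https://your-service.example.com"
HEADERS = {"Authorization": "Bearer YOUR_AUTH_TOKEN"}

# Retry a few times with a generous timeout so a cold start can complete.
for attempt in range(5):
    try:
        response = requests.post(
            f"{ENDPOINT}/generate",
            headers=HEADERS,
            json={"inputs": "Hello!", "parameters": {"max_new_tokens": 32}},
            timeout=120,  # allow time for a replica to spin up
        )
        response.raise_for_status()
        print(response.json()["generated_text"])
        break
    except (requests.Timeout, requests.HTTPError):
        continue  # replica may still be starting; try again
```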
Make an asynchronous request
Sometimes you may want to have requests processed asynchronously, for example:
- to process a large amount of data in a batch, where it is infeasible or inefficient to make requests one-by-one;
- to ensure all requests are processed eventually, and not be interrupted due to network timeouts;
- where the caller has the capability to periodically poll for results, and immediate HTTP response is not a requirement.
To make asynchronous requests, you will use a different pair of APIs: one to create the request and another to fetch the result.
First, create an asynchronous request:
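The exact routes and payload shapes are defined in the Serverless API documentation; the sketch below uses hypothetical /async paths and response fields as placeholders:

```python
import requests

# Placeholders -- substitute your service endpoint and token.
ENDPOINT = "https://your-service.example.com"
HEADERS = {"Authorization": "Bearer YOUR_AUTH_TOKEN"}

# Create the asynchronous request; the route and "id" field are hypothetical.
created = requests.post(
    f"{ENDPOINT}/async/generate",  # hypothetical route
    headers=HEADERS,
    json={"inputs": "What is machine learning?"},
)
created.raise_for_status()
request_id = created.json()["id"]  # hypothetical response field
```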
Then, you can periodically poll for the result.
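Continuing the sketch above (the route and response fields are again hypothetical placeholders for the documented API):

```python
import time

# Poll until the hypothetical status field reports completion.
while True:
    result = requests.get(
        f"{ENDPOINT}/async/requests/{request_id}",  # hypothetical route
        headers=HEADERS,
    )
    result.raise_for_status()
    body = result.json()
    if body.get("status") == "completed":  # hypothetical status field
        print(body["output"])  # hypothetical result field
        break
    time.sleep(5)  # wait before polling again
```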
Using our web interface
You can repeat the same process on the web. Head over to your organization, select the “Services” tab, and create a new service.