VESSL — Control plane for machine learning and computing
VESSL provides a unified interface for training and deploying AI models on the cloud. Simply define your GPU resources and point to your code and dataset. VESSL handles the orchestration and heavy lifting for you:
Create a GPU-accelerated container with the right Docker image.
Mount your code and dataset from GitHub, Hugging Face, Amazon S3, and more.
Launch the workload on our fully managed GPU cloud.
On any cloud, at any scale
Instantly scale workloads across multiple clouds.
Streamlined interface
Launch any AI workload with a unified YAML definition.
VESSL abstracts away the complex infrastructure and backends inherent in launching AI workloads into a simple YAML file, so you don’t have to wrestle with AWS, Kubernetes, or Docker. Here’s an example that launches a chatbot app for Llama 3.2.
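A minimal sketch of such a Run definition is shown below. The field names, image tag, cluster, and preset values are illustrative assumptions, not a verified schema; consult the VESSL documentation for the exact keys your version accepts.

```yaml
# Illustrative sketch of a VESSL Run YAML for a Llama 3.2 chatbot.
# Field names and values are assumptions; verify against the VESSL docs.
name: llama-3-2-chatbot
image: quay.io/vessl-ai/torch:2.3.1-cuda12.1   # hypothetical image tag
resources:
  cluster: vessl-gcp-oregon                    # hypothetical managed cluster
  preset: gpu-l4-small                         # hypothetical GPU preset
import:
  /model/: hf://huggingface.co/meta-llama/Llama-3.2-1B-Instruct
  /code/:
    git:
      url: github.com/your-org/llama-chatbot   # hypothetical repo
run:
  - workdir: /code
    command: streamlit run app.py
ports:
  - 8501
```

A definition like this would be submitted with the VESSL CLI (for example, `vessl run create -f run.yaml`), and VESSL provisions the container, mounts the model and code, and starts the workload.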
With every YAML file, you create a VESSL Run. A VESSL Run is the atomic unit of VESSL: a single Kubernetes-backed AI workload. You can reuse the same YAML definition as you progress through the AI lifecycle, from checkpointing models during fine-tuning to exposing ports for inference.
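For instance, the lifecycle stages mentioned above differ only in a few fields of the same definition. The sketch below is hypothetical; `export` and `ports` are shown as plausible keys for checkpoint output and inference exposure, and should be checked against the current VESSL schema.

```yaml
# Illustrative only: the same Run definition adapted across the lifecycle.
# Key names (`export`, `ports`) are assumptions; verify with the VESSL docs.

# Fine-tuning: write checkpoints out of the container.
run:
  - command: python finetune.py --output-dir /output
export:
  /output/: vessl-artifact://my-org/llama-checkpoints   # hypothetical path

# Inference: expose the serving port instead.
# run:
#   - command: python serve.py --port 8000
# ports:
#   - 8000
```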