This quickstart deploys a barebone GPU-accelerated workload: a Python container that prints Hello, world!. It illustrates the basic components of a single Run and how to deploy one.

“Terminate” stops runs, revisions, and workspaces. To avoid spending credits on idle workloads, remember to “Terminate” anything you no longer need.

What you will do

  • Launch a GPU-accelerated workload
  • Set up a runtime for your model
  • Mount a Git codebase
  • Run a simple command

Installation & setup

After creating a free account at VESSL AI, install our Python package and authenticate your account. Then set your primary Organization and Project, and let’s get going.

pip install --upgrade vessl
vessl configure
If you encounter the error ModuleNotFoundError: No module named 'packaging', please run the command pip install packaging.

Writing the YAML

Launching a workload on VESSL AI begins with writing a YAML file. Our quickstart YAML has four parts:

  • Compute resources, typically specified in terms of GPUs, defined under resources
  • A runtime environment that points to a Docker image, defined under image
  • Input & output for code, datasets, and others, defined under import & export
  • Run commands executed inside the workload, defined under run

Let’s start by creating quickstart.yaml and defining the key-value pairs one by one.
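
Before filling in the details, here’s a skeleton of the structure we’ll end up with; each top-level key is defined one step at a time below.

resources:  # compute specs for the workload
image:      # Docker image that defines the runtime
import:     # code, datasets, and other inputs
run:        # commands to execute inside the workload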

1. Spin up a compute instance

resources defines the hardware specs you will use for your run. Here’s an example that uses our managed cloud to launch an L4 instance.

You can see the full list of compute options and their string values for preset under Resources. Later, you can add your own private cloud or on-premises clusters and launch workloads on them simply by changing the value of cluster, as sketched below.

resources:
  cluster: vessl-gcp-oregon
  preset: gpu-l4-small
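
For example, once you’ve registered your own cluster, the same block might look like the sketch below. The cluster name here is a hypothetical placeholder, and the exact resource fields available can differ between managed presets and private clusters.

resources:
  cluster: my-onprem-cluster  # hypothetical: the name of your registered cluster
  preset: gpu-l4-small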

2. Configure a runtime environment

VESSL AI uses Docker images to define a runtime. We provide a set of base images as a starting point, but you can also bring your own custom Docker images by referencing hosted images on Docker Hub or Red Hat Quay, as sketched below.

You can find the full list of our base images and their dependencies here. In this example, we’ll use our go-to PyTorch image, built with PyTorch 2.3.1 and CUDA 12.1.

resources:
  cluster: vessl-gcp-oregon
  preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.3.1-cuda12.1-r5
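
To bring your own image instead, reference it by its full registry path. As a sketch, swapping in the official PyTorch image from Docker Hub could look like the line below; the tag shown is a published upstream tag that matches the same PyTorch and CUDA versions, but any tag that fits your needs works.

image: pytorch/pytorch:2.3.1-cuda12.1-cudnn8-runtime  # a public Docker Hub image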

3. Mount a GitHub repository

Under import, you can mount models, codebases, and datasets from sources like GitHub, Hugging Face, Amazon S3, and more.

In this example, we are mounting a GitHub repo to the /code/ folder in our container. You can switch to a different version of the code by changing ref to a specific branch such as dev, as sketched below.

resources:
  cluster: vessl-gcp-oregon
  preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.3.1-cuda12.1-r5
import:
  /code/:
    git:
      url: https://github.com/vessl-ai/hub-model
      ref: main
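
For example, to follow a hypothetical dev branch instead of main, you would only change ref:

import:
  /code/:
    git:
      url: https://github.com/vessl-ai/hub-model
      ref: dev  # hypothetical branch name; this quickstart uses main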

4. Write a run command

Now that we’ve defined the compute resources and the runtime environment for our workload, let’s run main.py.

We can do this by defining command under run and specifying the working directory with workdir.

resources:
  cluster: vessl-gcp-oregon
  preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.3.1-cuda12.1-r5
import:
  /code/:
    git: 
      url: https://github.com/vessl-ai/hub-model
      ref: main
run:
  - command: |
      python main.py
    workdir: /code/quickstart
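
Since command is a YAML block scalar, you can also chain multiple shell commands in a single step. As a sketch, installing dependencies before the script runs might look like this; requirements.txt is a hypothetical file, not one we know ships with the repo.

run:
  - command: |
      pip install -r requirements.txt  # hypothetical setup step
      python main.py
    workdir: /code/quickstart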

5. Add metadata

Finally, let’s polish up by giving our Run a name and description. Here’s the completed quickstart.yaml:

name: Quickstart
description: A barebone GPU-accelerated workload
resources:
  cluster: vessl-gcp-oregon
  preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.3.1-cuda12.1-r5
import:
  /code/:
    git: 
      url: https://github.com/vessl-ai/hub-model
      ref: main
run:
  - command: |
      python main.py
    workdir: /code/quickstart

Running the workload

Now that we have a completed YAML, we can launch the workload with vessl run create.

vessl run create -f quickstart.yaml

Running the command verifies your YAML and shows you the current status of the workload. Click the output link in your terminal to see the full details and real-time logs of the Run on the web, including the result of the run command.

Behind the scenes

When you run vessl run create, VESSL AI performs the following, as defined in quickstart.yaml:

  1. Launch an empty Python container on the cloud with 1 NVIDIA L4 Tensor Core GPU.
  2. Configure the runtime with our PyTorch 2.3.1 image (CUDA 12.1).
  3. Mount a GitHub repo and set the working directory.
  4. Execute main.py and print Hello, world!.

Using our web interface

You can repeat the same process on the web. Head over to your Organization, select a project, and create a New run.

What’s next?

Now that you’ve run a barebone workload, continue with our guide to launch a Jupyter server and host a web app.