Overview
VESSL Cluster enables seamless scaling of containerized ML workloads from a personal laptop to cloud instances or Kubernetes-backed on-premise clusters.
While VESSL comes with a fully managed AWS cloud out of the box, you can also integrate an unlimited number of (1) personal Linux machines, (2) on-premise GPU servers, and (3) private clouds. You can then use VESSL as a single point of access to all of these clusters.
VESSL Cluster simplifies the end-to-end management of large-scale, organization-wide ML infrastructure, from integration to monitoring. The following features are available under Cluster.
- Single-command integration — Set up a hybrid or multi-cloud infrastructure with a single command (see the sketch after this list).
- GPU-accelerated workloads — Run training, optimization, and inference tasks on GPUs in seconds.
- Resource optimization — Match and scale workloads automatically based on the required compute resources.
- Cluster dashboard — Monitor real-time usage, incidents, and health status of clusters down to each node.
- Reproducibility — Record runtime metadata such as hardware and instance specifications.
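As a rough illustration of the single-command integration flow, the minimal Python sketch below shells out to the VESSL CLI to register a machine as a cluster node. The command name and flags used here (`vessl cluster create`, `--name`) are assumptions for illustration only, not the documented syntax; consult the CLI reference for the exact command your cluster type requires.

```python
import subprocess

# Hypothetical example: register a machine with VESSL by invoking the CLI
# from Python. The subcommand and flags below are assumptions for
# illustration; run `vessl --help` for the actual syntax.
result = subprocess.run(
    ["vessl", "cluster", "create", "--name", "onprem-gpu-01"],
    capture_output=True,
    text=True,
    check=False,  # inspect the return code ourselves instead of raising
)

if result.returncode == 0:
    print("Cluster integration command succeeded:")
    print(result.stdout)
else:
    print("Cluster integration command failed:")
    print(result.stderr)
```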