Tutorials

In this section you'll find tutorials on how to deploy a selection of models using Verda serverless containers.

  • Quick: Deploy with vLLM

  • Quick: Migrate from Runpod

  • Quick: Deploying GPT-OSS 120B (Ollama) on Serverless Containers

  • In-Depth: Deploy with Text Generation Inference (TGI)

  • In-Depth: Deploy with SGLang

  • In-Depth: Deploy with vLLM

  • In-Depth: Deploy with Replicate Cog

  • In-Depth: Asynchronous Inference Requests with Whisper

  • Tutorial: How to Publish Your First Docker Image to Docker Hub

© Verda Cloud Oy