# Tutorials

In this section, you'll find tutorials on how to deploy a selection of models using Verda serverless containers.

* [Quick Tutorial: Deploy with vLLM](https://docs.verda.com/containers/tutorials/deploy-with-vllm-quick)
* [In-Depth: Deploy with Text Generation Inference](https://docs.verda.com/containers/tutorials/deploy-with-tgi-indepth)
* [In-Depth: Deploy with SGLang](https://docs.verda.com/containers/tutorials/deploy-with-sglang-indepth)
* [In-Depth: Deploy with vLLM](https://docs.verda.com/containers/tutorials/deploy-with-vllm-indepth)
* [In-Depth: Deploy with Replicate Cog](https://docs.verda.com/containers/tutorials/deploy-with-replicate-cog-indepth)
* [In-Depth: Asynchronous Inference Requests with Whisper](https://docs.verda.com/containers/tutorials/async-whisper)
