In-Depth: Asynchronous Inference Requests with Whisper

In this tutorial, we will deploy a container with openai/whisper-large-v3-turbo and demonstrate how to send asynchronous inference requests when communicating with the model. Whisper is a popular model for automatic speech recognition (ASR) and speech translation.

You can find more information about the model itself from the Hugging Face model hub.

We will create a simplified container image that hosts Whisper using Python 3.12, FastAPI, Uvicorn, and the Hugging Face Transformers package.

This tutorial also includes an optional step to send inference results to a webhook, and for this option we use webhook.site.

Prerequisites

For this example you need a Python environment running on your local machine, a Docker (or Docker-compatible) container runtime installed on your computer, a container registry to store the image we create, and a Verda cloud account to create a deployment.

Python environment

We are using Python version 3.12 for this tutorial. You can set up your Python environment as you see fit, however we are using venv combined with bash shell for this example.

Container Registry

You will need a container registry to store the container image. You can use any container registry you prefer. In this example we use GitHub Container Registry. You can find more information about GitHub Container Registry from the official GitHub documentation.

For the sake of our example, we will use the placeholder GitHub registry URL ghcr.io/username/container-image. In the examples, remember to replace this with your own GitHub registry URL.

Please make sure that you have credentials to log in to your registry. You can log in to the GitHub Container Registry by typing the following command:

docker login <registry-url> -u <registry-username>

Create a container image

Next we will create a container image out of our inference service.

Create a webhook for uploading (optional)

This step is optional; you can skip it if you don't want to upload the inference result using a webhook.

First visit webhook.site. We will use the site to demonstrate how to send the inference result to a webhook. You will get a webhook URL from the site, which looks something like this: https://webhook.site/5bdbe974-713f-4b92-89ea-acb79be5b68f. Save this for later, as we'll send our inference result to this URL.

Note that you can also set up your own webhook for uploading the inference results and host it as you please; however, that is not part of this tutorial.

Inference service container image

Next we will create a container image. Please create a folder named whisper-example-mp3 and save the following files in it, starting with Dockerfile:
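Below is a minimal sketch of such a Dockerfile; the base image, the ffmpeg dependency, and the CMD line are illustrative choices, so adapt them to your own setup:

```dockerfile
# Minimal sketch of a Dockerfile for the Whisper inference service.
FROM python:3.12-slim

# ffmpeg is needed by the transformers ASR pipeline to decode mp3 audio.
RUN apt-get update && \
    apt-get install -y --no-install-recommends ffmpeg && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY main.py .

# Serve the FastAPI app with Uvicorn on port 8989 (the port exposed in the deployment).
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8989"]
```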

Next we create a requirements.txt file with the following entries:
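A minimal, unpinned package list could look like the following; pin versions as needed for reproducible builds:

```
fastapi
uvicorn
transformers
torch
requests
```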

Next, please create main.py, containing the following Python implementation. Notice that you'll need the URL from webhook.site, should you want to upload the results of the inference to a webhook. Look for the comment in the generate_webhook function.
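Here is a minimal sketch of such a service. The endpoint paths (/generate, /generate-webhook), the audio_url request field, and the WEBHOOK environment variable are illustrative choices for this tutorial rather than a fixed API:

```python
# main.py -- minimal sketch of the Whisper inference service.
import os
from typing import Dict

import requests
import torch
from fastapi import BackgroundTasks, FastAPI
from fastapi.responses import JSONResponse
from transformers import pipeline

app = FastAPI()

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the Whisper ASR pipeline once at startup.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
    torch_dtype=torch.float16 if device.startswith("cuda") else torch.float32,
    device=device,
)


def transcribe(audio_url: str) -> str:
    # The pipeline accepts a local path or a public URL to an audio file.
    output = asr(audio_url, return_timestamps=True)
    return output["text"]


@app.get("/health")
async def health() -> Dict:
    # Used by the health check configured in the deployment settings.
    return {"status": "ok"}


@app.post("/generate")
async def generate(body: Dict) -> Dict:
    # Fully asynchronous scenario: the platform queues this request and
    # stores the response for later retrieval.
    return {"text": transcribe(body["audio_url"])}


def run_and_upload(audio_url: str, webhook_url: str) -> None:
    # Background job: run the inference and POST the result to the webhook.
    text = transcribe(audio_url)
    requests.post(webhook_url, json={"text": text}, timeout=30)


@app.post("/generate-webhook")
async def generate_webhook(body: Dict, background_tasks: BackgroundTasks) -> JSONResponse:
    # Partially asynchronous scenario: return immediately and let a background
    # task upload the result to the webhook.
    # Set the WEBHOOK environment variable on the deployment to your own
    # https://webhook.site/... URL (or hardcode it here for testing).
    webhook_url = os.environ.get("WEBHOOK", "")
    background_tasks.add_task(run_and_upload, body["audio_url"], webhook_url)
    return JSONResponse({"status": "started"})
```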

Next, run the following command to build the container image:
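For example, from inside the whisper-example-mp3 folder:

```bash
docker build -t whisper-example-mp3 .
```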

This step will use the configuration defined in the Dockerfile to create the container image and store it in your local image store. The step can take quite some time to complete.

Push the container image to a remote container registry

When the previous step has completed, you should see the container image in your local container registry. To verify, please run:
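```bash
docker images
```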

You should see something similar to this (this may be different, if you used a different folder name).
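For example, with placeholder values:

```
REPOSITORY            TAG       IMAGE ID       CREATED        SIZE
whisper-example-mp3   latest    <image-id>     <created>      <size>
```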

Next, tag the image and push it to your remote container registry. We do not support pulling containers with the :latest tag in order to make sure that all deployments are consistent. Please make sure you use distinct tags for your container updates.
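For example, using the :1 tag and the placeholder registry URL from above:

```bash
docker tag whisper-example-mp3 ghcr.io/username/whisper-example-mp3:1
docker push ghcr.io/username/whisper-example-mp3:1
```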

This will push the container image to your remote registry. Uploading the image to the container registry can take some time, depending on your network connection.

Create the deployment

Next, as part of this example, we will deploy the image we created earlier on the General Compute (24 GB VRAM) GPU type.

  1. Log in to the Verda cloud console

  2. Create a new project or use an existing one, then open the project

  3. On the left you'll see a navigation menu. Go to Containers -> New deployment. Name your deployment and select the General Compute Type.

  4. Set Container Image to point to the repository where you pushed the image you created earlier, for example ghcr.io/username/whisper-example-mp3:1

  5. You can use the Public option for your image if you pushed the image to a public repository. Use the Private option if you have a private registry, paired with credentials.

  6. Make sure your preferred tag is selected

  7. Set the Exposed HTTP port to 8989

  8. Set the Healthcheck port to 8989

  9. Set Health Check to /health

  10. Make sure Start Command is off

  11. (Optional) If you want to test webhook functionality, please add an environment variable WEBHOOK pointing to your webhook URL.

  12. Deploy container

(You can leave the Scaling options at their default values for now)

That's it! You have now created a deployment. You can check the logs of the deployment from the logs tab. This will take a few minutes to complete.

Accessing the deployment

Before you can connect to the endpoint, you will need to generate an authentication token by going to Keys -> Inference API Keys and clicking Create.

The base endpoint URL for your deployment is in the Containers API section in the top left of the screen. This will be in the form of: https://containers.datacrunch.io/<NAME-OF-OUR-DEPLOYMENT>/

Test Deployment

Once the deployment has been created and is ready to accept requests, you can test that it responds correctly by sending a /health request to the endpoint. Below is an example cURL command for running your test deployment:
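A sketch of such a request, assuming the API key is passed as a Bearer token in the Authorization header:

```bash
curl -H "Authorization: Bearer <YOUR-INFERENCE-API-KEY>" \
  https://containers.datacrunch.io/<NAME-OF-OUR-DEPLOYMENT>/health
```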

Notice the subpath /health added to the base endpoint URL.

This should return a status ok response:
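With the example main.py above, the response looks like:

```json
{"status": "ok"}
```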

After /health returns ok, we are ready to send inference requests to the model.

Sending asynchronous inference requests

Enabling asynchronous inference with Verda cloud is done by using the Prefer and X-Inference-Id headers. The inference services recognize three values for the Prefer header:

  • Prefer: respond-async

  • Prefer: respond-async-proxy

  • Prefer: respond-async-container

These values and their functionalities are explained in more detail here. In this example we will use two of the possible options to address two asynchronous inference scenarios.

The X-Inference-Id header can be set by the client when sending an inference request, should they want to use an identifier of their own; if omitted, the inference services will create one. More about this header later in the tutorial.

Generate text from audio

Navigate to your project directory, create a new virtual environment, and run the commands below:
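Using venv with a bash shell, as mentioned above:

```bash
python3 -m venv .venv
source .venv/bin/activate
```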

You may also need to install some required packages:
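The example client scripts below use the requests library:

```bash
pip install requests
```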

In the same folder, create a new file named inference.py and add the following code:
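A minimal sketch of such a script; the /generate endpoint path, the audio_url body field, and the Bearer-token authorization follow the example service above and are assumptions rather than a fixed API:

```python
# inference.py -- minimal sketch of sending an asynchronous inference request.
import json

import requests

BASE_URL = "https://containers.datacrunch.io/<NAME-OF-OUR-DEPLOYMENT>"
API_KEY = "<YOUR-INFERENCE-API-KEY>"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
    # Ask the platform to handle the request asynchronously and store the result.
    "Prefer": "respond-async",
    # Optionally provide your own identifier; if omitted, one is generated for you.
    # "X-Inference-Id": "my-own-identifier",
}

# Replace with a publicly reachable URL to the mp3 file you want to transcribe.
body = {"audio_url": "<URL-TO-AN-MP3-FILE>"}

response = requests.post(f"{BASE_URL}/generate", headers=headers, json=body, timeout=60)

print(response.status_code)
print(json.dumps(response.json(), indent=2))
print("X-Inference-Id:", response.headers.get("X-Inference-Id"))
```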

After you have saved the Python script to a file, execute it:
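```bash
python inference.py
```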

The output you'll see is similar to the example below:
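An illustrative response shape with placeholder values (the actual Id and paths will differ):

```json
{
  "Id": "<your-inference-id>",
  "StatusPath": "<path-for-querying-the-inference-status>",
  "ResultPath": "<path-for-fetching-the-inference-result>"
}
```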

In the result, Id is the asynchronous inference id, which will also be found in the response headers as X-Inference-Id. This header is needed to identify the inference request when requesting its status or results. StatusPath contains the path from which to request the inference status, and ResultPath is the path from which to fetch the results of the inference request.

Next we will check the status of the inference. When requesting the status of an inference request you must provide an identifier for the inference request that you want to access. This is done by setting the X-Inference-Id header to the value you received in the response JSON as Id, or the one you received in the response headers as X-Inference-Id.

Save the following file to disk as status.py. Notice the X-Inference-Id variable; set this to your X-Inference-Id.
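A minimal sketch; the exact status URL is an assumption, so use the StatusPath value returned by your inference request:

```python
# status.py -- minimal sketch of checking the status of an asynchronous inference.
import json

import requests

BASE_URL = "https://containers.datacrunch.io/<NAME-OF-OUR-DEPLOYMENT>"
API_KEY = "<YOUR-INFERENCE-API-KEY>"

# Set this to the Id / X-Inference-Id returned by inference.py.
X_INFERENCE_ID = "<YOUR-X-INFERENCE-ID>"
# Set this to the StatusPath value returned by inference.py.
# (If StatusPath is returned as a full URL, use it directly instead of appending it.)
STATUS_PATH = "<STATUS-PATH-FROM-THE-RESPONSE>"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "X-Inference-Id": X_INFERENCE_ID,
}

response = requests.get(f"{BASE_URL}{STATUS_PATH}", headers=headers, timeout=60)
print(json.dumps(response.json(), indent=2))
```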

After saving, run the following (remember to replace the X-Inference-Id value with your current inference id):
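```bash
python status.py
```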

This script will output the following:
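An illustrative example with placeholder values:

```json
{
  "Id": "<your-inference-id>",
  "Error": "",
  "Status": 2
}
```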

Here Id is again the identifier, Error contains an error text if the inference resulted in an error, and Status is one of the following:

  • 0 means the inference has been initialized

  • 1 means the inference request has been sent to the queue

  • 2 means the inference request has been received from the queue and delivered to the actual workload container

  • 3 means the workload has completed and the result is ready for fetching

If your status is not yet 3, it means the workload is still in progress. Wait for a short period and run python status.py again, until you receive a status of 3.
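The output will then look similar to this (again with placeholder values):

```json
{
  "Id": "<your-inference-id>",
  "Error": "",
  "Status": 3
}
```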

Now our inference has completed and we are ready to fetch the results. Save the following file to disk as result.py. Again, notice that you need to set the identifier header:
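A minimal sketch along the same lines; the exact result URL is an assumption, so use the ResultPath value returned by your inference request:

```python
# result.py -- minimal sketch of fetching the result of an asynchronous inference.
import json

import requests

BASE_URL = "https://containers.datacrunch.io/<NAME-OF-OUR-DEPLOYMENT>"
API_KEY = "<YOUR-INFERENCE-API-KEY>"

# Set this to the Id / X-Inference-Id of your inference request.
X_INFERENCE_ID = "<YOUR-X-INFERENCE-ID>"
# Set this to the ResultPath value returned by inference.py.
RESULT_PATH = "<RESULT-PATH-FROM-THE-RESPONSE>"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "X-Inference-Id": X_INFERENCE_ID,
}

response = requests.get(f"{BASE_URL}{RESULT_PATH}", headers=headers, timeout=60)
print(json.dumps(response.json(), indent=2))
```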

This will return the text generated by Whisper. It will look similar to the following:
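With the example /generate endpoint sketched above, the stored result has this shape (the transcription itself depends on your audio file):

```json
{
  "text": "<the transcription of your mp3 file>"
}
```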

This concludes the first part of our tutorial on how to run asynchronous inference requests.

Upload the generated result to a webhook

In the tutorial above we sent a fully asynchronous inference request, where access to the inference status and result is provided by Verda and we access them directly using the API. However, there might be a scenario where you want your container to do asynchronous work, but you want to send synchronous requests to the container, or you just don't want to save the status and result to Verda systems.

The next example shows how to utilize a partially asynchronous workflow, where we send a synchronous request to the inference container, which triggers an asynchronous operation that uploads the results of the inference to a webhook while returning a status indicator that the operation has started.

In our example of an inference service above (the main.py file we saved earlier), you'll find a function that looks like async def generate_webhook(body: Dict, background_tasks: BackgroundTasks) -> JSONResponse:

Save the following file to disk as inference_webhook.sh:
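A minimal sketch; the /generate-webhook path and the audio_url field follow the example service above, and the use of Prefer: respond-async-container for this container-driven scenario is an assumption:

```bash
#!/usr/bin/env bash
# inference_webhook.sh -- trigger the webhook-uploading endpoint of the example service.
curl -X POST "https://containers.datacrunch.io/<NAME-OF-OUR-DEPLOYMENT>/generate-webhook" \
  -H "Authorization: Bearer <YOUR-INFERENCE-API-KEY>" \
  -H "Content-Type: application/json" \
  -H "Prefer: respond-async-container" \
  -d '{"audio_url": "<URL-TO-AN-MP3-FILE>"}'
```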

Running the above command should return the following once the request has been received by your endpoint:
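With the example generate_webhook function above, the acknowledgement looks like:

```json
{"status": "started"}
```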

The model output should then be available for your webhook endpoint after completion.
