# FLUX.1 \[dev]

## Overview

FLUX.1 \[dev] is a state-of-the-art 12 billion parameter text-to-image model developed by [Black Forest Labs (BFL)](https://bfl.ai/announcements/24-08-01-bfl). It is the open-weight, guidance-distilled version of BFL’s flagship [FLUX.1 \[pro\]](https://bfl.ai/models/flux-pro) model, designed to deliver similar high-quality outputs and strong prompt adherence while being more efficient.

As an open model, FLUX.1 \[dev] has quickly become a go-to choice for AI artists and developers due to its exceptional detail, style diversity, and complex scene generation capabilities. Notably, it excels at following complex prompts and producing anatomically accurate details (even notoriously tricky elements like hands and faces).

FLUX.1 \[dev] supports both **text-to-image** and **image-to-image** generation, enabling users to create images from scratch or transform existing images based on a text prompt.

### Getting started

Before generating images, make sure your account is ready to use the Inference API. Follow the [Getting Started](https://docs.verda.com/inference/getting-started) guide to create an account and top up your balance.

### Authorization

To access and use these API endpoints, authorization is required. Please visit our [Authorization page](https://docs.verda.com/inference/authorization) for detailed instructions on obtaining and using a bearer token for secure API access.

## Generating images

### Parameters

#### `prompt` (string)

The text description to generate an image from.\
This is the core input that drives the output.

***

#### `size` (string)

The resolution of the output image in `"width*height"` format.\
**Default:** `"1024*1024"`

***

#### `num_inference_steps` (integer)

How many denoising steps to run during generation.\
More steps may improve quality, at the cost of speed.\
**Default:** `28`

***

#### `seed` (integer)

A seed value for reproducibility.\
The same seed + prompt + model version = same image.\
Use `-1` to randomize.\
**Default:** `-1`
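
To reproduce an image exactly, send the same request body with a fixed `seed`. A minimal example payload (the prompt here is illustrative):

```json
{
  "input": {
    "prompt": "a lighthouse at dusk, oil painting",
    "size": "1024*1024",
    "seed": 42
  }
}
```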

***

#### `guidance_scale` (float)

CFG (Classifier-Free Guidance) controls how closely the image follows the prompt.\
Higher values = stronger prompt adherence, but may reduce creativity.\
**Default:** `3.5`

***

#### `num_images` (integer)

How many images to generate per request.\
**Default:** `1`

***

#### `enable_safety_checker` (boolean)

If enabled, content will be checked for safety violations.\
**Default:** `true`

***

#### `output_format` (string)

The file format of the generated image.\
**Possible values:** `"jpeg"`, `"png"`, `"webp"`\
**Default:** `"jpeg"`

***

#### `output_quality` (integer)

Applies to `"jpeg"` and `"webp"` formats.\
Defines compression quality.\
**Range:** `1–100`\
**Default:** `95`

***

#### `enable_base64_output` (boolean)

If `true`, the API will return the image as a base64-encoded string in the response.\
**Default:** `false`
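When `enable_base64_output` is `true`, the image arrives as a base64-encoded string inside the response JSON. A minimal sketch for decoding and saving it, assuming the encoded image is returned under an `output` key (check the actual response shape for your deployment and adjust the key accordingly):

```python
import base64

def save_base64_image(response_json: dict, filename: str) -> int:
    """Decode a base64-encoded image from a response dict and write it to disk.

    Assumes the encoded image sits under an "output" key; adjust the key
    to match the actual response shape. Returns the number of bytes written.
    """
    image_bytes = base64.b64decode(response_json["output"])
    with open(filename, "wb") as f:
        f.write(image_bytes)
    return len(image_bytes)

# Stand-in payload for illustration; a real response would carry actual image data.
fake_response = {"output": base64.b64encode(b"\x89PNG...").decode("utf-8")}
save_base64_image(fake_response, "result.jpeg")
```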

### Text to image

{% tabs %}
{% tab title="cURL" %}

```bash
curl --request POST "https://inference.datacrunch.io/flux-dev/predict" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer <your_api_key>" \
--data '{
    "input": {
        "prompt": "a scientist racoon eating icecream in a datacenter",
        "num_inference_steps": 50,
        "enable_base64_output": true
    }
}'
```

{% endtab %}

{% tab title="Python" %}

```python
import requests

token = "<your_api_key>"  # Replace with your actual key
bearer_token = f"Bearer {token}"

url = "https://inference.datacrunch.io/flux-dev/predict"
headers = {
    "Content-Type": "application/json",
    "Authorization": bearer_token
}
data = {
    "input": {
        "prompt": "a scientist racoon eating icecream in a datacenter",
        "num_inference_steps": 50,
        "enable_base64_output": True
    }
}

response = requests.post(url, headers=headers, json=data)
print(response.json())
```

{% endtab %}

{% tab title="JavaScript" %}

```javascript
const axios = require('axios');

const token = '<your_api_key>'; // Replace with your actual token
const url = 'https://inference.datacrunch.io/flux-dev/predict';
const headers = {
  'Content-Type': 'application/json',
  'Authorization': `Bearer ${token}`
};
const data = {
  input: {
    prompt: 'a scientist raccoon eating ice cream in a datacenter',
    num_inference_steps: 50,
    enable_base64_output: true
  }
};

axios
  .post(url, data, { headers: headers })
  .then((response) => {
    console.log(response.data);
  })
  .catch((error) => {
    console.error('Error:', error);
  });
```

{% endtab %}
{% endtabs %}

### Image to image

Image-to-image requests use the same endpoint and add two fields to `input`: `image`, the source image as a base64-encoded string, and `strength`, a float between 0 and 1 that controls how far the result may deviate from the source image (higher values allow larger changes).

{% tabs %}
{% tab title="cURL" %}

```bash
# Encode the image
INPUT_BASE64=$(base64 -i cats.png)

# Write JSON payload to a file
cat > payload.json <<EOF
{
  "input": {
    "prompt": "Three cats wearing detailed astronaut suits inside a space shuttle",
    "num_inference_steps": 50,
    "guidance_scale": 7.5,
    "strength": 0.7,
    "image": "${INPUT_BASE64}",
    "enable_base64_output": true
  }
}
EOF

# Send the request
curl --request POST "https://inference.datacrunch.io/flux-dev/predict" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <your_api_key>" \
  --data @payload.json
```

{% endtab %}

{% tab title="Python" %}

```python
import requests
import base64

# Load your image as base64 string
with open("cats.png", "rb") as image_file:
    input_base64 = base64.b64encode(image_file.read()).decode("utf-8")

token = "<your_api_key>"  # Replace with your actual key
bearer_token = f"Bearer {token}"

url = "https://inference.datacrunch.io/flux-dev/predict"
headers = {
    "Content-Type": "application/json",
    "Authorization": bearer_token
}
data = {
    "input": {
        "prompt": "Three cats wearing detailed astronaut suits inside a space shuttle",
        "num_inference_steps": 50,
        "guidance_scale": 7.5,
        "strength": 0.7,
        "image": input_base64,
        "enable_base64_output": True
    }
}

response = requests.post(url, headers=headers, json=data)
print(response.status_code)
print(response.json())
```

{% endtab %}

{% tab title="JavaScript" %}

```javascript
const axios = require('axios');
const fs = require('fs');

// Read image and convert to base64
const imageBase64 = fs.readFileSync('cats.png', { encoding: 'base64' });

const token = '<your_api_key>'; // Replace with your actual token
const url = 'https://inference.datacrunch.io/flux-dev/predict';
const headers = {
  'Content-Type': 'application/json',
  'Authorization': `Bearer ${token}`
};

const data = {
  input: {
    prompt: 'Three cats wearing detailed astronaut suits inside a space shuttle',
    num_inference_steps: 50,
    guidance_scale: 7.5,
    strength: 0.7,
    image: imageBase64,
    enable_base64_output: true
  }
};

axios
  .post(url, data, { headers })
  .then((response) => {
    console.log('Response:', response.data);
  })
  .catch((error) => {
    console.error('Error:', error.response?.data || error.message);
  });
```

{% endtab %}
{% endtabs %}

## LoRA support

FLUX.1 \[dev] also supports **LoRA (Low-Rank Adaptation)** extensions for fine-tuning model behavior on specific styles, characters, or domains — without retraining the full model.

> **Endpoint:** Use `https://inference.datacrunch.io/flux-dev-lora/predict` instead of the standard `flux-dev` path.

### Parameters

In addition to the common parameters, you can provide one or more LoRA modules to influence the model’s behavior during image generation. Multiple LoRAs will be merged together before inference.

**`loras`**

A **list of LoRA weights** to apply. Each entry describes a single LoRA file and how strongly it should influence the generation.

Example:

```json
"loras": [
  {
    "path": "https://huggingface.co/your/lora1.safetensors",
    "scale": 0.8
  },
  {
    "path": "https://huggingface.co/your/lora2.safetensors",
    "scale": 1.2
  }
]
```

Each item in the list is a `LoraWeight` object with the following fields:

**`path` (string)**

The full URL or a local path to the LoRA weights file.\
This file must be in `.safetensors` format.

**`scale` (float)**

A scaling factor that adjusts the influence of the LoRA on the final image.\
Higher values mean stronger stylistic or content impact from the LoRA.\
**Default:** `1.0`

### Text to image

{% tabs %}
{% tab title="cURL" %}

```bash
curl --request POST "https://inference.datacrunch.io/flux-dev-lora/predict" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer <your_api_key>" \
--data '{
  "input": {
    "prompt": "a cool anteater wearing sunglasses chilling on the beach",
    "enable_base64_output": true,
    "loras": [
      {
        "path": "https://huggingface.co/gradjitta/anteater_lora/resolve/main/anteater_lora.safetensors?download=true",
        "scale": 1.0
      }
    ]
  }
}'
```

{% endtab %}

{% tab title="Python" %}

```python
import requests

token = "<your_api_key>"  # Replace with your actual key
bearer_token = f"Bearer {token}"

url = "https://inference.datacrunch.io/flux-dev-lora/predict"
headers = {
    "Content-Type": "application/json",
    "Authorization": bearer_token
}
data = {
    "input": {
        "prompt": "a cool anteater wearing sunglasses chilling on the beach",
        "enable_base64_output": True,
        "loras": [
            {
                "path": "https://huggingface.co/gradjitta/anteater_lora/resolve/main/anteater_lora.safetensors?download=true",
                "scale": 1.0
            }
        ]
    }
}

response = requests.post(url, headers=headers, json=data)
print(response.json())
```

{% endtab %}

{% tab title="JavaScript" %}

```javascript
const axios = require('axios');

const token = '<your_api_key>'; // Replace with your actual token
const url = 'https://inference.datacrunch.io/flux-dev-lora/predict';
const headers = {
  'Content-Type': 'application/json',
  'Authorization': `Bearer ${token}`
};
const data = {
  input: {
    prompt: 'a cool anteater wearing sunglasses chilling on the beach',
    enable_base64_output: true,
    loras: [
      {
        path: "https://huggingface.co/gradjitta/anteater_lora/resolve/main/anteater_lora.safetensors?download=true",
        scale: 1.0
      }
    ]
  }
};

axios
  .post(url, data, { headers: headers })
  .then((response) => {
    console.log(response.data);
  })
  .catch((error) => {
    console.error('Error:', error);
  });
```

{% endtab %}
{% endtabs %}

### Image to image

{% tabs %}
{% tab title="cURL" %}

```bash
# Encode the image
INPUT_BASE64=$(base64 -i palm.png)

# Write JSON payload to a file
cat > payload.json <<EOF
{
  "input": {
    "prompt": "a cool anteater wearing sunglasses chilling on the beach",
    "strength": 0.7,
    "image": "${INPUT_BASE64}",
    "enable_base64_output": true,
    "loras": [
      {
        "path": "https://huggingface.co/gradjitta/anteater_lora/resolve/main/anteater_lora.safetensors?download=true",
        "scale": 1.0
      }
    ]
  }
}
EOF

# Send the request
curl --request POST "https://inference.datacrunch.io/flux-dev-lora/predict" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <your_api_key>" \
  --data @payload.json
```

{% endtab %}

{% tab title="Python" %}

```python
import requests
import base64

# Load your image as base64 string
with open("palm.png", "rb") as image_file:
    input_base64 = base64.b64encode(image_file.read()).decode("utf-8")

token = "<your_api_key>"  # Replace with your actual key
bearer_token = f"Bearer {token}"

url = "https://inference.datacrunch.io/flux-dev-lora/predict"
headers = {
    "Content-Type": "application/json",
    "Authorization": bearer_token
}

data = {
    "input": {
        "prompt": "a cool anteater wearing sunglasses chilling on the beach",
        "strength": 0.7,
        "image": input_base64,
        "enable_base64_output": True,
        "loras": [
            {
                "path": "https://huggingface.co/gradjitta/anteater_lora/resolve/main/anteater_lora.safetensors?download=true",
                "scale": 1.0
            }
        ]
    }
}

response = requests.post(url, headers=headers, json=data)
print(response.status_code)
print(response.json())
```

{% endtab %}

{% tab title="JavaScript" %}

```javascript
const axios = require('axios');
const fs = require('fs');

// Read image and convert to base64
const imageBase64 = fs.readFileSync('palm.png', { encoding: 'base64' });

const token = '<your_api_key>'; // Replace with your actual token
const url = 'https://inference.datacrunch.io/flux-dev-lora/predict';
const headers = {
  'Content-Type': 'application/json',
  'Authorization': `Bearer ${token}`
};

const data = {
  input: {
    prompt: 'a cool anteater wearing sunglasses chilling on the beach',
    strength: 0.7,
    image: imageBase64,
    enable_base64_output: true,
    loras: [
      {
        path: 'https://huggingface.co/gradjitta/anteater_lora/resolve/main/anteater_lora.safetensors?download=true',
        scale: 1.0
      }
    ]
  }
};

axios
  .post(url, data, { headers })
  .then((response) => {
    console.log('Response:', response.data);
  })
  .catch((error) => {
    console.error('Error:', error.response?.data || error.message);
  });
```

{% endtab %}
{% endtabs %}
