FLUX.1-Kontext-dev API, Deployment, Pricing

black-forest-labs/FLUX.1-Kontext-dev

FLUX.1 Kontext [dev] is a 12-billion-parameter image editing model developed by Black Forest Labs. Built on flow matching, it is a diffusion transformer that performs precise, instruction-driven image edits. Its core strength is contextual understanding: it processes text and image inputs together and maintains a high degree of consistency for characters, styles, and objects across multiple successive edits, with minimal visual drift. As an open-weight model, FLUX.1 Kontext [dev] aims to drive new scientific research and empower developers and artists with new workflows. Typical uses include style transfer, object modification, background swapping, and even text editing.

API Usage

curl --request POST \
  --url https://api.siliconflow.com/v1/images/generations \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "black-forest-labs/FLUX.1-Kontext-dev",
    "prompt": "an island near sea, with seagulls, moon shining over the sea, lighthouse, boats in the background, fish flying over the sea",
    "prompt_enhancement": false,
    "image": "data:image/png;base64,XXX"
  }'
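
The same request can be made from Python. The sketch below mirrors the endpoint, model name, and request fields of the curl example above and shows how a local PNG is base64-encoded into the data URI expected by the "image" field. The environment variable name and the response parsing (an "images" list with "url" entries) are assumptions for illustration; check the SiliconFlow API reference for the exact response schema.

# Minimal sketch, assuming the `requests` library is installed and the input image is a PNG.
import base64
import os

import requests

API_URL = "https://api.siliconflow.com/v1/images/generations"
API_KEY = os.environ["SILICONFLOW_API_KEY"]  # assumed env var name for your token


def image_to_data_uri(path: str) -> str:
    """Base64-encode a local PNG into the data URI the "image" field expects."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:image/png;base64,{encoded}"


payload = {
    "model": "black-forest-labs/FLUX.1-Kontext-dev",
    "prompt": "replace the daytime sky with a moonlit night, keep the lighthouse unchanged",
    "prompt_enhancement": False,
    "image": image_to_data_uri("input.png"),  # hypothetical local file
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
data = resp.json()
print(data["images"][0]["url"])  # assumed response shape; verify against the API docs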

Details

Model Provider: black-forest-labs
Type: image
Sub Type: image-to-image
Publish Time: Jun 27, 2025
Price: $0.015 / Image
Tags: 12B

Compare with Other Models

See how this model stacks up against others.

Qwen-Image-Edit (image-to-image)

Qwen-Image-Edit is the image editing version of Qwen-Image, released by Alibaba's Qwen team. Built upon the 20B Qwen-Image model, it has been further trained to extend its unique text rendering capabilities to image editing tasks, enabling precise text editing within images. Furthermore, Qwen-Image-Edit utilizes an innovative architecture that feeds the input image into both Qwen2.5-VL (for visual semantic control) and a VAE Encoder (for visual appearance control), achieving capabilities in both semantic and appearance editing. This allows it to support not only low-level visual appearance edits like adding, removing, or modifying elements, but also high-level visual semantic editing such as IP creation and style transfer, which require maintaining semantic consistency. The model has achieved state-of-the-art (SOTA) performance on multiple public benchmarks, establishing it as a powerful foundation model for image editing.

Qwen-Image (text-to-image)

Qwen-Image is an image generation foundation model released by the Alibaba Qwen team, featuring 20 billion parameters. The model has achieved significant advances in complex text rendering and precise image editing, excelling particularly at generating images with high-fidelity Chinese and English text. Qwen-Image can handle multi-line layouts and paragraph-level text while maintaining layout coherence and contextual harmony in the generated images. Beyond its superior text-rendering capabilities, the model supports a wide range of artistic styles, from photorealistic scenes to anime aesthetics, adapting fluidly to various creative prompts. It also possesses powerful image editing and understanding abilities, supporting advanced operations such as style transfer, object insertion or removal, detail enhancement, text editing, and even human pose manipulation, aiming to be a comprehensive foundation model for intelligent visual creation and manipulation where language, layout, and imagery converge.

FLUX.1 Kontext [pro] (text-to-image, 12B)

FLUX.1 Kontext Pro is an advanced image generation and editing model that supports both natural language prompts and reference images. It delivers high semantic understanding, precise local control, and consistent outputs, making it ideal for brand design, product visualization, and narrative illustration. It enables fine-grained edits and context-aware transformations with high fidelity.

FLUX.1 Kontext [max] (text-to-image, 12B)

FLUX.1 Kontext Max is the most powerful and feature-rich model in the Kontext series, designed for high-resolution, high-precision visual editing and generation. It offers superior prompt adherence, detailed rendering, and advanced typographic control. Ideal for enterprise design systems, marketing visuals, and automated creative pipelines that require robust scene transformations and layout control.

Model FAQs: Usage, Deployment

Learn how to use, fine-tune, and deploy this model with ease.

What is the FLUX.1 Kontext [dev] model, and what are its core capabilities and technical specifications?

In which business scenarios does FLUX.1 Kontext [dev] perform well? Which industries or applications is it suitable for?

How can the performance and effectiveness of FLUX.1 Kontext [dev] be optimized in actual business use?

Compared with other models, when should FLUX.1 Kontext [dev] be selected?

What are SiliconFlow's key strengths in AI serverless deployment for FLUX.1 Kontext [dev]?

What makes SiliconFlow the top platform for the FLUX.1 Kontext [dev] API?

Ready to accelerate your AI development?

© 2025 SiliconFlow Technology PTE. LTD.