Qwen-Image-Edit API, Deployment, Pricing

Qwen/Qwen-Image-Edit

Qwen-Image-Edit is the image-editing version of Qwen-Image, released by Alibaba's Qwen team. Built upon the 20B Qwen-Image model, it has been further trained to extend that model's unique text rendering capabilities to image editing tasks, enabling precise text editing within images. Qwen-Image-Edit also uses an innovative architecture that feeds the input image into both Qwen2.5-VL (for visual semantic control) and a VAE encoder (for visual appearance control), supporting both semantic and appearance editing. This allows it to handle not only low-level visual appearance edits such as adding, removing, or modifying elements, but also high-level semantic edits such as IP creation and style transfer, which require maintaining semantic consistency. The model has achieved state-of-the-art (SOTA) performance on multiple public benchmarks, establishing it as a powerful foundation model for image editing.

API Usage

curl --request POST \
  --url https://api.siliconflow.com/v1/images/generations \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "batch_size": 1,
  "num_inference_steps": 20,
  "guidance_scale": 7.5,
  "model": "Qwen/Qwen-Image-Edit",
  "prompt": "an island near sea, with seagulls, moon shining over the sea, light house, boats in the background, fish flying over the sea",
  "image": "data:image/png;base64,XXX"
}'
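The same request can be sketched in Python using only the standard library. The helper names below (`image_to_data_uri`, `build_edit_payload`, `send_edit_request`) are illustrative, not part of any official SDK; the payload fields mirror the curl example above.

```python
import base64
import json
import urllib.request

API_URL = "https://api.siliconflow.com/v1/images/generations"


def image_to_data_uri(png_bytes: bytes) -> str:
    """Encode raw PNG bytes as the data-URI string the `image` field expects."""
    return "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")


def build_edit_payload(prompt: str, image_data_uri: str) -> dict:
    """Assemble the same JSON body as the curl example above."""
    return {
        "model": "Qwen/Qwen-Image-Edit",
        "prompt": prompt,
        "image": image_data_uri,
        "batch_size": 1,
        "num_inference_steps": 20,
        "guidance_scale": 7.5,
    }


def send_edit_request(token: str, payload: dict) -> bytes:
    """POST the payload to the endpoint; requires a real API token."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

To run it end to end, read the input image from disk, e.g. `build_edit_payload("replace the sign text", image_to_data_uri(open("input.png", "rb").read()))`, then pass the result to `send_edit_request` with your token.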

Details

Model Provider

Qwen

Type

image

Sub Type

image-to-image

Publish Time

Sep 18, 2025



Compare with Other Models

See how this model stacks up against others.

FLUX.1-Kontext-dev

FLUX.1 Kontext [dev] is a 12-billion-parameter image editing model developed by Black Forest Labs. Based on Flow Matching, it functions as a diffusion transformer capable of precise image editing from text instructions. The model's core feature is its strong contextual understanding: it processes text and image inputs simultaneously and maintains a high degree of consistency for characters, styles, and objects across multiple successive edits with minimal visual drift. As an open-weight model, FLUX.1 Kontext [dev] aims to drive new scientific research and empower developers and artists with innovative workflows. Users can apply it to tasks including style transfer, object modification, background swapping, and even text editing.

Qwen-Image-Edit

Model FAQs: Usage, Deployment

Learn how to use, fine-tune, and deploy this model with ease.

Ready to accelerate your AI development?

© 2025 SiliconFlow Technology PTE. LTD.