MULTIMODAL

High-Speed Inference for Image Models

text-to-image

flux-1-1-pro

FLUX1.1 Pro is an enhanced text-to-image model built on the FLUX.1 architecture, offering improved composition, detail, and rendering speed. With better visual consistency and artistic fidelity, it's suitable for illustration, creative content generation, and e-commerce visual assets—delivering diverse styles with strong prompt alignment.
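
For reference, here is a minimal sketch of calling one of these hosted text-to-image models over HTTP. The endpoint URL, the parameter names (prompt, image_size), and the exact model identifier are assumptions modeled on common OpenAI-style image APIs; verify all of them against the provider's API reference.

    import os
    import requests

    # Hypothetical endpoint and payload shape; confirm against the API docs.
    API_URL = "https://api.siliconflow.cn/v1/images/generations"

    payload = {
        "model": "black-forest-labs/FLUX.1.1-pro",  # assumed model identifier
        "prompt": "Product shot of a ceramic mug on a marble table, soft studio lighting",
        "image_size": "1024x1024",                  # assumed parameter name
    }
    headers = {
        "Authorization": f"Bearer {os.environ['SILICONFLOW_API_KEY']}",
        "Content-Type": "application/json",
    }

    resp = requests.post(API_URL, json=payload, headers=headers, timeout=120)
    resp.raise_for_status()
    # A typical response carries one or more URLs of the generated images.
    print(resp.json())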

text-to-image

flux-1-1-pro-ultra

FLUX1.1 Pro Ultra is the high-resolution version of FLUX1.1 Pro, capable of generating images up to 4 megapixels (roughly 2K resolution). It improves photorealism and prompt controllability for advanced use cases. Ultra mode is optimized for composition and precision, while Raw mode prioritizes natural textures and realism, making it well suited to commercial visual production, art direction, and realistic concept rendering.

text-to-image

flux-1-kontext-max

FLUX.1 Kontext Max is the most powerful and feature-rich model in the Kontext series, designed for high-resolution, high-precision visual editing and generation. It offers superior prompt adherence, detailed rendering, and advanced typographic control. Ideal for enterprise design systems, marketing visuals, and automated creative pipelines that require robust scene transformations and layout control.

text-to-image

flux-1-kontext-pro

FLUX.1 Kontext Pro is an advanced image generation and editing model that supports both natural language prompts and reference images. It delivers high semantic understanding, precise local control, and consistent outputs, making it ideal for brand design, product visualization, and narrative illustration. It enables fine-grained edits and context-aware transformations with high fidelity.

image-to-image

flux-1-kontext-dev

FLUX.1 Kontext [dev] is a 12 billion parameter image editing model developed by Black Forest Labs. Built on flow matching, it functions as a diffusion transformer that performs precise image edits from text instructions. Its core strength is contextual understanding: it processes text and image inputs simultaneously and maintains a high degree of consistency for characters, styles, and objects across multiple successive edits with minimal visual drift. As an open-weight model, FLUX.1 Kontext [dev] aims to drive new scientific research and give developers and artists innovative workflows. It can be used for tasks such as style transfer, object modification, background swapping, and even text editing.
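
A sketch of an instruction-driven edit with an image-to-image model like FLUX.1 Kontext [dev]: the source image is sent alongside the text instruction. The base64 data-URL convention, the image field name, and the model identifier are assumptions to check against the provider's docs.

    import base64
    import os
    import requests

    # Encode the source image as a base64 data URL, a common convention for
    # image-to-image endpoints; the exact format expected here is an assumption.
    with open("product_photo.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    payload = {
        "model": "black-forest-labs/FLUX.1-Kontext-dev",  # assumed identifier
        "prompt": "Replace the background with a plain white studio backdrop",
        "image": f"data:image/png;base64,{image_b64}",    # assumed parameter name
    }

    resp = requests.post(
        "https://api.siliconflow.cn/v1/images/generations",  # assumed endpoint
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['SILICONFLOW_API_KEY']}"},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json())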

text-to-image

flux-1-dev

FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. It offers cutting-edge output quality, second only to Black Forest Labs' state-of-the-art FLUX.1 [pro]. The model features competitive prompt following, matching the performance of closed-source alternatives. Trained using guidance distillation, which folds classifier-free guidance into a single forward pass per sampling step, FLUX.1 [dev] is more efficient at inference. Open weights are provided to drive new scientific research and empower artists to develop innovative workflows.

text-to-image

flux-1-schnell

FLUX.1 [schnell] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. Trained using latent adversarial diffusion distillation, it can generate high-quality images in only 1 to 4 steps. The model offers cutting-edge output quality and competitive prompt following, matching the performance of closed-source alternatives. Released under the Apache 2.0 license, it can be used for personal, scientific, and commercial purposes.
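
Because FLUX.1 [schnell] is distilled for 1 to 4 sampling steps, the step count is the main knob for trading quality against latency. A sketch, again assuming a num_inference_steps parameter on the same hypothetical endpoint:

    import os
    import requests

    payload = {
        "model": "black-forest-labs/FLUX.1-schnell",  # assumed identifier
        "prompt": "Watercolor illustration of a lighthouse at dawn",
        "image_size": "1024x1024",   # assumed parameter name
        "num_inference_steps": 4,    # schnell is trained for 1 to 4 steps
    }

    resp = requests.post(
        "https://api.siliconflow.cn/v1/images/generations",  # assumed endpoint
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['SILICONFLOW_API_KEY']}"},
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json())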

text-to-image

qwen-image

Qwen-Image is an image generation foundation model released by the Alibaba Qwen team, featuring 20 billion parameters. The model has achieved significant advances in complex text rendering and precise image editing, excelling particularly at generating images with high-fidelity Chinese and English text. Qwen-Image can handle multi-line layouts and paragraph-level text while maintaining layout coherence and contextual harmony in the generated images. Beyond its text-rendering capabilities, the model supports a wide range of artistic styles, from photorealistic scenes to anime aesthetics, adapting fluidly to varied creative prompts. It also offers strong image editing and understanding abilities, supporting operations such as style transfer, object insertion or removal, detail enhancement, text editing, and even human pose manipulation. The goal is a comprehensive foundation model for intelligent visual creation and manipulation where language, layout, and imagery converge.
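
Qwen-Image's text rendering is exercised simply by putting the exact strings to draw into the prompt, including multi-line and bilingual text. A sketch; the model identifier Qwen/Qwen-Image and the endpoint are assumptions.

    import os
    import requests

    # Quoting the exact strings to render tends to help text-rendering models;
    # this prompt mixes English and Chinese to exercise bilingual rendering.
    prompt = (
        'A storefront poster that reads "GRAND OPENING" on the first line '
        'and "盛大开业" on the second line, clean sans-serif typography'
    )

    resp = requests.post(
        "https://api.siliconflow.cn/v1/images/generations",   # assumed endpoint
        json={"model": "Qwen/Qwen-Image", "prompt": prompt},  # assumed identifier
        headers={"Authorization": f"Bearer {os.environ['SILICONFLOW_API_KEY']}"},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json())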

image-to-image

qwen-image-edit

Qwen-Image-Edit is the image editing version of Qwen-Image, released by Alibaba's Qwen team. Built upon the 20B Qwen-Image model, it has been further trained to extend Qwen-Image's distinctive text rendering capabilities to editing tasks, enabling precise text editing within images. Architecturally, Qwen-Image-Edit feeds the input image into both Qwen2.5-VL (for visual semantic control) and a VAE encoder (for visual appearance control), supporting both semantic and appearance editing. It therefore handles not only low-level appearance edits such as adding, removing, or modifying elements, but also high-level semantic edits such as IP creation and style transfer, which require maintaining semantic consistency. The model has achieved state-of-the-art (SOTA) performance on multiple public benchmarks, establishing it as a powerful foundation model for image editing.
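
An edit call to Qwen-Image-Edit follows the same text-plus-image shape as the Kontext [dev] sketch above; here the instruction targets text inside the image, which is the model's distinguishing capability. The field names and model identifier are again assumptions.

    import base64
    import os
    import requests

    with open("poster.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    payload = {
        "model": "Qwen/Qwen-Image-Edit",  # assumed identifier
        "prompt": 'Change the headline text to "SUMMER SALE", keep the layout unchanged',
        "image": f"data:image/png;base64,{image_b64}",  # assumed parameter name
    }

    resp = requests.post(
        "https://api.siliconflow.cn/v1/images/generations",  # assumed endpoint
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['SILICONFLOW_API_KEY']}"},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json())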

Ready to accelerate your AI development?


© 2025 SiliconFlow Technology PTE. LTD.
