gemma-4-26B-A4B-it

google/gemma-4-26B-A4B-it

About gemma-4-26B-A4B-it

Gemma 4 26B is Google DeepMind's latest open-weight Mixture of Experts (MoE) model: a 26B-parameter architecture that activates only 3.8B parameters per token, yielding exceptionally fast token throughput. Built for advanced reasoning and agentic workflows, it ranks #6 among open models on the Arena AI leaderboard, reportedly outperforming models up to 20x its size, and ships with native function calling, a 256K-token context window, and a permissive Apache 2.0 license.
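A minimal sketch of calling the model over a serverless endpoint. The exact endpoint and request shape are assumptions; most serverless providers expose models like this behind an OpenAI-style `/chat/completions` API, so the example only assembles the request body.

```python
# Minimal sketch of an OpenAI-compatible chat request for this model.
# The request shape is an assumption about the serving provider's API;
# no network call is made here.

def build_chat_request(prompt: str, model: str = "google/gemma-4-26B-A4B-it") -> dict:
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,  # well under the 262K context limit
    }

req = build_chat_request("Summarize MoE inference in one sentence.")
```

POST this body to the provider's chat-completions endpoint with your API key to run a query.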

Available Serverless

Run queries immediately, pay only for usage

Input Price

$0.12 / M Tokens

Output Price

$0.40 / M Tokens
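The listed rates make per-request cost easy to estimate. A small sketch using the prices above:

```python
# Back-of-the-envelope cost estimate at the listed serverless rates:
# $0.12 per million input tokens, $0.40 per million output tokens.

INPUT_PRICE_PER_M = 0.12
OUTPUT_PRICE_PER_M = 0.40

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the request cost in USD, rounded to 6 decimal places."""
    cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
    return round(cost, 6)

# e.g. a 10K-token prompt producing a 1K-token answer:
cost = estimate_cost(10_000, 1_000)  # 0.0012 + 0.0004 = 0.0016 USD
```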

Metadata

Created on

License

APACHE 2.0

Provider

Google

Specification

State

Available

Architecture

Mixture of Experts

Calibrated

Yes

Mixture of Experts

Yes

Total Parameters

26B

Activated Parameters

3.8B

Reasoning

No

Precision

FP8

Context length

262K

Max Tokens

262K
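Using the 26B-total / 3.8B-activated figures from the description above, a quick ratio shows why MoE inference is cheap relative to a dense model of the same total size:

```python
# Activation ratio implied by the spec: only a fraction of the weights
# participate in each forward pass. Figures come from the model card.

TOTAL_PARAMS_B = 26.0    # total parameters, billions
ACTIVE_PARAMS_B = 3.8    # parameters activated per token, billions

active_fraction = ACTIVE_PARAMS_B / TOTAL_PARAMS_B
# Roughly 0.146 -> ~15% of weights are used per token, which is where
# the fast token throughput comes from.
```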

Supported Functionality

Serverless

Supported

Serverless LoRA

Not supported

Fine-tuning

Not supported

Embeddings

Not supported

Rerankers

Not supported

Support image input

Supported

JSON Mode

Supported

Structured Outputs

Not supported

Tools

Supported

FIM Completion

Not supported

Chat Prefix Completion

Not supported
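Since the functionality table lists Tools and JSON Mode as supported (but not Structured Outputs), a hedged sketch of a function-calling request may help. The tool schema follows the common OpenAI-style format; the `get_weather` tool and all field names are hypothetical illustrations, not a documented provider API.

```python
# Sketch of a function-calling request combined with JSON Mode.
# The request shape is an assumption (OpenAI-style); "get_weather"
# is a hypothetical tool used only for illustration.

import json

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

request_body = {
    "model": "google/gemma-4-26B-A4B-it",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [weather_tool],
    # JSON Mode asks for a JSON object instead of free text. Structured
    # Outputs (schema-constrained decoding) is NOT supported, so the tool
    # parameter schema is advisory rather than enforced.
    "response_format": {"type": "json_object"},
}

payload = json.dumps(request_body)
```

Because Structured Outputs is unsupported, validate the returned JSON yourself rather than relying on the server to enforce a schema.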

Ready to accelerate your AI development?
