Hunyuan-A13B-Instruct
About Hunyuan-A13B-Instruct
Hunyuan-A13B-Instruct activates only 13B of its 80B parameters per token, yet matches much larger LLMs on mainstream benchmarks. It offers hybrid reasoning: a low-latency “fast” mode or a high-precision “slow” mode, switchable per call. Its native 256K-token context (served here at 131K) lets it digest book-length documents without degradation. Its agent skills are tuned for leading results on BFCL-v3, τ-Bench, and C3-Bench, making it a strong backbone for autonomous assistants. Grouped Query Attention plus multi-format quantization delivers memory-light, GPU-efficient inference for real-world deployment, with built-in multilingual support and robust safety alignment for enterprise-grade applications.
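As a rough illustration of the per-call mode switch: Hunyuan's documentation describes toggling slow reasoning via a prompt prefix. The `/no_think` prefix and the payload shape below are assumptions taken from the upstream model card, not from this page; verify against your provider's API reference.

```python
# Sketch: selecting Hunyuan's "fast" vs "slow" reasoning per call.
# ASSUMPTION: the model reads a "/no_think" prompt prefix to disable
# slow (step-by-step) reasoning; slow mode is the default.

def build_messages(prompt: str, fast: bool = False) -> list[dict]:
    """Return a chat payload, prefixing the prompt for fast mode."""
    content = f"/no_think {prompt}" if fast else prompt
    return [{"role": "user", "content": content}]

# Fast mode: low latency, no visible reasoning trace.
fast_msgs = build_messages("Summarize this contract clause.", fast=True)

# Slow mode (default): high-precision multi-step reasoning.
slow_msgs = build_messages("Prove the inequality holds for n >= 3.")
```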
Available Serverless
Run queries immediately; pay only for usage.
Input Price: $0.14 / M Tokens
Output Price: $0.57 / M Tokens
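At these rates, a request's cost is a simple linear function of its token counts. A minimal sketch (the token counts in the example are hypothetical):

```python
# Serverless pricing for Hunyuan-A13B-Instruct, per the listing above:
# $0.14 per million input tokens, $0.57 per million output tokens.
INPUT_PRICE = 0.14   # USD per 1M input tokens
OUTPUT_PRICE = 0.57  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-million rates."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# e.g. a 100K-token document summarized into 2K output tokens:
cost = request_cost(100_000, 2_000)  # 0.014 + 0.00114 ≈ $0.0151
```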
Metadata

Specification
State: Available
Architecture: Mixture of Experts
Calibrated: Yes
Mixture of Experts: Yes
Total Parameters: 80B
Activated Parameters: 13B
Reasoning: No
Precision: FP8
Context Length: 131K
Max Tokens: 131K
Supported Functionality
Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Not supported
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Not supported
FIM Completion: Not supported
Chat Prefix Completion: Not supported
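Since JSON Mode is the one structured-output feature listed as supported, here is a hedged sketch of a request body for it. The OpenAI-style chat-completions schema and the `response_format` field are assumptions about the serving API, not confirmed by this page; check the provider's API reference.

```python
import json

# ASSUMPTION: the serverless endpoint follows the common OpenAI-style
# chat-completions schema, where JSON Mode is requested via
# response_format={"type": "json_object"}.
payload = {
    "model": "Hunyuan-A13B-Instruct",
    "messages": [
        {
            "role": "user",
            "content": 'Return the capital of France as JSON: {"capital": ...}',
        },
    ],
    "response_format": {"type": "json_object"},
    "max_tokens": 128,
}

# Serialized body, ready to POST to the chat-completions route.
body = json.dumps(payload)
```

In JSON Mode the server constrains decoding to emit valid JSON, but (unlike the unsupported Structured Outputs feature) it does not enforce a specific schema, so the prompt should still describe the shape you expect.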
Compare with Other Models
See how this model stacks up against others.

Tencent · chat
Hy3-preview
Released on: Apr 7, 2026
Total Context: 131K
Max Output: 262K
Input: $0.066 / M Tokens
Output: $0.26 / M Tokens

Tencent · chat
Hunyuan-MT-7B
Released on: Sep 18, 2025
Total Context: 33K
Max Output: 33K
Input: $0.0 / M Tokens
Output: $0.0 / M Tokens

Tencent · chat
Hunyuan-A13B-Instruct
Released on: Jun 30, 2025
Total Context: 131K
Max Output: 131K
Input: $0.14 / M Tokens
Output: $0.57 / M Tokens