Hunyuan-A13B-Instruct

tencent/Hunyuan-A13B-Instruct

About Hunyuan-A13B-Instruct

Hunyuan-A13B-Instruct is a Mixture-of-Experts model that activates only 13B of its 80B parameters per token, yet matches much larger LLMs on mainstream benchmarks. It offers hybrid reasoning: a low-latency “fast” mode or a high-precision “slow” mode, switchable per call. A native 256K-token context window lets it digest book-length documents without degradation. Its agent skills are tuned for leading results on BFCL-v3, τ-Bench, and C3-Bench, making it a strong backbone for autonomous assistants. Grouped Query Attention plus multi-format quantization delivers memory-light, GPU-efficient inference for real-world deployment, with built-in multilingual support and robust safety alignment for enterprise-grade applications.
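The fast/slow switch is made per request rather than per deployment. Below is a minimal sketch using an OpenAI-compatible client; the base URL, API-key variable, and the "/no_think" prompt prefix for fast mode (taken from the upstream Hunyuan-A13B usage notes) are assumptions to verify against this provider's documentation.

```python
import os
from openai import OpenAI

# Hypothetical endpoint and credentials -- substitute the provider's actual values.
client = OpenAI(
    base_url="https://api.example.com/v1",
    api_key=os.environ["API_KEY"],
)

# Fast mode: prefixing the prompt with "/no_think" skips the slow-thinking phase
# (assumed switch, from the upstream Hunyuan-A13B usage notes).
fast = client.chat.completions.create(
    model="tencent/Hunyuan-A13B-Instruct",
    messages=[{"role": "user", "content": "/no_think Summarize Grouped Query Attention in two sentences."}],
    max_tokens=256,
)

# Slow mode: omit the prefix so the model reasons step by step before answering.
slow = client.chat.completions.create(
    model="tencent/Hunyuan-A13B-Instruct",
    messages=[{"role": "user", "content": "Explain why Grouped Query Attention shrinks the KV cache."}],
    max_tokens=1024,
)

print(fast.choices[0].message.content)
print(slow.choices[0].message.content)
```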

Available Serverless

Run queries immediately, pay only for usage

$0.14 / $0.57 per 1M tokens (input / output)
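At these rates, per-request cost is simply token count times price. A quick Python sketch of the arithmetic; the token counts in the example are illustrative.

```python
# Serverless pricing for Hunyuan-A13B-Instruct, in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.14
OUTPUT_PRICE_PER_M = 0.57

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single request at the listed serverless rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 120K-token document summarized into a 2K-token answer.
print(f"${request_cost(120_000, 2_000):.4f}")  # ≈ $0.0179
```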

Metadata

Created on: Jun 30, 2025
License: other
Provider: Tencent

Specification

State: Available

Architecture

Calibrated: Yes
Mixture of Experts: Yes
Total Parameters: 80B
Activated Parameters: 13B
Reasoning: No
Precision: FP8
Context Length: 131K
Max Tokens: 131K

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image input: Not supported
JSON Mode: Supported (see the sketch after this list)
Structured Outputs: Not supported
Tools: Not supported
FIM Completion: Not supported
Chat Prefix Completion: Not supported
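Because JSON Mode is supported while Structured Outputs and Tools are not, schema-shaped output has to come from JSON mode plus prompt instructions. A minimal sketch, assuming the endpoint follows the common OpenAI-style response_format convention; the base URL and model string are the same assumptions as in the earlier example.

```python
import json
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical endpoint
    api_key=os.environ["API_KEY"],
)

resp = client.chat.completions.create(
    model="tencent/Hunyuan-A13B-Instruct",
    # JSON mode guarantees syntactically valid JSON, not a particular schema,
    # so the desired keys still need to be spelled out in the prompt.
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": 'Return a JSON object {"title": string, "keywords": [string]} for this abstract: ...',
    }],
)

data = json.loads(resp.choices[0].message.content)
print(data["title"], data["keywords"])
```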

Model FAQs: Usage, Deployment

Learn how to use, fine-tune, and deploy this model with ease.

Ready to accelerate your AI development?