Hy3-preview
About Hy3-preview
Hy3-preview is a 295B-parameter Mixture-of-Experts (MoE) language model from Tencent Hunyuan, built for production-grade agent workloads. With only 21B parameters activated per token and native 256K context support, it handles complex tasks such as cross-file code refactoring, long-document analysis, and multi-step tool use, rather than just generating fluent dialogue. Hy3-preview scores near state-of-the-art on SWE-bench Verified and advanced STEM benchmarks, and offers three inference modes (no_think, think_low, think_high) to trade off latency against reasoning depth per request. Its sparse-activation architecture delivers competitive intelligence at a significantly lower token cost.
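The mode switch is per-request. A minimal sketch of how a caller might select a mode, assuming an OpenAI-compatible chat endpoint and a hypothetical `thinking_mode` request field (neither the endpoint shape nor the field name is confirmed by this page):

```python
# Sketch: building a chat-completion request body with a per-call inference
# mode. The model id and the `thinking_mode` field are illustrative
# assumptions, not a documented API.
import json

def build_request(prompt: str, mode: str = "no_think") -> str:
    """Return a JSON request body selecting one of the three inference modes."""
    if mode not in ("no_think", "think_low", "think_high"):
        raise ValueError(f"unknown inference mode: {mode}")
    payload = {
        "model": "tencent/Hy3-preview",   # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
        "thinking_mode": mode,            # assumed extension field
    }
    return json.dumps(payload)

# Low-latency lookup vs. deep multi-step reasoning:
fast = build_request("What is 2+2?", mode="no_think")
deep = build_request("Refactor this module across files.", mode="think_high")
```

The idea is simply that latency-sensitive calls pin `no_think` while agentic, multi-step calls opt into a deeper mode.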
Available Serverless
Run queries immediately, pay only for usage
Input Price: $0.0 / M Tokens
Output Price: $0.0 / M Tokens
Metadata
Specification
State: Available
Architecture: Mixture-of-Experts
Calibrated: No
Mixture of Experts: Yes
Total Parameters: 80B
Activated Parameters: 21B
Reasoning: No
Precision: FP8
Context length: 131K
Max Tokens: 262K
Supported Functionality
Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image input: Not supported
JSON Mode: Not supported
Structured Outputs: Not supported
Tools: Supported
FIM Completion: Not supported
Chat Prefix Completion: Supported
Compare with Other Models
See how this model stacks up against others.

Tencent
chat
Hy3-preview
Released on: Apr 7, 2026
Total Context: 131K
Max output: 262K
Input: $0.0 / M Tokens
Output: $0.0 / M Tokens

Tencent
chat
Hunyuan-MT-7B
Released on: Sep 18, 2025
Total Context: 33K
Max output: 33K
Input: $ / M Tokens
Output: $ / M Tokens

Tencent
chat
Hunyuan-A13B-Instruct
Released on: Jun 30, 2025
Total Context: 131K
Max output: 131K
Input: $0.14 / M Tokens
Output: $0.57 / M Tokens
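Given the listed Hunyuan-A13B-Instruct prices ($0.14 input, $0.57 output per million tokens), a per-request cost estimate is simple arithmetic:

```python
# Worked example: cost from per-million-token prices. The defaults are the
# Hunyuan-A13B-Instruct rates listed above; other models substitute their own.
def cost_usd(input_tokens: int, output_tokens: int,
             in_price: float = 0.14, out_price: float = 0.57) -> float:
    """Prices are quoted per 1M tokens, so scale each side by tokens / 1e6."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# e.g. a 120K-token context with a 2K-token reply:
estimate = cost_usd(120_000, 2_000)   # 0.0168 + 0.00114 ≈ $0.0179
```

Note the asymmetry: output tokens cost roughly 4x input tokens, so long-context, short-answer workloads stay cheap.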
