Hy3-preview

About Hy3-preview

Hy3-preview is a 295B-parameter Mixture-of-Experts (MoE) language model from Tencent Hunyuan, built for production-grade agent workloads. With only 21B parameters activated per token and native 256K context support, it handles complex tasks like cross-file code refactoring, long-document analysis, and multi-step tool use, rather than just generating fluent dialogue. Hy3-preview scores near state-of-the-art on SWE-bench Verified and advanced STEM benchmarks, while offering three inference modes (no_think, think_low, think_high) to dynamically trade off latency and reasoning depth. Its sparse activation architecture delivers competitive intelligence at a significantly lower token cost.
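The three inference modes can be selected per request. As a minimal sketch, assuming an OpenAI-compatible chat-completions payload where a `reasoning_mode` field picks the mode (the field name is an assumption; check the provider's API reference for the real parameter):

```python
# Sketch of switching Hy3-preview inference modes in a chat-completions
# request. The payload shape and the "reasoning_mode" field name are
# assumptions for illustration, not the provider's confirmed API.

def build_request(prompt: str, mode: str = "no_think") -> dict:
    """Build a chat-completion request selecting one of the three modes."""
    assert mode in {"no_think", "think_low", "think_high"}, f"unknown mode: {mode}"
    return {
        "model": "tencent/Hy3-preview",
        "messages": [{"role": "user", "content": prompt}],
        # Hypothetical field: deeper modes trade latency for reasoning depth.
        "reasoning_mode": mode,
    }

# Fast, low-latency reply for a simple query:
quick = build_request("Summarize this changelog.", mode="no_think")
# Deeper multi-step reasoning for a hard agentic task:
deep = build_request("Refactor this module across files.", mode="think_high")
```

In this sketch, `no_think` is the default for latency-sensitive traffic, and callers opt into `think_low` or `think_high` only when the task warrants the extra reasoning tokens.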

Available Serverless

Run queries immediately, pay only for usage

Input Price: $0.0 / M Tokens

Output Price: $0.0 / M Tokens

Metadata

Created on

License: TENCENT HY COMMUNITY LICENSE AGREEMENT

Provider: Tencent

HuggingFace

Specification

State: Available

Architecture: Mixture-of-Experts

Calibrated: No

Mixture of Experts: Yes

Total Parameters: 80B

Activated Parameters: 21B

Reasoning: No

Precision: FP8

Context length: 131K

Max Tokens: 262K

Supported Functionality

Serverless: Supported

Serverless LoRA: Not supported

Fine-tuning: Not supported

Embeddings: Not supported

Rerankers: Not supported

Image input: Not supported

JSON Mode: Not supported

Structured Outputs: Not supported

Tools: Supported

FIM Completion: Not supported

Chat Prefix Completion: Supported
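Since tool use is listed as supported, a request would declare the available tools up front. A minimal sketch, assuming the widely used OpenAI-style `tools` schema (the `get_weather` function and all field names here are hypothetical; the provider's actual tool format may differ):

```python
# Sketch of a tool-calling request for Hy3-preview. The OpenAI-style
# "tools" schema is an assumed convention, not this provider's confirmed
# format; treat every field name as illustrative.

def weather_tool() -> dict:
    """Declare a hypothetical get_weather function the model may call."""
    return {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }

payload = {
    "model": "tencent/Hy3-preview",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [weather_tool()],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
```

With `tool_choice` set to `auto`, the model either answers directly or returns a structured tool call that the client executes and feeds back as a follow-up message.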

Ready to accelerate your AI development?
