gpt-oss-120b
About gpt-oss-120b
The gpt-oss series comprises OpenAI’s open-weight models, designed for powerful reasoning, agentic tasks, and versatile developer use cases. gpt-oss-120b targets production, general-purpose, high-reasoning use cases and fits on a single 80 GB GPU (such as an NVIDIA H100 or AMD MI300X).
Available Serverless
Run queries immediately, pay only for usage
$0.05 per 1M input tokens / $0.45 per 1M output tokens
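A minimal sketch of a serverless call, assuming an OpenAI-compatible chat completions endpoint; the base URL, API key environment variable, and exact model identifier below are placeholders, not values confirmed by this page.

```python
# Minimal serverless query sketch. Assumes an OpenAI-compatible
# chat completions endpoint; the base URL and model identifier are
# placeholders -- substitute the values from your provider's docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-inference-provider.com/v1",  # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],                 # placeholder env var
)

response = client.chat.completions.create(
    model="gpt-oss-120b",  # model name as listed on this page; exact ID may vary
    messages=[{"role": "user", "content": "Summarize the benefits of MoE models."}],
)
print(response.choices[0].message.content)
```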
Specification
State: Available

Architecture
Calibrated: Yes
Mixture of Experts: Yes
Total Parameters: 120B
Activated Parameters: 5.1B
Reasoning: Yes
Precision: FP8
Context Length: 131K
Max Tokens: 8K
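The limits above constrain each request. The sketch below illustrates one way to budget for them; it assumes "Max Tokens" refers to the maximum completion length and uses a rough 4-characters-per-token estimate rather than the model's actual tokenizer.

```python
# Sketch: stay within the listed limits (131K-token context, 8K max tokens).
# The 4-chars-per-token estimate is a rough heuristic, not the real tokenizer.
CONTEXT_LIMIT = 131_000
MAX_OUTPUT_TOKENS = 8_000

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token)."""
    return max(1, len(text) // 4)

def build_request(prompt: str) -> dict:
    """Build a chat request body, reserving room for the completion budget."""
    if estimate_tokens(prompt) + MAX_OUTPUT_TOKENS > CONTEXT_LIMIT:
        raise ValueError("Prompt leaves no room for the completion budget.")
    return {
        "model": "gpt-oss-120b",          # placeholder ID; see provider docs
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": MAX_OUTPUT_TOKENS,   # capped at the listed 8K limit
    }
```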
Supported Functionality
Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Not supported
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Not supported
FIM Completion: Not supported
Chat Prefix Completion: Not supported
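JSON Mode is listed as supported. The sketch below shows how JSON mode is commonly requested on OpenAI-compatible APIs via response_format; the endpoint, credentials, and exact parameter name are assumptions, so check your provider's documentation.

```python
# Sketch: requesting JSON mode, listed as supported above.
# Assumes an OpenAI-compatible endpoint; base URL, API key variable,
# and the response_format convention are placeholders/assumptions.
import json
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-inference-provider.com/v1",  # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],                 # placeholder env var
)

response = client.chat.completions.create(
    model="gpt-oss-120b",  # placeholder ID; use the exact ID from your provider
    messages=[
        {"role": "system", "content": "Reply with a single JSON object."},
        {"role": "user", "content": "List three advantages of MoE models."},
    ],
    response_format={"type": "json_object"},  # JSON mode, per the support list above
    max_tokens=512,
)
print(json.loads(response.choices[0].message.content))
```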
