GPT OSS 120B
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for reasoning-heavy, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
Knowledge cutoff: the date the model finished training; it may not know about events after this date.
Modalities: the types of content the model can accept as input and produce as output.
Context length: the maximum amount of text (in tokens) the model can read and process in a single request; a larger context allows longer documents or conversations.
Pricing: the cost of using the model directly in your own application, shown in USD per 1 million tokens.
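Because the weights are open, the model is typically self-hosted behind an OpenAI-compatible endpoint. The sketch below shows how function calling and reasoning depth might be exercised under that setup; the base URL, the model identifier "openai/gpt-oss-120b", the "Reasoning: high" system-prompt convention, and the get_weather tool are illustrative assumptions, not a definitive integration.

```python
# Minimal sketch, assuming gpt-oss-120b is served behind an OpenAI-compatible
# endpoint (for example, a local vLLM server at http://localhost:8000/v1).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Hypothetical tool definition to illustrate native function calling.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[
        # Reasoning depth is assumed to be requested via the system prompt.
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "What's the weather in Berlin right now?"},
    ],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model chose to call the tool; arguments arrive as a JSON string.
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

In a full loop, the application would execute the requested tool, append the result as a "tool" role message, and call the model again so it can produce the final answer from the tool output.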