OpenAI’s lightweight, terminal-optimized Codex model, designed specifically for the Codex CLI to drive code-generation workflows directly from your shell.
The company that provides the model
The maximum number of tokens the model can consider in a single request, including your prompt
The maximum number of tokens a model can generate in one request
The cost of prompt tokens sent to the model
The cost of output tokens generated by the model
When the model's knowledge ends
When the model was launched
Capability for the model to use external tools
Ability to process and analyze visual inputs, like images
Support for multiple languages
Whether the model supports fine-tuning on custom datasets
Codex Mini costs $1.50 per million input tokens and $6.00 per million output tokens.
The input token cost for Codex Mini is $1.50 per million input tokens.
The output token cost for Codex Mini is $6.00 per million output tokens.
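The per-million-token rates above make request pricing a simple proportion. As a minimal sketch (the rates come from this page; the helper name and example token counts are illustrative):

```python
# Estimate the price of a single Codex Mini request from the stated
# per-million-token rates: $1.50 for input, $6.00 for output.
INPUT_RATE_PER_M = 1.50   # USD per 1M input tokens
OUTPUT_RATE_PER_M = 6.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 20,000-token prompt that yields a 5,000-token completion.
cost = estimate_cost(20_000, 5_000)
print(f"${cost:.3f}")  # 20k x $1.50/M + 5k x $6.00/M = $0.03 + $0.03 = $0.060
```

Note that output tokens cost four times as much as input tokens, so long completions dominate the bill for generation-heavy workloads.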
Codex Mini supports a context window of up to 200,000 tokens.
Codex Mini can generate up to 100,000 tokens in a single output.
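These two limits can be checked before sending a request. A minimal sketch, assuming (as is typical) that the requested output counts against the shared context window; the function name and example numbers are illustrative:

```python
# Sanity-check a request against Codex Mini's stated limits:
# a 200,000-token context window and a 100,000-token output cap.
CONTEXT_WINDOW = 200_000
MAX_OUTPUT_TOKENS = 100_000

def fits_limits(prompt_tokens: int, requested_output_tokens: int) -> bool:
    """True if the output request is within the model's output cap and
    prompt plus requested output fit inside the context window."""
    if requested_output_tokens > MAX_OUTPUT_TOKENS:
        return False
    return prompt_tokens + requested_output_tokens <= CONTEXT_WINDOW

print(fits_limits(150_000, 50_000))  # True: 200,000 total, output under cap
print(fits_limits(150_000, 60_000))  # False: 210,000 exceeds the window
```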
Codex Mini was released on May 16, 2025.
The knowledge cut-off date for Codex Mini is May 31, 2024.
Yes, Codex Mini supports vision capabilities.
No, Codex Mini does not support tool calling (functions).
Yes, Codex Mini is multilingual and can handle input and output in several languages.
No, Codex Mini does not support fine-tuning.
You can find the official documentation for Codex Mini in OpenAI's model documentation.