Grok 4 is a powerful reasoning model with multimodal support.
Provider: The company that provides the model
Context window: The number of tokens you can send in a prompt
Maximum output tokens: The maximum number of tokens a model can generate in one request
Input cost: The cost of prompt tokens sent to the model
Output cost: The cost of output tokens generated by the model
Knowledge cutoff: When the model's knowledge ends
Release date: When the model was launched
Tool use: Capability for the model to use external tools
Vision: Ability to process and analyze visual inputs, like images
Multilingual: Support for multiple languages
Fine-tuning: Whether the model supports fine-tuning on custom datasets
Grok 4 is xAI’s flagship reasoning model, offering powerful chain-of-thought capabilities alongside native multimodal support for both vision and text.
It costs $3.00 per million input tokens and $15.00 per million output tokens.
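To see how those rates translate into actual spend, here is a rough back-of-the-envelope calculation in Python; the token counts are illustrative only, not measured usage.

```python
# Rough cost estimate for a single Grok 4 request at the published rates.
INPUT_COST_PER_M = 3.00    # USD per 1M input tokens
OUTPUT_COST_PER_M = 15.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_COST_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_COST_PER_M

# Example: a 200K-token prompt that produces a 4K-token response.
print(f"${request_cost(200_000, 4_000):.4f}")  # -> $0.6600
```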
Grok 4 supports a context window of up to 256,000 tokens, making it well-suited for long-form inputs.
The maximum output length isn’t explicitly specified; in practice, responses are bounded by the 256K-token context window, which is shared between input and output.
Grok 4 was released on July 10, 2025.
The knowledge cutoff date isn’t publicly documented.
Yes, Grok 4 can process and reason over visual inputs like images.
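For illustration, the sketch below sends an image URL alongside a text question. It assumes xAI exposes an OpenAI-compatible chat completions endpoint at https://api.x.ai/v1 and that the model is addressable as "grok-4"; confirm both in the xAI docs linked below. The image URL is a placeholder.

```python
# Minimal sketch: sending an image to Grok 4 for analysis.
# Assumes an OpenAI-compatible chat completions API at https://api.x.ai/v1
# and the model name "grok-4"; verify both against the xAI docs before use.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],  # your xAI API key
    base_url="https://api.x.ai/v1",
)

response = client.chat.completions.create(
    model="grok-4",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder
            ],
        }
    ],
)
print(response.choices[0].message.content)
```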
Yes, it supports function calling for integration with external tools and APIs.
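Here is a similar sketch of function calling under the same assumptions; the get_weather tool is hypothetical and stands in for whatever function you want to expose to the model.

```python
# Minimal sketch of tool use (function calling) with Grok 4.
# Same assumptions as above: OpenAI-compatible endpoint and "grok-4" model name.
# The "get_weather" tool is hypothetical and exists only for illustration.
import json
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="grok-4",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)

# If the model decided to call the tool, inspect the proposed call and arguments.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

The model only proposes the call; executing the function and returning its result to the model in a follow-up message is handled by your application.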
Yes, Grok 4 handles multiple languages for both input and output.
No, fine-tuning is not available for Grok 4.
See the xAI models documentation here:
https://docs.x.ai/docs#models