Gemini 1.5 Pro overview

Provider

The company that provides the model

Google

Context window

The number of tokens you can send in a prompt

2,097,152 tokens

Maximum output

The maximum number of tokens a model can generate in one request

8,192 tokens

Input token cost

The cost of prompt tokens sent to the model

$1.25 / 1M input tokens (for prompts up to 128k tokens)

Output token cost

The cost of output tokens generated by the model

$5.00 / 1M output tokens (for prompts up to 128k tokens)

Knowledge cut-off date

When the model's knowledge ends

Unknown

Release date

When the model was launched

February 15, 2024

Gemini 1.5 Pro functionality

Function (tool calling) support

Capability for the model to use external tools

Yes

Vision support

Ability to process and analyze visual inputs, like images

Yes

Multilingual

Support for multiple languages

Yes

Fine-tuning

Whether the model supports fine-tuning on custom datasets

Yes - gemini-1.5-pro-002 can be fine-tuned

Common questions about Gemini 1.5 Pro

How much does Gemini 1.5 Pro cost?

Gemini 1.5 Pro costs $1.25 per million input tokens and $5.00 per million output tokens for prompts up to 128k tokens.

What is the API cost for Gemini 1.5 Pro?

The API cost for Gemini 1.5 Pro is $1.25 per million input tokens and $5.00 per million output tokens for prompts up to 128k tokens.

What is the price per token for Gemini 1.5 Pro?

For Gemini 1.5 Pro, that works out to $0.00125 per 1,000 input tokens and $0.005 per 1,000 output tokens for prompts up to 128k tokens.
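
To make the arithmetic concrete, here is a minimal cost-estimation sketch in Python. It is only an illustration: the constants mirror the rates quoted above for prompts up to 128k tokens, and the function name and example token counts are made up for the demonstration.

```python
# Published rates for Gemini 1.5 Pro (prompts up to 128k tokens).
INPUT_COST_PER_MILLION = 1.25   # USD per 1M input tokens
OUTPUT_COST_PER_MILLION = 5.00  # USD per 1M output tokens


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single Gemini 1.5 Pro request.

    Prompts longer than 128k tokens are billed at a higher rate,
    which is not modeled here.
    """
    input_cost = input_tokens / 1_000_000 * INPUT_COST_PER_MILLION
    output_cost = output_tokens / 1_000_000 * OUTPUT_COST_PER_MILLION
    return input_cost + output_cost


# Example: a 10,000-token prompt with a 1,000-token response costs
# $0.0125 + $0.005 = $0.0175.
print(f"${estimate_cost(10_000, 1_000):.4f}")  # -> $0.0175
```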

What is the context window for Gemini 1.5 Pro?

Gemini 1.5 Pro supports a context window of up to 2,097,152 tokens, offering extensive input capabilities.
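
If you want to check that a prompt fits inside that window before sending it, the Gemini API exposes a token-counting call. A minimal sketch, assuming the google-generativeai Python SDK and a placeholder API key:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")

CONTEXT_WINDOW = 2_097_152  # Gemini 1.5 Pro context window, in tokens

prompt = "..."  # your (potentially very long) prompt text
token_count = model.count_tokens(prompt).total_tokens

if token_count > CONTEXT_WINDOW:
    print(f"Prompt is {token_count} tokens; it exceeds the {CONTEXT_WINDOW}-token window.")
else:
    print(f"Prompt is {token_count} tokens; it fits in the context window.")
```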

What is the maximum output length for Gemini 1.5 Pro?

Gemini 1.5 Pro can generate up to 8,192 tokens in a single output.
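
The 8,192-token ceiling applies per request, and you can set it (or a lower cap) explicitly through the generation config. A minimal sketch, again assuming the google-generativeai Python SDK with a placeholder API key and prompt:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")

response = model.generate_content(
    "Summarize the history of the transformer architecture.",
    generation_config=genai.GenerationConfig(
        max_output_tokens=8192,  # the model's maximum; use a smaller value to cap cost
        temperature=0.7,
    ),
)
print(response.text)
```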

When was Gemini 1.5 Pro released?

Gemini 1.5 Pro was released on February 15, 2024.

Does Gemini 1.5 Pro support vision capabilities?

Yes, Gemini 1.5 Pro supports vision capabilities.
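
In practice, vision inputs are passed alongside text in the same request. A minimal sketch, assuming the google-generativeai Python SDK; the API key and image path are placeholders:

```python
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")

# Mix an image and a text instruction in one multimodal prompt.
image = PIL.Image.open("chart.png")  # placeholder path
response = model.generate_content([image, "Describe the main trend shown in this chart."])
print(response.text)
```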

Can Gemini 1.5 Pro perform tool calling or functions?

Yes, Gemini 1.5 Pro supports tool calling (functions).
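
Tool (function) calling lets the model decide when to invoke code you supply. The sketch below assumes the google-generativeai Python SDK's automatic function calling; get_weather is a toy function invented for the example.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key


def get_weather(city: str) -> str:
    """Return a (fake) weather report for the given city."""
    return f"It is 21°C and sunny in {city}."


# Plain Python functions can be passed as tools; the SDK derives their
# schemas from the type hints and docstrings.
model = genai.GenerativeModel("gemini-1.5-pro", tools=[get_weather])
chat = model.start_chat(enable_automatic_function_calling=True)

# The model calls get_weather behind the scenes and folds the result
# into its final answer.
response = chat.send_message("What's the weather like in Paris right now?")
print(response.text)
```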

Is Gemini 1.5 Pro a multilingual model?

Yes, Gemini 1.5 Pro supports multiple languages, allowing it to handle input and output in several languages.

Does Gemini 1.5 Pro support fine-tuning?

Yes, Gemini 1.5 Pro supports fine-tuning. The model version gemini-1.5-pro-002 can be fine-tuned.
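
Tuning gemini-1.5-pro-002 is typically done through Google Cloud rather than the plain Gemini API. Below is a rough sketch of what starting a tuning job can look like, assuming Vertex AI's supervised fine-tuning interface (vertexai.tuning.sft); the project ID, region, and Cloud Storage path are placeholders:

```python
import vertexai
from vertexai.tuning import sft

# Placeholder project and region.
vertexai.init(project="your-gcp-project", location="us-central1")

# train_dataset points at a JSONL file of prompt/response examples in Cloud Storage.
tuning_job = sft.train(
    source_model="gemini-1.5-pro-002",
    train_dataset="gs://your-bucket/train_data.jsonl",
)

# The job runs asynchronously in your project; monitor it in the Vertex AI
# console or poll the returned job object until it finishes.
print(tuning_job)
```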

Where can I find the official documentation for Gemini 1.5 Pro?

You can find the official documentation for Gemini 1.5 Pro in Google's Gemini API documentation.

Better LLM outputs are a click away

PromptHub is a better way to test, manage, and deploy prompts for your AI products.