Gemini
Properties used to connect to Google AI Gemini.
gemini
- Type: {
model?: string,
system_prompt?: string,
maxOutputTokens?: number,
temperature?: number,
topP?: number,
topK?: number,
stopSequences?: string[],
responseMimeType?: string,
responseSchema?: object,
function_handler?: FunctionHandler,
tools?: GeminiTool[]
} - Default: {model: "gemini-2.0-flash"}
Connect to Google AI's generateContent API.
model is the name of the Gemini model used by the API. See the available models for the full list.
system_prompt is used to provide behavioral context and instructions to the model.
maxOutputTokens caps the number of tokens that can be generated in the response.
temperature controls the randomness/creativity of responses (0.0-1.0). Higher values produce more diverse outputs.
topP controls diversity through nucleus sampling (0.0-1.0).
topK limits token selection to the top K most probable tokens for controlling randomness.
stopSequences are strings that will cause generation to stop when encountered.
responseMimeType specifies the desired output format (e.g., "application/json").
responseSchema defines a structured schema for JSON responses when using the JSON output format (see the sketch below).
function_handler enables function calling capabilities for tool use.
tools defines available function declarations for the model to call.
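As an illustrative sketch (not part of the official example set), responseMimeType and responseSchema can be combined to request structured JSON output. The schema shape below follows Google's generateContent conventions and the property names listed above:
// a minimal sketch - assumes the <deep-chat> element has already been obtained as chatElementRef
chatElementRef.directConnection = {
  gemini: {
    key: 'placeholder-key',
    model: 'gemini-2.0-flash',
    responseMimeType: 'application/json',
    responseSchema: {
      type: 'object',
      properties: {
        city: {type: 'string'},
        summary: {type: 'string'},
      },
      required: ['city', 'summary'],
    },
  },
};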
Basic Example
- Sample code
- Full code
<deep-chat
directConnection='{
"gemini": {
"key": "placeholder key",
"system_prompt": "You are a helpful assistant.",
"temperature": 0.7
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"gemini": {
"key": "placeholder key",
"system_prompt": "You are a helpful assistant.",
"temperature": 0.7
}
}'
style="border-radius: 8px"
></deep-chat>
Use stream to stream the AI responses.
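For example (a minimal sketch - stream is a general Deep Chat property rather than part of the gemini config):
<deep-chat
  directConnection='{"gemini": {"key": "placeholder key"}}'
  stream="true"
></deep-chat>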
Vision Example
Upload images alongside your text prompts for visual understanding.
- Sample code
- Full code
<deep-chat
directConnection='{
"gemini": {
"key": "placeholder key",
}
}'
images="true"
camera="true"
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"gemini": {
"key": "placeholder key",
}
}'
images="true"
camera="true"
style="border-radius: 8px"
textInput='{"styles": {"container": {"width": "77%"}}}'
></deep-chat>
When sending images, we advise setting maxMessages to 1 to send less data and reduce costs.
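A sketch of that setup, assuming the chat-level requestBodyLimits property is used to cap the message history sent with each request:
<deep-chat
  directConnection='{"gemini": {"key": "placeholder key"}}'
  images="true"
  requestBodyLimits='{"maxMessages": 1}'
></deep-chat>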
Tool Calling
Gemini supports function calling:
GeminiTool
- Type: {
functionDeclarations: {
name: string,
description: string,
parameters: {
type: string,
properties: object,
required?: string[]
}
}[]
}
Array describing tools that the model may call.
functionDeclarations contains an array of function definitions.
name is the name of the tool function.
description explains what the tool does and when it should be used.
parameters defines the parameters the tool accepts in JSON Schema format.
FunctionHandler
- Type: (functionsDetails: FunctionsDetails) => {response: string}[] | {text: string}
The actual function that the component will call if the model wants to use tools.
functionsDetails contains information about what tool functions should be called.
This function should either return an array of objects, each containing a response property for one of the called functions (in the same order as in functionsDetails), which is fed back into the model to finalize its response, or return an object containing a text property, which is displayed immediately in the chat.
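As a minimal sketch of the second return form (assuming each entry in functionsDetails exposes name and arguments, as in the example below), a handler can skip the follow-up model call and print a result straight into the chat:
// a minimal sketch of the {text: string} return form - the returned text is
// displayed in the chat immediately and is not sent back to the model
const handler = (functionsDetails) => {
  const {name, arguments: args} = functionsDetails[0];
  return {text: `Called ${name} with ${JSON.stringify(args)}`};
};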
Example
- Sample code
- Full code
// using JavaScript for a simplified example
chatElementRef.directConnection = {
gemini: {
tools: [
{
functionDeclarations: [
{
name: 'get_current_weather',
description: 'Get the current weather in a given location',
parameters: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'The city and state, e.g. San Francisco, CA',
},
unit: {type: 'string', enum: ['celsius', 'fahrenheit']},
},
required: ['location'],
},
},
],
},
],
function_handler: (functionsDetails) => {
return functionsDetails.map((functionDetails) => {
return {
response: getCurrentWeather(functionDetails.arguments),
};
});
},
key: 'placeholder-key',
},
};
// using JavaScript for a simplified example
chatElementRef.directConnection = {
gemini: {
tools: [
{
functionDeclarations: [
{
name: 'get_current_weather',
description: 'Get the current weather in a given location',
parameters: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'The city and state, e.g. San Francisco, CA',
},
unit: {type: 'string', enum: ['celsius', 'fahrenheit']},
},
required: ['location'],
},
},
],
},
],
function_handler: (functionsDetails) => {
return functionsDetails.map((functionDetails) => {
return {
response: getCurrentWeather(functionDetails.arguments),
};
});
},
key: 'placeholder-key',
},
};
// Mock helper - returns hard-coded weather data for the demo instead of calling a real API
function getCurrentWeather(location) {
location = location.toLowerCase();
if (location.includes('tokyo')) {
return JSON.stringify({location, temperature: '10', unit: 'celsius'});
} else if (location.includes('san francisco')) {
return JSON.stringify({location, temperature: '72', unit: 'fahrenheit'});
} else {
return JSON.stringify({location, temperature: '22', unit: 'celsius'});
}
}