Completions
OpenAI Completions
Properties used to connect to OpenAI's legacy Chat Completions API.
completions
- Type:
true | {
system_prompt?: string,
model?: string,
max_tokens?: number,
temperature?: number,
top_p?: number,
modalities?: ['text', 'audio'],
audio?: {format: string, voice: string},
ChatFunctions
} - Default: {model: "gpt-4o"}
Connect to OpenAI's legacy Chat Completions API. You can set this property to true or configure it using an object:
system_prompt is used to set the "system" message for the conversation context.
model is the name of the model to be used by the API. Check /v1/chat/completions for more.
max_tokens is the maximum number of tokens to generate in the chat completion. Check tokenizer for more info.
temperature is used for sampling; between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused.
top_p is an alternative to sampling with temperature, where the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens
comprising the top 10% probability mass are considered.
modalities and audio are required to generate responses in audio format. Info here and see autoPlay.
ChatFunctions encompasses properties used for function calling.
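Putting the properties above together, a directConnection object that requests audio responses might look as follows. This is a hedged sketch: the model name and the format/voice values are assumed examples, so check OpenAI's model list for audio-capable chat models.

```javascript
// Sketch of a directConnection config that requests audio output.
// 'gpt-4o-audio-preview' and the 'alloy' voice are assumed example values.
const directConnection = {
  openAI: {
    key: 'placeholder key',
    completions: {
      model: 'gpt-4o-audio-preview',
      modalities: ['text', 'audio'], // both entries are required for audio output
      audio: {format: 'wav', voice: 'alloy'},
    },
  },
};
// e.g. chatElementRef.directConnection = directConnection;
```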
Basic Example
- Sample code
- Full code
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"completions": {"max_tokens": 2000, "system_prompt": "Assist me with anything you can"}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"completions": {"max_tokens": 2000, "system_prompt": "Assist me with anything you can"}
}
}'
style="border-radius: 8px"
></deep-chat>
Use stream to stream the AI responses.
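For instance, the basic example above can also be configured from JavaScript with streaming enabled. This is a sketch, assuming chatElementRef references the component:

```javascript
// Sketch: the same basic setup with streamed responses enabled.
const chatProps = {
  directConnection: {
    openAI: {
      key: 'placeholder key',
      completions: {max_tokens: 2000, system_prompt: 'Assist me with anything you can'},
    },
  },
  stream: true, // response tokens render incrementally as they arrive
};
// e.g. Object.assign(chatElementRef, chatProps);
```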
Files Example
You can send image and audio files in your conversation. Make sure you are using the correct model for each by checking model modalities.
- Sample code
- Full code
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"completions": {"model": "gpt-4.1"}
}}'
images="true"
camera="true"
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"completions": {"model": "gpt-4.1"}
}}'
images="true"
camera="true"
style="border-radius: 8px"
textInput='{"styles": {"container": {"width": "77%"}}}'
></deep-chat>
When sending files, we advise setting maxMessages to 1 to send less data and reduce costs.
microphone is not supported for audio chat; instead, we recommend using the realtime sts API.
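As a sketch of the maxMessages advice above (assuming it is set via the component's requestBodyLimits property):

```javascript
// Sketch: limit the request body to the latest message so file uploads
// are not re-sent alongside the whole conversation history.
const chatProps = {
  directConnection: {
    openAI: {key: 'placeholder key', completions: {model: 'gpt-4.1'}},
  },
  requestBodyLimits: {maxMessages: 1}, // only the newest message (and its files) is sent
};
```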
Functions
Examples for OpenAI's Function Calling features:
Chat Functions
- Type: {
tools: Tools,
tool_choice?: "auto" | {type: "function", function: {name: string}},
function_handler: FunctionHandler
}
Configure the completions API to call your functions via the OpenAI Function calling API.
This is particularly useful if you want the model to analyze a user's request, check whether a function should be called, extract the relevant information
from their text, and return it all in a standardized response for you to act on.
tools defines the functions that the model can signal to call based on the user's text.
tool_choice controls which (if any) function should be called.
function_handler is the actual function that is called with the model's instructions.
- Sample code
- Full code
// using JavaScript for a simplified example
chatElementRef.directConnection = {
openAI: {
completions: {
tools: [
{
type: 'function',
function: {
name: 'get_current_weather',
description: 'Get the current weather in a given location',
parameters: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'The city and state, e.g. San Francisco, CA',
},
unit: {type: 'string', enum: ['celsius', 'fahrenheit']},
},
required: ['location'],
},
},
},
],
function_handler: (functionsDetails) => {
return functionsDetails.map((functionDetails) => {
return {
response: getCurrentWeather(functionDetails.arguments),
};
});
},
},
key: 'placeholder-key',
},
};
// using JavaScript for a simplified example
chatElementRef.directConnection = {
openAI: {
completions: {
tools: [
{
type: 'function',
function: {
name: 'get_current_weather',
description: 'Get the current weather in a given location',
parameters: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'The city and state, e.g. San Francisco, CA',
},
unit: {type: 'string', enum: ['celsius', 'fahrenheit']},
},
required: ['location'],
},
},
},
],
function_handler: (functionsDetails) => {
return functionsDetails.map((functionDetails) => {
return {
response: getCurrentWeather(functionDetails.arguments),
};
});
},
},
key: 'placeholder-key',
},
};
function getCurrentWeather(functionArguments) {
  // the model's arguments arrive as a JSON string, e.g. '{"location": "Tokyo"}'
  const location = JSON.parse(functionArguments).location.toLowerCase();
  if (location.includes('tokyo')) {
    return JSON.stringify({location, temperature: '10', unit: 'celsius'});
  } else if (location.includes('san francisco')) {
    return JSON.stringify({location, temperature: '72', unit: 'fahrenheit'});
  } else {
    return JSON.stringify({location, temperature: '22', unit: 'celsius'});
  }
}
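The example above leaves tool_choice unset, so the model decides whether to call the function ("auto"). To force a particular tool to be called instead, tool_choice can name it explicitly. A hedged sketch, with the handler returning a fixed response for brevity:

```javascript
// Sketch: force the get_current_weather tool to be called on every request.
const completionsConfig = {
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_current_weather',
        description: 'Get the current weather in a given location',
        parameters: {
          type: 'object',
          properties: {location: {type: 'string'}},
          required: ['location'],
        },
      },
    },
  ],
  // the model must call this function rather than deciding itself
  tool_choice: {type: 'function', function: {name: 'get_current_weather'}},
  function_handler: (functionsDetails) =>
    // a fixed placeholder response for each tool call, in order
    functionsDetails.map(() => ({response: JSON.stringify({temperature: '22', unit: 'celsius'})})),
};
```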
Tools
- Type: {
type: "function" | "object",
function:{name: string,description?: string,parameters: JSONSchema}
}[]
An array describing tools that the model may call.
name is the name of a function.
description is used by the model to understand what the function does and when it should be called.
parameters are arguments that the function accepts defined in a JSON Schema (see example above).
Check out the following guide for more about function calling.
If your function accepts arguments, the type property should be set to "function"; otherwise use the following object: {"type": "object", "properties": {}}.
FunctionHandler
- Type: (
functionsDetails: FunctionsDetails) => {response: string}[] | {text: string}
The actual function that the component will call if the model wants a response from the tools functions.
functionsDetails contains information about what tool functions should be called.
This function should either return an array of JSON objects containing a response property for each tool function (in the same order as in
functionsDetails), which will be fed back into the model to finalise a response, or return a JSON object containing
text, which will be displayed immediately in the chat without sending any details to the model.
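For example, a handler can answer a tool call directly in the chat and bypass the model entirely by returning {text}. A hedged sketch (the function name and reply wording are made-up examples):

```javascript
// Sketch: returning {text} displays the result immediately in the chat
// and sends nothing back to the model.
const function_handler = (functionsDetails) => {
  const [first] = functionsDetails;
  const args = JSON.parse(first.arguments); // arguments arrive as a JSON string
  return {text: `Looked up ${first.name} for ${args.location}.`};
};
```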