OpenAI
Properties used to connect to OpenAI.
openAI
- Type: {chat?: Chat, assistant?: Assistant, images?: Images, textToSpeech?: TextToSpeech, speechToText?: SpeechToText}
- Default: {chat: true}
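As a minimal sketch (Vanilla JS), several services can be enabled at once through the openAI object - the key is a placeholder:
// A minimal sketch (Vanilla JS) - enables both the chat and textToSpeech services
const chatElementRef = document.querySelector('deep-chat');
chatElementRef.directConnection = {
  openAI: {
    key: 'placeholder key',
    chat: true,
    textToSpeech: true,
  },
};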
Service Types
Chat
- Type: true | {system_prompt?: string, model?: string, max_tokens?: number, temperature?: number, top_p?: number, ChatFunctions}
- Default: {system_prompt: "You are a helpful assistant.", model: "gpt-4o"}
Connect to OpenAI's chat API. You can set this property to true or configure it using an object:
- system_prompt is used to set the "system" message for the conversation context.
- model is the name of the model to be used by the API. Check /v1/chat/completions for more.
- max_tokens is the maximum number of tokens to generate in the chat. Check the tokenizer for more info.
- temperature is used for sampling; between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused.
- top_p is an alternative to sampling with temperature, where the model considers the results of the tokens with top_p probability mass. A value of 0.1 means only the tokens comprising the top 10% probability mass are considered.
- ChatFunctions encompasses properties used for function calling.
Basic Example
- Sample code
- Full code
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"chat": {"max_tokens": 2000, "system_prompt": "Assist me with anything you can"}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"chat": {"max_tokens": 2000, "system_prompt": "Assist me with anything you can"}
}
}'
style="border-radius: 8px"
></deep-chat>
Vision Example
If max_tokens is not set, the component sets it to 300, as otherwise the API does not send a full response.
- Sample code
- Full code
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"chat": {"model": "gpt-4-vision-preview"}
}}'
images="true"
camera="true"
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"chat": {"model": "gpt-4-vision-preview"}
}}'
images="true"
camera="true"
style="border-radius: 8px"
textInput='{"styles": {"container": {"width": "77%"}}}'
></deep-chat>
Assistant
- Type: true | {assistant_id?: string, thread_id?: string, load_thread_history?: boolean, new_assistant?: NewAssistant, files_tool_type?: FileToolTypes, function_handler?: AssistantFunctionHandler}
Connect to your OpenAI assistant. When set to true or when assistant_id is not defined, Deep Chat will automatically create a new assistant when the user sends the first message.
- assistant_id is the id of your assistant.
- thread_id allows you to communicate in the context of an already existing conversation/thread.
- load_thread_history toggles a preload of the previous conversation/thread messages on chat initialisation.
- new_assistant defines the details for the newly created assistant.
- files_tool_type defines the type of tool to be used to process an uploaded file.
- function_handler is the actual function used to handle the model's function response. Please navigate to Assistant Functions for more info.
- Sample code
- Full code
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"assistant": true
}}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"assistant": true
}}'
style="border-radius: 8px"
></deep-chat>
The returned MessageContent contains a hidden property called _sessionId which stores the thread id and allows the conversation to continue in a new session.
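Below is a minimal sketch of how the stored thread id could be retrieved, assuming the component's getMessages method returns the current MessageContent objects:
// A minimal sketch (Vanilla JS) - assumes getMessages() returns the chat's MessageContent objects
const chatElementRef = document.querySelector('deep-chat');
const messages = chatElementRef.getMessages();
const threadId = messages[0]?._sessionId; // hidden property storing the thread id
console.log(threadId);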
NewAssistant
- Type: {model?: string, name?: string, description?: string, instructions?: string, tools?: {type?: "code_interpreter" | "file_search" | "function", function?: {name: string, description?: string, parameters?: object}}[], tool_resources?: {code_interpreter?: {file_ids: string[]}, file_search?: {vector_store_ids: string[], vector_stores?: {file_ids: string[]}}}}
- Default: {model: "gpt-4"}
When assistant_id is not used, this object defines the details of the new assistant that Deep Chat will create when the user sends a new message. This object follows the OpenAI Create Assistant API.
- model is the name of the model to be used by the API. Check the model overview for more.
- name and description are used to describe the new assistant.
- instructions direct the assistant's behaviour.
- tool_resources defines the resources that the assistant has access to.
- tools is an array of objects that describe the tools the assistant will have access to. When using the "function" tool, you will also need to define the function object.
- Sample code
- Full code
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"assistant": {
"new_assistant": {
"name": "Demo Assistant",
"tools": [{"type": "code_interpreter"}]
}}
}}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"assistant": {
"new_assistant": {
"name": "Demo Assistant",
"tools": [{"type": "code_interpreter"}]
}}
}}'
mixedFiles="true"
style="border-radius: 8px"
></deep-chat>
You can access the created assistant_id via chatElementRef._activeService.rawBody.assistant_id.
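For example, a minimal sketch (Vanilla JS) that reads the generated id once the assistant has been created:
// A minimal sketch (Vanilla JS) - reads the id of the assistant that Deep Chat created
const chatElementRef = document.querySelector('deep-chat');
const assistantId = chatElementRef._activeService.rawBody.assistant_id;
console.log(assistantId);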
FileToolTypes
- Type: FileToolType | (fileNames: string[]) => FileToolType
- Default: "images"
This is used to define the type of tool that will be used to process uploaded files. You can either define it as a string or as a function that returns the tool type based on the uploaded files (see the function-based sketch after the example below).
When nothing is defined and the user uploads an image, Deep Chat will automatically use "images", which will not use any tools and will send the image directly to the vision model.
It is important to note that the "code_interpreter" and "file_search" tools must be toggled ON in the assistant that you are using before the files are uploaded. This can be done either in the OpenAI Assistant Playground or in the NewAssistant object.
- Sample code
- Full code
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"assistant": {
"assistant_id": "assistant with code interpreter",
"files_tool_type": "code_interpreter"
}}}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"assistant": {
"assistant_id": "assistant with code interpreter",
"files_tool_type": "code_interpreter"
}}}'
mixedFiles="true"
style="border-radius: 8px"
></deep-chat>
When uploading a file, the user must also submit a text message.
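Below is a minimal sketch of the function form, assuming a hypothetical assistant id and a simple file-extension check:
// A minimal sketch (Vanilla JS) of the function form - the key and assistant id are placeholders
const chatElementRef = document.querySelector('deep-chat');
chatElementRef.directConnection = {
  openAI: {
    key: 'placeholder key',
    assistant: {
      assistant_id: 'placeholder-assistant-id',
      // pick the tool type based on the names of the uploaded files
      files_tool_type: (fileNames) => {
        if (fileNames.some((name) => name.endsWith('.csv'))) return 'code_interpreter';
        if (fileNames.some((name) => name.endsWith('.pdf'))) return 'file_search';
        return 'images';
      },
    },
  },
};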
FileToolType
- Type: "code_interpreter" | "file_search" | "images"
Type of tool used to process an uploaded file. Find out more information in "code_interpreter" and "file_search". "images" is technically not a tool but a way to indicate that image files will be sent directly to a vision model.
Images
Connect to OpenAI's Images API. Set this property to true or use either of the Dall-e-2 or Dall-e-3 objects.
You can automatically call any of the following three APIs by combining different inputs:
- Create Image - Send text.
- Create Image Variation - Upload and send an image with no text.
- Create Image Edit - Upload an image and add text. You can also upload a second image to be used as a mask.
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"images": {"n": 1, "size": "1024x1024", "response_format": "url"}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"images": {"n": 2, "size": "1024x1024", "response_format": "url"}
}
}'
style="border-radius: 8px"
></deep-chat>
Dall-e-2
- Type: {model?: "dall-e-2", n?: number, size?: "256x256" | "512x512" | "1024x1024", response_format?: "url" | "b64_json", user?: number}
- Default: {model: "dall-e-2", size: "1024x1024"}
- model is the name of the specific model to be used by the API.
- n is the number of images to generate. Ranges between 1 and 10.
- size is the pixel dimensions of the generated images.
- response_format is the format in which the generated images are returned.
- user is a unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. More info can be found here.
Dall-e-3
- Type: {model: "dall-e-3", size?: "1024x1024" | "1792x1024" | "1024x1792", response_format?: "url" | "b64_json", user?: number}
- Default: {size: "1024x1024"}
- model is the name of the specific model to be used by the API.
- size is the pixel dimensions of the generated images.
- response_format is the format in which the generated images are returned.
- user is a unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. More info can be found here.
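As a minimal sketch, the Dall-e-3 object could be connected via JavaScript as follows - the key is a placeholder:
// A minimal sketch (Vanilla JS) connecting the Images service to the dall-e-3 model
const chatElementRef = document.querySelector('deep-chat');
chatElementRef.directConnection = {
  openAI: {
    key: 'placeholder key',
    images: {model: 'dall-e-3', size: '1024x1024', response_format: 'url'},
  },
};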
TextToSpeech
- Type: true | {model?: string, voice?: string, speed?: number}
- Default: {model: "tts-1", voice: "alloy", speed: 1}
Connect to OpenAI's Text To Speech API. You can set this property to true or configure it using an object:
- model defines the target model used by the API. Check /v1/audio/speech for more.
- voice is the name of the voice used in the generated audio.
- speed defines the speed of the generated audio. It accepts a value between 0.25 and 4.0.
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"textToSpeech": {"voice": "echo"}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"textToSpeech": {"voice": "echo"}
}
}'
style="border-radius: 8px"
></deep-chat>
SpeechToText
- Type: true | {model?: "whisper-1", temperature?: number, language?: string, type?: "transcription" | "translation"}
- Default: {model: "whisper-1", type: "transcription"}
Connect to OpenAI's Speech To Text API. You can set this property to true or configure it using an object:
- model is the name of the model to use. "whisper-1" is currently the only one available.
- temperature is used for sampling; between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused.
- language is the language used in the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency. (Only used for the transcription API.)
- type is used to toggle between the transcription and the translation APIs. Note that translation can only attempt to translate audio into English.
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"speechToText": {"model": "whisper-1", "temperature": 0.3, "language": "en", "type": "transcription"}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"openAI": {
"key": "placeholder key",
"audio": {"model": "whisper-1", "temperature": 0.3, "language": "en", "type": "transcription"}
}
}'
style="border-radius: 8px"
></deep-chat>
Functions
Examples for OpenAI's Function Calling features:
Chat Functions
- Type: {tools: Tools, tool_choice?: "auto" | {type: "function", function: {name: string}}, function_handler: FunctionHandler}
Configure the chat to call your functions via the OpenAI Function calling API.
This is particularly useful if you want the model to analyze the user's requests, check whether a function should be called, extract the relevant information from their text and return it all in a standardized response for you to act on.
- tools defines the functions that the model can signal to call based on the user's text.
- tool_choice controls which (if any) function should be called.
- function_handler is the actual function that is called with the model's instructions.
- Sample code
- Full code
// using JavaScript for a simplified example
chatElementRef.directConnection = {
  openAI: {
    chat: {
      tools: [
        {
          type: 'function',
          function: {
            name: 'get_current_weather',
            description: 'Get the current weather in a given location',
            parameters: {
              type: 'object',
              properties: {
                location: {
                  type: 'string',
                  description: 'The city and state, e.g. San Francisco, CA',
                },
                unit: {type: 'string', enum: ['celsius', 'fahrenheit']},
              },
              required: ['location'],
            },
          },
        },
      ],
      function_handler: (functionsDetails) => {
        return functionsDetails.map((functionDetails) => {
          return {
            response: getCurrentWeather(functionDetails.arguments),
          };
        });
      },
    },
    key: 'placeholder-key',
  },
};
// using JavaScript for a simplified example
chatElementRef.directConnection = {
  openAI: {
    chat: {
      tools: [
        {
          type: "function",
          function: {
            name: "get_current_weather",
            description: "Get the current weather in a given location",
            parameters: {
              type: "object",
              properties: {
                location: {
                  type: "string",
                  description: "The city and state, e.g. San Francisco, CA",
                },
                unit: {type: "string", enum: ["celsius", "fahrenheit"]},
              },
              required: ["location"],
            },
          },
        },
      ],
      function_handler: (functionsDetails) => {
        return functionsDetails.map((functionDetails) => {
          return {
            response: getCurrentWeather(functionDetails.arguments),
          };
        });
      },
    },
    key: "placeholder-key",
  },
};

function getCurrentWeather(location) {
  location = location.toLowerCase();
  if (location.includes('tokyo')) {
    return JSON.stringify({location, temperature: '10', unit: 'celsius'});
  } else if (location.includes('san francisco')) {
    return JSON.stringify({location, temperature: '72', unit: 'fahrenheit'});
  } else {
    return JSON.stringify({location, temperature: '22', unit: 'celsius'});
  }
}
Tools
- Type: {type: "function" | "object", function: {name: string, description?: string, parameters: JSONSchema}}[]
An array describing tools that the model may call.
- name is the name of a function.
- description is used by the model to understand what the function does and when it should be called.
- parameters are arguments that the function accepts, defined in a JSON Schema (see the example above).
Check out the following guide for more about function calling.
If your function accepts arguments, the type property should be set to "function"; otherwise use the following object: {"type": "object", "properties": {}}.
FunctionHandler
- Type: (functionsDetails: FunctionsDetails) => {response: string}[] | {text: string}
The actual function that the component will call if the model wants a response from the tools functions.
functionsDetails contains information about which tool functions should be called.
This function should either return an array of JSONs containing a response property for each tool function (in the same order as in functionsDetails), which will be fed back into the model to finalise a response, or return a JSON containing a text property, which will be displayed immediately in the chat without sending any details back to the model.
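As a minimal sketch (assuming the getCurrentWeather helper from the example above), a handler that bypasses the model and displays text directly could look like this:
// A minimal sketch (Vanilla JS) - returning {text} displays the result in the chat immediately
// and no details are sent back to the model; getCurrentWeather is the helper from the example above
const functionHandler = (functionsDetails) => {
  const summaries = functionsDetails.map((functionDetails) => {
    return `${functionDetails.name}: ${getCurrentWeather(functionDetails.arguments)}`;
  });
  return {text: summaries.join('\n')};
};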
Assistant Functions
- Type: (functionsDetails: FunctionsDetails) => string[]
The function_handler property can be assigned the actual function that the component will call if the model wants a response from your preconfigured assistant's functions inside the OpenAI Assistants platform.
functionsDetails contains information about which functions should be called.
This function should return an array of strings defining the response for each function described in functionsDetails (in the same order), which will be fed back into the assistant to finalise a response.
Try it out live in the Deep Chat Playground.
- Sample code
- Full code
// using JavaScript for a simplified example
chatElementRef.directConnection = {
  openAI: {
    assistant: {
      assistant_id: 'placeholder-id',
      function_handler: (functionsDetails) => {
        return functionsDetails.map((functionDetails) => getCurrentWeather(functionDetails.arguments));
      },
    },
    key: 'placeholder-key',
  },
};
// using JavaScript for a simplified example
chatElementRef.directConnection = {
  openAI: {
    assistant: {
      assistant_id: 'placeholder-id',
      function_handler: (functionsDetails) => {
        return functionsDetails.map((functionDetails) => getCurrentWeather(functionDetails.arguments));
      },
    },
    key: 'placeholder-key',
  },
};

function getCurrentWeather(location) {
  location = location.toLowerCase();
  if (location.includes('tokyo')) {
    return 'Good';
  } else if (location.includes('san francisco')) {
    return 'Mild';
  } else {
    return 'Very Hot';
  }
}
Shared Types
Types used in Functions properties:
FunctionsDetails
- Type: {name: string, arguments: string}[]
Array of objects containing information about the functions that the model wants to call.
- name is the name of the target function.
- arguments is a stringified JSON containing properties based on the parameters defined for the function.
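As a minimal sketch (assuming the getCurrentWeather helper from the examples above, which accepts a location string), the arguments string would be parsed inside a handler like this:
// A minimal sketch (Vanilla JS) - arguments arrives as stringified JSON and must be parsed before use
const functionHandler = (functionsDetails) => {
  return functionsDetails.map((functionDetails) => {
    const args = JSON.parse(functionDetails.arguments); // e.g. {"location": "San Francisco, CA"}
    return {response: getCurrentWeather(args.location)};
  });
};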