BigModel
Properties used to connect to BigModel (智谱 AI).
bigModel
- Type: {chat?: Chat, images?: Images, textToSpeech?: TextToSpeech}
- Default: {chat: true}
Service Types
Chat
- Type: true | {
    model?: string,
    system_prompt?: string,
    max_tokens?: number,
    temperature?: number,
    top_p?: number,
    tools?: object[],
    tool_choice?: 'auto' | {type: 'function'; function: {name: string}},
    function_handler?: FunctionHandler
  }
- Default: {model: "glm-4.5"}
Connect to BigModel's chat/completions API. You can set this property to true or configure it using an object:
model is the name of the BigModel model to be used by the API. See available models for the full list.
system_prompt provides behavioral context and instructions to the model.
max_tokens limits the maximum number of tokens in the generated response.
temperature controls the randomness/creativity of responses (0.0-1.0). Higher values produce more diverse outputs.
top_p controls diversity through nucleus sampling (0.0-1.0).
tools is an array of function definitions that the model can call.
tool_choice controls how the model uses tools: 'auto' lets the model decide, or you can specify a particular function.
function_handler is the function invoked when the model calls a tool. See Tool Calling below.
Example
- Sample code
- Full code
<deep-chat
  directConnection='{
    "bigModel": {
      "key": "placeholder key",
      "chat": {"system_prompt": "You are a helpful assistant.", "temperature": 0.7}
    }
  }'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
  directConnection='{
    "bigModel": {
      "key": "placeholder key",
      "chat": {"system_prompt": "You are a helpful assistant.", "temperature": 0.7}
    }
  }'
  style="border-radius: 8px"
></deep-chat>
Use stream to stream the AI responses.
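For instance, a minimal sketch (assuming the stream property described in the Connect documentation is applied directly on the element):
<deep-chat
  directConnection='{
    "bigModel": {
      "key": "placeholder key",
      "chat": true
    }
  }'
  stream="true"
></deep-chat>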
Files Example
Upload images or other files alongside your text prompts. These capabilities are available for GLM-4V models.
- Sample code
- Full code
<deep-chat
  directConnection='{
    "bigModel": {
      "key": "placeholder key",
      "chat": {"model": "glm-4v"}
    }
  }'
  images="true"
  mixedFiles="true"
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
  directConnection='{
    "bigModel": {
      "key": "placeholder key",
      "chat": {"model": "glm-4v"}
    }
  }'
  images="true"
  mixedFiles="true"
  style="border-radius: 8px"
></deep-chat>
When sending files, we advise you to set maxMessages to 1 to reduce the amount of data sent per request and lower costs.
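For example, a minimal sketch (assuming maxMessages is configured via the requestBodyLimits property):
<deep-chat
  directConnection='{
    "bigModel": {
      "key": "placeholder key",
      "chat": {"model": "glm-4v"}
    }
  }'
  requestBodyLimits='{"maxMessages": 1}'
  images="true"
></deep-chat>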
Images
- Type: true | {model?: string}
- Default: {model: "cogview-4-250304"}
Connect to BigModel's images/generations API. You can set this property to true or configure it using an object:
model is the name of the image generation model.
Example
- Sample code
- Full code
<deep-chat
  directConnection='{
    "bigModel": {
      "key": "placeholder key",
      "images": true
    }
  }'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
  directConnection='{
    "bigModel": {
      "key": "placeholder key",
      "images": true
    }
  }'
  style="border-radius: 8px"
></deep-chat>
TextToSpeech
- Type: true | {model?: string, voice?: string}
- Default: {model: "cogtts", voice: "tongtong"}
Connect to BigModel's audio/speech API. You can set this property to true or configure it using an object:
model is the name of the text-to-speech model.
voice specifies the voice to use for speech synthesis.
Example
- Sample code
- Full code
<deep-chat
  directConnection='{
    "bigModel": {
      "key": "placeholder key",
      "textToSpeech": {"voice": "tongtong"}
    }
  }'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
  directConnection='{
    "bigModel": {
      "key": "placeholder key",
      "textToSpeech": {"voice": "tongtong"}
    }
  }'
  style="border-radius: 8px"
></deep-chat>
Tool Calling
BigModel supports function calling through the chat tools and function_handler properties:
FunctionHandler
- Type: (functionsDetails: FunctionsDetails) => {response: string}[] | {text: string}
The function that the component calls when the model wants to use tools.
functionsDetails contains information about the tool functions that should be called.
This function should either return an array of objects containing a response property for each tool function (in the same order as in functionsDetails), which is fed back into the model to finalize the response, or return an object containing a text property, which is displayed immediately in the chat.
Example
- Sample code
- Full code
// using JavaScript for a simplified example
chatElementRef.directConnection = {
  bigModel: {
    chat: {
      tools: [
        {
          type: 'function',
          function: {
            name: 'get_current_weather',
            description: 'Get the current weather in a given location',
            parameters: {
              type: 'object',
              properties: {
                location: {
                  type: 'string',
                  description: 'The city and state, e.g. San Francisco, CA',
                },
                unit: {type: 'string', enum: ['celsius', 'fahrenheit']},
              },
              required: ['location'],
            },
          },
        },
      ],
      function_handler: (functionsDetails) => {
        return functionsDetails.map((functionDetails) => {
          return {
            // functionDetails.arguments is the JSON string of arguments generated by the model
            response: getCurrentWeather(functionDetails.arguments),
          };
        });
      },
    },
    key: 'placeholder-key',
  },
};
// using JavaScript for a simplified example
chatElementRef.directConnection = {
  bigModel: {
    chat: {
      tools: [
        {
          type: 'function',
          function: {
            name: 'get_current_weather',
            description: 'Get the current weather in a given location',
            parameters: {
              type: 'object',
              properties: {
                location: {
                  type: 'string',
                  description: 'The city and state, e.g. San Francisco, CA',
                },
                unit: {type: 'string', enum: ['celsius', 'fahrenheit']},
              },
              required: ['location'],
            },
          },
        },
      ],
      function_handler: (functionsDetails) => {
        return functionsDetails.map((functionDetails) => {
          return {
            // functionDetails.arguments is the JSON string of arguments generated by the model
            response: getCurrentWeather(functionDetails.arguments),
          };
        });
      },
    },
    key: 'placeholder-key',
  },
};
// Simplified weather lookup - matches known cities against the raw arguments string
function getCurrentWeather(location) {
  location = location.toLowerCase();
  if (location.includes('tokyo')) {
    return JSON.stringify({location, temperature: '10', unit: 'celsius'});
  } else if (location.includes('san francisco')) {
    return JSON.stringify({location, temperature: '72', unit: 'fahrenheit'});
  } else {
    return JSON.stringify({location, temperature: '22', unit: 'celsius'});
  }
}
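The example above feeds each tool result back into the model via the response property. Based on the return type described earlier, the handler can instead return an object with a text property to display a result in the chat immediately, skipping the follow-up model call (a minimal sketch):
function_handler: (functionsDetails) => {
  // Show the outcome directly in the chat instead of returning it to the model
  const names = functionsDetails.map((details) => details.name).join(', ');
  return {text: `Called: ${names}`};
}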