Claude
Properties used to connect to Claude.
claude
- Type: {
model?: string,
max_tokens?: number,
temperature?: number,
top_p?: number,
top_k?: number,
stop_sequences?: string[],
system_prompt?: string,
tools?: ClaudeTool[],
tool_choice?: "auto" | "any" | {type: "tool", name: string} | {type: "function", name: string},
function_handler?: FunctionHandler,
mcp_servers?: ClaudeMCPServer[]
} - Default: {model: "claude-3-5-sonnet-20241022", max_tokens: 4096}
Connect to Claude's messages API.
model is the Claude model to use (e.g., "claude-3-5-sonnet-20241022", "claude-3-haiku-20240307").
max_tokens is the maximum number of tokens to generate.
temperature controls randomness (0.0-1.0). Higher values produce more creative outputs.
top_p controls diversity through nucleus sampling (0.0-1.0).
top_k controls diversity by limiting token choices to the top K tokens.
stop_sequences defines sequences where the API will stop generating.
system_prompt provides behavioral context and instructions to the model.
tools is an array that defines functions that the model can call.
tool_choice controls which (if any) tool should be called.
function_handler is the function that the component calls with the model's tool-use instructions.
mcp_servers enables integration with Model Context Protocol servers.
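For instance, several of the generation settings above can be combined in a single configuration. The following is a minimal sketch; the values shown are illustrative assumptions rather than recommendations:
<deep-chat
  directConnection='{
    "claude": {
      "key": "placeholder key",
      "model": "claude-3-5-sonnet-20241022",
      "temperature": 0.7,
      "top_k": 40,
      "stop_sequences": ["END"]
    }
  }'
></deep-chat>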
Basic Example
- Sample code
- Full code
<deep-chat
directConnection='{
"claude": {
"key": "placeholder key",
"max_tokens": 1000,
"system_prompt": "You are a helpful assistant."
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"claude": {
"key": "placeholder key",
"max_tokens": 1000,
"system_prompt": "You are a helpful assistant."
}
}'
style="border-radius: 8px"
></deep-chat>
Use stream to stream the AI responses.
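A minimal sketch of streaming with a direct Claude connection (the stream property belongs to the component's Connect API and is documented separately):
<deep-chat
  directConnection='{"claude": {"key": "placeholder key"}}'
  stream="true"
></deep-chat>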
Vision Example
Upload images alongside your text prompts for visual understanding.
- Sample code
- Full code
<deep-chat
directConnection='{
"claude": {
"key": "placeholder key"
}
}'
images="true"
camera="true"
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"claude": {
"key": "placeholder key"
}
}'
images="true"
camera="true"
style="border-radius: 8px"
textInput='{"styles": {"container": {"width": "77%"}}}'
></deep-chat>
When sending images, we advise setting maxMessages to 1 to reduce the amount of data sent and lower costs. A sketch of this is shown below.
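For example, the number of messages included in each request can be limited via the requestBodyLimits property (a sketch; see the Connect documentation for requestBodyLimits):
<deep-chat
  directConnection='{"claude": {"key": "placeholder key"}}'
  images="true"
  requestBodyLimits='{"maxMessages": 1}'
></deep-chat>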
Tool Calling
Claude supports tool calling functionality:
ClaudeTool
- Type: {
name: string,
description: string,
input_schema: {
type: "object",
properties: object,
required?: string[]
}
}
Array describing tools that the model may call.
name is the name of the tool function.
description explains what the tool does and when it should be used.
input_schema defines the parameters the tool accepts in JSON Schema format.
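As an illustration, a single ClaudeTool definition might look like the following (a sketch; the get_time tool and its parameters are hypothetical, and a complete working example is shown in the Example section below):
{
  "name": "get_time",
  "description": "Get the current time for a given IANA timezone",
  "input_schema": {
    "type": "object",
    "properties": {
      "timezone": {"type": "string", "description": "e.g. Europe/London"}
    },
    "required": ["timezone"]
  }
}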
ClaudeMCPServer
- Type: {
type: "url",
url: string,
name: string,
authorization_token?: string
}
Configuration for Model Context Protocol server integration.
type must be "url" for URL-based MCP servers.
url is the endpoint URL of the MCP server.
name is a unique identifier for the MCP server.
authorization_token is an optional token for server authentication.
- Sample code
- Full code
<deep-chat
directConnection='{
"claude": {
"key": "placeholder-key",
"mcp_servers": [
{
"type": "url",
"url": "https://example-server.modelcontextprotocol.io/sse",
"name": "my-mcp-server",
"authorization_token": "my-auth-token"
}
]
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"claude": {
"key": "placeholder-key",
"mcp_servers": [
{
"type": "url",
"url": "https://example-server.modelcontextprotocol.io/sse",
"name": "my-mcp-server",
"authorization_token": "my-auth-token"
}
]
}
}'
style="border-radius: 8px"
></deep-chat>
FunctionHandler
- Type: (functionsDetails: FunctionsDetails) => {response: string}[] | {text: string}
The actual function that the component will call if the model wants to use tools.
functionsDetails contains information about what tool functions should be called.
This function should either return an array of objects, each containing a response property, one for each tool function (in the same order as in
functionsDetails), which is fed back into the model to finalize its response, or return a single object containing a
text property, which is displayed immediately in the chat.
Example
- Sample code
- Full code
// using JavaScript for a simplified example
chatElementRef.directConnection = {
claude: {
tools: [
{
name: 'get_weather',
description: 'Get the current weather in a given location',
input_schema: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'The city and state, e.g. San Francisco, CA',
},
unit: {type: 'string', enum: ['celsius', 'fahrenheit']},
},
required: ['location'],
},
},
],
function_handler: (functionsDetails) => {
return functionsDetails.map((functionDetails) => {
return {
response: getCurrentWeather(functionDetails.arguments),
};
});
},
key: 'placeholder-key',
},
};
// using JavaScript for a simplified example
chatElementRef.directConnection = {
claude: {
tools: [
{
name: 'get_weather',
description: 'Get the current weather in a given location',
input_schema: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'The city and state, e.g. San Francisco, CA',
},
unit: {type: 'string', enum: ['celsius', 'fahrenheit']},
},
required: ['location'],
},
},
],
function_handler: (functionsDetails) => {
return functionsDetails.map((functionDetails) => {
return {
response: getCurrentWeather(functionDetails.arguments),
};
});
},
key: 'placeholder-key',
},
};
function getCurrentWeather(location) {
location = location.toLowerCase();
if (location.includes('tokyo')) {
return JSON.stringify({location, temperature: '10', unit: 'celsius'});
} else if (location.includes('san francisco')) {
return JSON.stringify({location, temperature: '72', unit: 'fahrenheit'});
} else {
return JSON.stringify({location, temperature: '22', unit: 'celsius'});
}
}
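To force the model to call a particular tool, tool_choice can be set alongside the tools array. The following is a minimal sketch that reuses the get_weather tool and handler from the example above:
// using JavaScript for a simplified example
// sketch only - forces Claude to call the get_weather tool on every request
chatElementRef.directConnection = {
  claude: {
    tools: [
      {
        name: 'get_weather',
        description: 'Get the current weather in a given location',
        input_schema: {
          type: 'object',
          properties: {
            location: {type: 'string', description: 'The city and state, e.g. San Francisco, CA'},
          },
          required: ['location'],
        },
      },
    ],
    tool_choice: {type: 'tool', name: 'get_weather'},
    function_handler: (functionsDetails) =>
      functionsDetails.map((functionDetails) => ({response: getCurrentWeather(functionDetails.arguments)})),
    key: 'placeholder-key',
  },
};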