DeepSeek
Properties used to connect to DeepSeek AI.
deepSeek
- Type: {
    model?: string,
    temperature?: number,
    max_tokens?: number,
    top_p?: number,
    frequency_penalty?: number,
    presence_penalty?: number,
    stop?: string | string[],
    system_prompt?: string
  }
- Default: {model: "deepseek-chat", temperature: 1, max_tokens: 4096}
Connect to DeepSeek's chat completions API.
model is the DeepSeek model to use ("deepseek-chat" or "deepseek-reasoner").
temperature controls randomness (0.0-2.0). Higher values produce more creative outputs.
max_tokens sets the maximum number of tokens in the response (1-8192).
top_p controls diversity through nucleus sampling (0.0-1.0).
frequency_penalty penalizes tokens in proportion to how often they have already appeared in the text so far (-2.0 to 2.0), reducing verbatim repetition.
presence_penalty penalizes tokens that have appeared at all, regardless of how often (-2.0 to 2.0), encouraging the model to introduce new topics.
stop defines sequences where the API will stop generating (up to 16 sequences).
system_prompt provides behavioral context and instructions to the model.
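When connecting directly, these properties are sent in the body of a request to DeepSeek's OpenAI-compatible chat completions endpoint. The sketch below shows how such a request body could be assembled; the exact mapping deep-chat performs internally (in particular, placing system_prompt as the first message with role "system") is an assumption based on the OpenAI-style message format:

```javascript
// Sketch: build the JSON body for a DeepSeek chat completion request
// from the deepSeek config properties documented above.
function buildRequestBody(userText, config) {
  const messages = [];
  // Assumption: system_prompt is sent as the leading "system" message.
  if (config.system_prompt) {
    messages.push({ role: 'system', content: config.system_prompt });
  }
  messages.push({ role: 'user', content: userText });
  return {
    // Fall back to the documented defaults when a property is unset.
    model: config.model ?? 'deepseek-chat',
    temperature: config.temperature ?? 1,
    max_tokens: config.max_tokens ?? 4096,
    messages,
  };
}

const body = buildRequestBody('Hello!', {
  system_prompt: 'You are a helpful assistant.',
  temperature: 0.7,
});
```

The resulting object would be POSTed as JSON with the API key in an Authorization header.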
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"deepSeek": {
"key": "placeholder key",
"system_prompt": "You are a helpful assistant.",
"temperature": 0.7
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"deepSeek": {
"key": "placeholder key",
"system_prompt": "You are a helpful assistant.",
"temperature": 0.7
}
}'
style="border-radius: 8px"
></deep-chat>
Use the stream property to stream the AI's responses.
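As a sketch, streaming could be enabled alongside the direct connection like this; the exact placement of the stream option may vary by deep-chat version, so consult the stream documentation:

```html
<deep-chat
  stream="true"
  directConnection='{
    "deepSeek": {
      "key": "placeholder key"
    }
  }'
></deep-chat>
```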