DeepSeek

Properties used to connect to DeepSeek AI.

deepSeek

  • Type: {
         model?: string,
         temperature?: number,
         max_tokens?: number,
         top_p?: number,
         frequency_penalty?: number,
         presence_penalty?: number,
         stop?: string | string[],
         system_prompt?: string
    }
  • Default: {model: "deepseek-chat", temperature: 1, max_tokens: 4096}

Connects to DeepSeek's chat completions API.
model is the DeepSeek model to use ("deepseek-chat" or "deepseek-reasoner").
temperature controls randomness (0.0-2.0); higher values produce more varied, creative output.
max_tokens caps the number of tokens in the response (1-8192).
top_p controls diversity via nucleus sampling (0.0-1.0).
frequency_penalty penalizes tokens in proportion to how often they have already appeared in the text (-2.0 to 2.0).
presence_penalty penalizes tokens that have already appeared in the text at all (-2.0 to 2.0).
stop defines up to 16 sequences at which the API will stop generating.
system_prompt provides behavioral context and instructions to the model.
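To illustrate how these properties map onto a request, here is a minimal sketch of the JSON body for DeepSeek's OpenAI-compatible chat completions endpoint. `buildDeepSeekBody` is a hypothetical helper for illustration, not part of Deep Chat's API; it assumes `system_prompt` is sent as a "system" role message and that the defaults above apply when a property is omitted.

```javascript
// Sketch only: builds the request body an OpenAI-compatible DeepSeek
// chat completions call would carry. buildDeepSeekBody is a hypothetical
// helper, not a Deep Chat export.
function buildDeepSeekBody(config, userText) {
  const {system_prompt, ...params} = config;
  const messages = [];
  // system_prompt becomes a "system" role message preceding the user turn
  if (system_prompt) messages.push({role: 'system', content: system_prompt});
  messages.push({role: 'user', content: userText});
  // defaults from the docs above, overridden by any configured params
  return {model: 'deepseek-chat', temperature: 1, max_tokens: 4096, ...params, messages};
}

const body = buildDeepSeekBody(
  {system_prompt: 'You are a helpful assistant.', temperature: 0.7},
  'Hello!'
);
console.log(JSON.stringify(body, null, 2));
```

The resulting object would be POSTed (with an Authorization header carrying the API key) to DeepSeek's chat completions endpoint.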

Example

<deep-chat
  directConnection='{
    "deepSeek": {
      "key": "placeholder key",
      "system_prompt": "You are a helpful assistant.",
      "temperature": 0.7
    }
  }'
></deep-chat>
info

Use stream to stream the AI responses.
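For example, streaming can be enabled alongside the DeepSeek connection. This is a sketch assuming `stream` is set as a component attribute, as in Deep Chat's other connection examples; the key shown is a placeholder:

```html
<deep-chat
  stream="true"
  directConnection='{
    "deepSeek": {
      "key": "placeholder key",
      "temperature": 0.7
    }
  }'
></deep-chat>
```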