
MiniMax

Properties used to connect to MiniMax.

miniMax

  • Type: {
         model?: string,
         system_prompt?: string,
         max_tokens?: number,
         temperature?: number,
         top_p?: number,
         frequency_penalty?: number,
         presence_penalty?: number,
         stop?: string | string[]
    }
  • Default: {model: "MiniMax-M1"}

Connect to MiniMax's ChatCompletion v2 API.
model is the name of the MiniMax model to be used by the API. Check MiniMax's list of available models for the options.
system_prompt provides behavioral context and instructions to the model.
max_tokens limits the maximum number of tokens in the generated response.
temperature controls the randomness/creativity of responses (0.0-2.0). Higher values produce more diverse outputs.
top_p controls diversity through nucleus sampling (0.0-1.0).
frequency_penalty penalizes frequent tokens to reduce repetition (-2.0 to 2.0).
presence_penalty penalizes tokens that have already appeared in the text, encouraging the model to move on to new topics (-2.0 to 2.0).
stop is a string or array of strings; generation stops when any of them is encountered.
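
As a sketch, the sampling and penalty properties above can be combined in a single configuration. All values here are illustrative placeholders, not recommended defaults:

```html
<deep-chat
  directConnection='{
    "miniMax": {
      "key": "placeholder key",
      "model": "MiniMax-M1",
      "temperature": 0.9,
      "top_p": 0.95,
      "frequency_penalty": 0.5,
      "presence_penalty": 0.5,
      "stop": ["\n\n", "END"]
    }
  }'
></deep-chat>
```

Note that temperature and top_p both control output diversity; it is common practice to tune one of them rather than both at once.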

Example

<deep-chat
  directConnection='{
    "miniMax": {
      "key": "placeholder key",
      "system_prompt": "You are a helpful assistant.",
      "temperature": 0.7,
      "max_tokens": 1000
    }
  }'
></deep-chat>
info

Use stream to stream the AI responses.
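
A minimal sketch, assuming streaming is toggled via Deep Chat's connect property; check the stream documentation linked above for the exact placement of this setting:

```html
<deep-chat
  connect='{"stream": true}'
  directConnection='{"miniMax": {"key": "placeholder key"}}'
></deep-chat>
```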