
Perplexity

Properties used to connect to Perplexity.

perplexity

  • Type: {
         model?: string,
         system_prompt?: string,
         max_tokens?: number,
         temperature?: number,
         top_p?: number,
         top_k?: number,
         frequency_penalty?: number,
         presence_penalty?: number,
         stop?: string | string[],
         search_mode?: 'web' | 'academic',
         reasoning_effort?: 'low' | 'medium' | 'high',
         search_domain_filter?: string[],
         disable_search?: boolean,
         enable_search_classifier?: boolean
    }
  • Default: {model: "sonar"}

Connect to Perplexity's chat completions API.
model is the name of the Perplexity model to be used by the API. See the available models for the full list.
system_prompt provides behavioral context and instructions to the model.
max_tokens limits the maximum number of tokens in the generated response.
temperature controls the randomness/creativity of responses (0.0-2.0). Higher values produce more diverse outputs.
top_p controls diversity through nucleus sampling (0.0-1.0).
top_k limits the number of highest probability tokens to consider (1+).
frequency_penalty penalizes frequent tokens to reduce repetition (-2.0 to 2.0).
presence_penalty penalizes tokens that have already appeared, encouraging the model to introduce new topics (-2.0 to 2.0).
stop is a string or array of strings that stops generation when encountered.
search_mode specifies whether to search the web or academic sources ('web' or 'academic').
reasoning_effort controls the reasoning intensity ('low', 'medium', or 'high').
search_domain_filter is an array of domains that restricts search results to specific websites.
disable_search when true, disables web search functionality.
enable_search_classifier when true, uses a classifier to automatically decide whether a request requires web search.
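
Properties that are awkward to express in an inline attribute string, such as arrays like search_domain_filter, can instead be assigned to the element as a JavaScript property. A minimal sketch, assuming a deep-chat element is already on the page; the key and all parameter values below are placeholders, and every field besides key is optional:

```html
<deep-chat id="chat"></deep-chat>
<script>
  // Assign the configuration object as a property rather than an
  // attribute string - useful for arrays and nested values.
  const chat = document.getElementById('chat');
  chat.directConnection = {
    perplexity: {
      key: 'placeholder key',
      model: 'sonar',
      max_tokens: 512,
      search_mode: 'academic',
      // Restrict search results to these example domains.
      search_domain_filter: ['arxiv.org', 'nature.com'],
      frequency_penalty: 0.5
    }
  };
</script>
```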

Example

<deep-chat
  directConnection='{
    "perplexity": {
      "key": "placeholder key",
      "system_prompt": "You are a helpful research assistant.",
      "temperature": 0.7,
      "search_mode": "web"
    }
  }'
></deep-chat>
info

Use stream to stream the AI responses.
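
A sketch of combining the two, assuming stream is enabled via the element's stream attribute as in other Deep Chat connection examples:

```html
<deep-chat
  stream="true"
  directConnection='{
    "perplexity": {"key": "placeholder key"}
  }'
></deep-chat>
```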