
Groq


Properties used to connect to Groq.

groq

Service Types

Chat

  • Type: true | {
         model?: string,
         system_prompt?: string,
         max_completion_tokens?: number,
         temperature?: number,
         top_p?: number,
         stop?: string[],
         seed?: number,
         tools?: object[],
         tool_choice?: 'none' | 'auto' | 'required' | {type: 'function'; function: {name: string}},
         function_handler?: FunctionHandler,
         parallel_tool_calls?: boolean
    }
  • Default: {model: "llama-3.3-70b-versatile"}

Connect to Groq's chat/completions API. You can set this property to true or configure it with an object:

  • model - the name of the Groq model to be used by the API. Check available models for more.
  • system_prompt - behavioral context and instructions for the model.
  • max_completion_tokens - limits the maximum number of tokens in the generated response.
  • temperature - controls the randomness/creativity of responses (0.0-2.0). Higher values produce more diverse outputs.
  • top_p - controls diversity through nucleus sampling (0.0-1.0).
  • stop - strings that cause generation to stop when encountered.
  • seed - when set, ensures deterministic outputs for the same input.
  • tools - an array of function definitions that the model can call.
  • tool_choice - controls how the model uses tools: 'auto' lets the model decide, 'none' disables tools, 'required' forces tool use, and an object of the form {type: 'function', function: {name: string}} forces a specific function.
  • function_handler - the function the component calls when the model requests tool use (see Tool Calling below).
  • parallel_tool_calls - allows the model to call multiple tools simultaneously in one response.

Example

<deep-chat
  directConnection='{
    "groq": {
      "key": "placeholder key",
      "chat": {"system_prompt": "You are a helpful assistant.", "temperature": 0.7}
    }
  }'
></deep-chat>
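The same chat options can also be set from JavaScript. The sketch below assumes a reference to the <deep-chat> element (chatElementRef); the model name, stop strings, and token limit are illustrative values, not defaults:

```javascript
// Sketch: configuring Groq chat options programmatically.
// Values here are illustrative; check Groq's model list for valid names.
const groqChat = {
  model: 'llama-3.3-70b-versatile',
  system_prompt: 'You are a helpful assistant.',
  max_completion_tokens: 512,
  temperature: 0.2, // low temperature for focused, less random answers
  top_p: 0.9,
  stop: ['\n\n'], // stop generating when a blank line is produced
  seed: 42, // same seed + same input => deterministic output
};

// Assign to the component (assumes chatElementRef points at a <deep-chat> element):
// chatElementRef.directConnection = {groq: {key: 'placeholder key', chat: groqChat}};
```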
info

Use stream to stream the AI responses.

Vision Example

Upload images alongside your text prompts for visual understanding. You must use a model with vision capabilities.

<deep-chat
  directConnection='{
    "groq": {
      "key": "placeholder key",
      "chat": {"model": "llama-3.2-11b-vision-preview"}
    }
  }'
  images="true"
  camera="true"
></deep-chat>
tip

When sending images, we advise setting maxMessages to 1 to send less data and reduce costs.

TextToSpeech

  • Type: true | {
         model?: string,
         voice?: string,
         speed?: number,
         response_format?: 'mp3' | 'opus' | 'aac' | 'flac'
    }
  • Default: {model: "playai-tts", voice: "Fritz-PlayAI", response_format: "mp3"}

Connect to Groq's audio/speech API. You can set this property to true or configure it with an object:

  • model - the name of the text-to-speech model.
  • voice - the voice to use for speech synthesis.
  • speed - the playback speed of the generated audio (0.25-4.0).
  • response_format - the audio format of the output.

Example

<deep-chat
  directConnection='{
    "groq": {
      "key": "placeholder key",
      "textToSpeech": {"voice": "Fritz-PlayAI", "speed": 1.2}
    }
  }'
></deep-chat>
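The same options can be set from JavaScript. A minimal sketch (values are illustrative; the defaults listed above apply when a field is omitted):

```javascript
// Sketch: text-to-speech options as a plain object.
const groqTextToSpeech = {
  model: 'playai-tts', // default model
  voice: 'Fritz-PlayAI', // default voice
  speed: 1.2, // allowed range: 0.25 - 4.0
  response_format: 'mp3', // also: 'opus' | 'aac' | 'flac'
};

// Assign to the component (assumes chatElementRef points at a <deep-chat> element):
// chatElementRef.directConnection = {groq: {key: 'placeholder key', textToSpeech: groqTextToSpeech}};
```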

Tool Calling

Groq supports function calling:

FunctionHandler

The function that the component calls when the model wants to use tools.
functionsDetails contains information about the tool functions to be called.
This function should either return an array of JSONs with a response property for each tool function (in the same order as in functionsDetails), which is fed back into the model to finalize a response, or return a JSON with a text property, which is displayed in the chat immediately.

Example

// using JavaScript for a simplified example

chatElementRef.directConnection = {
  groq: {
    chat: {
      tools: [
        {
          type: 'function',
          function: {
            name: 'get_current_weather',
            description: 'Get the current weather in a given location',
            parameters: {
              type: 'object',
              properties: {
                location: {
                  type: 'string',
                  description: 'The city and state, e.g. San Francisco, CA',
                },
                unit: {type: 'string', enum: ['celsius', 'fahrenheit']},
              },
              required: ['location'],
            },
          },
        },
      ],
      function_handler: (functionsDetails) => {
        return functionsDetails.map((functionDetails) => {
          return {
            // getCurrentWeather is a user-defined function
            response: getCurrentWeather(functionDetails.arguments),
          };
        });
      },
    },
    key: 'placeholder-key',
  },
};
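As described above, the handler may instead return a JSON with a text property, which is displayed in the chat immediately without a follow-up model call. A minimal sketch (the message format here is illustrative):

```javascript
// Sketch: a function_handler that short-circuits the second model call
// by returning {text}, which Deep Chat displays directly in the chat.
const directHandler = (functionsDetails) => {
  const names = functionsDetails.map((details) => details.name).join(', ');
  return {text: `Called tools: ${names}`};
};
```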