Together
Properties used to connect to Together AI.
together
- Type: {chat?: Chat, images?: Images, textToSpeech?: TextToSpeech}
- Default: {chat: true}
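For instance, the simplest configuration enables the default chat service. A minimal sketch (the key is a placeholder):
<deep-chat
  directConnection='{
    "together": {
      "key": "placeholder key",
      "chat": true
    }
  }'
></deep-chat>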
Service Types
Chat
- Type: true | {
  model?: string,
  system_prompt?: string,
  max_tokens?: number,
  temperature?: number,
  top_p?: number,
  top_k?: number,
  repetition_penalty?: number,
  stop?: string[]
}
- Default: {model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"}
Connect to Together AI's chat/completions API. You can set this property to true or configure it using an object:
model is the name of the Together AI model to be used by the API. See the available models list for options.
system_prompt provides behavioral context and instructions to the model.
max_tokens limits the maximum number of tokens in the generated response.
temperature controls the randomness/creativity of responses (0.0-2.0). Higher values produce more diverse outputs.
top_p controls diversity through nucleus sampling (0.0-1.0).
top_k limits the number of highest probability tokens to consider (1+).
repetition_penalty penalizes repeated tokens (0.0-2.0). Values > 1.0 discourage repetition.
stop is an array of strings that will stop generation when encountered.
Example
- Sample code
- Full code
<deep-chat
  directConnection='{
    "together": {
      "key": "placeholder key",
      "chat": {"system_prompt": "You are a helpful assistant.", "temperature": 0.7}
    }
  }'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
  directConnection='{
    "together": {
      "key": "placeholder key",
      "chat": {"system_prompt": "You are a helpful assistant.", "temperature": 0.7}
    }
  }'
  style="border-radius: 8px"
></deep-chat>
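For reference, a sketch that sets more of the chat options described above. The model name is the default listed earlier and the remaining values are illustrative:
<deep-chat
  directConnection='{
    "together": {
      "key": "placeholder key",
      "chat": {
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
        "system_prompt": "You are a helpful assistant.",
        "max_tokens": 512,
        "temperature": 0.7,
        "top_p": 0.9,
        "top_k": 50,
        "repetition_penalty": 1.1,
        "stop": ["User:"]
      }
    }
  }'
></deep-chat>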
Use stream to stream the AI responses.
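As a minimal sketch, assuming streaming is enabled via the component's stream attribute (see the stream documentation for its exact form and options):
<deep-chat
  stream="true"
  directConnection='{
    "together": {
      "key": "placeholder key",
      "chat": {"system_prompt": "You are a helpful assistant."}
    }
  }'
></deep-chat>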
Images
- Type: true | {
  model?: string,
  width?: number,
  height?: number,
  steps?: number,
  n?: number,
  seed?: number,
  response_format?: 'url' | 'base64'
}
- Default: {model: "black-forest-labs/FLUX.1-schnell-Free"}
Connect to Together AI's images/generations API. You can set this property to true or configure it using an object:
model is the name of the image generation model. See the available models list for options.
width specifies the width of the generated image in pixels.
height specifies the height of the generated image in pixels.
steps controls the number of denoising steps (1-50). More steps generally produce higher quality images.
n specifies the number of images to generate (1-10).
seed provides a seed for reproducible generation.
response_format determines whether images are returned as URLs or base64-encoded data.
Example
- Sample code
- Full code
<deep-chat
  directConnection='{
    "together": {
      "key": "placeholder key",
      "images": {"steps": 20, "n": 2}
    }
  }'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
  directConnection='{
    "together": {
      "key": "placeholder key",
      "images": {"steps": 20, "n": 2}
    }
  }'
  style="border-radius: 8px"
></deep-chat>
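For reference, a sketch that sets more of the image options described above. The model name is the default listed earlier and the remaining values are illustrative:
<deep-chat
  directConnection='{
    "together": {
      "key": "placeholder key",
      "images": {
        "model": "black-forest-labs/FLUX.1-schnell-Free",
        "width": 1024,
        "height": 768,
        "steps": 20,
        "n": 1,
        "seed": 42,
        "response_format": "url"
      }
    }
  }'
></deep-chat>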
TextToSpeech
- Type: true | {
  model?: string,
  voice?: string,
  speed?: number
}
- Default: {model: "cartesia/sonic", voice: "laidback woman"}
Connect to Together AI's audio/speech API. You can set this property to true or configure it using an object:
model is the name of the text-to-speech model.
voice specifies the voice to use for speech synthesis.
speed controls the speed of the generated speech (0.25-4.0).
Example
- Sample code
- Full code
<deep-chat
  directConnection='{
    "together": {
      "key": "placeholder key",
      "textToSpeech": {"voice": "enthusiastic woman", "speed": 1.2}
    }
  }'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
  directConnection='{
    "together": {
      "key": "placeholder key",
      "textToSpeech": {"voice": "enthusiastic woman", "speed": 1.2}
    }
  }'
  style="border-radius: 8px"
></deep-chat>
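For reference, a sketch that sets every textToSpeech option explicitly. The model and voice are the defaults listed above and the speed value is illustrative:
<deep-chat
  directConnection='{
    "together": {
      "key": "placeholder key",
      "textToSpeech": {"model": "cartesia/sonic", "voice": "laidback woman", "speed": 1.0}
    }
  }'
></deep-chat>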