StabilityAI
Properties used to connect to Stability AI.
stabilityAI
- Type: {textToImage?: TextToImage, imageToImage?: ImageToImage, imageToImageMasking?: ImageToImageMasking, imageToImageUpscale?: ImageToImageUpscale}
- Default: {textToImage: true}
Service Types
TextToImage
- Type: true | {StabilityAICommon, width?: number, height?: number}
- Default: {engine_id: "stable-diffusion-v1-6", width: 512, height: 512}
Connect to Stability AI's text-to-image
API.
StabilityAICommon
properties can be used to set the engine Id and other image parameters.
width
and height
are used to set the image dimensions. Both must be multiples of 64 and their product must pass the following check:
for 768 engines: 589,824 ≤ width × height ≤ 1,048,576; for other engines: 262,144 ≤ width × height ≤ 1,048,576.
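The dimension rules above can be captured in a small sketch. This helper is illustrative only (the function name and engine-id check are assumptions, not part of Deep Chat's or Stability AI's API):

```javascript
// Illustrative sketch: validate text-to-image dimensions against the
// constraints above (multiples of 64, engine-dependent pixel range).
// Detecting a "768 engine" by its id containing "768" is an assumption.
function isValidDimensions(width, height, engineId) {
  if (width % 64 !== 0 || height % 64 !== 0) return false;
  const pixels = width * height;
  const min = engineId.includes('768') ? 589824 : 262144;
  return pixels >= min && pixels <= 1048576;
}
```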
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"stabilityAI": {
"key": "placeholder key",
"textToImage": {"engine_id": "stable-diffusion-v1-6", "height": 640, "samples": 1}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"stabilityAI": {
"key": "placeholder key",
"textToImage": {"engine_id": "stable-diffusion-v1-6", "height": 640, "samples": 1}
}
}'
style="border-radius: 8px"
></deep-chat>
ImageToImage
- Type: true | {StabilityAICommon, init_image_mode?: "image_strength" | "step_schedule_*", image_strength?: number, step_schedule_start?: number, step_schedule_end?: number}
- Default: {engine_id: "stable-diffusion-v1-6", init_image_mode: "image_strength", image_strength: 0.35, step_schedule_start: 0.65, weight: 1}
Connect to Stability AI's image-to-image
API.
StabilityAICommon
properties can be used to set the engine Id and other image parameters.
init_image_mode
denotes whether the image_strength
or step_schedule
properties control the influence of the uploaded image on the new image.
image_strength
determines how much influence the uploaded image has on the diffusion process. A value close to 1 will yield an image
very similar to the original, whilst a value closer to 0 will yield an image that is wildly different. (0 to 1)
step_schedule_start
and step_schedule_end
are used to skip a proportion of the start/end of the diffusion steps,
allowing the uploaded image to influence the final generated image. Lower values will result in more influence from the original image, while higher
values will result in more influence from the diffusion steps. (0 to 1)
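To make the mode selection above concrete, here is an illustrative sketch (not Deep Chat code) of how init_image_mode determines which properties take effect; the fallback values are taken from the defaults listed above:

```javascript
// Illustrative sketch: depending on init_image_mode, either image_strength
// or the step_schedule_* properties control the uploaded image's influence.
// Fallback values (0.35, 0.65) come from the documented defaults.
function effectiveInfluenceParams(config) {
  if (config.init_image_mode === 'image_strength') {
    return { image_strength: config.image_strength ?? 0.35 };
  }
  // step_schedule mode: start and end trim the diffusion steps
  return {
    step_schedule_start: config.step_schedule_start ?? 0.65,
    step_schedule_end: config.step_schedule_end,
  };
}
```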
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"stabilityAI": {
"key": "placeholder key",
"imageToImage": {"engine_id": "stable-diffusion-v1-6", "init_image_mode": "image_strength", "samples": 1}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"stabilityAI": {
"key": "placeholder key",
"imageToImage": {"engine_id": "stable-diffusion-v1-6", "width": 1024, "height": 1024, "samples": 1}
}
}'
style="border-radius: 8px"
></deep-chat>
ImageToImageMasking
- Type: true | {StabilityAICommon, mask_source?: "MASK_IMAGE_WHITE" | "MASK_IMAGE_BLACK" | "INIT_IMAGE_ALPHA"}
- Default: {engine_id: "stable-diffusion-xl-1024-v1-0", mask_source: "MASK_IMAGE_WHITE", weight: 1}
Connect to Stability AI's image-to-image-masking
API.
StabilityAICommon
properties can be used to set the engine Id and other image parameters.
mask_source
defines where the mask comes from. "MASK_IMAGE_WHITE" will use the white pixels of the mask image (second image) as the mask,
where white pixels are completely replaced and black pixels are unchanged. "MASK_IMAGE_BLACK" will use the black pixels of the mask image (second image) as the mask,
where black pixels are completely replaced and white pixels are unchanged. "INIT_IMAGE_ALPHA" will use the alpha channel of the uploaded image as the mask,
where fully transparent pixels are completely replaced and fully opaque pixels are unchanged - in this instance the mask image does not need to be uploaded.
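The per-pixel semantics above can be summarized in a small illustrative helper (the function and its arguments are hypothetical, not part of Deep Chat's API):

```javascript
// Illustrative helper: captures the mask_source semantics described above.
// maskValue is the mask image's pixel (0 = black, 255 = white);
// alpha is the uploaded image's alpha channel (0 = transparent, 255 = opaque).
function isPixelReplaced(maskSource, { maskValue = 0, alpha = 255 } = {}) {
  switch (maskSource) {
    case 'MASK_IMAGE_WHITE': return maskValue === 255; // white pixels are replaced
    case 'MASK_IMAGE_BLACK': return maskValue === 0;   // black pixels are replaced
    case 'INIT_IMAGE_ALPHA': return alpha === 0;       // transparent pixels are replaced
    default: throw new Error(`unknown mask_source: ${maskSource}`);
  }
}
```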
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"stabilityAI": {
"key": "placeholder key",
"imageToImageMasking": {"mask_source": "MASK_IMAGE_WHITE", "samples": 1}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"stabilityAI": {
"key": "placeholder key",
"imageToImageMasking": {"mask_source": "MASK_IMAGE_WHITE", "samples": 1}
}
}'
style="border-radius: 8px"
></deep-chat>
ImageToImageUpscale
- Type: true | {engine_id?: string, width?: number, height?: number}
- Default: {engine_id: "esrgan-v1-x2plus"}
Connect to Stability AI's image-to-image-upscale
API.
engine_id
is the engine that will be used to process the image.
width
and height
are used to define the desired dimensions of the result image, where only ONE of the two can be set.
The minimum dimension is 512 pixels.
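The either-or constraint above can be expressed as a small illustrative check (the function name is an assumption, not Deep Chat code):

```javascript
// Illustrative sketch: for the upscale endpoint, at most one of
// width/height may be set, and the minimum dimension is 512.
function isValidUpscaleTarget({ width, height } = {}) {
  if (width !== undefined && height !== undefined) return false; // only one allowed
  const dim = width ?? height;
  return dim === undefined || dim >= 512;
}
```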
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"stabilityAI": {
"key": "placeholder key",
"imageToImageUpscale": {"width": 1000}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"stabilityAI": {
"key": "placeholder key",
"imageToImageUpscale": {"width": 1000}
}
}'
style="border-radius: 8px"
></deep-chat>
Shared Types
Types used in stabilityAI
properties:
StabilityAICommon
- Type: {engine_id?: string, samples?: number, weight?: number, cfg_scale?: number, sampler?: string, seed?: number, steps?: number, style_preset?: string, clip_guidance_preset?: string}
- Default: {samples: 1, cfg_scale: 7, seed: 0, steps: 50, clip_guidance_preset: "NONE"}
Object that is used to define the target engine and other image processing parameters.
engine_id
is the identifier for the engine that will be used to process the images.
samples
is the number of images that will be generated (1 to 10).
weight
defines how specific to the prompt the generated image should be (0 to 1).
cfg_scale
defines how strictly the diffusion process should adhere to the prompt (0 to 35).
sampler
is the sampler that will be used for the diffusion process. If this value is not set, the most appropriate one is selected automatically.
seed
is the number for the random noise (0 to 4294967295).
steps
is the number of diffusion steps to run (10 to 150).
style_preset
guides the image model towards a particular style.
clip_guidance_preset
is the CLIP guidance preset used for the diffusion process.
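The numeric ranges listed above can be checked with a small illustrative sketch (the validator and its shape are assumptions, not part of Deep Chat):

```javascript
// Illustrative sketch: validate StabilityAICommon numeric parameters
// against the documented ranges.
const RANGES = {
  samples: [1, 10],
  weight: [0, 1],
  cfg_scale: [0, 35],
  seed: [0, 4294967295],
  steps: [10, 150],
};

function validateCommon(config) {
  const errors = [];
  for (const [key, [min, max]] of Object.entries(RANGES)) {
    const value = config[key];
    if (value !== undefined && (value < min || value > max)) {
      errors.push(`${key} must be between ${min} and ${max}`);
    }
  }
  return errors;
}
```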
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"stabilityAI": {
"key": "placeholder key",
"textToImage": {
"engine_id": "stable-diffusion-v1-6",
"weight": 1,
"style_preset": "fantasy-art",
"samples": 2
}}}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"stabilityAI": {
"key": "placeholder key",
"textToImage": {
"engine_id": "stable-diffusion-v1-6",
"weight": 1,
"style_preset": "fantasy-art",
"samples": 2
}}}'
style="border-radius: 8px"
></deep-chat>