Hugging Face
Properties used to connect to the Hugging Face API.
huggingFace
- Type: {conversation?: Conversation, textGeneration?: TextGeneration, summarization?: Summarization, translation?: Translation, fillMask?: FillMask, questionAnswer?: QuestionAnswer, audioSpeechRecognition?: AudioSpeechRecognition, audioClassification?: AudioClassification, imageClassification?: ImageClassification}
- Default: {conversation: true}
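The `huggingFace` object is supplied through the component's directConnection setting. As a minimal sketch (assuming a deep-chat element with id "chat" already exists on the page, and using the same placeholder key as the examples below), the configuration can also be assigned as a JavaScript property, which avoids hand-escaping a JSON attribute string:

```javascript
// Sketch of configuring the component programmatically
// ("placeholder key" is not a real key - never hardcode real keys in client code).
const connection = {
  huggingFace: {
    key: 'placeholder key',
    conversation: {
      model: 'facebook/blenderbot-400M-distill',
      parameters: {temperature: 1},
      options: {use_cache: true},
    },
  },
};

// In the browser, assign the object directly instead of serializing it
// into the directConnection attribute:
// document.getElementById('chat').directConnection = connection;
```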
Service Types
Conversation
- Type: true | {model?: string, parameters?: {min_length?: number, max_length?: number, top_k?: number, top_p?: number, temperature?: number, repetition_penalty?: number}, options?: {use_cache?: boolean}}
- Default: {model: "facebook/blenderbot-400M-distill", options: {use_cache: true}}
Connect to Hugging Face Conversational API.
model
is the name of the model used for the task.
min_length
is the minimum length in tokens of the generated output.
max_length
is the maximum length in tokens of the generated output.
top_k
defines the number of top tokens considered when sampling new text.
top_p
is a float that defines the tokens considered within the sample operation of text generation. Tokens are added from most probable to least probable until the sum of their probabilities exceeds top_p.
temperature
is the sampling temperature, a float ranging from 0.0 to 100.0. 1.0 means regular sampling, 0 means always take the highest-scoring token, and values approaching 100.0 move the distribution toward uniform probability.
repetition_penalty
is a float (ranging from 0.0 to 100.0); the more a token is used within generation, the more it is penalized and the less likely it is to be picked in successive generation passes.
use_cache
is used to speed up requests by using the inference API cache.
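Under the hood, these settings map onto a Hugging Face Inference API request. The sketch below is illustrative only: buildConversationRequest is a hypothetical helper (not part of Deep Chat), while the URL follows the public api-inference.huggingface.co/models/{model} pattern.

```javascript
// Hypothetical helper illustrating how model, parameters and options combine
// into a Hugging Face Inference API request (a sketch, not Deep Chat's code).
function buildConversationRequest(text, {model, parameters = {}, options = {}}) {
  return {
    url: `https://api-inference.huggingface.co/models/${model}`,
    body: {
      inputs: {text}, // the user's latest message
      parameters,     // e.g. {temperature: 1, top_p: 0.9}
      options,        // e.g. {use_cache: true}
    },
  };
}

const request = buildConversationRequest('Hello!', {
  model: 'facebook/blenderbot-400M-distill',
  parameters: {temperature: 1},
  options: {use_cache: true},
});
// request.url is 'https://api-inference.huggingface.co/models/facebook/blenderbot-400M-distill'
```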
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"conversation": {"model": "facebook/blenderbot-400M-distill", "parameters": {"temperature": 1}}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"conversation": {"model": "facebook/blenderbot-400M-distill", "parameters": {"temperature": 1}}
}
}'
style="border-radius: 8px"
></deep-chat>
TextGeneration
- Type: true | {model?: string, parameters?: {top_k?: number, top_p?: number, temperature?: number, repetition_penalty?: number, max_new_tokens?: number, do_sample?: boolean}, options?: {use_cache?: boolean}}
- Default: {model: "gpt2", options: {use_cache: true}}
Connect to Hugging Face Text Generation API.
model
is the name of the model used for the task.
top_k
defines the number of top tokens considered when sampling new text.
top_p
is a float that defines the tokens considered within the sample operation of text generation. Tokens are added from most probable to least probable until the sum of their probabilities exceeds top_p.
temperature
is the sampling temperature, a float ranging from 0.0 to 100.0. 1.0 means regular sampling, 0 means always take the highest-scoring token, and values approaching 100.0 move the distribution toward uniform probability.
repetition_penalty
is a float (ranging from 0.0 to 100.0); the more a token is used within generation, the more it is penalized and the less likely it is to be picked in successive generation passes.
max_new_tokens
is an integer (ranging from 0 to 250) defining the number of new tokens to be generated in the response.
do_sample
controls whether or not to use sampling; if false, greedy decoding is used instead.
use_cache
is used to speed up requests by using the inference API cache.
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"textGeneration": {"model": "gpt2", "parameters": {"temperature": 1}}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"textGeneration": {"model": "gpt2", "parameters": {"temperature": 1}}
}
}'
style="border-radius: 8px"
></deep-chat>
Summarization
- Type: true | {model?: string, parameters?: {min_length?: number, max_length?: number, top_k?: number, top_p?: number, temperature?: number, repetition_penalty?: number}, options?: {use_cache?: boolean}}
- Default: {model: "facebook/bart-large-cnn", options: {use_cache: true}}
Connect to Hugging Face Summarization API.
model
is the name of the model used for the task.
min_length
is the minimum length in tokens of the output summary.
max_length
is the maximum length in tokens of the output summary.
top_k
defines the number of top tokens considered when sampling new text.
top_p
is a float that defines the tokens considered within the sample operation of text generation. Tokens are added from most probable to least probable until the sum of their probabilities exceeds top_p.
temperature
is the sampling temperature, a float ranging from 0.0 to 100.0. 1.0 means regular sampling, 0 means always take the highest-scoring token, and values approaching 100.0 move the distribution toward uniform probability.
repetition_penalty
is a float (ranging from 0.0 to 100.0); the more a token is used within generation, the more it is penalized and the less likely it is to be picked in successive generation passes.
use_cache
is used to speed up requests by using the inference API cache.
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"summarization": {"model": "facebook/bart-large-cnn", "parameters": {"temperature": 1}}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"summarization": {"model": "facebook/bart-large-cnn", "parameters": {"temperature": 1}}
}
}'
style="border-radius: 8px"
></deep-chat>
Translation
- Type: true | {model?: string, options?: {use_cache?: boolean}}
- Default: {model: "Helsinki-NLP/opus-tatoeba-en-ja", options: {use_cache: true}}
Connect to Hugging Face Translation API.
model
is the name of the model used for the task.
use_cache
is used to speed up requests by using the inference API cache.
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"translation": {"model": "Helsinki-NLP/opus-tatoeba-en-ja"}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"translation": {"model": "Helsinki-NLP/opus-tatoeba-en-ja"}
}
}'
style="border-radius: 8px"
></deep-chat>
FillMask
- Type: true | {model?: string, options?: {use_cache?: boolean}}
- Default: {model: "bert-base-uncased", options: {use_cache: true}}
Connect to Hugging Face Fill Mask API.
model
is the name of the model used for the task.
use_cache
is used to speed up requests by using the inference API cache.
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"fillMask": {"model": "bert-base-uncased"}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"fillMask": {"model": "bert-base-uncased"}
}
}'
style="border-radius: 8px"
></deep-chat>
QuestionAnswer
- Type: true | {context: string, model?: string}
- Default: {model: "bert-large-uncased-whole-word-masking-finetuned-squad"}
Connect to Hugging Face Question Answer API.
context
is a string containing details that the AI can use to answer the given questions.
model
is the name of the model used for the task.
Example (ask about Labrador looks)
- Sample code
- Full code
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"questionAnswer": {
"model": "bert-large-uncased-whole-word-masking-finetuned-squad",
"context": "Labrador retrievers are easily recognized by their broad head, drop ears and large, expressive eyes. Two trademarks of the Lab are the thick but fairly short double coat, which is very water repellent, and the well known otter tail. The tail is thick and sturdy and comes off the topline almost straight."
}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"questionAnswer": {
"model": "bert-large-uncased-whole-word-masking-finetuned-squad",
"context": "Labrador retrievers are easily recognized by their broad head, drop ears and large, expressive eyes. Two trademarks of the Lab are the thick but fairly short double coat, which is very water repellent, and the well known otter tail. The tail is thick and sturdy and comes off the topline almost straight."
}
}
}'
style="border-radius: 8px"
></deep-chat>
AudioSpeechRecognition
- Type: true | {model?: string}
- Default: {model: "facebook/wav2vec2-large-960h-lv60-self"}
Connect to Hugging Face Audio Speech Recognition API.
model
is the name of the model used for the task.
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"audioSpeechRecognition": {"model": "facebook/wav2vec2-large-960h-lv60-self"}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"audioSpeechRecognition": {"model": "facebook/wav2vec2-large-960h-lv60-self"}
}
}'
style="border-radius: 8px"
></deep-chat>
AudioClassification
- Type: true | {model?: string}
- Default: {model: "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"}
Connect to Hugging Face Audio Classification API.
model
is the name of the model used for the task.
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"audioClassification": {"model": "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"audioClassification": {"model": "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"}
}
}'
style="border-radius: 8px"
></deep-chat>
ImageClassification
- Type: true | {model?: string}
- Default: {model: "google/vit-base-patch16-224"}
Connect to Hugging Face Image Classification API.
model
is the name of the model used for the task.
Example
- Sample code
- Full code
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"imageClassification": {"model": "google/vit-base-patch16-224"}
}
}'
></deep-chat>
<!-- This example is for Vanilla JS and should be tailored to your framework (see Examples) -->
<deep-chat
directConnection='{
"huggingFace": {
"key": "placeholder key",
"imageClassification": {"model": "google/vit-base-patch16-224"}
}
}'
style="border-radius: 8px"
></deep-chat>