Models

Model components generate text using language models. Use these components for tasks such as chatbots and content generation.

AI/ML API

This component creates a ChatOpenAI model instance using the AIML API.

For more information, see AIML documentation.

Parameters

Inputs

| Name | Type | Description |
|------|------|-------------|
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens. Range: 0-128000. |
| model_kwargs | Dictionary | Additional keyword arguments for the model. |
| model_name | String | The name of the AIML model to use. Options are predefined in AIML_CHAT_MODELS. |
| aiml_api_base | String | The base URL of the AIML API. Defaults to https://api.aimlapi.com. |
| api_key | SecretString | The AIML API key to use for the model. |
| temperature | Float | Controls randomness in the output. Default: 0.1. |
| seed | Integer | Controls reproducibility of the job. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatOpenAI configured with the specified parameters. |
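Because the AIML API is OpenAI-compatible, the inputs above map directly onto a standard chat-completions request body. A minimal sketch of that mapping (the helper name and the endpoint path are illustrative assumptions, not part of the component):

```python
def build_chat_payload(model_name, prompt, max_tokens=0, temperature=0.1, seed=None):
    """Assemble an OpenAI-style chat-completions payload from the component inputs."""
    payload = {
        "model": model_name,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    if max_tokens > 0:  # 0 means "unlimited", so the field is simply omitted
        payload["max_tokens"] = max_tokens
    if seed is not None:  # fixes the sampling seed for reproducible output
        payload["seed"] = seed
    return payload

payload = build_chat_payload("gpt-4o", "Hello", max_tokens=256, seed=42)
# POST this as JSON to f"{aiml_api_base}/chat/completions",
# sending api_key as a Bearer token.
```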

Amazon Bedrock

This component generates text using Amazon Bedrock LLMs.

For more information, see Amazon Bedrock documentation.

Parameters

Inputs

| Name | Type | Description |
|------|------|-------------|
| model_id | String | The ID of the Amazon Bedrock model to use. Options include various models. |
| aws_access_key | SecretString | AWS Access Key for authentication. |
| aws_secret_key | SecretString | AWS Secret Key for authentication. |
| credentials_profile_name | String | Name of the AWS credentials profile to use (advanced). |
| region_name | String | AWS region name. Default: "us-east-1". |
| model_kwargs | Dictionary | Additional keyword arguments for the model (advanced). |
| endpoint_url | String | Custom endpoint URL for the Bedrock service (advanced). |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatBedrock configured with the specified parameters. |

Anthropic

This component generates text using Anthropic chat and language models.

For more information, see the Anthropic documentation.

Parameters

Inputs

| Name | Type | Description |
|------|------|-------------|
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens. Default: 4096. |
| model | String | The name of the Anthropic model to use. Options include various Claude 3 models. |
| anthropic_api_key | SecretString | Your Anthropic API key for authentication. |
| temperature | Float | Controls randomness in the output. Default: 0.1. |
| anthropic_api_url | String | Endpoint of the Anthropic API. Defaults to https://api.anthropic.com if not specified (advanced). |
| prefill | String | Prefill text to guide the model's response (advanced). |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatAnthropic configured with the specified parameters. |
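The prefill input seeds the start of the model's reply: with Anthropic's Messages API, supplying a partial assistant turn makes the model continue from that text. A sketch of how such a message list could be built (the helper name is illustrative):

```python
def build_messages(user_text, prefill=""):
    """Build an Anthropic-style message list, optionally prefilling
    the assistant's response so generation continues from that text."""
    messages = [{"role": "user", "content": user_text}]
    if prefill:
        # A trailing assistant turn makes the model continue from `prefill`.
        messages.append({"role": "assistant", "content": prefill})
    return messages

msgs = build_messages("List three colors.", prefill="1.")
```

Prefilling with "1." nudges the model to answer as a numbered list without adding any instructions to the prompt itself.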

Azure OpenAI

This component generates text using Azure OpenAI LLMs.

For more information, see the Azure OpenAI documentation.

Parameters

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| Model Name | Model Name | Specifies the name of the Azure OpenAI model to be used for text generation. |
| Azure Endpoint | Azure Endpoint | Your Azure endpoint, including the resource. |
| Deployment Name | Deployment Name | Specifies the name of the deployment. |
| API Version | API Version | Specifies the version of the Azure OpenAI API to be used. |
| API Key | API Key | Your Azure OpenAI API key. |
| Temperature | Temperature | Specifies the sampling temperature. Defaults to 0.7. |
| Max Tokens | Max Tokens | Specifies the maximum number of tokens to generate. Defaults to 1000. |
| Input Value | Input Value | Specifies the input text for text generation. |
| Stream | Stream | Specifies whether to stream the response from the model. Defaults to False. |

Cohere

This component generates text using Cohere's language models.

For more information, see the Cohere documentation.

Parameters

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| Cohere API Key | Cohere API Key | Your Cohere API key. |
| Max Tokens | Max Tokens | Specifies the maximum number of tokens to generate. Defaults to 256. |
| Temperature | Temperature | Specifies the sampling temperature. Defaults to 0.75. |
| Input Value | Input Value | Specifies the input text for text generation. |

Google Generative AI

This component generates text using Google's Generative AI models.

For more information, see the Google Generative AI documentation.

Parameters

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| Google API Key | Google API Key | Your Google API key to use for the Google Generative AI. |
| Model | Model | The name of the model to use, such as "gemini-pro". |
| Max Output Tokens | Max Output Tokens | The maximum number of tokens to generate. |
| Temperature | Temperature | Run inference with this temperature. |
| Top K | Top K | Consider the set of top K most probable tokens. |
| Top P | Top P | The maximum cumulative probability of tokens to consider when sampling. |
| N | N | Number of chat completions to generate for each prompt. |
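Top K and Top P work together to restrict which tokens are eligible for sampling: Top K keeps only the K most probable tokens, and Top P then keeps the smallest prefix of those whose cumulative probability reaches P. A self-contained illustration of the filtering step (not Google's implementation):

```python
def filter_top_k_top_p(probs, top_k, top_p):
    """Keep the top_k most probable tokens, then keep the smallest prefix
    of those whose cumulative probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:  # enough probability mass collected
            break
    return kept

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
filter_top_k_top_p(probs, top_k=3, top_p=0.9)  # ["a", "b", "c"]
```

The model then samples only from the surviving tokens, which trades diversity for coherence as the thresholds tighten.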

Groq

This component generates text using Groq's language models.

For more information, see the Groq documentation.

Parameters

Inputs

| Name | Type | Description |
|------|------|-------------|
| groq_api_key | SecretString | API key for the Groq API. |
| groq_api_base | String | Base URL path for API requests. Default: "https://api.groq.com" (advanced). |
| max_tokens | Integer | The maximum number of tokens to generate (advanced). |
| temperature | Float | Controls randomness in the output. Range: [0.0, 1.0]. Default: 0.1. |
| n | Integer | Number of chat completions to generate for each prompt (advanced). |
| model_name | String | The name of the Groq model to use. Options are dynamically fetched from the Groq API. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatGroq configured with the specified parameters. |

Hugging Face API

This component generates text using Hugging Face's language models.

For more information, see the Hugging Face documentation.

Parameters

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| Endpoint URL | Endpoint URL | The URL of the Hugging Face Inference API endpoint. |
| Task | Task | Specifies the task for text generation. |
| API Token | API Token | The API token required for authentication. |
| Model Kwargs | Model Kwargs | Additional keyword arguments for the model. |
| Input Value | Input Value | The input text for text generation. |

Maritalk

This component generates text using Maritalk LLMs.

For more information, see Maritalk documentation.

Parameters

Inputs

| Name | Type | Description |
|------|------|-------------|
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens. Default: 512. |
| model_name | String | The name of the Maritalk model to use. Options: "sabia-2-small", "sabia-2-medium". Default: "sabia-2-small". |
| api_key | SecretString | The Maritalk API key to use for authentication. |
| temperature | Float | Controls randomness in the output. Range: [0.0, 1.0]. Default: 0.5. |
| endpoint_url | String | The Maritalk API endpoint. Default: https://api.maritalk.com. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatMaritalk configured with the specified parameters. |

Mistral

This component generates text using MistralAI LLMs.

For more information, see Mistral AI documentation.

Parameters

Inputs

| Name | Type | Description |
|------|------|-------------|
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens (advanced). |
| model_name | String | The name of the Mistral AI model to use. Options include "open-mixtral-8x7b", "open-mixtral-8x22b", "mistral-small-latest", "mistral-medium-latest", "mistral-large-latest", and "codestral-latest". Default: "codestral-latest". |
| mistral_api_base | String | The base URL of the Mistral API. Defaults to https://api.mistral.ai/v1 (advanced). |
| api_key | SecretString | The Mistral API key to use for authentication. |
| temperature | Float | Controls randomness in the output. Default: 0.5. |
| max_retries | Integer | Maximum number of retries for API calls. Default: 5 (advanced). |
| timeout | Integer | Timeout for API calls in seconds. Default: 60 (advanced). |
| max_concurrent_requests | Integer | Maximum number of concurrent API requests. Default: 3 (advanced). |
| top_p | Float | Nucleus sampling parameter. Default: 1 (advanced). |
| random_seed | Integer | Seed for random number generation. Default: 1 (advanced). |
| safe_mode | Boolean | Enables safe mode for content generation (advanced). |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatMistralAI configured with the specified parameters. |
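The max_retries input bounds how many times a failed API call is reattempted before the error is surfaced. A generic retry loop in that spirit (the exponential backoff policy here is an assumption for illustration, not Mistral's documented behavior):

```python
import time

def call_with_retries(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying up to max_retries times with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the last error
            sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...

attempts = []
def flaky():
    """Simulated endpoint that fails twice, then succeeds."""
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient error")
    return "ok"

result = call_with_retries(flaky, max_retries=5, sleep=lambda s: None)
# result == "ok" after 3 attempts
```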

NVIDIA

This component generates text using NVIDIA LLMs.

For more information, see NVIDIA AI Foundation Models documentation.

Parameters

Inputs

| Name | Type | Description |
|------|------|-------------|
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens (advanced). |
| model_name | String | The name of the NVIDIA model to use. Default: "mistralai/mixtral-8x7b-instruct-v0.1". |
| base_url | String | The base URL of the NVIDIA API. Default: "https://integrate.api.nvidia.com/v1". |
| nvidia_api_key | SecretString | The NVIDIA API key for authentication. |
| temperature | Float | Controls randomness in the output. Default: 0.1. |
| seed | Integer | The seed controls the reproducibility of the job (advanced). Default: 1. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatNVIDIA configured with the specified parameters. |

Ollama

This component generates text using Ollama's language models.

For more information, see Ollama documentation.

Parameters

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| Base URL | Base URL | Endpoint of the Ollama API. |
| Model Name | Model Name | The model name to use. |
| Temperature | Temperature | Controls the creativity of model responses. |
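Ollama serves models over a local REST API, and the three inputs above correspond to the pieces of a request against its /api/generate route. A hedged sketch of how such a request could be assembled (the helper name is illustrative; temperature is passed through the request's options object):

```python
import json

def build_ollama_request(base_url, model_name, prompt, temperature=0.8):
    """Assemble the URL and JSON body for Ollama's /api/generate endpoint."""
    body = {
        "model": model_name,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of chunks
        "options": {"temperature": temperature},  # sampling creativity
    }
    return f"{base_url.rstrip('/')}/api/generate", json.dumps(body)

url, body = build_ollama_request("http://localhost:11434", "llama3", "Hello")
# POST `body` to `url` to get a completion from the local Ollama server.
```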

OpenAI

This component generates text using OpenAI's language models.

For more information, see OpenAI documentation.

Parameters

Inputs

| Name | Type | Description |
|------|------|-------------|
| api_key | SecretString | Your OpenAI API key. |
| model | String | The name of the OpenAI model to use. Options include "gpt-3.5-turbo" and "gpt-4". |
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens. |
| temperature | Float | Controls randomness in the output. Range: [0.0, 1.0]. Default: 0.7. |
| top_p | Float | Controls the nucleus sampling. Range: [0.0, 1.0]. Default: 1.0. |
| frequency_penalty | Float | Controls the frequency penalty. Range: [0.0, 2.0]. Default: 0.0. |
| presence_penalty | Float | Controls the presence penalty. Range: [0.0, 2.0]. Default: 0.0. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of OpenAI model configured with the specified parameters. |
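The sampling inputs above each have a fixed valid range. A small validation sketch, with the ranges taken from the table (the function and its name are illustrative, not part of the component):

```python
# Valid ranges as listed in the Inputs table above.
RANGES = {
    "temperature": (0.0, 1.0),
    "top_p": (0.0, 1.0),
    "frequency_penalty": (0.0, 2.0),
    "presence_penalty": (0.0, 2.0),
}

def validate_sampling(**params):
    """Raise ValueError if any known sampling parameter is out of range."""
    for name, value in params.items():
        lo, hi = RANGES[name]
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    return params

validate_sampling(temperature=0.7, top_p=1.0)  # within range, returns the params
```

Catching out-of-range values before the request is sent yields a clearer error than the API's own rejection message.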

Qianfan

This component generates text using Qianfan's language models.

For more information, see Qianfan documentation.

Perplexity

This component generates text using Perplexity's language models.

For more information, see Perplexity documentation.

Parameters

Inputs

| Name | Type | Description |
|------|------|-------------|
| model_name | String | The name of the Perplexity model to use. Options include various Llama 3.1 models. |
| max_output_tokens | Integer | The maximum number of tokens to generate. |
| api_key | SecretString | The Perplexity API key for authentication. |
| temperature | Float | Controls randomness in the output. Default: 0.75. |
| top_p | Float | The maximum cumulative probability of tokens to consider when sampling (advanced). |
| n | Integer | Number of chat completions to generate for each prompt (advanced). |
| top_k | Integer | Number of top tokens to consider for top-k sampling. Must be positive (advanced). |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatPerplexity configured with the specified parameters. |

VertexAI

This component generates text using Vertex AI LLMs.

For more information, see Google Vertex AI documentation.

Parameters

Inputs

| Name | Type | Description |
|------|------|-------------|
| credentials | File | JSON credentials file. Leave empty to fall back to environment variables. File type: JSON. |
| model_name | String | The name of the Vertex AI model to use. Default: "gemini-1.5-pro". |
| project | String | The project ID (advanced). |
| location | String | The location for the Vertex AI API. Default: "us-central1" (advanced). |
| max_output_tokens | Integer | The maximum number of tokens to generate (advanced). |
| max_retries | Integer | Maximum number of retries for API calls. Default: 1 (advanced). |
| temperature | Float | Controls randomness in the output. Default: 0.0. |
| top_k | Integer | The number of highest-probability vocabulary tokens to keep for top-k filtering (advanced). |
| top_p | Float | The cumulative probability of the highest-probability vocabulary tokens to keep for nucleus sampling. Default: 0.95 (advanced). |
| verbose | Boolean | Whether to print verbose output. Default: False (advanced). |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatVertexAI configured with the specified parameters. |
