Model components in Langflow

Model components generate text using large language models.

Refer to your specific component's documentation for more information on parameters.

Use a model component in a flow

Model components receive inputs and prompts for generating text, and send the generated text to an output component.

The model output can also be sent through the Language Model port to a Parse Data component, where the output is parsed into structured Data objects.

For example, the OpenAI model component can serve as the language model in a chatbot flow. For more information, see the Basic prompting flow.

AI/ML API

This component creates a ChatOpenAI model instance using the AIML API.

For more information, see AIML documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens. Range: 0-128000. |
| model_kwargs | Dictionary | Additional keyword arguments for the model. |
| model_name | String | The name of the AIML model to use. Options are predefined in AIML_CHAT_MODELS. |
| aiml_api_base | String | The base URL of the AIML API. Defaults to https://api.aimlapi.com. |
| api_key | SecretString | The AIML API Key to use for the model. |
| temperature | Float | Controls randomness in the output. Default: 0.1. |
| seed | Integer | Controls reproducibility of the job. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatOpenAI configured with the specified parameters. |
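
As the output type indicates, the component builds a LangChain ChatOpenAI instance pointed at the AIML endpoint. A minimal sketch of the equivalent construction; the model name and token limit are illustrative:

```python
from langchain_openai import ChatOpenAI  # pip install langchain-openai

# Rough equivalent of this component's output; values are illustrative.
llm = ChatOpenAI(
    model="gpt-4o",                      # model_name, one of AIML_CHAT_MODELS
    base_url="https://api.aimlapi.com",  # aiml_api_base (component default)
    api_key="AIML_API_KEY",              # api_key (SecretString)
    temperature=0.1,
    max_tokens=256,                      # max_tokens; 0 means unlimited in the component
)
print(llm.invoke("Hello!").content)
```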

Amazon Bedrock

This component generates text using Amazon Bedrock LLMs.

For more information, see Amazon Bedrock documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| model_id | String | The ID of the Amazon Bedrock model to use. Options include various models. |
| aws_access_key | SecretString | AWS Access Key for authentication. |
| aws_secret_key | SecretString | AWS Secret Key for authentication. |
| credentials_profile_name | String | Name of the AWS credentials profile to use (advanced). |
| region_name | String | AWS region name. Default: us-east-1. |
| model_kwargs | Dictionary | Additional keyword arguments for the model (advanced). |
| endpoint_url | String | Custom endpoint URL for the Bedrock service (advanced). |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatBedrock configured with the specified parameters. |
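
A minimal sketch of the equivalent ChatBedrock construction (langchain-aws); the model ID is illustrative, and credentials may also come from the standard AWS credential chain:

```python
from langchain_aws import ChatBedrock  # pip install langchain-aws

llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    region_name="us-east-1",
    credentials_profile_name="default",  # or pass explicit access/secret keys
    model_kwargs={"temperature": 0.1},   # extra model parameters
)
print(llm.invoke("Hello!").content)
```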

Anthropic

This component generates text using Anthropic's Chat and Language models.

For more information, see the Anthropic documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens. Default: 4096. |
| model | String | The name of the Anthropic model to use. Options include various Claude 3 models. |
| anthropic_api_key | SecretString | Your Anthropic API key for authentication. |
| temperature | Float | Controls randomness in the output. Default: 0.1. |
| anthropic_api_url | String | Endpoint of the Anthropic API. Defaults to https://api.anthropic.com if not specified (advanced). |
| prefill | String | Prefill text to guide the model's response (advanced). |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatAnthropic configured with the specified parameters. |
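
A minimal sketch of the equivalent ChatAnthropic construction (langchain-anthropic); the model name is illustrative:

```python
from langchain_anthropic import ChatAnthropic  # pip install langchain-anthropic

llm = ChatAnthropic(
    model="claude-3-5-sonnet-20240620",  # illustrative Claude model name
    api_key="ANTHROPIC_API_KEY",         # anthropic_api_key
    max_tokens=4096,
    temperature=0.1,
)
print(llm.invoke("Hello!").content)
```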

Azure OpenAI

This component generates text using Azure OpenAI LLM.

For more information, see the Azure OpenAI documentation.

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| Model Name | Model Name | Specifies the name of the Azure OpenAI model to be used for text generation. |
| Azure Endpoint | Azure Endpoint | Your Azure endpoint, including the resource. |
| Deployment Name | Deployment Name | Specifies the name of the deployment. |
| API Version | API Version | Specifies the version of the Azure OpenAI API to be used. |
| API Key | API Key | Your Azure OpenAI API key. |
| Temperature | Temperature | Specifies the sampling temperature. Defaults to 0.7. |
| Max Tokens | Max Tokens | Specifies the maximum number of tokens to generate. Defaults to 1000. |
| Input Value | Input Value | Specifies the input text for text generation. |
| Stream | Stream | Specifies whether to stream the response from the model. Defaults to False. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of AzureOpenAI configured with the specified parameters. |
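
A minimal sketch of the equivalent AzureChatOpenAI construction (langchain-openai); the endpoint, deployment, and API version are placeholders:

```python
from langchain_openai import AzureChatOpenAI  # pip install langchain-openai

llm = AzureChatOpenAI(
    azure_endpoint="https://<resource>.openai.azure.com",  # Azure Endpoint
    azure_deployment="<deployment-name>",                  # Deployment Name
    api_version="2024-02-01",                              # API Version (illustrative)
    api_key="AZURE_OPENAI_API_KEY",
    temperature=0.7,
    max_tokens=1000,
)
print(llm.invoke("Hello!").content)
```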

Cohere

This component generates text using Cohere's language models.

For more information, see the Cohere documentation.

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| Cohere API Key | Cohere API Key | Your Cohere API key. |
| Max Tokens | Max Tokens | Specifies the maximum number of tokens to generate. Defaults to 256. |
| Temperature | Temperature | Specifies the sampling temperature. Defaults to 0.75. |
| Input Value | Input Value | Specifies the input text for text generation. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of the Cohere model configured with the specified parameters. |
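
A minimal sketch of the equivalent ChatCohere construction (langchain-cohere); parameter coverage varies by package version:

```python
from langchain_cohere import ChatCohere  # pip install langchain-cohere

llm = ChatCohere(
    cohere_api_key="COHERE_API_KEY",
    temperature=0.75,
)
print(llm.invoke("Hello!").content)
```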

DeepSeek

This component generates text using DeepSeek's language models.

For more information, see the DeepSeek documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| max_tokens | Integer | Maximum number of tokens to generate. Set to 0 for unlimited. Range: 0-128000. |
| model_kwargs | Dictionary | Additional keyword arguments for the model. |
| json_mode | Boolean | If True, the model outputs JSON regardless of whether a schema is passed. |
| model_name | String | The DeepSeek model to use. Default: deepseek-chat. |
| api_base | String | Base URL for API requests. Default: https://api.deepseek.com. |
| api_key | SecretString | Your DeepSeek API key for authentication. |
| temperature | Float | Controls randomness in responses. Range: [0.0, 2.0]. Default: 1.0. |
| seed | Integer | The seed for random number generation. Use the same seed for more reproducible results, and a different seed for more varied results. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatOpenAI configured with the specified parameters. |
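
DeepSeek exposes an OpenAI-compatible API, which is why the output is a ChatOpenAI instance. A minimal sketch; the seed value is illustrative:

```python
from langchain_openai import ChatOpenAI  # pip install langchain-openai

llm = ChatOpenAI(
    model="deepseek-chat",                # model_name (component default)
    base_url="https://api.deepseek.com",  # api_base
    api_key="DEEPSEEK_API_KEY",
    temperature=1.0,
    seed=42,  # same seed -> more reproducible output
)
print(llm.invoke("Hello!").content)
```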

Google Generative AI

This component generates text using Google's Generative AI models.

For more information, see the Google Generative AI documentation.

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| Google API Key | Google API Key | Your Google API key to use for the Google Generative AI. |
| Model | Model | The name of the model to use, such as "gemini-pro". |
| Max Output Tokens | Max Output Tokens | The maximum number of tokens to generate. |
| Temperature | Temperature | Run inference with this temperature. |
| Top K | Top K | Consider the set of top K most probable tokens. |
| Top P | Top P | The maximum cumulative probability of tokens to consider when sampling. |
| N | N | Number of chat completions to generate for each prompt. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatGoogleGenerativeAI configured with the specified parameters. |
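
A minimal sketch of the equivalent ChatGoogleGenerativeAI construction (langchain-google-genai); the sampling values are illustrative:

```python
from langchain_google_genai import ChatGoogleGenerativeAI  # pip install langchain-google-genai

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    google_api_key="GOOGLE_API_KEY",
    temperature=0.7,         # illustrative
    top_k=40,                # illustrative
    top_p=0.95,              # illustrative
    max_output_tokens=1024,  # illustrative
)
print(llm.invoke("Hello!").content)
```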

Groq

This component generates text using Groq's language models.

For more information, see the Groq documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| groq_api_key | SecretString | API key for the Groq API. |
| groq_api_base | String | Base URL path for API requests. Default: https://api.groq.com (advanced). |
| max_tokens | Integer | The maximum number of tokens to generate (advanced). |
| temperature | Float | Controls randomness in the output. Range: [0.0, 1.0]. Default: 0.1. |
| n | Integer | Number of chat completions to generate for each prompt (advanced). |
| model_name | String | The name of the Groq model to use. Options are dynamically fetched from the Groq API. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatGroq configured with the specified parameters. |
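
A minimal sketch of the equivalent ChatGroq construction (langchain-groq); the model name is illustrative, since the component fetches the model list dynamically:

```python
from langchain_groq import ChatGroq  # pip install langchain-groq

llm = ChatGroq(
    model="llama-3.1-8b-instant",  # illustrative model_name
    api_key="GROQ_API_KEY",        # groq_api_key
    temperature=0.1,
    max_tokens=1024,               # illustrative
)
print(llm.invoke("Hello!").content)
```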

Hugging Face API

This component generates text using Hugging Face's language models.

For more information, see the Hugging Face documentation.

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| Endpoint URL | Endpoint URL | The URL of the Hugging Face Inference API endpoint. |
| Task | Task | Specifies the task for text generation. |
| API Token | API Token | The API token required for authentication. |
| Model Kwargs | Model Kwargs | Additional keyword arguments for the model. |
| Input Value | Input Value | The input text for text generation. |
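
A minimal sketch of the equivalent HuggingFaceEndpoint construction (langchain-huggingface); the endpoint URL is a placeholder:

```python
from langchain_huggingface import HuggingFaceEndpoint  # pip install langchain-huggingface

llm = HuggingFaceEndpoint(
    endpoint_url="https://<your-endpoint>.endpoints.huggingface.cloud",  # Endpoint URL
    task="text-generation",               # Task
    huggingfacehub_api_token="HF_TOKEN",  # API Token
)
print(llm.invoke("Hello!"))  # text-in/text-out, not a chat model
```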

LMStudio

This component generates text using LM Studio's local language models.

For more information, see LM Studio documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| base_url | String | The URL where LM Studio is running. Default: "http://localhost:1234". |
| max_tokens | Integer | Maximum number of tokens to generate in the response. Default: 512. |
| temperature | Float | Controls randomness in the output. Range: [0.0, 2.0]. Default: 0.7. |
| top_p | Float | Controls diversity via nucleus sampling. Range: [0.0, 1.0]. Default: 1.0. |
| stop | List[String] | List of strings that will stop generation when encountered (advanced). |
| stream | Boolean | Whether to stream the response. Default: False. |
| presence_penalty | Float | Penalizes repeated tokens. Range: [-2.0, 2.0]. Default: 0.0. |
| frequency_penalty | Float | Penalizes frequent tokens. Range: [-2.0, 2.0]. Default: 0.0. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of LMStudio configured with the specified parameters. |
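
LM Studio serves an OpenAI-compatible API locally, so a ChatOpenAI pointed at it behaves like this component. A sketch under that assumption; the /v1 suffix and dummy key reflect LM Studio's usual local server setup:

```python
from langchain_openai import ChatOpenAI  # pip install langchain-openai

llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",  # base_url plus the usual /v1 path
    api_key="lm-studio",                  # placeholder; the local server ignores it
    model="local-model",                  # illustrative; use the model loaded in LM Studio
    temperature=0.7,
    max_tokens=512,
)
print(llm.invoke("Hello!").content)
```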

Maritalk

This component generates text using Maritalk LLMs.

For more information, see Maritalk documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens. Default: 512. |
| model_name | String | The name of the Maritalk model to use. Options: sabia-2-small, sabia-2-medium. Default: sabia-2-small. |
| api_key | SecretString | The Maritalk API Key to use for authentication. |
| temperature | Float | Controls randomness in the output. Range: [0.0, 1.0]. Default: 0.5. |
| endpoint_url | String | The Maritalk API endpoint. Default: https://api.maritalk.com. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatMaritalk configured with the specified parameters. |
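
A minimal sketch of the equivalent ChatMaritalk construction (langchain-community):

```python
from langchain_community.chat_models import ChatMaritalk  # pip install langchain-community

llm = ChatMaritalk(
    model="sabia-2-small",  # model_name (component default)
    api_key="MARITALK_API_KEY",
    temperature=0.5,
    max_tokens=512,
)
print(llm.invoke("Olá!").content)
```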

Mistral

This component generates text using MistralAI LLMs.

For more information, see Mistral AI documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens (advanced). |
| model_name | String | The name of the Mistral AI model to use. Options include open-mixtral-8x7b, open-mixtral-8x22b, mistral-small-latest, mistral-medium-latest, mistral-large-latest, and codestral-latest. Default: codestral-latest. |
| mistral_api_base | String | The base URL of the Mistral API. Defaults to https://api.mistral.ai/v1 (advanced). |
| api_key | SecretString | The Mistral API Key to use for authentication. |
| temperature | Float | Controls randomness in the output. Default: 0.5. |
| max_retries | Integer | Maximum number of retries for API calls. Default: 5 (advanced). |
| timeout | Integer | Timeout for API calls in seconds. Default: 60 (advanced). |
| max_concurrent_requests | Integer | Maximum number of concurrent API requests. Default: 3 (advanced). |
| top_p | Float | Nucleus sampling parameter. Default: 1 (advanced). |
| random_seed | Integer | Seed for random number generation. Default: 1 (advanced). |
| safe_mode | Boolean | Enables safe mode for content generation (advanced). |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatMistralAI configured with the specified parameters. |
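
A minimal sketch of the equivalent ChatMistralAI construction (langchain-mistralai):

```python
from langchain_mistralai import ChatMistralAI  # pip install langchain-mistralai

llm = ChatMistralAI(
    model="codestral-latest",  # model_name (component default)
    api_key="MISTRAL_API_KEY",
    temperature=0.5,
    max_retries=5,
    timeout=60,
)
print(llm.invoke("Hello!").content)
```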

Novita AI

This component generates text using Novita AI's language models.

For more information, see Novita AI documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| api_key | SecretString | Your Novita AI API Key. |
| model | String | The ID of the Novita AI model to use. |
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens. |
| temperature | Float | Controls randomness in the output. Range: [0.0, 1.0]. Default: 0.7. |
| top_p | Float | Controls the nucleus sampling. Range: [0.0, 1.0]. Default: 1.0. |
| frequency_penalty | Float | Controls the frequency penalty. Range: [0.0, 2.0]. Default: 0.0. |
| presence_penalty | Float | Controls the presence penalty. Range: [0.0, 2.0]. Default: 0.0. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of the Novita AI model configured with the specified parameters. |
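
Novita AI exposes an OpenAI-compatible API, so a hedged sketch can use ChatOpenAI; the base URL and model ID below are assumptions, not values from this component:

```python
from langchain_openai import ChatOpenAI  # pip install langchain-openai

llm = ChatOpenAI(
    model="meta-llama/llama-3.1-8b-instruct",    # illustrative model ID
    base_url="https://api.novita.ai/v3/openai",  # assumed OpenAI-compatible endpoint
    api_key="NOVITA_API_KEY",
    temperature=0.7,
)
print(llm.invoke("Hello!").content)
```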

NVIDIA

This component generates text using NVIDIA LLMs.

For more information, see NVIDIA AI documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens (advanced). |
| model_name | String | The name of the NVIDIA model to use. Default: mistralai/mixtral-8x7b-instruct-v0.1. |
| base_url | String | The base URL of the NVIDIA API. Default: https://integrate.api.nvidia.com/v1. |
| nvidia_api_key | SecretString | The NVIDIA API Key for authentication. |
| temperature | Float | Controls randomness in the output. Default: 0.1. |
| seed | Integer | The seed controls the reproducibility of the job (advanced). Default: 1. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatNVIDIA configured with the specified parameters. |
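
A minimal sketch of the equivalent ChatNVIDIA construction (langchain-nvidia-ai-endpoints):

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA  # pip install langchain-nvidia-ai-endpoints

llm = ChatNVIDIA(
    model="mistralai/mixtral-8x7b-instruct-v0.1",  # model_name (component default)
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="NVIDIA_API_KEY",                      # nvidia_api_key
    temperature=0.1,
)
print(llm.invoke("Hello!").content)
```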

Ollama

This component generates text using Ollama's language models.

For more information, see Ollama documentation.

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| Base URL | Base URL | Endpoint of the Ollama API. |
| Model Name | Model Name | The model name to use. |
| Temperature | Temperature | Controls the creativity of model responses. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of an Ollama model configured with the specified parameters. |
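
A minimal sketch of the equivalent ChatOllama construction (langchain-ollama); the model name is illustrative and must already be pulled locally:

```python
from langchain_ollama import ChatOllama  # pip install langchain-ollama

llm = ChatOllama(
    base_url="http://localhost:11434",  # Ollama's default endpoint
    model="llama3",                     # illustrative; run `ollama pull llama3` first
    temperature=0.7,
)
print(llm.invoke("Hello!").content)
```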

OpenAI

This component generates text using OpenAI's language models.

For more information, see OpenAI documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| api_key | SecretString | Your OpenAI API Key. |
| model | String | The name of the OpenAI model to use. Options include "gpt-3.5-turbo" and "gpt-4". |
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens. |
| temperature | Float | Controls randomness in the output. Range: [0.0, 1.0]. Default: 0.7. |
| top_p | Float | Controls the nucleus sampling. Range: [0.0, 1.0]. Default: 1.0. |
| frequency_penalty | Float | Controls the frequency penalty. Range: [0.0, 2.0]. Default: 0.0. |
| presence_penalty | Float | Controls the presence penalty. Range: [0.0, 2.0]. Default: 0.0. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of the OpenAI model configured with the specified parameters. |
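
A minimal sketch of the equivalent ChatOpenAI construction (langchain-openai):

```python
from langchain_openai import ChatOpenAI  # pip install langchain-openai

llm = ChatOpenAI(
    model="gpt-4",
    api_key="OPENAI_API_KEY",
    temperature=0.7,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
)
print(llm.invoke("Hello!").content)
```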

OpenRouter

This component generates text using OpenRouter's unified API for multiple AI models from different providers.

For more information, see OpenRouter documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| api_key | SecretString | Your OpenRouter API key for authentication. |
| site_url | String | Your site URL for OpenRouter rankings (advanced). |
| app_name | String | Your app name for OpenRouter rankings (advanced). |
| provider | String | The AI model provider to use. |
| model_name | String | The specific model to use for chat completion. |
| temperature | Float | Controls randomness in the output. Range: [0.0, 2.0]. Default: 0.7. |
| max_tokens | Integer | The maximum number of tokens to generate (advanced). |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatOpenAI configured with the specified parameters. |
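
OpenRouter is OpenAI-compatible, which is why the output is a ChatOpenAI instance. A sketch; the model ID and ranking headers are illustrative:

```python
from langchain_openai import ChatOpenAI  # pip install langchain-openai

llm = ChatOpenAI(
    model="anthropic/claude-3.5-sonnet",      # illustrative "<provider>/<model>" ID
    base_url="https://openrouter.ai/api/v1",
    api_key="OPENROUTER_API_KEY",
    temperature=0.7,
    default_headers={                           # optional ranking metadata
        "HTTP-Referer": "https://example.com",  # site_url
        "X-Title": "My App",                    # app_name
    },
)
print(llm.invoke("Hello!").content)
```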

Perplexity

This component generates text using Perplexity's language models.

For more information, see Perplexity documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| model_name | String | The name of the Perplexity model to use. Options include various Llama 3.1 models. |
| max_output_tokens | Integer | The maximum number of tokens to generate. |
| api_key | SecretString | The Perplexity API Key for authentication. |
| temperature | Float | Controls randomness in the output. Default: 0.75. |
| top_p | Float | The maximum cumulative probability of tokens to consider when sampling (advanced). |
| n | Integer | Number of chat completions to generate for each prompt (advanced). |
| top_k | Integer | Number of top tokens to consider for top-k sampling. Must be positive (advanced). |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatPerplexity configured with the specified parameters. |
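
A minimal sketch of the equivalent ChatPerplexity construction (langchain-community); the model name is illustrative:

```python
from langchain_community.chat_models import ChatPerplexity  # pip install langchain-community

llm = ChatPerplexity(
    model="llama-3.1-sonar-small-128k-online",  # illustrative model_name
    pplx_api_key="PERPLEXITY_API_KEY",
    temperature=0.75,
)
print(llm.invoke("Hello!").content)
```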

Qianfan

This component generates text using Qianfan's language models.

For more information, see Qianfan documentation.
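
A minimal sketch of a Qianfan chat model via LangChain's QianfanChatEndpoint (langchain-community); the model name is illustrative, and credentials can also come from the QIANFAN_AK / QIANFAN_SK environment variables:

```python
from langchain_community.chat_models import QianfanChatEndpoint  # pip install langchain-community qianfan

llm = QianfanChatEndpoint(
    model="ERNIE-Bot",  # illustrative
    qianfan_ak="QIANFAN_AK",
    qianfan_sk="QIANFAN_SK",
)
print(llm.invoke("你好").content)
```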

SambaNova

This component generates text using SambaNova LLMs.

For more information, see SambaNova Cloud documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| sambanova_url | String | Base URL path for API requests. Default: https://api.sambanova.ai/v1/chat/completions. |
| sambanova_api_key | SecretString | Your SambaNova API Key. |
| model_name | String | The name of the SambaNova model to use. Options include various Llama models. |
| max_tokens | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens. |
| temperature | Float | Controls randomness in the output. Range: [0.0, 1.0]. Default: 0.07. |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of the SambaNova model configured with the specified parameters. |
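
SambaNova Cloud serves an OpenAI-compatible API, so one hedged sketch is a ChatOpenAI against that endpoint; the base URL and model name below are assumptions:

```python
from langchain_openai import ChatOpenAI  # pip install langchain-openai

llm = ChatOpenAI(
    model="Meta-Llama-3.1-8B-Instruct",      # illustrative model_name
    base_url="https://api.sambanova.ai/v1",  # assumed OpenAI-compatible base path
    api_key="SAMBANOVA_API_KEY",
    temperature=0.07,
)
print(llm.invoke("Hello!").content)
```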

VertexAI

This component generates text using Vertex AI LLMs.

For more information, see Google Vertex AI documentation.

Inputs

| Name | Type | Description |
|------|------|-------------|
| credentials | File | JSON credentials file. Leave empty to fall back to environment variables. File type: JSON. |
| model_name | String | The name of the Vertex AI model to use. Default: "gemini-1.5-pro". |
| project | String | The project ID (advanced). |
| location | String | The location for the Vertex AI API. Default: "us-central1" (advanced). |
| max_output_tokens | Integer | The maximum number of tokens to generate (advanced). |
| max_retries | Integer | Maximum number of retries for API calls. Default: 1 (advanced). |
| temperature | Float | Controls randomness in the output. Default: 0.0. |
| top_k | Integer | The number of highest-probability vocabulary tokens to keep for top-k filtering (advanced). |
| top_p | Float | The cumulative probability of the highest-probability vocabulary tokens to keep for nucleus sampling. Default: 0.95 (advanced). |
| verbose | Boolean | Whether to print verbose output. Default: False (advanced). |

Outputs

| Name | Type | Description |
|------|------|-------------|
| model | LanguageModel | An instance of ChatVertexAI configured with the specified parameters. |
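
A minimal sketch of the equivalent ChatVertexAI construction (langchain-google-vertexai); with no explicit credentials file, Application Default Credentials apply, and the project ID is a placeholder:

```python
from langchain_google_vertexai import ChatVertexAI  # pip install langchain-google-vertexai

llm = ChatVertexAI(
    model_name="gemini-1.5-pro",  # component default
    project="<my-gcp-project>",   # placeholder project ID
    location="us-central1",
    temperature=0.0,
    max_retries=1,
)
print(llm.invoke("Hello!").content)
```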
