
IBM

Bundles contain custom components that support specific third-party integrations with Langflow.

The IBM bundle provides access to IBM watsonx.ai models for text and embedding generation. These components require an IBM watsonx.ai deployment and watsonx API credentials.

IBM watsonx.ai

The IBM watsonx.ai component generates text using supported foundation models in IBM watsonx.ai.

You can use this component anywhere you need a language model in a flow.

A basic prompting flow using the IBM watsonx.ai component as the central language model component.

IBM watsonx.ai parameters

Many IBM watsonx.ai component input parameters are hidden by default in the visual editor. You can toggle parameters through the Controls in the component's header menu.

| Name | Type | Description |
|------|------|-------------|
| url | String | Input parameter. The watsonx API base URL for your deployment and region. |
| project_id | String | Input parameter. Your watsonx Project ID. |
| api_key | SecretString | Input parameter. A watsonx API key to authenticate watsonx API access to the specified watsonx.ai deployment and model. |
| model_name | String | Input parameter. The name of the watsonx model to use. Options are dynamically fetched from the API. |
| max_tokens | Integer | Input parameter. The maximum number of tokens to generate. Default: 1000. |
| stop_sequence | String | Input parameter. The sequence where generation should stop. |
| temperature | Float | Input parameter. Controls randomness in the output. Default: 0.1. |
| top_p | Float | Input parameter. Controls nucleus sampling, which limits the model to the smallest set of tokens whose cumulative probability reaches the top_p value. Default: 0.9. |
| frequency_penalty | Float | Input parameter. Controls frequency penalty. A positive value decreases the probability of repeating tokens, and a negative value increases the probability. Default: 0.5. |
| presence_penalty | Float | Input parameter. Controls presence penalty. A positive value increases the likelihood of new topics being introduced. Default: 0.3. |
| seed | Integer | Input parameter. A random seed for the model. Default: 8. |
| logprobs | Boolean | Input parameter. Whether to return log probabilities of output tokens. Default: True. |
| top_logprobs | Integer | Input parameter. The number of most likely tokens to return at each position. Default: 3. |
| logit_bias | String | Input parameter. A JSON string of token IDs to bias or suppress. |
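
For example, the logit_bias value is a JSON string that maps token IDs to bias values. The following sketch uses arbitrary token IDs purely for illustration; the exact accepted bias range depends on the watsonx API.

```python
import json

# Hypothetical token IDs for illustration only. A strongly negative bias
# effectively suppresses a token, while a positive value makes it more likely.
logit_bias = json.dumps({"1003": -100, "2023": 5})
print(logit_bias)  # {"1003": -100, "2023": 5}
```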

IBM watsonx.ai output

The IBM watsonx.ai component can output either a Model Response (Message) or a Language Model (LanguageModel).

Use the Language Model output when you want to use an IBM watsonx.ai model as the LLM for another LLM-driven component, such as a Language Model or Smart Function component. For more information, see Language Model components.

The LanguageModel output from the IBM watsonx.ai component is an instance of ChatWatsonx configured according to the component's parameters.
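
Outside of Langflow, a comparable object can be built directly with the langchain-ibm package. This is a minimal sketch, not the component's exact internals: it assumes langchain-ibm is installed, the WATSONX_APIKEY environment variable holds your watsonx API key, and the URL, project ID, and model ID shown here are placeholders to replace with values from your own watsonx.ai deployment.

```python
import os

from langchain_ibm import ChatWatsonx

# langchain-ibm reads the API key from WATSONX_APIKEY when apikey is not passed.
os.environ.setdefault("WATSONX_APIKEY", "YOUR_WATSONX_API_KEY")

# Placeholder URL, project ID, and model ID.
llm = ChatWatsonx(
    model_id="ibm/granite-3-8b-instruct",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="YOUR_PROJECT_ID",
    params={
        "temperature": 0.1,  # mirrors the component's temperature default
        "max_tokens": 1000,  # mirrors the component's max_tokens default
    },
)

response = llm.invoke("Summarize what IBM watsonx.ai is in one sentence.")
print(response.content)
```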

IBM watsonx.ai Embeddings

The IBM watsonx.ai Embeddings component uses the supported foundation models in IBM watsonx.ai for embedding generation.

The output is Embeddings generated with WatsonxEmbeddings.

For more information about using embedding model components in flows, see Embedding Model components.

A basic embedding generation flow using the IBM watsonx.ai Embeddings component.

IBM watsonx.ai Embeddings parameters

Some IBM watsonx.ai Embeddings component input parameters are hidden by default in the visual editor. You can toggle parameters through the Controls in the component's header menu.

| Name | Display Name | Info |
|------|--------------|------|
| url | watsonx API Endpoint | Input parameter. The watsonx API base URL for your deployment and region. |
| project_id | watsonx project id | Input parameter. Your watsonx Project ID. |
| api_key | API Key | Input parameter. A watsonx API key to authenticate watsonx API access to the specified watsonx.ai deployment and model. |
| model_name | Model Name | Input parameter. The name of the embedding model to use. Supports default embedding models and automatically updates after connecting to your watsonx.ai deployment. |
| truncate_input_tokens | Truncate Input Tokens | Input parameter. The maximum number of tokens to process. Default: 200. |
| input_text | Include the original text in the output | Input parameter. Determines whether the original text is included in the output. Default: True. |

Default embedding models

By default, the IBM watsonx.ai Embeddings component supports the following models:

  • sentence-transformers/all-minilm-l12-v2: 384-dimensional embeddings
  • ibm/slate-125m-english-rtrvr-v2: 768-dimensional embeddings
  • ibm/slate-30m-english-rtrvr-v2: 768-dimensional embeddings
  • intfloat/multilingual-e5-large: 1024-dimensional embeddings

After entering your API endpoint and credentials, the component automatically fetches the list of available models from your watsonx.ai deployment.
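
For comparison, a similar embeddings object can be created outside of Langflow with the langchain-ibm package. This is a sketch under assumptions, not the component's exact internals: it assumes langchain-ibm is installed, WATSONX_APIKEY holds your watsonx API key, and the URL and project ID are placeholders; the model ID is one of the defaults listed above.

```python
import os

from langchain_ibm import WatsonxEmbeddings

# langchain-ibm reads the API key from WATSONX_APIKEY when apikey is not passed.
os.environ.setdefault("WATSONX_APIKEY", "YOUR_WATSONX_API_KEY")

# Placeholder URL and project ID.
embeddings = WatsonxEmbeddings(
    model_id="ibm/slate-125m-english-rtrvr-v2",  # 768-dimensional default model
    url="https://us-south.ml.cloud.ibm.com",
    project_id="YOUR_PROJECT_ID",
    params={"truncate_input_tokens": 200},  # mirrors the component's default
)

vector = embeddings.embed_query("What is IBM watsonx.ai?")
print(len(vector))  # 768 dimensions for this model
```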
