
LiteLLM

Bundles contain custom components that support specific third-party integrations with Langflow.

The LiteLLM bundle component connects to models through a LiteLLM proxy, which routes requests to multiple LLM providers. Using a proxy lets you change model providers without changing credentials in your flows. You authenticate to the proxy using a single key, and the proxy then uses its own configured credentials to call providers. Virtual keys are created by the proxy administrator. For more information on managing virtual keys, see Virtual Keys in the LiteLLM documentation.

LiteLLM Proxy text generation

The LiteLLM Proxy component generates text by sending requests through a LiteLLM proxy to an LLM provider.

It can output either a Model Response (Message) or a Language Model (LanguageModel).

Use the Language Model output when you want to use a LiteLLM proxy-backed model as the LLM for another LLM-driven component, such as an Agent or Smart Transform component.

For more information, see Language model components.

LiteLLM Proxy parameters

Some parameters are hidden by default in the visual editor. You can modify all component parameters through the component inspection panel that appears when you select a component.

| Name | Type | Description |
|------|------|-------------|
| api_base | String | Input parameter. Base URL of the LiteLLM proxy. Default: `http://localhost:4000/v1`. |
| api_key | String | Input parameter. Virtual key for authentication with the LiteLLM proxy. |
| model_name | String | Input parameter. Model name to use, such as `gpt-4o` or `claude-3-opus`. |
| temperature | Float | Input parameter. Controls randomness. Lower values are more deterministic. Range: [0.0, 2.0]. Default: 0.7. |
| max_tokens | Integer | Input parameter. Maximum number of tokens to generate. Set to 0 for no limit. Range: [0, 128000]. Advanced. |
| timeout | Integer | Input parameter. Request timeout in seconds. Default: 60. |
| max_retries | Integer | Input parameter. Maximum number of retries on failure. Default: 2. |
| stream | Boolean | Input parameter. Whether to stream the response. |
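A LiteLLM proxy exposes an OpenAI-compatible API, so the parameters above map directly onto a standard chat completion request. The sketch below is illustrative only: it assumes an OpenAI-style `/chat/completions` endpoint on the proxy, and the `build_request` helper and its names are hypothetical, not part of the component.

```python
# Hypothetical sketch: mapping the component's parameters onto an
# OpenAI-compatible chat completion request to a LiteLLM proxy.
import json
import urllib.request


def build_request(api_base, api_key, model_name, prompt,
                  temperature=0.7, max_tokens=0, stream=False):
    payload = {
        "model": model_name,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "stream": stream,
    }
    # max_tokens=0 means "no limit" in the component, so omit the field.
    if max_tokens > 0:
        payload["max_tokens"] = max_tokens
    # Authenticate with the virtual key issued by the proxy administrator.
    return urllib.request.Request(
        f"{api_base}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


# Send with urllib.request.urlopen(req, timeout=60); retries on failure
# (max_retries) would be handled by a loop around that call.
```

Because only `api_base` and `api_key` identify the proxy, switching providers is a change on the proxy side; the request built in a flow stays the same.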