
Perplexity

Bundles contain custom components that support specific third-party integrations with Langflow.

This page describes the components that are available in the Perplexity bundle.

For more information about Perplexity features and functionality used by Perplexity components, see the Perplexity documentation.

Perplexity text generation

This component generates text using Perplexity's language models.

It can output either a Model Response (Message) or a Language Model (LanguageModel).

Use the Language Model output when you want to use a Perplexity model as the LLM for another LLM-driven component, such as a Language Model or Smart Function component.

For more information, see Language Model components.

Perplexity text generation parameters

Many Perplexity component input parameters are hidden by default in the visual editor. To show or hide them, use the Controls option in the component's header menu.

| Name | Type | Description |
|------|------|-------------|
| model_name | String | Input parameter. The name of the Perplexity model to use. Options include various Llama 3.1 models. |
| max_output_tokens | Integer | Input parameter. The maximum number of tokens to generate. |
| api_key | SecretString | Input parameter. The Perplexity API key for authentication. |
| temperature | Float | Input parameter. Controls randomness in the output. Default: 0.75. |
| top_p | Float | Input parameter. Advanced. The maximum cumulative probability of tokens to consider when sampling. |
| n | Integer | Input parameter. Advanced. The number of chat completions to generate for each prompt. |
| top_k | Integer | Input parameter. Advanced. The number of top tokens to consider for top-k sampling. Must be a positive integer. |