# Perplexity
Bundles contain custom components that support specific third-party integrations with Langflow.
This page describes the components that are available in the Perplexity bundle.
For more information about Perplexity features and functionality used by Perplexity components, see the Perplexity documentation.
## Perplexity text generation
This component generates text using Perplexity's language models.
It can output either a Model Response (`Message`) or a Language Model (`LanguageModel`).
Use the Language Model output when you want to use a Perplexity model as the LLM for another LLM-driven component, such as a Language Model or Smart Function component.
For more information, see Language Model components.
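Under the hood, text generation amounts to a chat-completions call against Perplexity's OpenAI-compatible API. The following is a minimal sketch of such a call using only the standard library; the endpoint URL and example model name are assumptions based on Perplexity's public API documentation, not part of the Langflow component's interface.

```python
import json
import urllib.request

# Assumed endpoint from Perplexity's public API docs.
PPLX_ENDPOINT = "https://api.perplexity.ai/chat/completions"

def build_request(prompt: str, api_key: str,
                  model: str = "llama-3.1-sonar-small-128k-online") -> urllib.request.Request:
    """Build (but do not send) a chat-completions request for Perplexity."""
    body = json.dumps({
        "model": model,  # example model name; check Perplexity's docs for current options
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        PPLX_ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Sending the request requires a real API key:
# with urllib.request.urlopen(build_request("Hello", api_key)) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The component wraps this exchange for you; the sketch is only meant to show what the API key and model name parameters feed into.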
### Perplexity text generation parameters
Many Perplexity component input parameters are hidden by default in the visual editor. You can toggle parameter visibility through the Controls option in the component's header menu.
| Name | Type | Description |
|---|---|---|
| model_name | String | Input parameter. The name of the Perplexity model to use. Options include various Llama 3.1 models. |
| max_output_tokens | Integer | Input parameter. The maximum number of tokens to generate. |
| api_key | SecretString | Input parameter. The Perplexity API key for authentication. |
| temperature | Float | Input parameter. Controls randomness in the output. Default: 0.75. |
| top_p | Float | Input parameter. The maximum cumulative probability of tokens to consider when sampling (advanced). |
| n | Integer | Input parameter. The number of chat completions to generate for each prompt (advanced). |
| top_k | Integer | Input parameter. The number of top tokens to consider for top-k sampling. Must be positive (advanced). |