
Chains

🚧ZONE UNDER CONSTRUCTION

We appreciate your understanding as we polish our documentation – it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝

Chains, in the context of language models, are sequences of calls made to a language model, where the output of one call can serve as the input for the next. Different types of chains support different levels of complexity, making them useful for building pipelines and handling specific scenarios — see the sketch below.
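
For example, two LLM calls can be chained so that the first call's output feeds the second. Here is a minimal sketch using the classic LangChain Python API; the model, temperature, and prompts are illustrative assumptions:

```python
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)  # assumes an OpenAI API key in the environment

# First call: generate a company name for a product.
name_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["product"],
    template="Suggest one company name for a maker of {product}.",
))

# Second call: the first chain's output becomes this chain's input.
slogan_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["company"],
    template="Write a slogan for the company {company}.",
))

pipeline = SimpleSequentialChain(chains=[name_chain, slogan_chain])
print(pipeline.run("colorful socks"))
```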


CombineDocsChain

The CombineDocsChain provides methods for combining or aggregating loaded documents for question answering.

info

Works as a proxy for LangChain's document chains generated by the load_qa_chain function.

Params

  • LLM: Language Model to use in the chain.

  • chain_type: The chain type to be used. Each one of them applies a different “combination strategy”.

    • stuff: The stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt, and passes that prompt to an LLM. This chain is well-suited for applications where documents are small and only a few are passed in for most calls.

    • map_reduce: The map-reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress or collapse the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.

    • map_rerank: The map re-rank documents chain runs an initial prompt on each document that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest-scoring response is returned.

    • refine: The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.

      Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context. The obvious tradeoff is that this chain will make far more LLM calls than, for example, the Stuff documents chain. There are also certain tasks that are difficult to accomplish iteratively. For example, the Refine chain can perform poorly when documents frequently cross-reference one another or when a task requires detailed information from many documents.
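
A minimal sketch of how such a chain is typically created with load_qa_chain; the documents and question below are illustrative assumptions:

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

docs = [Document(page_content="The warranty lasts two years.")]  # illustrative documents

# chain_type can be "stuff", "map_reduce", "map_rerank", or "refine".
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")

answer = chain.run(input_documents=docs, question="How long is the warranty?")
print(answer)
```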


ConversationChain

The ConversationChain is a straightforward chain for interactive conversations with a language model, making it ideal for chatbots or virtual assistants. It allows for dynamic conversations, question-answering, and complex dialogues.

Params

  • LLM: Language Model to use in the chain.
  • Memory: Default memory store.
  • input_key: Used to specify the key under which the user input will be stored in the conversation memory. It allows you to provide the user's input to the chain for processing and generating a response.
  • output_key: Used to specify the key under which the generated response will be stored in the conversation memory. It allows you to retrieve the response using the specified key.
  • verbose: Whether or not to run in verbose mode. When set to True, the chain prints some of its internal state while running, which can help with debugging and understanding the chain's behavior — defaults to False.
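
A minimal usage sketch, assuming the classic LangChain API and a buffer memory:

```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),  # stores prior turns in the conversation memory
    verbose=True,                       # prints internal state while running
)

conversation.predict(input="Hi, my name is Ada.")
print(conversation.predict(input="What is my name?"))  # memory supplies the earlier turn
```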

ConversationalRetrievalChain

The ConversationalRetrievalChain extracts information and provides answers by combining document search and question-answering abilities.

info

A retriever is a component that finds documents based on a query. It doesn't store the documents themselves, but it returns the ones that match the query.

Params

  • LLM: Language Model to use in the chain.

  • Memory: Default memory store.

  • Retriever: The retriever used to fetch relevant documents.

  • chain_type: The chain type to be used. Each one of them applies a different “combination strategy” — see the stuff, map_reduce, map_rerank, and refine descriptions under CombineDocsChain above.

  • return_source_documents: Used to specify whether or not to include the source documents that were used to answer the question in the output. When set to True, source documents will be included in the output along with the generated answer. This can be useful for providing additional context or references to the user — defaults to True.

  • verbose: Whether or not to run in verbose mode. In verbose mode, intermediate logs will be printed to the console — defaults to False.
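
A minimal sketch, assuming an existing vector store; the FAISS index, embeddings, and question below are illustrative assumptions:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import FAISS

# Illustrative in-memory vector store; any retriever works here.
vectorstore = FAISS.from_texts(
    ["Refunds are available within 30 days of purchase."],
    OpenAIEmbeddings(),
)

qa = ConversationalRetrievalChain.from_llm(
    llm=OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    memory=ConversationBufferMemory(memory_key="chat_history", return_messages=True),
    chain_type="stuff",
)

result = qa({"question": "What is the refund policy?"})
print(result["answer"])
```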


LLMChain

The LLMChain is a straightforward chain that adds functionality around language models. It combines a prompt template with a language model: input variables format the prompt template, the formatted prompt is sent to the language model, and the generated output is returned as the result of the LLMChain.

Params

  • LLM: Language Model to use in the chain.
  • Memory: Default memory store.
  • Prompt: Prompt template object to use in the chain.
  • output_key: This parameter is used to specify which key in the LLM output dictionary should be returned as the final output. By default, the LLMChain returns both the input and output key values — defaults to text.
  • verbose: Whether or not to run in verbose mode. In verbose mode, intermediate logs will be printed to the console — defaults to False.
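
A minimal usage sketch; the prompt and input are illustrative assumptions:

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt)  # output_key defaults to "text"
print(chain.run(product="colorful socks"))
```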

LLMMathChain

The LLMMathChain combines a language model (LLM) and a math calculation component. It allows the user to input math problems and get the corresponding solutions.

The LLMMathChain works by using the language model with an LLMChain to understand the input math problem and generate a math expression. It then passes this expression to the math component, which evaluates it and returns the result.

Params

  • LLM: Language Model to use in the chain.
  • LLMChain: LLM Chain to use in the chain.
  • Memory: Default memory store.
  • input_key: Used to specify the key under which the math problem is provided to the chain, allowing you to pass in the specific values or variables to use in the calculation — defaults to question.
  • output_key: Used to specify the key under which the output of the mathematical calculation will be stored. It allows you to retrieve the result of the calculation using the specified key — defaults to answer.
  • verbose: Whether or not to run in verbose mode. In verbose mode, intermediate logs will be printed to the console — defaults to False.
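
A minimal usage sketch, assuming the classic LangChain API:

```python
from langchain.chains import LLMMathChain
from langchain.llms import OpenAI

llm_math = LLMMathChain.from_llm(OpenAI(temperature=0), verbose=True)

# The LLM translates the question into a math expression, which is then evaluated.
print(llm_math.run("What is 13 raised to the 0.3432 power?"))
```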

RetrievalQA

RetrievalQA is a chain used to find relevant documents or information to answer a given query. The retriever returns the relevant documents based on the query, and the QA component then extracts the answer from those documents, combining the capabilities of both to provide accurate and relevant answers.

info

A retriever is a component that finds documents based on a query. It doesn't store the documents themselves, but it returns the ones that match the query.

Params

  • Combine Documents Chain: Chain to use to combine the documents.
  • Memory: Default memory store.
  • Retriever: The retriever used to fetch relevant documents.
  • input_key: This parameter is used to specify the key in the input data that contains the question. It is used to retrieve the question from the input data and pass it to the question-answering model for generating the answer — defaults to query.
  • output_key: This parameter is used to specify the key in the output data where the generated answer will be stored. It is used to retrieve the answer from the output data after the question-answering model has generated it — defaults to result.
  • return_source_documents: Used to specify whether or not to include the source documents that were used to answer the question in the output. When set to True, source documents will be included in the output along with the generated answer. This can be useful for providing additional context or references to the user — defaults to True.
  • verbose: Whether or not to run in verbose mode. In verbose mode, intermediate logs will be printed to the console — defaults to False.
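
A minimal sketch, again assuming an existing vector store; the index contents and query are illustrative assumptions:

```python
from langchain.chains import RetrievalQA
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Illustrative in-memory vector store backing the retriever.
vectorstore = FAISS.from_texts(
    ["LangChain chains can be combined into larger pipelines."],
    OpenAIEmbeddings(),
)

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,
)

out = qa({"query": "What can chains be combined into?"})
print(out["result"])            # the generated answer
print(out["source_documents"])  # the documents used to answer
```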

SQLDatabaseChain

The SQLDatabaseChain finds answers to questions using a SQL database. It works by using the language model to understand the natural-language question and generate the corresponding SQL code. It then passes the SQL code to the SQL database component, which executes the query on the database and returns the result.

Params

  • Db: SQL Database to connect to.
  • LLM: Language Model to use in the chain.
  • Prompt: Prompt template to translate natural language to SQL.
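
A minimal sketch, assuming a local SQLite database; the database URI and question are illustrative assumptions (in newer LangChain releases this chain lives in langchain_experimental):

```python
from langchain.chains import SQLDatabaseChain
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")  # illustrative database URI

chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
print(chain.run("How many employees are there?"))
```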
