
Create a vector RAG chatbot

This tutorial demonstrates how you can use Langflow to create a chatbot application that uses Retrieval Augmented Generation (RAG) to embed your data as vectors in a vector database, and then chat with the data.

Prerequisites

  • A running Langflow server
  • An OpenAI API key
  • Node.js to run the chatbot script in this tutorial

Create a vector RAG flowโ€‹

  1. In Langflow, click New Flow, and then select the Vector Store RAG template.

    About the Vector Store RAG template

    This template has two flows.

    The Load Data Flow at the bottom of the workspace populates a vector store with data from a file. This data is used to respond to queries submitted to the Retriever Flow, which is at the top of the workspace.

    Specifically, the Load Data Flow ingests data from a local file, splits the data into chunks, computes embeddings for the chunks, and then loads and indexes the chunks, together with their embeddings, in your vector database. This flow needs to run only when you load data into your vector database.

    The Retriever Flow receives chat input, generates an embedding for the input, and then compares that embedding to the stored embeddings to find similar data. The retrieved chunks are reconstructed into text and passed to a language model to generate a response.

  2. Add your OpenAI API key to both OpenAI Embeddings components.

  3. Optional: Replace both Astra DB vector store components with Chroma DB or another vector store component of your choice. This tutorial uses Chroma DB.

    The Load Data Flow should have File, Split Text, Embedding Model, vector store (such as Chroma DB), and Chat Output components:

    File loader chat flow

    The Retriever Flow should have Chat Input, Embedding Model, vector store, Parser, Prompt, Language Model, and Chat Output components:

    Chat with RAG flow

    The flows are ready to use. Continue the tutorial to learn how to use the loading flow to load data into your vector store, and then call the chat flow in a chatbot application.

Load data and generate embeddings

To load data and generate embeddings, you can use the Langflow UI or the /v2/files endpoint.

The Langflow UI option is simpler, but it is recommended only when the user who created the flow is also the user who loads data into the database.

If many users load data, or you need to load data programmatically, use the Langflow API option, as sketched after the following steps. To use the Langflow UI option:

  1. In your RAG chatbot flow, click the File component, and then click File.
  2. Select the local file you want to upload, and then click Open. The file is loaded to your Langflow server.
  3. To load the data into your vector store, click the vector store component, and then click Run component to run the selected component and all prior dependent components.

When the flow runs, it ingests the selected file, chunks the data, loads the data into the vector store, and then generates embeddings for the chunks, which are also stored in the vector store.

Your database now contains data with vector embeddings that an LLM can use as context to respond to queries, as demonstrated in the next section of the tutorial.
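
If you need the programmatic option, the following is a minimal sketch, assuming Node.js 18 or later (for the built-in fetch, FormData, and Blob), the same API_KEY and SERVER placeholders as the chat script in the next section, and a LOAD_FLOW_ID placeholder for the Load Data Flow's ID. The component ID in the tweaks object (File-abc12) and the path parameter name are hypothetical; copy the real values from the code snippets on your flow's API access pane.

    const fs = require('fs/promises');

    async function loadFile(filePath) {
      // Upload the file to the Langflow server with the /v2/files endpoint.
      const form = new FormData();
      form.append('file', new Blob([await fs.readFile(filePath)]), filePath.split('/').pop());
      const uploadResponse = await fetch(`http://${SERVER}/api/v2/files`, {
        method: 'POST',
        headers: { 'x-api-key': API_KEY },
        body: form
      });
      const uploaded = await uploadResponse.json();

      // Run the Load Data Flow, pointing its File component at the uploaded file.
      // 'File-abc12' is a hypothetical component ID; use the ID from your own flow.
      await fetch(`http://${SERVER}/api/v1/run/${LOAD_FLOW_ID}`, {
        method: 'POST',
        headers: { 'x-api-key': API_KEY, 'Content-Type': 'application/json' },
        body: JSON.stringify({
          tweaks: { 'File-abc12': { path: uploaded.path } }
        })
      });
    }

    loadFile('./my-data.txt').catch(console.error);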

Chat with your flow from a JavaScript application

To chat with the data in your vector database, create a chatbot application that runs the Retriever Flow programmatically.

This tutorial uses JavaScript for demonstration purposes.

  1. To construct the chatbot, gather the following information:

    • LANGFLOW_SERVER_ADDRESS: Your Langflow server's domain. The default value is 127.0.0.1:7860. You can get this value from the code snippets on your flow's API access pane.
    • FLOW_ID: Your flow's UUID or custom endpoint name. You can get this value from the code snippets on your flow's API access pane.
    • LANGFLOW_API_KEY: A valid Langflow API key. To create an API key, see API keys.
  2. Copy the following script into a JavaScript file, and then replace the placeholders with the information you gathered in the previous step:


    const readline = require('readline');
    const { LangflowClient } = require('@datastax/langflow-client');

    const API_KEY = 'LANGFLOW_API_KEY';
    const SERVER = 'LANGFLOW_SERVER_ADDRESS';
    const FLOW_ID = 'FLOW_ID';

    const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

    // Initialize the Langflow client
    const client = new LangflowClient({
      baseUrl: SERVER,
      apiKey: API_KEY
    });

    async function sendMessage(message) {
      try {
        const response = await client.flow(FLOW_ID).run(message, {
          session_id: 'user_1'
        });

        // Use the convenience method to get the chat output text
        return response.chatOutputText() || 'No response';
      } catch (error) {
        return `Error: ${error.message}`;
      }
    }

    function chat() {
      console.log('🤖 Langflow RAG Chatbot (type "quit" to exit)\n');

      const ask = () => {
        rl.question('👤 You: ', async (input) => {
          if (['quit', 'exit', 'bye'].includes(input.trim().toLowerCase())) {
            console.log('👋 Goodbye!');
            rl.close();
            return;
          }

          const response = await sendMessage(input.trim());
          console.log(`🤖 Assistant: ${response}\n`);
          ask();
        });
      };

      ask();
    }

    chat();

    The script creates a Node application that chats with the content in your vector database, using the chat input and output types to communicate with your flow. Because the flow uses chat inputs and outputs, the conversation maintains context across multiple messages. If the flow used text type inputs and outputs instead, each request would be treated as a standalone text string.
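
    As a minimal illustration (reusing the client and FLOW_ID from the script above, inside an async function; the session IDs here are arbitrary examples), distinct session_id values keep separate conversation histories:

    // Each session_id keeps its own conversation history on the Langflow server.
    await client.flow(FLOW_ID).run('My name is Alice.', { session_id: 'alice' });

    // A different session has no memory of the 'alice' conversation, so the
    // model cannot recall the name from that session's context.
    const reply = await client.flow(FLOW_ID).run('What is my name?', { session_id: 'bob' });
    console.log(reply.chatOutputText());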

    tip

    The Langflow TypeScript client has a chatOutputText() convenience method that simplifies working with Langflow's complex JSON response structure. Instead of manually navigating through multiple levels of nested objects with data.outputs[0].outputs[0].results.message.data.text, the client automatically extracts the message text and handles potentially undefined values gracefully.
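
    For illustration, here is a sketch of the manual traversal the client replaces. The variable data stands for the parsed raw API response body, and optional chaining guards each level of the path from the tip above, because any of them can be undefined:

    // Manual extraction from the raw response JSON: every level can be undefined.
    const text = data?.outputs?.[0]?.outputs?.[0]?.results?.message?.data?.text ?? 'No response';

    // Equivalent result with the TypeScript client's convenience method:
    const sameText = response.chatOutputText() || 'No response';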

  3. Save and run the script to send the requests and test the flow.

    Response

    The following is an example of a response returned from this tutorial's flow. Due to the nature of LLMs and variations in your inputs, your response might be different.


    👤 You: Do you have any documents about engines?
    🤖 Assistant: Yes, the provided text contains several warnings and guidelines related to engine installation, maintenance, and selection. It emphasizes the importance of using the correct engine for specific applications, ensuring all components are in good condition, and following safety precautions to prevent fire or explosion. If you need more specific information or details, please let me know!

    👤 You: It should be about a Briggs and Stratton engine.
    🤖 Assistant: The text provides important safety and installation guidelines for Briggs & Stratton engines. It emphasizes that these engines should not be used on 3-wheel All-Terrain Vehicles (ATVs), motor bikes, aircraft products, or vehicles intended for competitive events, as such uses are not approved by Briggs & Stratton.

    If you have any specific questions about Briggs & Stratton engines or need further information, feel free to ask!

Next steps

For more information about building on or extending this tutorial, see the related Langflow documentation.