
Quickstart

Get started with Langflow by loading a template flow, running it, and then serving it at the /run API endpoint.

Prerequisites

Create a Langflow API key

A Langflow API key is a user-specific token you can use with Langflow.

To create a Langflow API key, do the following:

  1. In Langflow, click your user icon, and then select Settings.

  2. Click Langflow API Keys, and then click Add New.

  3. Name your key, and then click Create API Key.

  4. Copy the API key and store it securely.

  5. To use your Langflow API key in a request, set a LANGFLOW_API_KEY environment variable in your terminal, and then include an x-api-key header or query parameter with your request. For example:


    ```shell
    # Set variable
    export LANGFLOW_API_KEY="sk..."

    # Send request
    curl --request POST \
      --url 'http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID' \
      --header 'Content-Type: application/json' \
      --header "x-api-key: $LANGFLOW_API_KEY" \
      --data '{
        "output_type": "chat",
        "input_type": "chat",
        "input_value": "Hello"
      }'
    ```
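If you're calling the API from Python rather than curl, the same choice between header and query parameter applies. The following is a minimal, stdlib-only sketch of both auth forms; `LANGFLOW_SERVER_ADDRESS` and `FLOW_ID` are placeholders, as above:

```python
import os
from urllib.parse import urlencode

# Read the key set with `export LANGFLOW_API_KEY="sk..."`
api_key = os.environ.get("LANGFLOW_API_KEY", "")

base_url = "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID"

# Option 1: send the key as an x-api-key header (as in the curl example)
headers = {"Content-Type": "application/json", "x-api-key": api_key}

# Option 2: send the key as an x-api-key query parameter instead
url_with_key = f"{base_url}?{urlencode({'x-api-key': api_key})}"
```

Either form authenticates the request; use one, not both.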

Run the Simple Agent template flow

  1. In Langflow, click New Flow, and then select the Simple Agent template.

Simple agent starter flow

The Simple Agent flow consists of an Agent component connected to Chat I/O components, a Calculator component, and a URL component. When you run this flow, you submit a query to the agent through the Chat Input component, the agent uses the Calculator and URL tools to generate a response, and then returns the response through the Chat Output component.

Many components can be tools for agents, including Model Context Protocol (MCP) servers. The agent decides which tools to call based on the context of a given query.

  2. In the Agent component's settings, in the OpenAI API Key field, enter your OpenAI API key. This guide uses an OpenAI model for demonstration purposes. If you want to use a different provider, change the Model Provider field, and then provide credentials for your selected provider.

    Optionally, you can click Globe to store the key in a Langflow global variable.

  3. To run the flow, click Playground.

  4. To test the Calculator tool, ask the agent a simple math question, such as I want to add 4 and 4. To help you test and evaluate your flows, the Playground shows the agent's reasoning process as it analyzes the prompt, selects a tool, and then uses the tool to generate a response. In this case, a math question causes the agent to select the Calculator tool and use an action like evaluate_expression.

Playground with Agent tool

  5. To test the URL tool, ask the agent about current events. For this request, the agent selects the URL tool's fetch_content action, and then returns a summary of current news headlines.

  6. When you are done testing the flow, click Close.

Next steps

Now that you've run your first flow, try these next steps:

Run your flows from external applications

Langflow is an IDE, but it's also a runtime you can call through the Langflow API with Python, JavaScript, or HTTP.

When you start Langflow locally, you can send requests to the local Langflow server. For production applications, you need to deploy a stable Langflow instance to handle API calls.

For example, you can use the /run endpoint to run a flow and get the result.

Langflow provides code snippets to help you get started with the Langflow API.

  1. To open the API access pane, in the Playground, click Share, and then click API access.

    The default code in the API access pane constructs a request with the Langflow server URL, headers, and a payload of request data. The code snippets automatically include the LANGFLOW_SERVER_ADDRESS and FLOW_ID values for the flow, and they read your LANGFLOW_API_KEY if you've set it as an environment variable in your terminal session. Replace these values if you're using the code for a different server or flow. The default Langflow server address is http://localhost:7860.


    ```python
    import os

    import requests

    url = "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID"  # The complete API endpoint URL for this flow

    # Request payload configuration
    payload = {
        "output_type": "chat",
        "input_type": "chat",
        "input_value": "hello world!"
    }

    # Request headers, reading the API key from the environment
    headers = {
        "Content-Type": "application/json",
        "x-api-key": os.getenv("LANGFLOW_API_KEY")
    }

    try:
        # Send API request
        response = requests.request("POST", url, json=payload, headers=headers)
        response.raise_for_status()  # Raise exception for bad status codes

        # Print response
        print(response.text)

    except requests.exceptions.RequestException as e:
        print(f"Error making API request: {e}")
    except ValueError as e:
        print(f"Error parsing response: {e}")
    ```

  2. Copy the snippet, paste it in a script file, and then run the script to send the request. If you are using the curl snippet, you can run the command directly in your terminal.

If the request is successful, the response includes many details about the flow run, including the session ID, inputs, outputs, components, durations, and more. The following is an example of a response from running the Simple Agent template flow:

Response

```json
{
  "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
  "outputs": [
    {
      "inputs": {
        "input_value": "hello world!"
      },
      "outputs": [
        {
          "results": {
            "message": {
              "text_key": "text",
              "data": {
                "timestamp": "2025-06-16 19:58:23 UTC",
                "sender": "Machine",
                "sender_name": "AI",
                "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
                "text": "Hello world! 🌍 How can I assist you today?",
                "files": [],
                "error": false,
                "edit": false,
                "properties": {
                  "text_color": "",
                  "background_color": "",
                  "edited": false,
                  "source": {
                    "id": "Agent-ZOknz",
                    "display_name": "Agent",
                    "source": "gpt-4o-mini"
                  },
                  "icon": "bot",
                  "allow_markdown": false,
                  "positive_feedback": null,
                  "state": "complete",
                  "targets": []
                },
                "category": "message",
                "content_blocks": [
                  {
                    "title": "Agent Steps",
                    "contents": [
                      {
                        "type": "text",
                        "duration": 2,
                        "header": {
                          "title": "Input",
                          "icon": "MessageSquare"
                        },
                        "text": "**Input**: hello world!"
                      },
                      {
                        "type": "text",
                        "duration": 226,
                        "header": {
                          "title": "Output",
                          "icon": "MessageSquare"
                        },
                        "text": "Hello world! 🌍 How can I assist you today?"
                      }
                    ],
                    "allow_markdown": true,
                    "media_url": null
                  }
                ],
                "id": "f3d85d9a-261c-4325-b004-95a1bf5de7ca",
                "flow_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
                "duration": null
              },
              "default_value": "",
              "text": "Hello world! 🌍 How can I assist you today?",
              "sender": "Machine",
              "sender_name": "AI",
              "files": [],
              "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
              "timestamp": "2025-06-16T19:58:23+00:00",
              "flow_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
              "error": false,
              "edit": false,
              "properties": {
                "text_color": "",
                "background_color": "",
                "edited": false,
                "source": {
                  "id": "Agent-ZOknz",
                  "display_name": "Agent",
                  "source": "gpt-4o-mini"
                },
                "icon": "bot",
                "allow_markdown": false,
                "positive_feedback": null,
                "state": "complete",
                "targets": []
              },
              "category": "message",
              "content_blocks": [
                {
                  "title": "Agent Steps",
                  "contents": [
                    {
                      "type": "text",
                      "duration": 2,
                      "header": {
                        "title": "Input",
                        "icon": "MessageSquare"
                      },
                      "text": "**Input**: hello world!"
                    },
                    {
                      "type": "text",
                      "duration": 226,
                      "header": {
                        "title": "Output",
                        "icon": "MessageSquare"
                      },
                      "text": "Hello world! 🌍 How can I assist you today?"
                    }
                  ],
                  "allow_markdown": true,
                  "media_url": null
                }
              ],
              "duration": null
            }
          },
          "artifacts": {
            "message": "Hello world! 🌍 How can I assist you today?",
            "sender": "Machine",
            "sender_name": "AI",
            "files": [],
            "type": "object"
          },
          "outputs": {
            "message": {
              "message": "Hello world! 🌍 How can I assist you today?",
              "type": "text"
            }
          },
          "logs": {
            "message": []
          },
          "messages": [
            {
              "message": "Hello world! 🌍 How can I assist you today?",
              "sender": "Machine",
              "sender_name": "AI",
              "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
              "stream_url": null,
              "component_id": "ChatOutput-aF5lw",
              "files": [],
              "type": "text"
            }
          ],
          "timedelta": null,
          "duration": null,
          "component_display_name": "Chat Output",
          "component_id": "ChatOutput-aF5lw",
          "used_frozen_result": false
        }
      ]
    }
  ]
}
```

In a production application, you probably want to select parts of this response to return to the user, store in logs, and so on. The next steps demonstrate how you can extract data from a Langflow API response to use in your application.
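For example, the chat reply in the response above sits several levels deep in the nested outputs structure. A minimal sketch, using a sample dict trimmed from that response:

```python
# Minimal sketch: pull the chat message out of a /run response.
# `response_data` is trimmed from the example response above.
response_data = {
    "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
    "outputs": [
        {
            "inputs": {"input_value": "hello world!"},
            "outputs": [
                {
                    "outputs": {
                        "message": {
                            "message": "Hello world! 🌍 How can I assist you today?",
                            "type": "text",
                        }
                    }
                }
            ],
        }
    ],
}

# The chat reply lives at outputs[0].outputs[0].outputs.message.message
message = response_data["outputs"][0]["outputs"][0]["outputs"]["message"]["message"]
print(message)
```

The same indexing works on the full response; the extra keys (`results`, `artifacts`, `logs`, and so on) are simply ignored.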

Extract data from the response

The following example builds on the API pane's example code to create a question-and-answer chat in your terminal that stores the Agent's previous answer.

  1. Incorporate your Simple Agent flow's /run snippet into the following script. This script runs a question-and-answer chat in your terminal and stores the Agent's previous answer so you can compare answers.


    ```python
    import os

    import requests

    url = "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID"

    def extract_message(data):
        # Pull the chat reply out of the nested response structure
        try:
            return data["outputs"][0]["outputs"][0]["outputs"]["message"]["message"]
        except (KeyError, IndexError):
            return None

    def ask_agent(question):
        payload = {
            "output_type": "chat",
            "input_type": "chat",
            "input_value": question,
        }

        headers = {
            "Content-Type": "application/json",
            "x-api-key": os.getenv("LANGFLOW_API_KEY"),
        }

        try:
            response = requests.post(url, json=payload, headers=headers)
            response.raise_for_status()

            # Get the response message
            return extract_message(response.json())

        except Exception as e:
            return f"Error: {str(e)}"

    # Store the previous answer from the ask_agent response
    previous_answer = None

    # The terminal chat loop
    while True:
        # Get user input
        print("\nAsk the agent anything, such as 'What is 15 * 7?' or 'What is the capital of France?'")
        print("Type 'quit' to exit or 'compare' to see the previous answer")
        user_question = input("Your question: ")

        if user_question.lower() == 'quit':
            break
        elif user_question.lower() == 'compare':
            if previous_answer:
                print(f"\nPrevious answer was: {previous_answer}")
            else:
                print("\nNo previous answer to compare with!")
            continue

        # Get and display the answer
        result = ask_agent(user_question)
        print(f"\nAgent's answer: {result}")

        # Store the answer for comparison
        previous_answer = result
    ```

  2. To view the Agent's previous answer, type compare. To close the terminal chat, type quit.

Use tweaks to apply temporary overrides to a flow run

You can include tweaks with your requests to temporarily modify flow parameters. Tweaks are added to the API request and override the flow's component settings for a single run only. They don't modify the underlying flow configuration or persist between runs.

Tweaks are added to the /run endpoint's payload. To assist with formatting, you can define tweaks in Langflow's Input Schema pane before copying the code snippet.

  1. To open the Input Schema pane, from the API access pane, click Input Schema.
  2. In the Input Schema pane, select each parameter you want to modify in your next request. Enabling parameters in the Input Schema pane doesn't make them modifiable; it only adds them to the example code.
  3. For example, to change the LLM provider from OpenAI to Groq, and include your Groq API key with the request, select the values Model Providers, Model, and Groq API Key. Langflow updates the tweaks object in the code snippets based on your input parameters, and includes default values to guide you. Use the updated code snippets in your script to run your flow with your overrides.

```python
payload = {
    "output_type": "chat",
    "input_type": "chat",
    "input_value": "hello world!",
    "tweaks": {
        "Agent-ZOknz": {
            "agent_llm": "Groq",
            "api_key": "GROQ_API_KEY",
            "model_name": "llama-3.1-8b-instant"
        }
    }
}
```
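If you build the payload in code, you can attach tweaks conditionally. A minimal sketch, where `build_payload` is an illustrative helper (not part of the Langflow API) and the component ID and fields come from the example above:

```python
def build_payload(message, tweaks=None):
    """Build a /run payload, optionally attaching per-component tweaks."""
    payload = {
        "output_type": "chat",
        "input_type": "chat",
        "input_value": message,
    }
    if tweaks:
        payload["tweaks"] = tweaks
    return payload

# Override the Agent component's model provider for this run only
groq_tweaks = {
    "Agent-ZOknz": {
        "agent_llm": "Groq",
        "api_key": "GROQ_API_KEY",
        "model_name": "llama-3.1-8b-instant",
    }
}

payload = build_payload("hello world!", tweaks=groq_tweaks)
```

Because tweaks are scoped to the request, calls without the `tweaks` key run the flow with its saved configuration.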
