# OpenAI Responses API

Langflow includes an endpoint that is compatible with the OpenAI Responses API, available at `POST /api/v1/responses`.

This endpoint allows you to use existing OpenAI client libraries with minimal code changes. You only need to replace the model name, such as `gpt-4`, with your `flow_id`. You can find flow IDs in the code snippets on the API access pane or in a flow's URL.
## Prerequisites

To be compatible with Langflow's OpenAI Responses API endpoint, your flow and request must adhere to the following requirements. A client-side preflight check is sketched after this list.

- **Chat Input**: Your flow must contain a Chat Input component. Flows without this component return an error when passed to this endpoint. The component types `ChatInput` and `Chat Input` are recognized as chat inputs.
- **Tools**: The `tools` parameter isn't supported, and returns an error if provided.
- **Model names**: In your request, the `model` field must contain a valid flow ID or endpoint name.
- **Authentication**: All requests require an API key passed in the `x-api-key` header. For more information, see API keys and authentication.
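As a minimal sketch, the following hypothetical Python helper (not part of Langflow) shows how a client could check a request payload against these requirements before sending it:

```python
# Hypothetical client-side preflight check mirroring the requirements above.
# `payload` is the JSON body you intend to send to POST /api/v1/responses.
def preflight_check(payload: dict, api_key: str | None) -> None:
    if not api_key:
        raise ValueError("An API key is required in the x-api-key header.")
    if not payload.get("model"):
        raise ValueError("'model' must be a valid flow ID or endpoint name.")
    if payload.get("tools"):
        raise ValueError("The 'tools' parameter isn't supported and returns an error.")
```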
## Additional configuration for OpenAI client libraries

This endpoint is compatible with OpenAI's API, but it requires special configuration when using OpenAI client libraries. Langflow uses `x-api-key` headers for authentication, while OpenAI uses `Authorization: Bearer` headers. When sending requests to Langflow with OpenAI client libraries, you must configure custom headers and include an `api_key` configuration.

The `api_key` parameter can have any value, such as `"dummy-api-key"` in the client examples, because the actual authentication is handled through the `default_headers` configuration.

In the following examples, replace the values for `LANGFLOW_SERVER_URL`, `LANGFLOW_API_KEY`, and `FLOW_ID` with values from your deployment.
**OpenAI Python Client:**

```python
from openai import OpenAI

client = OpenAI(
    base_url="LANGFLOW_SERVER_URL/api/v1/",
    default_headers={"x-api-key": "LANGFLOW_API_KEY"},
    api_key="dummy-api-key"  # Required by the OpenAI SDK but not used by Langflow
)

response = client.responses.create(
    model="FLOW_ID",
    input="There is an event that happens on the second Wednesday of every month. What are the event dates in 2026?",
)

print(response.output_text)
```

**OpenAI TypeScript Client:**

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "LANGFLOW_SERVER_URL/api/v1/",
  defaultHeaders: {
    "x-api-key": "LANGFLOW_API_KEY"
  },
  apiKey: "dummy-api-key" // Required by the OpenAI SDK but not used by Langflow
});

const response = await client.responses.create({
  model: "FLOW_ID",
  input: "There is an event that happens on the second Wednesday of every month. What are the event dates in 2026?"
});

console.log(response.output_text);
```
**Response:**

```text
Here are the event dates for the second Wednesday of each month in 2026:
- January 14, 2026
- February 11, 2026
- March 11, 2026
- April 8, 2026
- May 13, 2026
- June 10, 2026
- July 8, 2026
- August 12, 2026
- September 9, 2026
- October 14, 2026
- November 11, 2026
- December 9, 2026
If you need these in a different format or want a downloadable calendar, let me know!
```
## Example request

```bash
curl -X POST \
  "$LANGFLOW_SERVER_URL/api/v1/responses" \
  -H "x-api-key: $LANGFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{
    \"model\": \"$FLOW_ID\",
    \"input\": \"Hello, how are you?\",
    \"stream\": false
  }"
```
## Headers

| Header | Required | Description | Example |
|---|---|---|---|
| `x-api-key` | Yes | Your Langflow API key for authentication. | `sk-...` |
| `Content-Type` | Yes | Specifies the JSON format. | `application/json` |
| `X-LANGFLOW-GLOBAL-VAR-*` | No | Global variables for the flow. For more, see Pass global variables to your flows in headers. | `X-LANGFLOW-GLOBAL-VAR-API_KEY: sk-...` |
## Request body

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `model` | string | Yes | - | The flow ID or endpoint name to execute. |
| `input` | string | Yes | - | The input text to process. |
| `stream` | boolean | No | `false` | Whether to stream the response. |
| `background` | boolean | No | `false` | Whether to process the request in the background. |
| `tools` | list[Any] | No | `null` | Tools are not supported yet. |
| `previous_response_id` | string | No | `null` | ID of a previous response to continue the conversation. For more, see Continue conversations with response and session IDs. |
| `include` | list[string] | No | `null` | Additional response data to include, such as `["tool_call.results"]`. For more, see Retrieve tool call results. |
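As a rough illustration of this request structure, the following Python sketch sends a request with the optional fields set explicitly. It uses the third-party `requests` library, and the server URL, API key, and flow ID are placeholders you must replace:

```python
import requests

LANGFLOW_SERVER_URL = "http://localhost:7860"  # Replace with your server URL
LANGFLOW_API_KEY = "YOUR_LANGFLOW_API_KEY"     # Replace with your API key
FLOW_ID = "YOUR_FLOW_ID"                       # Replace with your flow ID

payload = {
    "model": FLOW_ID,               # Flow ID or endpoint name (required)
    "input": "Hello, how are you?", # Input text (required)
    "stream": False,                # Optional, defaults to false
    "background": False,            # Optional, defaults to false
    "previous_response_id": None,   # Optional, continues a conversation
    "include": None,                # Optional, e.g. ["tool_call.results"]
}

response = requests.post(
    f"{LANGFLOW_SERVER_URL}/api/v1/responses",
    headers={"x-api-key": LANGFLOW_API_KEY, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json()["output"])
```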
## Example response

```json
{
  "id": "e5e8ef8a-7efd-4090-a110-6aca082bceb7",
  "object": "response",
  "created_at": 1756837941,
  "status": "completed",
  "model": "ced2ec91-f325-4bf0-8754-f3198c2b1563",
  "output": [
    {
      "type": "message",
      "id": "msg_e5e8ef8a-7efd-4090-a110-6aca082bceb7",
      "status": "completed",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Hello! I'm here and ready to help. How can I assist you today?",
          "annotations": []
        }
      ]
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {"effort": null, "summary": null},
  "store": true,
  "temperature": 1.0,
  "text": {"format": {"type": "text"}},
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1.0,
  "truncation": "disabled",
  "usage": null,
  "user": null,
  "metadata": {}
}
```
## Response body

The response contains fields that Langflow sets dynamically and fields that use OpenAI-compatible defaults. The OpenAI-compatible default values shown in the example above are currently fixed and cannot be modified through the request; they are included to maintain API compatibility and provide a consistent response format. For your requests, you only need to work with the dynamic fields; the defaults are documented here for completeness and to show the full response structure. A short sketch for extracting the message text follows the tables below.
**Fields set dynamically by Langflow:**

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique response identifier. |
| `created_at` | int | Unix timestamp of response creation. |
| `model` | string | The flow ID that was executed. |
| `output` | list[dict] | Array of output items (messages, tool calls, and so on). |
| `previous_response_id` | string | ID of the previous response if continuing a conversation. |
**Fields with OpenAI-compatible default values:**

| Field | Type | Default value | Description |
|---|---|---|---|
| `object` | string | `"response"` | Always `"response"`. |
| `status` | string | `"completed"` | Response status: `"completed"`, `"in_progress"`, or `"failed"`. |
| `error` | dict | `null` | Error details (if any). |
| `incomplete_details` | dict | `null` | Incomplete response details (if any). |
| `instructions` | string | `null` | Response instructions (if any). |
| `max_output_tokens` | int | `null` | Maximum output tokens (if any). |
| `parallel_tool_calls` | boolean | `true` | Whether parallel tool calls are enabled. |
| `reasoning` | dict | `{"effort": null, "summary": null}` | Reasoning information with effort and summary. |
| `store` | boolean | `true` | Whether the response is stored. |
| `temperature` | float | `1.0` | Temperature setting. |
| `text` | dict | `{"format": {"type": "text"}}` | Text format configuration. |
| `tool_choice` | string | `"auto"` | Tool choice setting. |
| `tools` | list[dict] | `[]` | Available tools. |
| `top_p` | float | `1.0` | Top-p setting. |
| `truncation` | string | `"disabled"` | Truncation setting. |
| `usage` | dict | `null` | Usage statistics (if any). |
| `user` | string | `null` | User identifier (if any). |
| `metadata` | dict | `{}` | Additional metadata. |
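As a minimal sketch, assuming the response shape shown in the example above, you can extract the assistant's text by walking the `output` array:

```python
# Minimal sketch: pull the assistant's text out of a /responses payload.
# Assumes the response shape shown in the example above.
def extract_output_text(response: dict) -> str:
    parts = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content["text"])
    return "".join(parts)
```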
## Example streaming request

When you set `"stream": true` in your request, the API returns a stream where each chunk contains a small piece of the response as it's generated. This provides a real-time experience where users can see the AI's output appear word by word, similar to ChatGPT's typing effect.

```bash
curl -X POST \
  "$LANGFLOW_SERVER_URL/api/v1/responses" \
  -H "x-api-key: $LANGFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{
    \"model\": \"$FLOW_ID\",
    \"input\": \"Tell me a story about a robot\",
    \"stream\": true
  }"
```
**Result:**

```json
{
  "id": "f7fcea36-f128-41c4-9ac1-e683137375d5",
  "object": "response.chunk",
  "created": 1756838094,
  "model": "ced2ec91-f325-4bf0-8754-f3198c2b1563",
  "delta": {
    "content": "Once"
  },
  "status": null
}
```
## Streaming response body

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique response identifier. |
| `object` | string | Always `"response.chunk"`. |
| `created` | int | Unix timestamp of chunk creation. |
| `model` | string | The flow ID that was executed. |
| `delta` | dict | The new content chunk. |
| `status` | string | Response status: `"completed"`, `"in_progress"`, or `"failed"` (optional). |
The stream continues until a final chunk with `"status": "completed"` indicates the response is finished.
**Final completion chunk:**

```json
{
  "id": "f7fcea36-f128-41c4-9ac1-e683137375d5",
  "object": "response.chunk",
  "created": 1756838094,
  "model": "ced2ec91-f325-4bf0-8754-f3198c2b1563",
  "delta": {},
  "status": "completed"
}
```
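The following Python sketch consumes the stream with the third-party `requests` library. It assumes each chunk arrives as a JSON object on its own line; if your deployment frames chunks as server-sent events instead, the `data: ` prefix is stripped before parsing, so the sketch handles both cases:

```python
import json
import requests

LANGFLOW_SERVER_URL = "http://localhost:7860"  # Replace with your server URL
LANGFLOW_API_KEY = "YOUR_LANGFLOW_API_KEY"     # Replace with your API key
FLOW_ID = "YOUR_FLOW_ID"                       # Replace with your flow ID

with requests.post(
    f"{LANGFLOW_SERVER_URL}/api/v1/responses",
    headers={"x-api-key": LANGFLOW_API_KEY, "Content-Type": "application/json"},
    json={"model": FLOW_ID, "input": "Tell me a story about a robot", "stream": True},
    stream=True,  # Don't buffer the whole body; iterate chunks as they arrive
) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        if not line:
            continue
        # Handle both raw JSON lines and SSE-style "data: {...}" framing.
        if line.startswith("data: "):
            line = line[len("data: "):]
        chunk = json.loads(line)
        print(chunk.get("delta", {}).get("content", ""), end="", flush=True)
        if chunk.get("status") == "completed":
            break
```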
## Continue conversations with response and session IDs

Conversation continuity allows you to maintain context across multiple API calls, enabling multi-turn conversations with your flows. This is essential for building chat applications where users can have ongoing conversations.

When you make a request, the API returns a response with an `id` field. You can use this `id` as the `previous_response_id` in your next request to continue the conversation from where it left off.
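As a minimal sketch mirroring the curl examples below, the same pattern works with the OpenAI Python client (placeholders as in the earlier client example):

```python
from openai import OpenAI

client = OpenAI(
    base_url="LANGFLOW_SERVER_URL/api/v1/",
    default_headers={"x-api-key": "LANGFLOW_API_KEY"},
    api_key="dummy-api-key",  # Required by the OpenAI SDK but not used by Langflow
)

# First turn: introduce some context.
first = client.responses.create(model="FLOW_ID", input="Hello, my name is Alice")

# Second turn: pass the first response's id to continue the same conversation.
follow_up = client.responses.create(
    model="FLOW_ID",
    input="What's my name?",
    previous_response_id=first.id,
)
print(follow_up.output_text)
```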
**First message:**

```bash
curl -X POST \
  "$LANGFLOW_SERVER_URL/api/v1/responses" \
  -H "x-api-key: $LANGFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{
    \"model\": \"$FLOW_ID\",
    \"input\": \"Hello, my name is Alice\"
  }"
```
**Result:**

```json
{
  "id": "c45f4ac8-772b-4675-8551-c560b1afd590",
  "object": "response",
  "created_at": 1756839042,
  "status": "completed",
  "model": "ced2ec91-f325-4bf0-8754-f3198c2b1563",
  "output": [
    {
      "type": "message",
      "id": "msg_c45f4ac8-772b-4675-8551-c560b1afd590",
      "status": "completed",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Hello, Alice! How can I assist you today?",
          "annotations": []
        }
      ]
    }
  ],
  "previous_response_id": null
}
```
**Follow-up message:**

```bash
curl -X POST \
  "$LANGFLOW_SERVER_URL/api/v1/responses" \
  -H "x-api-key: $LANGFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{
    \"model\": \"ced2ec91-f325-4bf0-8754-f3198c2b1563\",
    \"input\": \"What's my name?\",
    \"previous_response_id\": \"c45f4ac8-772b-4675-8551-c560b1afd590\"
  }"
```
**Result:**

```json
{
  "id": "c45f4ac8-772b-4675-8551-c560b1afd590",
  "object": "response",
  "created_at": 1756839043,
  "status": "completed",
  "model": "ced2ec91-f325-4bf0-8754-f3198c2b1563",
  "output": [
    {
      "type": "message",
      "id": "msg_c45f4ac8-772b-4675-8551-c560b1afd590",
      "status": "completed",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Your name is Alice. How can I help you today?",
          "annotations": []
        }
      ]
    }
  ],
  "previous_response_id": "c45f4ac8-772b-4675-8551-c560b1afd590"
}
```
Optionally, you can use your own session ID values for the `previous_response_id`:

```bash
curl -X POST \
  "$LANGFLOW_SERVER_URL/api/v1/responses" \
  -H "x-api-key: $LANGFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{
    \"model\": \"ced2ec91-f325-4bf0-8754-f3198c2b1563\",
    \"input\": \"What's my name?\",
    \"previous_response_id\": \"session-alice-1756839048\"
  }"
```
**Result:**

This example uses the same flow as the other `previous_response_id` examples, but the LLM had not yet been introduced to Alice in the specified session:

```json
{
  "id": "session-alice-1756839048",
  "object": "response",
  "created_at": 1756839048,
  "status": "completed",
  "model": "ced2ec91-f325-4bf0-8754-f3198c2b1563",
  "output": [
    {
      "type": "message",
      "id": "msg_session-alice-1756839048",
      "status": "completed",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "I don't have access to your name unless you tell me. If you'd like, you can share your name, and I'll remember it for this conversation!",
          "annotations": []
        }
      ]
    }
  ],
  "previous_response_id": "session-alice-1756839048"
}
```
## Retrieve tool call results

When you send a request to the `/api/v1/responses` endpoint to run a flow that includes tools or function calls, you can retrieve the raw tool execution details by adding `"include": ["tool_call.results"]` to the request payload.

Without the `include` parameter, tool calls return basic function call information, but not the raw tool results. For example:

```json
{
  "id": "fc_1",
  "type": "function_call",
  "status": "completed",
  "name": "evaluate_expression",
  "arguments": "{\"expression\": \"15*23\"}"
}
```
To get the raw results of each tool execution, add `"include": ["tool_call.results"]` to the request payload:

```bash
curl -X POST \
  "$LANGFLOW_SERVER_URL/api/v1/responses" \
  -H "Content-Type: application/json" \
  -H "x-api-key: $LANGFLOW_API_KEY" \
  -d "{
    \"model\": \"$FLOW_ID\",
    \"input\": \"Calculate 23 * 15 and show me the result\",
    \"stream\": false,
    \"include\": [\"tool_call.results\"]
  }"
```

The response now includes the tool call's results. For example:

```json
{
  "id": "evaluate_expression_1",
  "type": "tool_call",
  "tool_name": "evaluate_expression",
  "queries": ["15*23"],
  "results": {"result": "345"}
}
```
**Result:**

```json
{
  "id": "a6e5511e-71f8-457a-88d2-7d8c6ea34e36",
  "object": "response",
  "created_at": 1756835379,
  "status": "completed",
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": null,
  "model": "ced2ec91-f325-4bf0-8754-f3198c2b1563",
  "output": [
    {
      "id": "evaluate_expression_1",
      "queries": [
        "15*23"
      ],
      "status": "completed",
      "tool_name": "evaluate_expression",
      "type": "tool_call",
      "results": {
        "result": "345"
      }
    },
    {
      "type": "message",
      "id": "msg_a6e5511e-71f8-457a-88d2-7d8c6ea34e36",
      "status": "completed",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "The result of 23 * 15 is 345.",
          "annotations": []
        }
      ]
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": null,
    "summary": null
  },
  "store": true,
  "temperature": 1.0,
  "text": {
    "format": {
      "type": "text"
    }
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1.0,
  "truncation": "disabled",
  "usage": null,
  "user": null,
  "metadata": {}
}
```
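As a minimal sketch, assuming the response shape shown above, you can collect the raw tool results from the `output` array like this:

```python
# Minimal sketch: collect raw tool results from a /responses payload
# when the request included "include": ["tool_call.results"].
def extract_tool_results(response: dict) -> list[dict]:
    return [
        {
            "tool": item.get("tool_name"),
            "queries": item.get("queries", []),
            "results": item.get("results"),
        }
        for item in response.get("output", [])
        if item.get("type") == "tool_call"
    ]
```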
## Pass global variables to your flows in headers

Global variables allow you to pass dynamic values to your flows that can be used by components within that flow run. This is useful for passing API keys, user IDs, or any other configuration that might change between requests.

The `/responses` endpoint accepts global variables as custom HTTP headers with the format `X-LANGFLOW-GLOBAL-VAR-{VARIABLE_NAME}`. Variables are only available during the specific request's execution and aren't persisted. Variable names are automatically converted to uppercase.

This example demonstrates passing an `OPENAI_API_KEY` variable, which Langflow automatically detects from environment variables, along with two custom variables, `USER_ID` and `ENVIRONMENT`. The variables don't have to be created in Langflow's Global Variables section; you can pass any variable name in the `X-LANGFLOW-GLOBAL-VAR-{VARIABLE_NAME}` header format.
```bash
curl -X POST \
  "$LANGFLOW_SERVER_URL/api/v1/responses" \
  -H "x-api-key: $LANGFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-LANGFLOW-GLOBAL-VAR-OPENAI_API_KEY: sk-..." \
  -H "X-LANGFLOW-GLOBAL-VAR-USER_ID: user123" \
  -H "X-LANGFLOW-GLOBAL-VAR-ENVIRONMENT: production" \
  -d '{
    "model": "your-flow-id",
    "input": "Hello"
  }'
```
**Result:**

```json
{
  "id": "4a4d2f24-bb45-4a55-a499-0191305264be",
  "object": "response",
  "created_at": 1756839935,
  "status": "completed",
  "model": "ced2ec91-f325-4bf0-8754-f3198c2b1563",
  "output": [
    {
      "type": "message",
      "id": "msg_4a4d2f24-bb45-4a55-a499-0191305264be",
      "status": "completed",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Hello! How can I assist you today?",
          "annotations": []
        }
      ]
    }
  ],
  "previous_response_id": null
}
```
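The same request in Python, as a rough sketch with the third-party `requests` library. The header names follow the `X-LANGFLOW-GLOBAL-VAR-{VARIABLE_NAME}` format described above, and the variable values are placeholders:

```python
import requests

LANGFLOW_SERVER_URL = "http://localhost:7860"  # Replace with your server URL
LANGFLOW_API_KEY = "YOUR_LANGFLOW_API_KEY"     # Replace with your API key

# Placeholder values; Langflow uppercases the variable names automatically.
global_vars = {
    "OPENAI_API_KEY": "sk-...",
    "USER_ID": "user123",
    "ENVIRONMENT": "production",
}

headers = {"x-api-key": LANGFLOW_API_KEY, "Content-Type": "application/json"}
headers.update(
    {f"X-LANGFLOW-GLOBAL-VAR-{name}": value for name, value in global_vars.items()}
)

response = requests.post(
    f"{LANGFLOW_SERVER_URL}/api/v1/responses",
    headers=headers,
    json={"model": "your-flow-id", "input": "Hello"},
)
response.raise_for_status()
# Assumes the response shape shown in the example above.
print(response.json()["output"][0]["content"][0]["text"])
```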
Variables passed with `X-LANGFLOW-GLOBAL-VAR-{VARIABLE_NAME}` are always available to your flow, regardless of whether they exist in the database.

If your flow components reference variables that aren't provided in headers or your Langflow database, the flow fails by default. To avoid this, you can set the `FALLBACK_TO_ENV_VARS` environment variable to `true`, which allows the flow to use values from the `.env` file if they aren't otherwise specified. In the above example, `OPENAI_API_KEY` falls back to the database variable if not provided in the header, and `USER_ID` and `ENVIRONMENT` fall back to environment variables if `FALLBACK_TO_ENV_VARS` is enabled. Otherwise, the flow fails.