
Langflow TypeScript client

The Langflow TypeScript client allows your TypeScript applications to programmatically interact with the Langflow API.

For the client code repository, see langflow-client-ts.

For the npm package, see @datastax/langflow-client.

Install the Langflow TypeScript package

To install the Langflow TypeScript client package, use the following command:


npm install @datastax/langflow-client

Initialize the Langflow TypeScript client

  1. Import the client into your code.


    import { LangflowClient } from "@datastax/langflow-client";

  2. Initialize a LangflowClient object to interact with your server:


    const baseUrl = "BASE_URL";
    const apiKey = "API_KEY";
    const client = new LangflowClient({ baseUrl, apiKey });

    Replace BASE_URL and API_KEY with values from your deployment. The default Langflow base URL is http://localhost:7860. To create an API key, see API keys and authentication.
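If you prefer not to hard-code these values, you can read them from the environment instead. The following is a minimal sketch; the LANGFLOW_BASE_URL and LANGFLOW_API_KEY variable names are hypothetical examples for this snippet, not settings the client requires:


import { LangflowClient } from "@datastax/langflow-client";

// Hypothetical environment variable names chosen for this example.
// Falls back to the default Langflow base URL if none is set.
const client = new LangflowClient({
  baseUrl: process.env.LANGFLOW_BASE_URL ?? "http://localhost:7860",
  apiKey: process.env.LANGFLOW_API_KEY,
});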

Langflow TypeScript client quickstart

  1. With your Langflow client initialized, test the connection by calling your Langflow server.

    The following example runs a flow (runFlow) by sending the flow ID and a chat input string:


    import { LangflowClient } from "@datastax/langflow-client";

    const baseUrl = "http://localhost:7860";
    const client = new LangflowClient({ baseUrl });

    async function runFlow() {
      const flowId = "aa5a238b-02c0-4f03-bc5c-cc3a83335cdf";
      const flow = client.flow(flowId);
      const input = "Is anyone there?";

      const response = await flow.run(input);
      console.log(response);
    }

    runFlow().catch(console.error);

    Replace the following:

    • baseUrl: The URL of your Langflow server
    • flowId: The ID of the flow you want to run
    • input: The chat input message you want to send to trigger the flow
  2. Review the result to confirm that the client connected to your Langflow server.

    The following example shows the response from a well-formed runFlow request that reached the Langflow server and successfully started the flow:


    FlowResponse {
      sessionId: 'aa5a238b-02c0-4f03-bc5c-cc3a83335cdf',
      outputs: [ { inputs: [Object], outputs: [Array] } ]
    }

    In this case, the response includes a sessionId, which is a unique identifier for the client-server session, and an outputs array that contains information about the flow run.

  3. If you want to get full response objects from the server, change console.log to stringify the returned JSON object:


    console.log(JSON.stringify(response, null, 2));

    The exact structure of the returned inputs and outputs objects depends on the components and configuration of your flow. For a sketch that navigates this structure directly, see the example after these steps.

  4. If you want the response to include only the chat message from the Chat Output component, change console.log to use the chatOutputText convenience function:


    console.log(response.chatOutputText());
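If you'd rather navigate the raw response yourself, you can index into the outputs array directly. The following is a minimal sketch to place inside the runFlow function from step 1; it assumes the response shape shown in step 2, and the exact nesting depends on your flow's components:


// Inspect the first output bundle from the flow run.
// The inputs and outputs keys match the example response above,
// but the nesting varies with your flow's components.
const firstOutput = response.outputs[0];
console.log(Object.keys(firstOutput));
console.log(JSON.stringify(firstOutput.outputs, null, 2));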

Use advanced TypeScript client features

The TypeScript client can do more than just connect to your server and run a flow.

This example builds on the quickstart with additional features for interacting with Langflow.

  1. Pass tweaks as an object with the request.

    Tweaks change values within components for all calls to your flow.

    This example tweaks the OpenAI component to enforce using the gpt-4o-mini model:


    const tweaks = { model_name: "gpt-4o-mini" };

  2. Pass a session ID with the request to separate this conversation from other flow runs, and to continue the conversation by passing the same session ID in future requests:


    const session_id = "aa5a238b-02c0-4f03-bc5c-cc3a83335cdf";

  3. Instead of calling run on the Flow object, call stream with the same arguments:


    const response = await client.flow(flowId).stream(input);

    for await (const event of response) {
      console.log(event);
    }

    The response is a ReadableStream of objects. For more information on streaming Langflow responses, see the /run endpoint. For a sketch of handling individual stream events, see the example after the streamed output below.

  4. Run the modified TypeScript application to execute the flow with tweaks and session_id and stream the response back.



import { LangflowClient } from "@datastax/langflow-client";

const baseUrl = "http://localhost:7860";
const client = new LangflowClient({ baseUrl });

async function runFlow() {
  const flowId = "aa5a238b-02c0-4f03-bc5c-cc3a83335cdf";
  const input = "Is anyone there?";
  const tweaks = { model_name: "gpt-4o-mini" };
  const session_id = "test-session";

  const response = await client.flow(flowId).stream(input, {
    session_id,
    tweaks,
  });

  for await (const event of response) {
    console.log(event);
  }
}

runFlow().catch(console.error);

Replace baseUrl and flowId with your server URL and flow ID, as you did in the previous run.

Result

With streaming enabled, the response includes the flow metadata and timestamped events for flow activity. For example:


{
  event: 'add_message',
  data: {
    timestamp: '2025-05-23 15:52:48 UTC',
    sender: 'User',
    sender_name: 'User',
    session_id: 'test-session',
    text: 'Is anyone there?',
    files: [],
    error: false,
    edit: false,
    properties: {
      text_color: '',
      background_color: '',
      edited: false,
      source: [Object],
      icon: '',
      allow_markdown: false,
      positive_feedback: null,
      state: 'complete',
      targets: []
    },
    category: 'message',
    content_blocks: [],
    id: '7f096715-3f2d-4d84-88d6-5e2f76bf3fbe',
    flow_id: 'aa5a238b-02c0-4f03-bc5c-cc3a83335cdf',
    duration: null
  }
}
{
  event: 'token',
  data: {
    chunk: 'Absolutely',
    id: 'c5a99314-6b23-488b-84e2-038aa3e87fb5',
    timestamp: '2025-05-23 15:52:48 UTC'
  }
}
{
  event: 'token',
  data: {
    chunk: ',',
    id: 'c5a99314-6b23-488b-84e2-038aa3e87fb5',
    timestamp: '2025-05-23 15:52:48 UTC'
  }
}
{
  event: 'token',
  data: {
    chunk: " I'm",
    id: 'c5a99314-6b23-488b-84e2-038aa3e87fb5',
    timestamp: '2025-05-23 15:52:48 UTC'
  }
}
{
  event: 'token',
  data: {
    chunk: ' here',
    id: 'c5a99314-6b23-488b-84e2-038aa3e87fb5',
    timestamp: '2025-05-23 15:52:48 UTC'
  }
}

// this response is abbreviated

{
  event: 'end',
  data: { result: { session_id: 'test-session', outputs: [Array] } }
}
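Each token event carries a fragment of the generated text in data.chunk. To reassemble the full message on the client, you can branch on the event name inside the streaming loop. The following is a minimal sketch that replaces the console.log loop in the example above; it assumes the event shapes shown in this output, and depending on the client's type definitions you may need to adjust how the fields are accessed:


let message = "";
for await (const event of response) {
  if (event.event === "token") {
    // Accumulate streamed text fragments.
    message += event.data.chunk;
  } else if (event.event === "end") {
    // The end event signals that the flow run is complete.
    console.log("Final message:", message);
  }
}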

Retrieve Langflow logs with the TypeScript client

To retrieve Langflow logs, you must enable log retrieval on your Langflow server by including the following values in your server's .env file:


LANGFLOW_ENABLE_LOG_RETRIEVAL=True
LANGFLOW_LOG_RETRIEVER_BUFFER_SIZE=10000
LANGFLOW_LOG_LEVEL=DEBUG

The following example script starts streaming logs in the background, and then runs a flow so you can monitor the flow run:


import { LangflowClient } from "@datastax/langflow-client";

const baseUrl = "http://localhost:7863";
const flowId = "86f0bf45-0544-4e88-b0b1-8e622da7a7f0";

async function runFlow(client: LangflowClient) {
  const input = "Is anyone there?";
  const response = await client.flow(flowId).run(input);
  console.log('Flow response:', response);
}

async function streamLogs(client: LangflowClient) {
  for await (const log of await client.logs.stream()) {
    console.log('Log:', log);
  }
}

async function main() {
  const client = new LangflowClient({ baseUrl });

  // Start streaming logs in the background.
  // The promise is deliberately not awaited, so the flow can run
  // while log entries continue to arrive.
  console.log('Starting log stream...');
  streamLogs(client).catch(console.error);

  // Run the flow
  await runFlow(client);
}

main().catch(console.error);

Replace baseUrl and flowId with your server URL and flow ID, as you did in the previous run.

Logs begin streaming indefinitely, and the flow runs once.

The following example result is truncated for readability, but you can follow the messages to see how the flow instantiates its components, configures its model, and processes the outputs.

When the flow run completes, the FlowResponse object is returned to the client with the flow result in the outputs array.

Result

Starting log stream...
Log: Log {
  timestamp: 2025-05-30T11:49:16.006Z,
  message: '2025-05-30T07:49:16.006127-0400 DEBUG Instantiating ChatInput of type component\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.029Z,
  message: '2025-05-30T07:49:16.029957-0400 DEBUG Instantiating Prompt of type component\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.049Z,
  message: '2025-05-30T07:49:16.049520-0400 DEBUG Instantiating ChatOutput of type component\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.069Z,
  message: '2025-05-30T07:49:16.069359-0400 DEBUG Instantiating OpenAIModel of type component\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.086Z,
  message: "2025-05-30T07:49:16.086426-0400 DEBUG Running layer 0 with 2 tasks, ['ChatInput-xjucM', 'Prompt-I3pxU']\n"
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.101Z,
  message: '2025-05-30T07:49:16.101766-0400 DEBUG Building Chat Input\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.113Z,
  message: '2025-05-30T07:49:16.113343-0400 DEBUG Building Prompt\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.131Z,
  message: '2025-05-30T07:49:16.131423-0400 DEBUG Logged vertex build: 6bd9fe9c-5eea-4f05-a96d-f6de9dc77e3c\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.143Z,
  message: '2025-05-30T07:49:16.143295-0400 DEBUG Logged vertex build: 39c68ec9-3859-4fff-9b14-80b3271f8fbf\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.188Z,
  message: "2025-05-30T07:49:16.188730-0400 DEBUG Running layer 1 with 1 tasks, ['OpenAIModel-RtlZm']\n"
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.201Z,
  message: '2025-05-30T07:49:16.201946-0400 DEBUG Building OpenAI\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.216Z,
  message: '2025-05-30T07:49:16.216622-0400 INFO Model name: gpt-4.1-mini\n'
}
Flow response: FlowResponse {
  sessionId: '86f0bf45-0544-4e88-b0b1-8e622da7a7f0',
  outputs: [ { inputs: [Object], outputs: [Array] } ]
}
Log: Log {
  timestamp: 2025-05-30T11:49:18.094Z,
  message: `2025-05-30T07:49:18.094364-0400 DEBUG Vertex OpenAIModel-RtlZm, result: <langflow.graph.utils.UnbuiltResult object at 0x364d24dd0>, object: {'text_output': "Hey there! I'm here and ready to help you build something awesome with AI. What are you thinking about creating today?"}\n`
}
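Because the log stream runs until you stop it, you may want to end it after a fixed number of entries. The following is a minimal sketch that replaces the loop in the streamLogs function above; it assumes that breaking out of the for await loop is enough to release the underlying stream:


// Stop after 50 log entries. Breaking out of a for await loop invokes the
// iterator's return() method, which should close the stream (an assumption
// worth verifying against the client's behavior).
let count = 0;
for await (const log of await client.logs.stream()) {
  console.log('Log:', log);
  if (++count >= 50) break;
}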

For more information, see Logs endpoints.
