# Langflow TypeScript client
The Langflow TypeScript client allows your TypeScript applications to programmatically interact with the Langflow API.
For the client code repository, see langflow-client-ts.
For the npm package, see @datastax/langflow-client.
## Install the Langflow TypeScript package

To install the Langflow TypeScript client package, use the command for your package manager:

```shell
# npm
npm install @datastax/langflow-client
```

```shell
# yarn
yarn add @datastax/langflow-client
```

```shell
# pnpm
pnpm add @datastax/langflow-client
```
## Initialize the Langflow TypeScript client

1. Import the client into your code:

   ```typescript
   import { LangflowClient } from "@datastax/langflow-client";
   ```

2. Initialize a `LangflowClient` object to interact with your server:

   ```typescript
   const baseUrl = "BASE_URL";
   const apiKey = "API_KEY";
   const client = new LangflowClient({ baseUrl, apiKey });
   ```

   Replace `BASE_URL` and `API_KEY` with values from your deployment. The default Langflow base URL is `http://localhost:7860`. To create an API key, see API keys and authentication.
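Rather than hard-coding the base URL and API key, you may prefer to read them from environment variables. The following is a minimal sketch; the variable names `LANGFLOW_BASE_URL` and `LANGFLOW_API_KEY` are arbitrary examples, not names the client reads automatically:

```typescript
// Resolve Langflow client options from a map of environment variables,
// falling back to the default local base URL. The variable names used
// here are illustrative, not names read by the client itself.
function resolveConfig(env: Record<string, string | undefined>) {
  return {
    baseUrl: env.LANGFLOW_BASE_URL ?? "http://localhost:7860",
    // apiKey may stay undefined for unsecured local servers.
    apiKey: env.LANGFLOW_API_KEY,
  };
}

// In a real application you would pass process.env here.
const { baseUrl, apiKey } = resolveConfig({ LANGFLOW_API_KEY: "API_KEY" });
console.log(baseUrl, apiKey); // http://localhost:7860 API_KEY
```

You can then construct the client with the resolved values, as in the previous step: `new LangflowClient({ baseUrl, apiKey })`.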
## Langflow TypeScript client quickstart

1. With your Langflow client initialized, test the connection by calling your Langflow server.

   The following example runs a flow (`runFlow`) by sending the flow ID and a chat input string:

   ```typescript
   import { LangflowClient } from "@datastax/langflow-client";

   const baseUrl = "http://localhost:7860";
   const client = new LangflowClient({ baseUrl });

   async function runFlow() {
     const flowId = "aa5a238b-02c0-4f03-bc5c-cc3a83335cdf";
     const flow = client.flow(flowId);
     const input = "Is anyone there?";

     const response = await flow.run(input);
     console.log(response);
   }

   runFlow().catch(console.error);
   ```

   Replace the following:

   - `baseUrl`: The URL of your Langflow server.
   - `flowId`: The ID of the flow you want to run.
   - `input`: The chat input message you want to send to trigger the flow.

2. Review the result to confirm that the client connected to your Langflow server.

   The following example shows the response from a well-formed `runFlow` request that reached the Langflow server and successfully started the flow:

   ```text
   FlowResponse {
     sessionId: 'aa5a238b-02c0-4f03-bc5c-cc3a83335cdf',
     outputs: [ { inputs: [Object], outputs: [Array] } ]
   }
   ```

   In this case, the response includes a `sessionId`, which is a unique identifier for the client-server session, and an `outputs` array that contains information about the flow run.

3. If you want to get full response objects from the server, change `console.log` to stringify the returned JSON object:

   ```typescript
   console.log(JSON.stringify(response, null, 2));
   ```

   The exact structure of the returned `inputs` and `outputs` objects depends on the components and configuration of your flow.

4. If you want the response to include only the chat message from the Chat Output component, change `console.log` to use the `chatOutputText` convenience method:

   ```typescript
   console.log(response.chatOutputText());
   ```
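Because the nesting of `outputs` varies by flow, code that digs into the raw response by fixed indexes tends to be brittle. If `chatOutputText` doesn't fit your case, a small recursive helper can collect message text without assuming a fixed shape. This is an illustrative sketch, not part of the client API; the `text` key it searches for is an assumption based on common Langflow message shapes:

```typescript
// Recursively collect string values stored under "text" keys anywhere in a
// response object. Illustrative only: the actual nesting and key names
// depend on the components in your flow.
function collectText(value: unknown, found: string[] = []): string[] {
  if (Array.isArray(value)) {
    for (const item of value) collectText(item, found);
  } else if (value !== null && typeof value === "object") {
    for (const [key, child] of Object.entries(value as Record<string, unknown>)) {
      if (key === "text" && typeof child === "string") {
        found.push(child);
      } else {
        collectText(child, found);
      }
    }
  }
  return found;
}

// Example with a response-shaped object (shape is hypothetical):
const sample = {
  outputs: [{ outputs: [{ results: { message: { text: "Hello!" } } }] }],
};
console.log(collectText(sample)); // logs the collected strings
```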
## Use advanced TypeScript client features

The TypeScript client can do more than connect to your server and run a flow.
The following example builds on the quickstart with additional features for interacting with Langflow.

1. Pass tweaks to your code as an object with the request.

   Tweaks change values within components for all calls to your flow.
   This example tweaks the OpenAI component to enforce using the `gpt-4o-mini` model:

   ```typescript
   const tweaks = { model_name: "gpt-4o-mini" };
   ```

2. Pass a session ID with the request to separate the conversation from other flow runs, and to be able to continue this conversation by calling the same session ID in the future:

   ```typescript
   const session_id = "aa5a238b-02c0-4f03-bc5c-cc3a83335cdf";
   ```

3. Instead of calling `run` on the `Flow` object, call `stream` with the same arguments:

   ```typescript
   const response = await client.flow(flowId).stream(input);

   for await (const event of response) {
     console.log(event);
   }
   ```

   The response is a `ReadableStream` of objects. For more information on streaming Langflow responses, see the `/run` endpoint.

4. Run the modified TypeScript application to run the flow with `tweaks` and `session_id`, and then stream the response back:

   ```typescript
   import { LangflowClient } from "@datastax/langflow-client";

   const baseUrl = "http://localhost:7860";
   const client = new LangflowClient({ baseUrl });

   async function runFlow() {
     const flowId = "aa5a238b-02c0-4f03-bc5c-cc3a83335cdf";
     const input = "Is anyone there?";
     const tweaks = { model_name: "gpt-4o-mini" };
     const session_id = "test-session";

     const response = await client.flow(flowId).stream(input, {
       session_id,
       tweaks,
     });

     for await (const event of response) {
       console.log(event);
     }
   }

   runFlow().catch(console.error);
   ```

   Replace `baseUrl` and `flowId` with your server URL and flow ID, as you did in the previous run.
**Result**

With streaming enabled, the response includes the flow metadata and timestamped events for flow activity. For example:

```text
{
  event: 'add_message',
  data: {
    timestamp: '2025-05-23 15:52:48 UTC',
    sender: 'User',
    sender_name: 'User',
    session_id: 'test-session',
    text: 'Is anyone there?',
    files: [],
    error: false,
    edit: false,
    properties: {
      text_color: '',
      background_color: '',
      edited: false,
      source: [Object],
      icon: '',
      allow_markdown: false,
      positive_feedback: null,
      state: 'complete',
      targets: []
    },
    category: 'message',
    content_blocks: [],
    id: '7f096715-3f2d-4d84-88d6-5e2f76bf3fbe',
    flow_id: 'aa5a238b-02c0-4f03-bc5c-cc3a83335cdf',
    duration: null
  }
}
{
  event: 'token',
  data: {
    chunk: 'Absolutely',
    id: 'c5a99314-6b23-488b-84e2-038aa3e87fb5',
    timestamp: '2025-05-23 15:52:48 UTC'
  }
}
{
  event: 'token',
  data: {
    chunk: ',',
    id: 'c5a99314-6b23-488b-84e2-038aa3e87fb5',
    timestamp: '2025-05-23 15:52:48 UTC'
  }
}
{
  event: 'token',
  data: {
    chunk: " I'm",
    id: 'c5a99314-6b23-488b-84e2-038aa3e87fb5',
    timestamp: '2025-05-23 15:52:48 UTC'
  }
}
{
  event: 'token',
  data: {
    chunk: ' here',
    id: 'c5a99314-6b23-488b-84e2-038aa3e87fb5',
    timestamp: '2025-05-23 15:52:48 UTC'
  }
}

// this response is abbreviated

{
  event: 'end',
  data: { result: { session_id: 'test-session', outputs: [Array] } }
}
```
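If you want the final chat message rather than individual events, you can stitch the `token` events back together on the client. The following is a minimal sketch, not part of the client API; it assumes events shaped like those shown above, with `event` and `data.chunk` fields:

```typescript
// Event shape based on the streamed output shown above.
interface StreamEvent {
  event: string;
  data: { chunk?: string };
}

// Concatenate the chunks of all 'token' events into the full message.
function assembleMessage(events: StreamEvent[]): string {
  return events
    .filter((e) => e.event === "token" && typeof e.data.chunk === "string")
    .map((e) => e.data.chunk as string)
    .join("");
}

// Example using the chunks from the abbreviated stream above:
const events: StreamEvent[] = [
  { event: "add_message", data: {} },
  { event: "token", data: { chunk: "Absolutely" } },
  { event: "token", data: { chunk: "," } },
  { event: "token", data: { chunk: " I'm" } },
  { event: "token", data: { chunk: " here" } },
  { event: "end", data: {} },
];
console.log(assembleMessage(events)); // Absolutely, I'm here
```

In a real application you would push each event from the `for await` loop into an array, or append `token` chunks to a string as they arrive.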
## Retrieve Langflow logs with the TypeScript client

To retrieve Langflow logs, you must enable log retrieval on your Langflow server by including the following values in your server's `.env` file:

```text
LANGFLOW_ENABLE_LOG_RETRIEVAL=True
LANGFLOW_LOG_RETRIEVER_BUFFER_SIZE=10000
LANGFLOW_LOG_LEVEL=DEBUG
```

The following example script starts streaming logs in the background, and then runs a flow so you can monitor the flow run:

```typescript
import { LangflowClient } from "@datastax/langflow-client";

const baseUrl = "http://localhost:7863";
const flowId = "86f0bf45-0544-4e88-b0b1-8e622da7a7f0";

async function runFlow(client: LangflowClient) {
  const input = "Is anyone there?";
  const response = await client.flow(flowId).run(input);
  console.log('Flow response:', response);
}

async function main() {
  const client = new LangflowClient({ baseUrl });

  // Stream logs in the background. Awaiting this loop directly would block
  // forever because the stream never ends, so run it as a detached task.
  console.log('Starting log stream...');
  void (async () => {
    for await (const log of await client.logs.stream()) {
      console.log('Log:', log);
    }
  })();

  // Run the flow while logs continue to stream
  await runFlow(client);
}

main().catch(console.error);
```
Replace `baseUrl` and `flowId` with your server URL and flow ID, as you did in the previous run.

Logs begin streaming indefinitely, and the flow runs once.
The following example result is truncated for readability, but you can follow the messages to see how the flow instantiates its components, configures its model, and processes the outputs.
The `FlowResponse` object, at the end of the stream, is returned to the client with the flow result in the `outputs` array.
**Result**

```text
Starting log stream...
Log: Log {
  timestamp: 2025-05-30T11:49:16.006Z,
  message: '2025-05-30T07:49:16.006127-0400 DEBUG Instantiating ChatInput of type component\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.029Z,
  message: '2025-05-30T07:49:16.029957-0400 DEBUG Instantiating Prompt of type component\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.049Z,
  message: '2025-05-30T07:49:16.049520-0400 DEBUG Instantiating ChatOutput of type component\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.069Z,
  message: '2025-05-30T07:49:16.069359-0400 DEBUG Instantiating OpenAIModel of type component\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.086Z,
  message: "2025-05-30T07:49:16.086426-0400 DEBUG Running layer 0 with 2 tasks, ['ChatInput-xjucM', 'Prompt-I3pxU']\n"
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.101Z,
  message: '2025-05-30T07:49:16.101766-0400 DEBUG Building Chat Input\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.113Z,
  message: '2025-05-30T07:49:16.113343-0400 DEBUG Building Prompt\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.131Z,
  message: '2025-05-30T07:49:16.131423-0400 DEBUG Logged vertex build: 6bd9fe9c-5eea-4f05-a96d-f6de9dc77e3c\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.143Z,
  message: '2025-05-30T07:49:16.143295-0400 DEBUG Logged vertex build: 39c68ec9-3859-4fff-9b14-80b3271f8fbf\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.188Z,
  message: "2025-05-30T07:49:16.188730-0400 DEBUG Running layer 1 with 1 tasks, ['OpenAIModel-RtlZm']\n"
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.201Z,
  message: '2025-05-30T07:49:16.201946-0400 DEBUG Building OpenAI\n'
}
Log: Log {
  timestamp: 2025-05-30T11:49:16.216Z,
  message: '2025-05-30T07:49:16.216622-0400 INFO Model name: gpt-4.1-mini\n'
}
Flow response: FlowResponse {
  sessionId: '86f0bf45-0544-4e88-b0b1-8e622da7a7f0',
  outputs: [ { inputs: [Object], outputs: [Array] } ]
}
Log: Log {
  timestamp: 2025-05-30T11:49:18.094Z,
  message: `2025-05-30T07:49:18.094364-0400 DEBUG Vertex OpenAIModel-RtlZm, result: <langflow.graph.utils.UnbuiltResult object at 0x364d24dd0>, object: {'text_output': "Hey there! I'm here and ready to help you build something awesome with AI. What are you thinking about creating today?"}\n`
}
```
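Each streamed log's `message` string embeds its own timestamp and level, as shown above. If you want to filter logs by level on the client, a small parser can split those fields out. This helper is an illustrative sketch, not part of the client API; it assumes the `TIMESTAMP LEVEL text` layout seen in the example output:

```typescript
// Parse a Langflow log message of the form
// "2025-05-30T07:49:16.006127-0400 DEBUG Instantiating ChatInput ...\n"
// into timestamp, level, and text. Illustrative only: the layout is
// assumed from the example output above.
interface ParsedLog {
  timestamp: string;
  level: string;
  text: string;
}

function parseLogMessage(message: string): ParsedLog | null {
  const match = message.trim().match(/^(\S+)\s+(DEBUG|INFO|WARNING|ERROR)\s+(.*)$/s);
  if (!match) return null;
  return { timestamp: match[1], level: match[2], text: match[3] };
}

const parsed = parseLogMessage(
  "2025-05-30T07:49:16.006127-0400 DEBUG Instantiating ChatInput of type component\n"
);
console.log(parsed?.level); // DEBUG
```

Inside the streaming loop, you could then log only entries where `parseLogMessage(log.message)?.level !== "DEBUG"`, for example.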
For more information, see Logs endpoints.