Langflow database guide for enterprise DBAs
The Langflow database stores data that is essential for most Langflow operations, including startup, flow execution, user interactions, and administrative tasks. The database supports both frontend (visual editor) and backend (API) operations, making its availability critical to Langflow's stability and functionality. For details about the database schema, see Memory management options.
This guide is designed for enterprise database administrators (DBAs) and operators responsible for deploying and managing Langflow in production environments. It explains how to configure Langflow to use PostgreSQL, including high availability (HA) and active-active configurations, as well as best practices for monitoring, maintenance, and security.
Configure Langflow with PostgreSQL
Langflow's default database is SQLite. However, PostgreSQL is recommended for production deployments due to its scalability, performance, and robustness.
The following steps explain how to configure Langflow to use PostgreSQL for a standalone or containerized deployment. For more information, see Configure an external PostgreSQL database.
1. Set up PostgreSQL:

   - Deploy a PostgreSQL instance (version 12 or later recommended) using a local server, Docker, or a managed cloud service.
   - Create a database for Langflow.
   - Create a PostgreSQL user with minimal permissions to manage and write to the database, such as CREATE, SELECT, INSERT, UPDATE, and DELETE on the Langflow tables.

2. Obtain the connection string in the format `postgresql://user:password@host:port/dbname`, such as `postgresql://langflow:securepassword@postgres:5432/langflow`.

   For High Availability, use the virtual IP or proxy hostname instead of the direct database host. For more information, see High Availability for PostgreSQL.
3. Configure Langflow in the `.env` file or the `docker-compose.yml` file:

   - `.env`: Create a `.env` file in the `langflow` directory:

     ```bash
     touch .env
     ```

     Then add the connection string to the `.env` file:

     ```bash
     LANGFLOW_DATABASE_URL="postgresql://langflow:securepassword@postgres:5432/langflow"
     ```

     For more environment variables, see the `.env.example` file in the Langflow repository.

   - `docker-compose.yml`: Use the sample `docker-compose.yml` from the Langflow repository. You can use the default values or customize them as needed:

     ```yaml
     version: '3'
     services:
       langflow:
         image: langflowai/langflow:latest
         ports:
           - "7860:7860"
         environment:
           - LANGFLOW_DATABASE_URL=postgresql://langflow:langflow@postgres:5432/langflow
       postgres:
         image: postgres:16
         ports:
           - "5432:5432"
         environment:
           - POSTGRES_USER=langflow
           - POSTGRES_PASSWORD=langflow
           - POSTGRES_DB=langflow
         volumes:
           - langflow-postgres:/var/lib/postgresql/data
     volumes:
       langflow-postgres:
     ```
4. Start Langflow with your PostgreSQL connection:

   - `.env`: Run:

     ```bash
     uv run langflow run --env-file .env
     ```

   - `docker-compose.yml`: Navigate to the directory containing `docker-compose.yml`, and then run `docker-compose up`.
5. Optional: Run migrations.

   Langflow uses migrations to manage its database schema. When you first connect to PostgreSQL, Langflow automatically runs migrations to create the necessary tables.

   Direct schema modification can cause conflicts with Langflow's built-in schema management. If you need to update the schema, run migrations manually with the Langflow CLI:

   1. Run `langflow migration` to preview the changes.
   2. Review the changes to ensure that it is safe to proceed with the migration.
   3. Run `langflow migration --fix` to run the migration and permanently apply the changes.

      This is a destructive operation that can delete data. For more information, see `langflow migration`.
6. To verify the configuration, create any flow in the Langflow visual editor or with the API, and then query your database to confirm that the tables and activity are recorded there. The content of the flow doesn't matter; you only need to confirm that the flow is stored in your PostgreSQL database. You can inspect the database in two ways:

   - Open a psql session in the database container:

     ```bash
     docker exec -it <postgres-container> psql -U langflow -d langflow
     ```

   - Run SQL against the Langflow database:

     ```sql
     SELECT * FROM pg_stat_activity WHERE datname = 'langflow';
     ```
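Before starting Langflow, you can also sanity-check the connection string itself. The following is a minimal, optional sketch using only the Python standard library; the `check_database_url` helper is illustrative and not part of Langflow, and the host, port, and database names are the example values from this guide.

```python
from urllib.parse import urlsplit

def check_database_url(url: str) -> dict:
    """Parse a LANGFLOW_DATABASE_URL and verify its required parts."""
    parts = urlsplit(url)
    if parts.scheme != "postgresql":
        raise ValueError(f"unexpected scheme: {parts.scheme!r}")
    if not parts.hostname or not parts.path.lstrip("/"):
        raise ValueError("connection string must include a host and database name")
    return {
        "user": parts.username,
        "host": parts.hostname,
        "port": parts.port or 5432,  # PostgreSQL default port
        "dbname": parts.path.lstrip("/"),
    }

info = check_database_url("postgresql://langflow:securepassword@postgres:5432/langflow")
print(info["host"], info["port"], info["dbname"])  # postgres 5432 langflow
```

A check like this catches malformed URLs (a missing database name, a typo in the scheme) before Langflow fails at startup with a less obvious error.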
High Availability for PostgreSQL
To further improve performance, reliability, and scalability, use a High Availability (HA) or Active-Active HA PostgreSQL configuration. This is recommended for production deployments to minimize downtime and ensure continuous operations if your database server fails, especially when multiple Langflow instances rely on the same database.
High Availability (HA)

1. Set up streaming replication:

   - Configure one primary database for writes.
   - Configure one or more replicas for reads and failover.

   Select either synchronous or asynchronous replication based on your latency and consistency requirements.

2. Implement automatic failover using one of the following options:

   - Use an HA orchestrator, distributed configuration store, and traffic router, such as Patroni, etcd or Consul, and HAProxy.
   - Use Pgpool-II alone, or add supporting services for more robust HA support.
   - Use a managed service that provides built-in HA with automatic failover, such as AWS RDS or Google Cloud SQL.

3. Update your PostgreSQL connection string to point to the HA setup. If you have a multi-instance deployment, make sure all of your Langflow instances connect to the same HA PostgreSQL database.

   The connection string you use depends on your HA configuration and services:

   - Use a virtual IP or DNS name that resolves to the current primary database, such as `postgresql://langflow:securepassword@db-proxy:5432/langflow?sslmode=require`.
   - Use the endpoint provided by a managed service, such as `langflow.cluster-xyz.us-east-1.rds.amazonaws.com`.

4. Optional: Implement load balancing for read-heavy workloads:

   - Use a connection pooler like PgBouncer to distribute read queries across replicas.
   - Configure Langflow to use a single connection string pointing to the primary PostgreSQL database or proxy.
Active-Active HA

To implement Active-Active HA, deploy multiple Langflow instances, use load balancing to distribute traffic across the instances, and ensure that all instances connect to the same HA PostgreSQL database:

1. Deploy multiple Langflow instances using Kubernetes or Docker Swarm.

   You must configure your instances to use a shared PostgreSQL database. For more information, see Best practices for Langflow on Kubernetes.

2. Set up streaming replication:

   - Configure one primary database for writes.
   - Configure one or more replicas for reads and failover.

   Select either synchronous or asynchronous replication based on your latency and consistency requirements.

3. Implement automatic failover using one of the following options:

   - Use an HA orchestrator, distributed configuration store, and traffic router, such as Patroni, etcd or Consul, and HAProxy.
   - Use Pgpool-II alone, or add supporting services for more robust HA support.
   - Use a managed service that provides built-in HA with automatic failover, such as AWS RDS or Google Cloud SQL.

4. Update your PostgreSQL connection string to point to the HA setup. Make sure all of your Langflow instances connect to the same HA PostgreSQL database.

   The connection string you use depends on your HA configuration and services:

   - Use a virtual IP or DNS name that resolves to the current primary database, such as `postgresql://langflow:securepassword@db-proxy:5432/langflow?sslmode=require`.
   - Use the endpoint provided by a managed service, such as `langflow.cluster-xyz.us-east-1.rds.amazonaws.com`.

5. Use a load balancer to distribute requests across your instances.

   The following example configuration is for a production deployment that has three `langflow-runtime` replicas, uses a Kubernetes LoadBalancer Service to distribute traffic to healthy pods, and uses the HA PostgreSQL database connection string:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: langflow-runtime
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: langflow-runtime
     template:
       metadata:
         labels:
           app: langflow-runtime
       spec:
         containers:
           - name: langflow
             image: langflowai/langflow:latest
             ports:
               - containerPort: 7860
             env:
               - name: LANGFLOW_DATABASE_URL
                 value: "postgresql://langflow:securepassword@db-proxy:5432/langflow?sslmode=require"
   ---
   apiVersion: v1
   kind: Service
   metadata:
     name: langflow-runtime
   spec:
     selector:
       app: langflow-runtime
     ports:
       - port: 80
         targetPort: 7860
     type: LoadBalancer
   ```
After implementing HA or Active-Active HA, monitor failover events and ensure that replicas stay in sync.
When `LANGFLOW_DATABASE_CONNECTION_RETRY=True` is set, Langflow (through SQLAlchemy) retries failed database connections, which helps it recover after a failover and reduces disruption once the database is back online.
Although PostgreSQL handles concurrent connections well, you must still monitor for contention, deadlocks, and other performance degradation during high load.
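The retry behavior described above can be pictured as a simple exponential-backoff loop. The sketch below is illustrative only: the `connect_with_retry` helper, its attempt limits, and the fake connection are hypothetical and do not represent Langflow's actual internals.

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=0.5):
    """Call `connect` until it succeeds, backing off exponentially.

    `connect` is any zero-argument callable that raises on failure,
    for example a database connection factory. The delay between
    attempts is base_delay * 2**n seconds.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(base_delay * 2 ** attempt)

# Simulate a failover: the first two attempts fail, the third succeeds.
state = {"calls": 0}
def fake_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("database still failing over")
    return "connected"

print(connect_with_retry(fake_connect, base_delay=0.01))  # connected
```

During a real failover, a loop like this rides out the window between the primary going down and the proxy or virtual IP pointing at the promoted replica.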
Impact of database failure
If the PostgreSQL database becomes unavailable, the following Langflow functions will fail:
- Flow Retrieval: Cannot load new or existing flows from the database.
- Flow Saving: Unable to save new flows or updates to existing flows.
- User Authentication: Login and user management functions fail.
- Project Collection Access: Cannot access or share community/custom project collections.
- Configuration Retrieval: Unable to load application settings.
- Configuration Updates: Changes to settings cannot be saved.
- Execution Log Access: Cannot retrieve historical flow execution logs.
- Log Writing: New execution or system activity logs cannot be recorded.
- Multi-User Collaboration: Sharing flows or projects across users fails.
- API Flow Loading: API requests to load new flows (non-cached) fail.
Flows already loaded in memory may continue to function with cached configurations. However, any operation requiring database access fails until the database is restored. For example, a cached flow might run, but it won't record logs or message history to the database.
To minimize the likelihood and impact of database failure, use an HA configuration and take regular backups.
For example, you can use `pg_dump` to create logical backups, or set up continuous archiving with write-ahead logs (WAL) for point-in-time recovery.
Test your restoration procedures regularly to ensure your team can execute them in a disaster recovery scenario.
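A scheduled logical backup can be as simple as a nightly `pg_dump` invocation. The sketch below builds such a command from the connection details used in this guide; the `pg_dump_command` helper and the backup directory are illustrative assumptions, not a Langflow feature.

```python
import datetime
import shlex

def pg_dump_command(host, dbname, user, backup_dir="/var/backups/langflow"):
    """Build a pg_dump command line for a compressed, custom-format backup.

    Custom format (-Fc) supports selective restore with pg_restore.
    The password should come from ~/.pgpass or PGPASSWORD, never argv.
    """
    stamp = datetime.date.today().isoformat()
    return [
        "pg_dump",
        "-h", host,
        "-U", user,
        "-Fc",  # custom format, compressed
        "-f", f"{backup_dir}/{dbname}-{stamp}.dump",
        dbname,
    ]

cmd = pg_dump_command("postgres", "langflow", "langflow")
print(shlex.join(cmd))
```

Run the resulting command from cron or a Kubernetes CronJob, and restore with `pg_restore -d <dbname> <file>`; the date-stamped filename keeps a retention window of daily backups simple to manage.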
Database monitoring
Monitor your PostgreSQL database to ensure optimal performance and reliability:

- Use tools like pgAdmin, Prometheus with the PostgreSQL exporter, or cloud-based monitoring for PostgreSQL.
- Track performance metrics such as CPU, memory, and disk I/O usage.
- Monitor replica health, availability, lag, and synchronization. For example, use `pg_stat_activity` to monitor connection counts and contention.
- Set up alerts and notifications for high latency, failover events, or replication issues.
- Enable PostgreSQL logging, such as `log_connections` and `log_statement`, to track access and changes.
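As a minimal illustration of the alerting bullet above, the following sketch classifies connection usage against warning and critical thresholds. The function name and threshold values are hypothetical; in practice, the active count would come from a query such as `SELECT count(*) FROM pg_stat_activity WHERE datname = 'langflow';` and the capacity from `SHOW max_connections;`.

```python
def connection_alert_level(active, max_connections, warn=0.7, crit=0.9):
    """Classify connection usage as ok, warning, or critical.

    `active` is the current connection count; `max_connections` is the
    server's configured capacity. Thresholds are fractions of capacity.
    """
    usage = active / max_connections
    if usage >= crit:
        return "critical"
    if usage >= warn:
        return "warning"
    return "ok"

print(connection_alert_level(45, 100))   # ok
print(connection_alert_level(80, 100))   # warning
print(connection_alert_level(95, 100))   # critical
```

Feeding a check like this into your alerting pipeline gives early warning before Langflow starts failing to acquire connections under load.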