Deploy the Langflow production environment on Kubernetes

The Langflow runtime Helm chart is tailored for deploying applications in a production environment. It is focused on stability, performance, isolation, and security to ensure that applications run reliably and efficiently.

important

For security reasons, the default Langflow runtime Helm chart sets readOnlyRootFilesystem: true. This setting prevents modifications to the container's root filesystem at runtime, which is a recommended security measure in production environments.

Disabling readOnlyRootFilesystem (setting it to false) degrades your deployment's security posture. Only disable this setting if you understand the security implications and have implemented other security measures.

For more information, see the Kubernetes documentation.

Prerequisites

  • A Kubernetes cluster
  • kubectl configured to connect to your cluster
  • Helm installed

Install the Langflow runtime Helm chart

  1. Add the repository to Helm:


    helm repo add langflow https://langflow-ai.github.io/langflow-helm-charts
    helm repo update

  2. Install the Langflow app with the default options in the langflow namespace.

    If you have a custom image with packaged flows, you can deploy Langflow by overriding the default values.yaml with the --set flag:


    helm install my-langflow-app langflow/langflow-runtime \
      -n langflow --create-namespace \
      --set image.repository=myuser/langflow-hello-world \
      --set image.tag=1.0.0

  3. Check the status of the pods:


    kubectl get pods -n langflow
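
Step 2 above shows the custom-image case. If you just want the chart's default image and options, the plain install with no `--set` overrides looks like:

```shell
# Install the chart with its default image and bundled example flow
helm install my-langflow-app langflow/langflow-runtime -n langflow --create-namespace
```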

Access the Langflow runtime

  1. Get your service name:


    kubectl get svc -n langflow

    The service name is your release name suffixed by -langflow-runtime. For example, if you used helm install my-langflow-app, then the service name is my-langflow-app-langflow-runtime.

  2. Enable port forwarding to access Langflow from your local machine:


    kubectl port-forward -n langflow svc/my-langflow-app-langflow-runtime 7860:7860

  3. Confirm you can access the API by calling http://localhost:7860/api/v1/flows/:


    curl -v http://localhost:7860/api/v1/flows/

    A successful request returns a list of flows.

  4. Run a packaged flow. The following example gets the first flow ID from the flows list, and then runs the flow:


    # Get flow ID
    id=$(curl -s "http://localhost:7860/api/v1/flows/" | jq -r '.[0].id')

    # Run flow
    curl -X POST \
      "http://localhost:7860/api/v1/run/$id?stream=false" \
      -H 'Content-Type: application/json' \
      -d '{
        "input_value": "Hello!",
        "output_type": "chat",
        "input_type": "chat"
      }'
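
The same request can also be built in Python and sent with any HTTP client. A minimal sketch (the helper name and placeholder flow ID are illustrative; the URL and payload mirror the curl call above):

```python
import json

def build_run_request(base_url: str, flow_id: str, message: str):
    """Build the URL and JSON body for Langflow's POST /api/v1/run/{flow_id} call."""
    url = f"{base_url}/api/v1/run/{flow_id}?stream=false"
    body = json.dumps({
        "input_value": message,
        "output_type": "chat",
        "input_type": "chat",
    }).encode("utf-8")
    return url, body

url, body = build_run_request("http://localhost:7860", "my-flow-id", "Hello!")
```

Send `body` as the request body with a `Content-Type: application/json` header, for example via urllib.request or the requests library.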

Configure secrets and environment variables

Use the .env section of the Langflow runtime Helm chart's values.yaml file to define environment variables for your Langflow deployment. This includes built-in Langflow environment variables, as well as global variables used by your flows.

Langflow can source global variables from your runtime environment, such as Kubernetes secrets referenced in values.yaml. For example, the Langflow runtime Helm chart's example flow JSON uses a global variable that is a secret. If you want to run this flow in your Langflow deployment on Kubernetes, you need to include the secret in your runtime configuration.

tip

When you export flows as JSON files, it's recommended to omit secrets. Whether or not a secret is included depends on how you declare the secret in your flow and whether you use the Save with my API keys option. For more information, see Import and export flows.

Set secrets

Kubernetes secrets are the recommended way to store sensitive values and credentials.

Use secretKeyRef to reference a Kubernetes secret in values.yaml:


env:
  - name: OPENAI_API_KEY
    valueFrom:
      secretKeyRef:
        name: openai-credentials
        key: openai-key
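
To confirm that a referenced secret actually reaches the container, you can read the variable from a running pod. This sketch assumes the my-langflow-app release name from the install step:

```shell
# Print the injected variable from inside a pod of the deployment
kubectl exec -n langflow deploy/my-langflow-app-langflow-runtime -- printenv OPENAI_API_KEY
```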

Create and set secrets with kubectl and helm

You can use kubectl and helm commands to create and set secrets:

  1. Create a secret:


    kubectl create secret generic openai-credentials \
      --namespace langflow \
      --from-literal=OPENAI_API_KEY=sk...

  2. Verify the secret exists:


    kubectl get secrets -n langflow openai-credentials

    The output confirms the secret exists without displaying its values. Kubernetes stores secret values base64-encoded; they are not encrypted unless you enable encryption at rest.

  3. Upgrade the Helm release to use the secret:


    helm upgrade my-langflow-app langflow/langflow-runtime -n langflow \
      --reuse-values \
      --set "extraEnv[0].name=OPENAI_API_KEY" \
      --set "extraEnv[0].valueFrom.secretKeyRef.name=openai-credentials" \
      --set "extraEnv[0].valueFrom.secretKeyRef.key=OPENAI_API_KEY"

    Escape square brackets if required by your shell.
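
For example, in zsh an unquoted [ ] pair is treated as a glob pattern, so either quote each --set argument as shown above or backslash-escape the brackets:

```shell
# zsh: backslash-escape the index brackets instead of quoting the whole argument
helm upgrade my-langflow-app langflow/langflow-runtime -n langflow \
  --reuse-values \
  --set extraEnv\[0\].name=OPENAI_API_KEY \
  --set extraEnv\[0\].valueFrom.secretKeyRef.name=openai-credentials \
  --set extraEnv\[0\].valueFrom.secretKeyRef.key=OPENAI_API_KEY
```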

Set the log level and other configuration variables

For non-sensitive variables, such as LANGFLOW_LOG_LEVEL, you can set the value directly in values.yaml:


env:
  - name: LANGFLOW_LOG_LEVEL
    value: "INFO"

Configure scaling

Use replicaCount and resources in the Langflow runtime Helm chart's values.yaml file to configure scaling:

  • Horizontal scaling: Use replicaCount to set the number of replicas for your Langflow deployment.


    replicaCount: 3

  • Vertical scaling: Use the resources section to adjust pod resources depending on your application's needs.


    resources:
      requests:
        memory: "2Gi"
        cpu: "1000m"
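
Both settings can also be applied to a running release without editing values.yaml. A sketch, assuming the my-langflow-app release from the install step:

```shell
# Scale out to 3 replicas and raise the per-pod resource requests
helm upgrade my-langflow-app langflow/langflow-runtime -n langflow \
  --reuse-values \
  --set replicaCount=3 \
  --set resources.requests.memory=2Gi \
  --set resources.requests.cpu=1000m
```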
