Migrate Appsmith Helm Deployment from Non-HA to HA (Kubernetes)

This guide explains how to migrate an existing Appsmith Helm deployment from single-pod storage (typically ReadWriteOnce) to shared storage (ReadWriteMany) so you can run Appsmith in high availability mode with multiple pods.

The default Helm configuration sets autoscaling.enabled: false, which deploys Appsmith as a StatefulSet and creates a pod volume mounted at /appsmith-stacks. In most clusters, this uses the default StorageClass and is not shareable across multiple pods.

To enable HA safely, migrate data to a new ReadWriteMany (RWX) volume first, then cut Appsmith over to that claim.

For all available chart parameters, see Helm values.yaml.

Prerequisites

Before you begin, ensure:

  1. You already have an Appsmith instance installed on Kubernetes using Helm, deployed as a StatefulSet. If this is not your current situation, start a new Kubernetes deployment with HA enabled instead.
  2. You have kubectl and helm access to the target Kubernetes cluster.
  3. You have access to create the required storage resources in your cloud provider and/or Kubernetes cluster.
  4. You have downloaded a recent backup of your Appsmith instance from the cluster before starting the migration. See Backup instance.
  5. You have a maintenance window for final cutover.
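Before provisioning new storage, it can help to confirm the access mode of the existing Appsmith volume. A quick check might look like this, where the namespace placeholder is yours to substitute:

```shell
# Assumed placeholder: replace with your actual namespace.
NAMESPACE="<namespace>"

# List PVCs in the namespace; the ACCESS MODES column for the
# existing Appsmith claim typically shows RWO (ReadWriteOnce).
kubectl get pvc -n "$NAMESPACE"
```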

Step 1: Provision RWX storage outside the Helm chart

Create a new PersistentVolume (PV) and PersistentVolumeClaim (PVC) backed by a RWX-capable storage class using your cloud/on-prem CSI driver.

Consult your cloud or storage provider's documentation as needed to choose an RWX-capable storage class and CSI driver.

For this migration scenario, it is cleaner to create and manage the PV/PVC outside the Appsmith Helm chart and then reference the PVC from values.yaml.

In the examples below, the target PVC name is appsmith-data-ha.

An example PVC you can use might look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: appsmith-data-ha
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 5Gi

Adapt the storageClassName and requested storage size for your provider and environment.
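After saving the manifest, you can create the claim and confirm it binds with the expected access mode. The file name and namespace below are assumptions to substitute:

```shell
# Assumed placeholders: replace with your namespace and manifest file name.
NAMESPACE="<namespace>"
PVC_FILE="appsmith-data-ha-pvc.yaml"

# Create the claim, then confirm STATUS is Bound and ACCESS MODES shows RWX.
kubectl apply -f "$PVC_FILE" -n "$NAMESPACE"
kubectl get pvc appsmith-data-ha -n "$NAMESPACE"
```

Note that some storage classes use WaitForFirstConsumer volume binding, in which case the claim stays Pending until a pod first mounts it; that is expected and resolves in Step 2.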

Step 2: Mount the new PVC as a temporary secondary path

Follow the steps below to mount the new claim as a temporary secondary path:

  1. Update values.yaml to add the new claim as an extra volume and mount point:

    extraVolumes:
      - name: appsmith-data-ha-volume
        persistentVolumeClaim:
          claimName: appsmith-data-ha

    extraVolumeMounts:
      - name: appsmith-data-ha-volume
        mountPath: /appsmith-stacks-ha
  2. Apply the change:

    helm upgrade <release_name> appsmith-ee/appsmith -n <namespace> -f values.yaml
  3. Wait until Appsmith is healthy:

    kubectl get pods -n <namespace>

Step 3: Copy data from current volume to RWX volume

Follow the steps below to copy data from the existing volume to the new RWX volume:

  1. Open a shell in the running Appsmith pod:

    kubectl exec -it pod/<appsmith_pod_name> -n <namespace> -- bash
  2. Copy all data, including hidden files, from the original path to the temporary RWX path. Note that a glob such as /appsmith-stacks/* skips dotfiles, so copy the directory contents with a trailing dot instead:

    cp -a /appsmith-stacks/. /appsmith-stacks-ha/
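As a rough sanity check that the copy completed, a sketch like the following, run inside the same pod shell, compares file counts between the two paths (it assumes find and wc are available in the image, which is typical):

```shell
# Count all regular files (including dotfiles) under a directory.
count_files() {
  find "$1" -type f | wc -l
}

# The two counts should match once the copy completes.
echo "source:      $(count_files /appsmith-stacks) files"
echo "destination: $(count_files /appsmith-stacks-ha) files"
```

For a stricter comparison you could also compare total sizes with `du -sb` on each path.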

Step 4: Cut over Appsmith to the new claim

Follow the steps below to cut over Appsmith to the new claim:

  1. Remove the temporary extraVolumes and extraVolumeMounts entries.

  2. Set Appsmith persistence to use the new claim directly:

    persistence:
      existingClaim:
        enabled: true
        name: appsmith-data-ha
        claimName: appsmith-data-ha
    autoscaling:
      enabled: true
  3. Apply the final cutover and verify:

    helm upgrade -i <release_name> appsmith-ee/appsmith -n <namespace> -f values.yaml

Watch for pods to become healthy:

kubectl get pods -n <namespace>

Verify that the pods mount your new volume by describing the deployment and checking the Volumes section:

kubectl describe deployment/<release_name> -n <namespace>

Data consistency note

caution

Data written between the copy step and final cutover will be left behind on the old volume.

To minimize risk:

  • Keep the copy-to-cutover window as short as possible.
  • Consider temporarily blocking user traffic (for example, disable ingress) during final cutover.
  • If needed, run an additional sync pass (for example, with rsync) just before cutover.

In most cases, the highest-risk loss is recent filesystem writes such as logs, but configuration artifacts can also be affected if the window is large.
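The additional sync pass mentioned above could look like the following sketch, assuming rsync is installed in the Appsmith image (it may not be in all versions; check with `which rsync` first). The pod name and namespace are placeholders to substitute:

```shell
# Assumed placeholders: replace with your pod name and namespace.
POD="<appsmith_pod_name>"
NAMESPACE="<namespace>"

# One-shot incremental sync: copies only files changed since the initial
# cp, and removes destination files deleted from the source since then.
# Trailing slashes make rsync sync directory contents, not the directory.
kubectl exec -it "pod/$POD" -n "$NAMESPACE" -- \
  rsync -a --delete /appsmith-stacks/ /appsmith-stacks-ha/
```

The --delete flag keeps the two paths identical by removing files that no longer exist in the source; drop it if you prefer a purely additive sync.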

See also