Overview
n8n is a popular open-source workflow automation tool. It allows you to connect different services together via predefined triggers and actions. Common examples include sending a Slack notification when a new Trello card is created or adding data from a Google Sheet into a Salesforce object.
Under the hood, n8n relies on a SQLite database to store configuration information about workflows, credentials, and execution logs. By default, this SQLite database file is located at `/home/node/.n8n/database.sqlite` within the n8n container.
Since this SQLite database contains critical data required for n8n to operate, we need a way to back it up. Losing the database would mean losing workflow configurations and history.
This is where Litestream comes in. Litestream provides continuous backup and replication for SQLite databases. It can sync a SQLite database to an S3 bucket in real-time. This gives us an automated offsite backup in case of infrastructure failures or accidental deletions.
In this guide, we’ll cover the steps to set up Litestream to back up n8n’s SQLite database to an S3 bucket.
Steps
Set Up S3 Bucket
First, we need an S3 bucket to store the SQLite database backups.
- Create an S3 bucket, ideally named to indicate it contains n8n database backups (e.g. `n8n-db-backups`)
- Enable versioning on the bucket to preserve history
- Create an IAM user with read/write access to the S3 bucket
- Generate an access key and secret for the IAM user
Record the bucket name, access key, secret, and region - we’ll need them later.
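As a sketch, the IAM user’s policy can be scoped to just the backup bucket. The bucket name `n8n-db-backups` here is the example from above; adjust it to your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::n8n-db-backups/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::n8n-db-backups"
    }
  ]
}
```

Litestream needs to list, read, write, and delete objects in the bucket to manage its snapshots and WAL segments, so avoid granting broader access than this.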
Litestream Configuration
Next, we need to configure Litestream. This is done via a `litestream.yml` file that specifies the database path and S3 replica:
```yaml
dbs:
  - path: /home/node/.n8n/database.sqlite
    replicas:
      - url: s3://n8n-db-backups/n8n-db.sqlite
        access-key-id: <your-access-key>
        secret-access-key: <your-secret>
        region: <your-region>
```
This tells Litestream to continuously replicate `/home/node/.n8n/database.sqlite` to our S3 bucket.
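Hardcoding credentials in the config file is convenient for a first test but awkward for container images. Litestream also reads credentials from the `LITESTREAM_ACCESS_KEY_ID` and `LITESTREAM_SECRET_ACCESS_KEY` environment variables and supports variable expansion in the config file, so a credential-free variant might look like this (a sketch assuming those environment variables are set at runtime):

```yaml
dbs:
  - path: /home/node/.n8n/database.sqlite
    replicas:
      - url: s3://n8n-db-backups/n8n-db.sqlite
        region: ${LITESTREAM_S3_REGION}
```

This keeps secrets out of the image and lets the same image run against different buckets or accounts.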
Docker Configuration
We can run Litestream alongside n8n in a Docker container.
Start with the official n8n Docker image as the base. In the Dockerfile, download the Litestream binary and copy in the `litestream.yml` file.
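A minimal Dockerfile sketch, assuming the `n8nio/n8n` base image and a pinned Litestream release (adjust the version and architecture to your environment):

```dockerfile
FROM n8nio/n8n:latest

USER root

# Download and install the Litestream binary (version is an example; pin your own)
ADD https://github.com/benbjohnson/litestream/releases/download/v0.3.13/litestream-v0.3.13-linux-amd64.tar.gz /tmp/litestream.tar.gz
RUN tar -C /usr/local/bin -xzf /tmp/litestream.tar.gz && rm /tmp/litestream.tar.gz

# Copy in the Litestream config and the startup script
COPY litestream.yml /etc/litestream.yml
COPY run.sh /run.sh
RUN chmod +x /run.sh

USER node
ENTRYPOINT ["/run.sh"]
```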
Add a script that runs `litestream replicate` to begin backing up the SQLite database on container startup:
```sh
#!/bin/sh
set -e

# Restore the database from the S3 replica if no local copy exists yet
litestream restore -if-db-not-exists -if-replica-exists -config /etc/litestream.yml /home/node/.n8n/database.sqlite

# Replicate continuously and run n8n as a child process
exec litestream replicate -config /etc/litestream.yml -exec "n8n start"
```

Note that `litestream replicate` runs in the foreground, so a plain `n8n start` on the next line would never execute; the `-exec` flag launches n8n as a child process instead, and Litestream stops cleanly when n8n exits. The restore step means a fresh container automatically recovers the database from S3 before n8n starts.
Local Testing
With the Docker configuration in place, we can build the image and run it locally to test.
Launch the container with environment variables providing the S3 credentials. Litestream checks `LITESTREAM_ACCESS_KEY_ID` and `LITESTREAM_SECRET_ACCESS_KEY` when credentials aren’t set in the config file:

```sh
docker run -e LITESTREAM_ACCESS_KEY_ID=... -e LITESTREAM_SECRET_ACCESS_KEY=... my-n8n-image
```
To verify backups are working, stop the container and delete the local SQLite database file:

```sh
rm /home/node/.n8n/database.sqlite
```

Note that Litestream does not re-create a database that is deleted out from under a running replication process; on the next container start, the database can be restored from the S3 replica with `litestream restore`.
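You can also inspect the replica directly with Litestream’s `generations` and `snapshots` subcommands, run inside the container (the container name `n8n` here is an example):

```sh
# List the backup generations known for the database
docker exec n8n litestream generations -config /etc/litestream.yml /home/node/.n8n/database.sqlite

# List the snapshots stored on the S3 replica
docker exec n8n litestream snapshots -config /etc/litestream.yml /home/node/.n8n/database.sqlite
```

If these list recent entries, replication is flowing to S3.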
Deploying to Fly.io
For production deployments, we can host our n8n Docker container on Fly.io. Fly is a platform optimized for running containers.
The S3 environment variables can be specified in a `fly.toml` config file:

```toml
[env]
  LITESTREAM_ACCESS_KEY_ID = "..."
  LITESTREAM_SECRET_ACCESS_KEY = "..."
```

In practice, credentials should not live in plaintext in `fly.toml`; prefer `fly secrets set`, which injects them into the app as environment variables at runtime.
With the config set, we can use the Fly CLI to deploy our container.
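A sketch of the deployment commands, assuming the Fly CLI is installed and the app has already been created:

```sh
# Store the S3 credentials as Fly secrets (exposed to the app as env vars)
fly secrets set LITESTREAM_ACCESS_KEY_ID=... LITESTREAM_SECRET_ACCESS_KEY=...

# Build the Dockerfile and deploy the container
fly deploy
```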
Fly can run multiple instances of a container across regions, but note that this setup stores state in a local SQLite file: n8n should run as a single instance, since scaling out would leave each instance writing to its own divergent database.
Built-in monitoring will track CPU, memory, and network utilization. The Fly dashboard gives visibility into request metrics and logs.
Conclusion
By integrating Litestream into our n8n Docker container, we can efficiently back up n8n’s SQLite database to S3. Storing backups offsite gives us protection against infrastructure issues.
Docker and Fly.io provide a simple, reproducible deployment platform for n8n, and continuous backups enable quick disaster recovery with minimal data loss.
Litestream delivers a hands-off real-time replication solution for n8n’s critical SQLite database contents. This gives confidence that workflow configurations and execution history can be restored if ever needed.