
Building a Bulletproof Backup Strategy for Your Self-Hosted Services

Infrastructure · 2026-02-15 · 10 min read · Tags: backup, restic, borgbackup, duplicati, kopia, disaster-recovery, automation
By the Selfhosted Guides Editorial Team, self-hosting practitioners covering open source software, home lab infrastructure, and data sovereignty.

You're running Immich with 80,000 family photos. Vaultwarden holds every password you and your partner own. Paperless-ngx has four years of scanned tax documents. Your Nextcloud instance is the primary file storage for your household. And all of it lives on one server, on one disk, in one location.


If that disk fails — and it will eventually — what's your plan?

Self-hosting without a backup strategy isn't self-hosting. It's gambling with your data. This guide covers everything you need to build a backup system that actually protects you: the 3-2-1 rule, which tools to use, how to automate everything, how to verify your backups work, and where to store them off-site.


The 3-2-1 Rule

The 3-2-1 rule is the foundation of any backup strategy: keep at least 3 copies of your data, on 2 different types of storage media, with 1 copy off-site.

For a typical homelab, this means:

  1. Copy 1: Your live data on the server (production)
  2. Copy 2: A backup on a different drive or NAS on your local network
  3. Copy 3: A backup in the cloud or at a friend's house

If your server's SSD dies, you restore from the local NAS backup. If your house floods and takes out both the server and the NAS, you restore from the cloud. The 3-2-1 rule protects you against hardware failure, accidental deletion, ransomware, and physical disasters.

What About 3-2-1-1-0?

The extended rule adds two requirements: 1 copy that is offline or immutable, and 0 errors when you verify your backups by actually restoring from them.

This is the gold standard. Immutable backups protect against ransomware that encrypts your backup drives. Regular restore testing catches silent corruption and configuration drift. We'll cover both later in this guide.
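One practical way to get an immutable copy with the tools covered below is BorgBackup's append-only mode: on the backup server, restrict the client's SSH key so it can write new data but never delete existing archives. A sketch of the server-side authorized_keys entry (the repository path and key are placeholders):

```
command="borg serve --append-only --restrict-to-path /home/backup/borg-repo",restrict ssh-ed25519 AAAA... client@homelab
```

Even if ransomware compromises the client and runs a delete, the server refuses; you clean up from the server side on your own terms.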

What to Back Up

Not everything on your server needs the same backup treatment. Categorize your data by replaceability:

Critical (Cannot Be Recreated)

Family photos and videos (Immich), password vaults (Vaultwarden), scanned documents (Paperless-ngx), and personal files (Nextcloud). Back these up with full 3-2-1, encrypted, tested regularly.

Important (Hard to Recreate)

Service configuration, Docker Compose files, and application databases. These take real effort to rebuild from scratch. Back these up with at least 2 copies. Configuration files should be version-controlled in Git as an additional safety net.
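As a minimal sketch of that Git safety net (the layout and ignore patterns are assumptions; this demo uses a temporary directory so it is safe to run anywhere, but in practice the directory would be something like /srv/docker):

```shell
# Hypothetical layout: one directory per service, each with a compose file.
CONFIG_DIR="$(mktemp -d)"
mkdir -p "$CONFIG_DIR/vaultwarden"
echo "services: {}" > "$CONFIG_DIR/vaultwarden/docker-compose.yml"

cd "$CONFIG_DIR"
git init -q
git config user.email "backup@example.com"        # placeholder identity for the demo
git config user.name "Homelab Backup"
printf '%s\n' '*/data/' '*/media/' > .gitignore   # track config, skip bulky data
git add .
git commit -q -m "Snapshot of service configuration"
git log --oneline
```

Re-run the add/commit step after any configuration change (or from a cron job) and you get a free history of every config you have ever run.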

Replaceable (Easy to Recreate)

Container images, OS packages, downloaded media, caches, and transcodes. Skip these in your backup to save space and time. They can be rebuilt from source or downloaded again.

Choosing a Backup Tool

Four tools dominate the self-hosted backup space. Each has distinct strengths.

Restic

Best for: Cloud-first backups, multi-backend flexibility, cross-platform needs

Restic is written in Go and stands out for its broad backend support. It can back up directly to local directories, SFTP servers, Amazon S3, Backblaze B2, Azure Blob, Google Cloud Storage, and anything rclone supports (which is essentially everything).

# Initialize a Restic repository on Backblaze B2
export B2_ACCOUNT_ID="your-account-id"
export B2_ACCOUNT_KEY="your-account-key"
restic -r b2:your-bucket-name:restic-repo init

# Create a backup
restic -r b2:your-bucket-name:restic-repo backup \
  /srv/docker/immich/data \
  /srv/docker/vaultwarden/data \
  /srv/docker/paperless/media \
  --exclude-caches

# List snapshots
restic -r b2:your-bucket-name:restic-repo snapshots

# Restore a specific snapshot
restic -r b2:your-bucket-name:restic-repo restore latest \
  --target /tmp/restore-test

Key characteristics:

  - Written in Go, distributed as a single static binary
  - Encryption is always on; repositories are unreadable without the password
  - Content-defined chunking deduplicates data across snapshots
  - Compression supported since version 0.14
  - Backends: local, SFTP, S3, B2, Azure, GCS, and anything rclone can reach

BorgBackup

Best for: Local/SSH backups, maximum compression, Linux-only environments

BorgBackup (Borg) has been the homelab backup workhorse since 2015 (and its predecessor Attic since 2010). It excels at backing up to local drives and remote servers over SSH.

# Initialize a Borg repository on a remote server
borg init --encryption=repokey-blake2 ssh://backup-server/~/borg-repo

# Create a backup
borg create ssh://backup-server/~/borg-repo::'{hostname}-{now}' \
  /srv/docker/immich/data \
  /srv/docker/vaultwarden/data \
  /srv/docker/paperless/media \
  --exclude-caches \
  --compression zstd,3

# List archives
borg list ssh://backup-server/~/borg-repo

# Prune old backups (keep 7 daily, 4 weekly, 6 monthly)
borg prune ssh://backup-server/~/borg-repo \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6

Key characteristics:

  - Written in Python with C extensions; runs on Linux, BSD, and macOS (no native Windows support)
  - Strong compression options: lz4, zstd, zlib, lzma
  - Remote repositories require Borg installed on the server (borg serve over SSH)
  - Append-only mode protects repositories from a compromised client
  - Mature and battle-tested, with a large ecosystem (Borgmatic, Vorta)

Duplicati

Best for: Users who prefer a web UI over CLI, Windows environments

Duplicati is built around its web interface. Non-technical household members can configure and monitor backups through a browser.

# Docker Compose for Duplicati
services:
  duplicati:
    image: lscr.io/linuxserver/duplicati:latest
    container_name: duplicati
    restart: unless-stopped
    ports:
      - "8200:8200"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    volumes:
      - duplicati_config:/config
      - /srv/docker:/source:ro  # Mount your data as read-only
      - /mnt/backup-drive:/backups  # Local backup destination

Configure everything through the web UI at http://your-server:8200. Create backup jobs by selecting source directories, choosing a destination (local, S3, B2, Google Drive, OneDrive, SFTP), setting encryption, and configuring a schedule.

Key characteristics:

  - Full web UI for configuration, scheduling, and monitoring
  - AES-256 encryption and incremental, block-based backups
  - Wide destination support: local, SFTP, S3, B2, Google Drive, OneDrive, and more
  - Runs on Windows, macOS, and Linux (.NET based)
  - The web UI, not the CLI, is the primary interface

Kopia

Best for: Users who want a modern tool with both CLI and web UI

Kopia is the newest contender, written in Go with a philosophy of "Restic's flexibility plus a web UI." It's rapidly gaining adoption in the homelab community.

# Initialize a Kopia repository on local storage
kopia repository create filesystem --path /mnt/backup-drive/kopia-repo

# Or on Backblaze B2
kopia repository create b2 \
  --bucket your-bucket-name \
  --key-id your-key-id \
  --key your-key

# Create a snapshot
kopia snapshot create /srv/docker/immich/data
kopia snapshot create /srv/docker/vaultwarden/data

# List snapshots
kopia snapshot list

# Start the web UI
kopia server start --address 0.0.0.0:51515

Key characteristics:

  - Written in Go, with both a CLI and an official web UI (KopiaUI)
  - Encryption, compression, and deduplication built in
  - Policy system controls retention, scheduling, and compression per directory
  - Backends include local, SFTP, S3, B2, Azure, GCS, and rclone
  - Younger than Restic and Borg, but under active development


The Recommended Setup

For most homelabs, here's the practical recommendation:

Use Restic for backing up to cloud storage (Backblaze B2 or S3-compatible). Use BorgBackup for backing up to a local NAS or secondary drive over SSH. Run both to achieve 3-2-1.

If you want simplicity, use just Restic with two targets: one local directory on a separate drive, one Backblaze B2 bucket. Two commands, two destinations, 3-2-1 achieved.
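Sketched as a shell function (the repository paths, bucket name, and source directory are placeholders, not a tested setup):

```shell
# Run the same backup against both targets: a second local drive and B2.
# Repo paths, bucket name, and source path are assumptions for illustration.
backup_all() {
  for repo in /mnt/backup-drive/restic-repo b2:your-bucket:restic-repo; do
    restic -r "$repo" --password-file /root/.restic-password \
      backup /srv/docker --exclude-caches --tag homelab || return 1
  done
}
# The nightly script then just calls: backup_all
```

Restic deduplicates per repository, so the two targets stay independent: corruption in one cannot propagate to the other.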

Automating Backups

A backup you have to remember to run is a backup that won't get run. Automate everything.

Option 1: Cron Jobs

Create a backup script:

#!/bin/bash
# /usr/local/bin/backup-homelab.sh
set -euo pipefail

LOGFILE="/var/log/homelab-backup.log"
RESTIC_REPOSITORY="b2:your-bucket:restic-repo"
RESTIC_PASSWORD_FILE="/root/.restic-password"

export B2_ACCOUNT_ID="your-account-id"
export B2_ACCOUNT_KEY="your-account-key"

log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') $1" >> "$LOGFILE"
}

log "Starting backup"

# Stop services that need consistent snapshots
docker compose -f /srv/docker/vaultwarden/docker-compose.yml stop

# Restart them even if a later step fails (set -e would otherwise leave them down)
trap 'docker compose -f /srv/docker/vaultwarden/docker-compose.yml start' EXIT

# Pre-backup: dump databases
docker exec postgres pg_dumpall -U postgres > /srv/docker/db-dumps/all-databases.sql
log "Database dump complete"

# Run the backup
restic -r "$RESTIC_REPOSITORY" \
  --password-file "$RESTIC_PASSWORD_FILE" \
  backup \
  /srv/docker/immich/data \
  /srv/docker/vaultwarden/data \
  /srv/docker/paperless/media \
  /srv/docker/nextcloud/data \
  /srv/docker/db-dumps \
  --exclude-caches \
  --tag homelab \
  2>> "$LOGFILE"

log "Backup complete"

# Restart stopped services
docker compose -f /srv/docker/vaultwarden/docker-compose.yml start

# Prune old snapshots (keep 7 daily, 4 weekly, 6 monthly, 2 yearly)
restic -r "$RESTIC_REPOSITORY" \
  --password-file "$RESTIC_PASSWORD_FILE" \
  forget \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6 \
  --keep-yearly 2 \
  --prune \
  2>> "$LOGFILE"

log "Prune complete"

Schedule it with cron:

# Run backup daily at 3 AM
0 3 * * * /usr/local/bin/backup-homelab.sh

Option 2: Systemd Timers

Systemd timers are more robust than cron — they handle missed runs (if the machine was off), provide better logging, and integrate with systemd's notification system.

# /etc/systemd/system/homelab-backup.service
[Unit]
Description=Homelab backup to Backblaze B2
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-homelab.sh
StandardOutput=journal
StandardError=journal

# /etc/systemd/system/homelab-backup.timer
[Unit]
Description=Run homelab backup daily

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true
RandomizedDelaySec=900

[Install]
WantedBy=timers.target

Enable the timer:

sudo systemctl enable --now homelab-backup.timer

# Check timer status
systemctl list-timers homelab-backup.timer

# View backup logs
journalctl -u homelab-backup.service -f

The Persistent=true setting means that if the machine was off at 3 AM, the backup runs as soon as the machine boots. RandomizedDelaySec=900 adds up to 15 minutes of random delay so multiple timers don't all fire at the same moment.

Database Backups

Databases need special handling. You can't just copy a running PostgreSQL data directory and expect a consistent backup. Always use the database's own dump tools.

PostgreSQL

# Dump all databases
docker exec postgres pg_dumpall -U postgres > /srv/backups/all-databases.sql

# Dump a specific database
docker exec postgres pg_dump -U postgres immich > /srv/backups/immich.sql

# For large databases, use custom format (compressed, parallel restore)
docker exec postgres pg_dump -U postgres -Fc immich > /srv/backups/immich.dump

MariaDB/MySQL

# Dump all databases (run mysqldump inside the container so it can read
# the container's own MYSQL_ROOT_PASSWORD environment variable)
docker exec mariadb sh -c \
  'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases' \
  > /srv/backups/all-databases.sql

# Single database
docker exec mariadb sh -c \
  'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" nextcloud' \
  > /srv/backups/nextcloud.sql

SQLite

SQLite files can be safely copied if no writes are happening, but the safest approach uses SQLite's backup command:

# Safe SQLite backup
docker exec vaultwarden sqlite3 /data/db.sqlite3 ".backup '/data/db-backup.sqlite3'"

Include the dump files in your Restic/Borg backup. This gives you both the raw data directories and a guaranteed-consistent database dump you can restore from.

Testing Restores

A backup you've never restored from is a backup you hope works. Hope is not a strategy.

Monthly Restore Test

Set a calendar reminder to test restores monthly. Here's a quick validation script:

#!/bin/bash
# /usr/local/bin/test-restore.sh
set -euo pipefail

RESTORE_DIR="/tmp/restore-test-$(date +%Y%m%d)"
RESTIC_REPOSITORY="b2:your-bucket:restic-repo"
RESTIC_PASSWORD_FILE="/root/.restic-password"

export B2_ACCOUNT_ID="your-account-id"
export B2_ACCOUNT_KEY="your-account-key"

mkdir -p "$RESTORE_DIR"

echo "Restoring latest snapshot to $RESTORE_DIR..."
restic -r "$RESTIC_REPOSITORY" \
  --password-file "$RESTIC_PASSWORD_FILE" \
  restore latest \
  --target "$RESTORE_DIR" \
  --include /srv/docker/vaultwarden/data

echo "Checking restored files..."
if [ -f "$RESTORE_DIR/srv/docker/vaultwarden/data/db.sqlite3" ]; then
    # Verify SQLite integrity
    sqlite3 "$RESTORE_DIR/srv/docker/vaultwarden/data/db.sqlite3" "PRAGMA integrity_check;"
    echo "Vaultwarden database: OK"
else
    echo "ERROR: Vaultwarden database not found in restore!"
    exit 1
fi

# Verify Restic repository integrity
echo "Running repository check..."
restic -r "$RESTIC_REPOSITORY" \
  --password-file "$RESTIC_PASSWORD_FILE" \
  check

echo "Restore test passed"
rm -rf "$RESTORE_DIR"

What to Verify

During a restore test, check:

  1. Files exist — The expected directories and files are present
  2. Database integrity — SQLite PRAGMA integrity_check, PostgreSQL pg_restore --list
  3. File counts — Compare file counts against a known baseline
  4. Sample content — Open a few files to verify they're not corrupted or zero-length
  5. Repository health — Run restic check or borg check to verify repository integrity
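Check 3 can be a one-liner. A small helper, demonstrated here on a throwaway directory; in a real test you would run it on both the live path and the restored path and compare:

```shell
# Count regular files under a directory tree.
count_files() { find "$1" -type f | wc -l; }

# Demo on a temporary directory containing three files.
demo="$(mktemp -d)"
touch "$demo/a" "$demo/b" "$demo/c"
count_files "$demo"

# Real usage (paths as in the restore script above):
#   count_files /srv/docker/vaultwarden/data
#   count_files /tmp/restore-test/srv/docker/vaultwarden/data
```

A restored count far below the live count is an early warning that an exclude pattern or a permissions problem is silently dropping files.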

Off-Site Storage Options

Backblaze B2

The homelab community's favorite cloud storage backend. At $0.006/GB/month for storage and $0.01/GB for downloads, it's dramatically cheaper than AWS S3.

B2 is natively supported by Restic, Borg (via rclone), Duplicati, and Kopia. Create a B2 bucket, generate an application key, and you're ready.

Another Server (SSH/SFTP)

If you have a friend or family member who also self-hosts, exchange backup storage. You store an encrypted Borg repository on their server; they store one on yours. Free, off-site, and encrypted so neither party can read the other's data.

# Initialize an encrypted Borg repo on a friend's server
borg init --encryption=repokey-blake2 ssh://friend-server/~/borg-backups/your-name

Hetzner Storage Box

Hetzner offers "Storage Boxes" — cheap, SSH/SFTP-accessible storage starting at 1TB for about 3.81 EUR/month. They support BorgBackup natively (Borg over SSH) and are popular in the European homelab community.

Wasabi

S3-compatible storage at $6.99/TB/month with no egress fees. A good S3-compatible option if you want more predictable pricing than AWS or Azure.

Monitoring Your Backups

A backup that silently fails is worse than no backup — it gives you false confidence. Monitor your backups actively.

Healthchecks.io

Healthchecks.io is a cron monitoring service (also self-hostable) that alerts you when a job fails to check in. Add a ping at the end of your backup script:

# At the end of backup-homelab.sh
curl -fsS -m 10 --retry 5 https://hc-ping.com/your-unique-uuid

If the backup script fails or doesn't run, Healthchecks sends you an alert.
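Healthchecks.io also accepts an explicit failure signal: appending /fail to the ping URL records the run as failed immediately instead of waiting for the grace period to expire. A small helper that picks the endpoint from an exit status (the UUID is a placeholder):

```shell
# Choose the ping endpoint based on an exit status; /fail marks the check failed.
ping_url() {
  base="https://hc-ping.com/your-unique-uuid"
  if [ "$1" -eq 0 ]; then echo "$base"; else echo "$base/fail"; fi
}

ping_url 0    # prints https://hc-ping.com/your-unique-uuid
ping_url 7    # prints https://hc-ping.com/your-unique-uuid/fail
```

Usage in a wrapper: run the backup script, then `curl -fsS -m 10 --retry 5 "$(ping_url $?)"`, so failures alert you within minutes rather than after the missed-check-in timeout.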

Uptime Kuma

If you're already running Uptime Kuma for service monitoring, add a "Push" monitor for your backup job. Same concept as Healthchecks.io but integrated with your existing monitoring.

Manual Checks

Even with monitoring, periodically check:

# List recent snapshots — verify dates are current
restic -r "$RESTIC_REPOSITORY" snapshots --latest 5

# Check repository size and stats
restic -r "$RESTIC_REPOSITORY" stats

# Verify repository integrity
restic -r "$RESTIC_REPOSITORY" check

Disaster Recovery Plan

Having backups is step one. Having a tested plan to restore everything is step two. Document your disaster recovery procedure:

  1. Hardware replacement — Where will you run services if your server dies? A spare machine? A VPS temporarily?
  2. OS and Docker setup — Document or script your base OS installation, Docker setup, and network configuration
  3. Restore order — Which services do you restore first? (Hint: DNS and reverse proxy first, then critical services like passwords and email, then everything else)
  4. Credentials — Your Restic/Borg passwords and cloud API keys need to be stored somewhere you can access without your server. A printed copy in a safe, a password manager on your phone, or a sealed envelope with a trusted person.
  5. Time estimate — How long does a full restore take? Test this. A 500GB restore from B2 at 100Mbps takes about 11 hours.
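The estimate in step 5 is straightforward arithmetic: bits to transfer divided by link speed. A quick sanity check:

```shell
# 500 GB at 100 Mbps: convert GB to megabits (1 GB = 8000 Mb), divide by speed.
size_gb=500
link_mbps=100
seconds=$(( size_gb * 8 * 1000 / link_mbps ))   # 40000 seconds
echo "$(( seconds / 3600 )) hours"               # prints: 11 hours
```

Plug in your own data size and real-world download speed; the answer tells you whether "restore from cloud" is an overnight job or a multi-day one.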

Write this plan down. Store it somewhere accessible when your server is down. Review it every 6 months.

Putting It All Together

Here's a complete, production-ready backup setup for a typical homelab:

Tools: Restic for cloud backup, local directory backup on separate drive

Schedule: Daily at 3 AM via systemd timer

Retention: 7 daily, 4 weekly, 6 monthly, 2 yearly snapshots

Off-site: Backblaze B2 bucket with application key

Monitoring: Healthchecks.io ping on success, email alert on failure

Testing: Monthly restore test of one service's data

Cost: ~$3-6/month for B2 storage (500GB-1TB of deduplicated, compressed backups)

The total setup takes about 2-3 hours. After that, it runs unattended. The monthly restore test takes 15 minutes. For $5/month and 15 minutes of your time, you get the confidence that a disk failure, ransomware attack, or house fire won't cost you years of irreplaceable data.

Do it today. Not tomorrow, not "when you have time." The day you need your backups is never the day you planned for.

Get free weekly tips in your inbox. Subscribe to Self-Hosted Weekly