I just got my home server up and running and was wondering what you guys recommend for backups. I figure it will probably be worth having backups on cloud servers that are external; are there any good services y'all use for that?
I have an unRAID server which hosts a Docker image of Duplicacy (the web interface is paid, though), and it backs up to Backblaze B2. I have roughly 175 GB backed up, for which I pay $0.87 a month.
version: "1"
# Schedules (https://www.freedesktop.org/software/systemd/man/systemd.time.html#Calendar%20Events)
{{ $SCHEDULE_RESTIC_BACKUP := "*-*-* 22:00:00" }} # Daily at 10PM
{{ $SCHEDULE_RESTIC_CHECK := "Sat *-*-* 04:00:00" }} # Weekly at 4AM on Saturday
{{ $SCHEDULE_SYNC_BACKUP := "Sun *-*-* 21:30:00" }} # Weekly at 9:30PM on Sunday
{{ $SCHEDULE_POSTGRES_BACKUP := "Fri *-*-* 20:00:00" }} # Weekly at 8PM on Friday
# Directories
{{ $LOCATION_RESTIC_BINARY := "/home/deck/Desktop/.config/restic/bin/restic_0.15.2_linux_arm64" }}
{{ $LOCATION_RESTIC_REPO := "/home/deck/Desktop/restic-repo" }}
{{ $LOCATION_RESTIC_LOG := "/home/deck/Desktop/.config/restic/logs" }}
{{ $LOCATION_RESTIC_STATUS := "/home/deck/Desktop/.config/restic/logs/statuses" }}
{{ $LOCATION_RESTIC_BLOCKED_FILE := "/home/deck/Desktop/.config/restic/BLOCKED" }}
{{ $LOCATION_RCLONE_BINARY := "/home/deck/Desktop/.config/restic/bin/rclone_1.63.1_linux_arm64" }}
{{ $LOCATION_RCLONE_REPO := "bucket:restic-backup-12345" }}
{{ $LOCATION_RCLONE_CONFIG := "/home/deck/Desktop/.config/restic/config/rclone.conf" }}
{{ $LOCATION_RESTICPROFILE_LOCK := "/tmp/resticprofile-default.lock" }}
{{ $LOCATION_POSTGRES_DUMP := "/home/deck/Desktop/dumps" }}
{{ $LOCATION_PRIMARY_BACKUP_SOURCE := "/home/deck/Desktop/" }}
# Configs
{{ $CONFIG_CURRENT_TIME := .Now.Format "20060102T150405" }}
{{ $CONFIG_RESTIC_PASSWORD := "/home/deck/Desktop/.config/restic/config/password.txt" }}
{{ $CONFIG_RESTIC_EXCLUDE := "/home/deck/Desktop/.config/restic/excludes.txt" }}
global:
  default-command: snapshots # Run 'snapshots' when no command is specified
  initialize: false # Do not initialize a repository if none exists
  priority: low # Use priority class on Windows and "nice" on Unixes
  min-memory: 100 # Minimum required RAM for Resticprofile to start
  restic-lock-retry-after: 5m # Retry failed restic command acquisition every 5 minutes
  restic-stale-lock-age: 10h # Unlock stale lock if age exceeds 10 hours
  restic-binary: '{{ $LOCATION_RESTIC_BINARY }}' # Location of the Restic binary

default:
  lock: '{{ $LOCATION_RESTICPROFILE_LOCK }}' # Local lockfile to prevent concurrent profile runs
  force-inactive-lock: true # Detect and remove stale locks
  initialize: true # Initialize repository if it doesn't exist
  repository: '{{ $LOCATION_RESTIC_REPO }}' # Path to Restic repository
  password-file: '{{ $CONFIG_RESTIC_PASSWORD }}' # File containing repository password
  status-file: '{{ $LOCATION_RESTIC_STATUS }}/{{ $CONFIG_CURRENT_TIME }}-restic-status.json' # Output status file
  compression: 'max' # Maximum compression level
  run-after-fail: # Block syncing if there was a failure. TODO: Add an email
    - 'echo "The command ${PROFILE_COMMAND} has failed in ${PROFILE_NAME}. Please check the logs." > {{ $LOCATION_RESTIC_BLOCKED_FILE }}'

  backup:
    run-before: # Bring down Docker before backup
      - 'systemctl stop docker.socket'
      - 'systemctl stop docker'
    run-finally:
      - 'grep --invert-match -E "^unchanged|\(0 B added, 0 B stored\)|\(0 B added\)" {{ tempFile "backup.log" }} > {{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-backup.log' # Copy log file, stripping out any unchanged files
      - 'systemctl start docker' # Bring Docker back online after backup
    one-file-system: false # Exclude other file systems
    no-error-on-warning: true # Don't consider warnings as backup failures
    source: # Directories to back up
      - '{{ $LOCATION_PRIMARY_BACKUP_SOURCE }}'
    exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}' # File containing exclude patterns
    exclude-caches: true # Exclude cache files
    schedule: '{{ $SCHEDULE_RESTIC_BACKUP }}' # Backup schedule
    schedule-permission: system # Schedule permission
    schedule-lock-wait: 10m # Wait time for the lock during schedule
    schedule-log: '{{ tempFile "backup.log" }}' # Log file to /tmp. This contains all information, including unchanged files, which we do not care about
    verbose: 2 # Log details about processed files

  check:
    schedule: '{{ $SCHEDULE_RESTIC_CHECK }}' # Verification schedule
    schedule-permission: system # Schedule permission
    schedule-lock-wait: 10m # Wait time for the lock during schedule
    schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-check.log' # Log file
    read-data: true # Verify data during check

  prune:
    dry-run: true # Only prune if safe to do so, change manually
    repack-uncompressed: true # Repack all uncompressed data

  forget:
    dry-run: true # Only forget if safe to do so, change manually

  rewrite:
    dry-run: true # Only rewrite if safe to do so, change manually
    forget: true # Remove original snapshots after creating new ones
    exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}' # File containing exclude patterns

  mount:
    allow-other: true # Allow other users to access the mount point

  rebuild-index:
    read-all-packs: true # Read all pack files to generate new index from scratch

# The following shell profiles are simply to run other shell scripts at a scheduled time
# We do not actually run the primary Restic commands listed, as we exit the process early
shell-postgres: # Profile to run shell scripts only. We exit the current process before Restic can run.
  backup:
    schedule: '{{ $SCHEDULE_POSTGRES_BACKUP }}' # Postgres backup schedule
    schedule-permission: system # Schedule permission
    schedule-lock-mode: ignore # Ignore locks, if any
    schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-postgres-backup.log' # Log file
    dry-run: true # Don't write data
    run-before: # Dump postgres databases
      - 'chmod 777 /var/run/docker.sock'
      - 'docker exec -t immich-postgres pg_dumpall -c -U postgres | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Immich database: {{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"'
      - 'docker exec -t joplin-postgres pg_dumpall -c -U joplin | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Joplin database: {{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"'
      - 'kill $$'

shell-sync:
  backup:
    schedule: '{{ $SCHEDULE_SYNC_BACKUP }}' # Sync backup schedule
    schedule-permission: system # Schedule permission
    schedule-lock-mode: ignore # Ignore locks, if any
    schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-rsync-backup.log' # Log file
    dry-run: true # Don't write data
    run-before: # Sync the Restic repo, after checking that the repository is in good health
      - 'if [ -f "{{ $LOCATION_RESTIC_BLOCKED_FILE }}" ]; then echo "There has been a problem with the Restic repository, please check the logs. If everything is okay, delete the BLOCKED file." && kill $$; fi'
      - '{{ $LOCATION_RCLONE_BINARY }} -v sync {{ $LOCATION_RESTIC_REPO }} {{ $LOCATION_RCLONE_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }} --b2-hard-delete'
      - '{{ $LOCATION_RCLONE_BINARY }} cleanup {{ $LOCATION_RCLONE_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }}'
      - 'kill $$'
Resticprofile doesn't let me run other shell commands on a schedule, and because I wanted everything in a single configuration, I just created two extra profiles which call the backup command. I made the shell commands run before Restic and then kill the process before Restic actually runs, which effectively does what I needed.
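For what it's worth, managing it from the command line is straightforward too. A minimal sketch, assuming the config above is saved as profiles.yaml (the profile names match what's defined above):

# Install the systemd timers for every scheduled profile in the config
resticprofile -c profiles.yaml schedule --all

# Run one of the shell-only profiles by hand to test the run-before commands
resticprofile -c profiles.yaml -n shell-postgres backup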
Borg with an external hard drive and BorgBase as a remote. I use the 2-2-1 rule (🙈), as I struggle to find a good way to do another backup, and RAID does not count 😬
rsync.net and learn to use Borg; they're stupid cheap if you're technically proficient enough to handle the Borg setup yourself. They charge by the gigabyte, but it's 1.5¢/GB at the most expensive, and cheaper in bulk.
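If you haven't used Borg before, the general shape is simple. A minimal sketch over SSH; the host, repo path, and retention numbers below are just placeholders, not rsync.net specifics:

# One-time: create an encrypted repository on the remote
borg init --encryption=repokey ssh://user@host/./backups/borg

# Nightly: archive /home, skipping cache directories
borg create --stats --compression zstd ssh://user@host/./backups/borg::'{hostname}-{now:%Y-%m-%d}' /home --exclude-caches

# Thin out old archives
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 ssh://user@host/./backups/borg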
I used to have everything backed up to a 2TB USB drive. Which I accidentally dropped down the stairs. I lost thousands of family photos and documents. That changed my backup perspective.
I now have a Synology NAS with 12 TB in a RAID 5 array (for a bit of disk redundancy). All my home devices, Proxmox servers, etc. back up to it. The NAS also holds a few TB of media. Attached to it I have a USB hard drive (also 12 TB), and the NAS gets fully backed up to that drive nightly.
I also have a remote Raspberry Pi with a smaller USB drive (4 TB) attached at my brother's house (in another country), where I back up most of the contents of my home NAS. I don't back up the media, just the important stuff. I might have to upgrade to a larger drive...
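The sync tooling isn't exciting; if you wanted to roll the offsite part by hand instead of using the NAS vendor's app, a rough sketch is just rsync over SSH from cron (hostnames and paths here are made up):

# Nightly at 02:00: push the important shares to the remote Pi's USB drive
0 2 * * * rsync -aH --delete -e ssh /volume1/important/ pi@remote-pi:/mnt/usb/nas-backup/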
I use Duplicati connected to Storj; my data volumes get backed up incrementally once per month. My files don't change very often, so monthly is a good balance. Not counting my Jellyfin library, those backups are around 1 TB; with the Jellyfin library, almost 15 TB.
Earlier this year, I recovered from a 100% data loss scenario, as I didn't (and still don't) have space for physical backups. I have a 25 TB allowance, so my actual cost was €0. If I had to pay, it would have been under €1.
I rsync my data to another drive once a day, so I can restore a file if I accidentally delete it. Important stuff additionally goes encrypted via rclone to a Hetzner Storage Box.
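The shape of it is roughly this; "hetzner-crypt" is just whatever you named the rclone crypt remote layered on top of the Storage Box in your rclone config:

# Daily: plain local copy to the second drive
rsync -a --delete /data/ /mnt/backupdrive/data/

# Encrypted offsite copy; the crypt remote handles encryption transparently
rclone sync /data/important hetzner-crypt:important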
I use OneDrive. Buy the Costco subscription and you get something like 15 months for around 110 CAD, which gives you 6 TB. I create some fake accounts and link their sharing to my main account. I have an encrypted rclone share for some things; for others I GPG-encrypt the tar before sending it up. It's been working fine for a couple of years, and I have multiple TB backed up.
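The GPG path is just a pipe straight into rclone; names below are illustrative, and rclone rcat streams stdin to a remote file (gpg will prompt for the passphrase unless you script it with --batch):

tar -czf - /data/photos \
  | gpg --symmetric --cipher-algo AES256 -o - \
  | rclone rcat onedrive:backups/photos-$(date +%Y%m%d).tar.gz.gpg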
I run a nightly Borg backup to a separate box, and that box then uses rclone to copy the Borg repo offsite. Before running the Borg backup, I export all databases and Docker volumes so they get picked up.
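The pre-export step is nothing fancy; roughly this, with container and volume names as placeholders:

# Dump a Postgres container into the backup source
docker exec -t my-postgres pg_dumpall -U postgres | gzip > /backup/dumps/postgres-$(date +%F).sql.gz

# Tar a named Docker volume into the backup source
docker run --rm -v my_app_data:/data:ro -v /backup/volumes:/out alpine tar -czf /out/my_app_data-$(date +%F).tar.gz -C /data .

# Then the usual nightly Borg run over the whole backup source
borg create --stats /srv/borg-repo::'{hostname}-{now}' /backup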
I have been with IDrive since 2009. At the time, they were the only ones that allowed backups of network-attached storage on their cheaper personal plans. Everyone else treated that as an "enterprise" feature requiring a business plan, which was bullsh*t, because lots of home NAS devices were being sold.
Anyway, I haven't done a recent comparison of services, but I remain happy with idrive.
These days I no longer back up from a computer with a mapped drive, but directly from my NAS, which runs the IDrive software.
I had a catastrophic dual-drive failure a few years ago: one drive failed, and another failed during the RAID rebuild! I was able to restore about 1 TB of data and didn't lose anything important.
They also offer backup and restore by shipping a drive to you, if you want to avoid the huge initial upload or a full restore over the network, but I haven't used that feature.
They do also have a mobile app, but last time I tried it, it wasn't great.