My Backup settings

My FreeNAS setup

Having good network storage to share the files one has at home is a very useful thing. For several years I have used FreeNAS: it is free, open source, and built on top of OpenZFS, arguably one of the first new-generation file systems, with lots of nice enterprise features: snapshots, checksums for everything, growable volumes, configurable redundancy… It might require a bit more configuration work than ready-made solutions like Synology, but it works well.

I use a very old system that I assembled 10 years ago: an AMD Phenom(tm) II X2 555 (Black Edition, 2 cores at 3.2GHz) with 20GB of ECC memory on an Asus M4A89GTD PRO USB3. I considered upgrading it, mainly because a newer system would use much less electricity (power efficiency has improved quite a bit), but every time I looked into it, it was more expensive than I expected or was ready to spend for a safe system with enough ECC RAM. AMD Zen 2 (Ryzen/EPYC 2) might have changed that (Intel has always charged a large premium for ECC RAM). Anyway, throwing away hardware also has an environmental cost, so I am staying with it for now.

FreeNAS works mostly without issues. I have a pool with mirrored disks (2x3T & 2x8T) and an SSD for the OS.


Against mistakes (deletions, incorrect modifications…) I keep ~100 periodic snapshots via Tasks > Periodic Snapshot Tasks:

  • Every 1 Week for 6 months (~24).
  • Every 1 day for 2 weeks (~14).
  • Every 2 hours for 2 days (~24).
  • Every 15 mins for 10 hours (~40).

This way I am able to recover older files.
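In practice, recovering an old version of a file is just a copy out of the read-only .zfs/snapshot directory that ZFS exposes inside each dataset. A quick sketch (the snapshot and file names are made up for illustration):

```shell
# Every dataset exposes its snapshots read-only under the hidden .zfs directory.
# The snapshot and file names here are illustrative, not my real layout.
ls /mnt/swimmingpool/.zfs/snapshot/                    # list the available snapshots
cp /mnt/swimmingpool/.zfs/snapshot/auto-20201020.1200-2d/notes.txt \
   /mnt/swimmingpool/notes.txt                         # restore a single file
```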

Backup Procedure

Still, in case of disaster an extra backup is needed. I use a hot-swappable SATA enclosure built into my server tower to perform backups. This way I can use bare 3.5’’ SATA hard disks, which I keep in storage boxes (I use Renkforce storage boxes for 3.5’’ HDs) to protect them a bit. Luckily I am still at a level where all my data fits on a single 8T HD.

What I changed recently is the filesystem I use on it: I switched from UFS to ZFS. This means that my backup can contain snapshots and is checksummed. Clearly, if a disk fails the data is lost, but I always keep at least two backups (and I have begun to keep a backup off-site, now that I keep it encrypted).
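Since the backup pool is checksummed, its integrity can also be verified explicitly with a scrub, which re-reads every block and checks it against its checksum. A sketch, using the example pool name Backup from below:

```shell
# Re-read every block of the backup pool and verify all checksums.
zpool scrub Backup
# Check scrub progress and see whether any errors were found.
zpool status Backup
```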

It is possible to do everything from the command line: create a pool (zpool create ..., for example zpool create -m /mnt/Backup -O compression=lz4 Backup), import a previously created pool (zpool import ...), sync the disk (sync), check the pool (zpool status [pool]), export a pool (zpool export [pool]) before removing the disk, and list the available pools (zpool list). Initially I did it that way, but it is cumbersome, and the GUI does not like it.

Finally, I decided to create and import single disk pools from the GUI. This also makes it easier to create and handle encrypted volumes.

  • To create a new volume (first time using an HD): Storage > Pools > Add > Create new pool. Choose a name (I use bk_*), activate Encryption, choose the disk, move it into the VDev, and finally Create the pool. Save the encryption key in a safe place. I have a directory with a subdirectory for each pool name where I save the corresponding geli.key. I also write the pool name on the disk.

  • To import an existing volume (all the following times, to update a backup): Storage > Pools > Add > Import an existing pool. Choose yes, decrypt disks, and upload the key.

I perform the actual backup from the command line, using the shell provided by the GUI (I did not open the SSH port). Thus, being able to recover if the connection is lost is crucial. For this I use the screen command (screen -ls to see the running screens, screen -l to start a new login screen, screen -r to reattach to a detached screen).

if [ -z "$STY" ] ; then
  echo "Please run in a screen ( screen -ls)"
  exit 1
fi

By giving the backup pool a name that starts with bk_ and contains only letters and underscores, I can easily get its name with

export bkPool=$(zpool list | grep -E '^bk_' | head -n1 | cut -d " " -f1)
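A slightly more robust variant of this extraction uses zpool list -H -o name, which prints one bare pool name per line (no header, no extra columns), so no cut is needed:

```shell
# -H suppresses the header and -o name restricts the output to the pool name column,
# so each line is exactly one pool name and grep/head is all the parsing required.
export bkPool=$(zpool list -H -o name | grep -E '^bk_' | head -n1)
```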

The pool to back up, on the other hand, is passed as an argument, so that in my case

export poolToBackup=swimmingpool

The strategy for the backup is to create a snapshot named exactly like the backup pool:

if zfs list -t snapshot | grep ${poolToBackup}@${bkPool}\  >& /dev/null ; then
  echo "WARNING a snapshot named ${bkPool} is already present in ${poolToBackup}, we will not overwrite it, stopping."
  exit 1
fi
zfs snapshot -r ${poolToBackup}@${bkPool}

If we have an old backup snapshot both on the backup disk and on the disk to back up

if zfs list -t snapshot | grep ${bkPool}@${bkPool}-last\  >& /dev/null && zfs list -t snapshot | grep ${poolToBackup}@${bkPool}-last\  >& /dev/null ; then

then the backup can be sped up quite a bit by sending just what is missing (the old snapshot is kept on both pools under the name ${bkPool}-last):

  zfs send -Rce -i ${poolToBackup}@${bkPool}-last ${poolToBackup}@${bkPool} | zfs receive -F ${bkPool}

(receive -Fs $bkPool should make the transfer resumable, but I had issues with it, so I went with a plain -F.)
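For reference, the resumable variant would look roughly like this: zfs receive -s stores partial state, and after an interruption a resume token can be read back and fed to zfs send -t. Note that resume tokens are per dataset, which may be why this interacts poorly with recursive -R sends. A sketch, not what I actually run:

```shell
# Resumable incremental send (sketch): -s on receive keeps partial state on interruption.
zfs send -Rce -i ${poolToBackup}@${bkPool}-last ${poolToBackup}@${bkPool} \
  | zfs receive -Fs ${bkPool}
# After an interruption, recover the token from the backup pool and resume from it.
token=$(zfs get -H -o value receive_resume_token ${bkPool})
zfs send -t ${token} | zfs receive -Fs ${bkPool}
```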

Otherwise, if there is no previous backup, send a full backup from scratch:

  zfs send -Rce ${poolToBackup}@${bkPool} | zfs receive -F ${bkPool}

After the backup we should have a snapshot on the backup disk:

if ! zfs list -t snapshot | grep ${bkPool}@${bkPool}\  >& /dev/null ; then
  echo "ERROR did not find snapshot on backup disk: ${bkPool}@${bkPool}"
  exit 1
fi

Then the old snapshots are removed, to avoid keeping old files too long and to clean up for the next backup:

if zfs list -t snapshot | grep ${poolToBackup}@${bkPool}-last\  >& /dev/null ; then
  zfs destroy -r ${poolToBackup}@${bkPool}-last
fi
if zfs list -t snapshot | grep ${bkPool}@${bkPool}-last\  >& /dev/null ; then
  zfs destroy -r ${bkPool}@${bkPool}-last
fi

Rename the snapshots to prepare for the next backup:

zfs rename -r ${bkPool}@${bkPool} ${bkPool}@${bkPool}-last
zfs rename -r ${poolToBackup}@${bkPool} ${poolToBackup}@${bkPool}-last

Ensure all writes are complete:

sync

Now the backup is complete and the disk can be removed using the GUI.

IMPORTANT: make sure that you have the key, otherwise the data will be lost.

Go to Storage > Pools, choose the gear icon and Export/Disconnect. Then select Delete configuration of shares that used this pool and Confirm export/disconnect. Finally choose Export/Disconnect; now you can remove the disk and store it away.

And let’s hope we do not need it…

2020-10-25 Update: fixed minor mistakes in the code (together with a backup script update).


