In this post, I describe how I manage ZFS snapshots and remote replication from a source host, phe, to a target host, bravo. I assume you have zap 0.7.3 or newer installed from the FreeBSD package and that the sudo command is available.
Target host (bravo)
Partition the disks to be used for the backup pool.
# for i in 1 2; do
>   gpart create -s gpt ada$i
>   gpart add -a 1m -l gptboot$i -t freebsd-boot -s 512k ada$i
>   gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada$i
>   gpart add -a 1m -l zfs$i -t freebsd-zfs ada$i
> done
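Before creating the pool, it may help to confirm that the partition layout and labels look right. gpart can print them (a verification step, not part of the procedure itself):

# gpart show -l ada1 ada2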
Create the backup pool and a top-level filesystem with the necessary permissions. Make the backup datasets readonly; otherwise, incremental replication may fail.
# zpool create -m none zback mirror gpt/zfs1 gpt/zfs2
# zfs create -o readonly=on zback/phe
# zfs allow -u zap canmount,compression,create,mountpoint,mount,receive,refreservation,userprop,volmode zback/phe
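To double-check the mirror and the delegated permissions before moving on, zpool status and zfs allow will print them (verification only):

# zpool status zback
# zfs allow zback/phe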
Source host (phe)
Set permissions on the datasets to be replicated.
# zfs allow -u zap bookmark,diff,hold,send,snapshot zroot/ROOT
# zfs allow -u zap bookmark,diff,hold,send,snapshot zroot/usr/home
# zfs allow -u zap bookmark,diff,hold,send,snapshot zroot/var
Set the zap:snap user property to on for datasets to snapshot. The property will be inherited, so explicitly turn it off for datasets that we do not want to snapshot.
# zfs set zap:snap=on zroot/ROOT zroot/usr/home zroot/var
# zfs set zap:snap=off zroot/var/crash zroot/var/tmp zroot/var/mail
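Because zap:snap is inherited, it is worth confirming the effective value on each dataset. zfs get with a source filter shows each value and where it comes from (a quick sanity check):

# zfs get -r -s local,inherited zap:snap zroot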
Create the first snapshots.
# sudo -u zap zap snap -v 1w
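If you want to confirm the snapshots were created, list them; zap embeds the TTL (here 1w) in each snapshot's name:

# zfs list -t snapshot -r zroot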
Set the zap:rep property for datasets that will be replicated to bravo.
# zfs set zap:rep='zap@bravo:zback/phe' zroot/ROOT zroot/usr/home zroot/var
# zfs set zap:rep=off zroot/var/crash zroot/var/tmp zroot/var/mail
Before replicating datasets, set up key-based ssh authentication to bravo. An important decision at this point is whether to protect the generated private key with a passphrase. If anyone else has access to the source host, the decision is clear: add a passphrase. The downside is that you will have to enter the passphrase before you can replicate; ssh-agent makes this more manageable.
# sudo -iu zap
zap@phe % ssh-keygen
Manually copy the contents of the newly created public key on the source host (~/.ssh/id_rsa.pub by default) to ~/.ssh/authorized_keys on the target host and confirm that you can ssh from the source host to the target host using key-based authentication.
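A quick way to confirm that key-based authentication works non-interactively is to run a single command over ssh:

zap@phe % ssh zap@bravo hostname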
Since this is the first time these datasets are being replicated to bravo, a full stream will be sent.
zap@phe % zap rep -v
Target host (bravo)
Optionally mount the backup datasets under /zback/phe.
# mkdir -p /zback/phe
# zfs set mountpoint=/zback/phe zback/phe/ROOT/default
# zfs mount zback/phe/ROOT/default
# zfs set mountpoint=/zback/phe/var zback/phe/var
# zfs mount zback/phe/var
# zfs set mountpoint=/zback/phe/usr/home zback/phe/usr/home
# zfs mount zback/phe/usr/home
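To verify that the replicas are mounted where expected and are still readonly, list their mountpoints (verification only):

# zfs list -r -o name,mountpoint,readonly zback/phe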
Since I also use zap on bravo, I turn off the zap:snap and zap:rep properties for the backups.
# zfs set zap:snap=off zback/phe/ROOT zback/phe/usr/home zback/phe/var
# zfs set zap:rep=off zback/phe/ROOT zback/phe/usr/home zback/phe/var
Source host (phe)
Create new snapshots and send the incremental changes to bravo.
zap@phe % zap snap -v 1d
zap@phe % zap rep -v
Automate rolling snapshots and replication with cron. Taking snapshots is normally cheap, so do it often. Destroying snapshots can thrash disks, so I only do that every 24 hours. Sensible replication frequencies vary with factors such as available bandwidth and how much data you can afford to lose. Adjust according to your needs.
Edit the crontab for the zap user by adding the contents shown below.
zap@phe % crontab -e
#minute hour    mday    month   wday    command
# take snapshots
*/5     *       *       *       *       zap snap 1d
14      */4     *       *       *       zap snap 1w
14      00      *       *       1       zap snap 1m
# replicate datasets
54      */1     *       *       *       zap rep -v
Edit the crontab for the root user by adding the contents shown below.
# crontab -e
#minute hour    mday    month   wday    command
# destroy snapshots
44      04      *       *       *       zap destroy