This is a rough guide describing how to migrate an existing Debian system from any filesystem to ZFS on root. It is loosely based on the Debian Bullseye Root on ZFS guide (local copy).
Q: Why?
A: TL;DR - Because I love ZFS, Debian works great on ZFS, and I do not like to reinstall things unless it is absolutely necessary.
I recently upgraded my personal laptop from an old Dell XPS 15 (Core 2 Duo, 8G of RAM) to a shiny new Intel NUC laptop (11th gen Core i7 and 16G of RAM). There is too much stuff already configured in the system, so it is way faster to migrate it than to reinstall and reconfigure everything. This guide starts from the moment when the system had already been moved to the NVMe drive in my new NUC and migrated from BIOS boot + MBR to UEFI and GPT, but still used ext4.
Please do not consider this an accurate, step-by-step guide; it is just a rough HOWTO. I'm using Debian Bookworm, which is still the testing release at the time of writing. The system uses EFI boot and GPT partitioning.
Initial disk partitioning
Before I started, the partition table looked approximately like this:
$ sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: SAMSUNG
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX
Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 206847 204800 100M EFI System
/dev/nvme0n1p2 206848 239615 32768 16M Microsoft reserved
/dev/nvme0n1p3 239616 209954815 209715200 100G Microsoft basic data
/dev/nvme0n1p4 997142528 1000214527 3072000 1.5G Windows recovery environment
/dev/nvme0n1p5 209954816 212051967 2097152 1G Linux filesystem
/dev/nvme0n1p6 956182528 997142527 40960000 19.5G Linux swap
/dev/nvme0n1p8 746467328 956182527 209715200 100G Linux filesystem
Partition table entries are not in disk order.
- /dev/nvme0n1p1 - EFI partition, formatted as vfat and mounted to /boot/efi
- /dev/nvme0n1p5 - boot partition. While it is no longer required, I still like to have a dedicated boot partition. It is ext4 and mounted to /boot
- /dev/nvme0n1p8 - root partition, ext4 mounted to / (/home sits on the same partition)
Steps
- Go to the BIOS config and disable Secure Boot. Adding the ZFS DKMS module will taint the kernel, and a tainted kernel will not boot with Secure Boot enabled. (I plan to fix this later by signing my own kernel.)
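If you are not sure whether Secure Boot is currently on, one quick way to check from a running system (assuming the mokutil package is installed) is:
# prints "SecureBoot enabled" or "SecureBoot disabled"
mokutil --sb-state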
- Install the ZFS DKMS module and the initramfs scripts for ZFS:
apt install zfs-dkms zfs-initramfs
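Before going further, it is worth checking that the module actually built and loads; a minimal sanity check, assuming the DKMS build finished without errors:
# confirm the zfs module was built for the running kernel and can be loaded
dkms status | grep zfs
modprobe zfs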
- Partition the disk(s): we will need two ZFS pools, one for boot and one for root. The root pool can be any geometry; I'm not so sure about the boot pool, but single disk and mirror definitely work. Use any tool you like, just set the partition type to BF01 for boot and BF00 for root. As an example, for a single NVMe disk:
sgdisk -n9:0:+1G -t9:BF01 /dev/nvme0n1
sgdisk -n10:0:+200G -t10:BF00 /dev/nvme0n1
The 1G partition 9 will be used for the boot pool and the 200G partition 10 for root. Root does not need that much disk space, but I'm using the same pool for /home, so I need a bigger pool.
Result:
$ sudo fdisk -l /dev/nvme0n1
-- snip --
Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 206847 204800 100M EFI System
/dev/nvme0n1p2 206848 239615 32768 16M Microsoft reserved
/dev/nvme0n1p3 239616 209954815 209715200 100G Microsoft basic data
/dev/nvme0n1p4 997142528 1000214527 3072000 1.5G Windows recovery environment
/dev/nvme0n1p5 209954816 212051967 2097152 1G Linux filesystem
/dev/nvme0n1p6 956182528 997142527 40960000 19.5G Linux swap
/dev/nvme0n1p8 746467328 956182527 209715200 100G Linux filesystem
/dev/nvme0n1p9 209954816 212051967 2097152 1G Solaris /usr & Apple ZFS
/dev/nvme0n1p10 212051968 631482367 419430400 200G Solaris root
Be mindful about partition alignment; it should correspond to the ashift value of your pool. I'm using 8k blocks on a 512-bytes/sector drive, so the partitions should be aligned to 16-sector boundaries.
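One way to double-check the alignment is to look at the start sector of each new partition in sysfs; this is just a sketch, assuming the device and partition numbers used above:
# each start sector should be divisible by 16 (16 * 512 bytes = 8k)
for p in 9 10; do
    start=$(cat /sys/block/nvme0n1/nvme0n1p${p}/start)
    echo "p${p}: start=${start}, remainder=$((start % 16))"
done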
- TRIM the disk under the new partitions if you use an SSD (just in case):
sudo blkdiscard /dev/nvme0n1p9
sudo blkdiscard /dev/nvme0n1p10
- Create the boot pool. Thanks to the -R /mnt option it will be mounted under /mnt:
zpool create \
    -o ashift=13 -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -o feature@zpool_checkpoint=enabled \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/boot -R /mnt \
    bootpool /dev/nvme0n1p9
As usual, be mindful with the ashift value and partition alignment. My SSD apparently uses 8k write blocks, so I use ashift=13.
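If you want to confirm what ashift the pool actually got (on reasonably recent OpenZFS it is exposed as a pool property):
zpool get ashift bootpool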
- Copy the contents of the existing /boot (excluding /boot/efi):
umount /boot/efi
rsync -a /boot/ /mnt/
zpool export bootpool
- Create the root pool:
zpool create \
    -o ashift=13 \
    -O acltype=posixacl -O canmount=off -O compression=zstd-3 \
    -O dnodesize=auto -O normalization=formD -O relatime=on \
    -O xattr=sa -O mountpoint=none \
    rootpool /dev/nvme0n1p10
- Create the root dataset (the parent rootpool/ROOT dataset has to exist first):
zfs create -o canmount=off -o mountpoint=none rootpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rootpool/ROOT/debian
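At this point the layout can be sanity-checked; something like:
# both datasets should show the canmount values and mountpoints set above
zfs list -r -o name,canmount,mountpoint rootpool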
- Reboot and drop to single-user mode, as copying a live system is not a great idea; one way to get there is sketched below.
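One way to reach single-user mode (assuming systemd) is to pick a recovery entry in the GRUB menu, or to edit the kernel command line by pressing e and appending:
# appended to the line starting with 'linux', then boot with F10
systemd.unit=rescue.target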
When you are in single-user mode:
- Mount the new root dataset:
zpool import rootpool
mount -t zfs rootpool/ROOT/debian /mnt
- Create and mount a /home dataset, or any other datasets you fancy (a hypothetical extra example follows the command below):
zfs create -o mountpoint=/mnt/home rootpool/home
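As a purely hypothetical illustration of "other datasets", /var/log could be split out the same way; note the temporary /mnt prefix in the mountpoint, which gets fixed after the first boot just like /home does:
# hypothetical extra dataset keeping /var/log separate
zfs create -o mountpoint=/mnt/var/log rootpool/var-log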
- Copy the stuff from the old root FS to ZFS. You need to be careful not to copy non-disk filesystems like /dev or /proc. It would be much easier if you booted from some other media, so that all these special directories are not mounted.
rsync -a /bin/ /mnt/bin/
mkdir /mnt/boot
mkdir /mnt/dev
rsync -a /etc/ /mnt/etc/
rsync -a /home/ /mnt/home/
rsync -a /lib/ /mnt/lib/
rsync -a /lib32/ /mnt/lib32/
rsync -a /lib64/ /mnt/lib64/
rsync -a /media/ /mnt/media/
mkdir /mnt/mnt
rsync -a /opt/ /mnt/opt/
mkdir /mnt/proc
rsync -a /root/ /mnt/root/
mkdir /mnt/run
rsync -a /sbin/ /mnt/sbin/
rsync -a /srv/ /mnt/srv/
mkdir /mnt/sys
rsync -a /tmp/ /mnt/tmp/
rsync -a /usr/ /mnt/usr/
rsync -a /var/ /mnt/var/
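As a rough sanity check that nothing obvious was missed (expect only harmless differences such as lost+found and the old mount points):
# compare top-level directory listings of the old and the new root
diff <(ls -1 /) <(ls -1 /mnt)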
- Edit /mnt/etc/default/grub: find the line with "GRUB_CMDLINE_LINUX_DEFAULT", uncomment it if it is commented out, and add root=ZFS=rootpool/ROOT/debian to it. In my case the variable was empty, so in my config the line looks like
GRUB_CMDLINE_LINUX_DEFAULT="root=ZFS=rootpool/ROOT/debian"
This step may no longer be required, as update-grub2 should have been fixed to handle ZFS-on-root automatically.
- Edit /mnt/etc/fstab:
  - Comment out the old non-ZFS partitions, with the exception of the VFAT one mounted at /boot/efi.
  - Add x-systemd.requires=zfs-mount.service to the mount options for /boot/efi. This tells the systemd planner to mount this partition after ZFS is mounted. The fstab line should look like below (I hope you know how to get the UUID for a partition; see the blkid example below, or use the absolute path instead):
UUID=xxxx-xxxxx /boot/efi vfat x-systemd.requires=zfs-mount.service,defaults 0 3
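If you need the UUID for that fstab line, blkid will print it for the EFI partition:
blkid -s UUID -o value /dev/nvme0n1p1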
- Cross your fingers and reboot. When you get to the GRUB menu, go to "Advanced options", select one of the recovery entries, and press e to start the GRUB editor. Then replace root=xxxxxx with root=ZFS=rootpool/ROOT/debian. Press F10 to boot.
- You should end up at an initramfs prompt complaining that no root pools are available. Do as it says:
zpool import -N rootpool
then Ctrl-D to continue.
- Next it will stop and ask for the root password to go to single-user mode. Enter the root password, and now you should have a system running in single-user mode with root on ZFS and the rest of the stuff not mounted properly.
Let’s fix it
1. Fix the mount path for /home and any other datasets you happened to create:
zfs set mountpoint=/home rootpool/home
1. Import the bootpool and change its mount point:
umount /boot/efi
zpool import bootpool
zfs set mountpoint=/boot bootpool
mount /boot/efi
1. Mounts should look approximately like below:
root# mount | grep -E 'zfs|vfat'
rootpool/ROOT/debian on / type zfs (rw,relatime,xattr,posixacl)
rootpool/home on /home type zfs (rw,relatime,xattr,posixacl)
bootpool on /boot type zfs (rw,nodev,relatime,xattr,posixacl)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro,x-systemd.requires=zfs-mount.service)
- Update GRUB by running update-grub2. If you check /boot/grub/grub.cfg after that, you should see that root= now points to ZFS; a quick check is shown below.
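A quick way to verify this without reading the whole file:
grep 'root=ZFS' /boot/grub/grub.cfg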
- Reboot with shutdown -r now. I would recommend booting to single-user mode again, but this time there should be no initramfs prompt. Check that everything is mounted correctly and then go to multi-user mode.
- At the end, do not forget to blkdiscard the old partitions before deleting them.