HOWTO
FILESYSTEM
ZFS ON RASPBERRY PI

Published: 20200113
Updated: 20200407 (updated the Fan Shim link, added USB Hub Device ID)
Updated: 20200727 (Added "chapter" 10, about Ubuntu 20.04, read!)
Updated: 20210105 (Modified "chapter" 3: zfs-dkms is no longer needed)

Hardware:

Software:

-

To be able to run ZFS you need:

And the Raspberry Pi 4 Model B 4GB (the device is called RPi in this HOWTO) just about fits the requirements for a working hobby system, so here's how to do it.

INDEX
01. Download Ubuntu for IoT
02. Install Ubuntu as usual
03. Install ZFS on Linux (20.04.1 version)
04. Add some block devices
05. Creating the pool and the filesystem
06. What are the speeds?
07. Adding encryption
08. Speeds with dmcrypt
09. Conclusion
10. 6 months later... Ubuntu for IoT 20.04

EXTRA
11. Install ZFS on Linux (19.10.1 version)

  1. Download Ubuntu for IoT

    The IoT version has ready images for Raspberry Pi.
    Make sure you get 19.10.1 (or newer), as this is the first Ubuntu for RPi distribution with proper support for the RPi4. You're going to need the 64-bit version.
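
    If you want to double-check after booting, the machine architecture tells you whether you got the 64-bit build (aarch64 means 64-bit ARM):

    $ uname -m
    aarch64
    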

  2. Install Ubuntu as usual

    Basically, you only need an apt update/upgrade and a reboot.
    Maybe change the default password and make yourself an account.

    In this HOWTO I'm using the default ubuntu user account.
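
    For reference, a minimal sketch of those steps ("myuser" is just a placeholder; skip adduser if you stay with the default account):

    $ sudo apt update
    $ sudo apt upgrade
    $ sudo reboot
    $ passwd                # change the default password
    $ sudo adduser myuser   # make yourself an account
    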

  3. Install ZFS on Linux (Ubuntu for IoT 20.04.1)

    $ sudo apt install zfsutils-linux
    

    Now you should be able to run these commands:

    $ sudo zpool status
    $ sudo zfs list
    

    (If they don't work, try rebooting the machine once.)
    Then make sure the zfs module is loaded:

    $ lsmod | grep zfs
    
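
    If you want to confirm which version you got, recent releases of zfsutils-linux (OpenZFS 0.8 and later, which is what 20.04 ships) can print it:

    $ zfs version
    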

  4. Add some block devices

    We will add two drives, one in each USB 3.0 port. The great benefit of ZFS is data integrity, and for that you need a minimum of two drives.

    I've noticed that it works quite well with HDDs, SSHDs and SSDs.

    For this HOWTO I am using two Seagate Laptop Thin SSHD 500GB drives, because that was what I had on my shelf after upgrading some old laptops to SSDs.
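
    Before creating the pool, it is worth confirming that both drives actually showed up (the sda/sdb names in this HOWTO assume my setup; yours may differ):

    $ lsblk
    $ dmesg | tail          # the USB drives should appear here when plugged in
    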

  5. Creating the pool and the filesystem

    ZFS consists of two parts: a pool and a filesystem (or several).
    The pool is the collection of block devices (hard drives).

    First we create a pool consisting of two drives in mirror mode (the ZFS equivalent of RAID 1):

    $ sudo zpool create zpool0 mirror /dev/sda /dev/sdb
    

    You can view the status of your pool with:

    $ sudo zpool status
      pool: zpool0
     state: ONLINE
      scan: none requested
    config:
         
            NAME        STATE     READ WRITE CKSUM
            zpool0      ONLINE       0     0     0
              mirror-0  ONLINE       0     0     0
                sda     ONLINE       0     0     0
                sdb     ONLINE       0     0     0
         
    errors: No known data errors
    $
    

    Now we create the ZFS filesystem on this pool:

    $ sudo zfs create zpool0/zfs0
    

    You can view the status of your filesystem with:

    $ sudo zfs list
    NAME          USED  AVAIL     REFER  MOUNTPOINT
    zpool0        528K   449G       96K  /zpool0
    zpool0/zfs0    96K   449G       96K  /zpool0/zfs0
    $
    

    And that's it!

    There is A LOT to be said about ZFS pools and filesystems, but this HOWTO only covers how to get started with ZFS on the RPi. If you want to learn about ZFS in more depth, I suggest you do all this on a 64-bit PC, read the manpages for zpool and zfs, and join some subreddits (r/ZFS and r/DataHoarder).
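
    As a small taste of what datasets can do: properties are set per filesystem. For example, enabling lz4 compression on the dataset we just created (a common tweak, but nothing in this HOWTO depends on it):

    $ sudo zfs set compression=lz4 zpool0/zfs0
    $ sudo zfs get compression zpool0/zfs0
    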

  6. What are the speeds?

    Local write speeds:

    $ sudo mkdir /zpool0/zfs0/dir
    $ sudo chown ubuntu.ubuntu /zpool0/zfs0/dir
    $ dd if=/dev/zero of=/zpool0/zfs0/dir/5GB-1.img bs=1M count=5120 status=progress
    5365563392 bytes (5.4 GB, 5.0 GiB) copied, 64 s, 83.8 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 64.0311 s, 83.8 MB/s
    $ uptime
     05:53:05 up 10:15,  2 users,  load average: 6.16, 3.07, 4.37
    $ dd if=/dev/zero of=/zpool0/zfs0/dir/5GB-2.img bs=1M count=5120 status=progress
    5319426048 bytes (5.3 GB, 5.0 GiB) copied, 67 s, 79.4 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 67.5813 s, 79.4 MB/s
    $ uptime
     05:54:12 up 10:16,  2 users,  load average: 8.97, 4.59, 4.81
    $ dd if=/dev/zero of=/zpool0/zfs0/dir/5GB-3.img bs=1M count=5120 status=progress
    5356126208 bytes (5.4 GB, 5.0 GiB) copied, 70 s, 76.5 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 70.336 s, 76.3 MB/s
    $ uptime
     05:55:23 up 10:18,  2 users,  load average: 9.93, 5.81, 5.22
    $ 
    

    Local read speeds:

    $ dd if=/zpool0/zfs0/dir/5GB-1.img of=/dev/null bs=1M count=5120 status=progress
    5221908480 bytes (5.2 GB, 4.9 GiB) copied, 45 s, 116 MB/s 
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 45.9452 s, 117 MB/s
    $ uptime
     05:56:09 up 10:18,  1 user,  load average: 5.53, 5.24, 5.06
    $ dd if=/zpool0/zfs0/dir/5GB-2.img of=/dev/null bs=1M count=5120 status=progress
    5314183168 bytes (5.3 GB, 4.9 GiB) copied, 38 s, 140 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 38.3558 s, 140 MB/s
    $ uptime
     05:56:47 up 10:19,  1 user,  load average: 3.62, 4.78, 4.91
    $ dd if=/zpool0/zfs0/dir/5GB-3.img of=/dev/null bs=1M count=5120 status=progress
    5304745984 bytes (5.3 GB, 4.9 GiB) copied, 38 s, 140 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 38.4509 s, 140 MB/s
    $ uptime
     05:57:26 up 10:20,  1 user,  load average: 2.58, 4.40, 4.77
    $ 
    

    Network write speeds:

    $ scp 5GB.img rpi4b:/zpool0/zfs0/dir/5GB-4.img
    5GB.img                                       100% 5120MB  54.7MB/s   01:33    
    $ ssh rpi4b uptime
     05:42:53 up 10:05,  2 users,  load average: 2.99, 4.07, 6.12
    $ scp 5GB.img rpi4b:/zpool0/zfs0/dir/5GB-5.img
    5GB.img                                       100% 5120MB  54.7MB/s   01:33    
    $ ssh rpi4b uptime
     05:44:29 up 10:07,  2 users,  load average: 3.13, 3.76, 5.80
    $ scp 5GB.img rpi4b:/zpool0/zfs0/dir/5GB-6.img
    5GB.img                                       100% 5120MB  53.1MB/s   01:36    
    $ ssh rpi4b uptime
     05:46:09 up 10:08,  2 users,  load average: 2.22, 3.25, 5.40
    $ 
    

    Network read speeds:

    $ scp rpi4b:/zpool0/zfs0/dir/5GB-4.img ./
    5GB-4.img                                     100% 5120MB  51.5MB/s   01:39    
    $ ssh rpi4b uptime
     05:47:52 up 10:10,  2 users,  load average: 2.04, 2.85, 5.03
    $ scp rpi4b:/zpool0/zfs0/dir/5GB-5.img ./
    5GB-5.img                                     100% 5120MB  51.3MB/s   01:39    
    $ ssh rpi4b uptime
     05:49:35 up 10:12,  2 users,  load average: 1.47, 2.42, 4.65
    $ scp rpi4b:/zpool0/zfs0/dir/5GB-6.img ./
    5GB-6.img                                     100% 5120MB  54.7MB/s   01:33    
    $ ssh rpi4b uptime
     05:51:12 up 10:13,  2 users,  load average: 1.28, 2.09, 4.30
    $ 
    
    Speeds
    Local write speed average 639 Mbps
    Local read speed average 1058 Mbps
    Network write speed average 440 Mbps
    Network read speed average 420 Mbps
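
    (The averages are the final dd/scp figures converted from MB/s to Mbps, with 1 MB/s = 8 Mbps. For example, the three local write runs:)

    $ echo "83.8 79.4 76.3" | awk '{printf "%.0f Mbps\n", ($1+$2+$3)/3*8}'
    639 Mbps
    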

  7. Adding encryption

    If you want encryption, you need to run the drives through dmcrypt, and you need to do it before you create the pool.

    $ sudo cryptsetup --cipher aes-cbc-essiv:sha256 --verify-passphrase create disk0 /dev/sda
    Enter passphrase for /dev/sda: 
    Verify passphrase: 
    $ sudo cryptsetup --cipher aes-cbc-essiv:sha256 --verify-passphrase create disk1 /dev/sdb
    Enter passphrase for /dev/sdb: 
    Verify passphrase: 
    $ sudo zpool create zpool0 mirror /dev/mapper/disk0 /dev/mapper/disk1
    $ sudo zfs create zpool0/zfs0
    $ 
    

    After this everything is the same.

    IMPORTANT!
    If you are going to use ZFS with dmcrypt, remember that automount on boot will not work. You will have to manually re-create the dmcrypt mappings on each boot, and after that run:

    $ sudo zpool import zpool0
    
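
    To make this less tedious, you can put the unlock-and-import steps in a small script to run after each boot. A minimal sketch for this HOWTO's setup (the script name is arbitrary; cryptsetup will prompt for each passphrase):

    $ cat unlock-zpool.sh
    #!/bin/sh
    # Re-create the dmcrypt mappings, then import the pool.
    cryptsetup --cipher aes-cbc-essiv:sha256 create disk0 /dev/sda
    cryptsetup --cipher aes-cbc-essiv:sha256 create disk1 /dev/sdb
    zpool import zpool0
    $ sudo sh unlock-zpool.sh
    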

  8. Speeds with dmcrypt

    The speeds I get with encryption enabled are:

    Local write speeds (with dmcrypt):

    $ sudo mkdir /zpool0/zfs0/dir
    $ sudo chown ubuntu.ubuntu /zpool0/zfs0/dir
    $ dd if=/dev/zero of=/zpool0/zfs0/dir/5GB-1.img bs=1M count=5120 status=progress
    5337251840 bytes (5.3 GB, 5.0 GiB) copied, 169 s, 31.6 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 169.952 s, 31.6 MB/s
    $ uptime
     05:04:34 up  9:27,  2 users,  load average: 16.43, 9.86, 8.58
    $ dd if=/dev/zero of=/zpool0/zfs0/dir/5GB-2.img bs=1M count=5120 status=progress
    5350883328 bytes (5.4 GB, 5.0 GiB) copied, 177 s, 30.2 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 177.314 s, 30.3 MB/s
    $ uptime
     05:07:31 up  9:30,  2 users,  load average: 16.52, 12.86, 9.97
    $ dd if=/dev/zero of=/zpool0/zfs0/dir/5GB-3.img bs=1M count=5120 status=progress
    5361369088 bytes (5.4 GB, 5.0 GiB) copied, 177 s, 30.3 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 177.572 s, 30.2 MB/s
    $ uptime
     05:10:29 up  9:33,  2 users,  load average: 15.95, 14.61, 11.19
    $ 
    

    Local read speeds (with dmcrypt):

    $ dd if=/zpool0/zfs0/dir/5GB-1.img of=/dev/null bs=1M count=5120 status=progress
    5281677312 bytes (5.3 GB, 4.9 GiB) copied, 44 s, 120 MB/s 
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 44.604 s, 120 MB/s
    $ uptime
     05:11:14 up  9:33,  2 users,  load average: 12.96, 14.06, 11.16
    $ dd if=/zpool0/zfs0/dir/5GB-2.img of=/dev/null bs=1M count=5120 status=progress
    5236588544 bytes (5.2 GB, 4.9 GiB) copied, 36 s, 145 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 36.905 s, 145 MB/s
    $ uptime
     05:11:50 up  9:34,  2 users,  load average: 9.94, 13.11, 10.96
    $ dd if=/zpool0/zfs0/dir/5GB-3.img of=/dev/null bs=1M count=5120 status=progress
    5336203264 bytes (5.3 GB, 5.0 GiB) copied, 40 s, 133 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 40.2862 s, 133 MB/s
    $ uptime
     05:12:31 up  9:35,  2 users,  load average: 8.14, 12.23, 10.75
    $ 
    

    Network write speeds (with dmcrypt):

    $ scp 5GB.img rpi4b:/zpool0/zfs0/dir/5GB-4.img
    5GB.img                                       100% 5120MB  21.5MB/s   03:57    
    $ ssh rpi4b uptime
     05:20:23 up  9:43,  2 users,  load average: 9.91, 8.17, 8.79
    $ scp 5GB.img rpi4b:/zpool0/zfs0/dir/5GB-5.img
    5GB.img                                       100% 5120MB  21.5MB/s   03:58    
    $ ssh rpi4b uptime
     05:24:22 up  9:47,  2 users,  load average: 9.87, 9.20, 9.08
    $ scp 5GB.img rpi4b:/zpool0/zfs0/dir/5GB-6.img
    5GB.img                                       100% 5120MB  21.5MB/s   03:58    
    $ ssh rpi4b uptime
     05:28:22 up  9:51,  2 users,  load average: 11.22, 10.38, 9.60
    $ 
    

    Network read speeds (with dmcrypt):

    $ scp rpi4b:/zpool0/zfs0/dir/5GB-4.img ./
    5GB-4.img                                     100% 5120MB  46.6MB/s   01:49    
    $ ssh rpi4b uptime
     05:30:14 up  9:52,  2 users,  load average: 3.51, 7.91, 8.80
    $ scp rpi4b:/zpool0/zfs0/dir/5GB-5.img ./
    5GB-5.img                                     100% 5120MB  49.1MB/s   01:44    
    $ ssh rpi4b uptime
     05:32:03 up  9:54,  2 users,  load average: 2.62, 6.22, 8.08
    $ scp rpi4b:/zpool0/zfs0/dir/5GB-6.img ./
    5GB-6.img                                     100% 5120MB  49.7MB/s   01:43    
    $ ssh rpi4b uptime
     05:33:50 up  9:56,  2 users,  load average: 1.73, 4.87, 7.38
    $ 
    
    Speeds
    Local write speed average (with dmcrypt) 245 Mbps
    Local read speed average (with dmcrypt) 1051 Mbps
    Network write speed average (with dmcrypt) 172 Mbps
    Network read speed average (with dmcrypt) 387 Mbps

  9. Conclusion

    During normal read and write, the CPU was fine.
    During write (with dmcrypt) all 4 cores maxed out.
    During read (with dmcrypt) the CPU was fine.
    RAM was never maxed out (but usage was often over 2GB).

    When using dmcrypt, writing got much slower.
    The load got pretty high but I haven't analyzed it any further.
    The system is also slower through scp.
    It is clear that encryption (dmcrypt, scp) is the weak point of the RPi 4B, at least if you want to reach gigabit speeds.

    The bandwidth figures above are not exact (I ran the tests more times than I showed above). In general, read was much faster than write. The values did vary a bit, but by no more than 5%. Of course, other factors can affect the RPi4 and its speeds.

    Without encryption, the network speeds are pretty OK and I think this project has shown that you can run ZFS on a Raspberry Pi 4 Model B 4GB. If you also want to add encryption you have to live with things getting slower.

    Since ZFS on Linux isn't even officially supported on the RPi, I can only recommend this for hobby usage.
    However... the system never crashed or got stuck during the 7 days of testing and writing this HOWTO.

  10. 6 months later... Ubuntu for IoT 20.04

    So Ubuntu for IoT 19.10 reached end of life on 2020-07-17.

    I used zfs-dkms on Ubuntu for IoT 19.10 on an RPi 4B 4GB in hobby projects, with several users, for 6 months, without problems.

    But now I have to migrate to Ubuntu for IoT 20.04, to keep getting security updates.

    Ubuntu for IoT 20.04 (arm64) seems to be more mature: the kernel comes with prebuilt ZFS modules, so zfs-dkms is no longer needed.
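
    You can check that the prebuilt module is actually there before creating any pools; modinfo reads the metadata straight from the kernel's module tree:

    $ modinfo zfs | head
    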

    I also realized that in the previous setup I connected the two HDDs over USB3 via a powered USB3 hub. I wondered if I could get better speeds by attaching the HDDs directly to the RPi, without going through a hub.

    So I did some testing with the following hardware:

    I used these old HDDs because that was what I had next to me, and I did not expect the speeds to exceed 3 Gbps. Here are the results:

    Local write speeds:

    $ sudo mkdir /zpool0/zfs0/dir
    $ sudo chown ubuntu.ubuntu /zpool0/zfs0/dir
    $ dd if=/dev/zero of=/zpool0/zfs0/dir/5GB-1.img bs=1M count=5120 status=progress
    5323620352 bytes (5.3 GB, 5.0 GiB) copied, 66 s, 80.6 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 66.4515 s, 80.8 MB/s
    $ uptime
     19:43:54 up  1:32,  1 user,  load average: 7.01, 2.04, 0.71
    $ dd if=/dev/zero of=/zpool0/zfs0/dir/5GB-2.img bs=1M count=5120 status=progress
    5296357376 bytes (5.3 GB, 4.9 GiB) copied, 61 s, 86.8 MB/s 
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 61.8998 s, 86.7 MB/s
    $ uptime
     19:44:56 up  1:33,  1 user,  load average: 7.68, 3.11, 1.16
    $ dd if=/dev/zero of=/zpool0/zfs0/dir/5GB-3.img bs=1M count=5120 status=progress
    5298454528 bytes (5.3 GB, 4.9 GiB) copied, 66 s, 80.3 MB/s 
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 66.8229 s, 80.3 MB/s
    $ uptime
     19:46:03 up  1:34,  1 user,  load average: 6.83, 3.79, 1.54
    $ 
    

    Local read speeds:

    $ dd if=/zpool0/zfs0/dir/5GB-1.img of=/dev/null bs=1M count=5120 status=progress
    5229248512 bytes (5.2 GB, 4.9 GiB) copied, 39 s, 134 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 39.9529 s, 134 MB/s
    $ uptime
     19:48:12 up  1:36,  1 user,  load average: 1.39, 2.62, 1.39
    $ dd if=/zpool0/zfs0/dir/5GB-2.img of=/dev/null bs=1M count=5120 status=progress
    5258608640 bytes (5.3 GB, 4.9 GiB) copied, 39 s, 135 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 39.788 s, 135 MB/s
    $ uptime
     19:48:52 up  1:37,  1 user,  load average: 1.78, 2.57, 1.43
    $ dd if=/zpool0/zfs0/dir/5GB-3.img of=/dev/null bs=1M count=5120 status=progress
    5308940288 bytes (5.3 GB, 4.9 GiB) copied, 42 s, 126 MB/s
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 42.4618 s, 126 MB/s
    $ uptime
     19:49:34 up  1:38,  1 user,  load average: 1.62, 2.42, 1.42
    $ 
    

    Network write speeds:

    $ scp 5GB.img rpi4b:/zpool0/zfs0/dir/5GB-4.img
    5GB.img                                       100% 5120MB  52.3MB/s   01:37
    $ ssh rpi4b uptime
     19:59:05 up  1:47,  1 user,  load average: 1.96, 1.04, 1.01
    $ scp 5GB.img rpi4b:/zpool0/zfs0/dir/5GB-5.img
    5GB.img                                       100% 5120MB  52.5MB/s   01:37
    $ ssh rpi4b uptime
     20:00:45 up  1:49,  1 user,  load average: 2.48, 1.52, 1.19
    $ scp 5GB.img rpi4b:/zpool0/zfs0/dir/5GB-6.img
    5GB.img                                       100% 5120MB  51.9MB/s   01:38
    $ ssh rpi4b uptime
     20:02:27 up  1:50,  1 user,  load average: 3.14, 2.06, 1.42
    $ 
    

    Network read speeds:

    $ scp rpi4b:/zpool0/zfs0/dir/5GB-4.img ./
    5GB-4.img                                     100% 5120MB  49.0MB/s   01:44
    $ ssh rpi4b uptime
     20:05:03 up  1:53,  1 user,  load average: 1.45, 1.66, 1.36
    $ scp rpi4b:/zpool0/zfs0/dir/5GB-5.img ./
    5GB-5.img                                     100% 5120MB  47.4MB/s   01:48
    $ ssh rpi4b uptime
     20:06:54 up  1:55,  1 user,  load average: 1.27, 1.51, 1.34
    $ scp rpi4b:/zpool0/zfs0/dir/5GB-6.img ./
    5GB-6.img                                     100% 5120MB  49.8MB/s   01:42
    $ ssh rpi4b uptime
     20:08:40 up  1:57,  1 user,  load average: 1.23, 1.45, 1.34
    $ 
    
    Speeds
    Local write speed average 660 Mbps
    Local read speed average 1053 Mbps
    Network write speed average 417 Mbps
    Network read speed average 389 Mbps

    Comparing:
    2x 2.5" HDDs, in USB3 chassis, via a powered USB3 hub
    to:
    2x 3.5" HDDs, in powered USB3 chassis, connected directly to the RPi 4B,
    shows that it doesn't really matter which solution you choose.

    I haven't tested SSDs.

EXTRA

Here is the previous version of "chapter" 03.
It was written for 19.10.1, the first Ubuntu for IoT (RPi) that "supported" ZFS on Linux.

  11. Install ZFS on Linux (Ubuntu for IoT 19.10.1)

    Normally, on x86_64, you can just do "sudo apt install zfsutils-linux" and Ubuntu will take care of the rest.

    However, no one really expected ZFS to run on a Raspberry Pi, so the maintainers do not ship prebuilt ZFS kernel modules. That means that if you try "apt install zfsutils-linux", it will just half-install two packages that won't work, because the zfs module is not loaded.

    That being said, ZFS actually works on the RPi4; you just need to do some extra things.

    First you need to build and install the kernel modules:

    $ sudo apt install zfs-dkms
    

    This will download, compile and install the modules needed.
    During the install, you will come to a point where apt seems to be stuck.
    When this happens, leave it be, open a new terminal and type:

    $ htop
    

    ...then press "t" to get the tree view.
    Here you can see that your machine is busy and the compiler is working.
    This step takes about 15 minutes to finish, and after that the package is installed.

    Installing zfs-dkms will also try to install zfsutils-linux, but since the zfs module still isn't loaded at that point, this will fail. Don't worry.

    Once the modules are built and installed, you should be able to run these commands:

    $ sudo zpool status
    $ sudo zfs list
    

    (If they don't work, try rebooting the machine once.)
    Then make sure the zfs module is loaded:

    $ lsmod | grep zfs
    

    A reminder...

    Every time you upgrade your kernel (with apt upgrade), these modules need to be recompiled. This happens automatically, so you don't have to do anything, but it will add about 15 minutes to every kernel upgrade. I only mention this so you don't think your system is broken the next time you upgrade.
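
    If you ever want to check whether the rebuild happened, dkms can report it; it lists each DKMS module and whether it is built and installed for your kernels:

    $ dkms status
    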