<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://wiki.devilplan.com/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="https://wiki.devilplan.com/feed.php">
        <title>Luci4 Wiki Blog - linux:zfs</title>
        <description></description>
        <link>https://wiki.devilplan.com/</link>
        <image rdf:resource="https://wiki.devilplan.com/_media/wiki:logo.png" />
       <dc:date>2026-04-05T12:19:29+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="https://wiki.devilplan.com/linux:zfs:encryption"/>
                <rdf:li rdf:resource="https://wiki.devilplan.com/linux:zfs:overview"/>
                <rdf:li rdf:resource="https://wiki.devilplan.com/linux:zfs:raid"/>
                <rdf:li rdf:resource="https://wiki.devilplan.com/linux:zfs:usb_backup"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="https://wiki.devilplan.com/_media/wiki:logo.png">
        <title>Luci4 Wiki Blog</title>
        <link>https://wiki.devilplan.com/</link>
        <url>https://wiki.devilplan.com/_media/wiki:logo.png</url>
    </image>
    <item rdf:about="https://wiki.devilplan.com/linux:zfs:encryption">
        <dc:format>text/html</dc:format>
        <dc:date>2025-06-04T18:30:46+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>encryption</title>
        <link>https://wiki.devilplan.com/linux:zfs:encryption</link>
        <description>


&lt;h1 class=&quot;sectionedit1&quot; id=&quot;zfs_-_encryption&quot;&gt;ZFS - Encryption&lt;/h1&gt;
&lt;div class=&quot;level1&quot;&gt;

&lt;p&gt;
To &lt;strong&gt;automate the decryption&lt;/strong&gt; of the ZFS pool on reboot, you can &lt;strong&gt;store the encryption key&lt;/strong&gt; securely and configure ZFS to &lt;strong&gt;unlock the pool automatically&lt;/strong&gt;. This can be achieved in several ways, but the most common method involves &lt;strong&gt;storing the key in a file&lt;/strong&gt; (in a secure location) so it can be loaded automatically during boot.
&lt;/p&gt;

&lt;p&gt;
Here’s how you can set up &lt;strong&gt;automatic decryption&lt;/strong&gt;:
&lt;/p&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit2&quot; id=&quot;automate_zfs_pool_decryption_on_reboot&quot;&gt;Automate ZFS Pool Decryption on Reboot&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;/div&gt;

&lt;h4 id=&quot;create_a_key_file_for_the_pool&quot;&gt;1. Create a Key File for the Pool&lt;/h4&gt;
&lt;div class=&quot;level4&quot;&gt;

&lt;p&gt;
First, generate and store the encryption key in a file. If you used &lt;code&gt;keyformat=passphrase&lt;/code&gt;, you&amp;#039;ll need to create and store the passphrase in a secure location. If you&amp;#039;re using a keyfile (&lt;code&gt;keyformat=raw&lt;/code&gt;), follow these steps:
&lt;/p&gt;

&lt;p&gt;
1- &lt;strong&gt;Create the keyfile&lt;/strong&gt; (replace &lt;code&gt;/etc/zfs/backup_pool.key&lt;/code&gt; with your desired file location):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo dd if=/dev/urandom of=/etc/zfs/backup_pool.key bs=32 count=1
sudo chmod 600 /etc/zfs/backup_pool.key&lt;/pre&gt;

&lt;p&gt;
2- &lt;strong&gt;Set the key location to the keyfile&lt;/strong&gt;:
&lt;/p&gt;

&lt;p&gt;
&lt;code&gt;
sudo zfs set keylocation=file:///etc/zfs/backup_pool.key backup_pool
&lt;/code&gt;
&lt;/p&gt;

&lt;/div&gt;

&lt;h4 id=&quot;ensure_zfs_can_load_the_key_automatically_on_boot&quot;&gt;2. Ensure ZFS Can Load the Key Automatically on Boot&lt;/h4&gt;
&lt;div class=&quot;level4&quot;&gt;

&lt;p&gt;
ZFS can load the key automatically at boot if it knows where to find it. To set this up, &lt;strong&gt;create a systemd service&lt;/strong&gt; that loads the ZFS key so the pool can be mounted on boot.
&lt;/p&gt;

&lt;p&gt;
1- &lt;strong&gt;Create a systemd service for key loading&lt;/strong&gt;:
Create a service to load the key at boot by creating a new file in the &lt;code&gt;/etc/systemd/system/&lt;/code&gt; directory, for example, &lt;code&gt;zfs-load-backup-key.service&lt;/code&gt;.
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo vim /etc/systemd/system/zfs-load-backup-key.service&lt;/pre&gt;

&lt;p&gt;
2- &lt;strong&gt;Add the following content to the service file&lt;/strong&gt;:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
[Unit]
Description=Load ZFS encryption key for backup_pool
DefaultDependencies=no
After=zfs-import-cache.service
Before=zfs-mount.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/zfs load-key backup_pool
RemainAfterExit=true

[Install]
WantedBy=multi-user.target&lt;/pre&gt;

&lt;p&gt;
This service ensures that ZFS loads the key for the pool before mounting it.
&lt;/p&gt;

&lt;p&gt;
3- &lt;strong&gt;Enable the systemd service&lt;/strong&gt; to run at boot:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo systemctl enable zfs-load-backup-key.service&lt;/pre&gt;

&lt;/div&gt;

&lt;h4 id=&quot;enable_the_necessary_zfs_systemd_services&quot;&gt;3. Enable the Necessary ZFS Systemd Services&lt;/h4&gt;
&lt;div class=&quot;level4&quot;&gt;

&lt;p&gt;
Ensure that ZFS automatically imports and mounts the pool at boot. If you haven’t already done so, enable the following systemd services:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo systemctl enable zfs-import-cache
sudo systemctl enable zfs-mount
sudo systemctl enable zfs.target&lt;/pre&gt;

&lt;/div&gt;

&lt;h4 id=&quot;test_the_setup&quot;&gt;4. Test the Setup&lt;/h4&gt;
&lt;div class=&quot;level4&quot;&gt;

&lt;p&gt;
- &lt;strong&gt;Reboot&lt;/strong&gt; the system to ensure that everything works automatically:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo reboot&lt;/pre&gt;

&lt;p&gt;
- After rebooting, check that the pool is mounted:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
zfs list&lt;/pre&gt;

&lt;p&gt;
The pool should be automatically decrypted and mounted at &lt;code&gt;/backup&lt;/code&gt; (or wherever you specified the mount point).
&lt;/p&gt;
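
&lt;p&gt;
To verify without guessing, you can also query the key and mount state directly (a quick sanity check, assuming the pool and service names used above):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
# keystatus should be &quot;available&quot; and mounted should be &quot;yes&quot;
zfs get keystatus,mounted backup_pool

# the key-loading unit should have completed successfully during boot
systemctl status zfs-load-backup-key.service&lt;/pre&gt;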
&lt;hr /&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit3&quot; id=&quot;security_considerations&quot;&gt;Security Considerations&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
- &lt;strong&gt;Storing the Key&lt;/strong&gt;: The encryption key is stored in &lt;code&gt;/etc/zfs/backup_pool.key&lt;/code&gt; by default in this example. Ensure the file is &lt;strong&gt;secure&lt;/strong&gt; by using permissions (&lt;code&gt;chmod 600&lt;/code&gt;), and consider placing it in a directory only accessible by root.
&lt;/p&gt;
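
&lt;p&gt;
A minimal way to lock the keyfile down, assuming the path used earlier on this page:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo chown root:root /etc/zfs/backup_pool.key
sudo chmod 600 /etc/zfs/backup_pool.key

# verify: only root should be able to read the file
ls -l /etc/zfs/backup_pool.key&lt;/pre&gt;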

&lt;p&gt;
- &lt;strong&gt;Alternative Key Management&lt;/strong&gt;: If you want additional security, you could use a &lt;strong&gt;hardware security module (HSM)&lt;/strong&gt; or &lt;strong&gt;encrypted external key storage&lt;/strong&gt; for storing keys. However, this is more complex and requires specialized hardware or configuration.
&lt;/p&gt;

&lt;/div&gt;
</description>
    </item>
    <item rdf:about="https://wiki.devilplan.com/linux:zfs:overview">
        <dc:format>text/html</dc:format>
        <dc:date>2025-06-08T14:34:22+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>overview</title>
        <link>https://wiki.devilplan.com/linux:zfs:overview</link>
        <description>


&lt;h1 class=&quot;sectionedit1&quot; id=&quot;zfs_on_linux&quot;&gt;ZFS on Linux&lt;/h1&gt;
&lt;div class=&quot;level1&quot;&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit2&quot; id=&quot;overview&quot;&gt;Overview&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
ZFS (&lt;strong&gt;Zettabyte File System&lt;/strong&gt;) is both a &lt;strong&gt;filesystem&lt;/strong&gt; and a &lt;strong&gt;volume manager&lt;/strong&gt;. This means it handles &lt;strong&gt;disk management&lt;/strong&gt; (like RAID, partitioning) and &lt;strong&gt;data layout&lt;/strong&gt; (like EXT4 or Btrfs) in a unified way.
&lt;/p&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit3&quot; id=&quot;core_concepts&quot;&gt;Core Concepts&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;div class=&quot;table sectionedit4&quot;&gt;&lt;table class=&quot;inline&quot;&gt;
	&lt;tr class=&quot;row0&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;strong&gt;Pool (zpool)&lt;/strong&gt;  &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; Group of physical disks, managed as a unit.                                  &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row1&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;strong&gt;Vdev&lt;/strong&gt;          &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Virtual device; pools are made of vdevs. Can be single, mirror, RAID-Z, etc. &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row2&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;strong&gt;Dataset&lt;/strong&gt;       &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; A ZFS-managed filesystem, like a subvolume or folder with its own settings.  &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row3&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;strong&gt;Snapshot&lt;/strong&gt;      &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; A read-only, point-in-time copy of a dataset.                                &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row4&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;strong&gt;Clone&lt;/strong&gt;         &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; A writable copy of a snapshot.                                               &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row5&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;strong&gt;Volume (zvol)&lt;/strong&gt; &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; Block device managed by ZFS (e.g. for iSCSI or swap).                        &lt;/td&gt;
	&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit5&quot; id=&quot;installing_zfs&quot;&gt;Installing ZFS&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit6&quot; id=&quot;ubuntu_server&quot;&gt;Ubuntu Server&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo apt update
sudo apt install zfsutils-linux
sudo modprobe zfs&lt;/pre&gt;
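
&lt;p&gt;
To confirm the userland tools and kernel module are in place (exact output varies by release):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
zfs version
lsmod | grep zfs&lt;/pre&gt;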

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit7&quot; id=&quot;arch_linux&quot;&gt;Arch Linux&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Install required packages:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo pacman -S zfs-dkms zfs-utils&lt;/pre&gt;

&lt;p&gt;
Load kernel module:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo modprobe zfs&lt;/pre&gt;
&lt;blockquote&gt;&lt;div class=&quot;no&quot;&gt;
If using a custom kernel, make sure the `zfs-dkms` module version matches your kernel.&lt;/div&gt;&lt;/blockquote&gt;
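
&lt;p&gt;
One way to check that DKMS actually built the module for the running kernel (an optional sanity check):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
dkms status zfs
modinfo zfs | grep -i version&lt;/pre&gt;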
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit8&quot; id=&quot;creating_a_zfs_pool&quot;&gt;Creating a ZFS Pool&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit9&quot; id=&quot;basic_single_disk_pool&quot;&gt;Basic Single Disk Pool&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool create mypool /dev/sdX&lt;/pre&gt;

&lt;/div&gt;

&lt;h4 id=&quot;with_options&quot;&gt;With Options:&lt;/h4&gt;
&lt;div class=&quot;level4&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool create \
  -o ashift=12 \
  -o autotrim=on \
  -O compression=lz4 \
  -O normalization=formD \
  -O atime=off \
  mypool /dev/sdX&lt;/pre&gt;
&lt;div class=&quot;table sectionedit10&quot;&gt;&lt;table class=&quot;inline&quot;&gt;
	&lt;tr class=&quot;row0&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;strong&gt;Option&lt;/strong&gt;                   &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; &lt;strong&gt;Description&lt;/strong&gt;                                                   &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row1&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;code&gt;-o ashift=12&lt;/code&gt;           &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; Aligns to 4K sectors. Recommended for modern HDDs/SSDs.       &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row2&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;code&gt;-o autotrim=on&lt;/code&gt;         &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; Enables SSD TRIM support.                                     &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row3&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;code&gt;-O compression=lz4&lt;/code&gt;     &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; Enables lightweight compression.                              &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row4&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;code&gt;-O normalization=formD&lt;/code&gt; &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Unicode normalization, important for international filenames. &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row5&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;code&gt;-O atime=off&lt;/code&gt;           &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; Disables access time updates for better performance.          &lt;/td&gt;
	&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;h3 class=&quot;sectionedit11&quot; id=&quot;raid-z_pool&quot;&gt;RAID-Z Pool&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool create myraid raidz /dev/sd{b,c,d}&lt;/pre&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;&lt;code&gt;raidz&lt;/code&gt; = Single parity (RAID-5 equivalent)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;&lt;code&gt;raidz2&lt;/code&gt; = Double parity (RAID-6)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;&lt;code&gt;raidz3&lt;/code&gt; = Triple parity&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit12&quot; id=&quot;mirror_pool_raid-1&quot;&gt;Mirror Pool (RAID-1)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool create mymirror mirror /dev/sdX /dev/sdY&lt;/pre&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit13&quot; id=&quot;📂_creating_datasets&quot;&gt;📂 Creating Datasets&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
A &lt;strong&gt;dataset&lt;/strong&gt; is a ZFS-managed subvolume with independent settings.
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs create mypool/mydataset&lt;/pre&gt;
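
&lt;p&gt;
Datasets can be nested, and each level can carry its own properties (names below are placeholders):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs create mypool/mydataset/photos
sudo zfs set compression=zstd mypool/mydataset/photos

# properties are inherited unless overridden; -r shows the whole tree
zfs get -r compression mypool&lt;/pre&gt;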

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit14&quot; id=&quot;common_dataset_options&quot;&gt;Common Dataset Options&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs set \
  compression=zstd \
  atime=off \
  quota=20G \
  recordsize=1M \
  mountpoint=/mnt/mydataset \
  mypool/mydataset&lt;/pre&gt;
&lt;div class=&quot;table sectionedit15&quot;&gt;&lt;table class=&quot;inline&quot;&gt;
	&lt;tr class=&quot;row0&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;strong&gt;Property&lt;/strong&gt;                    &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; &lt;strong&gt;Description&lt;/strong&gt;                                                           &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row1&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;code&gt;compression=zstd&lt;/code&gt;          &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; Transparent compression using Zstandard.                              &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row2&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;code&gt;atime=off&lt;/code&gt;                 &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; Disables last-accessed time.                                          &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row3&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;code&gt;quota=20G&lt;/code&gt;                 &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; Limits dataset to 20GB.                                               &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row4&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;code&gt;recordsize=1M&lt;/code&gt;             &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Optimize for large files (default is 128K, change based on workload). &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row5&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;code&gt;mountpoint=/mnt/mydataset&lt;/code&gt; &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; Auto-mount location.                                                  &lt;/td&gt;
	&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;p&gt;
To open/edit files:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
vim /mnt/mydataset/config.json&lt;/pre&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit16&quot; id=&quot;snapshots_and_clones&quot;&gt;Snapshots and Clones&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit17&quot; id=&quot;create_a_snapshot&quot;&gt;Create a Snapshot&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs snapshot mypool/mydataset@before-upgrade&lt;/pre&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit18&quot; id=&quot;list_snapshots&quot;&gt;List Snapshots&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
zfs list -t snapshot&lt;/pre&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit19&quot; id=&quot;roll_back_to_snapshot&quot;&gt;Roll Back to Snapshot&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs rollback mypool/mydataset@before-upgrade&lt;/pre&gt;
&lt;blockquote&gt;&lt;div class=&quot;no&quot;&gt;
Note: This will destroy any changes made since that snapshot.&lt;/div&gt;&lt;/blockquote&gt;
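
&lt;p&gt;
&lt;code&gt;zfs rollback&lt;/code&gt; only accepts the most recent snapshot by default; to roll back further, use &lt;code&gt;-r&lt;/code&gt;, which also destroys the intermediate snapshots:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
# fails if snapshots newer than the target exist
sudo zfs rollback mypool/mydataset@before-upgrade

# -r destroys any newer snapshots, then rolls back
sudo zfs rollback -r mypool/mydataset@before-upgrade&lt;/pre&gt;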

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit20&quot; id=&quot;clone_a_snapshot_writable&quot;&gt;Clone a Snapshot (Writable)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs clone mypool/mydataset@before-upgrade mypool/mydataset-clone&lt;/pre&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit21&quot; id=&quot;creating_zvols_block_devices&quot;&gt;Creating Zvols (Block Devices)&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
Zvols are used for swap, VM disks, iSCSI, etc.
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs create -V 8G mypool/vm_swap
sudo mkswap /dev/zvol/mypool/vm_swap
sudo swapon /dev/zvol/mypool/vm_swap&lt;/pre&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit22&quot; id=&quot;🔧_maintenance_tasks&quot;&gt;🔧 Maintenance Tasks&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit23&quot; id=&quot;check_pool_health&quot;&gt;Check Pool Health&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool status&lt;/pre&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit24&quot; id=&quot;scrub_pool_verify_checksums&quot;&gt;Scrub Pool (verify checksums)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool scrub mypool&lt;/pre&gt;
&lt;blockquote&gt;&lt;div class=&quot;no&quot;&gt;
Use `zpool status` afterward to see scrub results.&lt;/div&gt;&lt;/blockquote&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit25&quot; id=&quot;export_import&quot;&gt;Export / Import&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool export mypool
sudo zpool import mypool&lt;/pre&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit26&quot; id=&quot;destroy_datasets_or_pools&quot;&gt;Destroy Datasets or Pools&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs destroy mypool/mydataset
sudo zpool destroy mypool&lt;/pre&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit27&quot; id=&quot;📊_ext4_vs_btrfs_vs_zfs&quot;&gt;📊 EXT4 vs Btrfs vs ZFS&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;
&lt;div class=&quot;table sectionedit28&quot;&gt;&lt;table class=&quot;inline&quot;&gt;
	&lt;tr class=&quot;row0&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; Feature         &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; &lt;strong&gt;EXT4&lt;/strong&gt;        &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; &lt;strong&gt;Btrfs&lt;/strong&gt;           &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; &lt;strong&gt;ZFS&lt;/strong&gt;                    &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row1&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; Snapshots       &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; ❌ No            &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; ✅ Native            &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; ✅ Native                   &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row2&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; Checksumming    &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; Metadata only   &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; ✅ Data + Metadata   &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; ✅ Data + Metadata          &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row3&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; Compression     &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; ❌ No            &lt;/td&gt;&lt;td class=&quot;col2&quot;&gt; ✅ (zlib, zstd, lzo) &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; ✅ (lz4, zstd, gzip)        &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row4&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; RAID Support    &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; ❌ External only &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; ✅ (limited, flaky)  &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; ✅ RAID-Z, mirror, etc.     &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row5&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; Deduplication   &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; ❌ No            &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; ⚠️ Experimental     &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; ✅ (RAM intensive)          &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row6&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; Scrubbing       &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; ❌ No            &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; ✅                   &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; ✅                          &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row7&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; Max Volume Size &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; 1 EiB           &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; 16 EiB              &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; 256 ZiB                    &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row8&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; Encryption      &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; ❌ LUKS only     &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; ✅ Native            &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; ✅ Native                   &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row9&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; Performance     &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; ✅ High          &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; ⚠️ Varies           &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; ⚠️ Needs RAM, very fast    &lt;/td&gt;
	&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit29&quot; id=&quot;sample_workflow_script&quot;&gt;Sample Workflow Script&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
# Create pool with good defaults
sudo zpool create -o ashift=12 \
  -O compression=lz4 \
  -O atime=off \
  -O normalization=formD \
  mypool /dev/sdX

# Create dataset for backups
sudo zfs create mypool/backups
sudo zfs set quota=100G mypool/backups
sudo zfs set mountpoint=/mnt/backups mypool/backups

# Take daily snapshot (use cron)
sudo zfs snapshot mypool/backups@$(date +%Y-%m-%d)

# Optional scrub every Sunday (via cron)
sudo zpool scrub mypool&lt;/pre&gt;
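
&lt;p&gt;
The cron comments above could be implemented roughly like this (one possible root crontab; times are arbitrary, and &lt;code&gt;%&lt;/code&gt; must be escaped inside crontab entries):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
# edit with: sudo crontab -e

# daily snapshot at 02:30
30 2 * * * /usr/sbin/zfs snapshot mypool/backups@$(date +\%Y-\%m-\%d)

# scrub every Sunday at 03:00
0 3 * * 0 /usr/sbin/zpool scrub mypool&lt;/pre&gt;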
&lt;hr /&gt;

&lt;/div&gt;
</description>
    </item>
    <item rdf:about="https://wiki.devilplan.com/linux:zfs:raid">
        <dc:format>text/html</dc:format>
        <dc:date>2025-06-08T01:06:24+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>raid</title>
        <link>https://wiki.devilplan.com/linux:zfs:raid</link>
        <description>
&lt;p&gt;

ZFS offers powerful &lt;strong&gt;software RAID&lt;/strong&gt; features built into its &lt;strong&gt;storage pool (zpool)&lt;/strong&gt; layer. Unlike traditional software RAID tools like &lt;code&gt;mdadm&lt;/code&gt;, ZFS RAID is &lt;strong&gt;tightly integrated&lt;/strong&gt; with the filesystem, offering &lt;strong&gt;data integrity, easier management, and better recovery&lt;/strong&gt; options.
&lt;/p&gt;

&lt;p&gt;
Here&amp;#039;s a breakdown of the software RAID options ZFS supports:
&lt;/p&gt;

&lt;h2 class=&quot;sectionedit1&quot; id=&quot;zfs_raid_options&quot;&gt;ZFS RAID Options&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit2&quot; id=&quot;mirror_raid-1&quot;&gt;1. Mirror (RAID-1)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
&lt;strong&gt;Description&lt;/strong&gt;: Data is duplicated across two or more disks. Best for redundancy and read performance.
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool create mymirror mirror /dev/sdX /dev/sdY&lt;/pre&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;You can use more than two disks (&lt;code&gt;mirror /dev/sdX /dev/sdY /dev/sdZ&lt;/code&gt;) – ZFS will mirror across all.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;If one disk fails, data is safe.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Read speed is improved (ZFS reads in parallel from mirrors).&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit3&quot; id=&quot;raid-z_single_parity_raid-5_equivalent&quot;&gt;2. RAID-Z (Single Parity, RAID-5 Equivalent)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
&lt;strong&gt;Description&lt;/strong&gt;: Uses parity to recover from a &lt;strong&gt;single disk failure&lt;/strong&gt;.
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool create myraid raidz /dev/sd{b,c,d}&lt;/pre&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;&lt;strong&gt;Minimum 3 disks&lt;/strong&gt; required.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;If one disk fails, ZFS reconstructs data using parity.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Write performance is &lt;strong&gt;better than mdadm RAID-5&lt;/strong&gt; but not as good as mirrors.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit4&quot; id=&quot;raid-z2_double_parity_raid-6_equivalent&quot;&gt;3. RAID-Z2 (Double Parity, RAID-6 Equivalent)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
&lt;strong&gt;Description&lt;/strong&gt;: Tolerates &lt;strong&gt;two disk failures&lt;/strong&gt;.
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool create myraid2 raidz2 /dev/sd{b,c,d,e,f}&lt;/pre&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Minimum &lt;strong&gt;4 disks&lt;/strong&gt;.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Parity is spread across the disks.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;More redundancy but lower usable storage.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit5&quot; id=&quot;raid-z3_triple_parity&quot;&gt;4. RAID-Z3 (Triple Parity)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
&lt;strong&gt;Description&lt;/strong&gt;: Tolerates &lt;strong&gt;three disk failures&lt;/strong&gt;.
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool create myraid3 raidz3 /dev/sd{b,c,d,e,f,g}&lt;/pre&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Minimum &lt;strong&gt;5 disks&lt;/strong&gt;.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Extremely fault-tolerant. Useful for archival, enterprise systems, or critical NAS.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit6&quot; id=&quot;comparison_of_raid_levels_in_zfs&quot;&gt;Comparison of RAID Levels in ZFS&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;
&lt;div class=&quot;table sectionedit7&quot;&gt;&lt;table class=&quot;inline&quot;&gt;
	&lt;tr class=&quot;row0&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; &lt;strong&gt;RAID Level&lt;/strong&gt; &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; &lt;strong&gt;Min. Disks&lt;/strong&gt; &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; &lt;strong&gt;Failures Tolerated&lt;/strong&gt; &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; &lt;strong&gt;Usable Capacity&lt;/strong&gt; &lt;/td&gt;&lt;td class=&quot;col4 leftalign&quot;&gt; &lt;strong&gt;Notes&lt;/strong&gt; &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row1&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; Mirror    &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; 2         &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; 1 (or N-1)   &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; 50% (if 2 disks)       &lt;/td&gt;&lt;td class=&quot;col4 leftalign&quot;&gt; Fast reads (served from all mirrors); writes limited to a single disk&amp;#039;s speed &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row2&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; RAID-Z    &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; 3         &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; 1            &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; N - 1                  &lt;/td&gt;&lt;td class=&quot;col4 leftalign&quot;&gt; Efficient, safe for moderate setups       &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row3&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; RAID-Z2   &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; 4         &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; 2            &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; N - 2                  &lt;/td&gt;&lt;td class=&quot;col4&quot;&gt; High safety, good for medium/large arrays &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row4&quot;&gt;
		&lt;td class=&quot;col0 leftalign&quot;&gt; RAID-Z3   &lt;/td&gt;&lt;td class=&quot;col1 leftalign&quot;&gt; 5         &lt;/td&gt;&lt;td class=&quot;col2 leftalign&quot;&gt; 3            &lt;/td&gt;&lt;td class=&quot;col3 leftalign&quot;&gt; N - 3                  &lt;/td&gt;&lt;td class=&quot;col4 leftalign&quot;&gt; Max safety, best for critical data        &lt;/td&gt;
	&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit8&quot; id=&quot;important_raid_concepts_in_zfs&quot;&gt;Important RAID Concepts in ZFS&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit9&quot; id=&quot;ashift&quot;&gt;`ashift`&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
-o ashift=12&lt;/pre&gt;

&lt;p&gt;
Sets the sector size to 2^12 (4096 bytes). Best for modern disks (especially SSDs). You &lt;strong&gt;cannot change this later&lt;/strong&gt;.
&lt;/p&gt;
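
&lt;p&gt;
If unsure, check the physical and logical sector sizes the drive reports before creating the pool (device name is a placeholder):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdX
sudo blockdev --getpbsz /dev/sdX

# 4096-byte sectors: ashift=12; 512-byte sectors: ashift=9 (12 remains a safe default)&lt;/pre&gt;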

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit10&quot; id=&quot;add_hot_spares&quot;&gt;Add Hot Spares&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
ZFS can keep extra disks idle, ready to take over when another fails:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool add mypool spare /dev/sdX&lt;/pre&gt;

&lt;p&gt;
ZFS will &lt;strong&gt;automatically replace a failed disk&lt;/strong&gt; with a spare.
&lt;/p&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit11&quot; id=&quot;🚨_avoid_thisstriped_pools_across_multiple_disks&quot;&gt;🚨 Avoid This: Striped Pools Across Multiple Disks&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool create badpool /dev/sdX /dev/sdY&lt;/pre&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;This creates a &lt;strong&gt;striped pool&lt;/strong&gt; with &lt;strong&gt;no redundancy&lt;/strong&gt;.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;If &lt;strong&gt;any one disk fails&lt;/strong&gt;, the entire pool is lost.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;&lt;div class=&quot;no&quot;&gt;
Never use multiple disks in a single pool without mirroring or RAID-Z.&lt;/div&gt;&lt;/blockquote&gt;

&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit12&quot; id=&quot;🛡️_replacing_a_failed_disk_in_zfs_raid&quot;&gt;🛡️ Replacing a Failed Disk in ZFS RAID&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Check pool:&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool status&lt;/pre&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Offline the failed disk:&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool offline mypool /dev/sdX&lt;/pre&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Replace it:&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool replace mypool /dev/sdX /dev/sdY&lt;/pre&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Monitor resilvering:&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool status&lt;/pre&gt;

&lt;p&gt;
ZFS will rebuild the data on the new disk &lt;strong&gt;automatically&lt;/strong&gt;.
&lt;/p&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit13&quot; id=&quot;test_a_raid-z_pool&quot;&gt;Test a RAID-Z Pool&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool create -o ashift=12 \
  -O compression=lz4 \
  -O atime=off \
  testpool raidz /dev/sd{b,c,d}

sudo zfs create testpool/files
echo &amp;quot;Hello ZFS RAID-Z&amp;quot; | sudo tee /testpool/files/test.txt
sudo zfs snapshot testpool/files@first&lt;/pre&gt;

&lt;p&gt;
Then simulate a failure by detaching or offlining a disk and using &lt;code&gt;zpool status&lt;/code&gt;, &lt;code&gt;replace&lt;/code&gt;, and &lt;code&gt;resilver&lt;/code&gt;.
&lt;/p&gt;
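
&lt;p&gt;
A rough sketch of that failure drill (device names are placeholders, adjust to your setup):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
# take one member offline to simulate a failure
sudo zpool offline testpool /dev/sdc
sudo zpool status testpool      # the pool should now show DEGRADED

# replace the &quot;failed&quot; disk with a spare one and watch the resilver
sudo zpool replace testpool /dev/sdc /dev/sde
sudo zpool status testpool&lt;/pre&gt;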
&lt;hr /&gt;

&lt;/div&gt;
</description>
    </item>
    <item rdf:about="https://wiki.devilplan.com/linux:zfs:usb_backup">
        <dc:format>text/html</dc:format>
        <dc:date>2025-06-08T01:06:47+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>usb_backup</title>
        <link>https://wiki.devilplan.com/linux:zfs:usb_backup</link>
        <description>


&lt;h1 class=&quot;sectionedit1&quot; id=&quot;complete_guidesetting_up_zfs_on_a_usb_drive_for_backups&quot;&gt;Complete Guide: Setting Up ZFS on a USB Drive for Backups&lt;/h1&gt;
&lt;div class=&quot;level1&quot;&gt;

&lt;p&gt;
A walkthrough of &lt;strong&gt;formatting, creating, and optimizing a ZFS pool&lt;/strong&gt; on a &lt;strong&gt;10TB USB drive&lt;/strong&gt;, specifically for &lt;strong&gt;daily backups&lt;/strong&gt;, with a focus on &lt;strong&gt;performance, reliability, and efficiency&lt;/strong&gt;.
&lt;/p&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit2&quot; id=&quot;pre-installation_checks&quot;&gt;1. Pre-Installation Checks&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
Before proceeding, ensure:&lt;br/&gt;
- USB drive is connected and detected&lt;br/&gt;
- &lt;strong&gt;ZFS utilities&lt;/strong&gt; are installed
&lt;/p&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit3&quot; id=&quot;install_zfs_if_not_already_installed&quot;&gt;1.1 Install ZFS (if not already installed)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo apt update
sudo apt install -y zfsutils-linux&lt;/pre&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit4&quot; id=&quot;identify_the_usb_drive&quot;&gt;1.2 Identify the USB Drive&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Find the correct &lt;strong&gt;device name&lt;/strong&gt;:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
lsblk -o NAME,SIZE,MODEL,MOUNTPOINT&lt;/pre&gt;

&lt;p&gt;
The &lt;strong&gt;10TB drive&lt;/strong&gt; should look like &lt;code&gt;/dev/sdX&lt;/code&gt; (where &lt;code&gt;X&lt;/code&gt; is the correct letter).
&lt;/p&gt;
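
&lt;p&gt;
USB device letters can change between reboots, so it is often safer to refer to the drive by its stable ID when creating the pool (an optional extra check):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
ls -l /dev/disk/by-id/ | grep usb&lt;/pre&gt;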
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit5&quot; id=&quot;prepare_the_drive_partitioning_formatting&quot;&gt;2. Prepare the Drive (Partitioning &amp;amp; Formatting)&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
ZFS can use raw disks, but it&amp;#039;s recommended to &lt;strong&gt;partition&lt;/strong&gt; the drive for flexibility.
&lt;/p&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit6&quot; id=&quot;wipe_existing_data_optional&quot;&gt;2.1 Wipe Existing Data (Optional)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
To ensure a clean setup, &lt;strong&gt;erase any existing partitions&lt;/strong&gt;:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo wipefs -a /dev/sdX
sudo dd if=/dev/zero of=/dev/sdX bs=1M count=100 status=progress&lt;/pre&gt;

&lt;/div&gt;
&lt;h3 class=&quot;sectionedit7&quot; id=&quot;create_a_single_gpt_partition&quot;&gt;2.2 Create a Single GPT Partition&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Use &lt;code&gt;parted&lt;/code&gt; to create a GPT partition spanning the whole drive:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo parted /dev/sdX --script mklabel gpt
sudo parted /dev/sdX --script mkpart primary 0% 100%&lt;/pre&gt;

&lt;p&gt;
Get the partition name (likely &lt;code&gt;/dev/sdX1&lt;/code&gt;):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
lsblk -o NAME,SIZE,MODEL,MOUNTPOINT&lt;/pre&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;h2 class=&quot;sectionedit8&quot; id=&quot;create_optimize_the_zfs_pool&quot;&gt;3. Create &amp;amp; Optimize the ZFS Pool&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
Now, we create a &lt;strong&gt;ZFS pool&lt;/strong&gt; with the best options for a &lt;strong&gt;single USB backup drive&lt;/strong&gt;.
&lt;/p&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;3. Create &amp;amp; Optimize the ZFS Pool&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;create_optimize_the_zfs_pool&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:5,&amp;quot;secid&amp;quot;:8,&amp;quot;range&amp;quot;:&amp;quot;1333-1457&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit9&quot; id=&quot;create_the_zfs_pool_with_optimized_settings&quot;&gt;3.1 Create the ZFS Pool with Optimized Settings&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool create -o ashift=12 \
      -o autotrim=on \
      -O compression=lz4 \
      -O atime=off \
      -O sync=disabled \
      -O recordsize=1M \
      -O xattr=sa \
      -O acltype=posix \
      -m /backup \
      backup_pool /dev/sdX1&lt;/pre&gt;
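
&lt;p&gt;
To double-check that the options were applied as intended (same property names as above):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
zpool get ashift,autotrim backup_pool
zfs get compression,atime,sync,recordsize,xattr,acltype backup_pool&lt;/pre&gt;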

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;3.1 Create the ZFS Pool with Optimized Settings&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;create_the_zfs_pool_with_optimized_settings&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:5,&amp;quot;secid&amp;quot;:9,&amp;quot;range&amp;quot;:&amp;quot;1458-1764&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit10&quot; id=&quot;explanation_of_options&quot;&gt;Explanation of Options:&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;div class=&quot;table sectionedit11&quot;&gt;&lt;table class=&quot;inline&quot;&gt;
	&lt;tr class=&quot;row0&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;strong&gt;Option&lt;/strong&gt; &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; &lt;strong&gt;Purpose&lt;/strong&gt; &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row1&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;code&gt;-o ashift=12&lt;/code&gt; &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Optimizes for &lt;strong&gt;4K sector&lt;/strong&gt; drives (prevents misaligned writes). &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row2&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;code&gt;-o autotrim=on&lt;/code&gt; &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Enables &lt;strong&gt;automatic TRIM&lt;/strong&gt; for SSDs (harmless for HDDs). &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row3&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;code&gt;-O compression=lz4&lt;/code&gt; &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; &lt;strong&gt;Fast compression&lt;/strong&gt; to reduce space usage and speed up writes. &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row4&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;code&gt;-O atime=off&lt;/code&gt; &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; &lt;strong&gt;Disables access time updates&lt;/strong&gt; (reduces unnecessary writes). &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row5&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;code&gt;-O sync=disabled&lt;/code&gt; &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Improves write performance for &lt;strong&gt;backups only&lt;/strong&gt; (⚠️ use with care). &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row6&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;code&gt;-O recordsize=1M&lt;/code&gt; &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Optimizes for &lt;strong&gt;large files&lt;/strong&gt; (suitable for backups). &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row7&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;code&gt;-O xattr=sa&lt;/code&gt; &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Stores extended attributes more efficiently. &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row8&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;code&gt;-O acltype=posix&lt;/code&gt; &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Enables &lt;strong&gt;POSIX ACLs&lt;/strong&gt; for better file permissions. &lt;/td&gt;
	&lt;/tr&gt;
	&lt;tr class=&quot;row9&quot;&gt;
		&lt;td class=&quot;col0&quot;&gt; &lt;code&gt;-m /backup&lt;/code&gt; &lt;/td&gt;&lt;td class=&quot;col1&quot;&gt; Mounts the pool at &lt;code&gt;/backup&lt;/code&gt;. &lt;/td&gt;
	&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
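
&lt;p&gt;
The &lt;code&gt;sync=disabled&lt;/code&gt; setting in the table above is the one to treat carefully: it trades crash safety for write speed. If that trade-off ever becomes unacceptable, it can be reverted at any time without recreating the pool:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs set sync=standard backup_pool&lt;/pre&gt;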
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;table&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;table&amp;quot;,&amp;quot;secid&amp;quot;:11,&amp;quot;range&amp;quot;:&amp;quot;1794-2530&amp;quot;} --&gt;&lt;hr /&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;Explanation of Options:&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;explanation_of_options&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:6,&amp;quot;secid&amp;quot;:10,&amp;quot;range&amp;quot;:&amp;quot;1765-2535&amp;quot;} --&gt;
&lt;h2 class=&quot;sectionedit12&quot; id=&quot;verify_check_pool_status&quot;&gt;4. Verify &amp;amp; Check Pool Status&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
Check that ZFS is properly set up:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
zpool status
zfs list&lt;/pre&gt;

&lt;p&gt;
Expected output:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
NAME         STATE     READ WRITE CKSUM
backup_pool  ONLINE       0     0     0&lt;/pre&gt;
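
&lt;p&gt;
A periodic scrub is also worth considering, since checksums only report corruption when data is actually read; a scrub forces ZFS to read and verify everything on the drive:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool scrub backup_pool
zpool status backup_pool&lt;/pre&gt;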
&lt;hr /&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;4. Verify &amp;amp; Check Pool Status&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;verify_check_pool_status&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:6,&amp;quot;secid&amp;quot;:12,&amp;quot;range&amp;quot;:&amp;quot;2536-2745&amp;quot;} --&gt;
&lt;h2 class=&quot;sectionedit13&quot; id=&quot;automate_snapshots_incremental_backups&quot;&gt;5. Automate Snapshots &amp;amp; Incremental Backups&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;5. Automate Snapshots &amp;amp; Incremental Backups&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;automate_snapshots_incremental_backups&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:8,&amp;quot;secid&amp;quot;:13,&amp;quot;range&amp;quot;:&amp;quot;2746-2792&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit14&quot; id=&quot;daily_snapshots&quot;&gt;5.1 Daily Snapshots&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
ZFS snapshots are still useful for local protection &lt;strong&gt;before syncing to the server&lt;/strong&gt;. In this setup, backups use &lt;code&gt;rsync&lt;/code&gt; instead of &lt;code&gt;zfs send&lt;/code&gt;, because the drives on the server are Btrfs and ext4.
&lt;/p&gt;

&lt;p&gt;
Automatically create daily snapshots:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo crontab -e&lt;/pre&gt;

&lt;p&gt;
Add the following line (scheduled a few minutes before the 2 AM &lt;code&gt;rsync&lt;/code&gt; job so the snapshot exists before the backup starts):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
45 1 * * * /usr/sbin/zfs snapshot backup_pool@pre-rsync-$(date +\%F)&lt;/pre&gt;

&lt;p&gt;
This ensures a &lt;strong&gt;ZFS snapshot of the previous backup is taken before &lt;code&gt;rsync&lt;/code&gt; runs&lt;/strong&gt;, so a failed or interrupted sync can be rolled back.
&lt;/p&gt;
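
&lt;p&gt;
The snapshots created by this job can be listed at any time to confirm the rotation is working:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
zfs list -t snapshot -o name,creation,used -s creation&lt;/pre&gt;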
&lt;hr /&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;5.1 Daily Snapshots&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;daily_snapshots&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:8,&amp;quot;secid&amp;quot;:14,&amp;quot;range&amp;quot;:&amp;quot;2793-3308&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit15&quot; id=&quot;incremental_backup_script&quot;&gt;5.2 Incremental Backup Script&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Since the server uses &lt;strong&gt;Btrfs and Ext4&lt;/strong&gt;, the ZFS &lt;strong&gt;incremental backup method (&lt;code&gt;zfs send/recv&lt;/code&gt;) won&amp;#039;t work&lt;/strong&gt;, because it requires both the source and destination to be ZFS.&lt;br/&gt;

&lt;/p&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;**5.2 Incremental Backup Script**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;incremental_backup_script&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:10,&amp;quot;secid&amp;quot;:15,&amp;quot;range&amp;quot;:&amp;quot;3309-3522&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit16&quot; id=&quot;alternative_incremental_backup_method_compatible_with_any_filesystem&quot;&gt;Alternative Incremental Backup Method (Compatible with Any Filesystem)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Use &lt;strong&gt;&lt;code&gt;rsync&lt;/code&gt;&lt;/strong&gt; to efficiently copy only changed files to the ZFS-backed USB backup drive.
&lt;/p&gt;

&lt;/div&gt;

&lt;h4 id=&quot;create_an_incremental_backup_script&quot;&gt;1. Create an Incremental Backup Script&lt;/h4&gt;
&lt;div class=&quot;level4&quot;&gt;

&lt;p&gt;
Save this as &lt;code&gt;/usr/local/bin/backup-to-server.sh&lt;/code&gt;:
&lt;/p&gt;
&lt;dl class=&quot;file&quot;&gt;
&lt;dt&gt;&lt;a href=&quot;https://wiki.devilplan.com/_export/code/linux:zfs:usb_backup?codeblock=10&quot; title=&quot;Download Snippet&quot; class=&quot;mediafile mf_sh&quot;&gt;snippet.sh&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;pre class=&quot;code file sh&quot;&gt;#!/bin/bash
set -euo pipefail
&amp;nbsp;
SRC=&amp;quot;/data_to_backup/&amp;quot;  # Replace with the source directory you want to back up
DEST=&amp;quot;/backup&amp;quot;          # Replace with your ZFS pool mount point on the USB drive
TODAY=&amp;quot;$(date +%F)&amp;quot;     # Capture the date once so all paths agree even across midnight
&amp;nbsp;
# Use rsync with hard links against the previous run for space-efficient incremental backups
rsync -aAXHv --delete --progress \
      --link-dest=&amp;quot;$DEST/latest&amp;quot; \
      &amp;quot;$SRC&amp;quot; &amp;quot;$DEST/backup-$TODAY&amp;quot;
&amp;nbsp;
# Update the &amp;quot;latest&amp;quot; symlink to point to the newest backup
ln -sfn &amp;quot;$DEST/backup-$TODAY&amp;quot; &amp;quot;$DEST/latest&amp;quot;
&amp;nbsp;
# Delete backup directories older than 7 days (the symlink is skipped because it is not a directory)
find &amp;quot;$DEST&amp;quot; -maxdepth 1 -type d -name &amp;quot;backup-*&amp;quot; -mtime +7 -exec rm -rf {} \;&lt;/pre&gt;
&lt;/dd&gt;&lt;/dl&gt;

&lt;p&gt;
Make it executable:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo chmod +x /usr/local/bin/backup-to-server.sh&lt;/pre&gt;
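
&lt;p&gt;
Before scheduling it, a manual run is a quick sanity check (paths are the placeholders from the script above):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo /usr/local/bin/backup-to-server.sh
ls -l /backup/&lt;/pre&gt;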

&lt;/div&gt;

&lt;h4 id=&quot;automate_daily_incremental_backups&quot;&gt;2. Automate Daily Incremental Backups&lt;/h4&gt;
&lt;div class=&quot;level4&quot;&gt;

&lt;p&gt;
Edit cron:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo crontab -e&lt;/pre&gt;

&lt;p&gt;
Add:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
0 2 * * * /usr/local/bin/backup-to-server.sh&lt;/pre&gt;

&lt;p&gt;
This runs at &lt;strong&gt;2 AM daily&lt;/strong&gt;, copying only changed files to the USB backup drive.

&lt;/p&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;**Alternative Incremental Backup Method (Compatible with Any Filesystem)**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;alternative_incremental_backup_method_compatible_with_any_filesystem&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:10,&amp;quot;secid&amp;quot;:16,&amp;quot;range&amp;quot;:&amp;quot;3523-4678&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit17&quot; id=&quot;how_this_works&quot;&gt;How This Works&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Uses &lt;code&gt;rsync&lt;/code&gt;&lt;/strong&gt; to copy files while preserving permissions, timestamps, and symlinks.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Deletes removed files&lt;/strong&gt; (&lt;code&gt;--delete&lt;/code&gt;) so the backup matches the source.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Uses &lt;code&gt;--link-dest&lt;/code&gt;&lt;/strong&gt; to create hard-linked snapshots, saving disk space.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Maintains a “latest” symlink&lt;/strong&gt;, making it easy to restore files (see the example below).&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
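
&lt;p&gt;
Restoring from the most recent run is then just a copy out of &lt;code&gt;latest&lt;/code&gt;; for example (paths are placeholders):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
rsync -aAXHv /backup/latest/some/dir/ /data_to_backup/some/dir/&lt;/pre&gt;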
&lt;hr /&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;**How This Works**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;how_this_works&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:14,&amp;quot;secid&amp;quot;:17,&amp;quot;range&amp;quot;:&amp;quot;4679-5004&amp;quot;} --&gt;
&lt;h2 class=&quot;sectionedit18&quot; id=&quot;auto-cleanup_old_snapshots&quot;&gt;6. Auto-Cleanup Old Snapshots&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
Make sure only &lt;strong&gt;ZFS snapshots older than 7 days&lt;/strong&gt; are deleted, while leaving &lt;code&gt;rsync&lt;/code&gt; backups untouched.
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo crontab -e&lt;/pre&gt;

&lt;p&gt;
Cron job for cleanup:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
0 3 * * * /usr/sbin/zfs list -H -t snapshot -o name -s creation | grep backup_pool@pre-rsync | head -n -7 | xargs -r -n1 /usr/sbin/zfs destroy&lt;/pre&gt;

&lt;p&gt;
This keeps the &lt;strong&gt;newest 7 daily snapshots&lt;/strong&gt; (roughly one week) while avoiding unnecessary storage bloat: &lt;code&gt;-H&lt;/code&gt; drops the header line, &lt;code&gt;-s creation&lt;/code&gt; sorts oldest first, and &lt;code&gt;head -n -7&lt;/code&gt; leaves only the older snapshots for &lt;code&gt;xargs&lt;/code&gt; to destroy.
&lt;/p&gt;
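
&lt;p&gt;
To preview which snapshots the cleanup job would remove, run the same pipeline without the final &lt;code&gt;destroy&lt;/code&gt; stage:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
/usr/sbin/zfs list -H -t snapshot -o name -s creation | grep backup_pool@pre-rsync | head -n -7&lt;/pre&gt;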
&lt;hr /&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;**6. Auto-Cleanup Old Snapshots**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;auto-cleanup_old_snapshots&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:14,&amp;quot;secid&amp;quot;:18,&amp;quot;range&amp;quot;:&amp;quot;5005-5411&amp;quot;} --&gt;
&lt;h2 class=&quot;sectionedit19&quot; id=&quot;prevent_usb_sleep_power_issues&quot;&gt;7. Prevent USB Sleep &amp;amp; Power Issues&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
Since &lt;code&gt;rsync&lt;/code&gt; runs &lt;strong&gt;incremental backups&lt;/strong&gt; instead of a full &lt;code&gt;zfs send&lt;/code&gt;, the USB drive &lt;strong&gt;may be idle for longer periods&lt;/strong&gt;. If the drive enters sleep mode too aggressively, consider using &lt;code&gt;udev&lt;/code&gt; rules for persistent power settings.
&lt;/p&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;**7. Prevent USB Sleep &amp;amp; Power Issues**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;prevent_usb_sleep_power_issues&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:16,&amp;quot;secid&amp;quot;:19,&amp;quot;range&amp;quot;:&amp;quot;5412-5686&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit20&quot; id=&quot;disable_drive_sleep&quot;&gt;7.1 Disable Drive Sleep&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Some USB drives aggressively spin down. Prevent this:
&lt;/p&gt;
&lt;dl class=&quot;file&quot;&gt;
&lt;dt&gt;&lt;a href=&quot;https://wiki.devilplan.com/_export/code/linux:zfs:usb_backup?codeblock=16&quot; title=&quot;Download Snippet&quot; class=&quot;mediafile mf_sh&quot;&gt;snippet.sh&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;pre class=&quot;code file sh&quot;&gt;sudo hdparm -B 254 /dev/sdX
sudo hdparm -S 0 /dev/sdX&lt;/pre&gt;
&lt;/dd&gt;&lt;/dl&gt;
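
&lt;p&gt;
These &lt;code&gt;hdparm&lt;/code&gt; settings do not survive a reboot on their own. On Debian/Ubuntu systems with the &lt;code&gt;hdparm&lt;/code&gt; package installed, they can be made persistent in &lt;code&gt;/etc/hdparm.conf&lt;/code&gt; (the by-id path below is a placeholder for your drive):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
/dev/disk/by-id/usb-YOUR_DRIVE_ID {
        apm = 254
        spindown_time = 0
}&lt;/pre&gt;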

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;**7.1 Disable Drive Sleep**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;disable_drive_sleep&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:16,&amp;quot;secid&amp;quot;:20,&amp;quot;range&amp;quot;:&amp;quot;5687-5837&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit21&quot; id=&quot;reduce_write_caching_risks&quot;&gt;7.2 Reduce Write Caching Risks&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Prevent data loss if the USB drive is unplugged suddenly:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
echo &amp;quot;write through&amp;quot; | sudo tee /sys/block/sdX/device/scsi_disk/*/cache_type&lt;/pre&gt;

&lt;p&gt;
This switches the drive cache to &lt;strong&gt;“Write Through”&lt;/strong&gt; mode (the &lt;code&gt;cache_type&lt;/code&gt; attribute expects the mode name as a string, not a number).
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Every write goes directly to the disk instead of being cached.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Safer for USB drives: If unplugged suddenly, less risk of data loss.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Downside: Slightly slower performance.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;
&lt;strong&gt;To Make This Persistent Across Reboots&lt;/strong&gt;
&lt;/p&gt;

&lt;p&gt;
Since &lt;code&gt;/sys/block/sdX&lt;/code&gt; settings reset after reboot, add the command to a startup script. Edit &lt;code&gt;/etc/rc.local&lt;/code&gt; (or create it if missing):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
#!/bin/bash
# rc.local already runs as root, so no sudo is needed here
echo &amp;quot;write through&amp;quot; | tee /sys/block/sdX/device/scsi_disk/*/cache_type
exit 0&lt;/pre&gt;

&lt;p&gt;
Make it executable:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo chmod +x /etc/rc.local&lt;/pre&gt;

&lt;p&gt;
This ensures the setting is applied every time the system starts.
&lt;/p&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;**7.2 Reduce Write Caching Risks**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;reduce_write_caching_risks&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:17,&amp;quot;secid&amp;quot;:21,&amp;quot;range&amp;quot;:&amp;quot;5838-6636&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit22&quot; id=&quot;b_alternative_udev_rule_for_write_caching&quot;&gt;7.2b (Alternative) Udev Rule for Write Caching&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Instead of using &lt;code&gt;/sys/block/sdX&lt;/code&gt;, use a udev rule:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
echo &amp;#039;ACTION==&amp;quot;add&amp;quot;, SUBSYSTEM==&amp;quot;block&amp;quot;, KERNEL==&amp;quot;sd?&amp;quot;, ATTR{queue/write_cache}=&amp;quot;write through&amp;quot;&amp;#039; | sudo tee /etc/udev/rules.d/99-usb-write-cache.rules&lt;/pre&gt;

&lt;p&gt;
This disables write caching at the block-device level (the &lt;code&gt;queue/write_cache&lt;/code&gt; attribute accepts &amp;quot;write through&amp;quot; or &amp;quot;write back&amp;quot;, not a number).
Differences from the method in 7.2:
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;It applies to all USB drives (not just one specific drive).&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Works persistently without needing &lt;code&gt;/etc/rc.local&lt;/code&gt;.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;Can be less effective depending on the drive firmware.&lt;hr /&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
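
&lt;p&gt;
Udev only evaluates the rule when the device is added, so to apply it without re-plugging the drive or rebooting, reload the rules and re-trigger block device events:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=block&lt;/pre&gt;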

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;7.2b (alternative)  Udev Rule for Write Caching&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;b_alternative_udev_rule_for_write_caching&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:20,&amp;quot;secid&amp;quot;:22,&amp;quot;range&amp;quot;:&amp;quot;6637-7156&amp;quot;} --&gt;
&lt;h2 class=&quot;sectionedit23&quot; id=&quot;mount_the_zfs_pool_at_boot&quot;&gt;8. Mount the ZFS Pool at Boot&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
ZFS should automatically mount the pool, but &lt;strong&gt;ensure it imports on reboot&lt;/strong&gt;:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo systemctl enable zfs-import-cache
sudo systemctl enable zfs-mount
sudo systemctl enable zfs.target&lt;/pre&gt;

&lt;p&gt;
These services are responsible for:
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;&lt;code&gt;zfs-import-cache&lt;/code&gt;: Imports the ZFS pools at boot.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;&lt;code&gt;zfs-mount&lt;/code&gt;: Mounts the datasets.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt;&lt;code&gt;zfs.target&lt;/code&gt;: Ensures ZFS is correctly integrated into the boot process.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;
Together, these services ensure the drive is imported and mounted at &lt;code&gt;/backup&lt;/code&gt; on reboot.
&lt;/p&gt;

&lt;p&gt;
If the pool does not mount automatically:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool import backup_pool&lt;/pre&gt;
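
&lt;p&gt;
If the pool was ever imported manually, it also helps to confirm it is recorded in the cache file that &lt;code&gt;zfs-import-cache&lt;/code&gt; reads (the path below is the usual default):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool set cachefile=/etc/zfs/zpool.cache backup_pool&lt;/pre&gt;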
&lt;hr /&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;**8. Mount the ZFS Pool at Boot**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;mount_the_zfs_pool_at_boot&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:21,&amp;quot;secid&amp;quot;:23,&amp;quot;range&amp;quot;:&amp;quot;7157-7750&amp;quot;} --&gt;
&lt;h2 class=&quot;sectionedit24&quot; id=&quot;verify_the_setup&quot;&gt;9. Verify the Setup&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
Check status:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
zpool status backup_pool
zfs list&lt;/pre&gt;

&lt;p&gt;
Test snapshot &amp;amp; backup:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
zfs snapshot backup_pool@test
zfs list -t snapshot
zfs destroy backup_pool@test&lt;/pre&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;**9. Verify the Setup**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;verify_the_setup&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:23,&amp;quot;secid&amp;quot;:24,&amp;quot;range&amp;quot;:&amp;quot;7751-7951&amp;quot;} --&gt;
&lt;h2 class=&quot;sectionedit25&quot; id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
&lt;div class=&quot;level2&quot;&gt;

&lt;p&gt;
&lt;strong&gt;ZFS is now fully optimized for the 10TB USB backup drive:&lt;/strong&gt;
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Performance:&lt;/strong&gt; optimized writes (&lt;code&gt;sync=disabled&lt;/code&gt;), large record size (&lt;code&gt;1M&lt;/code&gt;), and compression (&lt;code&gt;lz4&lt;/code&gt;).&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Reliability:&lt;/strong&gt; checksums detect corruption, snapshots provide rollback, and backups are incremental.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Efficiency:&lt;/strong&gt; automated snapshots, incremental transfers, and old snapshot cleanup.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;**Conclusion**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;conclusion&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:25,&amp;quot;range&amp;quot;:&amp;quot;7952-8337&amp;quot;} --&gt;
&lt;h1 class=&quot;sectionedit26&quot; id=&quot;zfs_with_one_pool_and_one_filesystem&quot;&gt;ZFS with one pool and one filesystem&lt;/h1&gt;
&lt;div class=&quot;level1&quot;&gt;

&lt;p&gt;
Using ZFS on a server with only &lt;strong&gt;one pool and one filesystem&lt;/strong&gt; (without redundancy) can still offer several advantages, even though you&amp;#039;re not leveraging its redundancy features like mirroring (RAID1) or parity (RAIDZ). Here are some benefits:
&lt;/p&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;ZFS with one pool and one filesystem&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;zfs_with_one_pool_and_one_filesystem&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:26,&amp;quot;range&amp;quot;:&amp;quot;8338-8623&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit27&quot; id=&quot;data_integrity_with_checksumming&quot;&gt;1. Data Integrity with Checksumming&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; ZFS &lt;strong&gt;detects silent data corruption&lt;/strong&gt; (bit rot) using checksums on all data and metadata.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Even without redundancy, ZFS will alert you if corruption occurs, which most traditional filesystems (like ext4 or XFS) do not do.&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;1. **Data Integrity with Checksumming**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;data_integrity_with_checksumming&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:27,&amp;quot;range&amp;quot;:&amp;quot;8624-8900&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit28&quot; id=&quot;snapshots_rollbacks&quot;&gt;2. Snapshots &amp;amp; Rollbacks&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; You can take &lt;strong&gt;snapshots&lt;/strong&gt; of your filesystem, allowing you to revert to a previous state if needed.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Useful for backups, testing, or recovering from accidental deletions.&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;2. **Snapshots &amp;amp; Rollbacks**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;snapshots_rollbacks&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:28,&amp;quot;range&amp;quot;:&amp;quot;8901-9115&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit29&quot; id=&quot;compression_transparent&quot;&gt;3. Compression (Transparent)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; ZFS supports &lt;strong&gt;transparent compression&lt;/strong&gt; (e.g., LZ4, ZSTD), reducing disk space usage and potentially improving performance.&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;3. **Compression (Transparent)**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;compression_transparent&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:29,&amp;quot;range&amp;quot;:&amp;quot;9116-9283&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit30&quot; id=&quot;automatic_space_management_with_copy-on-write_cow&quot;&gt;4. Automatic Space Management with Copy-on-Write (CoW)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; ZFS avoids overwriting existing data until the new data is fully written.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; This minimizes data corruption risks from unexpected crashes.&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;4. **Automatic Space Management with Copy-on-Write (CoW)**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;automatic_space_management_with_copy-on-write_cow&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:30,&amp;quot;range&amp;quot;:&amp;quot;9284-9493&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit31&quot; id=&quot;efficient_storage_management_with_datasets&quot;&gt;5. Efficient Storage Management with Datasets&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Even if you start with just the pool&amp;#039;s root filesystem (&lt;code&gt;zpool create tank&lt;/code&gt;), you can create multiple &lt;strong&gt;datasets&lt;/strong&gt; (&lt;code&gt;tank/home&lt;/code&gt;, &lt;code&gt;tank/var/log&lt;/code&gt;, etc.).&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Each dataset can have independent quotas, snapshots, and settings.&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
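
&lt;p&gt;
As a minimal sketch of the dataset idea (the pool and dataset names below are only examples):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs create -o quota=200G tank/home
sudo zfs create -o compression=zstd tank/var_log
zfs list -o name,quota,compression&lt;/pre&gt;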

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;5. **Efficient Storage Management with Datasets**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;efficient_storage_management_with_datasets&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:31,&amp;quot;range&amp;quot;:&amp;quot;9494-9757&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit32&quot; id=&quot;native_encryption&quot;&gt;6. Native Encryption&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; If you need encryption, ZFS provides built-in encryption that is &lt;strong&gt;per-dataset&lt;/strong&gt;, with performance generally comparable to LUKS.&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;6. **Native Encryption**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;native_encryption&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:32,&amp;quot;range&amp;quot;:&amp;quot;9758-9919&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit33&quot; id=&quot;flexible_volume_management&quot;&gt;7. Flexible Volume Management&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; You can later &lt;strong&gt;add redundancy&lt;/strong&gt; by attaching additional drives (e.g., convert to a mirror).&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; No need for LVM, as ZFS natively handles volume management.&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;7. **Flexible Volume Management**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;flexible_volume_management&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:33,&amp;quot;range&amp;quot;:&amp;quot;9920-10121&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit34&quot; id=&quot;adaptive_read_caching_arc&quot;&gt;8. Adaptive Read Caching (ARC)&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; ZFS intelligently caches data in RAM, improving read performance.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; If you later add an SSD, you can use it as an L2ARC (secondary cache).&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
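
&lt;p&gt;
Adding an SSD as an L2ARC cache device later is a single command (the pool name and device path are placeholders):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zpool add tank cache /dev/disk/by-id/ata-EXAMPLE_SSD&lt;/pre&gt;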

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;8. **Adaptive Read Caching (ARC)**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;adaptive_read_caching_arc&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:34,&amp;quot;range&amp;quot;:&amp;quot;10122-10308&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit35&quot; id=&quot;no_need_for_fsck&quot;&gt;9. No Need for fsck&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;
&lt;ol&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Unlike traditional filesystems, ZFS &lt;strong&gt;does not require&lt;/strong&gt; periodic &lt;code&gt;fsck&lt;/code&gt; checks after crashes.&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;9. **No Need for fsck**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;no_need_for_fsck&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:35,&amp;quot;range&amp;quot;:&amp;quot;10309-10437&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit36&quot; id=&quot;downsides_of_using_zfs_without_redundancy&quot;&gt;Downsides of Using ZFS Without Redundancy&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Single Point of Failure&lt;/strong&gt;: if the disk dies, you lose everything.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Higher RAM Usage&lt;/strong&gt;: ZFS works best with at least &lt;strong&gt;4GB+ of RAM&lt;/strong&gt;.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Slightly Higher Write Overhead&lt;/strong&gt;: due to CoW, small writes can be slower than ext4 on HDDs.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;

&lt;p&gt;
When the &lt;strong&gt;ZFS pool is on a single USB drive used for daily backups&lt;/strong&gt;, the &lt;strong&gt;Single Point of Failure (SPOF) risk is reduced&lt;/strong&gt; because:
&lt;/p&gt;

&lt;p&gt;
1. &lt;strong&gt;Primary Data is on the Server&lt;/strong&gt;
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Even if the USB drive fails, your original data is still available on the server.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;
2. &lt;strong&gt;ZFS Features Benefit Backups&lt;/strong&gt;
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Snapshots&lt;/strong&gt;: you can use ZFS snapshots to create point-in-time backups.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Incremental Sends&lt;/strong&gt;: ZFS allows efficient incremental backups using &lt;code&gt;zfs send | zfs recv&lt;/code&gt;, reducing backup time and bandwidth.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Compression&lt;/strong&gt;: you can save space with &lt;code&gt;lz4&lt;/code&gt; or &lt;code&gt;zstd&lt;/code&gt; compression.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;Data Integrity&lt;/strong&gt;: if bit rot or corruption occurs, ZFS will detect it.&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;
3. &lt;strong&gt;Potential Optimizations for the USB Backup Use Case&lt;/strong&gt;
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;ashift=12&lt;/strong&gt;: if using an advanced-format drive (4K sectors), set &lt;code&gt;ashift=12&lt;/code&gt; at pool creation.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;zfs set atime=off&lt;/strong&gt;: disables access time updates, improving performance.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;zfs set compression=lz4&lt;/strong&gt;: saves space without slowing down writes.&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; &lt;strong&gt;zfs set sync=disabled&lt;/strong&gt; (optional): improves write performance for backups (use only if acceptable).&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;
&lt;strong&gt;Final Thoughts&lt;/strong&gt;
&lt;/p&gt;

&lt;p&gt;
Using &lt;strong&gt;ZFS on a single USB drive for backups makes sense&lt;/strong&gt; if you want:
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Data integrity (checksums)&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Snapshots and efficient incremental backups&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Compression to save space&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;
However, &lt;strong&gt;USB drives can fail&lt;/strong&gt;, so you may want an additional backup strategy, such as:
&lt;/p&gt;
&lt;ul&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; A second backup drive rotated periodically&lt;/div&gt;
&lt;/li&gt;
&lt;li class=&quot;level1&quot;&gt;&lt;div class=&quot;li&quot;&gt; Offsite/cloud backup for disaster recovery&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;**Downsides of Using ZFS Without Redundancy**&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;downsides_of_using_zfs_without_redundancy&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:36,&amp;quot;range&amp;quot;:&amp;quot;10438-12250&amp;quot;} --&gt;
&lt;h1 class=&quot;sectionedit37&quot; id=&quot;other_notes&quot;&gt;Other notes&lt;/h1&gt;
&lt;div class=&quot;level1&quot;&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;Other notes&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;other_notes&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:37,&amp;quot;range&amp;quot;:&amp;quot;12251-12265&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit38&quot; id=&quot;enable_compression&quot;&gt;Enable Compression&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Enable LZ4 compression to save space and often improve effective read/write throughput (Rdiffweb):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
zfs set compression=lz4 rpool/backups&lt;/pre&gt;

&lt;p&gt;
To avoid double compression, disable rdiff-backup&amp;#039;s own compression and let ZFS handle it (Rdiffweb):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
rdiff-backup --no-compression ...&lt;/pre&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;Enable Compression&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;enable_compression&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:25,&amp;quot;secid&amp;quot;:38,&amp;quot;range&amp;quot;:&amp;quot;12266-12526&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit39&quot; id=&quot;optimize_cache_usage&quot;&gt;Optimize Cache Usage&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Configure ZFS to cache only metadata, since backup data is rarely re-read and caching file contents mostly wastes RAM (Rdiffweb):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs set primarycache=metadata rpool/backups
sudo zfs set secondarycache=metadata rpool/backups&lt;/pre&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;Optimize Cache Usage&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;optimize_cache_usage&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:27,&amp;quot;secid&amp;quot;:39,&amp;quot;range&amp;quot;:&amp;quot;12527-12730&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit40&quot; id=&quot;consider_deduplication&quot;&gt;Consider Deduplication&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Enable deduplication to save space when identical data blocks are present (Rdiffweb):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs set dedup=on rpool/backups&lt;/pre&gt;

&lt;p&gt;
Note: Deduplication increases RAM usage considerably; assess its impact on your data before enabling it (Rdiffweb).
&lt;/p&gt;
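
&lt;p&gt;
If you do enable it, the achieved ratio can be checked afterwards to decide whether the RAM cost is worth it:
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
zpool list -o name,size,allocated,dedupratio rpool
sudo zpool status -D rpool&lt;/pre&gt;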

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;Consider Deduplication&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;consider_deduplication&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:28,&amp;quot;secid&amp;quot;:40,&amp;quot;range&amp;quot;:&amp;quot;12731-12991&amp;quot;} --&gt;
&lt;h3 class=&quot;sectionedit41&quot; id=&quot;support_non-utf-8_characters&quot;&gt;Support Non-UTF-8 Characters&lt;/h3&gt;
&lt;div class=&quot;level3&quot;&gt;

&lt;p&gt;
Allow non-UTF-8 characters for compatibility with various filesystems (Rdiffweb):
&lt;/p&gt;
&lt;pre class=&quot;code&quot;&gt;
sudo zfs create -o utf8only=off rpool/backups&lt;/pre&gt;

&lt;p&gt;
This setting cannot be changed after dataset creation (Rdiffweb).
&lt;/p&gt;
&lt;hr /&gt;

&lt;p&gt;
Implementing these configurations can optimize ZFS for rdiff-backup, enhancing backup performance and compatibility.

&lt;/p&gt;
&lt;hr /&gt;
&lt;dl class=&quot;file&quot;&gt;
&lt;dt&gt;&lt;a href=&quot;https://wiki.devilplan.com/_export/code/linux:zfs:usb_backup?codeblock=30&quot; title=&quot;Download Snippet&quot; class=&quot;mediafile mf_cpp&quot;&gt;snippet.cpp&lt;/a&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;pre class=&quot;code file cpp&quot;&gt;by&lt;span class=&quot;sy4&quot;&gt;:&lt;/span&gt;
▖     ▘▖▖
▌ ▌▌▛▘▌▙▌
▙▖▙▌▙▖▌ ▌
&amp;nbsp;
edited&lt;span class=&quot;sy4&quot;&gt;:&lt;/span&gt; June &lt;span class=&quot;nu0&quot;&gt;2025&lt;/span&gt;&lt;/pre&gt;
&lt;/dd&gt;&lt;/dl&gt;

&lt;/div&gt;
&lt;!-- EDIT{&amp;quot;target&amp;quot;:&amp;quot;section&amp;quot;,&amp;quot;name&amp;quot;:&amp;quot;5. Support Non-UTF-8 Characters&amp;quot;,&amp;quot;hid&amp;quot;:&amp;quot;support_non-utf-8_characters&amp;quot;,&amp;quot;codeblockOffset&amp;quot;:29,&amp;quot;secid&amp;quot;:41,&amp;quot;range&amp;quot;:&amp;quot;12992-&amp;quot;} --&gt;</description>
    </item>
</rdf:RDF>
