A little over a year ago, I set up a ZFS volume to serve as the storage backend for my local Nextcloud installation. I have written about this setup here for readers who want to know how I approached it.

When I learned about ZFS during my Bachelor’s degree, I was amazed by its technology. One feature in particular stuck with me: the ability to “add more disks to easily add more space.” With that in mind, I installed ZFS, integrated my Nextcloud stack, and planned a future-proofing strategy: adding additional storage via new disks.

Recently, I reached the point where I wanted to add more storage. The four old disks added up to just about 1 terabyte. That is not particularly much if you want to store all the images and videos from your smartphone and other family members’ smartphones. In addition, Nextcloud serves as storage for all my files, documents, etc. So, naturally, it requires some space.

To address this, I purchased two 10TB Toshiba N300 drives to add to the existing ZFS pool. I configured them as a RAID-1 mirror vdev, which effectively results in a net storage capacity of 10 TB. To support this, I needed a SATA controller. I selected a SilverStone ECS606 PCIe card, which offers six SATA ports. Combined with the motherboard’s ports, I had 10 total ports available. The case is designed to house up to 8x 3.5″ drives and 2x 2.5″ drives, which matched the drive slots perfectly. I also purchased colorful cables from Sharkoon to color-code the disks that belong to each vdev.

I installed the drives, connected them, and verified their status using lsblk and /dev/disk/by-id. The latter is my preferred way of identifying disks for vdev creation because it lists each disk under a stable, human-readable identifier. With the initial four disks, this had already helped me determine which drive was which. I then wanted to go ahead and add the two new Toshiba disks as a mirror vdev to the existing pool.
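As a sketch, identifying the disks might look like this (device names and serial numbers below are examples, not my actual hardware; the output is hardware-dependent):

```shell
# Overview of all block devices with size and model
lsblk -o NAME,SIZE,MODEL

# Stable identifiers; each symlink target shows which /dev/sdX
# a physical disk currently maps to (filter out partition entries)
ls -l /dev/disk/by-id/ | grep -v part
```

Using the /dev/disk/by-id paths for vdev creation has the added benefit that the pool survives the kernel reordering /dev/sdX names across reboots.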

Unfortunately, as a ZFS beginner, I had not noticed a critical difference between the drives: their physical sector sizes. The existing four disks use 512-byte sectors, whereas the new Toshiba drives use 4096-byte sectors. ZFS refused to add the new disks to the existing pool, because a pool generally requires its vdevs to use matching sector alignment.
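The sector sizes can be checked before attempting a zpool add; a minimal sketch, assuming the disks are already visible to the kernel (device names are examples):

```shell
# PHY-SEC is the physical sector size, LOG-SEC the logical one;
# a mismatch between pool members is exactly what tripped me up
lsblk -o NAME,PHY-SEC,LOG-SEC

# Alternatively, query a single disk via sysfs
cat /sys/block/sda/queue/physical_block_size
```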

The solution was relatively straightforward: I created a new pool using the two new 10TB drives. Since the old pool is home to the Nextcloud data, I had to put Nextcloud into maintenance mode and stop the Nextcloud container to ensure the pool was no longer in active use.
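A minimal sketch of those steps, assuming a Docker-based Nextcloud (the container name, pool name, mountpoint, and disk IDs are placeholders, not my actual setup):

```shell
# Put Nextcloud into maintenance mode so no writes hit the data directory
docker exec -u www-data nextcloud php occ maintenance:mode --on

# Stop the container so the old pool is no longer in use
docker stop nextcloud

# Create the new pool as a mirror of the two 10TB disks, addressed by
# their stable IDs; ashift=12 matches their 4096-byte physical sectors
zpool create -o ashift=12 -m /media/new-pool tank-new \
  mirror /dev/disk/by-id/ata-TOSHIBA_N300_DISK1 \
         /dev/disk/by-id/ata-TOSHIBA_N300_DISK2
```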

Next, I performed an rsync of the data from the old pool to the new pool, in archive mode to preserve file metadata. I basically followed the Nextcloud documentation on how to back up the data directory. After that, I attempted to edit the Nextcloud container to adjust the volume mountpoint. However, this does not appear to be possible; it would require starting a new container with new volume mounts. As that was not an option, I opted for another solution.

I changed the mountpoint of the old/original ZFS pool from /media/pool-mount to /media/pool-mount-bku. Then, I set the mountpoint of the new (10TB) pool to /media/pool-mount. After restarting the Nextcloud Docker containers, they picked up the data from the new drives, since the new pool was now mounted at the path the containers had been using all along.
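Sketched with placeholder pool and container names (tank-old, tank-new, and nextcloud are assumptions, not my actual names):

```shell
# Move the old pool out of the way
zfs set mountpoint=/media/pool-mount-bku tank-old

# Mount the new pool at the path the Nextcloud containers expect
zfs set mountpoint=/media/pool-mount tank-new

# Bring Nextcloud back up and leave maintenance mode
docker start nextcloud
docker exec -u www-data nextcloud php occ maintenance:mode --off
```

The nice side effect of this swap is that nothing about the containers had to change: the same bind-mount path simply points at different storage underneath.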

In the future, if I want to add the old disks back into the pool, I will need to reformat them: the vdevs must be recreated so that their sector alignment matches the new 4096-byte setup.

Moving forward, I learned to pay close attention to the physical sector size when selecting new drives. Another key point is that ZFS has a property called ashift which controls a vdev’s sector alignment; it is the base-2 exponent of the sector size, so 512-byte sectors correspond to ashift=9 and 4096-byte sectors to ashift=12. I created the new vdevs with ashift=12 to match the physical sector size of the new disks. From what I read in forums, using a larger ashift on disks with smaller physical sectors is acceptable, which would be the case here. The other way around can be done, but it appears to have horrific performance implications.
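On OpenZFS, the ashift actually in effect can be read back after creation; a quick sketch (the pool name is a placeholder):

```shell
# Pool-level ashift property (the default applied to newly added vdevs)
zpool get ashift tank-new

# Per-vdev ashift as stored in the on-disk pool configuration
zdb -C tank-new | grep ashift
```

Verifying this right after zpool create is worthwhile, because ashift is fixed per vdev at creation time and cannot be changed afterwards without destroying the vdev.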

To summarize: I added some new storage to my local Nextcloud setup and learned a few more things about the ZFS filesystem. I now understand what I need to pay attention to when buying new drives in the future.
