I put together a little box that I want to use for my local Nextcloud installation and potentially other stuff, too. Since this is also a little toy project, I decided to first play around a bit with it. The hardware it is running right now is a good mix of new and old: the platform is all new, and the disks are old. The platform is built from these components:

To start playing around with it, I used two old 3.5″ spinning disks (from my old desktop computers) and two old 2.5″ spinning disks that I took out of old notebooks before bringing those to the local recycling facility. The idea is to go through the potentially relevant ZFS operations while the system does not contain any important data. Scenarios I want to go through before I put data on those disks: (1) adding devices for capacity, (2) replacing a physical device after failure, (3) removing devices for no good reason, and (4) just doing some benchmarking.
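
Since all the commands below reference disks by their stable IDs rather than by sda/sdb-style names, it helps to first check which ID belongs to which physical disk. A quick way to do that (nothing ZFS-specific here; the device names in this post are placeholders anyway):

# List the stable device IDs and the block devices they resolve to,
# filtering out the per-partition entries
$> ls -l /dev/disk/by-id/ | grep -v part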

I decided to first create a pool with one vdev, which consists of two physical disks configured as a mirror, i.e., RAID-1. In case you, like me, have trouble remembering how RAID-1 differs from RAID-0: I remember it as “the number indicates the amount of data the RAID preserves in case one disk dies.” This configuration is what I also want to run in my final setup, so I was most curious how it performs in different scenarios while benchmarking. Before we get to the benchmarks, likely another post on their own, I need to set up the ZFS pool, etc.

Initial Setup

First, I create a pool via the command shown below. In my case, I had to force the creation, since the two disks are of different sizes: one is a 400 GB model and the other one is a 500 GB model. Both use the older SATA 3 Gbit/s interface. Maybe that gives you an idea of how old they are.

# Force-create a pool called "tank" with a mirror vdev using the devices
# ata-SAMSUNG-Device and ata-WD-Device
$> sudo zpool create -f tank mirror ata-SAMSUNG-Device ata-WD-Device

# Show the pool with the mirror vdev
$> zpool status
# Output similar to
config:
NAME                                           STATE     READ WRITE CKSUM
  tank                                         ONLINE       0     0     0
    mirror-0                                   ONLINE       0     0     0
      ata-SAMSUNG-Dev                          ONLINE       0     0     0
      ata-WD-Dev                               ONLINE       0     0     0
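
Because the two disks differ in size, the mirror can only offer as much capacity as the smaller disk provides. If you want to double-check that, zpool list shows the usable size of the pool:

# Show pool capacity; SIZE should roughly match the smaller (400 GB) disk
$> zpool list tank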

Once the pool is created, we need to put an actual file system on it. This goes back to the fact that ZFS is a lot of things at once, volume manager as well as file system. I create a ZFS file system and mount it as /media/pool-mount. After this command, I can actually put data onto the pool and make use of the mirroring. In theory, this means that read performance should improve, since reads can be served from two devices instead of one.

# Create the file system tank/data and set its mountpoint to /media/pool-mount
$> sudo zfs create -o mountpoint=/media/pool-mount tank/data
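
Not something I did at this point, but worth knowing: ZFS properties such as compression can be inspected and changed per file system at any time, for example:

# Verify the mountpoint and, if desired, enable lz4 compression
$> zfs get mountpoint,compression tank/data
$> sudo zfs set compression=lz4 tank/data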

At this point I did a few “benchmarks”, just to see how well (or badly) ZFS performs in different scenarios. I use quotation marks because I would not call what I did benchmarking: I mostly copied things from A to B and manually checked roughly how long that took. There was no real methodology behind it; it was more a matter of curious playing around. So, I will spare everybody the nonsense data and impressions I got from that and present some more reasonable data in a later post.
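
For reference, the level of “testing” I am talking about amounts to roughly this (the file size is arbitrary, and I use random data because zeros would compress away if compression were ever enabled):

# Write 1 GiB of random data and time it; conv=fsync makes sure the data
# is actually on the disks before dd reports how long it took
$> dd if=/dev/urandom of=/media/pool-mount/testfile bs=1M count=1024 conv=fsync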

Adding A Vdev

After that initial testing with the single mirror vdev, I added another mirror vdev, consisting of the two 2.5″ SATA disks. This is, basically, how additional storage can be added to a pool later, without the need to take the pool offline or anything. I think that is beautiful and one of the features I’m most interested in. After the second mirror vdev was added, I again did some non-scientific benchmarking, and again, I will spare everybody the “data” I got there.

# Add another mirror-vdev to the pool named "tank"
$> sudo zpool add tank mirror /dev/disk/by-id/ata-Hitachi-Dev /dev/disk/by-id/ata-Hitachi-Dev
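
One thing to be aware of: ZFS does not rebalance existing data onto the new vdev; only new writes get spread across both mirrors. The per-vdev breakdown makes that visible:

# Show capacity and allocation per vdev
$> zpool list -v tank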

Removing A Vdev

Finally, I also wanted to see how I can remove a vdev from the pool again. While the motivation here was mostly to run some clean benchmarks on the pool with just the single mirror vdev, it’s probably a good idea to have gone through these motions before actual data is at risk. It took some reading and googling before I found this post on the level1techs forum, and the command from it simply worked on my setup.

# Remove the mirror vdev "mirror-1" from the pool tank again.
$> sudo zpool remove tank mirror-1
# Check the pool status again
$> zpool status
config:
NAME                             STATE     READ WRITE CKSUM
  tank                           ONLINE       0     0     0              
    mirror-0                     ONLINE       0     0     0
      ata-SAMSUNG-Dev            ONLINE       0     0     0
      ata-WD-Dev                 ONLINE       0     0     0
    mirror-1                     ONLINE       0     0     0  (removing)
      ata-Hitachi-Dev            ONLINE       0     0     0  (non-allocating)
      ata-Hitachi-Dev            ONLINE       0     0     0  (non-allocating)
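
The (removing) and (non-allocating) markers show that the removal is asynchronous: the data is evacuated from mirror-1 to mirror-0 in the background. On reasonably recent OpenZFS versions (2.0 and later, if I am not mistaken), you can also block until that evacuation has finished:

# Wait until the device removal (data evacuation) has completed
$> zpool wait -t remove tank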

More To Come

There is more to come from both the little machine I put together and my playing around with ZFS, including some benchmark numbers from different scenarios. For now, though, I’ll leave it as is, so that I have a little reference and you have something to read.
