Moving ZFS filesystems between pools

When I originally set up ZFS on my development V880 I added the internal disks as a raidz together with two volumes off the external fibre-channel array. As is the way with these things, the development box has gradually become a production box, and I now realise that if the server goes pop I can’t just move the fibre-channel array to another server, because the ZFS pool also contains that set of internal SCSI disks.

To my horror I now discover that you can’t remove a top-level device (a vdev, in ZFS parlance) from a pool. Fortunately I have two spare volumes on the array, so I can create a new pool and transfer the existing ZFS filesystems to it. Here is a quick recipe for transferring ZFS filesystems whilst keeping downtime to a minimum.
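For reference, the new pool can be created from the two spare array volumes before the transfer starts. The device names below are made up, and whether you mirror or stripe them is your call; substitute whatever your spare volumes show up as.

zpool create newpool mirror c3t0d0 c3t1d0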

zfs snapshot oldpool/myfilesystem@snapshot1

zfs send oldpool/myfilesystem@snapshot1 | zfs receive newpool/myfilesystem
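If you want to check that the snapshot made it across, listing the snapshots on the new pool should show it:

zfs list -t snapshot -r newpool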

This will take a while, but the filesystem can stay in use while you are doing it. Once it finishes you need to shut down any services that rely on the filesystem and unmount it.

zfs unmount oldpool/myfilesystem
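If the unmount fails because something still has files open on it, fuser can point at the culprit (the path here assumes the default mountpoint):

fuser -c /oldpool/myfilesystem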

And take a new snapshot.

zfs snapshot oldpool/myfilesystem@snapshot2

You can now do an incremental send of the difference between the two snapshots, which should be very quick.

zfs send -i oldpool/myfilesystem@snapshot1 \
             oldpool/myfilesystem@snapshot2 | zfs receive newpool/myfilesystem
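If the receive complains that the destination has been modified since the first snapshot (atime updates alone can trigger this), adding -F to zfs receive rolls the destination back to the earlier snapshot before applying the increment:

zfs send -i oldpool/myfilesystem@snapshot1 \
             oldpool/myfilesystem@snapshot2 | zfs receive -F newpool/myfilesystem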

Now you can point the services at the new filesystem and repeat the process until all the filesystems on the original pool have been transferred.
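One way to make the swap is to move the mountpoint across, so the new copy appears at the path the services expect (the path here is hypothetical):

zfs set mountpoint=none oldpool/myfilesystem

zfs set mountpoint=/export/myfilesystem newpool/myfilesystem

Once every filesystem has been moved, the old pool can be destroyed and its disks reclaimed:

zpool destroy oldpool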

