When I originally set up ZFS on my development V880 I added the internal disks as a raidz together with two volumes off the external fibre-channel array. As is the way with these things the development box has gradually become a production box. And I now realise that if the server goes pop I can’t just move the fibre-channel array to another server, because the ZFS pool contains that set of internal scsi disks.
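For illustration, the layout was something like the following (device names invented; the raidz holds the internal disks, with the array volumes added as further top-level vdevs):

zpool create oldpool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
zpool add oldpool c2t0d0 c2t1d0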
To my horror I now discover that you can’t remove a top-level device (vdev in ZFS parlance) from a pool. Fortunately I have two spare volumes on the array so I can create a new pool and transfer the existing zfs filesystems to it. Here is a quick recipe for transferring zfs filesystems whilst keeping downtime to a minimum.
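First the replacement pool needs to exist. Creating it from the two spare array volumes might look like this (device names invented again; a mirror keeps some redundancy):

zpool create newpool mirror c2t2d0 c2t3d0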
zfs snapshot oldpool/myfilesystem@snapshot1
zfs send oldpool/myfilesystem@snapshot1 | zfs receive newpool/myfilesystem
This will take a while, but the filesystem can stay in use while it runs. Once it finishes you need to shut down any services that rely on the filesystem and unmount it.
zfs unmount oldpool/myfilesystem
And take a new snapshot.
zfs snapshot oldpool/myfilesystem@snapshot2
You can now do an incremental send of the difference between the two snapshots, which should be very quick.
zfs send -i oldpool/myfilesystem@snapshot1 \
    oldpool/myfilesystem@snapshot2 | zfs receive newpool/myfilesystem
Now you can point the services at the new filesystem, and repeat the process until all the filesystems on the original pool have been transferred.
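Pointing the services at the new filesystem usually just means swapping the mountpoints over. A minimal sketch, assuming the services expect the data at /export/myfilesystem (a made-up path):

zfs set mountpoint=none oldpool/myfilesystem
zfs set mountpoint=/export/myfilesystem newpool/myfilesystem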
Very interesting article. I would like to migrate a SunCluster using ZFS to new storage.
I was hoping to be able to run zpool replace. Is the inability to remove a top-level vdev still present?
No idea, I haven’t actively used ZFS since 2008. You should probably try and find a good Solaris/OpenSolaris forum to ask on.
This article is a little out of date; Shadow Migration is now available as a way to migrate to a new pool with no downtime.
Link:
http://docs.oracle.com/cd/E23824_01/html/821-1448/gkkud.html
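For anyone finding this later: as I understand the Solaris 11 docs linked above, shadow migration copies the data in the background while the new filesystem is live, along these lines (reusing the names from the post for illustration; the source has to be made read-only first):

zfs set readonly=on oldpool/myfilesystem
zfs create -o shadow=file:///oldpool/myfilesystem newpool/myfilesystem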