Grow a root ZFS pool online under FreeBSD 9.2 on a Rackspace instance
If you opted for a FreeBSD instance on Rackspace and chose the standard configuration, you'll end up with 512MB of RAM and 20GB of disk space. That's fine to start with, but eventually you'll need more and will upgrade to, say, 1GB of RAM. With that upgrade the disk grows to 40GB, but the OS won't notice (Rackspace doesn't support automatic disk partitioning on FreeBSD). Fortunately, you can fix this yourself with a few simple CLI commands.
So let's plunge into the command line and see what we have and what we can do about it:
- Before the upgrade, the disk layout looks similar to what is shown below:
# gpart show -p
=>      1  1048575    ada2  MBR  (512M)
        1  1048575  ada2s1  linux-data  (512M)

=>       34  41942973    ada0  GPT  (20G)
         34        94  ada0p1  freebsd-boot  (47k)
        128  41942879  ada0p2  freebsd-zfs  (20G)

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zroot  19.9G  3.67G  16.2G  18%  1.00x  ONLINE  -
- After the upgrade the disk has grown to 40G, but the partition table still describes only 20G, the GPT is reported as corrupt, and the pool size is unchanged:

# gpart show
=>      1  2097151  ada2  MBR  (1.0G)
        1  2097151     1  linux-data  (1G)

=>       34  41942973  ada0  GPT  (40G) [CORRUPT]
         34        94     1  freebsd-boot  (47k)
        128  41942879     2  freebsd-zfs  (20G)

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zroot  19.9G  3.67G  16.2G  18%  1.00x  ONLINE  -
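As an optional sanity check (my own addition, not required for the procedure), diskinfo can confirm that the kernel really does see the larger raw device regardless of what the partition table says:

# diskinfo -v ada0

The mediasize reported there should correspond to the new 40G.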
On top of that, the kernel logs a warning about the misplaced backup GPT header:

GEOM: ada0: the secondary GPT header is not in the last LBA.

The gpart(8) man page explains what this means:

…The GPT primary metadata is stored at the beginning of the device. For redundancy, a secondary (backup) copy of the metadata is stored at the end of the device…

If the size of the device has changed (e.g., volume expansion) the secondary GPT header will no longer be located in the last sector. This is not a metadata corruption, but it is dangerous because any corruption of the primary GPT will lead to loss of the partition table. This problem is reported by the kernel with the message:

GEOM: provider: the secondary GPT header is not in the last LBA.
Following that advice, let's recover the GPT first; the new free space then shows up at the end of the disk:

# gpart recover ada0
# gpart show
=>      1  2097151  ada2  MBR  (1.0G)
        1  2097151     1  linux-data  (1G)

=>        34  83886013  ada0  GPT  (40G)
          34        94     1  freebsd-boot  (47k)
         128  41942879     2  freebsd-zfs  (20G)
    41943007  41943040        - free -  (20G)
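Before touching the partition itself, it may be worth keeping a copy of the partition table; gpart backup produces a dump that gpart restore can reapply later. This is an optional precaution of mine, and the output path below is just an example:

# gpart backup ada0 > /var/backups/ada0.gpt

Ideally copy that file off the instance as well, since it lives on the very disk being modified.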
Next, try to resize the ZFS partition (index 2) so that it claims the free space:

# gpart resize -i 2 ada0
gpart: Device busy
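The "Device busy" error comes from GEOM refusing to modify a provider that is in active use. A quick look at the pool (purely illustrative, not needed to proceed) shows that ada0p2 is exactly the device backing the live root pool:

# zpool status zroot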
Since the partition backs the live root pool, GEOM's safety checks have to be relaxed temporarily before the resize goes through:

# sysctl kern.geom.debugflags=16
kern.geom.debugflags: 0 -> 16
# gpart resize -i 2 ada0
# gpart show
=>      1  2097151  ada2  MBR  (1.0G)
        1  2097151     1  linux-data  (1G)

=>        34  83886013  ada0  GPT  (40G)
          34        94     1  freebsd-boot  (47k)
         128  83885919     2  freebsd-zfs  (40G)

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zroot  19.9G  3.67G  16.2G  18%  1.00x  ONLINE  -
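Once the resize is done, it's probably wise to set the GEOM debug flags back to their default. This step is my own habit rather than part of the original procedure:

# sysctl kern.geom.debugflags=0
kern.geom.debugflags: 16 -> 0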
The partition is now 40G, but the pool still reports 19.9G. The last step is to let ZFS expand onto the newly available space:

# zpool set autoexpand=on zroot
# zpool online -e zroot ada0p2 ada0p2
# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zroot  39.9G  3.67G  36.2G   9%  1.00x  ONLINE  -
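If you want one more confirmation that the new space actually reached the datasets, something like the following should do (again, just a sanity check I'd run, not part of the original steps):

# zpool get size,autoexpand zroot
# zfs list zroot
# df -h /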
Bye!
Comments

Pingback (May 13, 2014 at 11:30 pm):
[…] a few months after writing this I came across this article which does a good […]
Comment (May 27, 2014 at 7:55 pm):
Many thanks for this, just what I needed.
I had an issue with FreeBSD 10p4 where the very last command failed with the error:
# zpool online -e zroot ada0s3 ada0s3
cannot expand ada0s3: no such device in pool
cannot expand ada0s3: no such device in pool
I had to find the guid of the disk and use that instead of the device name.
# zdb
zroot -> vdev_tree -> children[0] -> guid
# zpool online -e zroot 14207384352887274889 14207384352887274889
After that it worked great!
Reply (May 27, 2014 at 8:44 pm):
Even more kudos for your great comment. Didn’t know that it’s possible to grow a pool using vdev’s guid ;-)
Thank you!
Comment (June 7, 2014 at 12:58 pm):
Thanks!
Reply (June 7, 2014 at 1:13 pm):
You’re most welcome! ;-)
Pingback (June 7, 2014 at 1:07 pm):
[…] Update 06/2014: while the above procedure is related to Solaris, some time ago I was trying to use it to expand a zpool on FreeBSD, where I found that the above procedure won't work. This happens because first you need to grow the partition where the filesystem resides, and only then grow the pool itself. I've found this useful article that describes the FreeBSD procedure. […]