Grow root ZFS pool online under FreeBSD 9.2 using Rackspace’s instance

If you opted for a FreeBSD instance on Rackspace with the standard configuration, you'll end up with 512 MB of RAM and 20 GB of disk space. That's fine to start with, but eventually you'll need more and will upgrade, say, to 1 GB of RAM. With this upgrade your disk grows to 40 GB, but the OS won't notice (Rackspace doesn't support automatic disk partitioning on FreeBSD). Fortunately, you can fix that yourself with a few simple CLI commands.
So let's plunge into the command line and see what we have and what we can do about it:

  • Before the upgrade, the disk layout looks similar to this:

    # gpart show -p
    =>      1  1048575    ada2  MBR  (512M)
            1  1048575  ada2s1  linux-data  (512M)
    
    =>      34  41942973    ada0  GPT  (20G)
            34        94  ada0p1  freebsd-boot  (47k)
           128  41942879  ada0p2  freebsd-zfs  (20G)
    
    # zpool list
    NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
    zroot  19.9G  3.67G  16.2G    18%  1.00x  ONLINE  -
    
  • But after the upgrade, things have changed somewhat:

    # gpart show
    =>      1  2097151  ada2  MBR  (1.0G)
            1  2097151     1  linux-data  (1G)
    
    =>      34  41942973  ada0  GPT  (40G) [CORRUPT]
            34        94     1  freebsd-boot  (47k)
           128  41942879     2  freebsd-zfs  (20G)
    
    # zpool list
    NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
    zroot  19.9G  3.67G  16.2G    18%  1.00x  ONLINE  -
    
  • Notice that gpart reports ada0 as corrupted and zpool still can't see the extra 20 GB. But don't panic: in our case the [CORRUPT] warning just indicates that the disk's size has changed. We can confirm that by checking the kernel's messages with the dmesg command:

    GEOM: ada0: the secondary GPT header is not in the last LBA.
    
  • Let me quote part of gpart's man page so the above message becomes clearer:

    …The GPT primary metadata is stored at the beginning of the device. For
    redundancy, a secondary (backup) copy of the metadata is stored at the
    end of the device…

    If the size of the device has changed (e.g., volume expansion) the sec-
    ondary GPT header will no longer be located in the last sector. This is
    not a metadata corruption, but it is dangerous because any corruption of
    the primary GPT will lead to loss of the partition table. This problem
    is reported by the kernel with the message:

    GEOM: provider: the secondary GPT header is not in the last LBA.

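    To see why the kernel complains, we can redo the arithmetic from the gpart outputs above. A quick sanity check, assuming the usual 512-byte sectors:

    ```shell
    # Disk sizes in 512-byte sectors: 20 GB before, 40 GB after the upgrade.
    old_sectors=$((20 * 1024 * 1024 * 1024 / 512))
    new_sectors=$((40 * 1024 * 1024 * 1024 / 512))
    echo "backup GPT header written at LBA: $((old_sectors - 1))"
    echo "actual last LBA after the grow:   $((new_sectors - 1))"
    ```

    The backup header was written at LBA 41943039 when the disk was 20 GB; after the grow the real last LBA is 83886079, so the copy is no longer where GEOM expects it.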
  • Bingo. Thankfully, this is not critical and can be fixed in a second:

    # gpart recover ada0
    
    # gpart show
    =>      1  2097151  ada2  MBR  (1.0G)
            1  2097151     1  linux-data  (1G)
    
    =>      34  83886013  ada0  GPT  (40G)
            34        94     1  freebsd-boot  (47k)
           128  41942879     2  freebsd-zfs  (20G)
      41943007  41943040        - free -  (20G)
    
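    The free span gpart now reports is exactly the space the upgrade added. A quick check, again assuming 512-byte sectors:

    ```shell
    # The "- free -" span from gpart show, converted to GiB.
    free_sectors=41943040
    echo "free space: $((free_sectors * 512 / 1073741824)) GiB"   # prints 20
    ```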
  • Excellent. No more [CORRUPT] messages and, as a bonus, we can see the extra free space. So let's grow our partition:

    # gpart resize -i 2 ada0
    gpart: Device busy
    
  • Hm, device busy?! Well, there is a workaround for this one too: setting kern.geom.debugflags=16 temporarily allows writes to the in-use disk (remember to set it back to 0 once you're done):

    # sysctl kern.geom.debugflags=16
    kern.geom.debugflags: 0 -> 16
    # gpart resize -i 2 ada0
    
    # gpart show
    =>      1  2097151  ada2  MBR  (1.0G)
            1  2097151     1  linux-data  (1G)
    
    =>      34  83886013  ada0  GPT  (40G)
            34        94     1  freebsd-boot  (47k)
           128  83885919     2  freebsd-zfs  (40G)
    
    # zpool list
    NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
    zroot  19.9G  3.67G  16.2G    18%  1.00x  ONLINE  -
    
  • Finally, let's deal with our zpool and help it see the full capacity of the disk:

    # zpool set autoexpand=on zroot
    # zpool online -e zroot ada0p2 ada0p2
    
    # zpool list
    NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
    zroot  39.9G  3.67G  36.2G     9%  1.00x  ONLINE  -
    
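    The new SIZE lines up with the resized partition. A back-of-the-envelope check (512-byte sectors assumed, using the partition size from gpart show above):

    ```shell
    # 83885919 sectors at 512 bytes each, converted to whole GiB
    # with integer division:
    part_sectors=83885919
    echo $((part_sectors * 512 / 1073741824))   # prints 39, i.e. just under 40 GiB
    ```

    zpool rounds that to the 39.9G shown in the listing.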
  • That’s it. And thanks for reading.

Bye!

6 thoughts on “Grow root ZFS pool online under FreeBSD 9.2 using Rackspace’s instance”

  2. Many thanks for this, just what I needed.

    I had an issue with FreeBSD 10p4 where the very last command failed with the error:

    # zpool online -e zroot ada0s3 ada0s3
    cannot expand ada0s3: no such device in pool
    cannot expand ada0s3: no such device in pool

    I had to find the guid of the disk and use that instead of the device name.

    # zdb
    zroot -> vdev_tree -> children[0] -> guid

    # zpool online -e zroot 14207384352887274889 14207384352887274889

    After that it worked great!

    • Even more kudos for your great comment. Didn’t know that it’s possible to grow a pool using vdev’s guid ;-)
      Thank you!
