Footsteps of spring

Hooray! Finally it can be distinctly heard marching down the corduroy road, and once you’re left with the last sheet of a tear-off calendar in your palm you can crumple it and let the spring in. In the end, you don’t have a choice, since tomorrow is the first day of her reign. So you’d better strain your ears to hear her breath in the dripping of melting snow and breathe her in with the first beam of the sun. It’s springtime!
But today is still February, so I spent several hours at the skating rink with my son, teaching him to stand firmly and confidently on skates. It may sound unbelievable, but the rink was empty and we were the only skaters. When I was a kid there used to be lots of boys and girls, alone or with their parents, just skating or playing hockey, and we used to have skating rinks near almost every apartment block. I still remember how we fought like crazy playing ice hockey till dark, and it didn’t matter that the puck could hardly be seen. I said “used to” because now we prefer to party, to spend/kill time in shopping malls and to watch “Dancing on Ice”, and maybe that’s one of the reasons why we’ve fscked up the Olympic Games.

VxVM is watching over you

A couple of days ago a colleague told me about one neat feature of VxVM 5.x that could be quite helpful in the field. Imagine a situation where a customer complains about a VxVM misconfiguration and blames your team for sloppy work. To prove him wrong, you could sift through VxVM’s command log files to get a list of the commands you typed during the initial configuration. If the customer did something wrong himself and is now trying to shift the blame onto you, these log files can be of invaluable help as well: just show where and when the mistake was made. The log files can be found in /etc/vx/log and are named /etc/vx/log/cmdlog and /etc/vx/log/cmdlog.number for the current and historic command logs respectively. There is a vxcmdlog(1M) command to give you some control over this feature.
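For example, to check how command logging is currently configured and to keep a few more historic log files around, something along these lines should do (a sketch; the exact options and defaults are described in vxcmdlog(1M) and may vary between releases):

# vxcmdlog -l
# vxcmdlog -m on
# vxcmdlog -n 10

The first command lists the current settings, the second makes sure logging is enabled, and the third raises the number of historic cmdlog.number files that are kept around.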
One thing to keep in mind is that not every command script is logged:

Most command scripts are not logged, but the command binaries that they call are logged. Exceptions are the vxdisksetup, vxinstall, and vxdiskunsetup scripts, which are logged.

Enjoy!

SunFire 4810 upgrade to Solaris 10u8 issue

Yesterday, I was doing a planned upgrade of a SunFire 4810 to Solaris 10u8 and faced the following error just a short while after invoking “boot net - install”. It’s worth noting that prior to the OS upgrade I’d successfully updated the OBP to the latest release, 5.20.14.

SunOS Release 5.10 Version Generic_141444-09 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
FATAL: PROM_PANIC[0x0]: assertion failed: TTE_IS_VALID(ttep), file: ../../../sun4u/gen/src/hat_sfmmu.c, line: 741
debugger entered.

Both Google and SunSolve led me to the following bug, but in my case I wasn’t installing with a ZFS root.

Eventually, I used a verbose boot with the full device path to the network interface instead of the net alias to jumpstart the server, but I’m not sure whether the original issue wasn’t caused by a moon phase, a solar flare or the solar wind.

boot /ssm@0,0/pci@18,700000/pci@1/SUNW,hme@0,1 -v - install
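
In case you need to look up the full device path yourself, the devalias command at the OBP ok prompt shows what an alias expands to; on this box the net alias pointed at the very hme interface used above (output roughly reconstructed):

ok devalias net
net                      /ssm@0,0/pci@18,700000/pci@1/SUNW,hme@0,1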

Going to double check on another SF4810 since I didn’t have enough time during the last maintenance window.

Update
So I played a bit with another SF4810 and followed exactly the same steps hoping to trigger the error described above, but everything I did was fruitless. Sifting through the code, I only dug out that the assertion fails when tte_inthi >= 0. The PTE (Page Table Entry) related code:

 typedef union {
      	struct tte {
      		uint32_t	v:1;		/* 1=valid mapping */
      		uint32_t	sz:2;		/* 0=8k 1=64k 2=512k 3=4m */
      		uint32_t	nfo:1;		/* 1=no-fault access only */
      
      		uint32_t	ie:1;		/* 1=invert endianness */
      		uint32_t	hmenum:3;	/* sw - # of hment in hme_blk */
      
      		uint32_t	rsv:7;		/* former rsv:1 lockcnt:6 */
      		uint32_t	sz2:1;		/* sz2[48] Panther, Olympus-C */
      		uint32_t	diag:1;		/* See USII Note above. */
      		uint32_t	pahi:15;	/* pa[46:32] See Note above */
      		uint32_t	palo:19;	/* pa[31:13] */
      		uint32_t	no_sync:1;	/* sw - ghost unload */
      
      		uint32_t	suspend:1;	/* sw bits - suspended */
      		uint32_t	ref:1;		/* sw - reference */
      		uint32_t	wr_perm:1;	/* sw - write permission */
      		uint32_t	exec_synth:1;	/* sw bits - itlb synthesis */
      
      		uint32_t	exec_perm:1;	/* sw - execute permission */
      		uint32_t	l:1;		/* 1=lock in tlb */
      		uint32_t	cp:1;		/* 1=cache in ecache, icache */
      		uint32_t	cv:1;		/* 1=cache in dcache */
      
      		uint32_t	e:1;		/* 1=side effect */
      		uint32_t	p:1;		/* 1=privilege required */
      		uint32_t	w:1;		/* 1=writes allowed */
      		uint32_t	g:1;		/* 1=any context matches */
      	} tte_bit;
      	struct {
      		int32_t		inthi;
      		uint32_t	intlo;
      	} tte_int;
      	uint64_t		ll;
      } tte_t;


#define	tte_inthi	tte_int.inthi
#define	TTE_IS_VALID(ttep)	((ttep)->tte_inthi < 0)

I wish there were a clearer and more precise description of tte_int.inthi and tte_t.ll in the source code. Reading the union, though, the intent can be pieced together: ll is the whole 64-bit TTE, while inthi and intlo are its upper and lower 32-bit halves. Since the v (valid) bit occupies the most significant bit of the upper half, a valid mapping makes the signed inthi negative, and TTE_IS_VALID simply tests the sign bit. In other words, the PROM panic above means the kernel tripped over a TTE whose valid bit was clear where it expected a valid mapping.

Moving ZFS disk into Solaris with UFS

Doing another jumpstart installation, I stumbled upon the following error:

Processing profile
        - Opening Flash archive
        - Validating Flash archive
        - Selecting all disks
        - Configuring boot device

ERROR: The boot disk (c0t0d0) is not selected

ERROR: Flash installation failed
Solaris installation program exited.

An obvious first step was to check the disk with the format utility. Printing its partition table, I saw the following picture:

Part      Tag    Flag     First Sector        Size        Last Sector
  0 unassigned    wm                 0          0              0    
  1 unassigned    wm                 0          0              0    
  2     backup    wm                34      33.91GB         71116540    
  3 unassigned    wm                 0          0              0    
  4 unassigned    wm                 0          0              0    
  5 unassigned    wm                 0          0              0    
  6 unassigned    wm                 0          0              0    
  7 unassigned    wm                 0          0              0    
  8   reserved    wm          71116542       8.00MB         71132925 

“How come I have nine partitions?!” was my first reaction. And just in a jiffy I recalled that this disk used to be part of a ZFS pool and, of course, it had been formatted with an EFI label. So, how do you change it back from an EFI to an SMI label? To accomplish that I had to run “format -e”, because without “-e” the label type can’t be changed.


# format -e c0t0d0
partition> l
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Auto configuration via format.dat[no]? y
partition> print
Current partition table (default):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0                0         (0/0/0)            0
  1       swap    wu       0                0         (0/0/0)            0
  2     backup    wu       0 - 24619       33.92GB    (24620/0/0) 71127180
  3 unassigned    wm       0                0         (0/0/0)            0
  4 unassigned    wm       0                0         (0/0/0)            0
  5 unassigned    wm       0                0         (0/0/0)            0
  6        usr    wm       0 - 24619       33.92GB    (24620/0/0) 71127180
  7 unassigned    wm       0                0         (0/0/0)            0

partition> q
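
A quick way to double-check that the disk now carries an SMI label, assuming the same disk name, is to print its VTOC from the shell:

# prtvtoc /dev/rdsk/c0t0d0s2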

Sorted. Now I was able to continue my interrupted jumpstart.

Send a complete ZFS pool

Imagine that you have a ZFS pool with dozens of datasets which you need to migrate to another box. “How do I accomplish that?” is a reasonable question. Actually, it’s dead easy with the zfs send/receive mechanism:

# zpool list zones
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zones  67.5G  30.5G  37.0G    45%  ONLINE  -

First, create a recursive snapshot; once it completes, you can send the resulting data stream to the other host. The -R flag tells zfs send to build a replication stream that includes all descendant datasets, their snapshots and properties, while on the receiving side -F forces a rollback to the most recent snapshot and -d derives the target dataset names from the sent stream:

# zfs snapshot -r zones@zones_snapshot
# zfs send -R zones@zones_snapshot | ssh user@host "zfs receive -Fd zones"
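
If the source pool keeps changing while you migrate, a follow-up incremental pass keeps the other side in sync. A sketch, assuming a second recursive snapshot taken just before the final cutover (the snapshot name is made up):

# zfs snapshot -r zones@zones_snapshot2
# zfs send -R -i zones@zones_snapshot zones@zones_snapshot2 | ssh user@host "zfs receive -Fd zones"

An incremental stream only carries the blocks changed since the previous snapshot, so the second pass is usually much quicker than the initial full send.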