How to power off USP-V when it doesn’t want to

In the era of VSP G1000 this post may sound dated, but I still hope it helps some poor soul in the same situation I found myself in some time ago. The task was laughably simple: power off a USP-V, power it up just to make sure it could still boot flawlessly, and shut it down again. All went nice and dandy till the final step – the ultimate power off. So I opened the DKC panel and flipped two switches simultaneously: PS-ENABLE and PS-OFF (all in accordance with the maintenance manual).


So far so good. During the first power-off iteration it took the array (DKC and one DKU) roughly 15-20 minutes to cut the power from its components so the switches on the AC boxes could be turned to the OFF position. But not this time… After an hour and a half of waiting the system was still up. Thankfully, though, the disks had been spun down successfully, which opened the door for a forcible shutdown procedure. It is as simple and straightforward as a samurai’s sword.

  • Open the DKC panel and unscrew it as shown in the picture below.
  • IMG_1928.JPG

  • Pay attention to the jumpers. We will be using JP3, which is right above JP2.
  • IMG_1929.JPG

  • If you don’t have a jumper (I didn’t), there is a workaround. Go to the back of a DKU, open its door, and at the bottom you’ll find a recess with another set of jumpers.
  • IMG_1935-0.JPG

  • It is safe to pick and pull out any of these jumpers (remember that the disks must be powered off before that).
    I’ve been told that these jumpers define the physical position of a DKU rack relative to the DKC (whether it’s on the left or the right and how far: DKU-R1, DKU-L1, etc.)
  • Once you have a jumper, put it into JP3 in the DKC panel and turn the CHK RST switch on by pressing its upper half.
  • A moment after that the array will shut down.

Posted in: HDS |

AMS/HUS Volume migration

Sometimes, due to a number of different reasons, there may be a need to perform a nondisruptive LUN migration from one DP pool to another. Thankfully, it’s trivially easy to do with Hitachi’s AMS/HUS storage arrays:

  1. First, enable Volume Migration if this option is still disabled:
     auopt -unit array_name -option VOL-MIGRATION -st enable
  2. Create a new LUN that will be used as the destination:
     auluadd -unit array_name -lu 2 -dppoolno 1 -size 1468006400 -widestriping enable -fullcapacity enable
  3. Create a special LUN called DMLU (Differential Management Logical Unit), which keeps the differential data during the migration. Please note that this LUN can’t be smaller than 10GB and full capacity mode cannot be enabled for it:
     auluadd -unit array_name -lu 3 -dppoolno 1 -size 70G -widestriping enable
     audmlu -unit array_name -set -lu 3
  4. Start the LUN migration from the source (LUN 1) to the destination LUN (LUN 2) that we’ve just created:
     aumvolmigration -unit array_name -add -lu 2
     aumvolmigration -unit array_name -create -pvol 1 -svol 2
  5. Basically, we’ve just paired the two LUNs and started the copy from one to the other. To see the progress, run the following command:
     aumvolmigration -unit array_name -refer
  6. Once the copy is over, the pair can be split, and since the LUN id where the data lives doesn’t change, the destination LUN, in our example LUN 2, can be deallocated and freed:
     aumvolmigration -unit array_name -split -pvol 1 -svol 2

Just a reminder: to make the migration nondisruptive the original LUN id must not change, and that’s why in the end the only LUN that can be deleted is the one we created as the destination – the LUN with id 2.
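By the way, the numeric -size value that auluadd takes (when no unit suffix such as G is given) appears to be counted in 512-byte blocks, which is consistent with the example above: 1468006400 blocks is exactly 700 GB. A minimal shell sketch of that conversion, assuming 512-byte blocks and 1 GB = 1024^3 bytes:

```shell
# Convert between GB and the 512-byte block counts used for auluadd -size.
# Assumption: 512-byte blocks and binary gigabytes (1 GB = 1024^3 bytes).
gb_to_blocks() {
  echo $(( $1 * 1024 * 1024 * 1024 / 512 ))
}

blocks_to_gb() {
  echo $(( $1 * 512 / 1024 / 1024 / 1024 ))
}

blocks_to_gb 1468006400   # the destination LUN above -> prints 700
gb_to_blocks 700          # prints 1468006400
```

Handy for sanity-checking the destination LUN size before creating it.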

That’s it. Have fun and safe migrations.
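For longer migrations it can be handy to poll the -refer output in a loop instead of re-running it by hand. A sketch of that idea, where check_migration is a hypothetical stand-in: in real life its body would grep/awk the status out of `aumvolmigration -unit array_name -refer`, whose exact output format I won’t reproduce here:

```shell
# Stand-in for extracting the pair status; replace the body with real
# parsing of `aumvolmigration -unit array_name -refer` output.
check_migration() {
  cat /tmp/migration_status
}

# Poll until the (simulated) status reads "Completed", sleeping $1 seconds
# between checks (default 60).
wait_for_migration() {
  while [ "$(check_migration)" != "Completed" ]; do
    sleep "${1:-60}"
  done
  echo "migration finished"
}
```

Usage: `echo Completed > /tmp/migration_status; wait_for_migration 5`.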

Posted in: HDS |

What to do if HDvM reports unknown version

If you notice that the HDvM web interface says it’s running an unknown version (and recommends contacting HDS support), or applying a patch fails like this:

* Update Utility                                                      *
[HDvM 072003] Hitachi Device Manager
Now searching for target products...

The target product is not found.

Then don’t worry, as it’s very easy to fix. Most probably you’re missing the .HiCommandSPInfo file, which by default lives under /opt/HiCommand/. In my case, for some strange reason, it was under /opt/, and after I copied it to the correct location (/opt/HiCommand/.HiCommandSPInfo) the issue was solved:

* Update Utility                                                      *
[HDvM 072003] Hitachi Device Manager
Now searching for target products...

[HDvM 072000] is found.
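The check-and-fix boils down to a couple of file tests. A sketch, with the directories passed as parameters so it can be tried safely; restore_spinfo is a hypothetical helper name, and on a real HDvM server the arguments would be /opt/HiCommand and /opt:

```shell
# If .HiCommandSPInfo is missing from the expected directory but present in a
# fallback location (as in my case: /opt instead of /opt/HiCommand), copy it
# back where the Update Utility looks for it.
restore_spinfo() {
  local expected="$1" fallback="$2"
  if [ -f "$expected/.HiCommandSPInfo" ]; then
    echo "ok"
  elif [ -f "$fallback/.HiCommandSPInfo" ]; then
    cp "$fallback/.HiCommandSPInfo" "$expected/"
    echo "restored"
  else
    echo "missing"
  fi
}
```

Usage: `restore_spinfo /opt/HiCommand /opt`.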

Good luck.

Posted in: HDS |

Migrating Hi-Track

Yesterday I was moving a Hi-Track monitor from Windows to Linux and just want to give a quick overview of the process. To preserve all of your previous Hi-Track configuration, do the following:

  1. Install Hi-Track, unsurprisingly.
  2. Copy devices.export from your old installation and import it into the new one:
    /usr/hds/hitdfmon/rundfmon import ./devices.export
  3. Copy the object.dat28 and HitDFmon.config files to /usr/hds/hitdfmon/ (that’s the default installation directory for Hi-Track; yours could be different).
  4. Note that object.dat28 is needed if you’ve configured a username/password to connect to the devices you monitor.
  5. Start Hi-Track:
    /etc/init.d/rundfmon start

Easy peasy.
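The copy steps above can be bundled into a tiny script. A sketch under a couple of assumptions: copy_hitrack_config is a hypothetical helper, the source argument is a staging directory where you’ve dropped the files taken off the old Windows box, and the destination is the default /usr/hds/hitdfmon on a real install:

```shell
# Carry the old Hi-Track files over to the new installation directory.
# Paths are parameters so the sketch is easy to adapt.
copy_hitrack_config() {
  local old="$1" new="$2"
  for f in devices.export object.dat28 HitDFmon.config; do
    if [ -f "$old/$f" ]; then
      cp "$old/$f" "$new/"
    fi
  done
}

# After copying, import the device list and start the monitor:
#   /usr/hds/hitdfmon/rundfmon import ./devices.export
#   /etc/init.d/rundfmon start
```

The import and start commands are left as comments since they only make sense on the Hi-Track host itself.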

Posted in: HDS |

HTnM data migration between Windows and Linux

Some time ago I was trying to migrate HTnM data, more precisely the HTnM agents’ datastore history, from Windows to Linux and want to share my story. Migrating the HTnM configuration and its data from Windows to Linux is very easy: just follow the documentation steps verbatim. But moving historical data from the HTnM agents, whether RAID or SAN, is a different thing. One may think it’s a straightforward operation – and it really is – but the documentation doesn’t quite agree:

OS groups that can be migrated
OS groups are defined as follows for the Tuning Manager series. Store database migration can only be performed for the same OS groups.

• Windows groups:
  • Windows Server 2003 (x86)
  • Windows Server 2003 (x64)
  • Windows Server 2008 (x86)
  • Windows Server 2008 (x64)
• HP-UX groups:
  • HP-UX 11i V2 (IPF)
  • HP-UX 11i V3 (IPF)
• Solaris (SPARC) groups:
  • Solaris 9 (SPARC)
  • Solaris 10 (SPARC)
• Solaris (X64) group:
  • Solaris 10 (X64)
• AIX groups:
  • AIX 5L V5.3
  • AIX V6.1
• Linux groups:
  • Linux4 (x86)
  • Linux5 (x86)

Moreover, it says that cross-OS migration is unsupported (I opened a case with HDS and they confirmed my sneaking suspicion). Period.
Thankfully, I had some spare time to prove the documentation wrong. Sounds overconfident, I know. ;-)
Anyway, I pretended I hadn’t read the part stating this was unsupported and went through the steps as if I were migrating data between Windows-based machines. First, I boldly tried to migrate directly from Windows (HTnM 6.4) to Linux (HTnM 7.1), and everything went very nicely until I tried to restore the data from the RAID agent on the Linux server. When I ran the jpcresto tool, I was told that the data model of the agent on the migration destination (the Linux server) differed from the migration source (Windows) version. JFYI, HTnM ver. 6.4 uses data model ver. 7.6, whilst HTnM ver. 7.1 has data model ver. 8.0. For me it meant that before being able to migrate the store database I had to convert it from the older version to the newer one. Hitachi provides a special tool just for that:

# /opt/jp1pc/tools/jpcdbctrl dmconvert -d /tmp/export/
KAVE05847-I Data model conversion of backup data will now start. (dir=/tmp/export/)
KAVE05099-E An internal command terminated abnormally. (/opt/jp1pc/bin/jpccvtmdl -sdict "/tmp/export//STDICT.DAT.bak" -ddict "/tmp/export//STDICT.DAT" -stdir "/tmp/export/" -wkdir "/tmp/export//cvtwork" -directdbdir "/tmp/export/")
KAVE05849-E Data model conversion of backup data ended abnormally. (dir=/tmp/export/)

As I mentioned before, I opened a case with HDS and was “pleased” to hear that what I was doing was unsupported. So I was on my own. But I didn’t give up. I decided to upgrade HTnM on Windows to version 7.1 and repeat everything once again. To my surprise, since I no longer had to mess around with converting the data model, the agent’s data were restored swimmingly. One important note: I had to run dos2unix against the jpcsto.ini file since it contained Windows CRLF line terminators, which is expected given that I was migrating from a Windows server. Once I did that, there were no more hurdles to overcome:

# ./jpcresto agtd /tmp/hdsb inst=uspv-array 
Directory receiving backup data = /opt/jp1pc/agtd/store/uspv-array 
Directory supplying backup data = /tmp/hdsb 
Clearing indexes... 
Appropriate indexes removed. 
Replace in progress. 
Replacing /tmp/hdsb/STDC.DB to /opt/jp1pc/agtd/store/uspv-array/STDC.DB ... 
Replacing /tmp/hdsb/STDC.IDX to /opt/jp1pc/agtd/store/uspv-array/STDC.IDX ... 
Replacing /tmp/hdsb/STDM.DB to /opt/jp1pc/agtd/store/uspv-array/STDM.DB ... 
Replacing /tmp/hdsb/STDM.IDX to /opt/jp1pc/agtd/store/uspv-array/STDM.IDX ... 
Replacing /tmp/hdsb/STAM.DB to /opt/jp1pc/agtd/store/uspv-array/STAM.DB ... 
Replacing /tmp/hdsb/STAM.IDX to /opt/jp1pc/agtd/store/uspv-array/STAM.IDX ... 
Replacing /tmp/hdsb/STPI/5/2010/1130/001/PI_LDS.DB to 
/opt/jp1pc/agtd/store/uspv-array/STPI/5/2010/1130/001/PI_LDS.DB ... 
Replacing /tmp/hdsb/STPI/5/2010/1130/001/PI_LDS.IDX to 
/opt/jp1pc/agtd/store/uspv-array/STPI/5/2010/1130/001/PI_LDS.IDX ... 
Replacing /tmp/hdsb/STPI/5/2010/1130/001/PI_CLPS.DB to 
/opt/jp1pc/agtd/store/uspv-array/STPI/5/2010/1130/001/PI_CLPS.DB ... 
Replacing /tmp/hdsb/STPI/5/2010/1130/001/PI_CLPS.IDX to 
/opt/jp1pc/agtd/store/uspv-array/STPI/5/2010/1130/001/PI_CLPS.IDX ... 
Replacing /tmp/hdsb/STPI/5/2010/1130/001/PI_LDE.DB to 
/opt/jp1pc/agtd/store/uspv-array/STPI/5/2010/1130/001/PI_LDE.DB ... 
Replacing /tmp/hdsb/STPI/5/2010/1130/001/PI_LDE.IDX to 
/opt/jp1pc/agtd/store/uspv-array/STPI/5/2010/1130/001/PI_LDE.IDX ... 
Replacing /tmp/hdsb/STPI/5/2010/1130/001/PI_PTS.DB to 
/opt/jp1pc/agtd/store/uspv-array/STPI/5/2010/1130/001/PI_PTS.DB ... 

... SKIPPED ... 

Replacing /tmp/hdsb/STPD/2011/0726/001/PD_RGC.DB to 
/opt/jp1pc/agtd/store/uspv-array/STPD/2011/0726/001/PD_RGC.DB ... 
Replacing /tmp/hdsb/STPD/2011/0726/001/PD_RGC.IDX to 
/opt/jp1pc/agtd/store/uspv-array/STPD/2011/0726/001/PD_RGC.IDX ... 
Replacing /tmp/hdsb/STPH.DB to /opt/jp1pc/agtd/store/uspv-array/STPH.DB ... 
Replacing /tmp/hdsb/STPH.IDX to /opt/jp1pc/agtd/store/uspv-array/STPH.IDX ... 
Replacing /tmp/hdsb/STPA.DB to /opt/jp1pc/agtd/store/uspv-array/STPA.DB ... 
Replacing /tmp/hdsb/STPA.IDX to /opt/jp1pc/agtd/store/uspv-array/STPA.IDX ... 
Replacing /tmp/hdsb/STPT.DB to /opt/jp1pc/agtd/store/uspv-array/STPT.DB ... 
Replacing /tmp/hdsb/STPT.IDX to /opt/jp1pc/agtd/store/uspv-array/STPT.IDX ... 
KAVE06006-I Restore processing of the Store database terminated normally. 
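As for the dos2unix step above: if dos2unix isn’t installed on the Linux box, stripping the CR characters with tr works just as well. A minimal sketch, with strip_crlf as a hypothetical helper and /tmp/jpcsto.ini standing in for the real file under the agent’s store directory:

```shell
# Remove Windows CR characters from a file, equivalent to `dos2unix FILE`.
strip_crlf() {
  tr -d '\r' < "$1" > "$1.unix" && mv "$1.unix" "$1"
}

# Simulate a file with CRLF terminators, as migrated off a Windows server.
printf 'line1\r\nline2\r\n' > /tmp/jpcsto.ini
strip_crlf /tmp/jpcsto.ini
```

After the call the file contains plain LF-terminated lines.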

More than that, when I logged onto the HTnM web interface and searched for my one-year-old data from the USP-V array – it was all there for my observation. Sweet. What’s the gist of this post, you may ask? Well, not only never give up, but also take every recommendation, even one from a vendor’s support team, with a grain of salt. Peace.

Posted in: HDS |

Hi-Track read timed out

I’m trying to investigate a nasty problem. Every day during seemingly the same hours, i.e. from 4 to 8 a.m., Hi-Track reports that one of the controllers in the array it monitors is not responding. On the second consecutive check the problem resolves itself and the array stays healthy for the rest of the day. The next day it all starts over. The disk array (AMS2100) has the latest firmware installed, and no errors or packet drops have been caught in the network. But Hi-Track’s logs clearly show that there is a problem, and as soon as it reports “Read timed out (2)” twice in a row the aforementioned alert hits the road. I hope I can find the answer before my head cracks.

Update. Once we had migrated our HCS (HDvM and HTnM) from Windows to Linux, the issue miraculously went away. And that wasn’t just a coincidence. We didn’t wipe out our Windows installation straight away but just powered it off. And when we turned it back on, guess what? Yes, the original issue started popping up again.