Recovering data from iPhone backup

Here is a short story about recovering data from old iPhone backups. As you know, iTunes stores iPhone backups in your home directory, or more precisely in ~/Library/Application Support/MobileSync/Backup. The problem is that it’s not very convenient to mess with restoring an iPhone and, as a result, end up with your current data and configuration overwritten. To avoid that, you could use the iPhone Backup Extractor program, which can do all the hard work for you and restore absolutely everything that is kept in the backup files. It saved me once when I needed to restore photos from a backup that I had recovered from Time Machine, and I hope it will help you one day.

Jumped into Oracle Solaris 11 Express wagon

It’s no longer news that Oracle Solaris 11 Express has been released.
Unsurprisingly, upgrading from OpenSolaris (snv_134) turned out to be a trivial task: it was just a matter of updating the publishers and doing the usual image-update:

opensolaris:~$ pfexec pkg set-publisher --non-sticky opensolaris.org
opensolaris:~$ pfexec pkg set-publisher --non-sticky extra
opensolaris:~$ pfexec pkg set-publisher -P -g http://pkg.oracle.com/solaris/release/ solaris
opensolaris:~$ pfexec pkg image-update --accept

Some time later:

opensolaris:~$ uname -a
SunOS opensolaris 5.11 snv_151a i86pc i386 i86pc Solaris

My lovely SL500

My old friend, the SL500, gave me another gift just a few hours before my flight to Moscow from Irkutsk, where I had been replacing the robot module, or Z-drive assembly. So I rushed back to the customer’s site to find the following:

2010-11-12T22:38:21.483,, 510, robot, /usr/local/bin/Ifm, error, 0000, 5069, "Director - putResponse() servo mech reach event 5069 at 2009 tachs, 1849 mils"
2010-11-12T22:39:10.234,, 510, robot, /usr/local/bin/Ifm, error, 0000, 5069, "Director - putResponse() servo mech reach event 5069 at 2008 tachs, 1848 mils"
2010-11-12T22:39:10.291,, 3202, ifm, , error, 3000, 3313, "(request id = HOST/0x101d5a48) IfmMove::doPut(): move back to source from (LMRC) 0,2,1,2 failed, going inop"
2010-11-12T22:39:10.644,, 3202, ifm, , error, 3000, 3322, "(request id = HOST/0x101d5a48) IfmMove::doPut() to (LMRC) 0,2,1,2 : cartridge in hand, going inop"
2010-11-12T22:39:10.680,, 3202, ifm, , error, 3000, 3322, "(request id = HOST/0x101d5a48) IfmMove::commonMoveCommand(): PUT request of tape AB0017L3 from (LMRC) 0,2,3,9 to (LMRC) 0,2,1,2 failed:"

The robot’s hand was frozen, standing still just opposite the slot it had tried to load the tape back into. The result code 3313, which means “Put failed”, only confirms that. Since time was running out, the only solution I could come up with was to manually remove the tape from the robot’s claws and reboot the library. Once I’m back in Moscow, I will have to call the customer to find out the current state of the library. So this story will definitely have a sequel. Stay tuned…


Oracle Secure Backup

In preparation for a possible trip that will never happen, I had to go through the configuration steps of Oracle Secure Backup so I could do it quickly and professionally on-site.

Since I didn’t have a spare tape library to play with, I used the built-in tape drive from a D240 box and an SF6900 as my test bed.

First, I had to configure my tape drive to use the sgen driver, since st is not supported by OSB:

# update_drv -d -i '"scsiclass,01"' st
# add_drv -f -m '* 0666 bin bin' -i '"scsiclass,01" "scsiclass,08" "scsa,01.bmpt" "scsa,0.8.bmpt"' sgen
# ln -s /dev/scsi/sequential/c9t6d0 /dev/obt0

The next step is to create a host and assign it a few roles:

ob> mkhost -r admin,mediaserver,client -i server's_ip_address hostname
ob> lshost                                                             
sf6900-2      admin,mediaserver,client (via OB) in service

Once the host is defined, it’s time to proceed with the tape drive:

ob> mkdev -t tape -o -a sf6900:/dev/obt0 tc-tape
ob> lsdev 
    Device type:            tape
    Model:                  [none]
    Serial number:          0005306351
    In service:             yes
    Automount:              yes
    Error rate:             8
    Query frequency:        [undetermined]
    Debug mode:             no
    Blocking factor:        (default)
    Max blocking factor:    (default)
    UUID:                   907cedda-ad0e-102d-a8d3-d67ad710fa01
    Attachment 1:
        Host:               sf6900
        Raw device:         /dev/obt0

It’s required to bind an OSB user to a Unix account, so that the OS user is authorized to start a backup using RMAN:

ob> chuser --preauth sf6900:system_username+rman admin

Next, I created a database backup storage selector and a media family, using mkssel and mkmf respectively, to add more granularity to my backup configuration, i.e. a write window, a “keep volume set” period, and whether the volumes are appendable or not.
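As a rough sketch of what that looks like (the family name, selector name, database name, and retention values here are made up for illustration, so check the obtool reference for the exact options on your OSB version):

ob> mkmf --writewindow 7day --retain 30day OracleBackup
ob> mkssel --dbname orcltst --family OracleBackup --content full sel-orcltst

The media family ties the write window and retention period to the volumes, while the storage selector maps a particular database and backup content type onto that family.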

Finally, I used the following trivial RMAN script to make sure that everything was fine:

run {
  allocate channel c1 device type sbt
    parms 'ENV=(OB_MEDIA_FAMILY=OracleBackup)';
  backup database include current controlfile;
  backup archivelog all not backed up;
}

To be on the safe side, I used obtool to confirm that everything was indeed all right:

ob> lsj
Job ID           Sched time  Contents                       State
---------------- ----------- ------------------------------ ---------------------------------------
admin/3          none        database orcltst (dbid=1449400826) processed; Oracle job(s) scheduled
admin/3.1        none        datafile backup                running since 2010/11/02.17:06

ob> lspiece 
    POID Database   Content    Copy Created      Host             Piece name
     101 orcltst   full          0 11/02.17:07  sf6900      02ls0ppf_1_1
     102 orcltst   archivelog    0 11/02.17:09  sf6900      03ls0pua_1_1

American cop

I wish that one day the cops in my country would act similarly to prevent an accident, rather than blatantly looking for a chance to take a bribe.

Set a limit on a process’s size

At times it’s required to limit a process’s memory size by some value. In such cases “ulimit -m” is our best friend. But what if the task is to limit not just a certain process but all processes on a given system? I didn’t know how to do that and, frankly speaking, had never faced such a problem until recently. To my surprise, it was just a matter of reading the Linux documentation on its VM tunables to find a proper solution. So if swap is disabled, the following would tell the kernel not to hand out more than 70 percent of the available RAM:

# sysctl -w vm.overcommit_memory=2
# sysctl -w vm.overcommit_ratio=70

Conversely, if you do have swap enabled, the same statements would limit the maximum possible memory that could be committed to processes to a slightly different value. That’s because the general equation is:

commit = swap space(s) size + overcommit_ratio percent * RAM size.
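To make the equation concrete, here is a small worked example. The sizes are hypothetical (8 GiB of RAM and 2 GiB of swap, expressed in kB as /proc/meminfo reports them); on a real box the result should match the CommitLimit field in /proc/meminfo.

```shell
ram_kb=8388608       # 8 GiB of RAM (assumed for this example)
swap_kb=2097152      # 2 GiB of swap (assumed for this example)
ratio=70             # vm.overcommit_ratio

# commit limit = swap space size + overcommit_ratio percent of RAM
commit_limit_kb=$(( swap_kb + ram_kb * ratio / 100 ))
echo "$commit_limit_kb"   # roughly 7.6 GiB
```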

In the end, allow me to cite the official Linux documentation:

The Linux kernel supports the following overcommit handling modes:

0 – Heuristic overcommit handling. Obvious overcommits of
address space are refused. Used for a typical system. It
ensures a seriously wild allocation fails while allowing
overcommit to reduce swap usage. root is allowed to
allocate slightly more memory in this mode. This is the
default.

1 – Always overcommit. Appropriate for some scientific
applications.

2 – Don’t overcommit. The total address space commit
for the system is not permitted to exceed swap + a
configurable percentage (default is 50) of physical RAM.
Depending on the percentage you use, in most situations
this means a process will not be killed while accessing
pages but will receive errors on memory allocation as
appropriate.
The overcommit policy is set via the sysctl `vm.overcommit_memory’.

The overcommit percentage is set via `vm.overcommit_ratio’.

The current overcommit limit and amount committed are viewable in
/proc/meminfo as CommitLimit and Committed_AS respectively.