Wednesday, October 10, 2012

Creating Flash Archive Recovery Image



Purpose

To provide instructions to the customer to create a Flash Archive recovery image that can be used to restore a system to "factory fresh" condition. This document provides the simplest instructions to create a Flash Archive (FLAR) image that can be loaded onto the target system to recover from a failed disk drive.

Assumptions

The customer has access to both:

Initial boot media (installation CD/DVD) or a netinstall service.

Off-system storage for the FLAR image.

Instructions

Creating the FLAR image.

Record the partition table of the disk drive that the image is for. This assumes that the replacement disk drive will be the same size and partitioned identically to the original drive.
There are two methods for obtaining the partition table of the disk drive:

As a root-level user, use the format(1M) command to print out the partition table of the drive that the FLAR image will be taken from.
# format
The format command will provide the names of the partitions.

As a root-level user, use the prtvtoc(1M) command to generate the partition information.
# prtvtoc /dev/dsk/c0t0d0s0
The prtvtoc command reports the geometry of the disk and the size of each partition. Save the information to a safe location; it will be used to re-create the partition table when the system image is restored.
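
One simple way to keep that information is to capture the prtvtoc output to a file stored alongside the FLAR image and replay it onto the replacement disk with fmthard(1M) at recovery time. This is a sketch only; the file name is arbitrary and the device names assume the example disk above and an identically sized replacement:
# prtvtoc /dev/dsk/c0t0d0s0 > /var/tmp/c0t0d0.vtoc
# fmthard -s /var/tmp/c0t0d0.vtoc /dev/rdsk/c0t0d0s2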

Make sure that there is adequate space for the FLAR image where it will be created. The FLAR archive will require up to 5GB of space without compression.
# df -h /tmp
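
A rough estimate of the final (uncompressed) archive size, sketched here, is the total used space of the UFS file systems that will be included; the exact set of file systems depends on the system:
# df -h -F ufs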

Create the FLAR archive.
As a root-level user, execute the flarcreate(1M) command. In this example, the FLAR image will be stored in a directory under /tmp named FLAR_recovery, and the FLAR image will be named newsystem_recovery.flar. If stored locally, the creation of the image should take less than 30 minutes.
# mkdir /tmp/FLAR_recovery
# flarcreate -n my_recovery_image -x /tmp/FLAR_recovery /tmp/FLAR_recovery/newsystem_recovery.flar
In this example:

·         The "-n my_recovery_image" implants a name into the FLAR image. The name should be something unique and meaningful to better identify it as the FLAR image for the system.

·         The "-x /tmp/FLAR_recovery" option causes the /tmp/FLAR_recovery directory and its contents to be excluded from the FLAR image since it will not be needed in the recovery image.
NOTE: By default, the flarcreate command ignores items that are located in "swap" partitions.

·         /tmp/FLAR_recovery/newsystem_recovery.flar is the path and filename of the FLAR image. The filename should be something unique and meaningful to better identify it as the FLAR image for the system.
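
If local space is tight, the archive can also be compressed at creation time with the -c option of flarcreate. This is a variant of the example above, not a required step; compression reduces the archive size at the cost of a longer creation time:
# flarcreate -n my_recovery_image -c -x /tmp/FLAR_recovery /tmp/FLAR_recovery/newsystem_recovery.flar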

Saving the FLAR image to a secure off-system location.

Obviously, saving your recovery image on the same disk drive that you intend to restore after a failure will not be useful when that same disk drive fails. The FLAR image must be saved to an external device or at a remote location across NFS. That external device or remote location must be accessible to the system at recovery time.

Copy the new FLAR to a safe location:
# cp /tmp/FLAR_recovery/newsystem_recovery.flar /net/my-safe-machine/FLAR_image
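
Before removing the local copy, it is worth confirming that the copied archive is intact. A simple check, using the paths from this example, is to compare the checksums of the two files:
# cksum /tmp/FLAR_recovery/newsystem_recovery.flar /net/my-safe-machine/FLAR_image/newsystem_recovery.flar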

Recovering the system image from a FLAR image.

The process begins as a normal installation using whichever install method you choose; instead of installing the operating system from the install media, the installer restores the system from the FLAR image.

Begin the boot process.

Using the initial boot media (installation CD/DVD).
ok> boot cdrom

Using the netinstall service.
ok> boot net

Supply the network, date/time, and password information for the system.

When the "Solaris Interactive Installation" part is reached, select "Flash" as the installation choice.

Supply the path to the off-system location of the FLAR image:
/net/my-safe-machine/FLAR_image/newsystem_recovery.flar

Select the correct Retrieval Method (HTTP, FTP, NFS) to locate the FLAR image.
For our example, we copied to an NFS location.

Specify the FLAR image location.
From our example, the location would be:
my-safe-machine:/FLAR_image/newsystem_recovery.flar

At the "Select Disks" section, select the disk to install the FLAR image onto.

There is no need to preserve existing data.

At the "File System and Disk Layout" window, choose "Customize" to edit the disk slices and enter the values from the partition table of the original disk. Each entry in the saved partition table corresponds to a slice on the disk; partition 0 maps to slice 0 (s0) on the hard drive.

·         The slice sizes can be viewed in Cylinders to better match the output from the partition table.

·         Do not change the size of slice 2. It must span the entire disk regardless of how the remaining space is allocated.

·         If the replacement disk has more storage space than the original disk, then it can be partitioned to use the available space. However, each partition must be allocated at least as much space as it had on the original disk.

After the system reboots, the recovery is complete.

Additional Considerations

Rebuilding the Device Trees

The recovery instructions assume that none of the hardware components have been added, removed or moved between the time that the recovery image was created and the time that a recovery is performed. If a system has been recovered after hardware has been changed, then it is possible that the device trees (/dev and /devices) need to be updated. This can be done with either a reconfiguration reboot of the system, or by using the devfsadm(1M) command.

To rebuild the device trees, as a root-level user, use the devfsadm(1M) command:
# devfsadm -C
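
The reconfiguration reboot mentioned above can be requested ahead of time as sketched here; booting with the -r flag from the ok prompt achieves the same result:
# touch /reconfigure
# init 6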

 

Solaris 10 Live Upgrade


Implementation Plan - Solaris 10 Live Upgrade combined with Sun Cluster & VxVM

 

 

Node 2 Upgrade.

 

1.   Fail over the DB Resource group to the epdci node.

 

root@node1 # scswitch -z -g rg-epd-db -h epdci

 

2.   Remove node 2 (epddb) from the VxVM device group node lists.

 

root@node1 # scconf -r -D name=dgvx-epu-ci,nodelist=epddb

root@node1 # scconf -r -D name=dgvx-epu-db,nodelist=epddb

 

3.   Remove node 2 from the resource groups.

 

root@node1 # scrgadm -c -g rg-epd-ci -h epddb

root@node1 # scrgadm -c -g rg-epd-db -h epddb

 

4.   Remove node 2 from the cluster and shut it down.

 

root@node1 # /usr/cluster/bin/scconf -a -T node=epddb
 
root@node2 # shutdown -g0 -y -i0
 

5.   Boot the node 2 server with the boot -x option (non-cluster mode).

 
ok> boot -x
 

6.   Uninstall EMC PowerPath.

 
root@node2 # pkgrm EMCpower
 

7.   Uninstall the Sun Cluster software.

 
root@node2 # /usr/cluster/bin/scinstall -r
 

8.   Uninstall the VxVM 4.0 software.

 

root@node2 # cd /var/crash/vm.4.0.sol/volume_manager/

root@node2 # ./uninstallvm

 

 

9.   Reboot the server.

 

root@node2 # init 6

 

10.   Start the Solaris 10 Live Upgrade, following the same procedure as for a standalone server (see the next section).



Solaris Live Upgrade from Solaris 9 to Solaris 10

 

1.    The partition flags of the disk slices that will be used for Live Upgrade must be set to "wm". If they are not, creating the alternate boot environment (ABE) will fail with the following error (a way to check the flags is sketched after the error):

ERROR: Unable to umount ABE <Solaris10>: cannot make ABE bootable.

Making the ABE <Solaris10> bootable FAILED.
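
A way to check, and if necessary correct, the flags is the partition menu of format(1M). This is a sketch only, assuming the BE disk is c1t1d0 as in step 6 below:
root@njcsprprvn04 # format c1t1d0
format> partition
partition> print          (the Flag column should show wm for the slices the ABE will use)
partition> 0              (selecting a slice number prompts for its tag, flag, starting cylinder and size)
partition> label
partition> quit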

 

2.    Install the Live Upgrade packages by running the liveupgrade20 installer from the Solaris 10 DVD:

gaalstadjmp01@/export/install/media/Solaris_10_305/Solaris_10/Tools/Installers
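
A sketch of the install, assuming the Installers directory above is reachable from the target host (for example over NFS); the -noconsole -nodisplay options run the installer without the GUI:
root@njcsprprvn04 # cd /export/install/media/Solaris_10_305/Solaris_10/Tools/Installers
root@njcsprprvn04 # ./liveupgrade20 -noconsole -nodisplay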

 

3.    If the server is running VxVM 4.0, upgrade it to VxVM 5.0. The package is available on the Jumpstart server:

root@gaalstadjmp01 # /export/install/software/sf-50.tar.gz
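
A sketch of unpacking the package, assuming the tarball has first been copied (or NFS-mounted) from the Jumpstart server; /var/crash is used here because the later install steps in this plan run the installer from /var/crash/sf-50:
root@njcsprprvn04 # cd /var/crash
root@njcsprprvn04 # gunzip -c sf-50.tar.gz | tar xf -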

 

4.    If EMC PowerPath is installed, remove it before the upgrade:

root@njcsprprvn04 # pkgrm EMCpower

 

5.    A devalias must be set for the disk that will hold the Solaris 10 BE. The relevant OBP variables on this system are shown below.

 

root@njcsprprvn04 #         

diag-device=rootmirror net2

nvramrc=devalias rootmirror /pci@1f,700000/scsi@2/disk@1,0
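
If the alias is not already defined, it can be set from the running OS with eeprom(1M). This is a sketch using the device path shown above; the same can be done at the ok prompt with nvalias:
root@njcsprprvn04 # eeprom "nvramrc=devalias rootmirror /pci@1f,700000/scsi@2/disk@1,0"
root@njcsprprvn04 # eeprom "use-nvramrc?=true"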

 

6.    Identify the disk which will be used for live upgrade (Solaris 10 BE).

root@njcsprprvn04 # format

Searching for disks...done

 

 

AVAILABLE DISK SELECTIONS:

       0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>

          /pci@1f,700000/scsi@2/sd@0,0

       1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>

          /pci@1f,700000/scsi@2/sd@1,0

       2. c1t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>

          /pci@1f,700000/scsi@2/sd@2,0

       3. c1t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>

          /pci@1f,700000/scsi@2/sd@3,0

Specify disk (enter its number):

 


8.       Detach the submirrors of the disk c1t1d0 from the main mirrors using metadetach (a sketch of the commands follows the metastat output below).

 

root@njcsprprvn04 # metastat -p

d60 -m d61 d62 1

d61 1 1 c0t0d0s6

d62 1 1 c0t1d0s6

d40 -m d41 d42 1

d41 1 1 c0t0d0s4

d42 1 1 c0t1d0s4

d30 -m d31 d32 1

d31 1 1 c0t0d0s3

d32 1 1 c0t1d0s3

d20 -m d21 d22 1

d21 1 1 c0t0d0s1

d22 1 1 c0t1d0s1

d10 -m d11 d12 1

d11 1 1 c0t0d0s0

d12 1 1 c0t1d0s0

d50 -m d51 d52 1

d51 1 1 c0t0d0s5

d52 1 1 c0t1d0s5
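
Based on the metastat output above, the detach commands for the second submirrors (d12, d22, d32, d42, d52 and d62, which sit on the second disk) would look like this sketch:
root@njcsprprvn04 # metadetach d10 d12
root@njcsprprvn04 # metadetach d20 d22
root@njcsprprvn04 # metadetach d30 d32
root@njcsprprvn04 # metadetach d40 d42
root@njcsprprvn04 # metadetach d50 d52
root@njcsprprvn04 # metadetach d60 d62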

 

9.       Clear the swap submirror metadevice. If the swap metadevice is passed to lucreate, it will throw the error "device d22 not found".

 

root@njcsprprvn04 # metaclear d22

 

10.   Create a boot environment for Solaris 10. Always specify the swap slice last; there is a bug that causes lucreate to fail otherwise.

 

root@njcsprprvn04 # lucreate -m /:/dev/md/dsk/d12:ufs -m /var:/dev/md/dsk/d32:ufs -m /var/crash:/dev/md/dsk/d52:ufs -m /local/user:/dev/md/dsk/d42:ufs -m -:/dev/dsk/c1t1d0s1:swap -n rootmirror

 

lucreate -m /:/dev/md/dsk/d12:ufs -m /var:/dev/md/dsk/d32:ufs -m -:/dev/dsk/c0t1d0s1:swap -n rootmirror

 

 

Discovering physical storage devices

Discovering logical storage devices

Cross referencing storage devices with boot environment configurations

Determining types of file systems supported

Validating file system requests

Preparing logical storage devices

Preparing physical storage devices

Configuring physical storage devices

Configuring logical storage devices

Analyzing system configuration.

Comparing source boot environment <d10> file systems with the file

system(s) you specified for the new boot environment. Determining which

file systems should be in the new boot environment.

Updating boot environment description database on all BEs.

Searching /dev for possible boot environment filesystem devices

 

Updating system configuration files.

Creating configuration for boot environment <rootmirror>.

Creating boot environment <rootmirror>.

Creating file systems on boot environment <rootmirror>.

Creating <ufs> file system for </> on </dev/md/dsk/d12>.

Creating <ufs> file system for </local/user> on </dev/md/dsk/d42>.

Creating <ufs> file system for </var> on </dev/md/dsk/d32>.

Creating <ufs> file system for </var/crash> on </dev/md/dsk/d52>.

Mounting file systems for boot environment <rootmirror>.

Calculating required sizes of file systems for boot environment <rootmirror>.

Populating file systems on boot environment <rootmirror>.

Checking selection integrity.

Integrity check OK.

Populating contents of mount point </>.

Populating contents of mount point </local/user>.

Populating contents of mount point </var>.

Populating contents of mount point </var/crash>.

Copying.

Creating shared file system mount points.

Creating compare databases for boot environment <rootmirror>.

Creating compare database for file system </var/crash>.

Creating compare database for file system </var>.

Creating compare database for file system </local/user>.

Creating compare database for file system </>.

Updating compare databases on boot environment <rootmirror>.

Making boot environment <rootmirror> bootable.

Setting root slice to Solaris Volume Manager metadevice </dev/md/dsk/d12>.

Population of boot environment <rootmirror> successful.

Creation of boot environment <rootmirror> successful.

root@njcsprprvn04 #

 

11.   Check the Live Upgrade status.

 

root@njcsprprvn04 # lustatus

Boot Environment           Is       Active Active    Can    Copy

Name                       Complete Now    On Reboot Delete Status

-------------------------- -------- ------ --------- ------ ----------

d10                        yes      yes    yes       no     -

rootmirror                 yes      no     no        yes    -

 

root@njcsprprvn04 # lustatus rootmirror

Boot Environment           Is       Active Active    Can    Copy

Name                       Complete Now    On Reboot Delete Status

-------------------------- -------- ------ --------- ------ ----------

rootmirror                 yes      no     no        yes    -

 

root@njcsprprvn04 # lustatus d10

Boot Environment           Is       Active Active    Can    Copy

Name                       Complete Now    On Reboot Delete Status

-------------------------- -------- ------ --------- ------ ----------

d10                        yes      yes    yes       no     -

 

12.   Upgrading the Inactive Boot Environment.

 

root@njcsprprvn04 # luupgrade -c -s /sol-install/media/Solaris_10_807

The media is a standard Solaris media.

The media contains an operating system upgrade image.

The media contains a standard media installer which can be run.

The media contains <Solaris> version <10>.

The media contains an automatic patch installation script.

 

root@njcsprprvn04 # luupgrade -u -n rootmirror -l /var/adm/lu.log -s /sol-install/media/Solaris_10_807

 

Validating the contents of the media </sol-install/media/Solaris_10>.

The media is a standard Solaris media.

The media contains an operating system upgrade image.

The media contains <Solaris> version <10>.

Constructing upgrade profile to use.

Locating the operating system upgrade program.

Checking for existence of previously scheduled Live Upgrade requests.

Creating upgrade profile for BE <rootmirror>.

Determining packages to install or upgrade for BE <rootmirror>.

Performing the operating system upgrade of the BE <rootmirror>.

CAUTION: Interrupting this process may leave the boot environment unstable

or unbootable.

Upgrading Solaris: 100% completed

Installation of the packages from this media is complete.

Updating package information on boot environment <rootmirror>.

Package information successfully updated on boot environment <rootmirror>.

Adding operating system patches to the BE <rootmirror>.

The operating system patch installation is complete.

INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot

environment <rootmirror> contains a log of the upgrade operation.

INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot

environment <rootmirror> contains a log of cleanup operations required.

INFORMATION: Review the files listed above. Remember that all of the files

are located on boot environment <rootmirror>. Before you activate boot

environment <rootmirror>, determine if any additional system maintenance

is required or if additional media of the software distribution must be

installed.

The Solaris upgrade of the boot environment <rootmirror> is complete.

 

13.   Activating the Inactive Boot Environment.

 

root@njcsprprvn04 # luactivate rootmirror

 

**********************************************************************

 

The target boot environment has been activated. It will be used when you

reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You

MUST USE either the init or the shutdown command when you reboot. If you

do not use either init or shutdown, the system will not boot using the

target BE.

 

**********************************************************************

 

In case of a failure while booting to the target BE, the following process

needs to be followed to fallback to the currently working boot environment:

 

1. Enter the PROM monitor (ok prompt).

 

2. Change the boot device back to the original boot environment by typing:

 

     setenv boot-device disk:a

 

3. Boot to the original boot environment by typing:

 

     boot

 

**********************************************************************

 

Activation of boot environment <rootmirror> successful.

 

 

 

14.   Reboot the machine to boot into the new BE, using init 6 as required by luactivate.

 

root@njcsprprvn04 # init 6

root@njcsprprvn04 #

INIT: New run level: 6

The system is coming down.  Please wait.

System services are now being stopped.

Print services already stopped.

Apr 15 12:47:52 njcsprprvn04 syslogd: going down on signal 15

Terminated

nfs umount: /sol-install: is busy

Live Upgrade: Deactivating current boot environment <d10>.

Live Upgrade: Executing Stop procedures for boot environment <d10>.

Live Upgrade: Current boot environment is <d10>.

Live Upgrade: New boot environment will be <rootmirror>.

Live Upgrade: Activating boot environment <rootmirror>.

Live Upgrade: The boot device for boot environment <rootmirror> is

</dev/dsk/c1t1d0s0>.

Live Upgrade: Activation of boot environment <rootmirror> completed.

nfs umount: /sol-install: is busy

The system is down.

syncing file systems... done

rebooting...

 

Rebooting with command: boot

Boot device: disk1:a  File and args:

SunOS Release 5.10 Version Generic 64-bit

Copyright 1983-2005 Sun Microsystems, Inc.  All rights reserved.

Use is subject to license terms.

Hardware watchdog enabled

Hostname: njcsprprvn04

Configuring devices.

Loading smf(5) service descriptions:  21/119Apr 15 12:51:44 in.mpathd[102]: All Interfaces in group public have failed

Apr 15 12:51:44 in.mpathd[102]: All Interfaces in group public have failed                                            119/119

checking ufs filesystems

/dev/md/rdsk/d52: is logging.

/dev/md/rdsk/d60: is logging.

/dev/md/rdsk/d42: is logging.

Configuring network interface addresses: ce0 ce1 ce2 ce3 ce4 ce5.

 

njcsprprvn04 console login: root

Password:

Apr 15 13:03:03 njcsprprvn04 login: ROOT LOGIN /dev/console

Last login: Tue Apr 15 11:27:57 on console

Sun Microsystems Inc.   SunOS 5.10      Generic January 2005

You have mail.

Sourcing /root/.profile-local.....

Sourcing /root/.profile-local.....

Sourcing /root/.profile-EIS.....

root@njcsprprvn04 # uname -a

SunOS njcsprprvn04 5.10 Generic sun4u sparc SUNW,Sun-Fire-V440

root@njcsprprvn04 # more /etc/release

                         Solaris 10 3/05 s10_74L2a SPARC

           Copyright 2005 Sun Microsystems, Inc.  All Rights Reserved.

                        Use is subject to license terms.

                            Assembled 22 January 2005

 

15.   Install the latest 10_Recommended patch cluster in single-user mode.

root@njcsprprvn04 # init s

root@njcsprprvn04 # /10_Recommended/install_cluster

root@njcsprprvn04 # init 6

root@njcsprprvn04 # uname -a

SunOS njcsprprvn04 5.10 Generic_118833-36 sun4u sparc SUNW,Sun-Fire-V440

root@njcsprprvn04 #

 

16.   If the server is connected to a SAN, reinstall EMC PowerPath.

 


1.   Install EMC PowerPath.
 
root@node2 # pkgadd -d .
 
2.   Install the Sun Cluster 3.2.
 
root@node2 # cd /var/crash/Suncluster32
root@node2 # ./installer
 
3.   Install the Sun Cluster 3.2 Core patch.
 
root@node2 # patchadd 125511-02
 
4.   Install VxVM 5.0.
 
root@node2 # cd /var/crash/sf-50/volume_manager
root@node2 # ./installvm
 
5.   Reboot the server.
 
root@node2 # init 6
 
 
 
Failing over the Disk Group.
 
6.   Bring down the resource groups.
 
root@node1 # scswitch -F -g rg-epd-db
root@node1 # scswitch -F -g rg-epd-ci
 
7.   Unregister the VxVM disk groups.
 
root@node1 # scconf -r -D name=dgvx-epu-ci
root@node1 # scconf -r -D name=dgvx-epu-db
 
 
8.   Delete all the resources and resource groups on node 1.
 
root@node1 # scswitch -n -j <Resource Name>
root@node1 # scrgadm -r -j <Resource Name>
root@node1 # scrgadm -r -g <Resource Group Name>
 
9.   Make sure all resource groups and disk groups have been cleared from the cluster configuration.
 
root@node1 # scstat -D
root@node1 # scstat -g
 
10.   Shut down node 1.
 
root@node1 # shutdown -g0 -y
 
11.   Import the VxVM disk groups on epddb (node 2).
 
root@node2 # vxdctl enable
root@node2 # vxdg -C -n dgvx-epu-db import dgvx-epu-db
root@node2 # vxdg -C -n dgvx-epu-ci import dgvx-epu-ci
root@node2 # vxvol -g dgvx-epu-db startall
root@node2 # vxvol -g dgvx-epu-ci startall
 
 
12.   Configure the cluster, resource groups, and resources.
 
root@node2 # scinstall
 
Select 1 -> Create a new cluster or add a cluster node
Select 2 -> Create only this node in the cluster
Cluster name                                -> EPD
Cluster partner name                        -> epdci
Disable automatic quorum device selection   -> No
Select the first transport cable path       -> ce7
Select the second transport cable path      -> ce13
Reboot the server.
 
Create resource group
 
root@node2 # scrgadm -a -g rg-epd-db -h epddb,epdci
root@node2 # scrgadm -a -g rg-epd-ci -h epddb,epdci
root@node2 # scrgadm -c -g rg-epd-ci -y Pathprefix="/global/nfs1"
 
Create Logical Resource
 
root@node2 # scrgadm -a -L -j rs-lh-epddbv -g rg-epd-db -l epddbv
root@node2 # scrgadm -a -L -j rs-lh-epdciv -g rg-epd-ci -l epdciv
 
Register the VxVM disk groups
 
scconf -a -D type=vxvm,name=dgvx-epu-ci,nodelist=epddb,preferenced=true
scconf -a -D type=vxvm,name=dgvx-epu-db,nodelist=epddb,preferenced=true
 
 
Create the HA Resource
 
root@node2 # scrgadm -a -t SUNW.HAStoragePlus:8
 
root@node2 # scrgadm -a -j rs-hastp-epddb -g rg-epd-db -t SUNW.HAStoragePlus:8 -x FilesystemMountPoints=/oracle/EPD,/oracle/EPD/mirrlogA,/oracle/EPD/mirrlogB,/oracle/EPD/objk,/oracle/EPD/origlogA,/oracle/EPD/origlogB,/oracle/EPD/saparch,/oracle/EPD/sapbackup,/oracle/EPD/sapcheck,/oracle/EPD/sapdata1,/oracle/EPD/sapdata10,/oracle/EPD/sapdata11,/oracle/EPD/sapdata12,/oracle/EPD/sapdata13,/oracle/EPD/sapdata14,/oracle/EPD/sapdata15,/oracle/EPD/sapdata16,/oracle/EPD/sapdata17,/oracle/EPD/sapdata18,/oracle/EPD/sapdata19,/oracle/EPD/sapdata2,/oracle/EPD/sapdata20,/oracle/EPD/sapdata3,/oracle/EPD/sapdata4,/oracle/EPD/sapdata5,/oracle/EPD/sapdata6,/oracle/EPD/sapdata7,/oracle/EPD/sapdata8,/oracle/EPD/sapdata9,/oracle/EPD/sapreorg,/oracle/EPD/saptrace,/oracle/EPD/vertexdata,/oracle/stage
 
root@node2 # scrgadm -a -j rs-hastp-epdci -g rg-epd-ci -t SUNW.HAStoragePlus:8 -x FilesystemMountPoints=/sapmnt/EPD,/sapmnt/EPD/exe/commprss,/sapmnt/EPD/global,/sapmnt/EPD/profile,/userdata/epd,/userdata/interfaces,/userdata/sap_app_arch,/userdata/saplogon,/userdata/software,/userdata/system,/userdata/tmp,/usr/sap/EPD,/usr/sap/put
 
Create Oracle DB / Listener resource.
root@node2 # scrgadm -a -j rs-ora-sapd -g rg-epd-db -t SUNW.oracle_server:6 -x ORACLE_SID=EPD -x ORACLE_HOME=/oracle/EPD/102_64 -x Alert_log_file=/oracle/EPD/saptrace/background/alert_EPD.log -x Connect_string=scmon/scmon123
 
root@node2 # scrgadm -a -j rs-ora-lsnr-pr-epd -g rg-epd-db -t SUNW.oracle_listener:5 -x ORACLE_HOME=/oracle/EPD/102_64
 
Create TSM resource.
 
root@node2 # scrgadm -a -j rs-tsm-epd -g rg-epd-db -t SUNW.gds:6 -y Scalable=false -y Port_list=7636/tcp -x Start_command=/opt/tivoli/tsm/client/ba/bin/startEPD.sh -x Stop_command=/opt/tivoli/tsm/client/ba/bin/stopEPD.sh -x Probe_command=/opt/tivoli/tsm/client/ba/bin/probeEPD.sh -x Probe_timeout=60
 
Create CI resource.
 
root@node2 # scrgadm -a -t SUNW.sap_ci_v2
root@node2 # scrgadm -a -j rs-ci-epdci -g rg-epd-ci -t SUNW.sap_ci_v2 -x SAPSID=EPD -x Ci_services_string=DVEBMGS -x Probe_timeout=120 -x Ci_startup_script=startsap_epdci_00 -x Ci_shutdown_script=stopsap_epdci_00
 
Create NFS resource.
 
root@node2 # scrgadm -a -t SUNW.nfs:3.2
root@node2 # scrgadm -a -j rs-nfs-epdci -g rg-epd-ci -t SUNW.nfs:3.2
 
Bring up all the resource groups on epddb.
 
root@node2 # scswitch -Z
 
Hand over node 2 to the application team.
 
Node 1 Upgrade.
 
13.   Boot the node 1 server with the boot -x option (non-cluster mode).
 
ok> boot -x
 
14.   Uninstall EMC PowerPath.
 
root@node1 # pkgrm EMCpower
 
15.   Uninstall the Sun Cluster software.
 
root@node1 # /usr/cluster/bin/scinstall -r
 
16.   Uninstall the VxVM 4.0 software.
 
root@node1 # cd /var/crash/vm.4.0.sol/volume_manager/
root@node1 # ./uninstallvm
 
17.   Reboot the server.
 
root@node1 # init 6
 
18.   Start the Solaris 10 Live Upgrade, following the same procedure as for a standalone server (see the section above).
 
 
19.   Install EMC PowerPath.
 
root@node1 # pkgadd -d .
 
20.   Install Sun Cluster 3.2.
 
root@node1 # cd /var/crash/Suncluster32
root@node1 # ./installer
 
21.   Install the Sun Cluster 3.2 core patch.
 
root@node1 # patchadd 125511-02
 
22.   Install VxVM 5.0.
 
root@node1 # cd /var/crash/sf-50/volume_manager
root@node1 # ./installvm
 
23.   Reboot the server.
 
root@node1 # init 6
 
 
Cluster Integration
 
24.   Run scinstall on node 1 and add it to the cluster.
 
root@node1 # scinstall
 
Select option 1 -> Create a new cluster or add a cluster node
Select option 3 -> Add it to the existing cluster
Cluster name           -> EPD
Cluster sponsor node   -> epddb
Cluster transport path -> the scan should detect ce7 and ce13
The node reboots and joins the cluster.
 
25.   Check the cluster configuration status.
 
root@node2 # scstat
 
Note: After the application team confirms that the application is up and running, perform the failover test and the cluster TPP (a minimal failover check is sketched below).
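
A minimal failover check, reusing the resource group and node names from this plan; this is a sketch and should be run only after the application team's confirmation:
root@node2 # scswitch -z -g rg-epd-db -h epdci
root@node2 # scswitch -z -g rg-epd-ci -h epdci
root@node2 # scstat -g
root@node2 # scswitch -z -g rg-epd-db -h epddb
root@node2 # scswitch -z -g rg-epd-ci -h epddb
root@node2 # scstat -g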