Wednesday, October 10, 2012

Creating Ldoms on T-Series


T5120 first start

On connection to serial management:

SUNSP-MC1 login:

Login using root/<pwd>

This will take you to the SP (the T5120 ILOM) prompt:

->

Create an admin user

-> create /SP/users/admin role=Administrator cli_mode=alom

Creating user...

Enter new password: ********

Enter new password again: ********

Created /SP/users/admin

The password used is <pwd>

Power on the server and redirect the host output to display on the serial terminal device

-> start /SYS

Are you sure you want to start /SYS (y/n)? y

start: Target already started


-> start /SP/console

Are you sure you want to start /SP/console (y/n)? y


Serial console started.  To stop, type #.

This will take you to a standard Solaris install.

T5120 SP commands

Start the console:

-> start /SP/console

Power on the host:

-> start /SYS

Power off the host:

-> stop /SYS

Reset the host:

-> reset /SYS




T5120 Firmware Update

Note: This requires a system reset!

Unzip patch 139439-10

Copy sysfwdownload (and sysfwdownload.README) to /usr/platform/sun4v/sbin/
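A minimal sketch of these two steps, assuming the patch zip was downloaded to /var/tmp and unpacks into a 139439-10 directory:

host001 # cd /var/tmp
host001 # unzip 139439-10.zip
host001 # cp 139439-10/sysfwdownload 139439-10/sysfwdownload.README /usr/platform/sun4v/sbin/
host001 # chmod 755 /usr/platform/sun4v/sbin/sysfwdownload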

Check which level your Sun System Firmware is at. This will also be displayed in the output of prtdiag -v.

host001 # /usr/platform/sun4v/sbin/sysfwdownload -g

7.2.4.e

If your Sun System Firmware level is 7.2.0 or later, you can use the -u argument with sysfwdownload as described below. Otherwise it's a bit more involved - read the README.

cd into the directory where the contents of 139439-10 were unzipped.

host001 # /usr/platform/sun4v/sbin/sysfwdownload -u Sun_System_Firmware-7_2_8-SPARC_Enterprise_T5120+T5220.pkg
WARNING: Host will be powered down for automatic firmware update when download is completed.
Do you want to continue(yes/no)? yes

.......... (6%).......... (12%).......... (19%).......... (25%).......... (32%).......... (38%).......... (45%).......... (51%).......... (58%).......... (64%).......... (71%).......... (77%).......... (83%).......... (90%).......... (96%)..... (100%)

Download completed successfully.

host001 # May 24 12:58:41 host001 unix: WARNING: Power-off requested, system will now shutdown.

The system will shutdown, and the SC should restart automatically after the firmware update has been applied to it.

When the SC came back up, I found that the original ILOM prompt:

->

had changed to a more familiar:

sc>


sc> poweron

sc> Chassis | major: Host has been powered on

Chassis | major: Hot removal of HDD2

Chassis | major: Hot insertion of HDD1

Chassis | major: Hot insertion of HDD0

Chassis | major: Hot removal of HDD3

Chassis | major: Host is running


sc> console

Enter #. to return to ALOM.


{0} ok boot

(etc)



LDOM / ZFS guidelines

General notes

  • It is not possible to remove a disk from a ZFS pool unless the disk has been specified as an (inactive) hot spare or a cache device. Do not add disks to an existing zpool unless you're happy for them to be permanently part of that pool.
  • Do not resize volumes/virtual disks already presented to an LDOM. Boot/root zpools do not seem to be resizable in this way anyway, and if you wish to add disk space to an existing zpool, the best way is to present extra volumes/virtual disks to it.
  • EFI Disk label does not appear to be an issue for ZFS (as opposed to VxVM, which requires SMI-labelled LUNs)
  • When creating a zpool which does not need mounting, don't mount it. Create the zpool with the "-m none" flag, e.g.
    server # zpool create -m none somepool somedisk

Primary domains

  • Copy the Solaris 10 iso image to /image/sol-10--sparc-dvd.iso. This can be used as a virtual cdrom drive which:
    • is much faster than jumpstart
    • can be used to boot -s on any guest LDOM in case of emergency
    • must be exported with a unique name for each guest LDOM
    • must be exported using the "options=ro" flag
  • Create a separate zpool with volume(s) inside for each guest LDOM to boot off
    • Name the zpool after the target guest LDOM, e.g. for the first production LDom on a primary, name it lp01pool
    • Create separate volumes within the zpool and name them as volumes, e.g. lp01pool/vol00, lp01pool/vol01, etc. Start at 00 - the numbering will reflect the numbers assigned to the virtual disks.
    • Make sure that each boot/root volume is big enough. It is not easy to resize these once created. You may require lots of space for /var.
    • When exporting the volume which will be used as a boot disk or boot disk mirror for a guest LDOM, do not use any options flags or you may make the virtual disk unbootable
  • Create separate zpools with volumes inside for applications usage, one zpool per discrete application
  • Create separate zpools with volumes inside for temporarily assigned disk space.
  • When exporting other volumes, use the "options=slice" flag. This will allow any volume resize to be recognised on the guest LDOM. Although normal practice is not to resize volumes presented to guest LDOMs, the ability to do so in an emergency may be useful (see the sketch after this list).
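A sketch of the export styles mentioned above; the pool, volume and exported-device names are illustrative only:

pd-001 # ldm add-vdsdev options=ro /image/sol-10--sparc-dvd.iso iso_vol_lp01@primary-vds0
pd-001 # ldm add-vdsdev /dev/zvol/dsk/lp01pool/vol00 lp01_vol00@primary-vds0
pd-001 # ldm add-vdsdev options=slice /dev/zvol/dsk/lp01data01/vol00 lp01data01-0@primary-vds0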

Guest domains

  • While installing Solaris -
    • Remote services enabled: No
    • Initial locale to be used after the system has been installed: Great Britain (UTF-8) ( en_GB.UTF-8 )
    • Filesystem to use for your Solaris installation: ZFS
    • Use the presented virtual disk(s) for installation: just one if the underlying storage is already mirrored (e.g. the LUN is metro-mirrored using an IBM SVC, or the source zpool is mirrored), otherwise two, in which case Solaris will mirror the virtual disks as part of the install.
    • Put /var on a separate dataset. (You can apply a quota later if you're concerned that the root filesystem is in danger of filling up.)
  • Add extra disk device(s) after installation to grow the root pool if you're belatedly concerned about its (lack of) size


ZFS notes



Creating a zpool without mounting it


When creating a zpool which does not need mounting, don't mount it. Create the zpool with the "-m none" flag, e.g.
server # zpool create -m none somepool somedisk

Resize an existing volume


host002 # zfs get volsize lt07pool/disk2
NAME            PROPERTY  VALUE    SOURCE
lt07pool/disk2  volsize   10G      -
host002 # zfs set volsize=15G lt07pool/disk2
host002 # zfs get volsize lt07pool/disk2
NAME            PROPERTY  VALUE    SOURCE
lt07pool/disk2  volsize   15G      -

Note:

  • If growing a zfs volume underlying a virtual disk which is being used as a zfs device in a guest LDOM, you may need to export and import the zpool from the guest for the new size to be recognised (see the sketch after this list)
  • Do not try to shrink a zfs volume underlying a virtual disk in a guest LDOM!
  • The best strategy for guest LDOMs seems to be to assign LUNs to the primary, then use these to create ZFS volumes which are given to the guests as virtual disks. Manage ZFS from within the guest LDOM. RAW LUNs cannot be exported and will have different device names on different hosts.
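A minimal sketch of picking up a resize from within the guest, run on the guest LDom after the underlying volume has been grown on the primary (the pool name is just an example):

gl009 # zpool export domaindata01
gl009 # zpool import domaindata01
gl009 # zpool list domaindata01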

Remove a disk from a ZFS pool


It is not possible to remove a disk from a ZFS pool unless the disk has been specified as an (inactive) hot spare or a cache device:

host002lt08 # zpool remove lt08pool c0d2
cannot remove c0d2: only inactive hot spares or cache devices can be removed

Your only option at this time if this is absolutely required is to copy the data off, destroy the existing pool and re-create it without the disks you wanted to remove.

Mirrors can be broken, but this is not the same thing.

Apparently a fix is imminent. But then again, it's been in the pipeline for a long time - check the submission date.

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783

A possible workaround, if the new disk space is known to be temporary, is to create a new, separate zpool which can later be destroyed to return the disks.
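For example (a sketch - the pool and disk names are placeholders):

server # zpool create -m none temppool c0d3
(use the temporary space, then when it is no longer needed...)
server # zpool destroy temppool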

Useful links


zfs man page (Sun)

zpool man page (Sun)

Solaris ZFS Administration Guide

ZFS cheatsheet :: Col's Tech Stuff


LDoms - Configure Primary Domain : PD-001


The same steps were used to configure pd-002, with differences in hostnames, IP addresses, etc.

Both pd-001 and pd-002 were pre-built and sys-unconfig-ed before being shipped and configured remotely.

The e1000g0-3 interfaces were aggregated as aggr1, and the nxge0-3 interfaces were aggregated as aggr2.

Create default services


The following steps create disk services, console access and networking. First check the current state - at this point only the primary domain exists:

pd-001 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-c--  SP      64    32640M   0.1%  18h 33m

Create virtual disk server (vds)


This will allow importing virtual disks into a logical domain from the primary

pd-001 # ldm add-vds primary-vds0 primary

Create virtual console concentrator service (vcc)


This will allow terminal service to logical domain consoles

pd-001 # ldm add-vcc port-range=5000-5100 primary-vcc0 primary

Create virtual switch server (vsw)


Enables networking between virtual networking devices in logical domains

pd-001 # ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
aggr1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        ether 0:21:28:59:xx:xx
aggr2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        ether 0:21:28:72:xx:xx
pd-001 # ldm add-vsw net-dev=aggr1 primary-vsw0 primary
pd-001 # ldm add-vsw net-dev=aggr2 primary-vsw1 primary

Set the linkprop flag to phys-state for link-based IPMP.

pd-001 # ldm set-vsw linkprop=phys-state primary-vsw0
pd-001 # ldm set-vsw linkprop=phys-state primary-vsw1
pd-001 # ldm list-services primary
VCC
    NAME             LDOM             PORT-RANGE
    primary-vcc0     primary          5000-5100
 
VSW
    NAME             LDOM             MAC               NET-DEV   ID   DEVICE     LINKPROP   DEFAULT-VLAN-ID PVID VID                  MTU   MODE
    primary-vsw0     primary          00:14:4f:fa:71:22 aggr1     0    switch@0   phys-state 1               1                         1500
    primary-vsw1     primary          00:14:4f:f9:cd:0e aggr2     1    switch@1   phys-state 1               1                         1500
 
VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    primary-vds0     primary

Enable VLAN Tagging


(Note: The steps below for VLAN tagging were added at a later stage, after Networks had configured Port Trunking on the relevant switch ports)

pd-001 # ldm set-vsw pvid=XX vid=A,B,C primary-vsw0
pd-001 # ldm set-vsw pvid=XX vid=A,B,C primary-vsw1

In this case the port VLAN (pvid) was the VLAN id for <IP>, while the other VLAN ids (vid) were for VLANs which were allowed through these interfaces (for the guest LDoms to use).
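A guest vnet then picks up its VLAN when it is added to the domain, as in the guest-creation steps later in these notes (a sketch - the VLAN id and vnet name are placeholders):

pd-001 # ldm add-vnet pvid=A vnet0008 primary-vsw0 gl008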



Control domain creation


Now setup the primary domain, which will act as the control domain

We will configure the primary/control domain with 1 core/8 threads and 4GB RAM

pd-001 # ldm set-mau 1 primary
pd-001 # ldm set-vcpu 8 primary
pd-001 # ldm set-memory 4g primary
Initiating delayed reconfigure operation on LDom primary.  All configuration
changes for other LDoms are disabled until the LDom reboots, at which time
the new configuration for LDom primary will also take effect.

Make these configuration changes permanent by saving them to the SP. First list the configurations already stored there:

pd-001 # ldm list-config
factory-default [next poweron]
initial

If ldm list-config already shows an 'initial' logical domain configuration, then you will not be able to overwrite it. Remove it first, then add it again:

pd-001 # ldm remove-config initial
pd-001 # ldm list-config
factory-default [next poweron]
pd-001 # ldm add-config initial
pd-001 # ldm list-config
factory-default
initial [current]
pd-001 # init 6

Enable Networking Between the Primary/Control/Service Domain and Other Domains


pd-001 # ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
aggr1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        ether 0:21:28:59:xx:xx
aggr2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        ether 0:21:28:72:xx:xx
pd-001 # dladm show-link | grep vsw
vsw0            type: non-vlan  mtu: 1500       device: vsw0
vsw1            type: non-vlan  mtu: 1500       device: vsw1
pd-001 # ifconfig vsw0 plumb
pd-001 # ifconfig vsw1 plumb
pd-001 # ifconfig aggr1 down unplumb
pd-001 # ifconfig aggr2 down unplumb
pd-001 # ifconfig vsw0 <IP> netmask 255.255.255.0 broadcast + up
pd-001 # ifconfig vsw1 <IP> netmask 255.255.255.0 broadcast + up
pd-001 # mv /etc/hostname.aggr1 /etc/hostname.vsw0
pd-001 # mv /etc/hostname.aggr2 /etc/hostname.vsw1
pd-001 # ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
vsw0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        ether 0:14:4f:fa:71:22
vsw1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        ether 0:14:4f:f9:cd:e
pd-001 # init 6

Check status


pd-001 # ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
vsw0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        ether 0:14:4f:fa:71:22
vsw1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        ether 0:14:4f:f9:cd:e
pd-001 # dladm show-dev
vsw0            link: up        speed: 1000  Mbps       duplex: full
vsw1            link: up        speed: 1000  Mbps       duplex: full
e1000g0         link: up        speed: 1000  Mbps       duplex: full
e1000g1         link: up        speed: 1000  Mbps       duplex: full
e1000g2         link: up        speed: 1000  Mbps       duplex: full
e1000g3         link: up        speed: 1000  Mbps       duplex: full
nxge0           link: up        speed: 1000  Mbps       duplex: full
nxge1           link: up        speed: 1000  Mbps       duplex: full
nxge2           link: up        speed: 1000  Mbps       duplex: full
nxge3           link: up        speed: 1000  Mbps       duplex: full
pd-001 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      8     4G       0.2%  6m

Configure active-active link-based IPMP by editing each active /etc/hostname.vswx file


Both interfaces need to be active for IPMP to be configurable within the guest LDoms

pd-001 # grep aggr /etc/hosts
<IP>    pd-001-aggr1    pd-001          # vsw0 / e1000g0-3
<IP>    pd-001-aggr2                    # vsw1 / nxge0-3
pd-001 # cat /etc/hostname.vsw0
pd-001-aggr1 group ipmp1
pd-001 # cat /etc/hostname.vsw1
pd-001-aggr2 group ipmp1

Then reboot

After rebooting ...

pd-001 # ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
vsw0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        groupname ipmp1
        ether 0:14:4f:fa:71:22
vsw1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        groupname ipmp1
        ether 0:14:4f:f9:cd:e

Testing IPMP


Sep 17 15:36:48 pd-001 in.mpathd[187]: The link has gone down on vsw0
Sep 17 15:36:48 pd-001 in.mpathd[187]: NIC failure detected on vsw0 of group ipmp1
Sep 17 15:36:48 pd-001 in.mpathd[187]: Successfully failed over from NIC vsw0 to NIC vsw1
 
pd-001 # ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
vsw0: flags=19000802<BROADCAST,MULTICAST,IPv4,NOFAILOVER,FAILED> mtu 0 index 2
        inet 0.0.0.0 netmask 0
        groupname ipmp1
        ether 0:14:4f:fa:71:22
vsw1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        groupname ipmp1
        ether 0:14:4f:f9:cd:e
vsw1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx

And then restore the connection:

Sep 17 15:39:56 pd-001 in.mpathd[187]: The link has come up on vsw0
Sep 17 15:39:56 pd-001 in.mpathd[187]: NIC repair detected on vsw0 of group ipmp1
Sep 17 15:39:56 pd-001 in.mpathd[187]: Successfully failed back to NIC vsw0
Sep 17 15:39:57 pd-001 in.mpathd[187]: The link has gone down on vsw0
Sep 17 15:39:57 pd-001 in.mpathd[187]: NIC failure detected on vsw0 of group ipmp1
Sep 17 15:39:57 pd-001 in.mpathd[187]: Successfully failed over from NIC vsw0 to NIC vsw1
Sep 17 15:40:04 pd-001 in.mpathd[187]: The link has come up on vsw0
Sep 17 15:40:04 pd-001 in.mpathd[187]: NIC repair detected on vsw0 of group ipmp1
Sep 17 15:40:04 pd-001 in.mpathd[187]: Successfully failed back to NIC vsw0
 
pd-001 # ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
vsw0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        groupname ipmp1
        ether 0:14:4f:fa:71:22
vsw1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx
        groupname ipmp1
        ether 0:14:4f:f9:cd:e
 
 

LDoms - Primary : DNS/NIS/Automounter


These instructions were run on the primary/control LDoms pd-001 and pd-002; however, they should work just as well on any guest domain.

Pre-requisites


Add the host entries for your new servers to the NIS hosts table

Actions


Login via the console

pd-001 # umount /home

Hash out /home entry in /etc/auto_master:

pd-001 # vi /etc/auto_master
#
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)auto_master        1.8     03/04/28 SMI"
#
# Master map for automounter
#
+auto_master
/net            -hosts          -nosuid,nobrowse
#/home          auto_home       -nobrowse

Move any local home directories in /export/home to /home

Edit /etc/passwd accordingly, removing any entries which would be superseded by NIS

Remove the /export/home filesystem to allow automounter to work

pd-001 # zfs destroy rpool/export/home

Edit /etc/resolv.conf

pd-001 # vi /etc/resolv.conf
search abc.com
nameserver <IP>
nameserver <IP>
 
pd-001 # cp /etc/nsswitch.nis /etc/nsswitch.nis.orig

Edit /etc/nsswitch.nis

pd-001 # egrep "^hosts|^automount" /etc/nsswitch.nis
hosts:      files nis dns
automount:  nis files

Add the following to /etc/hosts

...
# NIS servers
# Note: -g interfaces cannot be seen from pd-001
<IP>     HostA   #host-g
# host listed to enable automounter to work via NIS
172.27.4.IP     HostB  #host-g
...

Now add the server to NIS as a client

pd-001 # domainname dsgiplc
pd-001 # domainname > /etc/defaultdomain
pd-001 # ypinit -c
 
In order for NIS to operate sucessfully, we have to construct a list of the
NIS servers.  Please continue to add the names for YP servers in order of
preference, one per line.  When you are done with the list, type a
<control D> or a return on a line by itself.
        next host to add:  hostA
        next host to add:  hostB
        next host to add:
 
The current list of yp servers looks like this:
 
hostA
hostB
 
Is this correct?  [y/n: y]  y
pd-001 # cp /etc/nsswitch.nis /etc/nsswitch.conf
pd-001 # svcadm enable nis/client

And some services for autofs to work:

pd-001 # svcadm enable autofs
pd-001 # svcadm enable -r nfs/client

Test NIS

pd-001 # ypcat passwd | grep parul
sawhnp02:##sawhnp02:5247:101:Parul:/export/home/sawhnp02:/bin/ksh

Reboot if possible for clean mountpoints (this potentially saves some manual unmounting and restarting of autofs). After the reboot, verify the basics - a quick check sketch follows the list below:

  • DNS should be working
  • NIS should be working
  • Home directories should automount
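A quick verification sketch (the hostname and username are placeholders):

pd-001 # getent hosts somehost
pd-001 # ypwhich
pd-001 # ls /home/someuser

getent should resolve via files, NIS and DNS as per nsswitch.conf, ypwhich shows which NIS server the client is bound to, and listing a user's home directory should trigger the automounter.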

 

LDoms - Create Template Guest


Purpose


The purpose of this procedure is to create a "template" guest domain, a fully-patched guest Ldom. This Ldom will then be sys-unconfig-ed and a ZFS snapshot will be made of it. The snapshot can then be copied to other filesystems where it can be cloned to make a pre-installed, pre-patched base from which to quickly create other guest Ldoms.

Useful Terminology


  • A snapshot is a read-only copy of a file system or volume. Snapshots can be created almost instantly, and they initially consume no additional disk space within the pool. However, as data within the active dataset changes, the snapshot consumes disk space by continuing to reference the old data, thus preventing the disk space from being freed.
  • A clone is a writable volume or file system whose initial contents are the same as the dataset from which it was created. As with snapshots, creating a clone is nearly instantaneous and initially consumes no additional disk space. In addition, you can snapshot a clone.
    Clones can only be created from a snapshot. When a snapshot is cloned, an implicit dependency is created between the clone and snapshot. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as the clone exists. The
    origin property exposes this dependency, and the zfs destroy command lists any such dependencies, if they exist.

Conventions


  • Template guest Ldoms are currently 30GB in size
  • ZFS snapshots to be named as snap.YYYYmmdd, YYYYmmdd referring to the date they were created
  • ZFS clones to be named as clone.YYYYmmdd, YYYYmmdd referring to the snapshot they were cloned from, not the day they were created.
    Clones are distinguished by the name of the filesystem they belong to, which should be based on the hostname of the guest using it (see the example after this list).
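For instance, a hypothetical snapshot/clone pair following this convention (the pool name and date are placeholders):

pd-001 # zfs snapshot gl009pool/disk0@snap.20121010
pd-001 # zfs clone gl009pool/disk0@snap.20121010 gl009pool/clone.20121010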


pd-001 # zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool                          13.5G   120G    98K  /rpool
rpool/ROOT                     9.47G   120G    21K  legacy
rpool/ROOT/s10s_u8wos_08a      9.47G   120G  6.92G  /
rpool/ROOT/s10s_u8wos_08a/var  2.55G   120G  2.55G  /var
rpool/dump                     2.00G   120G  2.00G  -
rpool/export                   65.5K   120G    23K  /export
rpool/export/home              42.5K   120G  42.5K  /export/home
rpool/swap                        2G   122G    16K  -
pd-001 # zfs create -p -o mountpoint=none rpool/ldoms
pd-001 # zfs create -p -o mountpoint=none rpool/ldoms/template
pd-001 # zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool                          13.5G   120G    97K  /rpool
rpool/ROOT                     9.47G   120G    21K  legacy
rpool/ROOT/s10s_u8wos_08a      9.47G   120G  6.92G  /
rpool/ROOT/s10s_u8wos_08a/var  2.55G   120G  2.55G  /var
rpool/dump                     2.00G   120G  2.00G  -
rpool/export                   65.5K   120G    23K  /export
rpool/export/home              42.5K   120G  42.5K  /export/home
rpool/ldoms                      42K   120G    21K  none
rpool/ldoms/template             21K   120G    21K  none
rpool/swap                        2G   122G    16K  -
pd-001 # zfs create -V 30gb rpool/ldoms/template/disk0
pd-001 # zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool                          43.5G  90.4G    97K  /rpool
rpool/ROOT                     9.47G  90.4G    21K  legacy
rpool/ROOT/s10s_u8wos_08a      9.47G  90.4G  6.92G  /
rpool/ROOT/s10s_u8wos_08a/var  2.55G  90.4G  2.55G  /var
rpool/dump                     2.00G  90.4G  2.00G  -
rpool/export                   65.5K  90.4G    23K  /export
rpool/export/home              42.5K  90.4G  42.5K  /export/home
rpool/ldoms                    30.0G  90.4G    21K  none
rpool/ldoms/template           30.0G  90.4G    21K  none
rpool/ldoms/template/disk0       30G   120G    16K  -
rpool/swap                        2G  92.4G    16K  -
pd-001 # df -k
Filesystem            kbytes    used   avail capacity  Mounted on
rpool/ROOT/s10s_u8wos_08a
                     140378112 7254610 94797162     8%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 4212008    1592 4210416     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
sharefs                    0       0       0     0%    /etc/dfs/sharetab
/platform/SUNW,SPARC-Enterprise-T5120/lib/libc_psr/libc_psr_hwcap2.so.1
                     102051772 7254610 94797162     8%    /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,SPARC-Enterprise-T5120/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                     102051772 7254610 94797162     8%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                         0       0       0     0%    /dev/fd
rpool/ROOT/s10s_u8wos_08a/var
                     140378112 2673338 94797162     3%    /var
swap                 4210448      32 4210416     1%    /tmp
swap                 4210456      40 4210416     1%    /var/run
rpool/export         140378112      23 94797162     1%    /export
rpool/export/home    140378112      42 94797162     1%    /export/home
rpool                140378112      97 94797162     1%    /rpool
 
pd-001 # ldm add-domain template
pd-001 # ldm add-vcpu 8 template
pd-001 # ldm add-memory 4G template
pd-001 # ldm add-vnet vnet_tmpl0 primary-vsw0 template
pd-001 # ldm add-vnet vnet_tmpl1 primary-vsw1 template
pd-001 # ldm add-vdsdev /dev/zvol/dsk/rpool/ldoms/template/disk0 template_disk0@primary-vds0
pd-001 # ldm add-vdisk tmpl_vdisk0 template_disk0@primary-vds0 template
pd-001 # ldm add-vdsdev options=ro /image/sol-10-u8-ga-sparc-dvd.iso iso_vol_tmpl@primary-vds0
pd-001 # ldm add-vdisk cdrom iso_vol_tmpl@primary-vds0 template
pd-001 # ldm set-var auto-boot\?=false template
pd-001 # ldm list-config
factory-default
initial [current]
pd-001 # ldm add-config pd-001.`date +%Y%m%d.%H%M`
pd-001 # ldm list-config
factory-default
initial
pd-001.20100920.1126 [current]
pd-001 # ldm ls-constraints -x template > /backup/ldom/template.xml
bash: /backup/ldom/template.xml: No such file or directory
pd-001 # mkdir -p /backup/ldom
pd-001 # ldm ls-constraints -x template > /backup/ldom/template.xml
pd-001 # ldm bind template
pd-001 # ldm start template
pd-001 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      8     4G       0.2%  2d 23h 7m
template         active     -t----  5000    8     4G        12%  1m
pd-001 # telnet localhost 5000
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
Trying ::1...
telnet: Unable to connect to remote host: Network is unreachable
pd-001 # svcadm enable ldoms/vntsd
pd-001 # telnet localhost 5000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
 
Connecting to console "template" in group "template" ....
Press ~? for control options ..
 
{0} ok boot cdrom:f -v
Boot device: /virtual-devices@100/channel-devices@200/disk@1:f  File and args: -v
 
Once the template has been installed and patched, minor configurations can be made, e.g. setting the NTP server.
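A minimal NTP sketch on the template (the server name is a placeholder):

template # cp /etc/inet/ntp.client /etc/inet/ntp.conf
template # echo "server ntp-server-01" >> /etc/inet/ntp.conf
template # svcadm enable svc:/network/ntp:default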
When ready, sys-unconfig the template and create a snapshot of it:
 
template # sys-unconfig
This program will unconfigure your system.  It will cause it
to revert to a "blank" system - it will not have a name or know
about other systems or networks.
 
This program will also halt the system.
 
Do you want to continue (y/n) ? y
svc.startd: The system is coming down.  Please wait.
svc.startd: 77 system services are now being stopped.
Sep 20 14:57:30 template syslogd: going down on signal 15
svc.startd: The system is down.
syncing file systems... done
Program terminated
 
 
SPARC Enterprise T5120, No Keyboard
Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.30.7, 4096 MB memory available, Serial #83524976.

 
 
 
{0} ok
telnet> q
Connection to localhost closed.
pd-001 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      8     4G       0.4%  3d 2h 35m
template         active     -t----  5000    8     4G        12%  19s
pd-001 # ldm stop template
LDom template stopped
pd-001 # ldm unbind template
pd-001 # zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool                          43.5G  90.4G    97K  /rpool
rpool/ROOT                     9.47G  90.4G    21K  legacy
rpool/ROOT/s10s_u8wos_08a      9.47G  90.4G  6.92G  /
rpool/ROOT/s10s_u8wos_08a/var  2.55G  90.4G  2.55G  /var
rpool/dump                     2.00G  90.4G  2.00G  -
rpool/export                   65.5K  90.4G    23K  /export
rpool/export/home              42.5K  90.4G  42.5K  /export/home
rpool/ldoms                    30.0G  90.4G    21K  none
rpool/ldoms/template           30.0G  90.4G    21K  none
rpool/ldoms/template/disk0       30G   108G  12.0G  -
rpool/swap                        2G  92.4G    16K  -
pd-001 # zfs snapshot rpool/ldoms/template/disk0@snap.`date +%Y%m%d`
pd-001 # zfs list
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
rpool                                     55.4G  78.4G    97K  /rpool
rpool/ROOT                                9.47G  78.4G    21K  legacy
rpool/ROOT/s10s_u8wos_08a                 9.47G  78.4G  6.92G  /
rpool/ROOT/s10s_u8wos_08a/var             2.55G  78.4G  2.55G  /var
rpool/dump                                2.00G  78.4G  2.00G  -
rpool/export                              65.5K  78.4G    23K  /export
rpool/export/home                         42.5K  78.4G  42.5K  /export/home
rpool/ldoms                               42.0G  78.4G    21K  none
rpool/ldoms/template                      42.0G  78.4G    21K  none
rpool/ldoms/template/disk0                42.0G   108G  12.0G  -
rpool/ldoms/template/disk0@snap.20100920      0      -  12.0G  -
rpool/swap                                   2G  80.4G    16K  -

Do not create a clone at this stage. The clone will be created from the snapshot after we have copied the snapshot to the SAN-based filesystem of the new guest.

pd-001 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      8     4G       0.4%  3d 2h 45m
template         inactive   ------          8     4G


LDoms - Configure Guest LDom : GL008


  • gl008 is the LDom
  • pd-001 is the primary Control Domain
  • pd-002 is the auxiliary/failover Control Domain



Prepare the disk


List and choose a suitable disk


pd-001 # echo | format
Searching for disks...done
 
c2t500507680140A44Dd201: configured with capacity of 30.00GB
c2t500507680110A44Dd201: configured with capacity of 30.00GB
c2t500507680140A44Dd213: configured with capacity of 30.00GB
c2t500507680110A44Dd213: configured with capacity of 30.00GB
...
c3t500507680130A454d217: configured with capacity of 71.98GB
c3t500507680120A454d217: configured with capacity of 71.98GB
 
 
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@1,0
       2. c2t500507680140A44Dd201 <IBM-2145-0000 cyl 30718 alt 2 hd 32 sec 64>
          /pci@0/pci@0/pci@8/pci@0/pci@1/fibre-channel@0/fp@0,0/ssd@w500507680140a44d,c9
       3. c2t500507680110A44Dd201 <IBM-2145-0000 cyl 30718 alt 2 hd 32 sec 64>
          /pci@0/pci@0/pci@8/pci@0/pci@1/fibre-channel@0/fp@0,0/ssd@w500507680110a44d,c9
       4. c2t500507680140A44Dd213 <IBM-2145-0000 cyl 30718 alt 2 hd 32 sec 64>
          /pci@0/pci@0/pci@8/pci@0/pci@1/fibre-channel@0/fp@0,0/ssd@w500507680140a44d,d5
       5. c2t500507680110A44Dd213 <IBM-2145-0000 cyl 30718 alt 2 hd 32 sec 64>
          /pci@0/pci@0/pci@8/pci@0/pci@1/fibre-channel@0/fp@0,0/ssd@w500507680110a44d,d5
               ...
      48. c3t500507680130A454d217 <IBM-2145-0000 cyl 9214 alt 2 hd 64 sec 256>
          /pci@0/pci@0/pci@9/fibre-channel@0/fp@0,0/ssd@w500507680130a454,d9
      49. c3t500507680120A454d217 <IBM-2145-0000 cyl 9214 alt 2 hd 64 sec 256>
          /pci@0/pci@0/pci@9/fibre-channel@0/fp@0,0/ssd@w500507680120a454,d9
Specify disk (enter its number): Specify disk (enter its number):
pd-001 #

We'll use disk 2 (c2t500507680140A44Dd201) for the OS. Label the disk before we use it.

A template snapshot was made earlier.

pd-001 # zfs list
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
rpool                                     55.5G  78.4G    97K  /rpool
rpool/ROOT                                9.47G  78.4G    21K  legacy
rpool/ROOT/s10s_u8wos_08a                 9.47G  78.4G  6.92G  /
rpool/ROOT/s10s_u8wos_08a/var             2.55G  78.4G  2.55G  /var
rpool/dump                                2.00G  78.4G  2.00G  -
rpool/export                                23K  78.4G    23K  /export
rpool/ldoms                               42.0G  78.4G    21K  none
rpool/ldoms/template                      42.0G  78.4G    21K  none
rpool/ldoms/template/disk0                42.0G   108G  12.0G  -
rpool/ldoms/template/disk0@snap.20100920      0      -  12.0G  -
rpool/swap                                   2G  80.4G    16K  -
 
pd-001 # zpool create -m none gl008pool c2t500507680140A44Dd201
pd-001 # zfs list
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
gl008pool                                   81K  29.3G    21K  none
rpool                                     55.5G  78.4G    97K  /rpool
rpool/ROOT                                9.47G  78.4G    21K  legacy
rpool/ROOT/s10s_u8wos_08a                 9.47G  78.4G  6.92G  /
rpool/ROOT/s10s_u8wos_08a/var             2.55G  78.4G  2.55G  /var
rpool/dump                                2.00G  78.4G  2.00G  -
rpool/export                                23K  78.4G    23K  /export
rpool/ldoms                               42.0G  78.4G    21K  none
rpool/ldoms/template                      42.0G  78.4G    21K  none
rpool/ldoms/template/disk0                42.0G   108G  12.0G  -
rpool/ldoms/template/disk0@snap.20100920      0      -  12.0G  -
rpool/swap

Copy the snapshot


We will copy snapshot rpool/ldoms/template/disk0@snap.20100920 to gl008pool/disk0@snap.20100920

pd-001 # zfs send rpool/ldoms/template/disk0@snap.20100920 | zfs receive gl008pool/disk0@snap.20100920

(This took about 7 minutes on an idle system)

pd-001 # zfs list
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
gl008pool                              12.0G  17.3G    21K  none
gl008pool/disk0                        12.0G  17.3G  12.0G  -
gl008pool/disk0@snap.20100920              0      -  12.0G  -
rpool                                     55.5G  78.4G    97K  /rpool
rpool/ROOT                                9.47G  78.4G    21K  legacy
rpool/ROOT/s10s_u8wos_08a                 9.47G  78.4G  6.92G  /
rpool/ROOT/s10s_u8wos_08a/var             2.55G  78.4G  2.55G  /var
rpool/dump                                2.00G  78.4G  2.00G  -
rpool/export                                23K  78.4G    23K  /export
rpool/ldoms                               42.0G  78.4G    21K  none
rpool/ldoms/template                      42.0G  78.4G    21K  none
rpool/ldoms/template/disk0                42.0G   108G  12.0G  -
rpool/ldoms/template/disk0@snap.20100920      0      -  12.0G  -
rpool/swap                                   2G  80.4G    16K  -

Clone the snapshot


The clone will be the bootdisk for the guest LDom gl008.

pd-001 # zfs clone gl008pool/disk0@snap.20100920 gl008pool/clonedisk0
pd-001 # zfs list
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
gl008pool                              12.0G  17.3G    21K  none
gl008pool/clonedisk0                       0  17.3G  12.0G  -
gl008pool/disk0                        12.0G  17.3G  12.0G  -
gl008pool/disk0@snap.20100920              0      -  12.0G  -
rpool                                     55.5G  78.4G    97K  /rpool
rpool/ROOT                                9.47G  78.4G    21K  legacy
rpool/ROOT/s10s_u8wos_08a                 9.47G  78.4G  6.92G  /
rpool/ROOT/s10s_u8wos_08a/var             2.55G  78.4G  2.55G  /var
rpool/dump                                2.00G  78.4G  2.00G  -
rpool/export                                23K  78.4G    23K  /export
rpool/ldoms                               42.0G  78.4G    21K  none
rpool/ldoms/template                      42.0G  78.4G    21K  none
rpool/ldoms/template/disk0                42.0G   108G  12.0G  -
rpool/ldoms/template/disk0@snap.20100920      0      -  12.0G  -
rpool/swap

Create the new Ldom


pd-001 # ldm add-domain gl008
pd-001 # ldm add-vcpu 8 gl008
pd-001 # ldm add-memory 2G gl008
pd-001 # ldm add-vnet pvid=A vnet0008 primary-vsw0 gl008
pd-001 # ldm add-vnet pvid=A vnet1008 primary-vsw1 gl008
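The remaining steps would follow the same pattern as for the template domain (a sketch, assuming the clone created above is used as the boot disk; the volume and vdisk names are illustrative):

pd-001 # ldm add-vdsdev /dev/zvol/dsk/gl008pool/clonedisk0 gl008_disk0@primary-vds0
pd-001 # ldm add-vdisk vdisk0 gl008_disk0@primary-vds0 gl008
pd-001 # ldm set-var auto-boot\?=false gl008
pd-001 # ldm bind gl008
pd-001 # ldm start gl008

Remember to save the new configuration to the SP (ldm add-config) and take a fresh XML constraints backup (ldm ls-constraints -x gl008), as was done for the template.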


Failover Document


LDoms - Failover Guest LDom : gl008



  • gl008 is the LDom 
  • pd-001 is the primary Control Domain
  • pd-002 is the auxiliary/failover Control Domain

These instructions assume that:

  1. gl008 (the guest LDom) has been configured to run on both Control Domains as documented in LDoms - Configure Guest LDom : gl008
  2. Failover has been previously tested and shown to work (which in this case, it has)



Failover from Primary to Auxiliary


pd-001 : Shut down gl008


Stop and unbind gl008 on pd-001

pd-001 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      8     4G       0.5%  19d 19h 1m
gl008         active     -t----  5000    8     2G        12%  3m
template         inactive   ------          8     4G
pd-001 # ldm stop gl008
LDom gl008 stopped
pd-001 # ldm unbind gl008
pd-001 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      8     4G       0.4%  19d 19h 2m
gl008         inactive   ------          8     2G
template         inactive   ------          8     4G

pd-001 : Export the ZFS pool used by gl008


pd-001 # zpool list
NAME           SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
gl008pool  29.8G  12.1G  17.7G    40%  ONLINE  -
rpool          136G  23.5G   113G    17%  ONLINE  -
pd-001 # zpool status -v gl008pool
  pool: gl008pool
 state: ONLINE
 scrub: none requested
config:
 
        NAME                       STATE     READ WRITE CKSUM
        gl008pool               ONLINE       0     0     0
          c2t500507680140A44Dd201  ONLINE       0     0     0
 
errors: No known data errors
pd-001 # zpool export gl008pool
pd-001 # zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool   136G  23.5G   113G    17%  ONLINE  -

pd-002 : Import the ZFS pool used by gl008


After exporting the zpool, speak to the Storage Team to reverse the metro-mirroring for the disk(s) used. The output from zpool status -v poolname (which we ran before exporting the zpool) will be useful in identifying the disks for the Storage Team. Until the mirroring direction has been reversed, the import on the other control domain will fail with an error like the one below:

pd-002 # zpool import gl008pool
cannot import 'gl008pool': one or more devices is currently unavailable

Assuming the Storage Team have done their bit, the zpool import should work properly:

pd-002 # zpool import gl008pool
pd-002 # zpool list
NAME           SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
gl008pool  29.8G  12.1G  17.7G    40%  ONLINE  -
rpool          136G  23.9G   112G    17%  ONLINE  -

pd-002 : Bind, start and boot the LDom gl008


pd-002 # ldm bind gl008
pd-002 # ldm start gl008
LDom gl008 started
pd-002 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      8     4G       0.4%  12d 16h 24m
gl008         active     -t----  5000    8     2G        12%  5s
template         inactive   ------          8     4G
pd-002 # telnet localhost 5000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
 
Connecting to console "gl008" in group "gl008" ....
Press ~? for control options ..
 
{0} ok boot
Boot device: disk  File and args:
SunOS Release 5.10 Version Generic_142900-11 64-bit
Copyright 1983-2010 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: gl008

 
gl008 console login:

Login via the network to test access.

Failback from Auxiliary to Primary


When the situation requires it, you can fail back to the primary Control Domain:

pd-002: Shut down and fail back to (the primary) pd-001


After shutting down, stop and unbind gl008:

pd-002 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      8     4G       0.2%  12d 16h 31m
gl008         active     -t----  5000    8     2G        12%  2m
template         inactive   ------          8     4G
pd-002 # ldm stop gl008
LDom gl008 stopped
pd-002 # ldm unbind gl008
pd-002 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      8     4G       0.2%  12d 16h 31m
gl008         inactive   ------          8     2G
template         inactive   ------          8     4G

pd-002 : Export the ZFS pool used by gl008


pd-002 # zpool list
NAME           SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
gl008pool  29.8G  12.1G  17.7G    40%  ONLINE  -
rpool          136G  23.9G   112G    17%  ONLINE  -
pd-002 # zpool export gl008pool
pd-002 # zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool   136G  23.9G   112G    17%  ONLINE  -

After exporting the zpool, speak to the Storage Team to reverse the metro-mirroring again.

pd-001 : Import the ZFS pool used by gl008


pd-001 # zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool   136G  23.5G   113G    17%  ONLINE  -
pd-001 # zpool import gl008pool
pd-001 # zpool list
NAME           SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
gl008pool  29.8G  12.1G  17.7G    40%  ONLINE  -
rpool          136G  23.5G   113G    17%  ONLINE  -

pd-001: Bind and start LDom gl008


pd-001 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      8     4G       0.4%  19d 19h 52m
gl008         inactive   ------          8     2G
template         inactive   ------          8     4G
pd-001 # ldm bind gl008
pd-001 # ldm start gl008
LDom gl008 started
pd-001 # ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      8     4G       0.4%  19d 19h 52m
gl008         active     -t----  5000    8     2G        12%  2s
template         inactive   ------          8     4G
pd-001 # telnet localhost 5000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
 
Connecting to console "gl008" in group "gl008" ....
Press ~? for control options ..
 
{0} ok boot
Boot device: disk  File and args:
SunOS Release 5.10 Version Generic_142900-11 64-bit
Copyright 1983-2010 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.

gl008 console login:


LDoms - Add A New ZFS volume To A Guest LDom : gl009


  • primary - pd-001
  • guest LDom - gl009

On the primary/control domain:


Note: if you have a requirement for thin provisioning, you could create a sparse ZFS volume once the disk has been presented to the Guest LDom.
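A sparse volume is created with the -s flag to zfs create, wherever the volume lives (a sketch - the pool, volume name and size are placeholders):

gl009 # zfs create -s -V 100gb somepool/thinvol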

Create a new zpool using the new disk


pd-001 # zpool list
NAME           SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
gl009pool  71.5G  12.8G  58.7G    17%  ONLINE  -
gl008pool  29.8G  12.9G  16.8G    43%  ONLINE  -
rpool          136G  23.5G   113G    17%  ONLINE  -

We'll use disk 14 - c2t500507680110A44Dd219:

pd-001 # echo | format
          ...
      13. c2t500507680140A44Dd217 <IBM-2145-0000-82.00GB>
          /pci@0/pci@0/pci@8/pci@0/pci@1/fibre-channel@0/fp@0,0/ssd@w500507680140a44d,d9
      14. c2t500507680110A44Dd219 <IBM-2145-0000 cyl 10238 alt 2 hd 32 sec 64>
          /pci@0/pci@0/pci@8/pci@0/pci@1/fibre-channel@0/fp@0,0/ssd@w500507680110a44d,db
      15. c2t500507680140A44Dd219 <IBM-2145-0000 cyl 10238 alt 2 hd 32 sec 64>
          /pci@0/pci@0/pci@8/pci@0/pci@1/fibre-channel@0/fp@0,0/ssd@w500507680140a44d,db
          ...

Label the disk as an SMI disk.


This will impose a 2TB size limit on the disk, but allows for easier resizing later on, should it be required. If you cannot use SMI, then you're stuck with EFI, which has its own strengths. Don't worry.

pd-001 # format -e
format> p
partition> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Ready to label disk, continue? y
partition> p
Current partition table (default):
Total disk cylinders available: 10238 + 2 (reserved cylinders)
 
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0                0         (0/0/0)            0
  1       swap    wu       0                0         (0/0/0)            0
  2     backup    wu       0 - 10237       10.00GB    (10238/0/0) 20967424
  3 unassigned    wm       0                0         (0/0/0)            0
  4 unassigned    wm       0                0         (0/0/0)            0
  5 unassigned    wm       0                0         (0/0/0)            0
  6        usr    wm       0 - 10237       10.00GB    (10238/0/0) 20967424
  7 unassigned    wm       0                0         (0/0/0)            0
 
(Note slice 6)
  
partition> q
format> q

Create the zpool using the slice number to maintain the SMI label


pd-001 # zpool create -m none gl009data01 c2t500507680110A44Dd219s6
pd-001 # zpool list
NAME             SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
gl009data01  9.94G  85.5K  9.94G     0%  ONLINE  -
gl009pool    71.5G  12.8G  58.7G    17%  ONLINE  -
gl008pool    29.8G  12.9G  16.8G    43%  ONLINE  -
rpool            136G  23.5G   113G    17%  ONLINE  -

Create the ZFS volume for exporting to the LDom gl009


If you want to use all of the available disk space, you will need to calculate how much you can use:

pd-001 # zfs get "available" gl009data01
NAME            PROPERTY   VALUE   SOURCE
gl009data01  available  9.78G   -
 
pd-001 # echo "scale=2;9.78*1024" | bc
10014.72

Go a bit higher than 10014mb to allow for rounding in the displayed value. Try 10016mb; if that fails, try 10015mb, and so on:

pd-001 # zfs create -V 10016mb gl009data01/disk0

Make the virtual disk available to the LDom


pd-001 # ldm add-vdsdev options=slice /dev/zvol/dsk/gl009data01/disk0 gl009data01-0@primary-vds0

Note: options=slice - Exports a backend as a single slice disk. This should allow later volume resizes to be recognised on the guest LDOM.

pd-001 # ldm add-vdisk vdata01-0 gl009data01-0@primary-vds0 gl009

Login to gl009. The new disk should be immediately visible and available:

gl009 # echo | format
Searching for disks...done
 
AVAILABLE DISK SELECTIONS:
       0. c0d0 <SUN-DiskImage-70GB cyl 1998 alt 2 hd 96 sec 768>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c0d2 <Unknown-Unknown-0001-9.79GB>
          /virtual-devices@100/channel-devices@200/disk@2
Specify disk (enter its number): Specify disk (enter its number):

Create the mountpoint to be used for the data volume:

gl009 # mkdir /home/<domain>

Create a new zpool and mount it on the new mountpoint:

gl009 # zpool create -m /home/<domain> domaindata01 c0d2s0

(Note: use slice 0 - c0d2s0 as we're using slices for data disks less than 2TB)

gl009 # zpool list
NAME            SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
domaindata01  9.75G   124K  9.75G     0%  ONLINE  -
rpool            70G  8.33G  61.7G    11%  ONLINE  -
gl009 # zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
domaindata01                   120K  9.60G  55.5K  /home/domain
rpool                          10.3G  58.6G    97K  /rpool
rpool/ROOT                     7.33G  58.6G    21K  legacy
rpool/ROOT/s10s_u8wos_08a      7.33G  58.6G  4.60G  /
rpool/ROOT/s10s_u8wos_08a/var  2.73G  58.6G  2.73G  /var
rpool/dump                     1.00G  58.6G  1.00G  -
rpool/export                    243K  58.6G    50K  /export
rpool/export/home               193K  58.6G   193K  /export/home
rpool/swap                        2G  60.6G    16K  -
gl009 # df -k /home/domain
Filesystem            kbytes    used   avail capacity  Mounted on
domaindata01        10063872      55 10063752     1%    /home/domain

Remember:


  • Update the xml constraints listing on both the master and auxiliary Primary/Control LDoms
  • Update the Guest LDom configuration data on both the master and auxiliary Primary/Control LDoms
  • Update the logical domain configuration on the SP on both the master and auxiliary Primary/Control LDoms (a sketch of these commands follows below)
  • Any zpools other than the root zpool in a Guest LDom:
    • Will have to be exported separately from the Primary/Control LDom when switching from the Master to the Auxiliary - note that the Storage Team will have to be asked to switch the direction of the metro-mirror for the required VDisks
    • Will have to be forcibly imported into the Guest LDom after reboot *

(* Or you could export it from the Guest LDom before shutting it down, then import it normally after bringing the Guest LDom up on the alternative Primary/Control LDom. This may be impractical in a failover situation.)
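A sketch of the bookkeeping commands referred to above (the file path and configuration name are illustrative; repeat on the other control domain as appropriate):

pd-001 # ldm ls-constraints -x gl009 > /backup/ldom/gl009.xml
pd-001 # ldm add-config gl009.`date +%Y%m%d.%H%M`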



Aggregated link in Solaris

Note: all interfaces intended to be aggregated should be connected to the same switch. The ports used on the switch need to be configured for aggregation by Network Support.

Intended interfaces to be used:

  • e1000g0
  • nxge0

host002 # ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1

        inet 127.0.0.1 netmask ff000000

e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2

        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx


local-mac-address? must be set to true.

host002 # eeprom local-mac-address?

local-mac-address?=true

Make sure interfaces to be used are visible to dladm

host002 # dladm show-dev

e1000g0         link: up        speed: 1000  Mbps       duplex: full

e1000g1         link: unknown   speed: 0     Mbps       duplex: half

e1000g2         link: unknown   speed: 0     Mbps       duplex: half

e1000g3         link: unknown   speed: 0     Mbps       duplex: half

nxge0           link: up        speed: 1000  Mbps       duplex: full

nxge1           link: down      speed: 0     Mbps       duplex: unknown

nxge2           link: down      speed: 0     Mbps       duplex: unknown

nxge3           link: down      speed: 0     Mbps       duplex: unknown

Unplumb the interfaces to be aggregated, in this case only e1000g0 as nxge0 is already unplumbed

host002 # ifconfig e1000g0 down unplumb

Create a link-aggregation group with key 1. The key is the number that identifies the aggregation; the lowest key number is 1, and zero is not allowed.

host002 # dladm create-aggr -d e1000g0 -d nxge0 1


host002 # mv /etc/hostname.e1000g0 /etc/hostname.aggr1

A reboot is required

host002 # init 6

After the reboot, check the interface status:

host002 # ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1

        inet 127.0.0.1 netmask ff000000

aggr1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2

        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx

        ether 0:21:28:59:50:3c

host002 # dladm show-aggr

key: 1 (0x0001) policy: L4      address: 0:21:28:59:50:3c (auto)

           device       address                 speed           duplex  link    state

           e1000g0     xx         1000  Mbps    full    up      attached

           nxge0        xx          1000  Mbps    full    up      attached


The actual configuration on pd-001 is shown below; note that aggr1 and aggr2 have been unplumbed and re-plumbed under the new names vsw0 and vsw1:


# dladm show-dev

vsw0            link: up        speed: 1000  Mbps       duplex: full

vsw1            link: up        speed: 1000  Mbps       duplex: full

e1000g0         link: up        speed: 1000  Mbps       duplex: full

e1000g1         link: up        speed: 1000  Mbps       duplex: full

e1000g2         link: up        speed: 1000  Mbps       duplex: full

e1000g3         link: up        speed: 1000  Mbps       duplex: full

nxge0           link: up        speed: 1000  Mbps       duplex: full

nxge1           link: up        speed: 1000  Mbps       duplex: full

nxge2           link: up        speed: 1000  Mbps       duplex: full

nxge3           link: up        speed: 1000  Mbps       duplex: full

# dladm show-aggr -L

key: 1 (0x0001) policy: L4      address: 0:21:28:59:50:3c (auto)

                LACP mode: off  LACP timer: short

    device    activity timeout aggregatable sync  coll dist defaulted expired

    e1000g0   passive  short   yes          no    no   no   no        no

    e1000g1   passive  short   yes          no    no   no   no        no

    e1000g2   passive  short   yes          no    no   no   no        no

    e1000g3   passive  short   yes          no    no   no   no        no

key: 2 (0x0002) policy: L4      address: 0:21:28:72:55:9a (auto)

                LACP mode: off  LACP timer: short

    device    activity timeout aggregatable sync  coll dist defaulted expired

    nxge0     passive  short   yes          no    no   no   no        no

    nxge1     passive  short   yes          no    no   no   no        no

    nxge2     passive  short   yes          no    no   no   no        no

    nxge3     passive  short   yes          no    no   no   no        no

# ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1

        inet 127.0.0.1 netmask ff000000

vsw0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2

        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx

        groupname ipmp1


vsw1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3

        inet <IP> netmask ffffff00 broadcast xx.xx.xx.xx

        groupname ipmp1

     
#



Solaris notes



Finding HBA cards on Solaris 10


host # fcinfo hba-port
No Adapters Found.
host002 # fcinfo hba-port
HBA Port WWN: 10000000c99a4878
        OS Device Name: /dev/cfg/c2
        Manufacturer: Emulex
        Model: LPe11000-S
        Firmware Version: 2.82a4 (Z3D2.82A4)
        FCode/BIOS Version: Boot:none Fcode:none
        Serial Number: 0999VM0-10040018BU
        Driver Name: emlxs
        Driver Version: 2.50o (2010.01.08.09.45)
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 2Gb
        Node WWN: 20000000c99a4878
HBA Port WWN: 10000000c99a47d5
        OS Device Name: /dev/cfg/c3
        Manufacturer: Emulex
        Model: LPe11000-S
        Firmware Version: 2.82a4 (Z3D2.82A4)
        FCode/BIOS Version: Boot:none Fcode:none
        Serial Number: 0999VM0-10040018D7
        Driver Name: emlxs
        Driver Version: 2.50o (2010.01.08.09.45)
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 2Gb
        Node WWN: 20000000c99a47d5

Patching Solaris 10


It is recommended to patch in single-user mode. However, in single-user mode some local filesystems may not be mounted. To mount local filesystems:

# svcadm enable svc:/system/filesystem/local:default
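With the local filesystems mounted, the patch can then be applied in the usual way (a sketch - the patch id is a placeholder, assumed to have been unzipped under /var/tmp):

# patchadd /var/tmp/<patch-id>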
