Friday, November 27, 2015

Solaris 11: The pkg Repositories

[Solaris Logo, courtesy former Sun Microsystems]

Abstract:

Packaging has long been the basis of modern operating systems, dating back to AT&T System V. Solaris adopted SVR4 packaging as Sun Microsystems grew from an operating system needing a compiler into a production operating system ready for deployment. SVR4 packaging was originally based upon the concept of a stream (recorded on sequential block media like tape) or a tree (recorded on random block media like disk). Sun Microsystems was astutely aware that the HTTP protocol was not much different from a tape, where a stream of data is pulled down, and upgraded SVR4 to support HTTP repositories with encryption and license keys. Somewhere along the way, Sun lost its way and created a proprietary packaging system with fewer capabilities, called IPS, based upon the new command "pkg"... but Oracle is making the best of it.

[former OpenSolaris logo]

The "pkg" Repository

The concept of a Package Repository with the Image Packaging System was introduced with OpenSolaris. The repository would be served up through a web server and secured with certificates.




[Oracle Logo, courtesy Oracle Corporation]

Oracle pkg Repositories

There are two kinds of Oracle “pkg” repositories:
1. Non-production Release Repository
   Designated as: http://pkg.oracle.com/solaris/release/
2. Production Support Repository
   Designated as: https://pkg.oracle.com/solaris/support/


The document describing the Solaris 11.2 package publisher configuration:
http://docs.oracle.com/cd/E36784_01/html/E36802/gijmo.html
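Switching from the release repository to the support repository requires the SSL key and certificate issued with a support contract. The following is a hedged sketch, not a verbatim recipe: the key and certificate paths are placeholder names, and on a live system the built command would be executed rather than echoed.

```shell
# Hypothetical sketch: repoint the "solaris" publisher at the support repo.
# The key/cert paths are placeholders; real ones come with a support contract.
key=/var/pkg/ssl/Oracle_Solaris_11_Support.key.pem
cert=/var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem

# -k/-c supply the SSL credentials, -G removes the old origin, -g adds the new one.
cmd="pkg set-publisher -k $key -c $cert \
-G http://pkg.oracle.com/solaris/release/ \
-g https://pkg.oracle.com/solaris/support/ solaris"

echo "$cmd"   # on a live system, run the command instead of echoing it
```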

Checking Repository

The newly installed OS uses the Oracle Package Publisher, which defaults to the Release Repository.
sun9876/root# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION                           
solaris                     origin   online F http://pkg.oracle.com/solaris/release/

Additional detail can be reviewed about Oracle’s “solaris” release publisher:
sun9876/root# pkg publisher solaris

            Publisher: solaris
                Alias:
           Origin URI: http://pkg.oracle.com/solaris/release/
              SSL Key: None
             SSL Cert: None
          Client UUID: 6367a630-fbe6-11e3-8701-5bf522237f54
      Catalog Updated: August 18, 2015 04:44:20 PM
              Enabled: Yes

To check the current OS release and update (note: Solaris 11.2 SRU 0 is installed; the "0.175" branch prefix denotes Solaris 11):

sun9876/root# pkg info entire
          Name: entire
       Summary: Incorporation to lock all system packages to the same build
   Description: This package constrains system package versions to the same
                build.  WARNING: Proper system update and correct package
                selection depend on the presence of this incorporation.
                Removing this package will result in an unsupported system.
      Category: Meta Packages/Incorporations
         State: Installed
     Publisher: solaris
       Version: 0.5.11
 Build Release: 5.11
        Branch: 0.175.2.0.0.42.0
Packaging Date: June 24, 2014 07:38:32 PM
          Size: 5.46 kB
          FMRI: pkg://solaris/entire@0.5.11,5.11-0.175.2.0.0.42.0:20140624T193832Z
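The Branch field encodes the release. A small sketch of decoding it with awk, assuming the conventional 0.175.update.sru.x.build.rev layout (the third dot-separated field is the Solaris 11 update, the fourth is the SRU):

```shell
# Decode a pkg(5) Branch value (assumed layout: 0.175.update.sru.x.build.rev).
branch="0.175.2.0.0.42.0"
update=$(echo "$branch" | awk -F. '{print $3}')
sru=$(echo "$branch" | awk -F. '{print $4}')
echo "Solaris 11.${update} SRU ${sru}"
```

For the installed "entire" package above, this prints "Solaris 11.2 SRU 0"; for the repository version with branch 0.175.2.1.0.2.1 it would print "Solaris 11.2 SRU 1".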


To check the Oracle Release Repository (note: Solaris 11.2 SRU 1 is available; the "0.175" branch prefix denotes Solaris 11):
sun9876/root# pkg info -r entire
          Name: entire
       Summary: Incorporation to lock all system packages to the same build
   Description: This package constrains system package versions to the same
                build.  WARNING: Proper system update and correct package
                selection depend on the presence of this incorporation.
                Removing this package will result in an unsupported system.
      Category: Meta Packages/Incorporations
         State: Not installed
     Publisher: solaris
       Version: 0.5.11
 Build Release: 5.11
        Branch: 0.175.2.1.0.2.1
Packaging Date: September 23, 2014 10:49:40 PM
          Size: 5.46 kB
          FMRI: pkg://solaris/entire@0.5.11,5.11-0.175.2.1.0.2.1:20140923T224940Z

There are 148 lines reported from the Oracle Release repository (note that this count includes the header line "pkg list" prints).
sun9876/root# pkg list -u | wc -l
     148
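Since "pkg list -u" emits a header line, piping through "wc -l" overcounts by one. A small sketch of stripping the header before counting, using canned sample output in place of a live run:

```shell
# "pkg list -u" prints a header line; strip it before counting packages.
# Canned sample output stands in for a live run on a Solaris host.
pkg_out="NAME (PUBLISHER)        VERSION                  IFO
archiver/gnu-tar        1.27.1-0.175.2.0.0.42.1  i--
compress/bzip2          1.0.6-0.175.2.0.0.42.1   i--"
count=$(printf '%s\n' "$pkg_out" | tail -n +2 | wc -l)
echo "$count"
```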

To list the updates available:
sun9876/root# pkg list -u | head
NAME (PUBLISHER)                                  VERSION                    IFO
archiver/gnu-tar                                  1.27.1-0.175.2.0.0.42.1    i--
compress/bzip2                                    1.0.6-0.175.2.0.0.42.1     i--
compress/gzip                                     1.5-0.175.2.0.0.42.1       i--
compress/p7zip                                    9.20.1-0.175.2.0.0.42.1    i--
compress/pbzip2                                   1.1.6-0.175.2.0.0.42.1     i--
compress/pixz                                     1.0-0.175.2.0.0.42.1       i--
compress/unzip                                    6.0-0.175.2.0.0.42.1       i--
compress/xz                                       5.0.1-0.175.2.0.0.42.1     i--
compress/zip                                      3.0-0.175.2.0.0.42.1       i--

A dry-run of the update shows 8 package updates available, with versions and estimated space; no reboot is required.
sun9876/root# pkg update -nv
            Packages to update:         8
     Estimated space available: 275.69 GB
Estimated space to be consumed:  65.63 MB
       Create boot environment:        No
Create backup boot environment:       Yes
          Rebuild boot archive:        No

Changed packages:
solaris
  consolidation/sunpro/sunpro-incorporation
    0.5.11,5.11-0.175.2.0.0.37.0:20140414T130238Z -> 0.5.11,5.11-0.175.2.1.0.4.0:20140728T200719Z
  consolidation/userland/userland-incorporation
    0.5.11,5.11-0.175.2.0.0.42.1:20140623T010405Z -> 0.5.11,5.11-0.175.2.1.0.2.0:20140723T184045Z
  developer/assembler
    0.5.11,5.11-0.175.2.0.0.37.0:20140414T130241Z -> 0.5.11,5.11-0.175.2.1.0.4.0:20140728T200720Z
  entire
    0.5.11,5.11-0.175.2.0.0.42.0:20140624T193832Z -> 0.5.11,5.11-0.175.2.1.0.2.1:20140923T224940Z
  system/library/c++-runtime
    0.5.11,5.11-0.175.2.0.0.37.0:20140414T130401Z -> 0.5.11,5.11-0.175.2.1.0.4.0:20140728T200722Z
  system/library/math
    0.5.11,5.11-0.175.2.0.0.37.0:20140414T130409Z -> 0.5.11,5.11-0.175.2.1.0.4.0:20140728T200728Z
  system/library/mmheap
    0.5.11,5.11-0.175.2.0.0.23.0:20130916T153150Z -> 0.5.11,5.11-0.175.2.1.0.4.0:20140728T200732Z
  system/library/openmp
    0.5.11,5.11-0.175.2.0.0.37.0:20140414T130412Z -> 0.5.11,5.11-0.175.2.1.0.4.0:20140728T200733Z

To update from 11.2.0 to 11.2.1 (the latest at the time this article was published):
sun9876/root# pkg update

Management through Ops Center

If the operating system instance is managed through Ops Center, the publisher repositories are changed to point at the Ops Center server, and patching can be done centrally.

The Ops Center server acts as the local proxy, holding patches and packages from Oracle:
sun5582/dh127087$ pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION                           
solaris                     origin   online F https://oracle-oem-oc-mgmt-sun9999:8002/IPS/
cacao                       origin   online F https://oracle-oem-oc-mgmt-sun9999:8002/IPS/
mp-re          (non-sticky) origin   online F https://oracle-oem-oc-mgmt-sun9999:8002/IPS/
opscenter                   origin   online F https://oracle-oem-oc-mgmt-sun9999:8002/IPS/

The operating systems managed through Ops Center can be patched remotely or can be patched through the command line, using Ops Center server as the supported package repository.

Conclusions

While Sun Microsystems took a detour that set packaging back a couple of decades, Oracle has started to make the best of it. With the release of Ops Center to manage the Solaris cloud components, automatically configure the pkg components, and provide a continuous feed of packages for their operating system and firmware, Oracle has been making some sweet lemonade from its lemons.

Wednesday, November 25, 2015

Joyent: Encapsulating Linux through Docker into a Zone


[Solaris 11 Launch image, courtesy Oracle]

Abstract:

Virtualization has long been available in the UNIX OS world. The creation of users in a time-sharing environment, isolating executable threads from one another and protecting files in an underlying file system, started the journey. The creation of the Virtual File System, where disks could be mounted anywhere in a file system tree (instead of at a drive letter), revolutionized computing by allowing those systems to grow in the shared environment! The creation of "chroot", so an application could run in its own file system space, made an application "feel" like it was on a dedicated system. The merging of SVR4 into Solaris created a robust multi-processor infrastructure to host multi-user and multi-tenant systems. The creation of Zones under SVR4 Solaris 10 further extended the original concept of the UNIX "chroot", isolating CPU, memory, users, and storage, effectively making a single instance of the Solaris OS truly multi-tenant. The creation of Branded Zones for Linux and Solaris came later, allowing entire operating systems to be encapsulated under Intel and SPARC Solaris systems. Newer proprietary technologies continue to appear on the horizon.

[Oracle Linux, courtesy Oracle]
The Linux Problem

People participating in the Linux ecosystem are interested in creating new raw environments, isolated to their operating system under proprietary Intel processors, to supply a reasonable replacement for mature infrastructure. These replacements require very long efforts, which often never really get completed. Veterans understand the benefit of good engineering and can often take systems "to the next level." Vendors like Oracle have taken Linux, run their applications on top of it, and supplied the patches necessary to keep Linux stable.

Joyent: Zones(KVM and Linux)

Former employees of Sun Microsystems continue to do the heavy lifting in the industry. Network Management wrote about Joyent's efforts to port KVM into Solaris Zones under their SmartOS, based upon Illumos. Illumos originated from Sun Microsystems' OpenSolaris project (which also became the basis of Oracle's Solaris 11).

[Solaris Zone/Container concept, courtesy former Sun Microsystems]

Joyent: Zones(Docker and Linux)

One might expect that cloud companies obsessed with virtualization, like Joyent, would continue their quest for a "better cloud". In 2015, Joyent released a presentation on porting Docker to encapsulate Linux into a Zone... using the same SmartOS based upon Illumos, which found its roots in Sun Microsystems' OpenSolaris.



For Joyent, The Cloud means chasing every container technology and integrating it into SmartOS, to give their customers choice, while simultaneously utilizing their infrastructure as efficiently as possible.

Conclusion

SVR4 UNIX and Sun Solaris developers have a long history of virtualization. Joyent's success in "Cloud" environments continues to lead the market in vision, taking technologies which were good but raw and rolling them into mature facilities that keep the computing industry growing!

Thursday, October 22, 2015

SPARC: Oracle Linux Coming Soon!

[SPARC International Logo, Courtesy SPARC International]
Abstract:
Linux has been available under SPARC for some time. Ubuntu committed to supporting Linux under UltraSPARC T systems. Fujitsu offered Linux under their SPARC systems for their MPP-based clusters. China offered Linux for small controllers based upon SPARC. Now Oracle is getting into the business of releasing Linux for its systems.

[Oracle Corporation Logo, Courtesy Oracle]

Source: Job Posting:
Oracle made a public job posting, foreshadowing an upcoming product release: Course/Curriculum Dev 4-Training
Oracle VM Server for SPARC is highly efficient, enterprise-class virtualization enabling the creation of 128 virtual servers on one system leveraging Oracle's SPARC servers... The change here is to remove any mention of "Solaris"... This product will also be available on Linux going forward so Linux or Solaris are equally valid. 
Documentation of a training class for a product is a pretty reliable source for a new product release.

[SPARC M7 Die, Courtesy The Register]

Not the Only Source
Larry Ellison, currently the CTO of Oracle, announced that Oracle Enterprise Linux was coming to SPARC back in 2010, around the acquisition time of Sun Microsystems by Oracle Corporation.
"We think Sparc will become clearly the best chip for running Oracle software. At that point we'd be nuts not to move Oracle Enterprise Linux there. We're a ways away, but I think that's definitely going to happen," Ellison said. It's likely to happen in "the T4, T5 timeframe."
The SPARC T4 & T5 processors are currently being sold. More SPARC processors are coming...

[San Francisco California, courtesy Oracle Corporation]
Reading the Tea Leaves
The T5s are about to be supplanted by the pending SPARC M7 and SPARC T7 releases. Oracle OpenWorld is about to occur. This seems like the right timing for a product announcement or release... "get your new SPARC processors with Oracle Linux or Solaris" could be a great marketing campaign!

Conclusion:
If your company has been holding out for a large vendor to support Linux under SPARC, this may be your opportunity. This could also be foretelling of the inevitable decline of Intel under Oracle Engineered Systems. The bundling of Linux under a lower cost SPARC could be the beginning of Oracle re-entering the HPC market.

Tuesday, October 20, 2015

Coming Soon: SPARC T7


[SPARC International logo]


Abstract:

Operating systems and software vendors continue to struggle with the difference between 32-bit and 64-bit architectures, but the SPARC family of processors continues to roll out 64-bit CPU chips for the flagship 64-bit Solaris Operating System. Watching companies announce new products ahead of time is tricky because of government regulation, but watching less overt routes can provide a great level of insight into what is coming soon.

[SPARC and Solaris Public Roadmap, courtesy Oracle Corporation]

Roadmap: Foretelling the Future

Oracle has a history of releasing public road maps for SPARC and Solaris. They have been fairly accurate since Oracle acquired Sun Microsystems. The roadmaps are subject to change, but they give the architect a good idea of what is coming and how to plan for it. As of August 2015, Oracle's public roadmap indicates that a new SPARC is in test, both an M-Series and a T-Series. In August, Network Management discussed details regarding the pending M7 release.

[System Controller and Console image, courtesy Oracle]

Firmware: What's in the Wild

SPARC T7 is operational!

A recent firmware release indicates the following bug numbers have been resolved:
19601081 Raise TMB size for ...T7 
20915261 T7-All Platforms: /HOST/console logging is not working... 
20949111 Snapshot should collect fmadm faulty -av output for T7 
21376029 STRAND_LOCAL_MMU_GROUP() broken for non-T7... targets 
The new SPARC T7 appears to be a reality. The chassis and processor clearly exist.

Conclusions:

Firmware being released for SPARC servers is a clear indication of what is here. The M7s are also scheduled for release. If you are building Network Management platforms, this is the time to start planning your hardware acquisition, to get the most "bang for the buck".

Monday, October 19, 2015

Solaris 11.2: Extending ZFS rpool Under Virtualized x86


Abstract

Often when an OS is first installed, resources or redundancy may be required beyond what was originally in scope on a project. Adding more disks with separate file systems was an early solution, but the new storage always sat alongside the original file system, pushing the effort of resolving it onto applications. Virtual file systems were created so additional storage could be added or mounted anywhere in a filesystem tree. Volume managers came later, creating volumes for file systems to sit on top of, with tweaks to the file systems to allow expansion. In the modern world, file systems like ZFS provide all of those capabilities. In a virtualized environment, the underlying disks are no longer even disks and can be extended using shared storage, making file systems like ZFS even more important.

[Solaris Zone/Container Virtualization for Solaris 10+]

Use Cases

This document will discuss use cases where Solaris 11.2 was installed in an x86 environment on top of VMware, and a vSphere administrator extends the virtual disks upon which the ZFS root file system was installed.

Two specific use cases to be evaluated include:
1) A simple Solaris 11.2 x86 installation with a single "rpool" Root Pool where it needs a mirror and was sized too small.
2) A more complex Solaris 11.2 x86 installation with a mirrored "rpool" Root Pool where it was sized too small.

A final Use Case is evaluated, which can be applied after either one of the previous cases:
3) Extend swap space on a ZFS "rpool" Root Pool

The ZFS terminology for a pool growing to fill an extended virtual disk is "autoexpand". For this article, the VMware vSphere virtual disk extension itself is out of scope. This process is expected to work with other hypervisors as well.


[Solaris Logo, courtesy former Sun Microsystems]

Use Case 1: Simple Complexity OS Installation

Problem Background: Single Disk Lacks Redundancy and Capacity

When a simple Solaris 11.2 installation occurs, the OS may reside on a single disk.
sun9999/root# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c2t1d0  ONLINE       0     0     0

errors: No known data errors

sun9999/root#

As the platform becomes more important, additional disk space (beyond the original 230GB) may be required in the root pool as well as additional redundancy (beyond the single disk.)
sun9999/root# zpool list
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  228G   182G  46.4G  79%  1.00x  ONLINE  -

sun9999/root#

Under Solaris, these attributes can be augmented without additional software or reboots.
[Sun Microsystems Logo]

Solution: Add and Extend Virtual Disks

Solaris systems under x86 are increasingly deployed under VMware. Virtual disks may be the original allocation, and these disks can be added, and later even extended, by the hypervisor. It takes some time before Solaris 11 recognizes that a change has been made to the underlying virtual disks and that they can be extended. The disks must be carefully identified before making any changes. Only three of the steps below are strictly required; the rest is verification.

[OCZ solid state hard disk]

Identifying the Disk Candidates

The disks can be identified with "format" command.
sun9999/root# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c2t0d0
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c2t1d0
          /pci@0,0/pci15ad,1976@10/sd@1,0
       2. c2t2d0
          /pci@0,0/pci15ad,1976@10/sd@2,0

Specify disk (enter its number):

The three disks identified above are clearly virtual, but the role of each disk is unclear.

The "zpool status" performed earlier identified Disk "1" as a root pool disk.

The older-style Virtual File System Table will show other disks with older file system types. In the following case, Disk "2" is clearly a UFS filesystem, which cannot be used for root.
sun9999/root# grep c2 /etc/vfstab
/dev/dsk/c2t2d0s0 /dev/rdsk/c2t2d0s0 /u000 ufs 1 yes onerror=umount
This leaves us with Disk "0", to be verified via format, which may be a good candidate for root mirroring.
Specify disk (enter its number): 0
selecting c2t0d0
[disk formatted]
Note: detected additional allowable expansion storage space that can be
added to current SMI label's computed capacity.
Select to adjust the label capacity.
...
format>
Solaris 11.2 has noted that Disk "0" can also be extended.

The "format" command will also verify the other disks.
Specify disk (enter its number): 1
selecting c2t1d0
[disk formatted]
/dev/dsk/c2t1d0s1 is part of active ZFS pool rpool. Please see zpool(1M).

...
format> disk
...

Specify disk (enter its number)[1]: 2
selecting c2t2d0
[disk formatted]
Warning: Current Disk has mounted partitions.
/dev/dsk/c2t2d0s0 is currently mounted on /u000. Please see umount(1M).

format> quit

sun9999/root#

Clearly, no disk other than Disk "0" is available for mirroring the root pool.

[Sun Microsystems Storage Server]
Adding Disk "0" to Root Pool "rpool"

It was already demonstrated that the single "c2t1d0" device is in the "rpool" and the new candidate disk is "c2t0d0". To create a mirror, use "zpool attach" to add the new candidate device to the existing device, and observe progress with "zpool status" until resilvering is completed.
sun9999/root# zpool attach -f rpool c2t1d0 c2t0d0
Make sure to wait until resilver is done before rebooting.
sun9999/root# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function in a degraded state.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Thu Oct 15 17:19:49 2015
    184G scanned
    39.5G resilvered at 135M/s, 21.09% done, 0h18m to go
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t0d0  DEGRADED     0     0     0  (resilvering)

errors: No known data errors
sun9999/root#
The previous resilver suggests future maintenance on the mirror, with similar data, may take roughly 20 minutes.
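That estimate can be sanity-checked with back-of-envelope arithmetic, assuming the scanned size and resilver rate shown in the status output above:

```shell
# Back-of-envelope resilver estimate from the "zpool status" figures above.
size_gb=184     # data scanned
rate_mb_s=135   # reported resilver rate
secs=$(( size_gb * 1024 / rate_mb_s ))
echo "~$(( secs / 60 )) minutes"   # in line with the ~19m the resilver actually took
```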
[Seagate External Hard Disk]

Extending Root Pool "rpool"

Verify there is a known good mirror so the root pool can be extended safely.
sun9999/root# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 184G in 0h19m with 0 errors on Thu Oct 15 17:39:34 2015
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0

errors: No known data errors


sun9999/root#

The newly added "c2t0d0" virtual disk was automatically labeled at its full extended size by zpool.
sun9999/root# prtvtoc -h /dev/dsk/c2t0d0
       0     24    00        256    524288    524543
       1      4    00     524544 1048035039 1048559582
       8     11    00  1048559583     16384 1048575966
sun9999/root# prtvtoc -h /dev/dsk/c2t1d0
       0     24    00        256    524288    524543
       1      4    00     524544 481803999 482328542
       8     11    00  482328543     16384 482344926
sun9999/root#
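The slice 1 sector counts from prtvtoc can be converted to a capacity to confirm the new size, assuming 512-byte sectors (a 64-bit shell is assumed for the arithmetic):

```shell
# Convert the slice 1 sector count (from the prtvtoc output above) to GiB,
# assuming 512-byte sectors.
sectors=1048035039
gib=$(( sectors * 512 / 1024 / 1024 / 1024 ))
echo "~${gib} GiB"
```

This works out to roughly 499 GiB, matching the ~500 GB pool size reported later by "zpool list".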
Next, enable autoexpand on the rpool so it can resize once the "c2t1d0" disk has been extended.
sun9999/root# zpool set autoexpand=on rpool
sun9999/root# zpool get autoexpand rpool
NAME   PROPERTY    VALUE  SOURCE
rpool  autoexpand  on     local

sun9999/root#
Detect the new disk size for the existing "c2t1d0" disk that was resized.
sun9999/root# devfsadm -Cv
...
devfsadm[13903]: verbose: removing file: /dev/rdsk/c2t1d0s14
devfsadm[13903]: verbose: removing file: /dev/rdsk/c2t1d0s15
devfsadm[13903]: verbose: removing file: /dev/rdsk/c2t1d0s8
devfsadm[13903]: verbose: removing file: /dev/rdsk/c2t1d0s9
sun9999/root#
The expansion should now take place, nearly instantaneously.

[Oracle Logo]

Verifying the Root Pool "rpool" Expansion

Note that the original "c2t1d0" disk was extended to match:
sun9999/root# prtvtoc -h /dev/dsk/c2t0d0
       0     24    00        256    524288    524543
       1      4    00     524544 1048035039 1048559582
       8     11    00  1048559583     16384 1048575966

sun9999/root# prtvtoc -h /dev/dsk/c2t1d0
       0     24    00        256    524288    524543
       1      4    00     524544 1048035039 1048559582
       8     11    00  1048559583     16384 1048575966


sun9999/root#
The disk space is now extended to ~500 GB.
sun9999/root# zpool list
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  498G   184G  314G  37%  1.00x  ONLINE  -

sun9999/root#
And it is not a bad time to scrub the new disks to ensure there are no errors; it will take about an hour.

sun9999/root# zpool scrub rpool
sun9999/root# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 1h3m with 0 errors on Thu Oct 15 19:58:09 2015
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0

errors: No known data errors
sun9999/root#

The Solaris installation on the ZFS Root Pool "rpool" is healthy.

[Oracle Servers]

Use Case 2: Medium Complexity OS Installation

Problem:  Mirrored Disks Lacks Capacity

The previous section was extremely detailed; this section will be briefer. Like the previous section, there is a lack of capacity in the root pool. Unlike the previous section, this pool is already mirrored.

Solution: Extend Mirrored Root Pool "rpool"

The following use case merely extends the Solaris 11 Root Pool "rpool" after the VMware administrator has already increased the size of the root virtual disks. Note that only two of the steps below are strictly required.

Extend Root Pool "rpool"

The following steps take only seconds to run.

sun9998/root# zpool list
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  228G   179G  48.9G  78%  1.00x  ONLINE  -


sun9998/root# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 99.1G in 0h11m with 0 errors on Tue Apr  7 15:48:39 2015
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0

errors: No known data errors


sun9998/root# echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c2t0d0
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c2t2d0
          /pci@0,0/pci15ad,1976@10/sd@2,0
       2. c2t3d0
          /pci@0,0/pci15ad,1976@10/sd@3,0
Specify disk (enter its number): Specify disk (enter its number):

sun9998/root# zpool set autoexpand=on rpool
sun9998/root# zpool get autoexpand rpool
NAME   PROPERTY    VALUE  SOURCE
rpool  autoexpand  on     local


sun9998/root# devfsadm -Cv
devfsadm[7155]: verbose: removing file: /dev/dsk/c2t0d0s10
devfsadm[7155]: verbose: removing file: /dev/dsk/c2t0d0s11
...

devfsadm[7155]: verbose: removing file: /dev/rdsk/c2t3d0s8
devfsadm[7155]: verbose: removing file: /dev/rdsk/c2t3d0s9

sun9998/root# zpool list
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  498G   179G  319G  35%  1.00x  ONLINE  -


sun9998/root#

And, the effort is done, as fast as you can type the commands.

[Sun Microsystems Flash Module]

Verify Root Pool "rpool"

The following verification is for the paranoid: the scrub is kicked off in the background, performance is monitored for about 20 seconds on 2-second polls, and the verification may take about 1-5 hours (depending on how busy the system or I/O subsystem is).

sun9998/root# zpool scrub rpool

sun9998/root# zpool iostat rpool 2 10
          capacity     operations    bandwidth
pool   alloc   free   read  write   read  write
-----  -----  -----  -----  -----  -----  -----
rpool   179G   319G     11    111  1.13M  2.55M
rpool   179G   319G    121      5  5.58M  38.0K
rpool   179G   319G    103    189  6.15M  2.53M
rpool   179G   319G    161      8  4.60M   118K
rpool   179G   319G     82      3  10.3M  16.0K
rpool   179G   319G    199    113  6.38M  1.56M
rpool   179G   319G     31      5  1.57M  38.0K
rpool   179G   319G    117      3  9.64M  18.0K
rpool   179G   319G     30     96  2.28M  1.74M
rpool   179G   319G     24      4  3.12M  36.0K

sun9998/root# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 4h32m with 0 errors on Fri Oct 16 00:42:28 2015
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0

errors: No known data errors
sun9998/root#
Solaris installation and ZFS Root Pool "rpool" is healthy.

Use Case 3: Add Swap in a ZFS "rpool" Root Pool

Problem: Swap Space Lacking

After more disk space is added to the ZFS "rpool" Root Pool, it may be desirable to extend the swap space. This must be done as a separate operation, after the "rpool" has already been extended.

Solution: Add Swap to ZFS and the Virtual File System Table

The user community determines they need to increase swap from 12 GB to 20 GB, but they cannot afford a reboot. Two steps are required:
1) add swap space
2) make swap space permanent
First, existing swap space must be understood.

Review Swap Space

Swap space can be reviewed for reservation, activation, and persistence with "swap", "zfs", and "grep".
sun9999/root# zfs list rpool/swap
NAME         USED  AVAIL  REFER  MOUNTPOINT
rpool/swap  12.4G   306G  12.0G  -


sun9999/root# swap -l -h
swapfile                 dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap 279,1     4K      12G      12G


sun9999/root# grep swap /etc/vfstab
swap                      -  /tmp    tmpfs  - yes     -
/dev/zvol/dsk/rpool/swap  -  -       swap   - no      -


sun9999/root# 
Note, the "zfs list" above will only work with a single swap dataset. When adding a second swap dataset, a different methodology must be used.

Swap Space Dataset Creation

To add swap space to the existing root pool, without a reboot, requires adding another dataset. To increase from 12 GB to 20 GB, the additional dataset should be 8 GB. This takes a split second.
sun9999/root# zfs create -V 8G rpool/swap2
sun9999/root# 
Swap dataset is now ready to be manually activated.
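The sizing arithmetic behind the 8 GB figure can be sketched trivially, using the target and current sizes from the scenario above:

```shell
# Target swap minus existing swap gives the size of the additional zvol.
target_gb=20
current_gb=12
add_gb=$(( target_gb - current_gb ))
echo "zfs create -V ${add_gb}G rpool/swap2"
```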

Swap Space Activation


The swap space is activated using the "swap" command. This takes a split second.
sun9999/root# swap -a /dev/zvol/dsk/rpool/swap2

sun9999/root# swap -l -h
swapfile                    dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap  279,1        4K      12G      12G
/dev/zvol/dsk/rpool/swap2 279,3        4K     8.0G     8.0G

sun9999/root#
This swap space is only temporary, until the next reboot.

Swap Space Persistence

To make the swap space persistent across reboots, it must be added to the Virtual File System Table.
sun9999/root# cp -p /etc/vfstab /etc/vfstab.2015_10_16_dh
sun9999/root# vi /etc/vfstab

(add the following line)
/dev/zvol/dsk/rpool/swap2  -  -       swap   - no      -
sun9999/root#
The added swap space will now be activated automatically upon the next reboot.

Swap Space Validation

Commands to verify: zfs swap datasets, active swap datasets, and persistent datasets
sun9999/root# zfs list | grep swap
rpool/swap                         12.4G   298G  12.0G  -
rpool/swap2                        8.25G   297G  8.00G  -


sun9999/root# swap -l -h
swapfile                    dev    swaplo   blocks     free 
/dev/zvol/dsk/rpool/swap  279,1        4K      12G      12G
/dev/zvol/dsk/rpool/swap2 279,3        4K     8.0G     8.0G


sun9999/root# grep swap /etc/vfstab
swap                       -   /tmp  tmpfs  -  yes     -
/dev/zvol/dsk/rpool/swap   -   -     swap   -  no      -
/dev/zvol/dsk/rpool/swap2  -   -     swap   -  no      -


sun9999/root#
Note, the zfs list command now uses a "grep", to capture multiple datasets.
A total of [12G + 8G =] 20GB is now available in swap.

Conclusions

Most of the above document is fluff, filled with paranoia: checking important items multiple times to ensure no data loss. Very few commands are required to perform the mirroring and root pool extension. Solaris provides a seamless methodology at the OS level for activities which are often painful under other operating systems or require additional 3rd-party software.