Thursday, October 22, 2015

SPARC: Oracle Linux Coming Soon!

[SPARC International Logo, Courtesy SPARC International]
Abstract:
Linux has been available on SPARC for some time. Ubuntu had committed to supporting UltraSPARC T systems. Fujitsu offered Linux on the SPARC systems in their MPP-based clusters. China had offered Linux for small SPARC-based controllers. Now Oracle is getting into the business of releasing Linux for its own SPARC systems.

[Oracle Corporation Logo, Courtesy Oracle]

Source: Job Posting:
Oracle made a public job posting, "Course/Curriculum Dev 4-Training", foreshadowing an upcoming product release:
Oracle VM Server for SPARC is highly efficient, enterprise-class virtualization enabling the creation of 128 virtual servers on one system leveraging Oracle's SPARC servers... The change here is to remove any mention of "Solaris"... This product will also be available on Linux going forward so Linux or Solaris are equally valid. 
Documentation of a training class for a product is a pretty reliable source for a new product release.

[SPARC M7 Die, Courtesy The Register]

Not the Only Source
Larry Ellison, currently the CTO of Oracle, had announced that Oracle Enterprise Linux was coming to SPARC back in 2010, around the time of Oracle's acquisition of Sun Microsystems.
"We think Sparc will become clearly the best chip for running Oracle software. At that point we'd be nuts not to move Oracle Enterprise Linux there. We're a ways away, but I think that's definitely going to happen," Ellison said. It's likely to happen in "the T4, T5 timeframe."
The SPARC T4 & T5 processors are currently being sold. More SPARC processors are coming...

[San Francisco California, courtesy Oracle Corporation]
Reading the Tea Leaves
The T5's are about to be supplanted by the pending SPARC M7 and SPARC T7 releases. Oracle OpenWorld is about to occur. This seems like the right timing for a product announcement or release... "get your new SPARC processors with Oracle Linux or Solaris" could be a great marketing campaign!

Conclusion:
If your company has been holding out for a large vendor to support Linux under SPARC, this may be your opportunity. This could also foretell the decline of Intel within Oracle Engineered Systems. The bundling of Linux with lower-cost SPARC could be the beginning of Oracle re-entering the HPC market.

Monday, October 19, 2015

Solaris 11.2: Extending ZFS rpool Under Virtualized x86


Abstract

Often, after an OS is first installed, resources or redundancy are required beyond what was originally in scope on a project. Adding disks by adding file systems was an early solution, but the new file systems always sat alongside the original one, pushing the effort of using them onto applications. Virtual file systems were created so additional storage could be added or mounted anywhere in a file system tree. Volume managers were later created, providing volumes which file systems could sit on top of, with tweaks to the file systems to allow expansion. In the modern world, file systems like ZFS provide all of those capabilities. In a virtualized environment, the underlying disks are no longer even disks, and can be extended using shared storage, making file systems like ZFS even more important.

[Solaris Zone/Container Virtualization for Solaris 10+]

Use Cases

This document discusses use cases where Solaris 11.2 was installed on x86 on top of VMware, and a vSphere administrator extends the virtual disks upon which the ZFS root file system was installed.

Two specific use cases to be evaluated include:
1) A simple Solaris 11.2 x86 installation with a single-disk "rpool" Root Pool, which needs a mirror and was sized too small.
2) A more complex Solaris 11.2 x86 installation with a mirrored "rpool" Root Pool, which was sized too small.

A final Use Case is evaluated, which can be applied after either one of the previous cases:
3) Extend swap space on a ZFS "rpool" Root Pool

The relevant ZFS terminology is "autoexpand": the ZFS pool grows to fill the extended virtual disk. For this article, the VMware vSphere virtual disk extension itself is out of scope. This process is expected to work with other hypervisors as well.
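As a quick orientation, the pool property involved can be inspected before any changes are made. A minimal sketch, using the hostname and pool from the walkthrough below ("off" is the ZFS default):
sun9999/root# zpool get autoexpand rpool
NAME   PROPERTY    VALUE  SOURCE
rpool  autoexpand  off    default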


[Solaris Logo, courtesy former Sun Microsystems]

Use Case 1: Simple Complexity OS Installation

Problem Background: Single Disk Lacks Redundancy and Capacity

When a simple Solaris 11.2 installation occurs, the installation may occupy a single disk.
sun9999/root# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c2t1d0  ONLINE       0     0     0

errors: No known data errors

sun9999/root#

As the platform becomes more important, additional disk space (beyond the original 230GB) may be required in the root pool, as well as additional redundancy (beyond the single disk).
sun9999/root# zpool list
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  228G   182G  46.4G  79%  1.00x  ONLINE  -

sun9999/root#

Under Solaris, these attributes can be augmented without additional software or reboots.
[Sun Microsystems Logo]

Solution: Add and Extend Virtual Disks

Solaris systems on x86 are increasingly deployed under VMware. Virtual disks may be the original allocation, and these disks can be added and later even extended by the hypervisor. It takes a few steps before Solaris 11 recognizes that a change was made against the underlying virtual disks and the pool can be extended. The disks must be carefully identified before making any changes. Only the three steps summarized below are actually required.
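For reference, here are the three required steps, pulled from the walkthrough that follows:
sun9999/root# zpool attach -f rpool c2t1d0 c2t0d0   # mirror the root pool
sun9999/root# zpool set autoexpand=on rpool         # allow the pool to grow
sun9999/root# devfsadm -Cv                          # re-scan devices after the virtual disk extend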

[OCZ solid state hard disk]

Identifying the Disk Candidates

The disks can be identified with the "format" command.
sun9999/root# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c2t0d0
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c2t1d0
          /pci@0,0/pci15ad,1976@10/sd@1,0
       2. c2t2d0
          /pci@0,0/pci15ad,1976@10/sd@2,0

Specify disk (enter its number):

The 3x disks identified above are clearly virtual, but the role of each disk is unclear.

The "zpool status" performed earlier identified Disk "1" as a root pool disk.

The older-style Virtual File System Table will show other disks with older file system types. In the following case, Disk "2" is clearly a UFS file system, which cannot be used for a ZFS root.
sun9999/root# grep c2 /etc/vfstab
/dev/dsk/c2t2d0s0 /dev/rdsk/c2t2d0s0 /u000 ufs 1 yes onerror=umount
This leaves Disk "0", to be verified via format, which may be a good candidate for root mirroring.
Specify disk (enter its number): 0
selecting c2t0d0
[disk formatted]
Note: detected additional allowable expansion storage space that can be
added to current SMI label's computed capacity.
Select <partition expand> to adjust the label capacity.
...
format>
Solaris 11.2 has noted that Disk "0" can also be extended.

The "format" command will also verify the other sliced.
Specify disk (enter its number): 1
selecting c2t1d0
[disk formatted]
/dev/dsk/c2t1d0s1 is part of active ZFS pool rpool. Please see zpool(1M).

...
format> disk
...

Specify disk (enter its number)[1]: 2
selecting c2t2d0
[disk formatted]
Warning: Current Disk has mounted partitions.
/dev/dsk/c2t2d0s0 is currently mounted on /u000. Please see umount(1M).

format> quit

sun9999/root#

Clearly, Disk "0" is the only disk available for mirroring the root pool.

[Sun Microsystems Storage Server]
Adding Disk "0" to Root Pool "rpool"

It was already demonstrated that the single "c2t1d0" device is in the "rpool" and that the new disk candidate is "c2t0d0". To create a mirror, use "zpool attach" to add the new candidate device to the existing device, then observe progress with "zpool status" until resilvering is completed.
sun9999/root# zpool attach -f rpool c2t1d0 c2t0d0
Make sure to wait until resilver is done before rebooting.
sun9999/root# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function in a degraded state.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Thu Oct 15 17:19:49 2015
    184G scanned
    39.5G resilvered at 135M/s, 21.09% done, 0h18m to go
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t0d0  DEGRADED     0     0     0  (resilvering)

errors: No known data errors
sun9999/root#
The previous resilver suggests that future maintenance on the mirror, with similar data, may take ~20 minutes.
[Seagate External Hard Disk]

Extending Root Pool "rpool"

Verify there is a known good mirror so the root pool can be extended safely.
sun9999/root# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 184G in 0h19m with 0 errors on Thu Oct 15 17:39:34 2015
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0

errors: No known data errors


sun9999/root#

The newly attached "c2t0d0" virtual disk was automatically labeled at its full, extended size, while the original "c2t1d0" still carries the smaller label.
sun9999/root# prtvtoc -h /dev/dsk/c2t0d0
       0     24    00        256    524288    524543
       1      4    00     524544 1048035039 1048559582
       8     11    00  1048559583     16384 1048575966
sun9999/root# prtvtoc -h /dev/dsk/c2t1d0
       0     24    00        256    524288    524543
       1      4    00     524544 481803999 482328542
       8     11    00  482328543     16384 482344926
sun9999/root#
Next, enable autoexpand (or extend) on "rpool", so the pool resizes once the "c2t1d0" disk has been re-labeled.
sun9999/root# zpool set autoexpand=on rpool
sun9999/root# zpool get autoexpand rpool
NAME   PROPERTY    VALUE  SOURCE
rpool  autoexpand  on     local

sun9999/root#
Detect the new disk size for the existing "c2t1d0" disk that was resized.
sun9999/root# devfsadm -Cv
...
devfsadm[13903]: verbose: removing file: /dev/rdsk/c2t1d0s14
devfsadm[13903]: verbose: removing file: /dev/rdsk/c2t1d0s15
devfsadm[13903]: verbose: removing file: /dev/rdsk/c2t1d0s8
devfsadm[13903]: verbose: removing file: /dev/rdsk/c2t1d0s9
sun9999/root#
The expansion should now take place nearly instantaneously.

[Oracle Logo]

Verifying the Root Pool "rpool" Expansion

Note the original "c2t1d0" disk was extended.
sun9999/root# prtvtoc -h /dev/dsk/c2t0d0
       0     24    00        256    524288    524543
       1      4    00     524544 1048035039 1048559582
       8     11    00  1048559583     16384 1048575966

sun9999/root# prtvtoc -h /dev/dsk/c2t1d0
       0     24    00        256    524288    524543
       1      4    00     524544 1048035039 1048559582
       8     11    00  1048559583     16384 1048575966


sun9999/root#
The disk space is now extended to 500GB.
sun9999/root# zpool list
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  498G   184G  314G  37%  1.00x  ONLINE  -

sun9999/root#
And it is not a bad time to scrub the new disks, to ensure there are no errors; the scrub will take about an hour.

sun9999/root# zpool scrub rpool
sun9999/root# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 1h3m with 0 errors on Thu Oct 15 19:58:09 2015
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0

errors: No known data errors
sun9999/root#

The Solaris installation on the ZFS Root Pool "rpool" is healthy.

[Oracle Servers]

Use Case 2: Medium Complexity OS Installation

Problem: Mirrored Disks Lack Capacity

The previous section was extremely detailed; this section will be briefer. Like the previous section, there is a lack of capacity in the root pool. Unlike the previous section, this pool is already mirrored.

Solution: Extend Mirrored Root Pool "rpool"

The following use case merely extends the Solaris 11 Root Pool "rpool" after the VMware Administrator has already increased the size of the root virtual disks. Note, only the two steps summarized below are required.
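For reference, here are the two required steps, pulled from the session that follows:
sun9998/root# zpool set autoexpand=on rpool   # allow the pool to grow
sun9998/root# devfsadm -Cv                    # re-scan devices after the virtual disk extend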

Extend Root Pool "rpool"

The following steps take only seconds to run.

sun9998/root# zpool list
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  228G   179G  48.9G  78%  1.00x  ONLINE  -


sun9998/root# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 99.1G in 0h11m with 0 errors on Tue Apr  7 15:48:39 2015
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0

errors: No known data errors


sun9998/root# echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c2t0d0
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c2t2d0
          /pci@0,0/pci15ad,1976@10/sd@2,0
       2. c2t3d0
          /pci@0,0/pci15ad,1976@10/sd@3,0
Specify disk (enter its number): Specify disk (enter its number):

sun9998/root# zpool set autoexpand=on rpool
sun9998/root# zpool get autoexpand rpool
NAME   PROPERTY    VALUE  SOURCE
rpool  autoexpand  on     local


sun9998/root# devfsadm -Cv
devfsadm[7155]: verbose: removing file: /dev/dsk/c2t0d0s10
devfsadm[7155]: verbose: removing file: /dev/dsk/c2t0d0s11
...

devfsadm[7155]: verbose: removing file: /dev/rdsk/c2t3d0s8
devfsadm[7155]: verbose: removing file: /dev/rdsk/c2t3d0s9

sun9998/root# zpool list
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  498G   179G  319G  35%  1.00x  ONLINE  -


sun9998/root#

And the effort is done, as fast as you can type the commands.

[Sun Microsystems Flash Module]

Verify Root Pool "rpool"

The following verification is for the paranoid: the scrub will be kicked off in the background, performance will be monitored for about 20 seconds on 2-second polls, and the verification may take about 1-5 hours (depending on how busy the system or I/O subsystem is).

sun9998/root# zpool scrub rpool

sun9998/root# zpool iostat rpool 2 10
          capacity     operations    bandwidth
pool   alloc   free   read  write   read  write
-----  -----  -----  -----  -----  -----  -----
rpool   179G   319G     11    111  1.13M  2.55M
rpool   179G   319G    121      5  5.58M  38.0K
rpool   179G   319G    103    189  6.15M  2.53M
rpool   179G   319G    161      8  4.60M   118K
rpool   179G   319G     82      3  10.3M  16.0K
rpool   179G   319G    199    113  6.38M  1.56M
rpool   179G   319G     31      5  1.57M  38.0K
rpool   179G   319G    117      3  9.64M  18.0K
rpool   179G   319G     30     96  2.28M  1.74M
rpool   179G   319G     24      4  3.12M  36.0K

sun9998/root# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 4h32m with 0 errors on Fri Oct 16 00:42:28 2015
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0

errors: No known data errors
sun9998/root#
The Solaris installation and ZFS Root Pool "rpool" are healthy.

Use Case 3: Add Swap in a ZFS "rpool" Root Pool

Problem: Swap Space Lacking

After more disk space is added to the ZFS "rpool" Root Pool, it may be desirable to extend the swap space. This must be done as a separate operation, after the "rpool" is already extended.

Solution: Add Swap to ZFS and the Virtual File System Table

The user community determines they need to increase swap from 12 GB to 20 GB, but they cannot afford a reboot. There are 2 steps required (previewed in the sketch after this list):
1) add swap space
2) make swap space permanent
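In command form, the two steps amount to the following; the dataset name "rpool/swap2" and the 8 GB size come from the sections below:
sun9999/root# zfs create -V 8G rpool/swap2        # add swap space
sun9999/root# swap -a /dev/zvol/dsk/rpool/swap2   # activate it immediately
A vfstab entry, shown later, makes the addition permanent.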
First, existing swap space must be understood.

Review Swap Space

Swap space can be reviewed for reservation, activation, and persistence with "swap", "zfs", and "grep".
sun9999/root# zfs list rpool/swap
NAME         USED  AVAIL  REFER  MOUNTPOINT
rpool/swap  12.4G   306G  12.0G  -


sun9999/root# swap -l -h
swapfile                 dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap 279,1     4K      12G      12G


sun9999/root# grep swap /etc/vfstab
swap                      -  /tmp    tmpfs  - yes     -
/dev/zvol/dsk/rpool/swap  -  -       swap   - no      -


sun9999/root# 
Note, the "zfs list" above will only work with a single swap dataset. When adding a second swap dataset, a different methodology must be used.

Swap Space Dataset Creation

Adding swap space to the existing root pool, without a reboot, requires adding another dataset. To increase from 12 GB to 20 GB, the additional dataset should be 8 GB. This takes a split second.
sun9999/root# zfs create -V 8G rpool/swap2
sun9999/root# 
Swap dataset is now ready to be manually activated.

Swap Space Activation


The swap space is activated using the "swap" command. This takes a split second.
sun9999/root# swap -a /dev/zvol/dsk/rpool/swap2

sun9999/root# swap -l -h
swapfile                    dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap  279,1        4K      12G      12G
/dev/zvol/dsk/rpool/swap2 279,3        4K     8.0G     8.0G

sun9999/root#
This swap space is only temporary, until the next reboot.

Swap Space Persistence

To make the swap space persistent after a reboot, it must be added to the Virtual File System Table.
sun9999/root# cp -p /etc/vfstab /etc/vfstab.2015_10_16_dh
sun9999/root# vi /etc/vfstab

(add the following line)
/dev/zvol/dsk/rpool/swap2  -  -       swap   - no      -
sun9999/root#
The added swap space will now be activated automatically upon the next reboot.
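For scripted environments, the same entry can be appended without an interactive editor; a sketch, assuming the backup above was already taken:
sun9999/root# printf '/dev/zvol/dsk/rpool/swap2\t-\t-\tswap\t-\tno\t-\n' >> /etc/vfstab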

Swap Space Validation

Commands to verify: zfs swap datasets, active swap datasets, and persistent datasets
sun9999/root# zfs list | grep swap
rpool/swap                         12.4G   298G  12.0G  -
rpool/swap2                        8.25G   297G  8.00G  -


sun9999/root# swap -l -h
swapfile                    dev    swaplo   blocks     free 
/dev/zvol/dsk/rpool/swap  279,1        4K      12G      12G
/dev/zvol/dsk/rpool/swap2 279,3        4K     8.0G     8.0G


sun9999/root# grep swap /etc/vfstab
swap                       -   /tmp  tmpfs  -  yes     -
/dev/zvol/dsk/rpool/swap   -   -     swap   -  no      -
/dev/zvol/dsk/rpool/swap2  -   -     swap   -  no      -


sun9999/root#
Note, the zfs list command now uses a "grep", to capture multiple datasets.
A total of [12G + 8G =] 20GB is now available in swap.

Conclusions

Most of the above document is fluff, filled with paranoia, checking important items multiple times to ensure no data loss. Very few commands are required to perform the mirroring and root pool extension. Solaris provides a seamless methodology at the OS level to perform activities which are often painful under other operating systems, or which require additional 3rd-party software.

Tuesday, October 6, 2015

Flash? The End of Disk?

[Growing capacity shipped, courtesy The Register]


A short article in The Register mentions a topic which is not very popular in an industry driven by innovation. Silicon to retain storage is growing by leaps and bounds, but will it overtake Disk?
Samsung expects the NAND flash industry to have capacity to produce up to 253 exabytes of total storage capacity by 2020, essentially "an impressive 3x increase relative to the current industry capacity".
The article points out that this is expected to account for less than 10 per cent of the total storage capacity the industry will need by 2020.
There is a tall mountain to climb... if there are capable climbers, there must be enough rope in order to climb a mountain!

Not So Fast...

It seems there may not be enough rope to climb this mountain, yet. Disk will be around for a longer time than some expect. Sometimes, the factors of technology are impacted by economics.
If every 10,000PB of NAND capacity costs $20bn, then to catch up with HDD capacity shipped in 2019, the flash industry would have to spend $2tn. We don't think it is going to happen unless flash capacity $/GB leaving the foundry is sustainably lower than that of disk.
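Working that arithmetic through: $2tn ÷ $20bn per 10,000PB is roughly 100 such increments, i.e. about 1,000,000PB (one zettabyte) of HDD capacity shipped in 2019 that flash would have to match.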
The cost of manufacturing disks is less than the cost of manufacturing silicon, and that investment must be accounted for through the money supply, as well as by the people who are purchasing the products.


Monday, October 5, 2015

Happy Birthday Seymour Cray!

[Cray Super Computer rendering - Courtesy: The Register]


Had you been around, this would be your 90th Birthday...
[Cray CS6400 - Courtesy CrayWiki]

Thank you for the Super Computer, and for the massive CS6400 SPARC system, code-named the SuperDragon while in development...
[Sun Microsystems E10K, courtesy Wikipedia]

The Business Systems Division of Cray Research was sold to Sun Microsystems and formed the core of the ground-breaking E10000, code-named StarFire - a massive 64-processor SPARC-based Symmetric Multiprocessor Platform [SMP] system.

Systems like these were data-centers in a cabinet... data-centers in a single OS image... or a single chassis broken up into multiple OS images... depending on the required processing needs. They were not unlike what Oracle sells today in its high-end systems, as Oracle continues Sun Microsystems, who continued Cray's vision for SMP SPARC systems.

Friday, September 25, 2015

IPv4: North American Addresses Exhausted

[IPv4 and IPv6: The 4 Corners of the World,  courtesy Center for Applied Internet Data Analysis]


Abstract:

The TCP/IP Internet was created around 1981, where each participant would get an address out of a total of around 4 billion (2^32 = 4,294,967,296). This technical limitation came from 32-bit addresses, chosen during a time when people were using 8-bit computing. Internet usage is pervasive today, with items such as cell phones and light bulbs being attached, and it was just a matter of time before the pool of addresses was exhausted. Another benchmark was hit today.

Gwangju Illustration in South Korea

A simple way to view The Internet is as an apartment complex. Each building may be a different continent, and each apartment has an address. When someone wants to live in that complex, there is a limited number of apartments in each building. In the beginning, anyone can live anywhere, rent is cheap, large blocks of apartments are available for friends to rent together, and life is good. As time goes on, space fills up, and you have to wait until someone leaves or dies to get an address. If the population is ever increasing, there is a problem... people start to double-up or triple-up in the apartments, all sharing a single address, but perhaps adding an "a" or a "b" to the end of the number.

[NAT illustration]

Mitigation Using the Illustration

When IP Addresses on The Internet started getting "tight", providers started to make devices share a single address at each location they served. While this sharing solution is not optimal, it is what happens every day when people have multiple computers, televisions, tablets, etc. at their homes... the home gets a single IP Address on The Internet, and all the devices share that address through a technology called Network Address Translation (NAT) running on an Internet Router/Firewall. This delayed the problem for many years, since tens of thousands of connections could share a single IP Address on the Internet, behind an Internet Gateway Router/Firewall running NAT.
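As a sketch of what the gateway does (addresses are illustrative: 192.168.x.x is an RFC 1918 private range, and the public addresses are from the RFC 5737 documentation ranges):
home device 192.168.1.10:50000 --> NAT router rewrites to 203.0.113.5:62001 --> web server 198.51.100.80:443
The router remembers each mapping, so return traffic finds its way back to the correct device.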

Trouble with NAT: Mitigation is Not Solution

The problem is, not all devices connected to the Internet using NAT can talk directly to other devices using NAT, without going through a system on The Internet that has a real IP Address. Devices using NAT must communicate through a well-known server in The Internet "cloud", so applications became more limited in their frameworks. Furthermore, identification of an end-point on The Internet becomes more difficult: one really does not know who is behind a public IP address, since it could be shared by dozens or thousands of devices, potentially anywhere in the world! When trying to manage devices on The Internet, a dedicated IP Address is always preferable for troubleshooting; otherwise, a physical presence may be needed to investigate a problem. Some secure management protocols break with NAT, since the source or destination address differs from what it started as, and the packet must be modified along the way, which raises security concerns. For everyday people, NAT is a solution, but not without drawbacks. Public IP Addresses continue to be eaten away.

[Warning sign from Wikimedia]

The Warning:

In July of 2015, the American Registry for Internet Numbers ran out of larger blocks of addresses to provide. If you needed a presence on The Internet (i.e. as an Internet Service Provider, Web Hosting company, Banking Institution deploying ATMs, etc.) and had a large project, you could only get a small number of addresses in North & Central America.

[Empty bottles courtesy The Register]

Running Dry:

As of today in September 2015, North America has officially run out of addresses. North America was not the first region to run dry of IP Addresses, leaving large numbers of devices needing to participate on the Internet high-and-dry. The Caribbean and Latin America ran out of addresses in 2014. Europe and the Middle East ran out in 2012. Asia-Pacific ran out in 2011. Only Africa still has addresses left, projected to be exhausted in 2019 at the current rate of consumption.

[Structure of IPv4 and IPv6 Packets]

The Solution:

In a world where computers, and even cell phones, are 64-bit, using a 32-bit number to define addresses for communication over The Internet is antiquated. This original address size was part of the Internet Protocol, version 4 (IPv4) definition. Over a decade ago, a newer address format was created, called IPv6, using 128-bit addresses. Movement to IPv6 is the ultimate solution: there are enough addresses in a 128-bit number for a very long time. Various governments in Asia, such as Hong Kong and Japan, being the first to run out, already started the push to IPv6. Providers in Europe, like British Telecom, started the push to IPv6. Internet Service Providers, like Comcast, are deploying IPv6 in the United States.
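To make the difference concrete (both example addresses come from the reserved documentation ranges):
IPv4: 192.0.2.1 is a 32-bit address, one of 2^32 (about 4.3 billion) possible.
IPv6: 2001:db8::1 is a 128-bit address, one of 2^128 (about 3.4 x 10^38) possible.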

The Conclusion:

As providers move to IPv6, this delays the fate of companies bound to IPv4, since they may receive recycled addresses, or can purchase formerly assigned addresses from providers who have already moved infrastructure to IPv6. Solution providers moving to IPv6 will gain the benefit of peer-to-peer communication over the Internet for their applications, while legacy IPv4 solution providers will incur greater costs by having to go through a central bottleneck in The Internet "cloud". If there is ever a point in time where innovation and crisis meet - this is that opportunity, don't miss it!

Thursday, September 17, 2015

XSCF: Domain Service Processor Communication Protocol

There appears to be another internal communication channel that can be made available, called the "Domain Service Processor Communication Protocol" (or DSCP), which can give you an IP Address for the service processor, accessible from a Physical Domain.

With DSCP, a console to the local service processor can be conveniently made available from the OS.
 

Configuring the Service Processor

The Service Processor can be attached to from a Serial Console using 9600 baud, 8 data bits, no parity, 1 stop bit.

The Service Processor can also be attached via a TCP/IP network cable. An article on configuring a network connection on the M4000/M5000 SP is as follows:
  • http://xteams.oit.ncsu.edu/iso/m_xscf

The Service Processor can provide access through Web or CLI. The CLI is called XSCF.

 

XSCF Reference Guides

The Extended System Control Facility (XSCF) is fairly user friendly.
The Extended System Control Facility (XSCF) has various guides available and can get quite extensive.
The XSCF on the SPARC Enterprise Servers can be accessed over an internal communication channel.

DSCP Usage

The "Domain Service Processor Communication Protocol" (or DSCP) has been around for quite some time, dating back to older large SPARC systems prior the M-Series. DSCP allows for the use of TCP/IP over an internal communications channel, without the requirement of physical LAN cables.
 An example page on configuring the DSCP.

There are multiple ways to configure the IP Addresses for the DSCP.
If the DSCP is changed, it will require a reboot of the Service Processor and Domain.
That was more than enough information to start configuration.

Configuring DSCP

The DSCP is not configured on this platform:
sun9999/root# /usr/platform/SUNW,SPARC-Enterprise/sbin/prtdscp            
ERROR: SP Address lookup failed. Aborting.                                 

To configure an M4000 with 2x domains, use some private, non-routable IP addresses:
XSCF> setdscp -i 10.0.0.0 -m 255.255.255.0                                 
Commit these changes to the database? [y|n] : y                            
                                                                           
XSCF> showdscp                                                         
DSCP Configuration:                                                    
Network: 10.0.0.0                                                      
Netmask: 255.255.255.0                                                 
Location     Address                                                   
----------   ---------                                                 
XSCF         10.0.0.1                                                  
Domain #00   10.0.0.2                                                  
Domain #01   10.0.0.3                                                  

To Enable:
  • The Service Processor may require a reboot (see Fujitsu Reference Guide page 173.)
  • The Physical Domains may require a reboot, in order to communicate with the SP.
 They should be ready to communicate.
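A sketch of those reboots ("rebootxscf" is the documented XSCF command for restarting the service processor, and "init 6" is one conventional way to reboot the domain):
XSCF> rebootxscf
sun9999/root# init 6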

Communicating with the Service Processor

Once this is done, you may be able to “reach in & out” of the service processor using TCP/IP… to list the addresses:
sun9999/root# /usr/platform/`uname -i`/sbin/prtdscp                        
Domain Address:      10.0.0.2                                              
SP Address:          10.0.0.1                                              

After this configuration is done, you should be able to get into the XSCF from Solaris:
sun9999/root # ssh `prtdscp -s`                                            
or
sun9999/root # telnet `prtdscp -s`                                         

From there, you may be able to log into the XSCF from the sun9999 Solaris OS, and retrieve the flash image using an FTP daemon hosted on sun9999.
XSCF> ping 10.0.0.2                                                       
XSCF> getflashimage -u root ftp://10.0.0.2/home/sm250241/FFXCP1113.tar.gz 

This procedure above was not tested in a lab, just researched for someone in need.