Showing posts with label iSCSI.

Monday, February 11, 2013

Oracle: Solaris 10 Update 11 Released!


Abstract:
Solaris 10 was launched in 2005 with ground-breaking features such as DTrace, SMF (Services), Zones, LDoms, and later ZFS. The latest, and perhaps last, update of Solaris 10 was expected in 2012, to coincide with an early release of the SPARC T5. In 2013, Oracle released yet another update, suggesting the T5 is close to release. The latest installment, referred to as the 01/13 release (for January 2013), appears to be the final SVR4-based Solaris release, with normal Oracle support expected to extend to 2018. Many serious administrators will refer to this release as Solaris 10 Update 11.

(Oracle SPARC & Solaris Road Map, 2013-02-11)

What's New?
Oracle released the "Oracle Solaris 10 1/13 What's New" document, outlining some of the included features. The arrangement of the categories seems odd in some cases, so a few were merged or re-ordered below. Some of the interesting features include:


(Solaris 10 Update 11 Network File System Install Media Option)

(Solaris 10 Update 11 SVR4 Package Dependency Install Support)
  • Administration Enhancements
    OCM (Oracle Configuration Manager) Client Service
    Oracle Zones Pre-Flight Checker
    SVR4 pkgdep (Package Depends) Command
    Intel x86 FMA (Fault Management Architecture) Sandy Bridge EP Enhancements
    AMD MCA (Machine Check Architecture) Support for Family 15h, 0Fh, 10h
# zfs help                                                              
The following commands are supported:                                   
allow       clone       create      destroy     diff        get         
groupspace  help        hold        holds       inherit     list        
mount       promote     receive     release     rename      rollback    
send        set         share       snapshot    unallow     unmount     
unshare     upgrade     userspace                                  
(Solaris 10 Update 11 zfs help system enhancements)
# zpool help                                                            
The following commands are supported:                                   
add      attach   clear    create   destroy  detach   export   get      
help     history  import   iostat   list     offline  online   remove   
replace  scrub    set      split    status   upgrade                    
(Solaris 10 Update 11 zpool help system enhancements)
  • ZFS File System and Storage Enhancements
    Help tiered into sub-commands for: zfs, zpool
    ZFS aclmode enhancements
    ZFS diff enhancements
    ZFS snap alias for snapshot
    Intel x86 SATA (Serial ATA) support for ATA Pass-Through Commands
    AMD x86 XOP and FMA Support
    SPARC T4 CRC32c Acceleration for iSCSI
    Xen XDF (Virtual Block Device Driver) for x86 Oracle VM
# zfs help create                                                       
usage:                                                                  
             create [-p] [-o property=value] ...                        
             create [-ps] [-b blocksize] [-o property=value] ... -V     
(Solaris 10 Update 11 zfs help create system enhancements)
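As a quick illustration of the new "snap" alias and "zfs diff", two snapshots of a dataset can be compared directly (the dataset name below is purely illustrative):

# zfs snap rpool/export/home@monday
... files are added, changed, or removed ...
# zfs snap rpool/export/home@tuesday
# zfs diff rpool/export/home@monday rpool/export/home@tuesday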

Competitive Pressures:
Competition makes the Operating System market healthy! Let's look at the competitive landscape.
(Illumos Logo)

With this update, Solaris is in a better USB 3.0 support position than Illumos, which is still missing USB 3.0 today. Since Solaris 10, Solaris 11, and Illumos all have top-of-the-line read and write flash accelerators for hard disk storage, a USB 3.0 flash cache will provide a nice, inexpensive performance boost. Solaris USB 3.0 support arriving only in 2013q1 on SPARC may still be shunned by ZFS-using SMB's considering Apple MacOSX, since Apple released USB 3.0 support in 2012q4 along with Fusion Drive, making OSX a strong contender. Apple may have been late to flash caching when proper licensing could not be agreed between Sun/Oracle and Apple, and Apple is still late with deduplication, but now Oracle and Illumos are late in combining USB 3.0 with ZFS.

(Lustre logo, courtesy hpcwire)

Sun purchased Lustre for ZFS integration back in 2007. NetMgt salivated as Lustre for ZFS was on-tap back in 2009; ZFS has needed cluster/replication capability for a long time. Red Hat purchased GlusterFS in 2011 and went beta in 2012 with production-quality filesystem clustering. IBM released ZFS and Lustre on their own hardware & Linux OS. NetMgt noted Lustre on EMC was hitting in 2012, questioned Oracle's sluggishness, and begged for an Illumos rescue. Even Microsoft "got it" when Windows 2012 bundled dedupe, clustering, iSCSI, SMB, and NFS. It seems Apple, Oracle, and Illumos are the last major vendors late with native file system clustering... although Apple is not pretending to play on the server field.

(Superspeed USB 3.0 logo, courtesy usb3-thunderbolt.com)

The lack of file system clustering in the final update of Solaris 10 is miserable, especially after various Lustre patches made it into ZFS years ago. Perhaps Oracle is waiting for a Solaris 11 update for clustering? The lack of focus by Illumos on clustering and USB 3.0 makes me wonder whether their core supporters (embedded storage and cloud providers) really understand how big of a hole they have. An embedded storage provider would want USB 3.0 for external disks and clustering for geographically dispersed storage on their check-list. A cloud provider would want geographically dispersed clustering, at the least.

(KVM is bundled into Joyent SmartOS, as well as Linux)
Missing native ZFS clustering and a hypervisor at Oracle is making Solaris look "long in the tooth". Pointing to Xen on Oracle Linux, while Xen is being removed from Solaris, is a poor excuse by Oracle. Joyent's SmartOS KVM, integrated into Illumos, helps the Solaris community move forward, but what is the use of a hypervisor without shared-nothing clustered storage to migrate those VM's at will? Missing USB 3.0 and native ZFS clustering is putting pressure on Illumos to differentiate itself in the storage market.

Conclusions:
Oracle Solaris 10 is alive and well - GO GET Update 11!!! Some of the most important features include the enhancements for newer CPU architectures (is the SPARC T5 silently supported, since the T5 has been in test since the end of 2012?), USB 3.0, iSCSI support for root disk installations, SVR4 package dependency support at install time, and NFS install media support. Many of these features will be welcomed by SMB's (small to medium sized businesses).

(Bullet Train, courtesy gojapango)
The Solaris train continues to move at Oracle, producing a high-quality product, SPARC support, and new drivers (i.e. USB 3.0) - if Solaris 11, Illumos, or SmartOS releases ZFS clustering, the resulting OS will be market leading.

Saturday, April 30, 2011

Updating QLogic HBA Firmware/BIOS using the QFlasher CD

While installing a QLogic Host Bus Adapter (HBA), I needed a QFlasher CD to update the firmware. The associated knowledge base article:
CD-ROM DOS Boot Disk ISO Image for Updating the BIOS and Firmware QLogic Adapters
does not list the commands necessary to complete the process through Caldera DR-DOS, and the DOS prompt is not self-explanatory.

After booting the x86/64 server I used these commands to get to the update program:

After the system completely boots, the user is on drive A. The drivers are on drive P. To switch to drive P:
A:\> P:

List of available commands
A:\> ?

To read a text file, the filename + extension is required
A:\> type readme.txt

Show files and sub-directories contained in current directory
P:\> dir

Change directory
P:\> cd ISCSI

To run a file, the extension isn't necessary
P:\ISCSI\40XX> iflash

Note: commands & filenames are not case sensitive.

Monday, August 31, 2009

Multi-Node Cluster Shared Nothing Storage


Abstract

A number of months back, a new release of Sun Cluster was made available, in conjunction with OpenSolaris 2009.06. This release offered a new architecture for a lower-cost fail-over cluster capability using Shared-Nothing Storage. This post discusses the benefits of a broader implementation plan to further reduce costs and increase scalability.

Shared Nothing Storage

With the advent of ZFS under Solaris and COMSTAR under OpenSolaris, there is a new no-cost architecture in the world of high-availability under Sun - Shared Nothing Storage.

The benefits are clear in this environment:


  • External Storage is not required (with its complexity and costs)

  • Additional storage area network infrastructure is not required (with its complexity and costs)

  • The OS of the active node continually keeps all the local disks in sync (with virtually no complexity)
There are some drawbacks to this environment:


  • Full CPU capacity must be provisioned on both platforms, so the passive node can absorb peak load for the active applications.
Applications can be run under Node-1 while Node-2 is always kept up to date, ready for failover of the storage as well as the applications which are sitting on that storage pool.
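One way to wire this up is to export a volume from Node-2 (serverb below) over iSCSI via COMSTAR and mirror it against a local disk on Node-1 (servera below), so that every write lands on both nodes. A minimal sketch, with purely illustrative pool, volume, and device names:

serverb/root$ svcadm enable stmf
serverb/root$ zfs create -V 100g rpool/halun
serverb/root$ sbdadm create-lu /dev/zvol/rdsk/rpool/halun
serverb/root$ stmfadm add-view <GUID-reported-by-sbdadm>
serverb/root$ svcadm enable -r svc:/network/iscsi/target:default
serverb/root$ itadm create-target

servera/root$ iscsiadm modify discovery --sendtargets enable
servera/root$ iscsiadm add discovery-address <serverb-address>
servera/root$ devfsadm -i iscsi
servera/root$ zpool create hapool mirror c0t1d0 <iscsi-lun-device>

If Node-1 fails, a current copy of the data is already sitting on Node-2's local storage.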

Dual-Node Shared Nothing Storage

Some people may not be too impressed - there is still a node which is completely unused. This additional node may be considered a pure cost in an H-A or D-R environment. This is not necessarily true if other strategies are taken into consideration.

For example, in a dual-active configuration, the internal storage of each node can be leveraged across both active nodes through dual initiators, making full CPU capacity on both nodes available during peak times.

The benefits are clear in this environment:


  • External Storage is not required (with its complexity and costs)

  • Additional storage area network infrastructure is not required (with its complexity and costs)

  • The OS of the active node continually keeps all the local disks in sync (with virtually no complexity)

  • 200% CPU capacity on two platforms can be leveraged during peak usage times
There are some drawbacks to this environment:


  • Fail-over of a single node results in a reduction to 100% of CPU capacity
Applications can be run under Node-1 and Node-2 while disks on the opposing node are always kept up to date, ready for failover of the storage as well as the applications which are sitting on that storage pool.

Multi-Node Shared Nothing Storage

The dual-active-node shared-nothing architecture seems very beneficial, but what can be done in very typical three-tier environments?

Considering how simple it is to move around pools as well as zones, multi-node clustering can be done with a couple of simple scripts.
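As a minimal sketch of such a script, assume a pool named apppool and a zone named appzone (both names are illustrative), with the zone already configured on serverb and its zonepath sitting on apppool. A fail-over from servera to serverb is then little more than:

servera/root$ zoneadm -z appzone halt
servera/root$ zoneadm -z appzone detach
servera/root$ zpool export apppool

serverb/root$ zpool import apppool
serverb/root$ zoneadm -z appzone attach
serverb/root$ zoneadm -z appzone boot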

For example, in a triple-active configuration, the internal storage of each node can be leveraged across all three active nodes through triple initiators, making full CPU capacity on all nodes available during peak times.


The benefits are clear in this environment:


  • External Storage is not required (with its complexity and costs)

  • Additional storage area network infrastructure is not required (with its complexity and costs)

  • The OS of the active node continually keeps all the local disks in sync (with virtually no complexity)

  • 300% CPU capacity across all platforms can be leveraged during peak processing times

  • Failover of a single node means only a decrease to 200% CPU processing capacity
Applications can be run under Node-1, Node-2, and Node-3 while disks on the opposing nodes are always kept up to date, ready for failover of the storage as well as the applications which are sitting on that storage pool.

Application in Network Management

What does this have to do with Network Management?

Very often, multiple platforms are used for polling, with a high-availability requirement on an embedded database. There is usually a separate cost for H-A kits for applications as well as databases.

Placing each of the tiers within a Solaris Container is the first step to business optimization, higher availability, and cost reduction.


As a reminder, the Oracle RDBMS can legally be run within a CPU-capped Solaris 10 Container in order to reduce CPU licensing costs, leaving plenty of CPU available for failing over applications from other tiers. As additional capacity is needed by the business, an additional license can be purchased and the cap extended to other cores on the existing platform.
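As a minimal sketch, assuming a Container named dbzone (an illustrative name), the cap is set through zonecfg, and the ncpus value can be raised later when the additional license is purchased:

servera/root$ zonecfg -z dbzone
zonecfg:dbzone> add capped-cpu
zonecfg:dbzone:capped-cpu> set ncpus=4
zonecfg:dbzone:capped-cpu> end
zonecfg:dbzone> commit
zonecfg:dbzone> exit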

Pushing down the H-A requirements to the OS level eliminates application & license complexities and enables drag-and-drop load balancing or disaster-recovery under Solaris 10 or OpenSolaris using Solaris Containers. Running a RDBMS within a capped Solaris 10 Container gives the business the flexibility to buy/stage hardware without having to pay for unused CPU cycles until they are actually needed.

- - - - - - - - - - - - - - - - - - -

Update - 2009-01-07: Another blog posting about this feature:

Solaris tip of the week: iscsi failover with COMSTAR


Update - 2019-10-21: Previous "Solaris tip of the week" no longer exists, transferred post:
https://jaydanielsen.wordpress.com/2009/12/10/solaris-tip-of-the-week-iscsi-failover-with-comstar/
I've been researching HA iscsi configurations recently, and I'd like to capture and share what I've learned about the COMSTAR stack. I have a simple demo that you can use for your own experiments...
 

Monday, July 27, 2009

More Work With ZFS


More Work With ZFS

The Last Time...

The last time ZFS was covered, a description of the overall features was given, but how to use all of those features was left uncovered. This post will try to cover some of the other features.

ZFS Sharing Overview

ZFS centralizes all directory sharing into a single command structure and removes the need to manage arcane configuration files to deal with issues such as configuration, status, and persistency.

ZFS Sharing Protocols

The new ZFS suite offers protocol sharing over iSCSI, NFS, as well as SMB (CIFS). There is a catch to this: the ZFS host must support kernel implementations of the protocols - SMB (CIFS) is only supported under more recent releases of OpenSolaris, and iSCSI is only supported under the Solaris families.
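For example, each protocol is driven by a dataset property (on a release whose kernel supports that protocol); the u201 filesystem from the examples below is reused here, while the vol01 volume is purely illustrative:

servera/root$ zfs set sharesmb=on u201
servera/root$ zfs create -V 10g u201/vol01
servera/root$ zfs set shareiscsi=on u201/vol01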

ZFS Sharing Stopping and Starting

ZFS uses a property to determine whether a filesystem mount is going to be shared or not.

To stop NFS sharing of a ZFS filesystem.

servera/root$ zfs set sharenfs=off u201
To start NFS sharing of a ZFS filesystem.

servera/root$ zfs set sharenfs=on u201
Sharing Status

On the same server, one can run the "share" command to see what is being shared over all protocols, persistent or not. A listing of the domestic sharing protocols that can be checked is in a configuration file on the sharing host:

servera/admin$ cat /etc/dfs/fstypes
nfs NFS Utilities
autofs AUTOFS Utilities
cachefs CACHEFS Utilities

servera/admin$ share
-               /u000   anon=60001,rw=servera   ""
-               /u201   rw   ""
On a foreign server, one can check to see what is being shared (via the NFS protocol), persistent or not. A listing of foreign protocols that can be checked is in a configuration file on the remote host:

serverb/admin$ cat /etc/dfs/fstypes
nfs NFS Utilities
autofs AUTOFS Utilities
cachefs CACHEFS Utilities

serverb/admin$ dfshares servera
RESOURCE            SERVER    ACCESS    TRANSPORT
servera:/cdunix     servera   -         -
servera:/u201       servera   -         -
For the share and dfshares commands, if no protocol is specified, then the "nfs" protocol is the default. A ZFS filesystem shared over NFS can therefore be checked with the ordinary "share" and "dfshares" commands.

Sharing and Persistence

In most historic POSIX systems, there is a file referred to as "sharetab" (or some derivative of it) to review the sharing of filesystems. This works regardless of the underlying filesystem (i.e. UFS, VxFS, ZFS, etc.) In the example below, cdunix is not on a ZFS filesystem.

servera/admin$ cat /etc/dfs/sharetab
/u000   -       nfs     rw
/u201   -       nfs     rw
If one is running a pure ZFS environment, persistence is held as a property. You can see the status of the ZFS file share through a ZFS command.

servera/admin$ zfs get sharenfs u201
NAME PROPERTY VALUE SOURCE
u201 sharenfs on local


Checking all shared protocols through ZFS is also possible, by filtering the output of the "all" properties option.

servera/admin$ zfs get all | grep share
u201 sharenfs on default
u201 shareiscsi off default
u201 sharesmb off default
Checking the share status for all protocols from a foreign server is not as elegant. Individual protocols must be used, such as the "dfshares" command.