Tuesday, December 21, 2010

Sun Founders Panel 2006

Sun Founders Panel 2006

This video from the Computer History Museum contains an intriguing panel with Sun founders and pioneers Andy Bechtolsheim, Bill Joy, Vinod Khosla, Scott McNealy, and John Gage.



A memorable quote from the session, "Get them on tape before they die." Some of the details surrounding this session are located here. This video is a "must watch" for anyone involved in the technology business.

Technologist David Halko states: predicting the future today requires understanding the past.

Sunday, December 5, 2010

CoolThreads UltraSPARC and SPARC Processors


[UltraSPARC T3 Micrograph]

CoolThreads UltraSPARC and SPARC Processors

Abstract:

Processor development takes an immense quantity of time to architect a high-performance solution, and an uncanny vision of the future to project market demand and acceptance. In 2005, Sun embarked on a bold path toward many cores and many threads per core. Since the purchase of Sun by Oracle, the internal SPARC road map from Sun has been clarified.


[UltraSPARC T1 Micrograph]
Generation 1: UltraSPARC T1
A new family of SPARC processors was announced by Sun on November 14, 2005.
  • Single die
  • Single socket
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4 threads/core
  • 1 shared floating point core
  • 1.0 GHz - 1.4 GHz clock speed
  • 279 million transistors
  • 378 mm2
  • 90 nm CMOS (TI)
  • 1 JBUS port
  • 3 Megabyte Level 2 Cache
  • 1 Integer ALU per Core
  • ??? Memory Controllers
  • 6 Stage Integer Pipeline per Core
  • No embedded Ethernet into CPU
  • Crypto Algorithms: ???
The platform was designed as a front-end server for web applications. With a massive number of cores, it was intended to deliver web-tier performance comparable to existing quad-socket systems, using only a single socket.

To understand how ground-breaking this technology was, consider that most processors at the time were single core, with the occasional dual-core processor (cores glued together through a more expensive process referred to as a multi-chip module, driving higher software licensing costs for those platforms).


Generation 2: UltraSPARC T2
The next generation of the CoolThreads processor was announced by Sun in August 2007.
  • Single die
  • Single Socket
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4, 6, 8 floating point units
  • 8 threads/core
  • 1.2 GHz - 1.6 GHz clock speed
  • 503 million transistors
  • 342 mm2
  • 65 nm CMOS (TI)
  • 1 PCI Express port (1.0 x8)
  • 4 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 4x Dual Channel FBDIMM DDR2 Controllers
  • 8 Stage Integer Pipeline per Core
  • 2x 10 Gigabit Ethernet on-CPU ports
  • Crypto Algorithms: DES, Triple DES, AES, RC4, SHA1, SHA256, MD5, RSA-2048, ECC, CRC32
This processor was designed for more compute-intensive requirements and incredibly efficient network capacity. The platform made an excellent front-end server for applications as well as Middleware, with the ability to do 10 Gigabit wire-speed encryption with virtually no CPU overhead.

Competitors started to build single-die dual-core CPUs, offering quad-core processors only by gluing dual-core dies together into a Multi-Chip Module.


[UltraSPARC T2 Micrograph]
Generation 3: UltraSPARC T2+
Sun quickly released the first SMP-capable CoolThreads processor, the UltraSPARC T2+, in April 2008.
  • Single die
  • 1-4 Sockets
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4, 6, 8 floating point units
  • 8 threads/core
  • 1.2 GHz - 1.6 GHz clock speed
  • 503 million transistors
  • 342 mm2
  • 65 nm CMOS (TI)
  • 1 PCI Express port (1.0 x8)
  • 4 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 2x? Dual Channel FBDIMM DDR2 Controllers
  • 8? Stage Integer Pipeline per Core
  • No embedded Ethernet into CPU
  • Crypto Algorithms: DES, Triple DES, AES, RC4, SHA1, SHA256, MD5, RSA-2048, ECC, CRC32
This processor allowed the T processor series to move from Tier 0 web engines and Middleware into the Application tier. Architects started to understand the benefits of this platform entering the Database tier. This was the first CoolThreads processor to scale past 1 and up to 4 sockets.

By this time, the competition really started to understand that Sun had properly predicted the future of computing. The drive toward single-die quad-core chips had started, with hex-core Multi-Chip Modules being predicted.


Generation 4: SPARC T3
The market became nervous with Oracle purchasing Sun. The first Oracle-branded CoolThreads SMP-capable processor, the SPARC T3, was launched in September 2010.
  • Single die
  • 1-4 Sockets
  • 64 bits
  • 16 integer cores
  • 16 crypto cores
  • 16 floating point units
  • 8 threads/core
  • 1.67 GHz clock speed
  • ??? million transistors
  • 377 mm2
  • 40 nm
  • 2x PCI Express port (2.0 x8)
  • 6 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 4x DDR3 SDRAM Controllers
  • 8? Stage Integer Pipeline per Core
  • 2x 10 Gigabit Ethernet on-CPU ports
  • Crypto Algorithms: DES, 3DES, AES, RC4, SHA1, SHA256/384/512, Kasumi, Galois Field, MD5, RSA to 2048 key, ECC, CRC32
This processor was more than what the market was anticipating from Oracle. It took all the features of the T2 and T2+ and combined them into the new T3, with an increase in overall features. No longer did the market need to choose between multiple sockets or embedded 10 GigE interfaces - this chip had it all, plus double the cores.

Immediately before this release, the competition had been releasing single-die hex-core CPUs and octal-core CPUs built by gluing dies together into multi-chip modules. The T3 was a substantial upgrade over the competition, offering double the cores on a single die.


Generation 5: SPARC T4
Oracle indicated in December 2010 that they had thousands of these processors in the lab and predicted the processor would be released at the end of 2011.

After the announcement, a separate press release indicated the processor will have a renovated core for higher single-threaded performance, but the socket will offer half the cores.

Most vendors are projected to have 8-core processors available (through Multi-Chip Modules) by the time the T4 is released, but only the T4 should be on a single piece of silicon during this period.


[2010-12 SPARC Solaris Roadmap]
Generation 6: SPARC T5

Some details on the T5 were announced with the T4. The processor will use the renovated T4 core, on a 28 nm process, and will return to 16 cores per socket. This may be the first CoolThreads T processor able to scale from 1 to 8 sockets. It is projected to appear in early 2013.

Some vendors are projected to have 12-core processors on the market using Multi-Chip Module technology, but when the T5 is released, it should still be the market leader with 16 cores per socket.

Network Management Connection

Consolidating network management stations in a globalized environment works very well with the CoolThreads T-Series processors. Consolidating multiple slower SPARC platforms onto single- and dual-socket T-Series systems has worked well over the past half decade.

While most network management polling engines will scale nearly linearly on these highly-threaded processors, some operations are bound to single threads. These types of processes include event correlation, startup time, and synchronization after a discovery in a large managed topology.

The market will welcome the enhanced T4 processor core and the T5 processor, when it is released.

Friday, December 3, 2010

Scalable Highest Performing Clusters at Value Pricing



Scalable Highest Performing Clusters at Value Pricing

Abstract:
Oracle presented another milestone achievement in their 5 year SPARC/Solaris road map with Fujitsu. John Fowler stated: "Hardware without Software is a Door-Stop, Solaris is the gateway."

High-Level:
The following is a listing of my notes from the two sessions. The notes have been combined, with Larry Ellison outlining the high-level and John Fowler presenting the lower-level details. SPARC T3 making world-record benchmarks. New T3 based integrated products. Oracle's Sun/Fujitsu M-Series gets a speed bump. SPARC T4 is on the way.

Presentation Notes:


New TpmC Database OLTP Performance
  • SPARC Top cluster performance
  • SPARC Top cluster price-performance
  • (turtle)
    HP Superdome Itanium 4 Million Transactions/Minute
  • (stallion)
    IBM POWER7 Power 780 10 Million Transactions/Minute
    (DB2 clustered through custom applications)
  • Uncomfortable 4 months for Oracle, when IBM broke the Oracle record
  • (cheetah)
    Sun SPARC 30 Million Transactions/Minute
    (standard off-the-shelf Oracle running RAC)
  • Oracle/Sun performance benchmark => ( IBM + HP ) x 2 !
  • Sun to IBM Comparison:
    3x OLTP Throughput, 27% better Price/Performance, 3.2x faster response time
  • Sun to HP Comparison:
    7.4x OLTP Throughput, 66% better Price/Performance, 24x compute density
  • Sun Supercluster:
    108 sockets, 13.5 TB Memory, Infiniband 40 Gigabit link, 246 Terabytes Flash, 1.7 Petabytes Storage, 1 Quadrillion rows, 43 Trillion transactions per day, 0.5 sec avg response

New Gold Release
  • Gold Standard Configurations are kept in the lab
  • What the customer has, the support organization will have assembled in the lab
  • Oracle, Sun, Cisco, IBM will all keep their releases and bug fixes in sync with releases

SPARC Exalogic Elastic Cloud
  • Designed to run Middleware
  • New T3 processor based
  • 100% Oracle Middleware is Pure Java
  • Tuned for Java and Oracle Fusion Middleware
  • Load-balances with elasticity
  • Ships Q1 2011
  • T3-1B SPARC Compute Blades based
    30 Compute Servers, 16 cores/server, 3.8 TB RAM, 960 GB mirrored flash disks, 40 TB SAS Storage, 4 TB Read Cache, 72 GB Write Cache, 40 Gb/sec Infiniband, 10 GigE to Datacenter

SPARC Supercluster
  • New T3 processor based and M processor based
  • T3-2 = 2 nodes, 4 CPU's, 64 cores/512 threads, 0.5 TB RAM, 96 TB HDD ZFS, 1.7TB Write Flash, 4TB Read Flash, 40 Gbit Infiniband
  • T3-4 = 3 nodes, 12 CPU's, 192 cores/1536 threads, 1.5 TB RAM, 144 TB HDD ZFS, 1.7TB Write Flash, 4TB Read Flash, 40 Gbit Infiniband
  • M5000 = 2 nodes, 16 CPU's, 64 core/128 threads, 1 TB RAM, 144 TB HDD ZFS, 1.7TB Write Flash, 4TB Read Flash, 40 Gbit Infiniband

T3 Processor in production
  • Already released, and performing in these platforms
  • 1-4 processors in a platform
  • 16 cores/socket, 8 threads/core
  • 16 crypto-engines/socket
  • More cores, threads, 10 GigE on-chip, more crypto engines

T4 Processor in the lab!
  • Thousands under test in the lab, today
  • To be released next year
  • 1-4 processors
  • 8 cores/socket, 8 threads/core
  • faster per-thread execution

M3 Processor from Fujitsu
  • SPARC64 VII+
  • 1-64 SPARC64 VII+ Processors
  • 4 cores, 2 threads/core
  • Increased CPU frequency
  • Double cache memory
  • 2.4x performance of original SPARC64 VI processor
  • VII+ boards will slot into the VI and VII board chassis

Flash Optimization
- Memory hierarchy with software awareness

Infiniband
- Appropriate for High Performance Computing
- Dramatically better performance than Ethernet for linking servers to servers & storage

New Solaris 11 Release

  • Next Generation Networking
    re-engineered network stack
    low latency high bandwidth protocols
    virtualized
  • Cores and Threads Scale
    Adaptive Thread and Memory Placement
    10,000's of cores & threads
    thread observability with DTrace

  • Memory Scale
    Dynamic optimization for large memory configs
    Advanced memory placement
    VM systems for 1000's of TB memory configs

  • I/O Performance
    Enhanced NUMA I/O framework
    Auto-Discovery of NUMA architecture
    I/O resources co-located with CPU for scale/performance

  • Data Scale
    ZFS Massive storage for massive datasets

  • Availability
    Boot times in seconds
    Minimized OS Install
    Risk-Free Updates with lightweight boot and robust package dependency
    Extensive Fault Management with Offline failing components
    Application Service Management with Restart failed applications and associated services quickly

  • Security
    Secure by default
    Secure boot validated with onboard Trusted Platform Module
    Role Based Root Access
    Encrypted ZFS datasets
    Accelerated Encryption with hardware encryption support

  • Trusted Solaris Extensions
    Dataset labels for explicit access rules
    IP labels for secure communication

  • Virtualization
    Network Virtualization to add to Server and Storage Virtualization
    Network Virtualization includes Virtual NICs and Virtual Switches

SPARC Supercluster Architecture
  • Infiniband is 5x-8x faster than most common Enterprise interconnects
    Infiniband has been leveraged with storage and clustering in software
  • Flash is faster than Rotating Media
    Integrated into the Memory AND Storage Hierarchy

SPARC 5 Year Roadmap
  • SPARC T3 delivered in 2010
  • SPARC VII+ delivered in 2010
  • Solaris 11 and SPARC T4 to be delivered in 2011

Next generation of mission critical enterprise computing
  • Engineer software with hardware products
  • Deliver clusters for general purpose computing
  • Enormous levels of scale
  • Built in virtualization
  • Built in Security
  • Built in management tools
  • Very Very high availability
  • Tested with Oracle software
  • Supported with Gold Level standard
  • Customers spend less time integrating and start delivering services on systems engineered with highest performance components

Thursday, November 18, 2010

Simulating OpenView Events With Net-SNMP SNMP Traps



Simulating OpenView Events With Net-SNMP SNMP Traps

Abstract:
HP OpenView Network Node Manager used to be the industry-standard network management tool. Net-SNMP is the standard SNMP stack used on most operating systems, such as Solaris. There is still a need to simulate the platform during infrastructure migrations. One common way to simulate the HP OpenView Network Node Manager environment is through the OpenView Events (ovevent) and Net-SNMP SNMP Trap (snmptrap) commands.

OpenView Events
The HP OpenView Node Down Event can be simulated through a guaranteed transport via the "ovevent" command.
ovevent $Severity $NMS \
.1.3.6.1.4.1.11.2.17.1.0.$Event \
.1.3.6.1.4.1.11.2.17.2.1.0 Integer 14 \
.1.3.6.1.4.1.11.2.17.2.2.0 OctetString ${Node}

NMS = IP or Resolvable Name of Network Management Station
Node = IP or Resolvable Name of the Managed Device
Severity = Critical, Major, Minor, Info
Event = 58916865 [OV_Node_Down], 58916864 [OV_Node_Up]
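As a usage sketch (the host names below are hypothetical examples, not values from the original environment), the variables can be populated and the command above invoked to send a Critical OV_Node_Down event:

Severity="Critical"
NMS="nms.example.com"
Node="router1.example.com"
Event=58916865

ovevent $Severity $NMS \
.1.3.6.1.4.1.11.2.17.1.0.$Event \
.1.3.6.1.4.1.11.2.17.2.1.0 Integer 14 \
.1.3.6.1.4.1.11.2.17.2.2.0 OctetString ${Node}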
Simulate Using Net-SNMP via SNMP V1 Trap
An SNMP V1 trap can be produced to closely simulate this Node Down event. Note, this is not the exact representation, nor is the delivery of the event guaranteed. An SNMP Trap Receiver must be listening to receive this.
snmptrap -v 1 -c $Community $NMS \
.1.3.6.1.4.1.11.2.17.1 ${Node} 6 58916865 0


Community = SNMP Community String used on the Network Management Station
Simulate Using Net-SNMP via SNMP V2c Trap
An SNMP V2c trap can be produced to closely simulate this Node Down event. Note, this is not the exact representation, nor is the delivery of the event guaranteed. An SNMP Trap Receiver must be listening to receive this.
snmptrap -v 2c -c $Community $NMS \
0 .1.3.6.1.4.1.11.2.17.1.0.58916865 \
.1.3.6.1.4.1.11.2.17.2.1.0 i 14 \
.1.3.6.1.4.1.11.2.17.2.2.0 s $Node
Simulate Using Net-SNMP via SNMP V2c Trap Test Tool
An SNMP V2c trap can be produced to closely simulate this Node Down event. Note, this is not the exact representation, nor is the delivery of the event guaranteed. An SNMP Trap Receiver must be listening to receive this.
snmptest -v 2c -c $Community $NMS:162 <<!
\$T
.1.3.6.1.2.1.1.3.0
t
0
.1.3.6.1.6.3.1.1.4.1.0
o
.1.3.6.1.4.1.11.2.17.1.0.58916865
i
0
.1.3.6.1.4.1.11.2.17.2.1.0
i
14
.1.3.6.1.4.1.11.2.17.2.2.0
s
$Node

!
Conclusion:
Common events from HP OpenView Network Node Manager, the former "gold standard" in Network Management, can be simulated under stock Solaris 10 and Solaris 11 with simple available OS commands.


Monday, November 8, 2010

Graphical User Interfaces: X and Beyond



Graphical User Interfaces: X and Beyond

Abstract:

There has been much discussion lately regarding the core of Desktop User Interfaces. Mark Shuttleworth has been guiding the Ubuntu Community to move toward Wayland. Long-time X community contributor Alan Coopersmith has added some clarification of his own regarding X11's long-term stability and future viability. It is healthy to go through the discussion of display systems occasionally, for the newer programmer.

History:

The discussion of desktop systems for UNIX is a recurring theme. This is in no way an exhaustive history, nor is it meant to be 100% accurate in comparing features of one system against another - not all desktops are created equal in capability or sophistication. Many of these systems relied upon X11, while others did not.


Most UNIX systems used TTY for access on early systems.


Terminals leveraged curses libraries for full-screen usage. Various UNIX System V platforms built menuing systems upon the curses library through the Form and Menu Language Interpreter (FMLI) and enhanced the user experience through the Framed Access Command Environment (FACE).


Solaris started with SunView, moved to OpenWindows, used NeWS with Display PostScript, and eventually started down the course of converging to a 100% open-source X11 system. The windowing system was based upon olwm and appeared on AT&T UNIX as well as Solaris.


There is a virtualized window manager based upon OpenWindows called OLVWM, which conforms to the OPEN LOOK standard, but Solaris decided to abandon the Open Look Window Manager (olwm) in a later unification effort.

As X Windows became more popular, some UNIX vendors offered graphically augmented enhancements, such as NCR's XFMLI. Sun received an infusion of cash from AT&T, and AT&T purchased NCR. The use of FMLI within AT&T was phenomenal among its user community, and NCR's XFMLI was used to modernize the desktop without the necessity of changing the investment in FMLI from the System V code base. Solaris even added an FMLI interface to the Live Upgrade OS feature.

Solaris started the process of abandoning FMLI and FACE, its enhanced terminal-based user experience, in the mid 2000's, citing an internationalization overhaul as a primary motivation.

A group of vendors aligned against Sun and AT&T (who had standardized on OPEN LOOK) with an alternative GUI referred to as Motif. It was basically a copy of an IBM standard, which IBM followed for OS/2 and Microsoft copied with Windows 3.1. There was an attempted open-source version called Open Motif. This was later abandoned in a subsequent unification effort.


Next's NextStep brought a new level of competition to Sun's Solaris. A move was made to converge with OpenStep. An open-source version was attempted with GNUstep. Next was founded by Apple co-founder Steve Jobs, and Next was later purchased by Apple. PDF, rather than PostScript, was used at the heart of the environment. At this point, the NextStep & OpenStep environments were implemented on Apple hardware, from the desktop to the server, laptop, notebook, and handheld environments.


Vendors dug in their heels, in what is now sometimes referred to as the UNIX Wars. Eventually, consensus was reached between most vendors with the consolidation of OPEN LOOK and MOTIF into the Common Desktop Environment (CDE). The tools from Sun's original SunView, which had been ported to OPEN LOOK, were ported again, using the look and feel of MOTIF. Solaris has since decided to abandon CDE, in the mid 2000's.



While the UNIX vendors were working towards a new standard desktop, some other desktops were receiving activity. GNOME was a very important project, and it was adopted by various Linux vendors as a default desktop. Solaris adopted a variant of GNOME, called the Java Desktop System, as its standard going-forward environment in the mid 2000's.

There was another competing open source environment to GNOME called KDE. KDE was offered as a secondary option on various Linux vendor desktops. Solaris offered KDE as a freeware add-on.


There was a very forward-looking attempt at an open-source modern desktop environment, written in Java by Sun, called Project Looking Glass. The project seemed to die in the mid 2000's, possibly from a threatened lawsuit by Apple. Many of its features later appeared in MacOSX. Other features were later copied into Windows 7.

Thoughts:

With so much of the Open Systems community based upon remote devices and servers, it seems incomprehensible that mechanisms allowing simple administration (via TTY, FMLI, XFMLI, and X11) would be replaced by multiple levels of complexity (web server, web browser, XML, AJAX, HTML#, CSS, etc.). HTML was basically a static-page serving system that has been hacked together to become more dynamic, but its efficiency is nowhere near TTY or X as far as overhead is concerned.

This being said, there seems to be a drive in this community to move towards better user experience, on the desktop, at the expense of their core-constituency on the server and embedded devices.

  • How much of Looking Glass could be reused?
    (The project focus shifted to Project Wonderland, which is now Open Wonderland.)
  • Wasn't there already a significant effort to build OpenStep that could be leveraged?
  • How much of the GUI and Drivers associated with Darwin under MacOSX are OpenSource and could be leveraged?
Since there is a popular and fairly well documented API [for desktops, mobile, and semi-mobile systems], one might think that taking an older [OpenStep] code base [from, arguably, the most popular user-facing UNIX systems in the world] and modernizing it would make an excellent option.

Since Java runs everywhere and is maintained by major corporations, as well as a fledgling open source project, Looking Glass would bring a tremendous revolution to the Open Systems Desktop, and make it current with open source MacOSX as well as proprietary Windows 7.

Architecture Process Proposal:

If this author were involved in the management of this project, a table of access methods would be built (TTY, X11, Display PostScript, PDF, HTTP/HTML, Direct Frame Buffer), along with a table of raw features (line, circle, arc, font, cursor, box, etc.) and a table of derived features (window, menu, window manager, table widget, etc.), and a meta-language would be designed that is both forwards & backwards compatible across the access methods.

This does not mean that every more complex feature would be supported by a simpler access method, but at least there should be a corollary for most and a universal benefit to all communities. Resources could then be leveraged from the core-constituency of the Open Systems markets, and everyone could take away a benefit to their respective community & commercial company.

Postscript:

By the way, I love X. The older X based applications were always fast, in comparison to modern toolkit based X applications. Applications built in X ran phenomenally fast when ported [without the X protocol] to Microsoft Windows [in comparison to native MS developed GUI's]. Developers made a significant mistake by not concentrating on simplicity & speed when generating newer user experience environments. Every generation of desktop from SunView to OpenWindows, CDE, and GNOME became substantially heavier. Working with NextStep next to a SunView system made the Next platform much more appealing, from a performance perspective, as a user.

The lack of decent TTY based GUI interfaces extended to X Windows by Open Vendors created a problem for system administration of servers, routers, firewalls, storage servers, network switches, wireless access points, kiosks, cash registers, etc. These platforms are the core-constituency of the Open Systems world. All of the vendors need to create proprietary menuing systems because of these holes, while they could be spending more time on developing Open Systems instead of this code, which should be written once.

Companies like Sun, AT&T, Next, and Apple capitalized on simplifying the user interface [SunView, OpenLook, NextStep, Aqua] in the UNIX world. Newer graphics cards and CPU instruction set enhancements should make our lives EASIER by removing code from, instead of adding code to, the supportable code-base. The fact that people are considering re-writing the entire stack of code from the ground up to replace X is a key factor that should tell us something is deeply wrong with our current thinking, our understanding of history, and our understanding of our current customer base.

Sunday, October 17, 2010

FPing: Options & Tuning


FPing: Options & Tuning

Abstract:

The FPing command offers substantial capability in polling multiple devices asynchronously. FPing is projected to be bundled with Solaris 11, a worthy tool to be added to the Solaris toolkit. There are a lot of command line options, for which various manual pages & help files hold incomplete or conflicting information. This document is an attempt to clarify the options.

FPing Version:

The following illustrates the version of "fping" to which this commentary applies:

sunt2000$ fping -v
fping: Version 2.4b2_to $Date: 2001/01/25 11:25:04 $
fping: comments to noc@zerohype.com

This version is currently installed via an SVR4 package from sunfreeware and can be downloaded under Solaris 10 here.

Issues Experienced:

A combination of selected command line arguments, total number of devices, and delay in the response from the devices can occasionally cause a crash of "fping" with the error "Arithmetic Exception".

The individual maintaining the fping source code has not been responsive to requests for clarification regarding the various crashes which have been experienced with the package. After working on the crash issue for several weeks, it became necessary to clarify the command line options and publish a short blog entry on the experience.

Command Line Options:

The command line options below were taken from the manual page for the Solaris packaged distribution and augmented with additional comments. Small fonts in parenthesis are original manual page entries, italics represent augmented description.

fping [ options ] [ systems... ]

-a Show systems that are alive.

-A Display targets by address rather than (DNS name) operating system name resolution.

-b n Number of bytes of ping data to send. The minimum size (normally 12) allows room for the data that fping needs to do its work (sequence number, timestamp). The reported received data size includes the IP header (normally 20 bytes) and ICMP header (8 bytes), so the minimum total size is 40 bytes. Default is 56, as in ping. Maximum is the theoretical maximum IP datagram size (64K), though most systems limit this to a smaller, system-dependent number.

-B n In the default mode, fping sends several requests to a target before giving up, waiting longer for a reply on each successive request. This parameter is the value by which the wait time is multiplied on each successive request; it must be entered as a floating-point number (x.y). This is referred to as an Exponential Backoff Factor. The default is 1.5.

-c Number of request packets to send to each target. In this mode, a line is displayed for each received response (this can be suppressed with -q or -Q). Also, statistics about responses for each target are displayed when all requests have been sent (or when interrupted). The default is 1.

-C Similar to -c, but the per-target statistics are displayed in a format designed for automated response-time statistics gathering. The output display is also called Verbose Mode. For example:
% fping -C 5 -q somehost
somehost : 91.7 37.0 29.2 - 36.8
shows the response time in milliseconds for each of the five requests, with the "-" indicating that no response was received to the fourth request.

-d Use (DNS to lookup) operating system name resolution on the address of the returned ping packet. This allows you to give fping a list of IP addresses as input and print hostnames in the output.

-e Show elapsed (round-trip) time of packets.

-f file Read list of targets from a file. This option can only be used by the root user. Not used when -g is specified. Regular users should pipe in the file via stdin:
% fping < file

-g Generate a target list from a supplied IP netmask, or a starting and ending IP. Specify the netmask or start/end in the targets portion of the command line.
ex. To ping the class C 192.168.1.x, the specified command line could look like either:
fping -g 192.168.1.0/24
or
fping -g 192.168.1.0 192.168.1.255

-h Print usage message.

-i n The minimum amount of time (in milliseconds) between sending a ping packet to any target (default is 25). This is the ICMP packet sending interval. The poller will move linearly through the list of provided hosts or IP addresses, waiting this interval after sending a packet before sending a packet to the next host or IP in the list. For a large quantity of nodes, this number may need to be reduced to avoid a crash with an "Arithmetic Exception" error. Some networks may drop packets if this is set too low. The maintainer's web site manual page specifies the default value as 10. If this value is critical for your implementation, specify it explicitly.

-l Loop sending packets to each target indefinitely. Can be interrupted with Ctrl-C; statistics about responses for each target are then displayed. May not die in looping mode if the process reading STDOUT is closed abnormally.

-m Send pings to each of a target host's multiple interfaces.

-n Show target by operating system resolved name. Same as -d.

-p n In looping or counting modes (-l, -c, or -C), this parameter sets the time in milliseconds that fping waits between successive packets to an individual target. Useful on unreliable networks for spacing a retry away from the former attempt. For a large quantity of nodes, increasing this number may help reduce instances of a crash with an "Arithmetic Exception" error. Default is 1000.

-q Quiet. Don't show per-target results, just set final exit status.

-Q n Like -q, but show summary results every n seconds. If this summary happens before all the devices can be polled, an "Arithmetic Exception" error may occur. This value may need to be increased to alleviate this crash symptom.

-r n Retry limit (default 3). This is the number of times an attempt at pinging a target will be made, not including the first try.

-s Print cumulative statistics upon exit.

-t n Initial target timeout in milliseconds (default 500). In the default mode, this is the amount of time that fping waits for a response to its first request. Successive timeouts are multiplied by the backoff factor.
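For example, assuming the defaults described above (-t 500 and -B 1.5), the successive per-target waits would be approximately 500 ms, 750 ms, and 1125 ms across the retries.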

-u Show targets that are unreachable.

-v Print fping version information.
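As a closing usage sketch (the file name and tuning values below are hypothetical, chosen with the "Arithmetic Exception" notes above in mind, not taken from a tested configuration), a regular user could continuously poll a large device list piped in via stdin:

cat devices.txt | fping -l -i 25 -p 2000 -Q 300

This loops over the targets (-l), paces packets at least 25 ms apart (-i), waits 2000 ms between successive packets to the same target (-p), and prints only a summary every 300 seconds (-Q).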

Thursday, October 14, 2010

Solaris 10: History of Zones


There was a great article holding a short history of Zones under Solaris 10 on a version-by-version release basis.
Solaris 10 11/06 (Update 3)
Zone renaming
Zone move and clone
zone attach/detach
Privileges in zone configuration

Solaris 10 8/07 (Update 4)
Upgrades, Live upgrades (ZULU)
IP Instances (dedicated NIC/separate TCP/IP stack)
Resource setting for memory/CPU

Solaris 10 5/08 (Update 5)
Dry run of zone migration (zoneadm -n)
CPU caps for zones

Solaris 10 10/08 (Update 6)
Update on attach
Default router in shared stack
Full ZFS support for zones

Solaris 10 10/09 (Update 8)
Turbo-Charged SVR4 Packaging
Zones Parallel Patching

Solaris 10 9/10 (Update 9)
Zones P2V (Physical to Virtual)
Host ID Emulation
"Upgrade on attach"
The benefits to Zones are many, but a few include: zero cost, incredible density on a single OS instance (up to 4000), and virtually no overhead.
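As an illustrative sketch of the detach/attach and "update on attach" features listed above (the zone name and zonepath are hypothetical), a zone could be moved between Solaris 10 hosts as follows:

zoneadm -z webzone halt
zoneadm -z webzone detach
# copy the zonepath (e.g. /zones/webzone) to the target host, then:
zonecfg -z webzone create -a /zones/webzone
zoneadm -z webzone attach -u
zoneadm -z webzone boot

The "-u" flag asks the attach operation to update the zone's packages and patches to match the new host, rather than failing on a mismatch.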

Solaris Zones are an essential part of any cost-effective data center when performing managed services for external customers.