Tuesday, December 21, 2010

Sun Founders Panel 2006


This video from the Computer History Museum contains an intriguing panel with Sun founders and pioneers Andy Bechtolsheim, Bill Joy, Vinod Khosla, Scott McNealy, and John Gage.



A memorable quote from the session, "Get them on tape before they die." Some of the details surrounding this session are located here. This video is a "must watch" for anyone involved in the technology business.

Technologist David Halko states: predicting the future today requires understanding the past.

Sunday, December 5, 2010

CoolThreads UltraSPARC and SPARC Processors


[UltraSPARC T3 Micrograph]


Abstract:

Processor development takes an immense quantity of time to architect a high-performance solution, and an uncanny vision of the future to project market demand and acceptance. In 2005, Sun embarked on a bold path toward many cores and many threads per core. Since the purchase of Sun by Oracle, the internal SPARC road map from Sun has been clarified.


[UltraSPARC T1 Micrograph]
Generation 1: UltraSPARC T1
A new family of SPARC processors was announced by Sun on November 14, 2005.
  • Single die
  • Single socket
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4 threads/core
  • 1 shared floating point core
  • 1.0 GHz - 1.4 GHz clock speed
  • 279 million transistors
  • 378 mm2
  • 90 nm CMOS (TI)
  • 1 JBUS port
  • 3 Megabyte Level 2 Cache
  • 1 Integer ALU per Core
  • ??? Memory Controllers
  • 6 Stage Integer Pipeline per Core
  • No embedded Ethernet into CPU
  • Crypto Algorithms: ???
The platform was designed as a front-end server for web server applications. With a massive number of cores, it was designed to deliver, in a single socket, web-tier performance similar to existing quad-socket systems.
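
To confirm the core and thread layout above on a running system, Solaris's own processor tools are enough; a minimal sketch, assuming a T1-based server such as a Sun Fire T2000:

# each hardware thread appears to Solaris as a virtual processor;
# an 8-core, 4-thread UltraSPARC T1 reports 32
psrinfo | wc -l

# physical view: all virtual processors on one physical chip
psrinfo -pv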

To understand the ground-breaking advancement in this technology, consider that most processors of the day were single core, with an occasional dual-core processor whose cores were glued together through a more expensive process referred to as a multi-chip module, driving higher software licensing costs for those platforms.


Generation 2: UltraSPARC T2
The next generation of the CoolThreads processor was announced by Sun in August 2007.
  • Single die
  • Single Socket
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4, 6, 8 floating point units
  • 8 threads/core
  • 1.2 GHz - 1.6 GHz clock speed
  • 503 million transistors
  • 342 mm2
  • 65 nm CMOS (TI)
  • 1 PCI Express port (1.0 x8)
  • 4 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 4x Dual Channel FBDIMM DDR2 Controllers
  • 8 Stage Integer Pipeline per Core
  • 2x 10 GigabitEthernet on-CPU ports
  • Crypto Algorithms: DES, Triple DES, AES, RC4, SHA1, SHA256, MD5, RSA-2048, ECC, CRC32
This processor was designed for more compute-intensive requirements and incredibly efficient network capacity. The platform made an excellent front-end server for applications as well as Middleware, with the ability to do 10 Gigabit wire-speed encryption with virtually no CPU overhead.
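
The on-chip crypto cores are exposed through the Solaris Cryptographic Framework; a quick way to inspect them (a sketch, assuming stock Solaris 10, where the T2 units typically register as the n2cp and n2rng providers):

# list kernel hardware providers
cryptoadm list

# list the mechanisms (AES, RC4, SHA1, ...) each provider offers
cryptoadm list -m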

Competitors started to build single-die dual-core CPU's, producing quad-core processors by gluing two dual-core dies into a Multi-Chip Module.


[UltraSPARC T2 Micrograph]
Generation 3: UltraSPARC T2+
Sun quickly released the first SMP-capable CoolThreads processor, the UltraSPARC T2+, in April 2008.
  • Single die
  • 1-4 Sockets
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4, 6, 8 floating point units
  • 8 threads/core
  • 1.2 GHz - 1.6 GHz clock speed
  • 503 million transistors
  • 342 mm2
  • 65 nm CMOS (TI)
  • 1 PCI Express port (1.0 x8)
  • 4 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 2x? Dual Channel FBDIMM DDR2 Controllers
  • 8? Stage Integer Pipeline per Core
  • No embedded Ethernet into CPU
  • Crypto Algorithms: DES, Triple DES, AES, RC4, SHA1, SHA256, MD5, RSA-2048, ECC, CRC32
This processor allowed the T processor series to move from the Tier 0 web engines and Middleware into the Application tier. Architects started to understand the benefits of this platform entering the Database tier. This was the first CoolThreads processor to scale past 1 and up to 4 sockets.

By this time, the competition really started to understand that Sun had properly predicted the future of computing. The drive toward single-die Quad-Core chips had started, with Hex-Core Multi-Chip Modules being predicted.


Generation 4: SPARC T3
The market became nervous with Oracle purchasing Sun. The first Oracle-branded CoolThreads SMP-capable SPARC T3 was launched in September 2010.
  • Single die
  • 1-4 Sockets
  • 64 bits
  • 16 integer cores
  • 16 crypto cores
  • 16 floating point units
  • 8 threads/core
  • 1.67 GHz clock speed
  • ??? million transistors
  • 377 mm2
  • 40 nm
  • 2x PCI Express port (2.0 x8)
  • 6 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 4x DDR3 SDRAM Controllers
  • 8? Stage Integer Pipeline per Core
  • 2x 10 GigabitEthernet on-CPU ports
  • Crypto Algorithms: DES, 3DES, AES, RC4, SHA1, SHA256/384/512, Kasumi, Galois Field, MD5, RSA to 2048 key, ECC, CRC32
This processor was more than what the market was anticipating from Oracle. It took all the features of the T2 and T2+ and combined them into the new T3, with an increase in overall features. No longer did the market need to choose between multiple sockets or embedded 10 GigE interfaces - this chip had it all, plus double the cores.

Immediately before this release, the competition was releasing single-die hex-core CPU's, and octal-core CPU's made by gluing dies together into multi-chip modules. The T3 was a substantial upgrade over the competition by offering double the cores on a single die.


Generation 5: SPARC T4
Oracle indicated in December 2010 that they had thousands of these processors in the lab and predicted this processor would be released at the end of 2011.

After the announcement, a separate press release indicated the processors will have a renovated core for higher single-threaded performance, but the socket will offer half the cores.

Most vendors are projected to have 8-core processors available (through Multi-Chip Modules) by the time the T4 is released, but only the T4 should be on a single piece of silicon during this period.


[2010-12 SPARC Solaris Roadmap]
Generation 6: SPARC T5

Some details on the T5 were announced with the T4. The processor will use the renovated T4 core, on a 28nm process, and will return to 16 cores per socket. This may be the first CoolThreads T processor able to scale from 1 to 8 sockets. It is projected to appear in early 2013.

Some vendors are projecting 12-core processors on the market using Multi-Chip Module technology, but when the T5 is released, it should still be the market leader at 16 cores per socket.

Network Management Connection

Consolidating most network management stations in a globalized environment works very well with the CoolThreads T-Series processors. Consolidating multiple slower SPARC platforms onto single- and dual-socket T-Series systems has worked well over the past half decade.

While most network management polling engines will scale nearly linearly with these highly-threaded processors, there are some operations which are bound to single threads. These types of processes include event correlation, startup time, and synchronization after a discovery in a large managed topology.
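
One way to spot such single-thread-bound work on a T-Series system is per-thread microstate accounting; a minimal sketch using stock Solaris tools:

# show the busiest individual threads (LWPs), sorted by CPU;
# a correlation engine pegging one hardware thread near 100%
# will benefit more from faster cores than from more threads
prstat -mL -s cpu 5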

The market will welcome the enhanced T4 processor core and, when it is released, the T5 processor.

Friday, December 3, 2010

Scalable Highest Performing Clusters at Value Pricing




Abstract:
Oracle presented another milestone achievement in their 5 year SPARC/Solaris road map with Fujitsu. John Fowler stated: "Hardware without Software is a Door-Stop, Solaris is the gateway."

High-Level:
The following is a listing of my notes from the two sessions. The notes have been combined, with Larry Ellison outlining the high-level and John Fowler presenting the lower-level details. SPARC T3 is making world-record benchmarks. New T3-based integrated products. Oracle's Sun/Fujitsu M-Series gets a speed bump. SPARC T4 is on the way.

Presentation Notes:


New TpmC Database OLTP Performance
  • SPARC Top cluster performance
  • SPARC Top cluster price-performance
  • (turtle)
    HP Superdome Itanium 4 Million Transactions/Minute
  • (stallion)
    IBM POWER7 Power 780 10 Million Transactions/Minute
    (DB2 clustered through custom applications)
  • Uncomfortable 4 months for Oracle, when IBM broke the Oracle record
  • (cheetah)
    Sun SPARC 30 Million Transactions/Minute
    (standard off-the-shelf Oracle running RAC)
  • Oracle/Sun performance benchmark => ( IBM + HP ) x 2 ! [(10M + 4M) x 2 = 28M, roughly the 30M Transactions/Minute posted]
  • Sun to IBM Comparison:
    3x OLTP Throughput, 27% better Price/Performance, 3.2x faster response time
  • Sun to HP Comparison:
    7.4x OLTP Throughput, 66% better Price/Performance, 24x compute density
  • Sun Supercluster:
    108 sockets, 13.5 TB Memory, Infiniband 40 Gigabit link, 246 Terabytes Flash, 1.7 Petabytes Storage, 1 Quadrillion rows, 43 Trillion transactions per day, 0.5 sec avg response

New Gold Release
  • Gold Standard Configurations are kept in the lab
  • What the customer has, the support organization will have assembled in the lab
  • Oracle, Sun, Cisco, IBM will all keep their releases and bug fixes in sync with releases

SPARC Exalogic Elastic Cloud
  • Designed to run Middleware
  • New T3 processor based
  • 100% Oracle Middleware is Pure Java
  • Tuned for Java and Oracle Fusion Middleware
  • Load-balances with elasticity
  • Ships Q1 2011
  • Based on T3-1B SPARC Compute Blades
    30 Compute Servers, 16 cores/server, 3.8 TB RAM, 960 GB mirrored flash disks, 40 TB SAS Storage, 4 TB Read Cache, 72 GB Write Cache, 40 Gb/sec Infiniband, 10 GigE to Datacenter

SPARC Supercluster
  • New T3 processor based and M processor based
  • T3-2 = 2 nodes, 4 CPU's, 64 cores/512 threads, 0.5 TB RAM, 96 TB HDD ZFS, 1.7TB Write Flash, 4TB Read Flash, 40 Gbit Infiniband
  • T3-4 = 3 nodes, 12 CPU's, 192 cores/1536 threads, 1.5 TB RAM, 144 TB HDD ZFS, 1.7TB Write Flash, 4TB Read Flash, 40 Gbit Infiniband
  • M5000 = 2 nodes, 16 CPU's, 64 core/128 threads, 1 TB RAM, 144 TB HDD ZFS, 1.7TB Write Flash, 4TB Read Flash, 40 Gbit Infiniband

T3 Processor in production
  • Released already, performing in these platforms
  • 1-4 processors in a platform
  • 16 cores/socket, 8 threads/core
  • 16 crypto-engines/socket
  • More cores, threads, 10 GigE on-chip, more crypto engines

T4 Processor in the lab!
  • Thousands under test in the lab, today
  • To be released next year
  • 1-4 processors
  • 8 cores/socket, 8 threads/core
  • faster per-thread execution

M3 Processor from Fujitsu
  • SPARC64 VII+
  • 1-64 SPARC64 VII+ Processors
  • 4 cores, 2 threads/core
  • Increased CPU frequency
  • Double cache memory
  • 2.4x performance of original SPARC64 VI processor
  • VII+ boards will slot into the VI and VII board chassis
Flash Optimization
  • Memory hierarchy with software awareness

Infiniband
  • Appropriate for High Performance Computing
  • Dramatically better performance than Ethernet for linking servers to servers & storage

New Solaris 11 Release

  • Next Generation Networking
    re-engineered network stack
    low latency high bandwidth protocols
    virtualized
  • Cores and Threads Scale
    Adaptive Thread and Memory Placement
    10,000's of cores & threads
    thread observability with DTrace

  • Memory Scale
    Dynamic optimization for large memory configs
    Advanced memory placement
    VM system for 1000's of TB memory configs

  • I/O Performance
    Enhanced NUMA I/O framework
    Auto-Discovery of NUMA architecture
    I/O resources co-located with CPU for scale/performance

  • Data Scale
    ZFS Massive storage for massive datasets

  • Availability
    Boot times in seconds
    Minimized OS Install
    Risk-Free Updates with lightweight boot and robust package dependency
    Extensive Fault Management with Offline failing components
    Application Service Management with restart of failed applications and associated services quickly

  • Security
    Secure by default
    Secure boot validated with onboard Trusted Platform Module
    Role Based Root Access
    Encrypted ZFS datasets
    Accelerated Encryption with hardware encryption support

  • Trusted Solaris Extensions
    Dataset labels for explicit access rules
    IP labels for secure communication

  • Virtualization
    Network Virtualization to add to Server and Storage Virtualization
    Network Virtualization includes Virtual NIC's and Virtual Switches (a command-line sketch follows below)
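
A sketch of what that network virtualization looks like from the Crossbow command line, assuming a physical datalink named net0:

# create a virtual NIC on top of a physical datalink
dladm create-vnic -l net0 vnic0

# confirm the new virtual datalink
dladm show-vnic
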
SPARC Supercluster Architecture
  • Infiniband is 5x-8x faster than most common Enterprise interconnects
    Infiniband has been leveraged with storage and clustering in software
  • Flash is faster than Rotating Media
    Integrated into the Memory AND Storage Hierarchy

SPARC 5 Year Roadmap
  • SPARC T3 delivered in 2010
  • SPARC VII+ delivered in 2010
  • Solaris 11 and SPARC T4 to be delivered in 2011
Next generation of mission critical enterprise computing
  • Engineer software with hardware products
  • Deliver clusters for general purpose computing
  • Enormous levels of scale
  • Built in virtualization
  • Built in Security
  • Built in management tools
  • Very Very high availability
  • Tested with Oracle software
  • Supported with Gold Level standard
  • Customers spend less time integrating, and start delivering services sooner, on systems engineered with the highest performance components

Thursday, November 18, 2010

Simulating OpenView Events With Net-SNMP SNMP Traps




Abstract:
HP OpenView Network Node Manager used to be the industry standard network management tool. Net-SNMP is the standard SNMP stack used on most operating systems, such as Solaris. There is still a need to simulate the OpenView platform during migrations of infrastructure. One common way to simulate the HP OpenView Network Node Manager environment is through the use of the OpenView Events (ovevent) and Net-SNMP SNMP Trap (snmptrap) commands.

OpenView Events
The HP OpenView Node Down Event can be simulated through a guaranteed transport via the "ovevent" command.
ovevent $Severity $NMS \
    .1.3.6.1.4.1.11.2.17.1.0.$Event \
    .1.3.6.1.4.1.11.2.17.2.1.0 Integer 14 \
    .1.3.6.1.4.1.11.2.17.2.2.0 OctetString ${Node}

NMS = IP or Resolvable Name of Network Management Station
Node = IP or Resolvable Name of the Managed Device
Severity = Critical, Major, Minor, Info
Event = 58916865 [OV_Node_Down], 58916864 [OV_Node_Up]
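
For example, filled in with hypothetical values (NMS omserver, managed node router1), a Critical Node Down event would be sent as:

ovevent Critical omserver \
    .1.3.6.1.4.1.11.2.17.1.0.58916865 \
    .1.3.6.1.4.1.11.2.17.2.1.0 Integer 14 \
    .1.3.6.1.4.1.11.2.17.2.2.0 OctetString router1
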
Simulate Using Net-SNMP via SNMP V1 Trap
An SNMP V1 trap can be produced to closely simulate this Node Down event. Note: this is not an exact representation, nor is delivery of the event guaranteed. An SNMP Trap Receiver must receive this.
snmptrap -v 1 -c $Community $NMS \
.1.3.6.1.4.1.11.2.17.1 ${Node} 6 58916865 0


Community = SNMP Community String used on the Network Management Station
Simulate Using Net-SNMP via SNMP V2c Trap
An SNMP V2c trap can be produced to closely simulate this Node Down event. Note: this is not an exact representation, nor is delivery of the event guaranteed. An SNMP Trap Receiver must receive this.
snmptrap -v 2c -c $Community $NMS \
0 .1.3.6.1.4.1.11.2.17.1.0.58916865 \
.1.3.6.1.4.1.11.2.17.2.1.0 i 14 \
.1.3.6.1.4.1.11.2.17.2.2.0 s $Node
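
Filled in with hypothetical values (community public, NMS omserver, node router1), the same trap would be sent as:

snmptrap -v 2c -c public omserver \
    0 .1.3.6.1.4.1.11.2.17.1.0.58916865 \
    .1.3.6.1.4.1.11.2.17.2.1.0 i 14 \
    .1.3.6.1.4.1.11.2.17.2.2.0 s router1
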
Simulate Using Net-SNMP via SNMP V2c Trap Test Tool
An SNMP V2c trap can be produced to closely simulate this Node Down event. Note: this is not an exact representation, nor is delivery of the event guaranteed. An SNMP Trap Receiver must receive this. In the here-document below, the dollar sign in "$T" is escaped so the shell passes it through to snmptest literally, while $Node is still expanded.
snmptest -v 2c -c $Community $NMS:162 <<!
\$T
.1.3.6.1.2.1.1.3.0
t
0
.1.3.6.1.6.3.1.1.4.1.0
o
.1.3.6.1.4.1.11.2.17.1.0.58916865
i
0
.1.3.6.1.4.1.11.2.17.2.1.0
i
14
.1.3.6.1.4.1.11.2.17.2.2.0
s
$Node

!
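
To verify what actually arrives, the Net-SNMP trap daemon can be run in the foreground on the receiving station; a sketch (snmptrapd ships with the same Net-SNMP toolkit, and newer versions may require an authCommunity entry in snmptrapd.conf before traps are logged):

# log received traps to standard output
snmptrapd -f -Lo
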
Conclusion:
Common events from HP OpenView Network Node Manager, the former "gold standard" in Network Management, can be simulated under stock Solaris 10 and Solaris 11 with simple, freely available OS commands.


Monday, November 8, 2010

Graphical User Interfaces: X and Beyond




Abstract:

There has been much discussion lately regarding the core of Desktop User Interfaces. Mark Shuttleworth has been guiding the Ubuntu Community to move toward Wayland. Long-time X community contributor Alan Coopersmith added some clarification of his own regarding X11's long-term stability and future viability. It is healthy to go through the discussion of display systems occasionally, for the newer programmer.

History:

The discussion of desktop systems for UNIX is a recurring theme. This is in no way an exhaustive history, nor is it meant to be 100% accurate in comparing features of one system against another - not all desktops are created equal in capability or sophistication. Many of these systems relied upon X11, while others did not.


Most UNIX systems used TTY for access on early systems.


Terminals leveraged curses libraries for full screen usage. Various UNIX System V platforms built menuing systems upon the curses library through Form and Menu Language Interpreter or FMLI and enhanced the user experience through Framed Access Command Environment or FACE.


Solaris started with SunView, moved to OpenWindows, used NeWS with Display PostScript, and eventually started down the course of converging to a 100% open-source X11 system. The windowing system was based upon olwm and appeared on AT&T UNIX as well as Solaris.


There is a virtualized window manager based upon OpenWindows, called olvwm, which conforms to the OPEN LOOK standard, but Solaris decided to abandon the OPEN LOOK Window Manager, olwm, in a later unification effort.

As X Windows became more popular, some UNIX vendors offered graphical enhancements, such as NCR's XFMLI. Sun received an infusion of cash from AT&T, and AT&T purchased NCR. The use of FMLI within AT&T was phenomenal among its user community, and XFMLI was used by NCR to modernize the desktop without the necessity of changing the investment in FMLI from the System V code base. Solaris even added an FMLI interface to the Live Upgrade OS feature.

Solaris started the process of abandoning FMLI and FACE, the enhanced terminal-based user experience, in the mid 2000's, citing internationalization overhaul as a primary motivation.

A group of vendors aligned against Sun and AT&T (who had standardized on OPEN LOOK) with an alternative GUI referred to as Motif. It was basically a copy of an IBM standard, which IBM followed for OS/2 and Microsoft copied with Windows 3.1. There was an attempted open-source version called Open Motif. This was abandoned in a later unification effort.


NeXT's NeXTstep brought a new level of competition to Sun Solaris. A move was made to converge with OpenStep. An open-source version was attempted with GNUstep. NeXT was founded by Apple co-founder Steve Jobs, and NeXT was later purchased by Apple. With the move to Apple, PDF was used instead of PostScript at the heart of the environment. At this point, the NeXTstep & OpenStep environments were implemented on Apple hardware, from the desktop to the server, laptop, notebook, and handheld environments.


Vendors dug in their heels, in what is now sometimes referred to as the UNIX Wars. Eventually, consensus was reached between most vendors with the consolidation of OPEN LOOK and Motif into the Common Desktop Environment, or CDE. The tools from Sun's original SunView, which had been ported to OPEN LOOK, were ported again, this time using the look and feel of Motif. Solaris has since decided to abandon CDE, in the mid 2000's.



While the UNIX vendors were working towards a new standard desktop, some other desktops were receiving activity. GNOME was a very important project. GNOME was adopted by various Linux vendors as a default desktop. Solaris adopted a variant of GNOME, called the Java Desktop System, as its standard going-forward environment in the mid 2000's.

There was another open source environment competing with GNOME, called KDE. KDE was offered as a secondary option on various Linux vendor desktops. Solaris offered KDE as a freeware add-on.


There was a very forward-looking attempt at an open-source modern desktop environment written in Java by Sun, called Project Looking Glass. The project seemed to die in the mid 2000's, possibly from a threatened lawsuit by Apple. Many features later appeared in MacOSX. Other features were later copied into Windows 7.

Thoughts:

With so much of the Open Systems community based upon remote devices and servers, it seems incomprehensible that mechanisms allowing simple administration (via TTY, FMLI, XFMLI, and X11) would be replaced by multiple levels of complexity (web server, web browser, XML, AJAX, HTML#, CSS, etc.) HTML was basically a static-page serving system which has been hacked together to become more dynamic, but its efficiency is nowhere near TTY or X as far as overhead is concerned.

This being said, there seems to be a drive in this community to move towards a better user experience on the desktop, at the expense of their core-constituency on the server and embedded devices.

  • How much of Looking Glass could be reused?
    (The project focus shifted to Project Wonderland, which is now Open Wonderland.)
  • Wasn't there already a significant effort to build OpenStep that could be leveraged?
  • How much of the GUI and Drivers associated with Darwin under MacOSX are OpenSource and could be leveraged?
Since there is a popular and fairly well documented API [for desktops, mobile, and semi-mobile systems], one might think of taking an older [OpenStep] code base [from, arguably, the most popular user-facing UNIX systems in the world] and making it an excellent option.

Since Java runs everywhere, and it is maintained by major corporations as well as a fledgling open source project, Looking Glass could bring tremendous revolution to the Open Systems Desktop, and make it current with the partially open-source MacOSX as well as the proprietary Windows 7.

Architecture Process Proposal:

If this author were involved in the management of this project, a table of access methods would be built (TTY, X11, Display PostScript, PDF, HTTP/HTML, Direct Frame Buffer), a table of raw features (line, circle, arc, font, cursor, box, etc.), a table of derived features (window, menu, window manager, table widget, etc.), and a meta-language would be designed that is both forwards & backwards compatible across the access methods.

This does not mean that every more complex feature would be supported by a simpler access method, but at least there should be a corollary to most and a universal benefit to all communities. Resources could then be leveraged from the core-constituency of the Open Systems markets, and everyone could take away a benefit to their respective community & commercial company.

Postscript:

By the way, I love X. The older X based applications were always fast in comparison to modern toolkit based X applications. Applications built in X ran phenomenally fast when ported [without the X protocol] to Microsoft Windows [in comparison to native MS developed GUI's.] Developers made a significant mistake by not concentrating on simplicity & speed when generating newer user experience environments. Every generation of desktop from SunView to OpenWindows, CDE, and GNOME became substantially heavier. Working with NeXTstep next to a SunView system made the NeXT platform much more appealing from a performance perspective, as a user.

The lack of decent TTY based GUI interfaces, extended to X Windows, from Open Systems vendors created a problem for system administration of servers, routers, firewalls, storage servers, network switches, wireless access points, kiosks, cash registers, etc. These platforms are the core-constituency of the Open Systems world. All of the vendors need to create proprietary menuing systems because of these holes, while they could be spending that time developing Open Systems instead of code which should be written once.

Companies like Sun, AT&T, NeXT, and Apple capitalized on simplifying the user interface [SunView, OPEN LOOK, NeXTstep, Aqua] in the UNIX world. Newer graphics cards and CPU instruction set enhancements should make our lives EASIER by removing code, instead of adding code, from the supportable code-base. The fact that people are considering re-writing the entire stack of code from the ground up to replace X is a key factor that should tell us that something is deeply wrong with our current thinking, our understanding of history, and our understanding of our current customer base.

Sunday, October 17, 2010

FPing: Options & Tuning



Abstract:

The FPing command offers substantial capability in polling multiple devices asynchronously. FPing is projected to be bundled with Solaris 11, a worthy tool to be added to the Solaris toolkit. There are a lot of command line options, for which various manual pages & help files hold incomplete or conflicting information. This document is an attempt to clarify the options.

FPing Version:

The following illustrates the version of "fping" for which this commentary is written:

sunt2000$ fping -v
fping: Version 2.4b2_to $Date: 2001/01/25 11:25:04 $
fping: comments to
noc@zerohype.com

This version is currently installed via an SVR4 package from sunfreeware and can be downloaded for Solaris 10 here.

Issues Experienced:

A combination of selected command line arguments, total number of devices, and delay in the response from the devices can occasionally cause a crash of "fping" with the error "Arithmetic Exception".

The individual maintaining the fping source code has not been responsive to requests for clarification regarding the various crashes which have been experienced with the package. After working on the crash issue for several weeks, it became necessary to clarify the command line options and publish a short blog entry on the experience.

Command Line Options:

The command line options below were taken from the manual page for the Solaris packaged distribution and augmented with additional comments. Small fonts in parenthesis are original manual page entries, italics represent augmented description.

fping [ options ] [ systems... ]

-a Show systems that are alive.

-A Display targets by address rather than (DNS name) operating system name resolution.

-b n Number of bytes of ping data to send. The minimum size (normally 12) allows room for the data that fping needs to do its work (sequence number, timestamp). The reported received data size includes the IP header (normally 20 bytes) and ICMP header (8 bytes), so the minimum total size is 40 bytes. Default is 56, as in ping. Maximum is the theoretical maximum IP datagram size (64K), though most systems limit this to a smaller, system-dependent number.

-B n In the default mode, fping sends several requests to a target before giving up, waiting longer for a reply on each successive request. This parameter is the value by which the wait time is multiplied on each successive request; it must be entered as a floating-point number (x.y). This is referred to as an Exponential Backoff Factor. The default is 1.5.

-c n Number of request packets to send to each target. In this mode, a line is displayed for each received response (this can be suppressed with -q or -Q). Also, statistics about responses for each target are displayed when all requests have been sent (or when interrupted). The default is 1.

-C Similar to -c, but the per-target statistics are displayed in a format designed for automated response-time statistics gathering. The output display is also called Verbose Mode. For example:
% fping -C 5 -q somehost
somehost : 91.7 37.0 29.2 - 36.8
shows the response time in milliseconds for each of the five requests, with the "-" indicating that no response was received to the fourth request.

-d Use (DNS) operating system name resolution lookup on the address of the returned ping packet. This allows you to give fping a list of IP addresses as input and print hostnames in the output.

-e Show elapsed (round-trip) time of packets.

-f file Read list of targets from a file. This option can only be used by the root user. Not to be used when -g is specified. Regular users should pipe in the file via stdin:
% fping < targets_file

-g Generate a target list from a supplied IP netmask, or a starting and ending IP. Specify the netmask or start/end in the targets portion of the command line.
ex. To ping the class C 192.168.1.x, the specified command line could look like either:
fping -g 192.168.1.0/24
or
fping -g 192.168.1.0 192.168.1.255

-h Print usage message.

-i n The minimum amount of time (in milliseconds) between sending a ping packet to any target (default is 25). This is the ICMP packet sending interval. The poller will move linearly through the list of provided hosts or IP addresses, waiting this interval after sending a packet before sending a packet to the next host or IP in the list. For a large quantity of nodes, this number may need to be reduced to avoid a crash with an "Arithmetic Exception" error. Some networks may drop packets if this is set too low. The maintainer's web site manual page specifies the default value as 10. If this value is critical for your implementation, specify it explicitly.

-l Loop sending packets to each target indefinitely. Can be interrupted with Ctrl-C; statistics about responses for each target are then displayed. May not die in looping mode if the process reading STDOUT is closed abnormally.

-m Send pings to each of a target host's multiple interfaces.

-n Show target by operating system resolved name. Same as -d.

-p n In looping or counting modes (-l, -c, or -C), this parameter sets the time in milliseconds that fping waits between successive packets to an individual target. Useful in unreliable networks for spacing retries further apart from the former attempt. For a large quantity of nodes, increasing this number may help reduce instances of a crash with an "Arithmetic Exception" error. Default is 1000.

-q Quiet. Don't show per-target results, just set final exit status.

-Q n Like -q, but show summary results every n seconds. If this summary happens before all the devices can be polled, an "Arithmetic Exception" error may occur. This value may need to be increased to alleviate this crashing symptom. (A combined tuning example appears after this option list.)

-r n Retry limit (default 3). This is the number of times an attempt at pinging a target will be made, not including the first try.

-s Print cumulative statistics upon exit.

-t n Initial target timeout in milliseconds (default 500). In the default mode, this is the amount of time that fping waits for a response to its first request. Successive timeouts are multiplied by the backoff factor.

-u Show targets that are unreachable.

-v Print fping version information.
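
Tuning Example:

As referenced under -Q above, the following is the sort of combined invocation that avoided the "Arithmetic Exception" crashes in practice; the values are illustrative, not prescriptive:

# 3 counted pings per host, 50 ms between hosts, 2 s between
# successive packets to the same host, quiet summary every 60 s
fping -c 3 -i 50 -p 2000 -Q 60 < target_list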

Thursday, October 14, 2010

Solaris 10: History of Zones


There was a great article with a short history of Zones under Solaris 10, on a version by version basis.
Solaris 10 11/06 (Update 3)
Zone renaming
Zone move and clone
Zone attach/detach
Privileges in zone configuration

Solaris 10 8/07 (Update 4)
Upgrades, Live upgrades (ZULU)
IP Instances (dedicated NIC/separate TCP/IP stack)
Resource setting for memory/CPU

Solaris 10 5/08 (Update 5)
Dry run of zone migration (zoneadm -n)
CPU caps for zones

Solaris 10 10/08 (Update 6)
Update on attach
Default router in shared stack
Full ZFS support for zones

Solaris 10 10/09 (Update 8)
Turbo-Charged SVR4 Packaging
Zones Parallel Patching

Solaris 10 9/10 (Update 9)
Zones P2V (Physical to Virtual)
Host ID Emulation
"Upgrade on attach"
The benefits of Zones are many, but a few include: zero cost, incredible density on a single OS instance (up to 4000 zones), and virtually no overhead.

Solaris Zones are an essential part of any cost effective data center when performing managed services for external customers.
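
As a reminder of how lightweight Zones are to stand up, a minimal sketch (hypothetical zone name nms1, assuming a /zones file system exists):

# define, install, and boot a zone
zonecfg -z nms1 "create; set zonepath=/zones/nms1; commit"
zoneadm -z nms1 install
zoneadm -z nms1 boot

# attach to its console for first-boot configuration
zlogin -C nms1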

Solaris 11 Express 2010.11 and ZFS



Abstract:
There has been much discussion over the years regarding the discontinuance of the Solaris Express distribution, the creation of the OpenSolaris distribution, the discontinuance of the OpenSolaris distribution, and ultimately the re-announcement of the Solaris Express distribution. More information regarding Solaris Express has been leaked.

Designated Release:
It appears that there will be a release of Solaris Express in the next month or so, designated as Solaris 11 Express 2010.11!

RAID-Z/Mirror Hybrid Allocator:
A mirrored layout across the children of a RAID-Z vdev ensures latency-sensitive metadata can be read in a single I/O. See the bug 6977913 reference.

ZFS "zdb" Enhancements:
Support for decompression, checksumming, and RAID-Z in "zdb" has been requested since 2008-Q4. See the bug 6757444 reference.

ZFS File System Listing Enhancement:
Performance has been improved for listing ZFS file systems.

ZFS Encryption:
During 2003, a request was made to include encryption in ZFS, a welcome addition for laptops. The bug-id for this feature is 4854202.
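
In Solaris 11 Express, the feature surfaces as a dataset property; a minimal sketch, assuming a pool named tank:

# create an encrypted dataset; a passphrase is prompted for by default
zfs create -o encryption=on tank/secrets

# confirm the property
zfs get encryption tank/secrets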

Future Solaris 11 Support:
Solaris 11 is expected to power the new generation of integrated systems from Oracle, including the Exadata X2-8 database machine and the Exalogic cloud-in-a-box!

Summary:
There is more than this in the next release to come, but this gives us a clear direction of where we are and where we are going!

Tuesday, October 5, 2010

US Department of Energy: No POWER Upgrade From IBM



Abstract:

Some say no one was ever fired for buying IBM, but no government or business ever got trashed for buying SPARC. The United States Department of Energy bought an IBM POWER system with no upgrade path and no long term spare parts.


[IBM Proprietary POWER Multi-Chip Module]

Background:

The U.S. Department of Energy purchased a petaflops-class hybrid blade supercomputer, the IBM "Roadrunner", that performed into the multi-petaflop range for nuclear simulations at the Los Alamos National Laboratory. It was based upon the IBM Blade platform. Blades were based upon an AMD Opteron and hybrid IBM POWER / IBM Cell architecture. A short article was published in October 2009 in The Register.

Today's IBM:

A month later, the supercomputer was not mentioned at the SC09 Supercomputing Trade Show in Oregon, because IBM had killed it. Apparently, it was killed off 18 months earlier - what a waste of American tax payer funding!

Tomorrow's IBM:

In March 2010, it was published that IBM gave its customers (i.e. the U.S. Government) three months to buy spares, because future hybrid IBM POWER / Cell products were killed. Just a few months ago, IBM demonstrated its untrustworthiness with its existing Thin Client customers and partners by abandoning its thin client partnership and using the existing partner to help fund IBM's movement to a different future thin client partner!



Obama Dollars:

It looks like some remaining Democratic President Obama stimulus dollars will be used to buy a new supercomputer from Cray and a cluster from SGI. The mistake of buying IBM was so huge that it took a massive spending effort from the Federal Government to recover from losing money on proprietary POWER.

[Fujitsu SPARC64 VII Processor]

[Oracle SPARC T3 Processor]
Lessons Learned:
If only the U.S. Government had not invested in IBM proprietary POWER, but had chosen an open CPU architecture like SPARC, which offers two hardware vendors: Oracle/Sun and Fujitsu.

[SUN UltraSPARC T2; Used in Themis Blade for IBM Blade Chassis]

Long Term Investment:

IBM POWER is not an open processor advocated by other systems vendors. Motorola abandoned the systems market for POWER from a processor production standpoint. Even Apple abandoned POWER in the desktop & server arenas. One might suppose that when IBM kills a depended-upon product, one could always buy video game consoles and place them in a lights-out data center, but that is not what the Department of Energy opted for.

Oracle/Sun has a reputation of providing support for systems a decade old, and, if necessary, Open SPARC systems and even blades for other chassis can be (and are) built by other vendors (i.e. Themis built an Open SPARC blade for an IBM Blade chassis.) SPARC processors have been designed & produced by different processor and system vendors for over a decade and a half. SPARC is a well proven long term investment in the market.

Network Management Connection:

If you need to build a Network Operation Center, build it upon the infrastructure the global telecommunications providers have trusted for over a decade: SPARC & Solaris. One will not find serious network management applications on IBM POWER, so don't bother wasting time looking. There are reasons for it.

Monday, October 4, 2010

DTrace: Managing Applications in Modern Operating Systems



Abstract: Modern operating systems often require many commands at different layers in the software stack in order to debug issues. DTrace changes this and provides a secure single interface to investigate nearly every layer in the software stack in a running production system, where developers can even create their own hooks for when they may need additional future insight.
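
As a small taste of that single interface, a classic DTrace one-liner that aggregates system calls by process on a live system (requires DTrace privileges):

# count system calls per executable until Ctrl-C
dtrace -n 'syscall:::entry { @[execname] = count(); }'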

Video interview on DTrace:
http://blogs.sun.com/video/entry/dtrace_for_system_administrators_with

PDF presentation on DTrace:
http://blogs.sun.com/brendan/resource/OOW2010_DTrace.pdf

The DTrace book:
http://www.amazon.com/dp/0132091518

Disk monitoring using DTrace:
http://www.princeton.edu/~unix/Solaris/troubleshoot/diskio.html
http://developers.sun.com/solaris/articles/dtrace_tutorial.html
http://wikis.sun.com/display/DTrace/io+Provider

Least Privilege with DTrace - No root required:
http://www.sun.com/bigadmin/features/articles/least_privilege.jsp

Saturday, September 18, 2010

Linux: Root Exploit Briefly Closed Finally Resolved



Security Alert - Upgrade Linux Systems Again...

Another Linux root exploit found last decade, which had been closed only briefly, has finally been resolved.

The Linux kernel has been purged of a bug that gave root access to untrusted users – again.

The vulnerability in a component of the operating system that translates values from 64 bits to 32 bits (and vice versa) was fixed once before – in 2007 with the release of version 2.6.22.7. But several months later, developers inadvertently rolled back the change, once again leaving the OS open to attacks that allow unprivileged users to gain full root access.

There are a lot of production systems which have been compromised by this defect over the past half-decade.

Network Management

Let's hope that affected systems are not running mission critical systems in your managed services environment that connect to tens of thousands of customer devices in a Network Management environment. It means another hit on availability, taking down the systems for yet another upgrade.

Tuesday, September 14, 2010

Microsoft Windows: The Target of Nearly All Malware





The Known

A short article from The Register delineates the concern for data centers.

The vast majority of malware - more than 99 per cent - targets Windows PCs, according to a new survey by German anti-virus firm G-Data.

G-Data reckons 99.4 per cent of all new malware of the first half of 2010 targeted Microsoft’s operating system.

Some would suggest that this is not news, but an understanding of what is already known.

The Predicted

With Windows viruses and worms on the increase, the trend is expected to get much worse.
G-Data reckons the rate of virus production in 1H10 is 50 per cent up from the same period last year. It predicts 2010 as a whole will witness two million malware samples.
What is a Network Operation Center to do?

The Smart Road

Any reasonable Network Operation Center knows that exposing Microsoft Windows directly to other customer networks and directly to other supplier networks is an incredible security risk.

There are ways to mitigate that risk.
Deploying Microsoft Windows on customer and supplier facing networks is a risk that should be avoided when nearly all of the risk can be eliminated by deploying another operating system.

If mission critical software has to run in a business, the software runs under Solaris.

Thursday, September 9, 2010

Solaris: Developer Licensing Update


The News

The developer usage license for Oracle Solaris, Oracle Solaris Cluster, and Oracle Solaris Express has been clarified by Oracle:
we grant you a perpetual (unless terminated as provided in this agreement), nonexclusive, nontransferable, limited License to use the Programs only for the purpose of developing, testing, prototyping and demonstrating your applications
The Olds

Gone is the old 90 day clause, inserted after Oracle bought Sun.

Network Management

If you are building, modeling, or testing Network Management applications, Solaris is a good Operating System to continue using!

Being able to run VirtualBox with Solaris Crossbow, there is virtually nothing that you cannot simulate when building, testing, and demonstrating Network Management Systems!
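
For instance, an isolated test network for simulated devices can be wired up entirely in software with Crossbow's dladm (a sketch, assuming Solaris 11 Express or OpenSolaris):

# an internal virtual switch with no physical NIC attached
dladm create-etherstub simnet0

# one VNIC for the management station, plus VNICs for simulated devices
dladm create-vnic -l simnet0 nms0
dladm create-vnic -l simnet0 dev0
dladm create-vnic -l simnet0 dev1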

ZFS: NetApp and Oracle Agree to Dismiss Lawsuits



Background:
There has been a lot of Fear, Uncertainty, and Doubt spewed around by marketing droids about the use of ZFS because of lawsuits filed between NetApp and Sun.

Announcement:
Since the purchase of Sun by Oracle, this seems to have come to an end. Both NetApp and Oracle agreed to dismiss lawsuits and people can use open-source ZFS free and clear of legal wranglings.

Network Management:
Performance management typically requires immense quantities of space to hold historical performance metrics of devices and communication links. ZFS is the best tool on the market to do this efficiently, cost effectively, and securely.

Now, there should be no inhibitions to making network management business run less expensively and more efficiently.

Wednesday, September 8, 2010

Solaris 10 Update 9 Is Here!




There are A LOT OF UPDATES in this latest version of Solaris 10, many of which will drive people to retrieve the free download!

Get Your Summary...

Some people want a management level view, want to read the document, or just want a technical take on it. Here is your selection:
Network Management Angle

From a Network Management perspective, there are a few features which look very interesting:
  • SPARC Solaris Install Time Updates

    Vendors will now be able to provide SPARC device drivers separate from regular releases.

    I personally wonder whether this will be beneficial to ISV communities to provide integration software on top of SPARC Solaris in the TelCo Market, so Network Management platforms can be delivered "on a disk" with minimal configuration.

  • Zone Virtualization Updates

    Upgrading zones by attaching them to an upgraded Global Zone gets better support: update a host in a cluster, move the zones to that new host, and run "zoneadm attach -U" to update those zones which were newly attached to the global zone (see the sketch after this list).

    Migration of a physical Solaris 10 machine into a Zone with support for the HostID in Update 9 will allow more network management platforms to be virtualized while still retaining their licensing features.

  • Oracle VM for SPARC (LDOM's) Update

    For Network Management platforms not certified to run in Zones (there are a few system calls which are not available), these updates will be of interest to you.

    People should avoid vendors which take business down this route because they refuse to support Zones.

  • ZFS Enhancements

    There are substantial enhancements to the ZFS subsystem: Triple Parity RAID-Z is now available, log device performance tuning [especially for databases], log device removal, mirror splitting & cloning, and recovery tools for power-off crashes of systems using cheap drives (which don't really report reality when a transaction is supposed to be committed.) [This last category is especially useful for dealing with large performance management data sets cost-effectively.]

  • iSCSI Enhancements

    Some more tuning options, performance enhancements, and remote booting capabilities. Once again, great for Network Management, especially deploying remote Solaris probes.

  • Crypto Enhancements

    The AES encryption engine in newer Intel processors is now supported, providing similar functionality in Intel Solaris to the crypto accelerator support traditionally only seen in the UltraSPARC T processor family under SPARC Solaris.

  • Power Management Enhancements

    Power Management enhancements in select DRAM chips, RAID cards, and the overall OS under the Intel architecture. This makes your network management center cooler with Solaris.

  • Enhanced Hardware Support

    Enhancements for HP and Dell platforms for Intel Solaris. Additional gigabit ethernet card support. A variety of Infiniband enhancements. Fault Management Support for newer AMD processors.

  • Freeware Enhancements

    Newer versions of Firefox, Thunderbird, and other applications. Configuring Network Management devices from older web browsers is often difficult; newer web browsers are ALWAYS welcome!

    Oracle SHOULD take the Solaris Web Client more seriously. A Web Browser is a SECURITY item, not a "freeware" item. Configuring thousands of remote devices from a Windows platform is a risk that a business should never take, since Windows spyware could be capturing information critical to securing a network infrastructure. Moving this type of work to Solaris infrastructure is a MUCH safer alternative than using an MS Windows web browser.
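
As referenced in the zone virtualization item above, a hypothetical migration of a zone named webzone to an upgraded host might look like the following (names and paths are illustrative; the update-on-attach flag is per the release notes above):

# on the old host: halt and detach the zone
zoneadm -z webzone halt
zoneadm -z webzone detach

# transfer the zonepath to the new host (e.g. via zfs send/receive)

# on the upgraded host: attach with update, then boot
zoneadm -z webzone attach -U
zoneadm -z webzone boot
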
Download your latest version of Solaris 10 Update 9 today!

Happy Network Managing!

Monday, September 6, 2010

Shake Up at Oracle




Hurd is In,

Mark Hurd, a former CEO of a unified NCR & Teradata, who later went on to the systems giant HP, has now arrived at a company which is a unified Data Warehousing and Systems giant... but Oracle is an Applications giant as well.

Hurd experienced an odd scandal at HP involving an assistant who had formerly appeared in not-so-savory movies. The accusations are not well disclosed. The sexual-harassment policy was not violated, but apparently HP's standards of business conduct were.


Phillips is Out

Phillips apparently expressed his desire to depart from Oracle after a scandal of his own: over 8 years of cheating on his wife, exposed on billboards across the United States. An adulterer spurned by an adulteress.


Network Management Connection

Bringing a (possibly) strong systems CEO from HP to Oracle could be good or bad for the Network Management arena.

While at NCR, Hurd had presided over a company which resold Solaris and SPARC systems and briefly resold Intel Solaris.

While at HP, Hurd's company had a relationship with Sun with reselling Solaris. That relationship was recently renewed.

While Hurd presided over Teradata at NCR, a port of Teradata from 32-bit Intel NCR UNIX to 64-bit SPARC Solaris or 32-bit Intel Solaris was never realized, as some had speculated or hoped.

SPARC RISC has traditionally been a strong player in the Telecommunications Industry, but HP has traditionally been a company to eliminate various competing RISC architectures internally.

It will be most curious what changes happen at Oracle in regard to the Sun SPARC and Solaris acquisitions, considering the history of the new President. With Solaris and SPARC being the traditional core of real network management applications, the impact may or may not be significant.

Thursday, September 2, 2010

Need a Helping ARM?


I wish there were activity around porting Solaris to ARM for this gadget...