Thursday, December 29, 2011

Solaris 11: A Cloud in a Box!

The computing industry began with resources centralized on single, large computing platforms. The microprocessor brought computing power into the hands of individuals in homes and offices, but information was still centralized in each location. The creation of the Internet allowed for the sharing of information between homes and offices around the globe. Reliable server and telecommunications infrastructure was required to make it work, and applications were somewhat limited to a handful of standard Internet protocols, such as HTTP. Cloud Computing has been coming of age over the past number of years, driving custom applications toward proprietary API's to move more applications onto the Internet, but this is quickly changing as operating system vendors include more robust virtualization. Cloud Computing is really about the virtualization of Internet infrastructure, including servers, to the point where the pieces do not have to reside on the Internet, nor in an office, nor split between the two - they can reside anywhere, including entirely in a laptop. Solaris 11, the first Cloud Operating System, offers the ability to virtualize everything, from entire data centers across thousands of platforms, to thousands of platforms virtualized on a laptop.

Simulating The Cloud: A Practical Example

Joerg M., an Oracle employee and publisher of C0T0D0S0, discusses Solaris 11 and some of its features, and demonstrates the building of a cluster of virtual data centers within a single operating system instance. Anyone who runs a data center should consider reviewing the article to better comprehend what a "Cloud" could and should be.

It should be noted that the "create-simnet" and "modify-simnet" subcommands are formally undocumented, but they are documented in the released OpenSolaris source code and leveraged in various derived Open Source branches. One of the most important of these is the Joyent SmartOS cloud operating system distribution.
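For reference, a minimal sketch of how the simnet subcommands are invoked. Since simnet support in "dladm" is undocumented, the exact syntax here is based on the OpenSolaris source and may vary between builds; the link names are illustrative:

```shell
# Create two simulated ethernet links and peer them together.
# Undocumented dladm subcommands -- syntax may differ between builds.
dladm create-simnet sim0
dladm create-simnet sim1
dladm modify-simnet -p sim0 sim1   # attach sim0 as sim1's peer
dladm show-simnet                  # list simulated links and their peering
```

Once peered, traffic sent down one simnet arrives on the other, which is how Joerg's virtual routers are wired together without any physical NICs.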

Not Included, but Not Out Of Scope

What is not included in Joerg's example are actual systems on the edges of the cloud. Adding them is actually simpler than adding the virtual routers which were created: add virtual interfaces, virtual systems, databases to virtual systems, middleware to virtual systems, applications to virtual systems; add bandwidth & latency limitations to WAN links, add port limitations to virtual firewalls, etc.

Why Go Through the Exercise?

Once someone builds the entire datacenter "in the box", creation of the real data center becomes trivial. But why does this matter?
  • For the first time, real test environments can be simulated, soup-to-nuts, in an inexpensive way. There is no charge for virtualization in a Solaris world.
  • Costs can be reduced by placing all development systems into a couple of "clouds" for virtually any number (Solaris supports over 4000 zones on a single OS instance) of applications
  • Movement of an application from development to test is as easy as cloning a Zone and instantiating the Zone on a Test platform.
  • Costs can be reduced by placing all test systems into a couple of clouds for virtually any number of applications
  • Deploying tested application is as easy as instantiating the cloned test Zone on a production system
  • Disaster recovery is as easy as instantiating the Zone on the dead physical system onto a physical system in an alternate data center.
  • Deploying production applications into a cloud is as easy as backing up the application and restoring it into the cloud - not to mention bringing it back.
  • The interactions of the application with Firewalls, WAN's and LAN's are all well understood, with everything being properly developed and tested, making each production deployment seamless
The effort, with a step-by-step process, will ensure that there are no missed steps in bringing virtualization to a business.
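The dev-to-test promotion described in the list above can be sketched with Zone cloning; the zone names are illustrative:

```shell
# Clone a configured dev zone into a test zone on the same platform.
zonecfg -z testapp "create -t devapp; set zonepath=/zones/testapp"
zoneadm -z testapp clone devapp    # ZFS-backed zones clone in seconds
zoneadm -z testapp boot

# Moving a zone to another platform uses detach/attach instead:
zoneadm -z devapp detach
# ... transfer the zonepath to the target system, then ...
zoneadm -z devapp attach
```

The same clone/detach/attach pattern covers the test-to-production and disaster recovery bullets as well.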

Implications to Network Management

The world is slowly exiting the physical realm, and Network Management is no longer about monitoring edge routers and links - it is about monitoring virtualized infrastructure. Orchestration is all about automated deployment, and cloud providers are getting better at this. The missing piece of this puzzle is robust SNMP management of everything. The creation of network management infrastructure needs to happen in the development clouds first, then the test clouds, so that when the jump to production is complete, the management infrastructure has already been developed and tested alongside the applications.
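A management-station sketch of polling the virtualized infrastructure, assuming the Net-SNMP tools and a standard read community; the hostname and community string are illustrative:

```shell
# Poll a virtualized host's system and interface tables via SNMP.
snmpwalk -v2c -c public devcloud-host1 SNMPv2-MIB::system
snmpwalk -v2c -c public devcloud-host1 IF-MIB::ifDescr   # one row per VNIC
```

Virtual NICs appear in IF-MIB just as physical interfaces do, so existing SNMP pollers largely carry over to the virtualized world.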

Tuesday, December 27, 2011

JavaScript Tab Update


JavaScript, formerly known as LiveScript, standardized as ECMAScript, is a language originally used on both client and server web platforms. Through unfortunate historical vendor interactions, server side usage of JavaScript became less common. With the advent of JavaScript engines, which can be decoupled from the client browser, JavaScript became usable on the server side for independent projects.

Netscape Communications brought to market some of the first widely adopted web clients (HTTP browsers) and web servers. One of Netscape's key technologies was called LiveScript - a client- and server-side technology to bring automation and communication to the browser and server suite. With the advent of Sun Microsystems' cross-platform Java language, Java quickly became a hit, and LiveScript was rebranded JavaScript. Microsoft soon released its own web client & server platform, but only included a somewhat compatible client-side JavaScript, creating a proprietary language on the server side and leaving JavaScript to become less common on the server. The industry quickly figured out that Microsoft's half-baked implementation was bad for the web, and standardization soon occurred through ECMAScript. Sun's Java, with its cross-platform capability, quickly became the language of choice on the server, while JavaScript became the language of choice on the client. The battle for the fastest web browsers created teams of developers building JavaScript Engines, which could be decoupled from the web client. With the advent of decoupled JavaScript engines, developers started the movement back to server-side JavaScript. NodeJS is a recent server-side, non-blocking JavaScript framework. NodeJS is based upon the Google V8 engine, which unfortunately only works on a subset of known server architectures and operating systems.

The following are recent resource changes to the JavaScript tab on the Network Management blog.

NodeJS Specific Developments
[html] Server side JavaScript Engine: Node.JS
[html] Community Support for NodeJS
[html] Internal Developers List for NodeJS
[html] X11 Client Implementation under NodeJS
[html] X11 "nwm" window manager
[html] XCB directly rendered by node-canvas
[html] Google V8 JavaScript Engine

JavaScript Engines
[html] Mozilla SpiderMonkey (Various platforms)
[html] Mozilla Tamarin (Various platforms)
[html] Mozilla Rhino (Java based JavaScript engine)
[html] Google V8 JavaScript Engine (Intel, ARM)
[html] Mozilla JaegerMonkey
[html] Apple WebKit Nitro (SquirrelFish Extreme)
[html] Opera Presto

Wednesday, December 21, 2011

Solaris 10: SSH and Forwarding HTTP


When Sun first produced systems, the common way for users to move around a network and to distribute workload was to leverage the Berkeley "r" tools, such as "rsh", "rlogin", "rexec", etc. under Solaris. As academic environments became professional, security concerns were raised over passwords being passed in the clear, and SSH was born. SSH was built as a compatible superset of "rsh", but this compatibility was later removed with the second version of the protocol. This document discusses the implementation of SSH under Solaris.

Global Configurations

SSH uses several global configuration files: one for the client, and another for the server. Each of these config files documents the compiled-in defaults under Solaris. The "ssh" client global configuration can be tailored on a per-user basis, while the "sshd" server global configuration is managed at the global level.
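The files themselves live in predictable locations under Solaris; a quick way to review them (the per-user file is optional and overrides the global client file):

```shell
more /etc/ssh/ssh_config     # global "ssh" client defaults
more /etc/ssh/sshd_config    # global "sshd" server configuration
more $HOME/.ssh/config       # optional per-user client overrides
```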

SSH Server Daemon

Under Solaris 10 and above, and related OS's, SSHD is started through the services infrastructure.

sunserver/user$ svcs ssh
online Aug_17 svc:/network/ssh:default
There are built-in compiled defaults and global defaults, which are reviewed upon startup and upon each connection.

Start a Session with X and HTTP Forwarding

For demonstration purposes, there may be the need to temporarily open an X Console (to install an Oracle Database) and forward HTTP ports (to test an application) on a platform in a DMZ. The sample command may look like this:

sunclient/user$ ssh user@sunserver -b \
-L 58080: -L 8080: -g
Since the ports to be forwarded are over 1024, there is no requirement for special "root" permissions. The proxied HTTPD connections can be observed.

sunclient/user$ netstat -an | grep 8080
*.58080 *.* 0 0 49152 0 LISTEN
*.8080 *.* 0 0 49152 0 LISTEN
To perform a basic test of the forwarded HTTP port, the classic "telnet" can be used on the command line, but the connection is closed.

sunclient/user$ telnet localhost 58080
Connected to localhost.
Escape character is '^]'.
Connection to localhost closed by foreign host.
Note the error on the remote side.

channel 5: open failed: administratively prohibited: open failed
This is a configuration issue.

Global SSHD Configuration

Under Solaris 10, port forwarding is disabled as a compiled-in default, which is documented in the global configuration file. If one makes a connection via SSH and proxies a port, an error message will be produced upon the first connection attempt to the proxied port.

To allow for the port forwarding, edit the configuration file "/etc/ssh/sshd_config".

AllowTcpForwarding yes
GatewayPorts yes
X11Forwarding yes
After restarting the "sshd" service, the administrative message disappears.

sunserver/root# svcadm restart ssh

Your HTTP and X Window System port forwarding will now work for ad-hoc tasks.

Tuesday, December 20, 2011

Solaris Tab - Secure Deployment of LDom's or VM Server for SPARC


An Oracle White Paper, Secure Deployment of Oracle VM Server for SPARC, was added to the Solaris Tab on Network Management.

Solaris Reference Material
2011-01 [PDF] Secure Deployment of LDom's or VM Server for SPARC

Solaris LDoms / Oracle VM Server for SPARC
Secure Deployment of LDoms or Oracle VM Server for SPARC

Monday, December 19, 2011

SPARC T4: Optimizing with Oracle VM Server for SPARC

Modern computing systems found their footing through the history of computing, and some companies and architectures influenced the modern computer more than others. One such company was Sun Microsystems, which found its way into Oracle. Oracle released their latest processor, the SPARC T4, with a dynamic new capability: the ability to process two different kinds of workloads, selectable via virtualization technology.

Processor History:

In 1985, Sun Microsystems produced their first Sun-3 workstations and servers based upon the 32 bit CISC Motorola 68000-series processor. In 1987, Sun Microsystems produced their first Sun-4 workstations and servers based upon the 32 bit RISC SPARC processor. In 1995, Sun Microsystems produced their first UltraSPARC system based upon the 64 bit RISC UltraSPARC processor. In 2002, Sun Microsystems acquired Afara Web Systems, with a new high-throughput SPARC design. In 2005, Sun Microsystems released their first server (no desktops) based upon the UltraSPARC T1 processor, which was tuned for multi-threaded workloads. Oracle, who made their fortunes primarily from software upon SPARC, acquired Sun Microsystems and released their first server (no desktops) in 2010, based upon the SPARC T3. Oracle released the SPARC T4 in 2011, supporting both multi-threaded and single-threaded workloads.

Workload History:

The workloads on SPARC processors were traditionally single-threaded from the early years. With the advent of RISC processors, the concept of reduced complexity allowed for increased clock speed and thus increased single-threaded performance. With the investment from AT&T and the merger with SVR4, Solaris experienced an expansion of multi-threaded workloads. When SGI purchased Cray Research, Sun Microsystems purchased the Cray Superserver 6400 business, turning massive high-speed single-threaded capability into massive multi-threaded workload throughput of 64 threads via racks of equipment.

With the release of the UltraSPARC T1, Sun Microsystems managed to shrink 32 threads of slower integer and crypto capacity not only into a single socket, but onto a single piece of silicon, delivering outstanding aggregate capacity. With the subsequent release of the T2 processor, 64 threads were merged onto a chip. While the throughput of the T processors was equivalent to racks of equipment, the single-threaded performance was a decade behind.

Workload Selection:

With the release of the Oracle SPARC T4 processor, a system can now be tuned to support single or multi-threaded workloads via Oracle VM Server for SPARC release 2.1, previously known as Logical Domains or LDom's.

The short tuning white paper from Oracle describes:
This paper describes how to use the Oracle VM Server for
SPARC 2.1 CPU threading controls to optimize CPU performance
on SPARC T4 platforms. CPU performance can be optimized for
CPU-bound workloads by tuning CPU cores to maximize the
number of instructions per cycle (IPC). Or, CPU performance
can be optimized for maximum throughput by tuning CPU cores
to use a maximum number of CPU threads. By default, the CPU
is tuned for maximum throughput.
During the provisioning of a Logical Domain or VM under SPARC, the provisioner can choose the workload optimization required. This can be performed during ["add-domain"] or after ["set-domain"] provisioning.
ldm add-domain [mac-addr=num] [hostid=num]
[threading=max-throughput|max-ipc] ldom

ldm set-domain [mac-addr=num] [hostid=num]
[threading=max-throughput|max-ipc] ldom
The "threading" parameter defines the workload. The options from the white paper are defined as follows:

  • max-throughput.
    Use this value to select the threading mode that maximizes throughput. This mode activates all threads that are assigned to the domain. This mode is used by default and is also selected if you do not specify any mode (threading=).

  • max-ipc.
    Use this value to select the threading mode that maximizes the number of instructions per cycle (IPC). When you use this mode on the SPARC T4 platform, only one thread is active for each CPU core that is assigned to the domain. Selecting this mode requires that the domain is configured with the whole-core constraint.
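Putting the two options together, a hedged example of switching an existing domain to single-thread optimization; the domain name "dbdom" and core count are illustrative:

```shell
# Satisfy the whole-core constraint, then select single-thread optimization.
ldm set-core 4 dbdom                      # assign 4 whole cores to the domain
ldm set-domain threading=max-ipc dbdom    # one active thread per core
ldm list -o core dbdom                    # verify the core assignment
```

Reverting to throughput mode is simply "ldm set-domain threading=max-throughput dbdom", which reactivates all threads on the assigned cores.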

Sunday, December 18, 2011

Solaris Tab - SPARC T4 Workload Optimization


A new Oracle White Paper, Tuning the SPARC CPU to Optimize Workload Performance on SPARC T4, was added to the Solaris Tab on Network Management.

Solaris Reference Material
2011-09 [PDF] Tuning to Optimize Workload Performance on SPARC T4

Friday, December 16, 2011

Oracle Ops Center 11g Release 1 Update 3


Datacenters have long struggled with the lifecycle management of servers on a massive scale. Sun Microsystems addressed this concern with their N1 product line, which was later re-branded xVM with the additional consolidation of hypervisors. With the acquisition of Sun by Oracle, the hypervisors were broken out, and Ops Center was placed under the umbrella of Oracle Enterprise Manager.

Ops Center History:
Ops Center has a long history, with features consolidated from many startups and industry players, now unified under Oracle.
2001-10-26 [html] Terraspring Startup
2002-09-19 [html] Pyrus acquisition announced
2002-11-02 [html] Sun acquired Pyrus for virtualization
2002-11-15 [html] Sun acquires Terraspring for heterogeneous system automation
2003-07-03 [html][html] Sun acquired CenterRun for application automation
2003-12-04 [html] Sun releases N1 Service Provisioning System
2005-05-03 [html] Sun augments N1 Provisioning System with N1 System Manager
2007-11-16 [html] First Internet Archive capture
2007-12-04 [html] Sun announces xVM Ops Center and open-sourcing to
2008-05-28 [html] Sun xVM Ops Center 1.1.1 GA
2009-01-27 [html] Sun xVM Ops Center 2.0 GA
2009-02-27 [html] Final Internet Archive OpenxVM capture
2010-01-22 [html] Oracle xVM Ops Center GA

Upcoming Release:
Oracle Enterprise Manager Ops Center 11g Release 1 Update 3 is about to be released. The upgrade documentation is now available; packages are soon to follow.

Thursday, December 15, 2011

From SunOS through Solaris to Illumos


Don't miss this slide show from Joyent

Wednesday, December 7, 2011

UNIX/Linux Vocabulary Building

The UNIX/Linux environment is a rich collaboration of tools, tricks, and jokes built by generations of users with widely varying levels of ability. While basic competence is achievable within a short period of time (i.e. "Just 5-10 years to learn the rules and only a couple of lifetimes to master."), it's easy to become dependent on a few commands when other interesting or more suitable tools are readily available:

# find / | grep ifconfig
$ whereis ifconfig

$ man ls
$ pinfo ls

In this spirit I recommend the following links (not surprisingly, Dave's favorite AWK is listed in both).

Reddit thread:
Give Me That One Command You Wish You Knew Years Ago

Beware the spelling errors:
Advanced Unix Commands

Tuesday, December 6, 2011

Revisited: Oracle Database Licensing


Oracle licenses its RDBMS by several factors, typically the Standard License (by socket) and an Enterprise License (by core scaling factor). Occasionally, hardware and operating system vendors will enhance their offerings, requiring database vendors to revisit and expand their legal categorizations for licensing. Oracle's guiding documents are readily available on-line.

Reason for Revisit:
Sun had produced several virtualization technologies by the time Oracle purchased them. One particular virtualization technology, "LDoms" (short for Logical Domains), renamed "Oracle VM Server for SPARC", has been added to the list of approved Hard Partitioning technologies.

Partitioning - Topic: Server/Hardware Partitioning
The Oracle Partitioning guide now approves of LDoms or Oracle VM for SPARC as a Hard Partitioning technology.
Oracle has deemed certain technologies, possibly modified
by configuration constraints, as hard partitioning, and no
other technology or configuration qualify. Approved hard
partitioning technologies include: Dynamic System Domains
(DSD) -- enabled by Dynamic Reconfiguration (DR), Solaris 10
Containers (capped Containers only), LPAR (adds DLPAR with
AIX 5.2), Micro-Partitions (capped partitions only), vPar,
nPar, Integrity Virtual Machine (capped partitions only),
Secure Resource Partitions (capped partitions only), Static
Hard Partitioning, Fujitsu’s PPAR, Oracle VM Server for SPARC.
Oracle VM Server for x86 can also be used as hard partitioning
technology only as described in the following document
Implications for Network Management:

With the current SPARC T4 systems, this becomes more important for Managed Services environments, where Service Provider licenses are required in order to perform external services with an Oracle RDBMS. Being able to limit the number of cores on a new quad socket SPARC T4-4 system offers a lot of flexibility - especially when performance characteristics are similar to 8 socket POWER7 and 32 socket SPARC64 VII platforms.

Most network management software is available under SPARC and few packages are available under POWER, yet there has been a movement towards POWER over the past few years, specifically for databases. This is the natural time to simplify architectures and re-consolidate those Oracle Databases back onto the SPARC Network Management platforms. Why introduce the complexities of firewalls, multiple architectures, multiple code bases, multiple reboot windows, multiple maintenance windows, and overcomplicated D-R procedures when it is cheaper to put it all back on a new low-end SPARC platform - and it can be made even less expensive by introducing virtualization technologies like [Oracle VM Server for SPARC] LDoms and [CPU Capped] Zones?
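A sketch of the capped Zone configuration that the licensing document refers to; the zone name and CPU cap are illustrative:

```shell
# Cap the zone's CPU consumption so only the capped capacity is licensed.
zonecfg -z oradb
zonecfg:oradb> add capped-cpu
zonecfg:oradb:capped-cpu> set ncpus=8
zonecfg:oradb:capped-cpu> end
zonecfg:oradb> verify
zonecfg:oradb> commit
zonecfg:oradb> exit
```

Note that only capped Containers qualify as hard partitioning in the Oracle guide quoted above; an uncapped zone would be licensed for the full platform.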

Monday, December 5, 2011

Small Solaris


Solaris has traditionally been an operating system designed to run in a small footprint. Early Sun workstations, like the Sun 3/50, required only 4 Megabytes of RAM. Memory requirements continued to grow with each operating system release. Solaris 10 was the last commercial Solaris release to support 128Meg of RAM, while Solaris 10 Update 1 reportedly required a 384Meg RAM minimum. This size continues to grow with added functionality.

There is a release of OpenSolaris referred to as EON or Embedded Operating System / Network. It is small enough to run from a 256Meg flash, but it should be run on a system with 1 Gig of RAM minimum.

Illumos Discussion:
An Illumos discussion thread yielded a post from Jerry Kemp, who referenced a particular defunct appliance-discussion list where OpenSolaris was running on a Soekris net5501 system. Another system mentioned as hosting OpenSolaris was the fit-PC. Jerry also mentioned two different blog postings from Sun/Oracle's Jim Conners, and a note about Compressed ZFS on the ARM port of OpenSolaris.

Physically Small:
One embedded system Jim built was a very small platform, but it included 512Meg of RAM. That is hardly small by any sense of the meaning, but it is physically small!
[2008-11-08] - Physically small platform, 512Meg RAM

Small Footprint Framework:
Jim built a framework which creates a Solaris in-memory installation that will work in an area as small as 60 Megabytes.
[2007-02-07] - Framework to Help Create Small Footprint RAM Resident Solaris

In 2009, Vineeth Pillai from Sun Microsystems in the Czech Republic presented "OpenSolaris ARM Port and Its Future".

The OpenSolaris port to ARM was announced on June 8, 2009.
It was based on OpenSolaris 2008.05 build 86 and ported to the NEC NaviEngine 1. Compressed ZFS is incompatible with standard ZFS, but is better suited for embedded devices.

On June 19, 2009, it was confirmed that UFS and ZFS were in the ARM port of OpenSolaris.

On June 25, 2009, NEC contributed ARM code to make ZFS run in 8 Megabytes of RAM, with 4 Megabytes of RAM for the ARC. By compressing ZFS data structures, they managed to boot OpenSolaris in 16MB of RAM and bring the ARC to 1MB with a 2MB ZFS runtime!

On September 28, 2009, Darren Moffat posed a question about mounting a disk under a QEMU instance of ARM OpenSolaris, to which Mitsuru Sasanuma replied that the NE1 emulator does not implement IDE (SATA) or NIC devices, so hard disk images could not be used in QEMU. CZFS could, however, be used with regular files.

Illumos Implications:
The substantial question of the week: can we move Illumos onto something smaller, or bring Illumos to embedded devices such as USB external hard drives?

Friday, December 2, 2011

X Tab: OpenWindows Augmented Compatibility Environment

The following has been added to the X Tab for Solaris 9 and Solaris 10.
OpenWindows Augmented Compatibility Environment

owacomp - [http|ix86|sparc|src|readme] - OpenWindows acomp Project
olvwm4.4p4 - [http|pkg|src|readme] - Solaris 8 SPARC OpenLook Virtual Window Manager
olvwm4.4p4 - [http|pkg|src|readme] - Solaris 8 ix86 OpenLook Virtual Window Manager

Thursday, December 1, 2011

Oracle Database Appliance Webcast


Don't miss the webcast on December 13, 2011 at 12:00 noon EST!

Objectives to achieve from webcast attendance include understanding:

  • Consolidating many small databases into a single highly available solution

  • Deploying and managing clustered systems in hours

  • Benefiting from single-vendor support for Hardware, OS, and Database
The featured speakers scheduled are:

  • Bob Thome
    Senior Director of Product Management,

  • Matthew Baier
    Director of Product Marketing,
Register Now - See You There!