Friday, June 5, 2009

OpenSolaris 2009.06 - Network Virtualization

Network Virtualization Technology: Project Crossbow

Sun has been re-architecting the TCP/IP stack in Solaris for virtualization for close to three years, making progress each year with new features. OpenSolaris 2009.06 exhibits some of the most recent enhancements.

The network infrastructure in Solaris has been rewritten at the NIC, driver, and socket levels - all the way up the stack.

Network virtualization is about dedicating and isolating network resources: multiple hardware ring buffers in a NIC, multiple TCP/IP stacks in a kernel, and multiple kernel ring buffers per stack.
"Crossbow is designed as a fully parallelized network stack structure. If you think of a physical network link as a road, then Crossbow allows dividing that road into multiple lanes. Each lane represents a flow of packets, and the flows are architected to be independent of each other — no common queues, no common threads, no common locks, no common counters."

Some of the more interesting results of this integration:

  • create networks with no physical NIC cards
  • create switches in software
  • assign bandwidth to a virtual NIC (vNIC)
  • assign CPU resources to a vNIC
  • assign quality-of-service (QoS) attributes to a vNIC
  • throttle protocols on a vNIC
  • virtualize dumb NICs via the kernel to look like smart NICs
  • switch automatically between interrupt and polled modes
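In OpenSolaris these capabilities are driven from the dladm(1M) command. A minimal sketch, assuming a physical link named e1000g0 (the link and vNIC names here are illustrative, and the commands require an OpenSolaris system):

```shell
# Create a purely virtual switch (no physical NIC involved)
dladm create-etherstub stub0

# Create two vNICs attached to the virtual switch - an all-software network
dladm create-vnic -l stub0 vnic0
dladm create-vnic -l stub0 vnic1

# Create a vNIC over a physical link with a 100 Mbps bandwidth cap,
# pinned to specific CPUs (QoS-style resource controls)
dladm create-vnic -l e1000g0 -p maxbw=100M,cpus=0,1 vnic2

# Inspect the result
dladm show-vnic
```

The etherstub acts as the software switch: any vNICs created over it can reach each other without a physical NIC ever being involved.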

The implications are staggering:

  • Heavy consumption of network resources by one application does not have to starve mission-critical applications running in another virtual server
  • Priorities for latency-sensitive protocols (e.g. VoIP) can be specified for traffic based on various packet policies, such as source IP, destination IP, MAC address, port, or protocol
  • Security is enhanced: Solaris containers no longer have to share a single IP stack on the same physical NIC; a physical NIC can now host a separate IP stack for each container
  • Multiple physical ports can be aggregated into a single virtual port and then re-subdivided into multiple vNICs, so many applications or virtual servers gain load sharing and redundancy in a simplified way (configured once at the lowest layer instead of repeatedly, for each virtual machine)
  • Older systems can be retained for disaster recovery (DR) or high availability (HA), since their dumb NICs are virtualized in the kernel, while newer equipment with smarter NICs can be added to the application cluster for enhanced performance
  • Heavily used protocols will switch a stack into "polled mode," removing interrupt overhead from the operating system and providing better overall system performance as well as faster network throughput than competing operating systems
  • Enhanced performance at lower system-resource expense: tuning the vNICs to more closely match the clients means flow control can happen at the hardware (NIC) level, instead of being forced higher up in the TCP stack
  • Modeling of applications and their performance can be done completely on a laptop - all application tiers, including HA - allowing architects to test the performance implications of live configuration changes without ever leaving the laptop
  • Repelling DoS attacks at the NIC: if a DoS attack targets one virtual server's vNIC, the other virtual servers on the same system are not necessarily impacted, thanks to isolation and resource management; packets are dropped at the hardware layer instead of at the kernel or application layer, where high interrupt rates would otherwise soak up all available CPU capacity
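Several of the capabilities above - flow priorities and bandwidth caps keyed on packet attributes - are administered with flowadm(1M). A hedged sketch, with illustrative link, subnet, and flow names:

```shell
# Give VoIP signaling (SIP over UDP, port 5060) on vnic0 high priority
flowadm add-flow -l vnic0 -a transport=udp,local_port=5060 \
    -p priority=high voipflow

# Cap bulk TCP traffic from a particular subnet at 50 Mbps
flowadm add-flow -l vnic0 -a transport=tcp,remote_ip=192.168.10.0/24 \
    -p maxbw=50M bulkflow

# Inspect configured flows and their properties
flowadm show-flow
flowadm show-flowprop
```

Each flow gets its own lane through the stack - its own queues, threads, and counters - which is what makes the isolation claims in the list above possible.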
Usually, adding and leveraging features like QoS and virtualization decreases operating-system performance. With OpenSolaris, however, adding these features alongside a substantial rewrite of the code enabled a substantial increase in read and write throughput over Solaris, as well as a substantial increase in read throughput (with close to on-par write throughput) compared to Linux on the same hardware.

This OpenSolaris technology is truly ground-breaking for the industry.

Usage of Network Virtualization in Network Management

In the realm of network management, there is usually a mix of unreliable protocols (ICMP and UDP) and reliable protocols (TCP sockets). The unreliable protocols are used to gather (ICMP, SNMP) or collect (syslog) data from the edge devices, while reliable protocols are used to aggregate that data within the management platform cluster.

UDP packets can be dropped during times of high utilization (event storms, denial-of-service attacks, managed-network outages, etc.), so applying a higher quality of service to these protocols becomes desirable to ensure the network management tools have the most accurate view of the state of the network.
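In Crossbow terms, this can be expressed as high-priority flows on the management station's vNIC. A sketch assuming a vNIC named vnic0 and the standard SNMP-trap and syslog ports (the vNIC and flow names are illustrative):

```shell
# Prioritize inbound SNMP traps (UDP/162) and syslog (UDP/514) so that
# management data survives event storms instead of being dropped
flowadm add-flow -l vnic0 -a transport=udp,local_port=162 \
    -p priority=high snmptrapflow
flowadm add-flow -l vnic0 -a transport=udp,local_port=514 \
    -p priority=high syslogflow
```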

Communication with the internal systems that aggregate this data is needed for longer-term usage (e.g. monthly reporting) and must be maintained (e.g. backups) - but these subsystems are nowhere near as important as maintaining an accurate view of the managed network when debugging an outage that affects the company's bottom line. Their packets can be delayed a few microseconds to ensure the critical packets are processed first.

Enhanced performance in the overall TCP/IP stack also means more devices can be managed by the network management platform on the same hardware.

Implementation of Network Virtualization in Network Management

The HA platform can be loaded with OpenSolaris 2009.06, and the LDOM holding the network management application can be live-migrated to it seamlessly in minutes.

After running on the HA platform for a time, the production platform can be upgraded and the LDOM migrated back, again in minutes.


Operating systems like OpenSolaris 2009.06 offer the network management architect new options: lengthening asset lifespan, increasing return on investment for hardware assets, ensuring better system performance of network management assets, and enabling the best possible performance from the network management team.
