
Monday, February 27, 2023

How Do I Save the LDoms Configuration under Solaris?

 

 Abstract:

Under SPARC Logical Domains, the hypervisor actually runs in the firmware of the chassis, while the Control Domain sends it commands to partition the hardware underneath the operating systems. The hypervisor and all of its settings live entirely in memory... which means that after a power outage, the entire virtualization configuration can be lost. The ILOM provides onboard storage to hold a saved LDoms configuration, and the hypervisor in the firmware is smart enough to request that configuration from the ILOM on boot, and then simultaneously boot all Logical Domains (including the Control Domain.)

List LDom Configurations

To list all Logical Domain Configurations that were stored to the ILOM:

sun1824-cd/root# ldm list-spconfig
factory-default
@post-migration [current]
default-config
20190301
20191002
20211014
20220908

Note: in the above example, "@post-migration" means the configuration was saved the last time someone executed a live migration onto or off of this platform, using the "-s" ("save config") flag.
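
For illustration, a live migration that saves the resulting configuration to the SP might look like the following minimal sketch, based on the "-s" ("save config") flag described in the note above. The guest domain "ldg1" and the target control domain host are hypothetical.

sun1824-cd/root# ldm migrate-domain -s ldg1 root@sun1825-cd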

Save Logical Domain Configuration

To save a copy of the LDom configuration:

sun1824-cd/root# ldm add-spconfig `date +%Y%m%d`
sun1824-cd/root#

List Saved Logical Domain Configurations

The newly saved logical domain configuration should show up as the year, month, day combination:

sun1824-cd/root# ldm list-spconfig
factory-default
@post-migration
default-config
20190301
20191002
20211014
20220908
20230218 [current]
sun1824-cd/root#
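
Since the service processor has finite space for saved configurations, it can be worth pruning obsolete entries. A minimal sketch, assuming the dated configuration "20190301" is no longer needed (the factory-default configuration is protected and cannot be removed):

sun1824-cd/root# ldm remove-spconfig 20190301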



 

Thursday, May 16, 2013

Virtualizing Solaris


Abstract:
The movement from physical to virtual servers has been happening for decades. First, the use of Physical Domains under SPARC "big-iron" became possible after Sun purchased Cray's SPARC server assets in 1996 (SGI purchased the remainder of Cray); the Sun Enterprise 10000, introduced in 1997, offered Physical Domains. With the release of Solaris 10 in 2004, almost a decade ago, physical systems could be moved into logical Zones under a single kernel. With the line of T processors, the ability to load multiple OS's onto the same platform at the firmware layer arrived in 2006. This article discusses LDom's.

P2V:
Physical to Virtual migration, or P2V, makes it possible to consolidate physical Solaris platforms onto various virtualized Solaris destinations - such as Zones, Branded Zones, or LDom's. The P2V process uses an archive called a FLAR (Flash Archive).
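
For illustration, creating the FLAR on the source system might look like the following minimal sketch; the archive name and the NFS destination path are assumptions.

source-host/root# flarcreate -n source-host -c /net/nfsserver/flars/source-host.flar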

P2V and Branded Zone:
An Oracle Enterprise Manager blog was published recently, explaining how to move a physical server to a Branded Zone. The May 15th blog was titled: "How to go Physical to Virtual with Oracle Solaris Zones using Enterprise Manager Ops Center."

Logical Domains:
The documentation describing the deployment of Logical Domains for Oracle Enterprise Manager is available on Oracle's web site. Each Logical Domain sits on top of the firmware of T-Class processors and can host Solaris 11 or Solaris 10, while Solaris 10 can in turn host the older Solaris 8 & 9 Operating Systems under Branded Zones.
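
As a rough sketch of the nested option mentioned above, a Solaris 8 Branded Zone inside a Solaris 10 guest domain could be configured along these lines. The zone name, zonepath, and FLAR location are hypothetical, and the Solaris 8 Containers (solaris8 brand) packages must already be installed.

s10-guest/root# zonecfg -z legacy8 'create -t SUNWsolaris8; set zonepath=/zones/legacy8'
s10-guest/root# zoneadm -z legacy8 install -u -a /flars/sol8-system.flar
s10-guest/root# zoneadm -z legacy8 boot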

Network Management Implications:
Network Management platforms can be consolidated onto newer platforms with very little effort, using free drag-and-drop tools such as Oracle Enterprise Manager. If a network management center is still running on multiple older physical platforms, one should consider Zones or LDom's, which offer virtually no overhead (in comparison to systems such as VMWare or HyperV, which require a foreign software layer between their domains and the hardware, introducing problematic latency under heavy loads.)

Thursday, April 12, 2012

Solaris Tab: Solaris LDom's / Oracle VM for SPARC Addendums

The Solaris Tab was recently updated with some white papers.

White papers were placed in date order, with shortened titles at the top for easy access, and were categorized with their full titles at the bottom, according to topic.

Solaris Reference Material

2007-07 [PDF] Understanding and Deploying Logical Domains
2010-05 [PDF] Best Practices for Data Reliability with LDom's
2010-05 [PDF] Best Practices for Network Availability with LDom's
2010-05 [PDF] Increase Application Scalability and Improve Utilization with LDom's

Solaris LDoms / Oracle VM Server SPARC
  • 2007-07 [PDF] Beginners Guide to Oracle VM Server for SPARC: Understanding and Deploying Logical Domains
  • 2010-05 [PDF] Best Practices for Data Reliability with Oracle VM Server for SPARC
  • 2010-05 [PDF] Best Practices for Network Availability with Oracle VM Server for SPARC
  • 2010-05 [PDF] Increase Application Scalability and Improve System Utilization with Oracle VM Server for SPARC

Monday, January 23, 2012

Virtualizations: LPARs, LDoms, Xen, KVM, VMWare, and HyperV



IBM LPARs
IBM LPARs are a premium, proprietary virtualization technology which sits on top of the IBM POWER architecture. They leverage the Virtual I/O Server (VIOS) in order to manage operating system resource requests from other domains.

https://www.ibm.com/developerworks/wikis/display/virtualization/VIO
"This allows a single machine to run multiple operating system (OS) images at the same time but each is isolated from the others. POWER4 based machines started this in 2001 by allowing many Logical Partitions (LPAR) to run on the same machine using but each using different CPUs, different memory sections and different PCI adapter slots. Next came with POWER4, the ability to dynamically change the CPU, memory and PCI adapters slots with the OS running. With the introduction of POWER5 in 2005, further Virtualization items have been added."
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/iphb1/iphb1_vios_virtualioserveroverview.htm
"The Virtual I/O Server is software that is located in a logical partition. This software facilitates the sharing of physical I/O resources between client logical partitions within the server. The Virtual I/O Server provides virtual SCSI target, virtual fibre channel, Shared Ethernet Adapter, and PowerVM™ Active Memory Sharing capability to client logical partitions within the system. As a result, client logical partitions can share SCSI devices, fibre channel adapters, Ethernet adapters, and expand the amount of memory available to logical partitions using paging space devices. The Virtual I/O Server software requires that the logical partition be dedicated solely for its use. The Virtual I/O Server is part of the PowerVM Editions hardware feature."

SPARC LDOM's or Oracle VM for SPARC
SPARC LDOM's (now referred to as Oracle VM for SPARC) are analogous to IBM's LPARs. IBM's VIOS appears to be analogous to the Control Domain under LDom's. The LDom Control Domain can be subdivided into Control, Service, and I/O Domains - to architect redundancy and additional performance into a SPARC platform. LDom's are a free virtualization technology bundled with Solaris on SPARC.

http://www.oracle.com/us/technologies/virtualization/oraclevm/oracle-vm-server-for-sparc-068923.html
"Oracle VM Server for SPARC (previously called Sun Logical Domains) provides highly efficient, enterprise-class virtualization capabilities for Oracle's SPARC T-Series servers. Oracle VM Server for SPARC allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by SPARC T-Series servers and the Oracle Solaris operating system. And all this capability is available at no additional cost."
http://en.wikipedia.org/wiki/Logical_Domains
"The Control domain, as its name implies, controls the logical domain environment. It is used to configure machine resources and guest domains... The control domain also normally acts as a service domain. Service domains present virtual services, such as virtual disk drives and network switches, to other domains… Current processors can have two service domains in order to provide resiliency against failures. I/O domain has direct ownership of and direct access to physical I/O devices, such as a network card in a PCI controller… Control and service functions can be combined within domains."
Basic building blocks are available through LDOM's to developers and architects, such as cluster-in-a-box, redundant I/O domains, etc.
http://docs.oracle.com/cd/E19316-01/820-4676/ggtcs/index.html
"In this logical domains (LDoms) guest domain topology, a cluster and every node within that cluster are located on the same Solaris host. Each LDoms guest domain node acts the same as a Solaris host in a cluster. To preclude your having to include a quorum device, this configuration includes three nodes rather than only two."

Xen
There are some similarities in how the aforementioned hypervisors and Xen are architected. Various implementations of Xen exist, such as Citrix Hypervisor, Oracle VM for x86, and the OpenSolaris-based Xen (now a project under Illumos). Xen is an open-source hypervisor.

http://xen.org/files/Marketing/WhyXen.pdf
"A critical benefit of the Xen Hypervisor is its neutrality to the various operating systems. Due to its independence, Xen is capable of allowing any operating system (Linux, Solaris, BSD, etc) to be the Domain0 thereby ensuring the widest possible use case for customers. For example, many hardware manufacturers leverage NetBSD as their OS of choice for Domain0 and are able to deploy Xen in the manner of their choosing."

"This separation of hypervisor from the Domain0 operating system also ensures that Xen is not burdened with any operating system overhead that is unrelated to processing a series of guests on a given machine. In fact, more are beginning to break up the Domain0 from a single guest into a series of mini-OS guests each with a specific purpose and responsibility which drives better performance and security in a virtualization environment."

KVM
No, this is not a keyboard switch. Late to the game was a Linux and OpenSolaris based virtualization technology, unfortunately called KVM, for Kernel-based Virtual Machine, first implemented under Linux.
http://wiki.linuxplumbersconf.org/_media/2010:02-lpc-kvmstoragestackperformance.pdf

Modern OS features such as DTrace and ZFS are now available alongside KVM, after it was quickly ported to the OpenSolaris source code base by Joyent for SmartOS, their open-source cloud operating system and cloud offering.
http://www.phoronix.com/scan.php?page=news_item&px=OTc5Ng
"Joyent has announced today they have open-sourced their SmartOS operating system, which is based on Illumos/Solaris. Additionally, this cloud software provider has ported the Linux KVM (Kernel-based Virtual Machine) to this platform.

Being derived from Illumos and in-turn from Solaris, SmartOS does ship with ZFS support, DTrace, and other former Sun Microsystems technologies."



Microsoft HyperV
Some vendors came very late to the hypervisor game. Microsoft HyperV has a similar architecture, is available only under Intel & AMD processors, and depends on hardware acceleration available under only certain CPU chips from those two vendors.


VMWare ESXi
VMWare has a great deal of experience in hypervisors, having grown out of a software-driven solution before hardware virtualization assists became popular (and leveraged) in the Intel/AMD world. They provide some of the best backwards compatibility in the Intel/AMD world.

Tuesday, December 20, 2011

Solaris Tab - Secure Deployment of LDom's or VM Server for SPARC



An Oracle White Paper, Secure Deployment of Oracle VM Server for SPARC, was added to the Solaris Tab on Network Management.

Solaris Reference Material
2011-01 [PDF] Secure Deployment of LDom's or VM Server for SPARC

Solaris LDoms / Oracle VM Server for SPARC
Secure Deployment of LDoms or Oracle VM Server for SPARC

Monday, December 19, 2011

SPARC T4: Optimizing with Oracle VM Server for SPARC



Abstract:

Modern computing systems found their footing throughout the history of computing, and some companies and architectures influenced the modern computer more than others. One such company was Sun Microsystems, which eventually found its way into Oracle. Oracle released their latest processor, the SPARC T4, with a dynamic new capability: the ability to optimize for two different kinds of workloads via virtualization technology.

Processor History:

In 1985, Sun Microsystems produced their first Sun-3 workstations and servers based upon the 32-bit CISC Motorola 68000-family processor. In 1987, Sun Microsystems produced their first Sun-4 workstations and servers based upon the 32-bit RISC SPARC processor. In 1995, Sun Microsystems produced their first UltraSPARC systems based upon the 64-bit RISC UltraSPARC processor. In 2002, Sun Microsystems acquired Afara Web Systems, with a new high-throughput SPARC design. In 2005, Sun Microsystems released their first servers (no desktops) based upon the UltraSPARC T1 processor, which was tuned for multi-threaded workloads. Oracle, who made their fortunes primarily from software upon SPARC, acquired Sun Microsystems and released their first servers (no desktops) in 2010 based upon the SPARC T3. Oracle released the SPARC T4 in 2011, supporting both multi-threaded and single-threaded workloads.


Workload History:

The workloads on SPARC processors were traditionally single-threaded in the early years. With the advent of RISC processors, reduced complexity allowed for increased clock speeds and thus increased single-threaded performance. With the investment from AT&T and the merger of SunOS with SVR4, Solaris expanded its support for multi-threaded workloads. When SGI purchased Cray Research, Sun Microsystems purchased the Cray Superserver 6400 business, gaining a platform that aggregated high-speed single-threaded processors into massive multi-threaded workload throughput of 64 threads via racks of equipment.

With the release of the UltraSPARC T1, Sun Microsystems managed to shrink 32 threads of slower integer and crypto capacity not only into a single socket, but onto a single piece of silicon, delivering outstanding aggregate capacity. With the subsequent release of the T2 processor, 64 threads were merged onto a chip. While the throughput of the T processors was equivalent to racks of equipment, the single-threaded performance was a decade behind.

Workload Selection:

With the release of the Oracle SPARC T4 processor, a system can now be tuned to support single or multi-threaded workloads via Oracle VM Server for SPARC release 2.1, previously known as Logical Domains or LDom's.

The short tuning white paper from Oracle describes:

"This paper describes how to use the Oracle VM Server for SPARC 2.1 CPU threading controls to optimize CPU performance on SPARC T4 platforms. CPU performance can be optimized for CPU-bound workloads by tuning CPU cores to maximize the number of instructions per cycle (IPC). Or, CPU performance can be optimized for maximum throughput by tuning CPU cores to use a maximum number of CPU threads. By default, the CPU is tuned for maximum throughput."
During the provisioning of a Logical Domain or VM under SPARC, the provisioner can choose the workload optimization required. This can be performed during ["add-domain"] or after ["set-domain"] provisioning.
ldm add-domain [mac-addr=num] [hostid=num]
    [failure-policy=ignore|panic|reset|stop]
    [extended-mapin-space=on]
    [master=master-ldom1,...,master-ldom4]
    [threading=max-throughput|max-ipc] ldom

ldm set-domain [mac-addr=num] [hostid=num]
    [failure-policy=ignore|panic|reset|stop]
    [extended-mapin-space=[on|off]]
    [master=[master-ldom1,...,master-ldom4]]
    [threading=max-throughput|max-ipc] ldom
The "threading" parameter defines the workload. The options from the white paper are defined as follows:



  • max-throughput.
    Use this value to select the threading mode that maximizes throughput. This mode activates all threads that are assigned to the domain. This mode is used by default and is also selected if you do not specify any mode (threading=).

  • max-ipc.
    Use this value to select the threading mode that maximizes the number of instructions per cycle (IPC). When you use this mode on the SPARC T4 platform, only one thread is active for each CPU core that is assigned to the domain. Selecting this mode requires that the domain is configured with the whole-core constraint.
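
As a minimal usage sketch (the domain name "appdom" and the core count are assumptions): since "max-ipc" requires the whole-core constraint, whole cores are assigned first.

# satisfy the whole-core constraint by assigning whole cores
primary/root# ldm set-core 4 appdom
# optimize for single-thread performance: one active thread per core
primary/root# ldm set-domain threading=max-ipc appdom
# revert to the default throughput mode
primary/root# ldm set-domain threading=max-throughput appdom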

Sunday, December 18, 2011

Solaris Tab - SPARC T4 Workload Optimization



A new Oracle White Paper, Tuning the SPARC CPU to Optimize Workload Performance on SPARC T4, was added to the Solaris Tab on Network Management.

Solaris Reference Material
2011-09 [PDF] Tuning to Optimize Workload Performance on SPARC T4

Wednesday, March 4, 2009

Partitioning: Oracle Licensing Terms & Agreements

When questions about Oracle licensing come up, things can get rather puzzling.

How is one to determine license liability?

Single-Core vs Multi-Core Processors

Sometimes, there are web pages which help to determine liability with single-core and multi-core processors:
http://www.orafaq.com/wiki/Oracle_Licensing

Multi-core processors are priced as (number of cores)*(multi-core factor) processors, where the multi-core factor is:

  • 0.25 for SUN's UltraSPARC T1 processors (1.0 GHz or 1.2 GHz)
  • 0.50 for other SUN's UltraSPARC T1 processors (e.g. 1.4 GHz)
  • 0.50 for Intel and AMD processors
  • 0.75 for SUN's UltraSPARC T2 processors
  • 0.75 for all other multi-core processors
  • 1.00 for single-core processors
This may help guide towards a decision (i.e. if you need half of a T2 processor for an application, there is a 50% discount when one purchases a system with a T1 processor for running Oracle):
  • 8 (T1 cores) * .25 (multi-core 1.2 GHz T1 factor) = 2 (pricing factor)
  • 8 (T2 cores) * .75 (multi-core 1.2 GHz T2 factor) = 6 (pricing factor)
Examples of low-end T1 based systems include the Sun Fire T1000 and T2000.
A T1 platform is a GREAT platform for deploying a basic development or test environment, or a development or test clustering environment, where full-fledged performance is not required but binary compatibility with SPARC applications is desired.

Applications built to scale well on a T1 platform will offer excellent performance when they need to scale up to larger numbers of cores or processors, since the CoolThreads cores scale linearly in performance.

Partitioning Technologies

There are three primary partitioning technologies on open platforms:
  • Dynamic System Domains
    Available for mid-range to high-end SUN and Fujitsu systems
    Allows for Solaris 8, Solaris 9, Solaris 10, and Solaris Express operating systems
    M4000 for up to 2 Dynamic System Domains
    M5000 for up to 4 Dynamic System Domains
    M8000 for up to 16 Dynamic System Domains
    M9000 for up to 24 Dynamic System Domains

  • Logical Domains or LDOM's
    Available for low-end to mid-range SUN and Fujitsu systems
    T1 Processors for up to 32 LDOM's
    T2 Processors for up to 64 LDOM's
    T2+ Processors for up to 256 LDOM's

  • Solaris 10 (capped) Containers
    Solaris 10 Containers are available across all SUN & Fujitsu platforms
    Using BrandZ - Linux, Solaris 8 and Solaris 9 Operating Systems can run in Solaris Branded Zones
Partitioning with Oracle

There is a lot of misinformation about partitioning flooding the internet. The best place to go for information regarding partitioning is Oracle's web site. The following document is dated from 2002 and is still posted as current, as of the publishing of this blog entry.
http://www.oracle.com/corporate/pricing/partitioning.pdf

(page 1)
Soft Partitioning
...
As a result, soft partitioning is not permitted as a means to determine or limit the number of software licenses required for any given server.

(page 2)
Hard Partitioning

Hard partitioning physically segments a server, by taking a single large server and separating it into distinct smaller systems. Each separated system acts as a physically independent, self-contained server, typically with its own CPUs, operating system, separate boot area, memory, input/output subsystem and network resources.
Examples of such partitioning type include: Dynamic System Domains (DSD) -- enabled by Dynamic Reconfiguration (DR), Solaris 10 Containers (capped Containers only)...

Partitioning Examples:
A server has 32 CPUs installed, but it is hard partitioned and only 16 CPUs are made available to run Oracle. The customer is required to license Oracle for only 16 CPUs.

Very clearly, costs can be reduced by using Dynamic System Domains of high-end SPARC systems as well as Solaris 10 (capped) Containers on low-end to mid-range systems.
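
As a rough sketch of a capped Container for this purpose (the zone name, zonepath, and CPU count are hypothetical; recent Solaris 10 releases support the dedicated-cpu resource shown here as one way to pin a zone to a fixed number of CPUs):

global/root# zonecfg -z dbzone
zonecfg:dbzone> create
zonecfg:dbzone> set zonepath=/zones/dbzone
zonecfg:dbzone> add dedicated-cpu
zonecfg:dbzone:dedicated-cpu> set ncpus=16
zonecfg:dbzone:dedicated-cpu> end
zonecfg:dbzone> commit
zonecfg:dbzone> exit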

Other Helpful Oracle Guides
Partitioning & Architecture for Disaster Recovery and Development

With a T2 system being roughly twice the throughput of a T1 system, a low-end T2 makes a good production system which can scale up with a lower initial cost, leveraging hard partitioning options like LDOM's or Solaris 10 (capped) Containers.

For example, the following systems offer similar performance (omitting floating point applications):
  • 8 (T1 cores) * .25 (multi-core 1.2 GHz T1 factor) = 2 (pricing factor)
    A full system, no partitioning
  • 4 (T2 cores) * .75 (multi-core 1.2 GHz T2 factor) = 3 (pricing factor)
    Solaris 10 (capped) Container used to provide half the number of cores, leaving half the cores for later expansion
  • 4 (T2 cores) * .75 (multi-core 1.2 GHz T2 factor) = 3 (pricing factor)
    SPARC CoolThreads LDOM's used to provide half the number of cores, leaving half the cores for later expansion
As greater performance is needed by applications, the appropriate number of cores can be added to a T2 system, in order to provide higher capacity in affordable quantities (a quick calculation sketch follows this list):
  • 1 (T2 cores) * .75 (multi-core 1.2 GHz T2 factor) = 0.75 (pricing factor) = 1 (rounded up)
  • 2 (T2 cores) * .75 (multi-core 1.2 GHz T2 factor) = 1.50 (pricing factor) = 2 (rounded up)
  • 3 (T2 cores) * .75 (multi-core 1.2 GHz T2 factor) = 2.25 (pricing factor) = 3 (rounded up)
  • 4 (T2 cores) * .75 (multi-core 1.2 GHz T2 factor) = 3 (pricing factor)
  • 5 (T2 cores) * .75 (multi-core 1.2 GHz T2 factor) = 3.75 (pricing factor) = 4 (rounded up)
  • 6 (T2 cores) * .75 (multi-core 1.2 GHz T2 factor) = 4.50 (pricing factor) = 5 (rounded up)
  • 7 (T2 cores) * .75 (multi-core 1.2 GHz T2 factor) = 5.25 (pricing factor) = 6 (rounded up)
  • 8 (T2 cores) * .75 (multi-core 1.2 GHz T2 factor) = 6.00 (pricing factor)
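
For illustration, the round-up arithmetic above can be reproduced with a nawk one-liner; the core count and multi-core factor are parameters.

$ nawk -v cores=7 -v factor=0.75 'BEGIN { f = cores * factor; print (f == int(f) ? f : int(f) + 1) }'
6
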
As you can see, scaling an LDOM or Solaris 10 (capped) Container up or down to isolate Oracle license costs can be very effective in controlling business costs according to capacity need... except that some choices (i.e. 3 or 7 cores, where the factor rounds up) may not make good economic sense.

The T2 may make a great consolidation platform for Disaster Recovery, which could double as a Development platform by "scaling down" the number of cores in a container.