Making Known the Secrets to Network Management. Raising up a new generation of professionals.
Monday, May 24, 2010
What's Coming in 2010 for OpenSolaris
While just a draft, this is the hope for the Solaris and OpenSolaris communities!
Wednesday, May 19, 2010
LogMatrix NerveCenter Installation for Solaris 10
Abstract: This documents a basic installation of NerveCenter from LogMatrix on a Solaris 10 server. Initially I tried to add the web-based administration as well, but the documentation only gave instructions for Apache 1.x. Since I lack proficiency with Apache 1.x, don't want to introduce the security flaws associated with it, and wasn't sure what workarounds were required for Apache 2 integration, I skipped that step (the 'w' option).
# groupadd ncadmins
# groupadd ncusers
# cd ../NC5104/BIN/
# ./INSTALL.SH
Continue? Yes
Agree to legal terms? Yes
Installation directory [/opt/OSInc]? [Press Enter]
Component Selection: a c d s
These are all of the options that aren't for integration with other network management packages or web-based administration.
use default path [/usr/dt/lib] for x libraries? [Press Enter]
update NIS settings? [Y]
directory for web browser to view documentation: /usr/bin/firefox
set NerveCenter as a daemon (will always restart with system)? [Y]es
configure PAM integration for authentication? [Y]es
(shell script finishes)
# cp CORP.dat /opt/OSInc/conf/
# /opt/OSInc/bin/ncstart
# . /opt/OSInc/userfiles/ncenv.ksh
Add ". /opt/OSInc/userfiles/ncenv.ksh" to every account that will access NerveCenter or to /etc/profile if everyone will access
From an X Windows terminal session, on an account that has been added to the user groups above, run:
$ ncadmin &
or
$ client &
Labels:
installation,
LogMatrix,
NerveCenter,
Solaris,
Solaris 10
Friday, May 14, 2010
USB NIC Drivers Under Solaris?
OK - I have to try this... getting a USB NIC working under Solaris!
If someone else does, please provide the results!
Configuring Oracle® Solaris ZFS for an Oracle Database
Abstract:
The Oracle database has been available for mission critical systems for decades. Open Systems have been a dominant platform for hosting Oracle databases during this period. Oracle purchased Open Systems vendor Sun, the dominant vendor in this space, for its technology to host the Oracle database and applications. Solaris provides an outstanding open platform for implementing the Oracle Database, when properly configured. This guide describes the recommended configuration for the Oracle RDBMS.
White Paper:
The Oracle ZFS configuration and tuning recommendation guide was made available in May 2010. Topics covered include:
- Disk Recommendations
- LUN Recommendations
- FLASH Acceleration Recommendations
- Operating System Releases
- Operating System Patches
- Operating System Tuning
- ZFS Pool Tuning
- ZFS File System Tuning
- Oracle Data File Layout
- Life cycle Management Recommendations
One topic briefly broached on page 5 was:
Oracle release information - Oracle Solaris ZFS is recommended for any Oracle database version in single instance mode, and Oracle Solaris ZFS can be used with an Oracle RAC database when it is available as an NFS-shared file system.
This topic received far too little coverage for most people interested in scaling out more effectively. A nice overview from 2005 can be helpful for some, as a primer on the different options, but ZFS was never introduced because ZFS was in its infancy at the time Natalka wrote it. In November 2008, an unsupported but simple setup using Solaris NFS in a RAC cluster was demonstrated by Padraig, but a lot of steps could have been eliminated had he used ZFS. This Oracle web site describes some of the different options for clustering, but ZFS is not mentioned.
As you can see, the use of ZFS with NFS does not seem to get much coverage from an Oracle RAC perspective - but honestly, this is the way to go to reduce configuration and get optimal flexibility. Performance gets a boost in 11g with the Oracle embedded NFS client API.
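To give a flavor of the file system tuning topics in the guide, here is a minimal sketch of the commonly recommended record size alignment for Oracle data files (the pool name "orapool", the mount points, and an 8 KB db_block_size are assumptions for illustration, not taken from the white paper):
# zfs create -o recordsize=8k -o mountpoint=/oradata orapool/oradata
# zfs create -o mountpoint=/oralog orapool/oralog
The idea is to match the data file record size to the database block size, while leaving the redo log file system at its default record size.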
Additional Resources:
The Solaris Internals Wiki offers the ZFS Evil Tuning Guide - this is a very good place to start!
Monday, May 10, 2010
Exercising an ICMP Poller
Abstract:
Polling devices for availability on a global basis requires an understanding of the underlying topology and of the impacts of latency and reliability. An excellent tool for performing polling on a wide scale is "fping".
Methodology:
The "fping" poller can take a sequence of nodes and probe those nodes. A simple way to start is to leverage the "hosts" table. The basic UNIX command called "time" can be leveraged to understand performance. The "nawk" command can be leverage in order to parse nodes and output into usable information.
Implementation:
Parsing the "hosts" table is simple enough to do via "nawk", leveraging a portion of the device name, IP address, or even comments! The "fping" poller is able to use Name, IP, or even the regular "/etc/hosts" table format for input.
For example, looking for all 192 entries in the host table can be done as follows:
sunt2000/root# nawk '/192./ { print $1 }' /etc/hosts # parse node ip addresses
sunt2000/root# nawk '/192./ { print $2 }' /etc/hosts # parse node names
sunt2000/root# nawk '/192./' /etc/hosts # print full host entries
When performing large numbers of pings across the globe, name lookups can unexpectedly add a great deal of time to the run of the command:
sunt2000/root# time nawk '/192./ { print $1 }' /etc/hosts | fping |
nawk '/alive/ { Count+=1 } END { print "Count:", Count }'
Count: 2885
real 3m12.70s
user 0m0.45s
sys 0m0.68s
sunt2000/root# time nawk '/192./ { print $2 }' /etc/hosts | fping |
nawk '/alive/ { Count+=1 } END { print "Count:", Count }'
Count: 2884
real 8m47.74s
user 0m0.49s
sys 0m0.70s
The name resolution lookups are mitigated on the second run, due to the name service caching daemon, under operating systems like Solaris.
sunt2000/root# time nawk '/192./ { print $2 }' /etc/hosts | fping |
nawk '/alive/ { Count+=1 } END { print "Count:", Count }'
Count: 2883
real 3m10.87s
user 0m0.44s
sys 0m0.67s
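As an aside (not part of the original timing runs), the Solaris 10 name service cache daemon can be checked, and enabled if needed, via SMF:
sunt2000/root# svcs name-service-cache
sunt2000/root# svcadm enable svc:/system/name-service-cache:default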
Avoiding Name Resolution Cache Miss:
Sometimes, an application cannot afford an occasional name resolution cache miss. One way of managing this is to do the name mapping in the parsing script on the back end of the "fping" command, in an enhanced "nawk" one-liner.
sunt2000/root# time nawk '/HDFC/ { print $1 }' /etc/hosts | fping | nawk '
BEGIN { File="/etc/hosts" ; while ( (getline < File) > 0 ) {
Ip=$1; Name=$2 ; IpArray[Ip]=Ip ; NameArray[Ip]=Name }
close(File) }
/alive/ { Count+=1 ; print NameArray[$1] "\t" $0 }
END { print "Count:", Count }'
Count: 2882
real 3m9.04s
user 0m0.73s
sys 0m0.76s
Tuning Solaris 10:
When managing a LARGE number of devices, ICMP input overflows may be occurring. This can be checked via the "netstat" command parsed by a nawk one-liner. Note the high ratio.
sunt2000/root# netstat -s -P icmp | nawk '{ gsub("=","") }
/icmpInMsgs/ { icmpInMsgs=$3 }
/icmpInOverflows/ { icmpInOverflows=$2 }
END { print "Msgs=" icmpInMsgs "\tOverflows=" icmpInOverflows }'
Msgs=381247797 Overflows=138767274
To check what the tunable is:
sunt2000/root# ndd -get /dev/icmp icmp_max_buf
262144
The above value is the default for this version of Solaris. It can (and should) be increased (dramatically), as device counts start to grow aggressively (into the thousands of devices.)
sunt2000/root# ndd -set /dev/icmp icmp_max_buf 2097152
Note, the above setting is not persistent after reboot.
Validating Tuning:
Before and after the fping, the values of icmp messages and overflows can be observed.
Prior FPing: icmpInMsgs=381267922 icmpInOverflows=138775834
Post FPing: icmpInMsgs=381270809 icmpInOverflows=138778159
Difference: icmpInMsgs=2887 icmpInOverflows=2325
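A sketch of how such a before/after comparison can be captured in one pass, reusing the field positions from the netstat one-liner above (this helper is illustrative, not from the original runs):
#
# capture the overflow counter, run the poll, capture it again, print the delta
#
before=`netstat -s -P icmp | nawk '{ gsub("=","") } /icmpInOverflows/ { print $2 }'`
nawk '/192./ { print $1 }' /etc/hosts | fping > /dev/null 2>&1
after=`netstat -s -P icmp | nawk '{ gsub("=","") } /icmpInOverflows/ { print $2 }'`
echo "icmpInOverflows difference: `expr $after - $before`"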
Applying the tunables temporarily:
sunt2000/root# ndd -set /dev/icmp icmp_max_buf 2097152
sunt2000/root# ndd -get /dev/icmp icmp_max_buf
2097152
Validate the Ratio:
Prior FPing: icmpInMsgs=381279465 icmpInOverflows=138778662
Post FPing: icmpInMsgs=381282224 icmpInOverflows=138778806
Difference: icmpInMsgs=2759 icmpInOverflows=144
Secondary Validation of the Ratio:
Prior FPing: icmpInMsgs=381296575 icmpInOverflows=138784125
Post FPing: icmpInMsgs=381300943 icmpInOverflows=138784125
Difference: icmpInMsgs=4368 icmpInOverflows=0
Making the Tuning Persistent:
A start/stop script can be created to make the tunables persistent.
t2000/root# vi /etc/init.d/ndd_rmm.sh
#!/bin/ksh
# script: ndd_rmm.sh
# author: david halko
# purpose: make a start/stop script to make persistent tunables
#
case ${1} in
start) /usr/sbin/ndd -get /dev/icmp icmp_max_buf | nawk '
!/2097152/ {
Cmd="/usr/sbin/ndd -set /dev/icmp icmp_max_buf 2097152"
system(Cmd) }'
;;
status) ls -al /etc/init.d/ndd_rmm.sh /etc/rc2.d/S89_ndd_rmm.sh
/usr/sbin/ndd -get /dev/icmp icmp_max_buf
;;
install) ln -s /etc/init.d/ndd_rmm.sh /etc/rc2.d/S89_ndd_rmm.sh
chmod 755 /etc/init.d/ndd_rmm.sh /etc/rc2.d/S89_ndd_rmm.sh
chown -h ivadmin /etc/init.d/ndd_rmm.sh /etc/rc2.d/S89_ndd_rmm.sh
;;
*) echo "ndd_rmm.sh [startstatusinstall]\n"
esac
:w
:q
t2000/root# ksh /etc/init.d/ndd_rmm.sh install
t2000/root# ksh /etc/init.d/ndd_rmm.sh status
-rwxr-xr-x 1 ivadmin root 647 May 10 21:18 /etc/init.d/ndd_rmm.sh
lrwxrwxrwx 1 ivadmin root 22 May 10 21:20 /etc/rc2.d/S89_ndd_rmm.sh -> /etc/init.d/ndd_rmm.sh
262144
t2000/root# /etc/init.d/ndd_rmm.sh start
t2000/root# /etc/init.d/ndd_rmm.sh status
-rwxr-xr-x 1 root root 647 May 10 21:18 /etc/init.d/ndd_rmm.sh
lrwxrwxrwx 1 root root 22 May 10 21:20 /etc/rc2.d/S89_ndd_rmm.sh -> /etc/init.d/ndd_rmm.sh
2097152
Monitoring Overflows:
You can monitor overflows in near-real-time through a simple script such as:
#
# script: icmpOverflowMon.sh
# author: David Halko
# purpose: simple repetitive script to monitor icmp overflows
#
for i in 0 1 2 3 4 5 6 7 8 9 ; do
for j in 0 1 2 3 4 5 6 7 8 9 ; do
for k in 0 1 2 3 4 5 6 7 8 9 ; do
echo "`date` - $i$j$k - \c"
netstat -s -P icmp | nawk '{ gsub("=","") }
/icmpInMsgs/ { icmpInMsgs=$3 ; print $0 }
/icmpInOverflows/ { icmpInOverflows=$2 ; print $0 }
END { print "InMsgs=" icmpInMsgs "\tOverflows=" icmpInOverflows }'
sleep 10
done
done
done
Conclusion:
Exercising ICMP pollers in a Network Management environment is easy to do. It may be important to tune the OS if polling a large number of devices is required. Tuning the OS is a very reasonable process where metrics can readily show the behavior and the improvements.
----------------------------------------------
UPDATE --- every time I edit this posting in blogger, it removes my pipe symbols. In general, you may need to add a pipe between some commands like fping, netstat, nawk if you get a syntax error when you copy-paste a line of text.
Oracle VM Server for SPARC (LDoms) Dynamic Resource Management
Orgad Kimchi at Sun, now Oracle, blogged on VReality with an overview of Oracle VM Server for SPARC, previously called Sun Logical Domains or LDoms. In particular, he discussed version 1.3 with Dynamic Resource Management, or DRM, which targets the allocation of CPU threads or resources according to pre-defined policies.
Orgad posted a PDF which was formatted reasonably well, but the fonts made certain sections difficult to read. I copied the PDF contents into this blog, re-formatted it (while trying to keep as close to the original style as possible), and adjusted some typographical errors. While the blog is not the optimal format to hold this content, I left some feedback on his original post with a few reformatting suggestions.
Oracle VM Server for SPARC (LDoms) Dynamic Resource Management
ABSTRACT:
In this entry, I will demonstrate how to use the new Oracle VM Server for SPARC (previously called Sun Logical Domains or LDoms) version 1.3 feature, Dynamic Resource Management (a.k.a. DRM), for allocating CPU resources based on workload and pre-defined policies.
Introduction to Oracle VM Server for SPARC:
Oracle VM Server for SPARC is a virtualization and partitioning solution supported on Oracle Solaris CoolThreads technology-based servers powered by UltraSPARC T1, T2, and T2 Plus processors with Chip Multi-threading Technology (CMT).
This technology allows the creation of multiple virtual systems on a single physical system. Each virtual system is called a logical domain (LDom) and runs a unique and distinct copy of the Solaris operating system.
Introduction to Dynamic Resource Management:
With this feature, we can define policies to control an upper and lower threshold for virtual CPU utilization on an LDom. If an LDom needs more capacity and other LDoms on the same physical server have spare capacity, the system can automatically add CPUs to or remove CPUs from domains, as per the defined policies.
The main goal of dynamic resource management (DRM) is to provide resource allocation flexibility, so that resources can be allocated to an LDom during peak times without human intervention.
Architecture layout:
Prerequisites:
We need to define the control domain and three logical domains. Refer to the Logical Domains 1.3 Administration Guide (http://docs.sun.com/app/docs/doc/821-0406) for a complete procedure on how to install Oracle VM Server for SPARC.
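For context, here is a minimal sketch of the kind of guest domain creation the Administration Guide walks through (the domain name, resource sizes, and virtual device names below are hypothetical, not from the original article):
# ldm add-domain ldg1
# ldm add-vcpu 4 ldg1
# ldm add-memory 4G ldg1
# ldm add-vnet vnet1 primary-vsw0 ldg1
# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1
# ldm bind ldg1
# ldm start ldg1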
Dynamic Resource Management configuration:
We will define a total of three policies (policy1, policy2, policy3), one for each domain (ldg1, ldg2, ldg3). Each policy will define under what conditions virtual CPUs can be automatically added to and removed from a logical domain.
A policy is managed by using the ldm add-policy, ldm set-policy, and ldm remove-policy commands.
The following ldm add-policy command creates the policy to be used on the ldg1 logical domain.
# ldm add-policy util-lower=25 util-upper=75 vcpu-min=4 vcpu-max=8 attack=1 decay=1 priority=1 name=policy1 ldg1
This policy does the following:
- Specifies that the lower and upper limits at which to perform policy analysis are 25 percent and 75 percent by setting the util-lower and util-upper properties, respectively.
- Specifies that the minimum and maximum number of virtual CPUs is 4 and 8 by setting the vcpu-min and vcpu-max properties, respectively.
- Specifies that the maximum number of virtual CPUs to be added during any one resource control cycle is 1 by setting the attack property.
- Specifies that the maximum number of virtual CPUs to be removed during any one resource control cycle is 1 by setting the decay property.
- Specifies that the priority of this policy is 1 by setting the priority property. A priority of 1 means that this policy will be enforced even if another policy can take effect.
- Specifies that the name of the policy file is policy1 by setting the name property.
- Uses the default values for those properties that are not specified, such as enable (off) and sample-rate (10 sec).
This is the second policy for the second LDom (ldg2)
# ldm add-policy util-lower=25 util-upper=75 vcpu-min=8 vcpu-max=16 attack=1 decay=1 priority=2 name=policy2 ldg2
This is the third policy for the third LDom (ldg3):
# ldm add-policy util-lower=25 util-upper=75 vcpu-min=8 vcpu-max=16 attack=1 decay=1 priority=3 name=policy3 ldg3
Now we need to enable the policies:
# ldm set-policy enable=yes name=policy1 ldg1
# ldm set-policy enable=yes name=policy2 ldg2
# ldm set-policy enable=yes name=policy3 ldg3
The following example shows how the configuration looks on the control domain. You can verify the policies have been created by using the "ldm ls -o res" subcommand.
# ldm ls -o res
NAME
primary
------------------------------------------------------------------------------
NAME
ldg1
POLICY
STATUS PRI MIN MAX LO UP BEGIN END RATE EM ATK DK NAME
on 1 4 8 25 75 00:00:00 23:59:59 10 5 1 1 policy1
WEIGHTED MEAN UTILIZATION
4.2%
------------------------------------------------------------------------------
NAME
ldg2
POLICY
STATUS PRI MIN MAX LO UP BEGIN END RATE EM ATK DK NAME
on 2 8 16 25 75 00:00:00 23:59:59 10 5 1 1 policy2
WEIGHTED MEAN UTILIZATION
0.1%
------------------------------------------------------------------------------
NAME
ldg3
POLICY
STATUS PRI MIN MAX LO UP BEGIN END RATE EM ATK DK NAME
on 3 8 16 25 75 00:00:00 23:59:59 10 5 1 1 policy3
WEIGHTED MEAN UTILIZATION
0.0%
The following example shows how a policy, called policy1, can be changed in order to add more CPUs to a machine called ldg1:
# ldm set-policy name=policy1 vcpu-max=16 ldg1
The following example shows how we can remove a policy, called policy1:
# ldm remove-policy name=policy1 ldg1
Now, let's check how dynamic resource management works. In order to stress the CPU of your system, you can get the spinners loading tool from BigAdmin (see http://www.sun.com/bigadmin/software/nspin/nspin.tar.gz).
We will monitor the system before and during the workload.
Connect to the console of the first guest domain (ldg1)
# telnet localhost 5000
Verify the number of CPUs and the CPU load using the mpstat command
# mpstat
We can see that the LDom is underutilized (idl=99) and that we have 4 CPUs (0-3)
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 4 215 7 20 0 0 0 0 11 1 0 0 99
1 0 0 3 21 6 19 0 0 0 0 9 1 0 0 99
2 0 0 3 21 6 19 0 0 0 0 11 1 0 0 99
3 0 0 3 21 6 19 0 0 0 0 9 1 0 0 99
Let's start the workload using the nspins command and monitor the effect on the system utilization and the total number of CPUs:
# nspins -n 8 &
Now give it ~40 seconds or so to run
# mpstat 10
We can see that all the machine's CPUs are utilized (idl=0) and that the total number of CPUs has increased to 8 (0-7). In order to see the CPU count diminish, we can stop the workload and monitor the LDom again.
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 52 201 0 2 8 0 0 0 1 100 0 0 0
1 0 0 4 20 4 12 13 0 0 0 6 100 0 0 0
2 0 0 2 31 11 23 18 0 0 0 13 100 0 0 0
3 0 0 3 21 5 11 12 0 1 0 38 100 0 0 0
4 0 0 2 16 1 6 10 0 0 0 1 100 0 0 0
5 0 0 2 23 2 13 13 0 0 0 2 100 0 0 0
6 0 0 1 17 2 8 10 0 1 0 2 100 0 0 0
7 0 0 0 12 1 4 9 0 0 0 1 100 0 0 0
# pkill nspins
We see from the mpstat output that the total number of CPUs decreases by 1 per resource control cycle, from 8 back down to 4
# mpstat 10
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 4 215 7 20 0 0 0 0 11 1 0 0 99
1 0 0 3 21 6 19 0 0 0 0 9 1 0 0 99
2 0 0 3 21 6 19 0 0 0 0 11 1 0 0 99
3 0 0 3 21 6 19 0 0 0 0 9 1 0 0 99
4 1 0 3 21 4 12 10 0 0 0 4 91 0 0 9
5 1 0 3 15 2 7 9 0 0 0 7 91 0 0 9
6 0 0 2 15 2 7 9 0 0 0 2 91 0 0 9
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 4 215 7 20 0 0 0 0 11 1 0 0 99
1 0 0 3 21 6 19 0 0 0 0 9 1 0 0 99
2 0 0 3 21 6 19 0 0 0 0 11 1 0 0 99
3 0 0 3 21 6 19 0 0 0 0 9 1 0 0 99
4 1 0 3 20 4 12 10 0 0 0 4 89 0 0 10
5 1 0 5 15 2 7 9 0 0 0 7 89 0 0 11
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 4 215 7 20 0 0 0 0 11 1 0 0 99
1 0 0 3 21 6 19 0 0 0 0 9 1 0 0 99
2 0 0 3 21 6 19 0 0 0 0 11 1 0 0 99
3 0 0 3 21 6 19 0 0 0 0 9 1 0 0 99
4 1 0 3 20 4 12 10 0 0 0 5 88 0 0 12
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 4 215 7 20 0 0 0 0 11 1 0 0 99
1 0 0 3 21 6 19 0 0 0 0 9 1 0 0 99
2 0 0 3 21 6 19 0 0 0 0 11 1 0 0 99
3 0 0 3 21 6 19 0 0 0 0 9 1 0 0 99
Conclusion:
Oracle VM Server for SPARC Dynamic Resource Management gives the system administrator the flexibility of better dynamic resource allocation based on system utilization. In this blog entry, I demonstrated how to set up Dynamic Resource Management and how to monitor this feature during a CPU utilization peak.
About the Author:
Orgad Kimchi joined Sun in September 2007. He is currently working in the Independent Software Vendors (ISV) Engineering organization helping software vendors adopt Sun technology and improve performance on Sun hardware and software. Orgad’s blog can be found at http://blogs.sun.com/vreality.
Saturday, May 8, 2010
Oracle's Intentions on Sun Hardware Portfolio
The Latest on Oracle’s Intentions for the Sun Hardware Portfolio
For the past two months Oracle has been conducting a series of Sun Welcome Events around the world, kicking off first in the US at the beginning of March. Last week was Sydney, Australia's turn, and IDEAS analysts attended the event to get an update on the latest news.
Although the format had similar content to previous events...(see rest of article from Ideas International)
Thanks Amarand Agasi from Flickr for the photo!
Thursday, May 6, 2010
Getting Started With Solaris
An Introduction to Solaris for Beginners by Brian Leonard. Presented at Oracle Tech Days, Hyderabad, India, March 2010.
- Part 1 - About Solaris
Solaris is for Enterprises
OpenSolaris is for Developers
VirtualBox allows Developers running Host Operating Systems (e.g. OpenSolaris, Mac, Linux, Windows) to run other operating systems (e.g. Solaris 10)
Demo of importing a 2 Gigabyte Solaris 10 image into VirtualBox
Solaris 10 is mature; it has been around since 2005, with 8 releases
GNOME Interface demonstration
- Part 2 - Where is Everything?
Important Directories
/usr/sbin - system administration commands
/usr/bin - normal user commands
/var - variable files (e.g. logs)
/etc - configuration files
/opt - third party software
/export/home - home directories for users
Add a user "dan"
machine# useradd -m -d /export/home/dan -s /usr/bin/ksh dan
Set the user password for "dan"
machine# passwd dan
****
****
Set the user password for "dan" where he is forced to change the user password on first login
machine# passwd -f dan
****
****
- Part 3 - Users
Rights Profiles
Adding System Administrator Commands to your path
machine# echo "PATH=$PATH:/usr/sbin export PATH" >>/etc/profile
machine# . /etc/profile
Running commands against the profile permissions available
machine# pfexec usermod -P "Primary Administrator" dan
Converting root from a user into a role which can only be assumed by a user
machine# pfexec usermod -K type=role root
machine# su - root
Adding the ability for a user (dan) to be allowed to assume the root role
machine# pfexec usermod -R root dan
- Part 4 - Managing Software
Solaris SVR4 Packaging
pkgadd, pkginfo
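A minimal sketch of those SVR4 packaging commands (the package name SUNWexample and the media path are hypothetical):
machine# pkgadd -d /cdrom/cdrom0/Product SUNWexample
machine# pkginfo -l SUNWexample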
-- UPDATE! -- 2010-05-15 -- More Videos to continue the introduction to Solaris!
- Part 5 - System Services
Solaris Service Management Facility (SMF)
- operates services like web servers
- provides for automatic restart of failed services
- gives the ability to define hierarchical relationships between services
Basic User Commands for checking status
Check for all running services
# svcs | more
Check for all configured services (running and not running)
# svcs -a | more
Check for failed services
# svcs -x
Check for impacted services to a failed service
# svcs -xv
Show the long version of the svc with the process id
# svcs -lp ssh
Basic Administration Command for changing status
Enable, Disable, or Restart Services
# svcadm enable [service]
# svcadm disable [service]
# svcadm restart [service]
- Part 6 - Networking
Important Commands
View Physical Devices
# dladm show-dev
View Network Interfaces
# ifconfig -a
To change a Host Name with supporting Network information
# sys-unconfig
Important Network Files
Name Service Configuration File
/etc/nsswitch.conf
DNS Name Resolver Configuration File
/etc/resolv.conf
Static Host Configuration File
/etc/hosts
- Part 7 - Device Names and File Systems
Naming of Devices
c#t#d#[s|p]#
- c=Controller
- t=Target (not used in IDE devices)
- d=Device
- s=Slice (not used in IDE)
- p=Partition (Intel has 4 primary partitions)
Identifying Devices
# format
File Systems
- UFS - Default file system
- ZFS - New file system
- NFS - Network File System
ZFS
- introduced in Update 6 (2008-10 or 10/08)
- can now use ZFS for root file system
- Using ZFS for root file system has performance benefit with LiveUpgrade via a snapshot!
- Migrating from UFS to ZFS: http://docs.sun.com/app/docs/doc/819-541/ggpdm
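Related to the LiveUpgrade note above, a minimal sketch of creating and activating a new boot environment on a ZFS root (the boot environment name is hypothetical):
# lucreate -n solaris10u8
# luactivate solaris10u8
# init 6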
NFS
Show mounts on a remote server
# showmount -e [server]
Mount a remote server
# cd /net/[server]
Labels:
introduction,
OpenSolaris,
Solaris,
Solaris 10,
tutorial
Tuesday, May 4, 2010
How To Change A User's Home Directory in Solaris
# usermod -d /export/home/username username
(Don't forget to move any files from the old user account location to the new location!)
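On Solaris, the usermod -m option can move the contents of the old home directory to the new location in the same step; a hedged sketch (check your release's usermod(1M) man page first):
# usermod -d /export/home/username -m username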
Enabling VNC Under Solaris 10
Abstract:
Open Systems have traditionally been accessed via the command line. MIT created a fully object oriented, multi-tiered, open source windowing system called X Windows, which was quickly adopted by nearly all computing industry players. While X Windows is well suited for local area network technology, the need for wide area network technology was addressed through several different attempts, such as X11R6 "Broadway" and proxies leveraging compression. A lighter, WAN-suitable screen display protocol, referred to as Virtual Network Computing (VNC), is also commonly used for X displays.
Procedure:
Solaris 10 was shipped with a basic VNC service mostly configured. This is the procedure to enable it.
- Find VNC service
Cainan/root# svcs -a | grep -i vnc
disabled 13:47:12 svc:/application/x11/xvnc-inetd:default
- Enable vnc service
Cainan/root# svcadm enable svc:/application/x11/xvnc-inetd:default
- Note that VNC is broken by default, some changes will be required.
Cainan/root# svcs svc:/application/x11/xvnc-inetd:default
STATE STIME FMRI
maintenance 14:22:41 svc:/application/x11/xvnc-inetd:default
- Append vnc to the /etc/services
Cainan/root# echo "vnc-server\t5900/tcp\t\t\t# Xvnc" >>/etc/services
- Check /etc/services
Cainan/root# tail /etc/services
...
snmpd 161/udp snmp # SMA snmp daemon
vnc-server 5900/tcp # Xvnc
- Note that the GNOME Display Manager (gdm) is not customized yet, and needs configuration
Cainan/root# ls -al /etc/X11/gdm/custom.conf
/etc/X11/gdm/custom.conf: No such file or directory
- Enable and configure the GNOME Display Manager for VNC
Cainan/root# cat >/etc/X11/gdm/custom.conf <<!
[xdmcp]
Enable=true
[security]
DisallowTCP=false
AllowRoot=true
AllowRemoteRoot=true
!
- Check the customization configuration file
Cainan/root# ls -al /etc/X11/gdm/custom.conf
-rw-r--r-- 1 root root 85 Dec 19 14:43 /etc/X11/gdm/custom.conf
- Re-enable and validate the vnc service
Cainan/root# svcadm disable svc:/application/x11/xvnc-inetd:default
Cainan/root# svcs svc:/application/x11/xvnc-inetd:default
STATE STIME FMRI
disabled 14:46:29 svc:/application/x11/xvnc-inetd:default
Cainan/root# svcadm enable svc:/application/x11/xvnc-inetd:default
Cainan/root# svcs svc:/application/x11/xvnc-inetd:default
STATE STIME FMRI
online 14:46:43 svc:/application/x11/xvnc-inetd:default
- Access the vnc server from a vnc client on the network for a test
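For example, from a desktop with a VNC client installed (the vncviewer command is an assumption; use whichever client is available):
desktop$ vncviewer cainan:0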
cainan:0
- - - - - Updated - - - - -
Apologies for the issues related to the greater-than and less-than signs. They are not handled gracefully in the blogger software.
Fonts have been adjusted, bullets added for better formatting, and some additional wording since this was originally a hastily assembled article.