Monday, February 20, 2012

EMC Ionix: Enabling ESM SNMP Polling

Abstract:

In a converged world, EMC - which purchased SMARTS and VMWare, bundling various vendors under a single Ionix umbrella - is slowly hiding and removing functionality, making managed services and enterprise management more difficult from a standards perspective. The ESM / EISM or Server Monitoring product is the latest product to start being dumbed down by EMC.

The History:

With ESXi being a product of VMWare, and VMWare being owned by EMC, the combined company offers a different management solution called VirtualCenter, which is highly proprietary. VirtualCenter is not a Managed Services grade product able to run under multiple operating system platforms. The EISM or ESM product has traditionally been cross-platform, enabling Managed Service providers to manage servers, hypervisors, and application processes, all from a highly scalable central platform.

The Problem:

EMC is starting the process of crippling the managed services products in its portfolio, so that enterprise products can be emphasized through its VMWare subsidiary, and additional tools (which were formerly not required for monitoring) become a required purchased product.

In recent versions of EMC Ionix ESM (or EISM) - the Server Monitoring solution - VirtualCenter is required, out of the box, to manage ESXi platforms.

The Solution:

For Managed Service Providers, this is not an optimal solution. To revert to the "standards" based methodology of managing servers, SNMP VMWare Discovery can be re-enabled manually.
sun9999/root$ cd /opt/InCharge8/ESM/smarts/bin
sun9999/root$ sm_edit conf/esm/DISCOVERY_VMWARE.import

#----- Register VMware VCenter Probe with TopologyManager-------#
ICF_TopologyManager::ICF-TopologyManager
{
    # We get ASL error 'duplicate row' when we try to add a row again.
    # To avoid this ASL error, first remove (we don't get ASL error if not found)
    # a row then add it later.
    types -= { "VCenterDiscovery","Java Probe to Discover VMware VCenter" }
    types += { "VCenterDiscovery","Java Probe to Discover VMware VCenter" }
}
Remember to restart the ESM domain manager upon completion.
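The restart can be sketched with the SMARTS service utilities; the service and domain names below ("ic-esm-server" and "INCHARGE-ESM") are assumptions - substitute the names registered in your installation:

```shell
sun9999/root$ cd /opt/InCharge8/ESM/smarts/bin
sun9999/root$ ./sm_service stop ic-esm-server     # assumed service name
sun9999/root$ ./sm_service start ic-esm-server
sun9999/root$ ./dmctl -s INCHARGE-ESM ping        # verify the domain answers again
```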

Friday, February 17, 2012

EMC Ionix: Field Certification of VMWare ESXi 4.0

Abstract:

The standard protocol for managing systems is the Simple Network Management Protocol. Enterprise and Managed Services vendors must support SNMP to be considered a player in the data center. VMWare ESXi offers SNMP capabilities, but tools such as EMC Ionix ESM require a field certification in order to manage even the basic capabilities.

Field Certification:

The following commands are used to perform the field certification:
sun9999/root# cd /opt/InCharge8/IP/smarts/bin
sun9999/root# sm_edit conf/discovery/oid2type_Field.conf
The following entry should be added, in order to perform the field certification:
# VMware 4 ESX server (vSphere)
.1.3.6.1.4.1.6876.4.1 {
    TYPE = Host
    VENDOR = VMWare
    MODEL = ESX4.0
    CERTIFICATION = CERTIFIED
    CONT = Generic-MIB2
    HOSTRESRCS = MIB2

    INSTRUMENTATION:
        Disk-Fault = HostResources:DeviceID
        FileSystem-Performance = HostResources:DeviceID
        CPU/Memory = HostResources:DeviceID
        Interface-Fault = MIB2
        Interface-Performance = MIB2
}
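Once the entry is saved, the ESXi host's sysObjectID can be compared against the certified OID using any net-snmp client; the hostname and "public" community string below are assumptions:

```shell
sun9999/root# snmpget -v 2c -c public esxi-host.example.com sysObjectID.0
# The reply should contain .1.3.6.1.4.1.6876.4.1 for an ESXi 4.0 host,
# matching the oid2type_Field.conf entry above.
```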

Thursday, February 16, 2012

Shut Down EMC Ionix (Voyence) NCM Port

Ever try to shut down EMC Ionix (formerly Voyence) NCM (Network Configuration Manager) related TCP port services by disabling /etc/init.d scripts, only to find that there are still sockets being listened on?

The Problem

It was noted, on an NCM or Voyence platform, that a port was still being listened on.
sun9999/root# netstat -anf inet | grep 1029
*.1029 *.* 0 0 49152 0 LISTEN
Verify the Culprit

Was it really a part of EMC Ionix NCM or Voyence?
sun9999/root# telnet localhost 1029
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Welcome to EMC Proxy
Copyright (c) 2011 EMC Corporation

User Access Verification
Enter user name:
^]
telnet> quit
Connection to localhost closed.
Well, it appears that EMC is definitely the root cause.

Not a Start/Stop Script?

Since all the start/stop scripts were disabled from starting up, what else could be the cause?

Under modern UNIX systems, there is a service management facility.

Track Down the Service

Check the port against the registered services file.
sun9999/root# grep telnetproxy /etc/services
telnetproxy 1029/tcp # telnetproxy
Check Against Service Management Facility

EMC appeared nice enough to name the service consistently across the infrastructure.
sun9999/root# inetadm | grep telnetproxy
enabled   online   svc:/network/telnetproxy/tcp:default

sun9999/root# svcs -a | grep telnetproxy
enabled 18:22:21 svc:/network/telnetproxy/tcp:default
Where is the Executable for the Service?

The inet service can be interrogated to reveal the executable being run.
sun9999/root# inetadm -l svc:/network/telnetproxy/tcp:default
SCOPE NAME=VALUE
name="telnetproxy"
endpoint_type="stream"
proto="tcp"
isrpc=FALSE
wait=FALSE
exec="/usr/sbin/in.telnetproxy"
user="root"
default bind_addr=""
default bind_fail_max=-1
default bind_fail_interval=-1
default max_con_rate=-1
default max_copies=-1
default con_rate_offline=-1
default failrate_cnt=40
default failrate_interval=60
default inherit_env=TRUE
default tcp_trace=FALSE
default tcp_wrappers=FALSE
default connection_backlog=10


sun9999/root# ls -al /usr/sbin/in.telnetproxy
-rwxr-xr-x 1 root voyence 1151 Feb 7 18:18 /usr/sbin/in.telnetproxy

EMC was kind enough to set the group of the file so as to correctly identify its origin. It is safe to shut down this service.
sun9999/root# svcs svc:/network/telnetproxy/tcp:default
STATE STIME FMRI
online Feb_07 svc:/network/telnetproxy/tcp:default

sun9999/root# svcadm disable svc:/network/telnetproxy/tcp:default

sun9999/root# svcs svc:/network/telnetproxy/tcp:default
STATE STIME FMRI
disabled 18:22:21 svc:/network/telnetproxy/tcp:default

Verify the Telnet Proxy Disable

Check for the TCP port via netstat, to verify that disabling the service did the job.
sun9999/root# netstat -anf inet |grep 1029
sun9999/root#
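The hunt above generalizes to any stray port; here is a sketch of the same steps as a script, assuming a Solaris 10 platform with SMF and inetd-managed services:

```shell
#!/bin/sh
# Find and disable the SMF service listening on a given TCP port.
PORT=1029    # the stray port observed above

# Map the port to a registered service name in /etc/services
SVCNAME=`awk -v p="$PORT/tcp" '$2 == p { print $1 ; exit }' /etc/services`
[ -z "$SVCNAME" ] && { echo "port $PORT not in /etc/services" ; exit 1 ; }

# Locate the matching inetd-managed SMF instance
FMRI=`inetadm | awk -v s="$SVCNAME" '$NF ~ s { print $NF ; exit }'`
[ -z "$FMRI" ] && { echo "no inetd service matches $SVCNAME" ; exit 1 ; }

# Disable the service and verify the socket is gone
svcadm disable "$FMRI"
netstat -anf inet | grep "\.$PORT " || echo "port $PORT is closed"
```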

Tuesday, February 14, 2012

SSH: Auto-Login



Abstract:
When working in a clustering environment, it is often desirable to securely move data between platforms, or even forward individual application displays securely. The SSH protocol allows for such movement, but automatic login is a requirement for automation and scripting. This can be accomplished via pre-exchanged keys.

SSH Forwarding:
To set up SSH application TCP port forwarding, view the following "Solaris 10: SSH and Forwarding HTTP" document.

SSH Auto-Login:
Several steps need to be followed to create the local public key and transfer it to the remote host:

  1. Decide which remote host will receive the "ssh" connections:
    sun9999/user$ Host="sun1234"

  2. Create a minimal permission ".ssh" directory in the local host home directory
    sun9999/user$ cd ~ ; mkdir -m 700 .ssh

  3. Generate a key, such as an "rsa" key, on the local host.
    sun9999/user$ ssh-keygen -t rsa

  4. Ensure a minimal permission ".ssh" directory exists in the remote host home directory
    sun9999/user$ rsh ${Host} '[ ! -d .ssh ] && mkdir -m 700 .ssh'

  5. Copy the local "rsa" key to the ".ssh" directory on remote host: remhost
    sun9999/user$ cat .ssh/id_rsa.pub |
    rsh ${Host} 'cat >> .ssh/authorized_keys'

  6. Test the connection to the remote host, no password prompting should occur
    sun9999/user$ ssh ${Host} 'uname -n'
    sun1234
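If the "r" tools are unavailable, steps 4 and 5 can be collapsed into a single "ssh" invocation (the password will be prompted once, during the copy); a sketch:

```shell
sun9999/user$ cat ~/.ssh/id_rsa.pub | ssh ${Host} \
    '[ ! -d .ssh ] && mkdir -m 700 .ssh ; cat >> .ssh/authorized_keys'
```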


SSH: Auto-Login Debugging:
If password prompting is still occurring after the previous steps, one can use the "ssh -v" option in the test phase of step 6 above, in order to provide additional debugging verbosity.

A common error might be:
  • Failed to acquire GSS-API credentials for any mechanisms
    If the keys are properly created and login is still prompted for, ensure the remote host has "700" permissions on the ".ssh" directory and "755" permissions on $HOME.

  • Password prompting for root
    By default, "ssh" will not permit login as the "root" user. Of course, this creates a problem when trying to forward ports which are below 1024 (e.g. HTTP port tcp/80.) To correct:
    $Host/root# vi /etc/ssh/sshd_config
    PermitRootLogin yes
    $Host/root# svcadm restart ssh


Thoughts on Security:
Simple connectivity in a cluster can be done with the "r" tools ("rsh", "rcp", "rlogin".) Passwords are passed in the clear when a user types them at a prompt. Most critics advocate SSH as a more secure solution for clustering.

The "r" tools can also be set up for auto-login, in a clustered environment. This can be a reasonable alternative to the heavier "ssh" protocol, which burns CPU cycles on mandated end-to-end encryption, if data being passed is of little consequence.

Thoughts on Today's Date :
This article was published on "Saint Valentine's Day" - Happy Saint Valentine's Day to you!

Monday, February 13, 2012

Vonage and MSN Port Usage

Abstract:

Adding Voice over IP (VoIP) and Instant Messaging to a home is normally a simple process. The goal is often to increase communication while reducing telecommunications bills. Occasionally, there are problems with access which require troubleshooting, or more advanced features are desired. A user may need to understand the protocols in order to better maintain security and limit exposure to attacks by viruses and worms.

Vonage Voice Adapters

Vonage is a low-cost VoIP phone provider service. Normally, not much needs to be done, except plug in a device. Here are the protocols which are required.
Service   TCP          UDP           Notes
DNS       -            53            Name Resolution
TFTP      21,69,2400   -             Firmware Upgrade
HTTP      80           -             Configuration
SIP       -            5061          Pre-2005 Vonage devices
RTP       -            10000-20000   RTP (Voice) traffic

When a call is made, a random port between 10000 and 20000 is used for RTP (Voice) traffic. If any of these ports are blocked, you may experience one way or no audio.

Microsoft MSN and Windows Messenger

Microsoft provides various tools like MSN and Windows Messenger, but in order to get full functionality, users must occasionally forward ports through firewalls and expand exposure to worms and viruses. Use these very carefully.
Service                          TCP         UDP                     Notes
Windows Messenger - voice        -           2001-2120, 6801, 6901   Computer to Phone
MSN Messenger - file transfers   6891-6900   -                       Allows up to 10 simultaneous transfers
MSN Messenger - voice            6901        6901                    Voice communications computer to computer
MSN Messenger - text             1863        -                       Instant text messages

Knowing these ports may be helpful when you want to limit your environment's exposure to unfriendly viruses and worms.
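As an example of acting on the tables above, a perimeter firewall could restrict a Vonage adapter to only its required ports. A sketch in Linux iptables syntax follows; the adapter address (192.168.1.50) is an assumption for an illustrative network:

```shell
# Allow only the Vonage protocols out from the adapter, then drop the rest
VONAGE=192.168.1.50                                          # assumed adapter address
iptables -A FORWARD -s $VONAGE -p udp --dport 53 -j ACCEPT   # DNS
iptables -A FORWARD -s $VONAGE -p tcp -m multiport --dports 21,69,2400 -j ACCEPT  # firmware
iptables -A FORWARD -s $VONAGE -p tcp --dport 80 -j ACCEPT            # HTTP configuration
iptables -A FORWARD -s $VONAGE -p udp --dport 5061 -j ACCEPT          # SIP (pre-2005 units)
iptables -A FORWARD -s $VONAGE -p udp --dport 10000:20000 -j ACCEPT   # RTP voice
iptables -A FORWARD -s $VONAGE -j DROP
```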

Saturday, February 11, 2012

Solaris 10: SNMP Agent Hints

Trying to enable SNMP under Solaris 10?

A few important things to know:

Adjust your community strings and manager parameters:

sun9999/admin$ ls -al /etc/sma/snmp/snmpd.conf /etc/snmp/conf/snmpd.conf
-rw------- 1 root bin 3300 Feb 11 03:32 /etc/sma/snmp/snmpd.conf
-rw-rw-r-- 1 root nsm 2221 Mar 17 2009 /etc/snmp/conf/snmpd.conf

Ensure your service is online:

sun9999/admin$ svcs snmpdx
STATE STIME FMRI
online May_21 svc:/application/management/snmpdx:default

Ensure your daemons are running:

sun9999/admin$ ps -elf | grep snmp
0 S root 1330 1 0 40 20 ? 370 ? May 21 ? 0:04 /usr/lib/snmp/snmpdx -y -c /etc/snm
0 S root 1322 1 0 40 20 ? 1290 ? May 21 ? 6:01 /usr/sfw/sbin/snmpd
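With the daemons running, the agent can be exercised from the command line; the "public" community string below is an assumption - use the string set in your snmpd.conf:

```shell
sun9999/admin$ /usr/sfw/bin/snmpget -v 2c -c public localhost sysDescr.0
sun9999/admin$ /usr/sfw/bin/snmpwalk -v 2c -c public localhost system
```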

Monday, February 6, 2012

SPARC: Road Map Updated!



The SPARC Road Map has been experiencing updates at a tremendously accelerated pace over the past few years, with new SPARC releases either happening early, with higher performance, or with a combination of the two. It is quite exciting to see SPARC back in the processor game again!

SPARC T3 Launch: SPARC Road Map
The following SPARC road map was revealed after the 16 core SPARC T3 launch in Q4 2010.


Solaris 11 Launch: SPARC Road Map
During the Solaris 11 Launch in November 2011, the following was the SPARC road map, reminding the market of the 8 core T4 processor delivery, with the same performance as the former 16 core processor, and enhanced single-threaded performance.


It was also hinted that the SPARC T5 was ahead of schedule at Oracle - shipping in 2012.


Now, it is February 2012, and the SPARC road map has officially been adjusted (although exactly when the change occurred is unknown, since there was no official announcement.)


SPARC Road Map Analysis
Note the accelerated changes in the SPARC road map over the past few months:
  • The 8 socket T5? processor will perform well enough to replace the M series 8 socket platform in 2012 and be competitive to reach up to the 16 socket M series.
  • The next 8 socket T5+? processor will perform well enough to replace the M series 8 socket platform in 2013 and be competitive to reach up to the 16 socket M series.

Unified SPARC - T4 Release

It should be noted that Oracle released a unified T/M processor socket called the "SPARC T4" in Q4 2011, which performed as well as (or better than, depending on the metrics) the "SPARC T3" (with 128 threads per socket) released in 2010. The T4 halved the cores, doubled (or better) the speed of a T3 thread (yielding 64 threads per socket), and added a new option where thread speed could be 6x faster (with 8 threads per socket.)


Extrapolations and Remembrance
The M-Series was out-of-range for many smaller service providers, while the lower-end T series offered the price-performance to be competitive only with mid-range systems, where platform throughput mattered. The recently released T4 offered more competitive single-threaded speed, to eat away at lower-end open-systems market share. The next generation T-Series, expected later this year, will eat into the market share of the more expensive higher-end open-systems market with lower-cost higher-socket counts.

Oracle has already hinted that the T5 will have some of the features of the former RK or Rock processor (memory versioning looks like a relational memory interface.) The addition of hardware compression, columnar database acceleration, Oracle number accelerations, and low latency clustering (at the socket level) will make it a great accelerator for the Oracle RDBMS and Oracle MySQL databases - placing SPARC years ahead of POWER and proprietary x86. The competitive benefit to Network Management systems with large embedded databases (i.e. performance management) will be immense.

This is not the first time that adding accelerators gave SPARC a massive boost - the addition of crypto cores inside the T processors made it the fastest single socket HTTPS server on the market for years, and the highest performing contender for scalable encrypted polling engines (for managed service provider class network management vendors.) Non-competitive network management service providers avoided the encryption discussion around SSH and SNMPv3 because they could not "keep up", while competitive software providers out-shined their competition on SPARC. With the recent release of Intel's crypto instructions, that benefit is waning for brand new network management service providers. The compression algorithms, in conjunction with database accelerators, will have come "just-in-time".

Clearly, the investment in the S3 core provided Oracle with the breathing space it needed to unify the M and T series, starting with the lower-end SPARC T4 platforms. With the soon-to-be-released SPARC T5 platforms, Oracle will continue to consume the low-hanging fruit in the M series space (in addition to the AIX and HPUX space) with a high-performing SPARC core which scales to greater socket counts.

Final Thoughts
It appears pretty clear that "NetMgt.BlogSpot.COM" was the first to break the roadmap update news. Continue to manage your networks with obsession and security!

Friday, February 3, 2012

ZFS: Apple Enters Storage Arena



Abstract:
File systems have existed nearly as long as computing systems. First, systems used storage based upon tape solutions with serial access. Next came random block file access. Various filesystems were created, offering different capabilities, and eventually allowing a disk drive to be divided up into multiple logical slices. Volume managers arrived later on the scene, to aggregate disks below individual filesystems, to offer larger capacities. ZFS was created by Sun Microsystems for the purpose of erasing the distinction between volume manager and file system - to add flexibility that the divided pair could not easily achieve. Apple computers often have the need for massive data storage, but the native filesystem has been lacking - until ZFS became a possibility.

History:
Apple computers are the traditional workhorse for graphic design houses. They work with large media such as billboards and books with high resolution photographs... which all take a lot of space. As computers continued to advance, Apple knew it needed a real filesystem.

In 2007, Apple was originally intending to package ZFS into its MacOSX operating system and ship it with Leopard. This would have fixed a lot of problems experienced in the Macintosh environment, including the long time it takes to re-silver a mirrored set if someone kicks a power cable on a desktop USB drive, and would have enabled virtually unlimited expansion of a filesystem by merely adding disks.

Along came 2009, and Apple dumped ZFS. There was an outcry in the community, looking for a real filesystem under MacOSX, but Apple started looking for a new team to "roll their own" filesystem.


In 2011, Apple still had not developed a modern filesystem, and some of the people who had been porting ZFS to MacOSX decided to form their own startup - with the purpose of finishing the port of ZFS to MacOSX.


Enter Ten's Complement LLC

Here it is - 2012... a half-decade later, and Apple has been unable to release a modern filesystem. To be fair, nearly every other operating system has been equally incapable, including AIX, HPUX, Linux, and Windows. Interestingly, the former MacOSX developers finally released ZFS. The limited liability company Ten's Complement now offers a Single Disk edition and a Multiple Disk edition, and will offer a De-Duplication option for MacOSX in the near future.

Network Management Connection

With the arrival of ZFS, Apple MacOSX has finally made it into the realm of being a very viable platform for server applications. No longer will people need to use MacOSX as a client and buy a SPARC or Intel Solaris platform as a server to gain the benefits of ZFS. Common designers, video publishers, and media collectors can now just add the occasional multi-terabyte hard drive and keep on building their data collections with limited concern for failure - it will all be protected with parity, and old deletions can be easily rolled back.

With the addition of ZFS to MacOSX, expect to see more MacOSX platforms in small enterprises. The benefits of Solaris with the simplicity of MacOSX will surely be an awesome win for the computer community - which means Network Managers will need to take this into consideration as they roll out management platforms.