Sun / Oracle License Change - T2+ Discount!
Abstract
Oracle licenses its database by several factors, typically the Standard License (by socket) and the Enterprise License (by core scaling factor). Occasionally, Oracle will change the core scaling factor, resulting in a discount or a liability for the consumer.
The Platform
The OpenSPARC platform is an open sourced SPARC implementation where the specification is also open. There have been several series of chips based upon this implementation: T1, T2, and T2+. The T1 & T2 are both single socket implementations, while the T2+ is a multi-socket implementation.
The Discount
While reviewing the Oracle licensing PDF, the following information has come to light concerning the OpenSPARC processor line, in particular the Sun UltraSPARC T2+ processor.
Factor Vendor/Processor
0.25 SUN T1 1.0GHz and 1.2GHz (T1000, T2000)
0.50 SUN T1 1.4GHz (T2000)
0.50 Intel Xeon 74xx or 54xx multi-core series (or earlier); Intel Laptop
0.50 SUN UltraSPARC T2+ Multicore
0.75 SUN UltraSPARC T2 Multicore
0.75 SUN UltraSPARC IV, IV+, or earlier
0.75 SUN SPARC64 VI, VII
0.75 SUN UltraSPARC T2, T2+ Multicore
0.75 IBM POWER5
1.00 IBM POWER6, SystemZ
1.00 All Single Core Chips
Note: the 0.50 entry for the UltraSPARC T2+ is the new factor; the 0.75 entry grouping the T2 and T2+ together is the old one. Oracle has broken out the T2+ processor to a core factor of 0.50 instead of 0.75.
To see a copy of some of the old license factors, please refer to my earlier blog entry on the Oracle/IBM license change.
Impacts to Network Management infrastructure
To calculate your discount, see the table below. It is basically a 33% reduction for the Enterprise Edition of Oracle on the T2+ processor (a worked example follows the table).
Chips | Cores | Old Licenses | New Licenses |
---|---|---|---|
1 | 8 | 6 | 4 |
2 | 16 | 12 | 8 |
3 | 24 | 18 | 12 |
4 | 32 | 24 | 16 |
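As a quick sanity check of the 33% figure (my arithmetic, not from Oracle's document), take the two-socket row:
2 sockets x 8 cores/socket = 16 cores
16 cores x 0.75 (old factor) = 12 Enterprise licenses
16 cores x 0.50 (new factor) = 8 Enterprise licenses
(12 - 8) / 12 = 33% fewer licenses required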
If you have been waiting for a good platform to move your polling intensive workloads to, this may be the right time, since the T2+ has had its licensing liability reduced.
Wednesday, September 16, 2009
ZFS: Adding Mirrors
Abstract
Several articles have been written about ZFS including: [Managing Storage for Network Management], [More Work With ZFS], [Apache: Hack, Rollback, Recover, and Secure], and [What's Better, USB or SCSI]. This is a short article on adding a mirrored drive to an existing ZFS volume.
Background
A number of weeks ago, a 1.5 Terabyte external drive was added to a Sun Solaris 10 storage server. Tests were conducted to observe the differences between SCSI and USB drives, as well as UFS and ZFS filesystems. A mirror will now be added to that original disk.
Inserting a new USB drive into the system is the first step. If the USB drive is not recognized upon insertion, a discovery can be forced using the classic "disks" command, as the "root" user.
Ultra60-root$ disks
A removable (i.e. USB) drive can be labeled using the "expert" mode of the "format" command.
Ultra60-root$ format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 [SEAGATE-SX1181677LCV-C00B cyl 24179 alt 2 hd 24 sec 611]
/pci@1f,4000/scsi@3/sd@0,0
1. c0t1d0 [SEAGATE-SX1181677LCV-C00C cyl 24179 alt 2 hd 24 sec 611]
/pci@1f,4000/scsi@3/sd@1,0
2. c2t0d0 [Seagate-FreeAgent XTreme-4115-1.36TB]
/pci@1f,2000/usb@1,2/storage@4/disk@0,0
3. c3t0d0 [Seagate-FreeAgent XTreme-4115-1.36TB]
/pci@1f,2000/usb@1,2/storage@3/disk@0,0
This is what the pool appears to be before adding a mirrored disk:
Ultra60-root$ zpool status
pool: zpool2
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
zpool2 ONLINE 0 0 0
/dev/rdsk/c2t0d0 ONLINE 0 0 0
errors: No known data errors
Process
An individual slice can be added as a mirror to an existing disk through "zpool attach":
Ultra60-root$ zpool attach zpool2 /dev/rdsk/c2t0d0 /dev/dsk/c3t0d0s0
Verification
The result of adding a disk slice to create a mirror can be checked with "zpool status"
Ultra60-root$ zpool status
pool: zpool2
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 1h4m, 6.81% done, 14h35m to go
config:
NAME STATE READ WRITE CKSUM
zpool2 ONLINE 0 0 0
mirror ONLINE 0 0 0
/dev/rdsk/c2t0d0 ONLINE 0 0 0
c3t0d0s0 ONLINE 0 0 0
errors: No known data errors
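Once the resilver completes, a quick health check can be run (a minimal sketch, not part of the original session):
Ultra60-root$ zpool status -x     # should report "all pools are healthy" once resilvering has finished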
The consumption of CPU utilization during the resilver can be observed through "sar":
Ultra60-root$ sar
SunOS Ultra60 5.10 Generic_141414-09 sun4u 09/16/2009
00:00:00 %usr %sys %wio %idle
00:15:01 0 40 0 60
00:30:00 0 39 0 60
00:45:00 0 39 0 61
01:00:00 0 39 0 61
01:15:00 0 39 0 61
01:30:01 0 41 0 59
...
10:45:00 0 43 0 57
11:00:00 0 40 0 59
11:15:01 0 40 0 60
11:30:00 0 40 0 59
11:45:00 0 39 0 61
12:00:00 0 43 0 56
12:15:00 0 47 0 53
12:30:01 0 44 0 56
Average 0 39 0 60
If you are curious concerning the performance of the system during the resilvering process over the USB ports, there is the "zpool iostat" command:
Ultra60-root$ zpool iostat 2 10
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
zpool2 568G 824G 12 0 1.30M 788
zpool2 568G 824G 105 0 6.92M 0
zpool2 568G 824G 156 0 9.81M 7.48K
zpool2 568G 824G 157 1 10.1M 5.74K
zpool2 568G 824G 117 6 10.3M 11.5K
zpool2 568G 824G 154 5 10.1M 7.49K
zpool2 568G 824G 222 31 8.44M 36.7K
zpool2 568G 824G 120 13 8.45M 10.2K
zpool2 568G 824G 113 4 9.75M 8.99K
zpool2 568G 824G 120 5 9.48M 11.0K
Conclusion
The above session demonstrates how a whole external USB device was used to create a ZFS pool and an individual slice from another USB device was used to mirror an existing pool.
Now, if I could just get this Seagate FreeAgent Xtreme 1.5TB disk to be recognized by some system using FireWire (no, it cannot be used reliably on an old Mac G4, a dual G5, an Intel dual core Mac, or a dual processor SPARC Solaris platform) - I would be much happier than using USB.
Monday, September 14, 2009
Solaris Containers vs VMWare and Linux
I saw an interesting set of benchmarks today - two similarly configured boxes with outstanding performance differences.
SAP Hardware: Advantage Linux
Two SAP benchmarks were released - one under Solaris while the other was under Linux.
2009034: Sun Fire x4270, Solaris 10, Solaris Container as Virtualization, 8vCPUs (half the CPUs available in the box), Oracle 10g, EHP4: 2800 SD-User.
2009029: Fujitsu Primergy RX 3000, SuSE Linux Enterprise Server 10, VMWare ESX Server 4.0, 8vCPUs (half the CPUs available in the box), MaxDB 7.8, EHP4: 2056 SD-User.
SAP Benchmark: Results
What were the results?
Vendor | Server OS | Partitioning | RDBMS | Memory |
---|---|---|---|---|
Oracle/SUN | Solaris | Zones | Oracle 10g | 48 Gigabytes |
Novell/SuSE | Linux | VMWare | MaxDB 7.8 | 96 Gigabytes |
Benchmark | Solaris | Linux | Result |
---|---|---|---|
Users | 2,800 | 2,056 | Solaris 36% more users |
Response | 0.97s | 0.98s | Solaris 1% greater responsiveness |
Line Items | 306,330/hr | 224,670/hr | Solaris 36% greater throughput |
Dialog Steps | 919,000/hr | 674,000/hr | Solaris 36% greater throughput |
SAPS | 15,320 | 11,230 | Solaris 36% greater performance |
Avg DB Dialog | 0.008 sec | 0.008 sec | tie! |
Avg DB Update | 0.007 sec | 0.012 sec | Solaris 71% faster updates |
SAP System Advantage: Solaris
VMWare has offered highly functional virtualization under Intel & AMD platforms for some time, but there are alternatives.
- Solaris has yielded a significantly higher performance solution on multiple platforms (Intel, AMD, and SPARC) for years
- The Solaris server required half the RAM of the Linux server, while achieving higher performance
- A single OS solution (Solaris 10) offers greater security vs a multiple OS solution (VMWare Hypervisor in conjunction with SuSE Linux)
- When partitioning servers, database license liability (under Oracle) can be reduced with Solaris Containers, while it cannot be reduced under VMWare (a minimal zone-creation sketch follows this list).
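For readers unfamiliar with Solaris Containers, the following is a minimal sketch of creating one on Solaris 10. The zone name, zone path, and prompt are hypothetical, and this is not the configuration used in the benchmark.
global-root$ zonecfg -z webzone
zonecfg:webzone> create
zonecfg:webzone> set zonepath=/zones/webzone
zonecfg:webzone> commit
zonecfg:webzone> exit
global-root$ mkdir -p /zones/webzone ; chmod 700 /zones/webzone   # zonepath must exist with mode 700
global-root$ zoneadm -z webzone install                           # populate the zone's filesystems
global-root$ zoneadm -z webzone boot                              # boot the zone
global-root$ zlogin -C webzone                                    # attach to the zone console for first-boot setup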
Labels:
Intel,
Linux,
SAP,
Solaris,
Solaris 10,
Solaris Containers,
SuSE,
Zones
Thursday, September 10, 2009
What's Better: USB or SCSI?
Abstract
Data usage and archiving are exploding everywhere. The bus options for adding storage increase often, with new bus protocols being added regularly. With systems so prevalent throughout businesses and homes, when should one choose a different bus protocol for accessing the data? This set of tests pits some older mid-range internal SCSI drives against a brand new massive external USB drive.
Test: Baseline
The Ultra60 test system is a Sun UltraSPARC II server, running dual 450MHz CPU's and 2 Gigabytes of RAM. Internally, there are two 80 pin 180 Gigabyte SCSI drives. Externally, there is one 1.5 Terabyte Seagate FreeAgent Xtreme drive. A straight "dd" will be done from a 36 Gigabyte root slice to the internal drive and to the external disk.
Test #1a: Write Internal SCSI with UFS
The first copy was to an internal disk running UFS file system. The system hovered around 60% idle time with about 35% CPU time pegged in the SYS category, the entire time of the copy.
Ultra60-root$ time dd if=/dev/dsk/c0t0d0s0 of=/u001/root_slice_0
75504936+0 records in
75504936+0 records out
real 1h14m6.95s
user 12m46.79s
sys 58m54.07s
Test #1b: Read Internal SCSI with UFS
The read back of this file was used to create a baseline for other comparisons. The system hovered around 50% idle time with about 34% CPU time pegged in the SYS category, the entire time of the copy. About 34 minutes was the span of the read.
Ultra60-root$ time dd if=/u001/root_slice_0 of=/dev/null
75504936+0 records in
75504936+0 records out
real 34m13.91s
user 10m37.39s
sys 21m54.72s
Test #2a: Write Internal SCSI with ZFS
The internal disk was tested again using the ZFS file system, instead of UFS file system. The system hovered around 50% idle with about 45% being pegged in the sys category. The write time lengthened about 50%, using ZFS.
Ultra60-root$ time dd if=/dev/dsk/c0t0d0s0 of=/u002/root_slice_0
75504936+0 records in
75504936+0 records out
real 1h49m32.79s
user 12m10.12s
sys 1h34m12.79s
Test #2b: Read Internal SCSI with ZFS
The 36 Gigabyte read under ZFS took about 50% longer than under UFS. The CPU capacity was not strained much more, however.
Ultra60-root$ time dd if=/u001/root_slice_0 of=/dev/null
75504936+0 records in
75504936+0 records out
real 51m15.39s
user 10m49.16s
sys 36m46.53s
Test #3a: Write External USB with ZFS
The third copy was to an external disk running ZFS file system. The system hovered around 0% idle time with about 95% CPU time pegged in the SYS category, the entire time of the copy. The copy consumed about the same amount of time as the ZFS copy to the internal disk.
Ultra60-root$ time dd if=/dev/dsk/c0t0d0s0 of=/u003/root_slice_0
75504936+0 records in
75504936+0 records out
real 1h52m13.72s
user 12m49.68s
sys 1h36m13.82s
Test #3b: Read External USB with ZFS
Read performance is slower over USB than it is over SCSI with ZFS. The time is 82% slower than the UFS SCSI read and 21% slower than the ZFS SCSI read. CPU utilization is also slightly higher with USB (about 10% less idle time with USB than with SCSI).
Ultra60-root$ time dd if=/u003/root_slice_0 of=/dev/null
75504936+0 records in
75504936+0 records out
real 1h2m50.76s
user 12m6.22s
sys 42m34.05s
Untested Conditions
FireWire and eSATA were attempted, but these bus protocols would not work reliably with the Seagate FreeAgent Xtreme 1.5TB drive under any platform tested (several Macintoshes and SUN workstations). If you are interested in a real interface besides USB, this external drive is not the one you should be investigating - it is a serious mistake to purchase.
Conclusion
The benefits of ZFS do not come without a cost in time. Reads and writes are about 50% slower, but the cost may be worth it for the benefits: unlimited snapshots, unlimited file system expansion, error correction, compression, 1 or 2 disk failure tolerance, future 3 disk failure tolerance, future encryption, and future clustering.
If you are serious about your system performance, SCSI is definitely a better choice over USB to provide throughput with minimum CPU utilization - regardless of file system. If you have invested in CPU capacity and have capacity to burn (i.e. a multi-core CPU), then buying external USB storage may be reasonable over purchasing SCSI.
Labels:
disk,
drive,
Performance,
SCSI,
Solaris,
Solaris 10,
SPARC,
Sun,
UFS,
Ultra60,
UltraSPARC,
UltraSPARC II,
USB,
ZFS
Tuesday, September 8, 2009
IBM: Sun Best in OS Vulnerabilities Reporting and Patching - 2009-1H
I know what you are thinking: IBM thinks Sun outperformed the rest of the market in regard to OS security?
Apparently, in the 1st half of 2009, IBM commends Sun for security above all other competitors, even their own coders and product partners!
Sun is the best at sharing information about its operating system's vulnerabilities and patching them, reports IBM's "X-Force 2009 Mid-Year Trend and Risk Report." This analysis of various online threats and vulnerabilities examined statistics for the first half of 2009.
By what metrics did IBM measure?
Solaris had only 26 percent of the total number of OS vulnerabilities... Microsoft had the most ... with 39 percent of the total.
But this was not the only metric...
Sun's patch rate also was deemed impressive with only four percent left unpatched. "For the vast number of disclosures Sun makes, they have an impressive patch rate (only four percent left unpatched)"... The average patch rate within the industry is 49 percent. Sun's four percent rate tops Apple's 18 percent and Microsoft's 17 percent.
This is fairly eye opening to the industry - Sun is clearly better at controlling its own destiny with Solaris than the competitors.
Labels:
IBM,
patching,
Security,
Solaris,
Solaris 10,
Sun,
vulnerability
Apache: Hack, Rollback, Recover, and Secure
Apache: Hack
It was a bad day for the Apache team - apache.org was hacked on 2009-08-28 through its CentOS Linux servers. CentOS is a RedHat-derived Linux distribution bundled with the RedHat Package Manager.
Apache: Rollback
When your system has been hacked, what would be your first choice to recover?
Go to an on-line backup? (How do you know it was not also compromised?)
Go to a tape backup? (How do you know it was not also compromised?)
How far back do you go? (Do you only keep 3 backups?)
Do you re-build from scratch?
aurora.apache.org runs Solaris 10, and we were able to restore the box to a known-good configuration by cloning and promoting a ZFS snapshot from a day before the CGI scripts were synced over. Doing so enabled us to bring the EU server back online, and to rapidly restore our main websites.
The Apache team was very fortunate - they implemented Sun SPARC Solaris ZFS. They were able to roll back to a snapshot and recover.
Apache: Recover
It was mentioned that the ZFS snapshot was used to ultimately recover the web site. How does ZFS offer this type of capability?
ZFS is a 128 bit unified file system and volume manager. It offers virtually unlimited volume size... and virtually unlimited snapshots. Any production environment exposed to the internet should use ZFS as a best practice, to be able to quickly revert to a pre-corruption state.
For example, you can schedule a snapshot of your system every day, or every 15 minutes (if you want!), and you can hold these snapshots for a week, with virtually no overhead. At any point in time, you can drop back to a previous release, just as the apache.org foundation decided to do.
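As an illustration of how inexpensive a snapshot is to take and keep, here is a minimal sketch; the prompt, pool, and dataset names are hypothetical and are not from apache.org's actual environment.
solaris-root$ zfs snapshot zpool2/www@2009-08-27   # snapshot a hypothetical "www" filesystem
solaris-root$ zfs list -t snapshot                 # list snapshots; they consume no space until data diverges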
What exactly is a "snapshot"? The zfs manual page reads:
A read-only version of a file system or volume at a given point in time. It is specified as filesystem@name or volume@name.
The process of taking an old snapshot and making it writable is called a "clone". The manual page reads:
A clone is a writable volume or file system whose initial contents are the same as another dataset. As with snapshots, creating a clone is nearly instantaneous, and initially consumes no additional space.
This "clone" can be "promoted" so as to become the master version of the file system volume, erasing the old content. This is also described in the zfs manual page:
Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The origin property exposes this dependency, and the destroy command lists any such dependencies, if they exist.
The clone parent-child dependency relationship can be reversed by using the promote subcommand. This causes the “origin” file system to become a clone of the specified file system, which makes it possible to destroy the file system that the clone was created from.
This is basically the process that Apache.org used to recover their web servers.
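To make the rollback concrete, here is a minimal sketch of the snapshot / clone / promote cycle described above. The dataset names are hypothetical; this is not the exact procedure apache.org ran.
solaris-root$ zfs clone zpool2/www@2009-08-27 zpool2/www_restore   # writable copy of the known-good snapshot
solaris-root$ zfs promote zpool2/www_restore                       # make the clone independent of its origin
solaris-root$ zfs rename zpool2/www zpool2/www_compromised         # move the corrupted file system aside
solaris-root$ zfs rename zpool2/www_restore zpool2/www             # serve the restored copy in its place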
Apache: Secure
Securing those web servers is a different story. SSH is no magic bullet - this was also compromised. There is no magic bullet in the open-source world. Different open-source communities have different certification processes. A Linux kernel may come up, be slowly accepted into a distribution, with patches made along the way by the original kernel team as well as by a separate distribution company.
One of the sections of the apache.org incident write-up was about positive lessons:
- The use of ZFS snapshots enabled us to restore the EU production web server to a known-good state.
- Redundant services in two locations allowed us to run services from an alternate location while continuing to work on the affected servers and services.
- A non-uniform set of compromised machines (Linux/CentOS i386, FreeBSD-7 amd_64, and Solaris 10 on sparc) made it difficult for the attackers to escalate privileges on multiple machines.
While your front-end Linux boxes may get hacked, diversifying your infrastructure with an additional (secure) OS, and using a real unified file system & volume management system like ZFS under Solaris, makes the hackers struggle while providing options to everyday system administrators.
IBM recently acknowledged Sun Solaris as being best-in-class in security - something to keep in mind.
Be cautious of other vendors like Microsoft, with long outstanding security holes in their IIS web serving software or security issues which they refuse to fix. These are not good candidates for customer facing systems - for obvious reasons. Imagine gaining access to the IIS server and just querying the usernames and passwords from the embedded MSSQL server - oh, the humanity!
Labels:
apache,
apache.org,
CentOS,
hacked,
IIS,
Linux,
Microsoft,
RedHat,
Solaris,
Solaris 10,
SQL Server,
ZFS
Future of Storage: Flash
Abstract
With storage sizes ever increasing, costs decreasing, and performance improving - it seems like Flash storage will soon be a winner in the Managed Services arena. Others are writing about this technology, such as StorageMojo and The Register. Understanding where the technology is going is a good start, but understanding your technological bottlenecks is required for application.
What's New?
The flash DIMM format is an opportunity to significantly change the computing industry for the long term. Pictured below is a Sun Flash DIMM, in the form factor of common laptop memory.
Since hard disk drives are normally always spinning and generating massive quantities of heat, failure rates have been high. High failure rates necessitated easy access in arrays through front and rear chassis access. Removing the heat generation and mechanical movement increases reliability - so why would one need to suck up space in the front or rear of a rack when the center of the rack mount unit is mostly empty?
DIMM is the way to go.
What's Up and Coming?
Sun systems designer Andy von Bechtolsheim discussed the use of flash a quarter ago, both in the regular market, as well as in future Sun products.
One example of a core building block was a 4 Terabyte 1U high storage unit.
While 4 Terabytes in a 1U high rack space may not make people jump for joy, the news is really around the benefits for a dramatic increase in performance. When one can increase performance 100x, use 1/100th the rack space, and use orders of magnitude less power - this can drive change in any business.
Sun briefly posted a PDF of the F5100 storage platform, but this document was pulled. Google still has the HTML version of the "Sun Storage F5100 Flash Array Getting Started Guide" document, although it is fairly stripped of visual content and structure. You can see from the HTML that it was published in July of 2009.
What's Here Today
What is very comforting is that this up & coming technology is already supported by standard storage management tools - Sun StorageTek Common Array Manager (CAM).
The F5100 Flash Array is steadily appearing in more standard Sun documentation, for example the "Sun StorageTek Common Array Manager User Guide for Open Systems". There is also a PDF of this guide (as well as others) available, for common consumption.
Application in Network Management
For Open Source polling & graphing software, the I/O becomes prohibitive in large installations. The need for many spindles in order to keep up with the data read & write rates had created the architectural need to split the database from the polling software. With massive quantities of data coming into a database at very rapid & regular polling rates, the need for archiving this data becomes increasingly important, but secondary to performance, since massive numbers of spindles will leave high capacity drives mostly empty.
What would happen if the artificial need for breaking storage away from the pollers disappeared?
The architecture could simplify and re-consolidate onto a single server with multiple [virtual] pollers. The result would be reduced system complexity (fewer servers, switch ports, physical ports, drive interconnects, etc.), increased performance (eliminating the need for massive external storage), increased reliability (fewer moving mechanical parts), and overall decreased costs.
Labels:
Array,
CAM,
Common Array Manager,
F5100,
Flash,
StorageTek,
Sun
Microsoft IIS Vulnerabilities Across Releases
New IIS attacks (greatly) expand number of vulnerable servers
The Register published a short article of concern for those of us in the Network Management industry, where we run customer or internet facing platforms for report delivery.
Microsoft continues to say that IIS5 running on Windows 2000 appears to be the only version that is vulnerable to attacks that can remotely execute malicious code on an underlying server. But it's now clear that hackers can target every version of IIS to cause denial-of-service attacks.
If you have a current or legacy IIS server - this may place your installation at risk. This is a piece of old code, meaning that historical code that you have not touched for a while will be at risk. The risk centers around the industry standard FTP protocol, one of the backbone protocols of the internet.
If Microsoft is not releasing patches for your old release of IIS, it is time to think about replacing that old portal.
Wednesday, September 2, 2009
Microsoft rejects call to fix SQL password-exposure risk
Abstract
Most serious Managed Services Element Management Platforms, which depend on external databases, traditionally do not rely on databases such as Microsoft SQL Server. This article illustrates one of the reasons: security.
The Problem
"Applications go to great lengths to obfuscate passwords when they are needed within the software, and should not store passwords as 'clear text,' either in memory (as is the case with this vulnerability) or on disk," Sentrigo's advisory stated.The Response
Microsoft has rejected the company's calls to change the way the software handles passwords, saying people with administrative rights already have complete control of the system anyway.
"Microsoft has thoroughly investigated claims of vulnerabilities in SQL Server and found that these are not product vulnerabilities requiring Microsoft to issue a security update," a spokesman wrote in an email. "An attacker who has administrative rights already has complete control of the system and can install programs; view, change, or delete data; or create new accounts with full user rights."What this means to Network Management
The problem with passwords being stored in the clear is not just that an infected system could have its data destroyed, but rather that other systems that work with the infected system could be infected as well!
Of course, behaviors like this are rampant with Day-0 Exploits, Microsoft SQL Worms, Microsoft Windows Viruses, etc. Another place for malware to harvest passwords is just one more reason not to implement such a system in an area where customer managed devices are routable.
If a system is storing passwords for thousands of managed systems in the clear, an infection of a central system could be disastrous for the managed customer edge devices.
A developer in a company may have the option to secure passwords or not - but if that company ever has to meet a PCI audit and the vendor does not offer that option, then the company providing the managed services is placed at tremendous risk.