Thursday, October 15, 2009
Don't Yell In Your Data Center
Yes, loud noise in your data center causes vibration, which can increase latency.
Only Sun's Analytics has been able to demonstrate this in real time.
Network Management Connection
Network Management is all about Fault, Performance, and Configuration Management.
Why has no one else shipped performance management software for their storage units on par with what Sun has?
Tuesday, October 13, 2009
Sun Takes #1 Spot in TPC-C Benchmarks!
Abstract
Sun has long participated in benchmarks. Some benchmarks have been left idle by Sun for many years. Sun has released a new TPC-C benchmark result, using a cluster of T2+ servers, earlier than advertised.
An interesting blog on the topic
Interesting Observations
- Order of magnitude fewer racks to produce a faster solution
- Order of magnitude fewer watts per 1000 tpmC
- Sun's 36 sockets to IBM's 32 sockets
- 10 GigE & FC instead of InfiniBand
- Intel-based OpenSolaris storage servers, instead of AMD "Thumper"-based servers
- The order-of-magnitude improvements in space and power consumption were obviously more compelling to someone than shooting for an order-of-magnitude improvement in performance
- The performance could have been higher by adding more hosts to the RAC configuration, but the order-of-magnitude comparisons would be lost
- The cost savings for the superior-performing SPARC cluster are dramatic: fewer hardware components to maintain, lower HVAC costs, lower UPS costs, lower generator costs, lower cabling costs, and lower data center square-footage costs
- The pricing per SPARC core is still too high for the T2 and T2+ processors, in comparison to the performance of competing sockets
- The negative hammering by a few internet posters about the Sun OpenSPARC CoolThreads processors not being capable of running large databases is finally put to rest
- a more scalable SMP solution, though this clustered solution will expand better in a horse race with IBM
- a full Sun QDR InfiniBand configuration
- a full end-to-end 10GigE configuration
- the T2 with embedded 10GigE clustered, instead of the T2+ with the 10GigE card
Labels: Benchmark, OpenSolaris, OpenSPARC, Performance, Solaris, Sun, T2, T2+, TPC, TPC-C, UltraSPARC II
Thursday, October 8, 2009
IBM fends off T3 as Apple fends off Psystar; Future of Computing
The Woes of IBM
The Department of Justice is investigating IBM at the urging of the CCIA. IBM has also refused, in the past, to license its software to run on competing mainframes. European rack server company T3 now appears to be the latest aggressor.
The Woes of Apple
Apple has been defending its right to license its Mac OS X software only on Apple hardware for some time. The latest upstart, Psystar (based in the southern United States), ironically released a new line of products referred to as the "Rebel Series".
An Odd Turn of Events
What makes these cases so interesting is a more recent western U.S. court ruling against software company Autodesk, which leans in the direction that software is not licensed, but owned.
Impacts to the Industry
As people turn to Open Source software to reduce their business costs, providers have been bundling their software with hardware at near-free prices in order to stay competitive.
Now, with hardware under attack for this practice, vendors will not be able to hide the cost of software creation. If the traditionally liberal U.S. Western court ruling is not overturned, the computing industry is at tremendous risk.
Basically, no one will pay the salaries of good software designers, and hardware designers will continually have their products knocked off by clone manufacturers... leaving the industry in a place where innovation may suffer, because no one will reward innovation with a salary.
What Could Be Next?
If no one will pay for software and hardware innovation on commodity hardware, where might people go to secure their investment?
Very possibly, the next turn could be back to proprietary platforms. If an innovative software solution is only available on a proprietary OS, which is in turn available only on a proprietary hardware platform, there is a guaranteed return on investment... regardless of whether the software is licensed or purchased.
This is not very good news, for the industry.
On the other hand, this could lead the industry back to a period of Open Computing, where there was choice between hardware vendors, OS vendors, and software vendors... all according to open standards and open APIs that were cooperatively created by heterogeneous industry groups like X.Org, The Open Group, POSIX, and Open Firmware.
It is very possible that the liberal U.S. Western Federal Court ruling could be overturned (as happens quite often), once justices or the U.S. Congress realize that they may be pushing a huge industry into non-existence, or causing the balkanization of this industry back into medieval fiefdoms.
It could also mean the end of commercial software, if the court case is not overturned. Programmers could be turned into perpetual freelancers in areas where labor is expensive.
In areas where labor is less expensive, programmers could be treated as little more than factory workers: hire your factory worker by the hour to finish your project, and then just release them. They will seldom see more than a segment of code, and will never gain the experience to really architect a complete solution.
It could also mean the end of high-quality, generic open-source software. This is, perhaps, the most ironic result. If software has no value, since few will have the means to pay for it, innovative programmers may choose to keep their software closed-source in order to survive.
As much as people do not like licensing terms, the alternative is somewhat stark.
Wednesday, October 7, 2009
ZFS: The Next Word
Abstract
ZFS is the latest in disk and hybrid storage pool technology from Sun Microsystems. Unlike competing 32-bit file systems, ZFS is a 128-bit file system, allowing for nearly limitless storage boundaries. ZFS is not a stagnant architecture but a dynamic one, where changes happen often in the open source code base.
What's Next in ZFS?
Jeff Bonwick and Bill Moore gave a presentation at the Kernel Conference Australia 2009 on what is happening next in ZFS. A lot of the features were driven by the Fishworks team, as well as by the Lustre clustering file system team.
What are the new enhancements in functionality?
- Enhanced Performance
  Enhancements all over the system.
- Quotas on a Per-User Basis (sketch below)
  ZFS has always had quotas on a per-filesystem basis; the original thinking was that each user would get a file system, but this does not scale well to thousands of users with many existing management tools.
  Works with industry-standard, POSIX-based UIDs and names.
  Works with Microsoft SMB SIDs and names.
- Pool Recovery
  Disk drives often "out-right lie" to the operating system when they re-order the writing of blocks.
  Disk drives often "out-right lie" to operating systems when they acknowledge a "write barrier", indicating that the write was completed when it was not.
  If there is a power outage in the middle of the write, even after a "write barrier" was issued, the drive will often silently drop the "write commit", making the OS think the writes were safe when they were not, resulting in pool corruption.
  Simplification in this area: during a scrub, go back to an earlier uber-block and correct the pool... and never over-write a recently changed transaction group when writing a new transaction.
- Triple-Parity RAID-Z (sketch below)
  Double-parity RAID-Z has been around from the beginning (i.e. lose 2 out of 7 drives).
  Triple-parity RAID-Z allows for the use of bigger, faster drives with higher bit error rates.
  Quadruple parity is on the way (i.e. lose 3 out of 10 drives).
- De-duplication (sketch below)
  This is a very nice capacity enhancement for application, desktop, and server virtualization.
- Encryption
- Shadow Migration (aka Brain Slug?)
  Pull out that old file server and replace it with a ZFS [NFS] server without any downtime.
- BP Rewrite & Device Removal
- Dynamic LUN Expansion (sketch below)
  Before, if a larger drive was inserted, the default behavior was to resize the LUN.
  During a hot-plug, tell the system administrator that the LUN has been resized.
  A property was added to make LUN expansion automatic or manual.
- Snapshot Hold Property (sketch below)
  Enter an arbitrary string for a tag and hold the snapshot; a destroy issued against the held snapshot is deferred, and completes once an "unhold" is done.
  Makes ZFS look sort of like a relational database with transactions.
- Multi-Home Protection
  If a pool is shared between two hosts, it works great as long as the clustering software is flawless.
  The Lustre team prototyped a heart-beat protocol on the disk to allow for multi-home protection inherent in ZFS.
- Offline and Remove a Separate ZFS Log Device (sketch below)
- Extend Underlying SCSI Framework for Additional SCSI Commands
  The SCSI "Trim" command allows ZFS to tell flash devices which areas are unused, reducing needless wear leveling and increasing the life and performance of flash.
- De-Duplicate in a ZFS Send-Receive Stream (sketch below)
  This is in the works, to make backups and restores more efficient.
- Hybrid Storage Pools
  Makes everything go (a lot) faster with a little cache (lower cost) and slower drives (lower cost).
  - Expensive (fast, reliable) mirrored enterprise SSD write cache for ZFS intent logging
  - Inexpensive consumer-grade SSD cache for block-level read caching in a ZFS Level 2 ARC
  - Inexpensive consumer-grade drives with massive disk storage potential and 5x lower energy consumption
- New Block Allocator
  The original allocator was an extremely simple 80-line code segment that works well on empty pools; it was finally re-engineered for performance as the pool gets full. ZFS will now use both algorithms.
- Raw Scrub
  Increases performance by running through the pool and metadata to validate checksums without uncompressing the data in each block.
- Parallel Device Open
- Zero-Copy I/O
  The folks in the Lustre cluster storage group requested and implemented this feature.
- Scrub Prefetch
  A scrub will now prefetch blocks to increase utilization of the disk and decrease scrub time.
- Native iSCSI (sketch below)
  This is part of the COMSTAR enhancements. Yes, this is there today under OpenSolaris, and it offers tremendous performance improvements and simplified management.
- Sync Mode (sketch below)
  NFS benchmarking in Solaris is shown to be slower than Linux, because Linux does not guarantee that a write to NFS actually makes it to disk (which violates the NFS protocol specification). This feature allows Solaris to use a "Linux" mode, where writes are not guaranteed, to increase performance at the expense of data integrity guarantees.
- Just-In-Time Decompression
  Prefetch hides the latency of I/O, but burns CPU. This allows prefetch to retrieve data without decompressing it until needed, saving CPU time and conserving kernel memory.
- Disk Drives with Higher Capacity and Less Reliability
  Formatting options to reduce error recovery on a sector-by-sector basis.
  30-40% improved capacity & performance.
  Increased ZFS error recovery counts.
- Mind-the-Gap Reading & Writing Consolidation
  Consolidate read gaps, so a single aggregate read can be used, reading data between adjacent sectors and throwing away the intermediate data; fewer I/Os allow for streaming data from drives more efficiently.
  Consolidate write gaps, so a single aggregate write can be used even if adjacent regions have a blank sector gap between them, streaming data to drives more efficiently.
- ZFS Send and Receive
  Performance has been improved using the same Scrub Prefetch code.
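Several of the features above can already be exercised from the command line. The sketches that follow are illustrations only: the pool name "tank" and all dataset, user, host, and device names are hypothetical. First, per-user quotas on a single shared file system, managed through the userquota property:

  # set and inspect a per-user quota on a shared file system (hypothetical names)
  zfs set userquota@alice=10g tank/home
  zfs get userquota@alice tank/home
  zfs userspace tank/home      # per-user usage, keyed by POSIX UID/name or SMB SID/name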
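Triple-parity RAID-Z is requested with the raidz3 vdev type at pool creation; a minimal sketch:

  # a vdev that survives the loss of any three drives
  zpool create tank raidz3 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
  zpool status tank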
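De-duplication, in OpenSolaris builds that carry the feature, is a per-dataset property; a sketch:

  # de-duplicate blocks written to a dataset of virtual machine images
  zfs set dedup=on tank/vmimages
  zpool get dedupratio tank    # observe the resulting space savings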
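Dynamic LUN expansion is governed by the pool's autoexpand property, with a manual path as well; a sketch:

  # grow automatically whenever an underlying LUN grows...
  zpool set autoexpand=on tank
  # ...or expand a single device by hand after replacing it with a larger LUN
  zpool online -e tank c2t0d0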
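The snapshot hold property tags a snapshot so that a destroy is deferred until the hold is released; a sketch:

  zfs snapshot tank/home@tuesday
  zfs hold keepme tank/home@tuesday        # arbitrary tag string
  zfs destroy -d tank/home@tuesday         # deferred while the hold exists
  zfs holds tank/home@tuesday              # list outstanding holds
  zfs release keepme tank/home@tuesday     # the deferred destroy now completes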
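Offlining and removing a separate ZFS intent log device, in pool versions that support it, looks roughly like this:

  zpool add tank log c3t0d0      # dedicated slog device for the ZFS intent log
  zpool offline tank c3t0d0      # take the log device offline...
  zpool remove tank c3t0d0       # ...or remove it from the pool entirely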
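De-duplicating a ZFS send stream surfaced later as a flag on zfs send in OpenSolaris builds; a hedged sketch of that form, with hypothetical host and pool names:

  zfs snapshot tank/home@backup
  zfs send -D tank/home@backup | ssh backuphost zfs receive backuppool/home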
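Native iSCSI through COMSTAR exports a ZFS volume as a SCSI logical unit; a rough sketch, with the returned GUID abbreviated:

  zfs create -V 100g tank/lun0                 # ZFS volume backing the LUN
  sbdadm create-lu /dev/zvol/rdsk/tank/lun0    # register it with the STMF framework
  stmfadm add-view 600144f0...                 # expose the logical unit (GUID shortened)
  itadm create-target                          # create the iSCSI target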
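The proposed sync mode eventually surfaced as a per-dataset "sync" property in later OpenSolaris builds; a sketch assuming that later form, not the 2009 prototype:

  zfs set sync=disabled tank/nfsshare    # Linux-like behavior: faster NFS numbers, weaker guarantees
  zfs set sync=standard tank/nfsshare    # default, protocol-compliant behavior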
The ZFS implementation in the Solaris 10 10/09 release actually includes some of the ZFS features detailed in the most recent conferences.