
Tuesday, December 25, 2012

New Technology: 2012 December Update


SSD prices are low—and they'll get lower. MeRAM is poised to supplant NAND flash memory...

IBM Integrates Optics onto Silicon...

Sun/Oracle receives patent 8,316,366 on Transactional Threading in November 2012... this came on the heels of a September 2011 paper on formally verifying transactional memory.

Silence from the Sun/Oracle VLSI group on Proximity Communications; research funding is due to expire in 2013. Is there a product in the future?

Samsung Spends $3.9bn on iPhone Chip Factory in Texas.

Texas Instruments to cut 517 OMAP Smartphone/Tablet Chip Manufacturing jobs in France.

AWS (Amazon Web Services) Hosting Server Retirement Notifications Wanting...

Microsoft Outlook 2013 Willfully Broken: Will Not Recognize .doc or .xls Files

Microsoft Windows 8: Hidden Backup & Clone Feature

Sunday, October 16, 2011

ZFS: A Multi-Year Case Study in Moving From Desktop Mirroring (Part 2)



Abstract:
ZFS was created by Sun Microsystems to innovate the storage subsystem of computing systems by simultaneously expanding capacity and security while collapsing the formerly striated layers of storage (i.e. volume managers, file systems, RAID, etc.) into a single layer, in order to deliver capabilities that would normally be very complex to achieve. One such innovation introduced in ZFS was the ability to add inexpensive, limited-life solid state storage (flash media), which may offer fast (or at least more deterministic) random read and write access, to the storage hierarchy in a place where it can enhance the performance of less deterministic rotating media. This paper discusses the use of various configurations of inexpensive flash to enhance the write performance of high capacity yet low cost mirrored external media with ZFS.

Case Study:
A particular Media Design House had formerly used multiple external mirrored drives on desktops, as well as racks of archived optical media, to meet their storage requirements. A pair of (formerly high-end) 400 Gigabyte Firewire drives lost a drive. An additional pair of (formerly high-end) 500 Gigabyte Firewire drives lost a drive within a month. A media wall of CDs and DVDs was becoming cumbersome to maintain.

First Upgrade:
A newer version of Solaris 10 was released, which included more recent features. The Media House was pleased to accept Update 8, with the possibility of supporting Level 2 ARC for increased read performance and Intent Logging for increased write performance.

The Media House did not see the need to purchase flash for read or write logging at this time. The mirrored 1.5 Terabyte SAN performed adequately.


Second Upgrade:
The Media House started becoming concerned about a year later, when 65% of their 1.5 Terabyte SAN storage had been consumed.
Ultra60/root# zpool list

NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zpool2  1.36T   905G   487G    65%  ONLINE  -
The decision to invest in an additional pair of 2 Terabyte drives for the SAN was an easy one. External Seagate Expansion drives were selected because of the reliability of the former drives and their built-in power management, which would reduce power consumption.

Additional storage was purchased for the network, but if there was going to be an upgrade, a major question remained: what kind of common flash media would perform best for the investment?


Multiple Flash Sticks or Solid State Disk?

Understanding that flash media normally has high write latency, the question on everyone's mind was: which would perform better, an army of flash sticks or a solid state disk?

This simple question started what became a testing rat hole: people often ask it, but the responses usually come from anecdotal assumptions. The media house was interested in the real answer.

Testing Methodology

It was decided that the copying of large files to/from large drive pairs was the most accurate way to simulate the day to day operations of the design house. This is what they do with media files, so this is how the storage should be tested.

The first set of tests surrounded testing the write cache in different configurations.
  • The USB sticks would each use a dedicated 400Mbit port
  • USB stick mirroring would occur across 2 different PCI buses
  • 4x consumer grade 8 Gigabyte USB sticks from MicroCenter were procured
  • Approximately 900 Gigabytes of data would be copied during each test run
  • The same source mirror was used: the 1.5TB mirror
  • The same destination mirror would be used: the 2TB mirror
  • The same Ultra60 Creator 3D with dual 450MHz processors would be used
  • The SAN platform was maxed out at 2 GB of ECC RAM
  • The destination drives would be destroyed and re-mirrored between tests
  • Solaris 10 Update 8 would be used
The Base System
# Check patch release
Ultra60/root# uname -a
SunOS Ultra60 5.10 Generic_141444-09 sun4u sparc sun4u


# check OS release
Ultra60/root# cat /etc/release
Solaris 10 10/09 s10s_u8wos_08a SPARC
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 16 September 2009


# check memory size
Ultra60/root# prtconf | grep Memory
Memory size: 2048 Megabytes


# status of zpool, show devices
Ultra60/root# zpool status zpool2
pool: zpool2
state: ONLINE
scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        zpool2        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c5t0d0s0  ONLINE       0     0     0

errors: No known data errors
The Base Test: No Write Cache

A standard needed to be created against which each additional run could be tested. This base test was a straight create and copy.

ZFS is tremendously fast at creating a mirrored pool. A 2TB mirrored pool takes only 4 seconds to create on an old dual 450MHz UltraSPARC II.
# Create mirrored pool of 2x 2.0TB drives
Ultra60/root# time zpool create -m /u003 zpool3 mirror c8t0d0 c9t0d0

real 0m4.09s
user 0m0.74s
sys 0m0.75s
The data to be copied with source and destination storage is easily listed.
# show source and destination zpools
Ultra60/root# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zpool2  1.36T   905G   487G    65%  ONLINE  -
zpool3  1.81T  85.5K  1.81T     0%  ONLINE  -

The copy of over 900 GB between the mirrored drive pairs takes about 41 hours.
# perform copy of 905GBytes of data from old source to new destination zpool
Ultra60/root# cd /u002 ; time cp -r . /u003
real 41h6m14.98s
user 0m47.54s
sys 5h36m59.29s
The time to destroy the 2 TB mirrored pool holding 900GB of data was about 2 seconds.
# erase and unmount new destination zpool
Ultra60/root# time zpool destroy zpool3
real 0m2.19s
user 0m0.02s
sys 0m0.14s
Another Base Test: Quad Mirrored Write Cache

The ZFS Intent Log can be split from the mirror onto higher throughput media, for the purpose of speeding writes. Because this is a write cache, it is extremely important that this media is redundant - a loss of the write cache can result in a corrupt pool and loss of data.

The first test was to create a quad mirrored write cache. With 2 GB of RAM, there is absolutely no way that the quad 8 GB sticks would ever have more than a fraction of their flash used, but the hope is that such a small amount of flash in use would allow the commodity sticks to perform well.

The 4x 8GB sticks were inserted into the system, discovered, and formatted (see this article for additional USB stick handling under Solaris 10), and the system was then ready to accept them for creating a new destination pool.

Creation of the 2TB mirror with a 4-way mirrored ZFS Intent Log took longer - 20 seconds.
# Create mirrored pool with 4x 8GB USB sticks for ZIL for highest reliability
Ultra60/root# time zpool create -m /u003 zpool3 \
mirror c8t0d0 c9t0d0 \
log mirror c1t0d0s0 c2t0d0s0 c6t0d0s0 c7t0d0s0
real 0m20.01s
user 0m0.77s
sys 0m1.36s
The new zpool clearly includes a 4-way mirrored log.
# status of zpool, show devices
Ultra60/root# zpool status zpool3
pool: zpool3
state: ONLINE
scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        zpool3        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c8t0d0    ONLINE       0     0     0
            c9t0d0    ONLINE       0     0     0
        logs
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     0     0
            c6t0d0s0  ONLINE       0     0     0
            c7t0d0s0  ONLINE       0     0     0

errors: No known data errors
No copy was done using the quad mirrored USB ZIL, because this level of redundancy was not needed.

A destroy of the 4 way mirrored ZIL with 2TB mirrored zpool still only took 2 seconds.

# destroy zpool3 to create without mirror for highest throughput
Ultra60/root# time zpool destroy zpool3
real 0m2.19s
user 0m0.02s
sys 0m0.14s
The intention of this setup was just to see if it was possible, ensure the USB sticks were functioning, and determine if adding an unreasonable amount of redundant ZIL to the system created any odd performance behaviors. Clearly, if this is acceptable, nearly every other realistic scenario that is tried will be fine.

Scenario One: 4x Striped USB Stick ZIL

The first scenario tested was the 4-way striped USB stick ZFS Intent Log. With 4 USB sticks, 2 sticks on each PCI bus and each stick on a dedicated USB 2.0 port, this should offer the greatest throughput from these commodity flash sticks, but the least protection from a failed stick.
# Create zpool without mirror to round-robin USB sticks for highest throughput (dangerous)
Ultra60/root# time zpool create -m /u003 zpool3 \
mirror c8t0d0 c9t0d0 \
log c1t0d0s0 c2t0d0s0 c6t0d0s0 c7t0d0s0
real 0m19.17s
user 0m0.76s
sys 0m1.37s

# list zpools
Ultra60/root# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zpool2  1.36T   905G   487G    65%  ONLINE  -
zpool3  1.81T    87K  1.81T     0%  ONLINE  -


# show status of zpool including devices
Ultra60/root# zpool status zpool3
pool: zpool3
state: ONLINE
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        zpool3        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c8t0d0    ONLINE       0     0     0
            c9t0d0    ONLINE       0     0     0
        logs
          c1t0d0s0    ONLINE       0     0     0
          c2t0d0s0    ONLINE       0     0     0
          c6t0d0s0    ONLINE       0     0     0
          c7t0d0s0    ONLINE       0     0     0
errors: No known data errors


# start copy of 905GB of data from mirrored 1.5TB to 2.0TB
Ultra60/root# cd /u002 ; time cp -r . /u003
real 37h12m43.54s
user 0m49.27s
sys 5h30m53.29s

# destroy it again for new test
Ultra60/root# time zpool destroy zpool3
real 0m2.77s
user 0m0.02s
sys 0m0.56s
The zpool creation took 19 seconds and the destroy almost 3 seconds, but the copy decreased from 41 to 37 hours - about a 10% savings... with no redundancy.

Scenario Two: 2x Mirrored USB ZIL on 2TB Mirrored Pool

Adding a quad striped ZIL offered a 10% boost with no redundancy; what if we added two pairs of mirrored USB ZIL sticks, to offer write striping for speed and mirroring for redundancy?
# create zpool3 with pair of mirrored intent USB intent logs
Ultra60/root# time zpool create -m /u003 zpool3 mirror c8t0d0 c9t0d0 \
log mirror c1t0d0s0 c2t0d0s0 mirror c6t0d0s0 c7t0d0s0
real 0m19.20s
user 0m0.79s
sys 0m1.34s

# view new pool with pair of mirrored intent logs
Ultra60/root# zpool status zpool3
pool: zpool3
state: ONLINE
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        zpool3        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c8t0d0    ONLINE       0     0     0
            c9t0d0    ONLINE       0     0     0
        logs
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c6t0d0s0  ONLINE       0     0     0
            c7t0d0s0  ONLINE       0     0     0
errors: No known data errors

# run capacity test
Ultra60/root# cd /u002 ; time cp -r . /u003
real 37h9m52.78s
user 0m48.88s
sys 5h31m30.28s


# destroy it again for new test
Ultra60/root# time zpool destroy zpool3
real 0m21.99s
user 0m0.02s
sys 0m0.31s
The results were almost identical: a 10% improvement in speed was measured. Splitting the commodity 8GB USB sticks into mirrored pairs offered redundancy without sacrificing performance.

If 4 USB sticks are to be purchased for ZIL, don't bother striping all 4, split them into mirrored pairs and get your 10% boost in speed.


Scenario Three: OCZ Vertex Solid State Disk

Purchasing 4 USB sticks for the purpose of a ZIL starts to approach the purchase price of a fast SATA SSD drive. On the UltraSPARC II processors, the drivers for SATA are lacking, so that is not necessarily a clear option.

The decision was made to test a USB-to-SATA conversion kit with the SSD and run a single SSD ZIL.
# new flash disk, format
Ultra60/root# format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
...
2. c1t0d0
/pci@1f,2000/usb@1,2/storage@4/disk@0,0
...


# create zpool3 with SATA-to-USB flash disk as intent log
Ultra60/root# time zpool create -m /u003 zpool3 mirror c8t0d0 c9t0d0 log c1t0d0
real 0m5.07s
user 0m0.74s
sys 0m1.15s
# show zpool3 with intent log
Ultra60/root# zpool status zpool3
  pool: zpool3
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zpool3      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c9t0d0  ONLINE       0     0     0
        logs
          c1t0d0    ONLINE       0     0     0

# run capacity test
Ultra60/root# cd /u002 ; time cp -r . /u003
real 32h57m40.99s
user 0m52.04s
sys 5h43m13.31s
The single SSD over a SATA to USB interface provided a 20% boost in throughput.

In Conclusion

Using commodity parts, a ZFS SAN's write performance can be boosted by about 10% using USB sticks and by about 20% using an SSD. The SSD is a more reliable device and the better choice for a ZIL.

Tuesday, July 19, 2011

Technical Posts for 1H July

Some interesting articles that I passed over recently, which have interesting implications to Network and Systems Management:


  • Seagate ships slim, fast Pulsar XT SSD
    Faster SSDs offer improvement opportunities for network management systems.
    Seagate is shipping its Pulsar XT.2 SSD with an SPC-1C benchmark rating, and has a second slower but higher capacity SSD coming soon. The 2.5-inch Pulsar XT.2 is available in up to 400GB capacities, has a 6Gbit/s SAS interface, and is built from fast single-level cell flash.


  • Energy scavenger eats leftover wireless signals
    Technology from GA-USA offers important possibilities for remote network probes.
    A group of researchers led by Manos Tentzeris at Georgia Tech are working on antennae that could scavenge stray wireless signals to power small sensors or microprocessors. If you’re close enough to a large radio transmitter, harvesting stray energy is pretty straightforward.


  • Cisco lays off 6,500 workers, execs; And sells off another 5,000 to Foxconn
    Network giant Cisco cutting staff indicates changes in the overall market.
    Networking giant Cisco Systems is going to get 11,500 employees smaller. After Wall Street closed today, Cisco said that it was going to cut 6,500 workers to get its costs more in line with its revenue streams, and added that it was selling off a set-top box manufacturing plant in Mexico with 5,000 employees to Chinese manufacturer Foxconn Technology Group.


  • Ahead of Apple Q3 earnings, NPD expects near record Mac sales
    More Apple hardware means more diversity in the Network Management arena.
    According to numbers from the NPD Group, the answer is yes. Piper Jaffray analyst Gene Munster reported on NPD's numbers in a note to investors on Monday (as seen by AppleInsider), noting that Mac sales were up by 12 percent year-over-year for every month in the quarter.


  • New fuel discovered that reversibly stores solar energy
    Solar energy is important for remote Network probes.
    Alexie Kolpak and Jeffrey Grossman from the Massachusetts Institute of Technology propose a new type of solar thermal fuel that would be affordable, rechargeable, thermally stable, and more energy-dense than lithium-ion batteries. Their proposed design combines an organic photoactive molecule, azobenzene, with the ever-popular carbon nanotube.


  • Oracle bestows SPARC T4 beta on 'select' customers
    The Gold-Standard platform in Network Management has received an upgrade.
    According to a blog post by Masood Heydari, vice president of hardware development at Oracle, the beta program will be available to a "select number of enterprises" – and as you might expect, the company is looking for enthusiastic shops that aim to use early access to Sparc T4 multi-core systems as a competitive advantage.
Enjoy the month!

Thursday, December 24, 2009

Security Summit November 2009: ZFS Crypto



The summit discussed security in large system installations, with speakers including the CTO, technical leaders, customers, and community members.

Kicking off the 4th session was a presentation on ZFS Crypto: Data encryption for local, NAS and SAN. The presentation slides are in PDF format.


ZFS Theme

The original overall theme behind the creation of ZFS had been "to create a reliable storage system from inherently unreliable components". This theme is now changing to "create a secured, reliable storage system from inherently unreliable components". Universal encryption in conjunction with data integrity had traditionally been considered "too expensive"... the implementation of ZFS helps to demonstrate that this may no longer be the case.

ZFS Data Integrity

All data in ZFS is written via a copy-on-write algorithm, meaning old data is not overwritten, providing guaranteed data integrity (as long as the underlying hardware does not "lie" when it says something was written). There is no RAID write hole in ZFS, and no journaling is required.

End to end checksums are used for user data as well as meta-data which describes the user data layout, protecting data end-to-end - from disks on remote storage all the way to the host.
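These end-to-end checksums can also be exercised on demand: a scrub walks every block in the pool and verifies it against its checksum. A minimal sketch, using a hypothetical pool name:
# walk every block in the pool and verify it against its checksum ("tank" is a hypothetical pool name)
zpool scrub tank
# report scrub progress and any checksum errors that were found
zpool status -v tank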

ZFS Commands

The ZFS command structure is centered around two basic commands (a short usage sketch follows the list):
  • zpool - controls storage pools
  • zfs - administer file systems, zvols, and dataset properties
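A minimal sketch of the division of labor between the two commands; the pool name, devices, and dataset names below are hypothetical:
# zpool manages the storage pool itself
zpool create tank mirror c8t0d0 c9t0d0
zpool status tank
# zfs manages datasets, zvols, and their properties within the pool
zfs create tank/projects
zfs set compression=on tank/projects
zfs list -r tank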
ZFS Crypto Requirements

The requirements driving the cryptography extensions to ZFS include:
  • Retain Copy-on-Write semantics
  • Integrate into ZFS admin model
  • Backward compatibility to existing ZFS pools
  • NIST 800-57 Key management recommendations
  • Key management delegation to: users, virtualized, and multi-level security environments
  • Flexible software encryption policy
  • Separate key use vs key change
  • Support software only solution 
  • Support single disk laptop use cases
  • SPARC, Intel, and AMD hardware support
  • Support hardware crypto (OpenSPARC T2, Sun PCI-Express CA-6000 cards)
  • Local and Remote Key Management
ZFS Encryption Policy

The encryption policy is applied at the ZFS dataset (looks like a file system) level; an illustrative example follows the list.
  • Encryption policy set on creation time of the data set
  • AES-128, AES-192, or AES-256 available initially
  • Encryption sets are to be extensible
  • Encryption and Key Management policies are both inherited and delegatable
  • Encryption Key is randomly generated
  • ZFS checksum is forced to SHA-256 for encrypted datasets
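As a rough illustration of the per-dataset policy, the sketch below follows the syntax the ZFS crypto project later shipped with in Solaris 11; the exact property values may differ from the slides, and the pool and dataset names are hypothetical.
# encryption is chosen at dataset creation time and cannot be enabled later
zfs create -o encryption=aes-256-ccm -o keysource=passphrase,prompt tank/secure
# the policy is visible (and inherited) as ordinary dataset properties
zfs get encryption,keysource,checksum tank/secure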
Key Management

The key management process is flexible
  • Wrapping keys can be done by user or admin via: passphrase, raw or hex key, or hardware like smart card
  • Wrapping key is inherited by child datasets
  • Clones share original dataset key, but can have new keys
Key Change or ReKey

The U.S. Government NIST 800-57 key management recommendations require a key change every 2 years; an illustrative command sketch follows the list.
  • Wrapping Key change does not re-encrypt old data
  • Changes just the Wrapping Key that users/admins provide
  • New data encryption key from change time forward
  • New property "rekeyed" to show time of last change
  • Key Change or ReKey is an on-line operation
  • Internal "libzfs"  C API and scriptable "zfs" interface for external key management
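An illustrative sketch of that scriptable interface, again following the syntax that later appeared in Solaris 11; the dataset name is hypothetical.
# change the wrapping key supplied by the user/admin; existing data is not re-encrypted
zfs key -c tank/secure
# rekey: generate a new data encryption key for data written from this point forward
zfs key -K tank/secure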
Where's the decrypted data?

Data in DRAM (primary cache) is decrypted; data in SSD (secondary cache) or on disk is encrypted. An illustrative tuning example follows the list.
  • ZFS Intent Log is always encrypted 
  • ZFS ARC cache holds large amounts of decrypted data (/dev/kmem privileges required to see it)
  • Decrypted data in the caches is controllable by dataset (or filesystem)
  • The "Primarycache" (DRAM) and "Secondarycache" (SSD) can be tuned to none, metadata, or all
Use Cases

Various "use cases" are listed in the presentation slides.

Wednesday, October 7, 2009

ZFS: The Next Word


Abstract

ZFS is the latest in disk and hybrid storage pool technology from Sun Microsystems. Unlike competing 32-bit and 64-bit file systems, ZFS is a 128-bit file system, allowing for nearly limitless storage boundaries. ZFS is not a stagnant architecture, but a dynamic one, where changes are happening often to the open source code base.

What's Next in ZFS?

Jeff Bonwick and Bill Moore did a presentation at The Kernel Conference Australia 2009 regarding what was happening next in ZFS. A lot of the features were driven by the Fishworks team as well as by the Lustre clustering file system.

What are the new enhancements in functionality?
  • Enhanced Performance
    Enhancements all over the system
  • Quotas on a per-user basis
    ZFS has always had quotas on a per-filesystem basis; it was originally thought each user would get a filesystem, but this does not scale well to thousands of users with many existing management tools (see the quota sketch after this list)
    Works with industry standard POSIX based UID's & Names
    Works with Microsoft SMB SID's & Names
  • Pool Recovery
    Disk drives often "out-right lie" to the operating system when they re-order the writing of blocks.
    Disk drives often "out-right lie" to operating systems when they acknowledge a "write barrier", indicating that the write was completed when it was not.
    If there is a power outage in the middle of a write, even after a "write barrier" was issued, the drive will often silently drop the "write commit", making the OS think the writes were safe when they were not - resulting in a corrupt pool.
    Simplification in this area - during a scrub, go back to an earlier uber-block and correct the pool... and never over-write a recently changed transaction group in the case of a new transaction.
  • Triple Parity RAID-Z
    Double parity RAID-Z has been around from the beginning (i.e. lose 2 out of 7 drives)
    Triple parity RAID-Z allows bigger, faster, higher-BER drives to be used
    Quadruple Parity is on the way (i.e. lose 3 out of 10 drives)
  • De-duplication
    This is very nice capacity enhancement with application, desktop, and server virtualization
  • Encryption
  • Shadow Migration (aka Brain Slug?)
    Pull out that old file server and replace it with a ZFS [NFS] server without any downtime.
  • BP Rewrite & Device Removal
  • Dynamic LUN Expansion
    Before, if a larger drive was inserted, the default behavior was to resize the LUN
    During a hot-plug, tell the system admin that the LUN has been resized
    Property added to make LUN expansion automatic or manual
  • Snapshot Hold property
    Enter an arbitrary string for a tag, issue the snapshot, then issue a destroy; the destroy completes only when an "unhold" is done (see the sketch after this list).
    Makes ZFS look sort of like a relational database with transactions.
  • Multi-Home Protection
    If a pool is shared between two hosts, works great as long as clustering software is flawless.
    The Lustre team prototyped a heart-beat protocol on the disk to allow for multi-home-protection inherent in ZFS
  • Offline and Remove a separate ZFS Log Device
  • Extend Underlying SCSI Framework for Additional SCSI Commands
    SCSI "Trim" command, to allow ZFS to direct less wear leveling on unused flash areas, to increase life and performance of flash
  • De-Duplicate in a ZFS Send-Receive Stream
    This is in the works, to make backups & restores more efficient
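A brief sketch of the per-user quota and snapshot hold interfaces as they appear in later Solaris/OpenSolaris releases; the pool, dataset, user, and tag names are hypothetical.
# per-user quota on a single filesystem, using a POSIX user name
zfs set userquota@alice=20G tank/home
zfs userspace tank/home
# hold a snapshot; a deferred destroy completes only after the hold is released
zfs snapshot tank/home@backup
zfs hold keep tank/home@backup
zfs destroy -d tank/home@backup
zfs release keep tank/home@backup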
Performance Enhancements include:
  • Hybrid Storage Pools
    Makes everything go (a lot) faster with a little cache (lower cost) and slower drives (lower cost); see the pool sketch after this list.
    - Expensive (fast, reliable) Mirrored SSD Enterprise Write Cache for ZFS Intent Logging
    - Inexpensive consumer grade SSD cache for block level Read Cache in a ZFS Level 2 ARC
    - Inexpensive consumer grade drives with massive disk storage potential with a 5x lower energy consumption
  • New Block Allocator
    This was an extremely simple 80-line code segment that works well on empty pools; it was finally re-engineered for performance when the pool gets full. ZFS will now use both algorithms.
  • Raw Scrub
    Increase performance by running through the pool and metadata to ensure checksums are validated without uncompressing data in the block.
  • Parallel Device Open
  • Zero-Copy I/O
    The folks in the Lustre cluster storage group requested and implemented this feature.
  • Scrub Prefetch
    A scrub will now prefetch blocks to increase utilization of the disk and decrease scrub time
  • Native iSCSI
    This is part of the COMSTAR enhancements. Yes, this is there today, under OpenSolaris, and offers tremendous performance improvements and simplified management
  • Sync Mode
    NFS benchmarking in Solaris is shown to be slower than Linux, because Linux does not guarantee that a write to NFS actually makes it to disk (which violates the NFS protocol specification). This feature allows Solaris to use a "Linux" mode, where writes are not guaranteed, to increase performance at the expense of durability.
  • Just-In-Time Decompression
    Prefetch hides latency of I/O, but burns CPU. This allows prefetch to get the data without decompressing the data, until needed, to save CPU time, and also conserve kernel memory.
  • Disk drives with higher capacity and less reliability
    Formatting options to reduce error-recovery on a sector-by-sector basis
    30-40% improved capacity & performance
    Increased ZFS error recovery counts
  • Mind-the-Gap Reading & Writing Consolidation
    Consolidate read gaps so a single aggregate read can be used, reading data between adjacent sectors and throwing away the intermediate data, since fewer I/Os allow for streaming data from drives more efficiently
    Consolidate write gaps so a single aggregate write can be used, even if adjacent regions have a blank sector gap between them, streaming data to drives more efficiently
  • ZFS Send and Receive
    Performance has been improved using the same Scrub Prefetch code
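As a sketch of how a hybrid storage pool is assembled from the command line; the pool and device names are hypothetical, with a mirrored write-optimized SSD pair for the ZFS Intent Log and inexpensive consumer SSDs for the Level 2 ARC read cache.
# add a mirrored SSD pair as the ZFS Intent Log (the write cache should be redundant)
zpool add tank log mirror c2t0d0 c3t0d0
# add consumer-grade SSDs as Level 2 ARC read cache (cache devices need no redundancy)
zpool add tank cache c4t0d0 c5t0d0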
Conclusion

The ZFS implementation in the Solaris 10 10/09 release already has some of the ZFS features detailed in the most recent conferences.