Showing posts with label Seagate.

Tuesday, August 14, 2012

ZFS: A Multi-Year Case Study in Moving From Desktop Mirroring (Part 3)

Abstract:
ZFS was created by Sun Microsystems to innovate the storage subsystem of computing systems: it greatly expanded capacity and data integrity while collapsing the formerly striated layers of storage (volume managers, file systems, RAID, etc.) into a single layer, delivering capabilities that would otherwise be very complex to achieve. One such innovation was the ability to dynamically attach additional disks to an existing pool, remove the old disks, and expand the pool for filesystem usage. This paper discusses the upgrade of high-capacity yet low-cost mirrored external media under ZFS.
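The upgrade pattern discussed below can be sketched as a short command sequence; the pool and device names here are illustrative placeholders, not the ones from this engagement:

```shell
# Attach a larger disk as a mirror of an existing device; ZFS
# resilvers the data onto it in the background.
zpool attach tank c1t0d0 c2t0d0

# Watch resilver progress until it completes.
zpool status tank

# Detach the old, smaller disk once the new one is current.
zpool detach tank c1t0d0

# On ZFS versions without the autoexpand property, an export/import
# cycle forces the pool to grow into the new capacity.
zpool export tank && zpool import tank
```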

Case Study:
A particular Media Design House had formerly used multiple mirrored external drives on desktops, as well as racks of archived optical media, to meet its storage requirements. A pair of (formerly high-end) 400 Gigabyte FireWire drives lost a drive. An additional pair of (formerly high-end) 500 Gigabyte FireWire drives lost a drive within the following month. A media wall of CD's and DVD's was becoming cumbersome to maintain.

First Upgrade:
A newer version of Solaris 10 was released, which included more recent features. The Media House was pleased to accept Update 8, with the possibility of supporting the Level 2 ARC for increased read performance and the ZFS Intent Log for increased write performance. A 64-bit PCI card supporting gigabit ethernet was used on the desktop SPARC platform, serving mirrored 1.5 Terabyte "green" disks over "green" gigabit ethernet switches. The Media House determined this configuration performed adequately.

ZIL Performance Testing:
Testing was performed to determine the benefit of a ZFS feature called the ZFS Intent Log, or ZIL. Tests were run across consumer-grade USB SSD's in different configurations. It was determined that any flash device could be utilized for the ZIL to gain a performance increase, but an enterprise-grade SSD provided the best improvement: about 20% with the commonly used workload of large file writes going to the mirror. It was decided at that point to hold off on the SSD's, since performance was already adequate.
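For reference, had the SSD's been adopted, a dedicated intent-log device is added to a pool with `zpool add`; the device names below are hypothetical placeholders:

```shell
# Add a separate intent-log (slog) device; synchronous writes are
# then committed to the SSD instead of the main data disks.
zpool add zpool2 log c6t0d0

# A mirrored log can be used instead, for redundancy:
# zpool add zpool2 log mirror c6t0d0 c7t0d0

zpool status zpool2   # the device appears under a "logs" section
```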

External USB Drive Difficulties:
The original Seagate 1.5 TB drives were working well in the mirrored pair, although one drive was "flaky" (it often reported errors and produced a lot of audible "clicking"). The errors were recorded in the "/var/adm/messages" log.

# more /var/adm/messages
Jul 15 13:16:13 Ultra60 scsi: [ID 107833 kern.warning] WARNING: /pci@1f,4000/usb@4,2/storage@1/disk@0,0 (sd17):
Jul 15 13:16:13 Ultra60         Error for Command: write(10)  Error Level: Retryable
Jul 15 13:16:13 Ultra60 scsi: [ID 107833 kern.notice]   Requested Block: 973089160   Error Block: 973089160
Jul 15 13:16:13 Ultra60 scsi: [ID 107833 kern.notice]   Vendor: Seagate  Serial Number:            
Jul 15 13:16:13 Ultra60 scsi: [ID 107833 kern.notice]   Sense Key: Not Ready
Jul 15 13:16:13 Ultra60 scsi: [ID 107833 kern.notice]   ASC: 0x4 (LUN initializing command required), ASCQ: 0x2, FRU: 0x0
Jul 15 13:16:13 Ultra60 scsi: [ID 107833 kern.warning] WARNING: /pci@1f,4000/usb@4,2/storage@1/disk@0,0 (sd17):
Jul 15 13:16:13 Ultra60         Error for Command: write(10)  Error Level: Retryable
Jul 15 13:16:13 Ultra60 scsi: [ID 107833 kern.notice]   Requested Block: 2885764654  Error Block: 2885764654
Jul 15 13:16:13 Ultra60 scsi: [ID 107833 kern.notice]   Vendor: Seagate  Serial Number:            
Jul 15 13:16:13 Ultra60 scsi: [ID 107833 kern.notice]   Sense Key: Not Ready
Jul 15 13:16:13 Ultra60 scsi: [ID 107833 kern.notice]   ASC: 0x4 (LUN initializing command required), ASCQ: 0x2, FRU: 0x0


It was clear that one drive was unreliable, but in a ZFS mirrored pair, the unreliable drive was not a significant liability.
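On Solaris, the device-level error counters can corroborate the messages log; a minimal check, assuming the suspect drive is the sd17 instance reported above:

```shell
# Extended error statistics for each device: soft, hard, and
# transport error counts, plus vendor/product identification.
iostat -En

# Rising "Hard Errors" or "Transport Errors" counts on the suspect
# device line up with the retryable write errors in /var/adm/messages.
```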

Mirrored Capacity Constraints:
Eventually, the 1.5 TB pair was out of capacity.
# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zpool2  1.36T  1.33T  25.5G    98%  ONLINE  -
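Where the pool hosts multiple datasets, consumption can also be broken down per dataset before deciding what to move or grow; a minimal check:

```shell
# Pool-level totals, as shown above
zpool list zpool2

# Per-dataset usage and remaining space
zfs list -r zpool2
```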

Point of Decision:
It was time to perform the drive upgrade. 2 TB drives had previously been purchased, originally intended to be concatenated to the existing set. Instead of concatenating the 2 TB drives to the 1.5 TB drives, as planned, a straight swap would be done, to eliminate the "flaky" drive in the 1.5 TB pair. The 1.5 TB pair could then be repurposed for less critical uses.

Target Drives to Swap:
The target drives to swap were both external USB. The zpool command provides the device names.
$ zpool status
  pool: zpool2
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The
        pool can still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        zpool2        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c5t0d0s0  ONLINE       0     0     0

errors: No known data errors
The earlier OS upgrade can be noted here: the pool was not upgraded at that time, since the new features were not yet required. The old ZFS version is fine for this engagement; it does not need the newer features, and it preserves the ability to move the drives to another SPARC in the office without worrying about whether that system runs a newer version of Solaris 10.
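The pool's on-disk format version, and the versions the running software supports, can be reviewed without committing to an upgrade:

```shell
# List any pools whose on-disk format is older than the software supports
zpool upgrade

# Show every on-disk format version this ZFS release understands
zpool upgrade -v
```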

Scrubbing Production Dataset:
The production data set should be scrubbed, to validate that no silent data corruption was introduced to the set over the years through the "flaky" drive.
Ultra60/root# zpool scrub zpool2

The operation takes some time to complete, but the business can continue to function while the system performs a block-by-block checksum verification (and repair, where needed) across the 1.5TB of media.
Ultra60/root# zpool status zpool2
  pool: zpool2
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The
        pool can still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: scrub completed after 39h33m with 0 errors on Wed Jul 18 00:27:19 2012
config:

        NAME          STATE     READ WRITE CKSUM
        zpool2        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c5t0d0s0  ONLINE       0     0     0

errors: No known data errors
While the scrub is running, 'zpool status' provides a time estimate so the consumer knows roughly when the operation will complete. Once the scrub is over, the same command reports the total time the scrub consumed, as demonstrated above.

Adding New Drives:
The new drives will first be placed in a 4-way mirror: the additional 2TB disks will be attached to the existing 1.5TB mirrored set.
Ultra60/root# time zpool attach zpool2 c5t0d0s0 c8t0d0
real    0m21.39s
user    0m0.73s
sys     0m0.55s

Ultra60/root# time zpool attach zpool2 c8t0d0 c9t0d0

real    1m27.88s
user    0m0.77s
sys     0m0.59s
Ultra60/root# zpool status
  pool: zpool2
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h1m, 0.00% done, 1043h38m to go
config:

        NAME          STATE     READ WRITE CKSUM
        zpool2        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c5t0d0s0  ONLINE       0     0     0
            c8t0d0    ONLINE       0     0     0  42.1M resilvered
            c9t0d0    ONLINE       0     0     0  42.2M resilvered

errors: No known data errors
The second drive took more time to attach, since the first drive was already in the process of resilvering. After waiting awhile, the estimates improve. Adding the additional pair to the existing pair, to make a 4-way mirror, completed in not much longer than it took to mirror a single drive - partially because each drive is on a dedicated USB port and the drives are split between 2 PCI buses.
Ultra60/root# zpool status
  pool: zpool2
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: resilver completed after 45h32m with 0 errors on Sun Aug  5 01:36:57 2012
config:

        NAME          STATE     READ WRITE CKSUM
        zpool2        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c5t0d0s0  ONLINE       0     0     0
            c8t0d0    ONLINE       0     0     0  1.34T resilvered
            c9t0d0    ONLINE       0     0     0  1.34T resilvered

errors: No known data errors

Detaching Old Small Drives

The 4-way mirror is very good for redundancy, but the purpose of this activity was to move the data from the 2 smaller drives (where one drive was less reliable) to the two newer drives, which should both be more reliable. The old disks now need to be detached.
Ultra60/root# time zpool detach zpool2 c4t0d0s0

real    0m1.43s
user    0m0.03s
sys     0m0.06s

Ultra60/root# time zpool detach zpool2 c5t0d0s0

real    0m1.36s
user    0m0.02s
sys     0m0.04s

As one can see, the activity to remove the mirrored drives from the 4-way mirror is very fast. The integrity of the pool can be validated through the zpool status command.

Ultra60/root# zpool status
  pool: zpool2
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: resilver completed after 45h32m with 0 errors on Sun Aug  5 01:36:57 2012
config:

        NAME        STATE     READ WRITE CKSUM
        zpool2      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0  1.34T resilvered
            c9t0d0  ONLINE       0     0     0  1.34T resilvered

errors: No known data errors

Expanding the Pool

The pool is still the same size as the former drives. Under older versions of ZFS, the pool would automatically extend. Under newer versions, the extension is a manual process. (This is partially because there is no way to shrink a pool after a provisioning error, so the ZFS developers now make the administrator take this step deliberately!)

Using Auto Expand Property

One option is to use the autoexpand option.
Ultra60/root# zpool set autoexpand=on zpool2

This feature may not be available, depending on the version of ZFS.  If it is not available, you may get the following error:

cannot set property for 'zpool2': invalid property 'autoexpand'

If you fall into this category, other options exist.

Using Online Expand Option

Another option is to use the online expand option:
Ultra60/root# zpool online -e zpool2 c8t0d0 c9t0d0

If this option is not available under the version of ZFS being used, the following error may occur:
invalid option 'e'
usage:
        online ...
Once again, if you fall into this category, other options exist.

Using Export / Import Option

When using an older version of ZFS, running the zpool replace option on both disks (individually) would have caused an automatic expansion. In other words, had that approach been taken, this step may have been unnecessary in this case.

That approach would have nearly doubled the re-silvering time, however. The judgment call, in this case, was to shorten completion time by building a 4-way mirror, so both new disks resilvered concurrently.

With this old version of ZFS, taking the volume offline via export and bringing it back online via import is a safe and reasonably short method of forcing the growth.

Ultra60/root# zpool set autoexpand=on zpool2
cannot set property for 'zpool2': invalid property 'autoexpand'

Ultra60/root# time zpool export zpool2

real    9m15.31s
user    0m0.05s
sys     0m3.94s

Ultra60/root# zpool status
no pools available

Ultra60/root# time zpool import zpool2

real    0m19.30s
user    0m0.06s
sys     0m0.33s

Ultra60/root# zpool status
  pool: zpool2
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zpool2      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c9t0d0  ONLINE       0     0     0

errors: No known data errors

Ultra60/root# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zpool2  1.81T  1.34T   486G    73%  ONLINE  -
As noted above, trading a 9-minute outage against roughly 40 additional hours of re-silvering was determined to be an effective trade-off.
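Beyond 'zpool list', the new capacity can be confirmed at the dataset level, which is what the applications actually see:

```shell
# AVAIL for the datasets should now reflect the grown pool
zfs list -r zpool2
```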




Thursday, March 4, 2010

Good News for Large SNMP Performance Management!


Seagate ships 2TB 6Gb/s SAS enterprise drive



Thanks to SNSEurope for the good news!
Seagate is now shipping its 3.5-inch Constellation ES drive, the industry's first 2TB enterprise-class drives featuring 6Gb SAS, to customers worldwide. Designed specifically for multi-drive nearline storage environments, the Constellation ES drive has been qualified by leading enterprise OEMs and system builders who demand storage solutions of the highest capacities with increased power efficiency, enterprise-class reliability, and data security that their customers demand. The Constellation ES drive leverages Seagate's 30 years of leadership in meeting large enterprise customer needs in product development, qualification, and support.

The fourth-generation, 3.5-inch Seagate® Constellation™ ES drive family for 7200-RPM enterprise environments enables cost-effective, highly efficient storage with capacities of 500GB, 1TB and 2TB. Supporting up to 76TB per square foot, it offers best-in-class reliability, leading 6Gb/s SAS or SATA 3Gb/s performance, PowerChoice™ optimized power and cooling technology, and a government-grade security option – all backed by Seagate.
The Impact on Network Management

Businesses were formerly forced to use 300GB drives to build large storage systems holding large quantities of collected SNMP data in a managed-services environment. With these new drives, it will be possible to hold vastly larger quantities of data per chassis, more reliably!



With a typical enterprise chassis holding 6.6x the storage, a single ZFS file system will support many more customers in the same form factor. Many legacy file systems top out at 16 Terabytes, making even the smallest external disk chassis populated with 2 Terabyte SAS drives a burden from an OS and application perspective.


Adding 6x the quantity of memory to an existing computing chassis, to properly cache 6x the disk capacity, is most likely not a reasonable option without buying a new computing platform. With ZFS, the ability to leverage a PCIe slot for read and write caches will provide superior performance and more linear scalability for those very same applications with the larger disks (as they are filled to capacity) than non-flash and non-ZFS based systems. This means adding more disks will scale more linearly, without adding substantial quantities of RAM for in-memory cache, since popping in another PCIe flash card will do the trick.
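Adding such a flash read cache (the Level 2 ARC) to an existing pool is a one-line operation; the pool and device names below are hypothetical:

```shell
# Add a flash device as a level-2 ARC (read cache) for the pool
zpool add tank cache c6t0d0

# Cache devices show up under a "cache" section in 'zpool status'
# and can later be removed with:
# zpool remove tank c6t0d0
```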

This is GREAT for everyone - the Managed Services Provider, whose costs will decrease, as well as the Customer, who will receive lower prices for the services received!

Wednesday, September 16, 2009

ZFS: Adding Mirrors


Abstract

Several articles have been written about ZFS including: [Managing Storage for Network Management], [More Work With ZFS], [Apache: Hack, Rollback, Recover, and Secure], and [What's Better, USB or SCSI]. This is a short article on adding a mirrored drive to an existing ZFS volume.

Background

A number of weeks ago, a 1.5 Terabyte external drive was added to a Sun Solaris 10 storage server. Tests were conducted to observe the differences between SCSI and USB drives, as well as UFS and ZFS filesystems. The disk added in that article will now receive a mirror.


Inserting a new USB drive into the system is the first step. If the USB drive is not recognized upon insertion, a discovery can be forced using the classic "disks" command, as the "root" user.
Ultra60-root$ disks
A removable (i.e. USB) drive can be labeled using the "expert" mode of the "format" command.
Ultra60-root$ format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 [SEAGATE-SX1181677LCV-C00B cyl 24179 alt 2 hd 24 sec 611]
/pci@1f,4000/scsi@3/sd@0,0
1. c0t1d0 [SEAGATE-SX1181677LCV-C00C cyl 24179 alt 2 hd 24 sec 611]
/pci@1f,4000/scsi@3/sd@1,0
2. c2t0d0 [Seagate-FreeAgent XTreme-4115-1.36TB]
/pci@1f,2000/usb@1,2/storage@4/disk@0,0
3. c3t0d0 [Seagate-FreeAgent XTreme-4115-1.36TB]
/pci@1f,2000/usb@1,2/storage@3/disk@0,0
This is how the pool appears before adding a mirrored disk:
Ultra60-root$ zpool status
  pool: zpool2
 state: ONLINE
 scrub: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        zpool2              ONLINE       0     0     0
          /dev/rdsk/c2t0d0  ONLINE       0     0     0

errors: No known data errors
Process

An individual slice can be added as a mirror to an existing disk through "zpool attach"
Ultra60-root$ zpool attach zpool2 /dev/rdsk/c2t0d0 /dev/dsk/c3t0d0s0
Verification

The result of adding a disk slice to create a mirror can be checked with "zpool status"
Ultra60-root$ zpool status
  pool: zpool2
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 1h4m, 6.81% done, 14h35m to go
config:

        NAME                  STATE     READ WRITE CKSUM
        zpool2                ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            /dev/rdsk/c2t0d0  ONLINE       0     0     0
            c3t0d0s0          ONLINE       0     0     0

errors: No known data errors
The CPU utilization during the resilver can be observed through "sar".
Ultra60-root$ sar

SunOS Ultra60 5.10 Generic_141414-09 sun4u 09/16/2009

00:00:00 %usr %sys %wio %idle
00:15:01 0 40 0 60
00:30:00 0 39 0 60
00:45:00 0 39 0 61
01:00:00 0 39 0 61
01:15:00 0 39 0 61
01:30:01 0 41 0 59
...
10:45:00 0 43 0 57
11:00:00 0 40 0 59
11:15:01 0 40 0 60
11:30:00 0 40 0 59
11:45:00 0 39 0 61
12:00:00 0 43 0 56
12:15:00 0 47 0 53
12:30:01 0 44 0 56

Average 0 39 0 60
If you are curious about the performance of the system during the resilvering process over the USB ports, there is the "zpool iostat" command.

Ultra60-root$ zpool iostat 2 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zpool2       568G   824G     12      0  1.30M    788
zpool2       568G   824G    105      0  6.92M      0
zpool2       568G   824G    156      0  9.81M  7.48K
zpool2       568G   824G    157      1  10.1M  5.74K
zpool2       568G   824G    117      6  10.3M  11.5K
zpool2       568G   824G    154      5  10.1M  7.49K
zpool2       568G   824G    222     31  8.44M  36.7K
zpool2       568G   824G    120     13  8.45M  10.2K
zpool2       568G   824G    113      4  9.75M  8.99K
zpool2       568G   824G    120      5  9.48M  11.0K

Conclusion

The above session demonstrates how a whole external USB device was used to create a ZFS pool and an individual slice from another USB device was used to mirror an existing pool.

Now, if I can just get this Seagate FreeAgent Xtreme 1.5TB disk to be recognized by some system using FireWire (no, it cannot be used reliably on an old Mac G4, a dual G5, a dual-core Intel Mac, or a dual-processor SPARC Solaris platform) - I would be much happier than using USB.