Making Known the Secrets to Network Management. Raising up a new generation of professionals.
Friday, August 20, 2010
Flash: A Little ZFS History
Adam Leventhal worked for years at Sun on their Fishworks team, which leveraged a new piece of hardware referred to as Thumper, combined with Solaris 10's ZFS. He is no longer with Sun, but his personal blog still holds some great history on ZFS and flash acceleration.
Read Optimized Flash
Flash normally comes in two different flavors. The first is Read Optimized Flash: cheap and fast, but not so reliable. When caching LOTS of information to reduce read access to rotating rust, the benefits are substantial, since random access time drops across a large storage pool on monster storage platforms like Sun's original Thumper, pictured above.
The Adaptive Replacement Cache (ARC) in ZFS was designed to be extended. Disks are slow, DRAM is expensive, and Flash fills a nice niche in the middle. Flash has limited write cycles, but if it burns out in a cache, it is no big deal, since the cache miss would just go to the disk.
A lot of businesses have been talking about replacing hard drives with Flash, but long term storage on Flash is not as secure. Flash is better used as cache. Sun affectionately called this read cache technology "Readzilla" when applied to ZFS.
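For illustration, attaching a flash device as a second-level read cache to an existing pool is a one-line operation. A minimal sketch, assuming a pool named "tank" and a hypothetical device name:

    # attach a flash device to the pool as a second-level read cache
    zpool add tank cache c4t0d0
    # confirm the cache device shows up in the pool layout
    zpool status tank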
Write Optimized Flash
Another area of pain experienced by users is write bottlenecks. The more writing you do, the more random access to the disks may occur, and the more latency is produced by seek time as the mechanical heads move slowly across the platters.
Taking writes and turning them into sequential writes is a big help in modern file systems like ZFS. If one could take the writes and commit them to another place, where there are no mechanical steppers, further advances in speed can be accomplished. This is where Sun came up with "Logzilla" - using Write Optimized Flash to accelerate this process.
ZFS has a feature where one can place the intent log on dedicated devices, and Flash designed to handle writes quickly yet reliably is extremely beneficial here. This is a much more expensive solution than disk, but because it is faster than disk and non-volatile, a write in flight during a system crash will not be lost as it would be in straight DRAM.
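As with the read cache, a dedicated log device can be attached with a single command. Again a sketch, with hypothetical pool and device names; mirroring the log device is a common precaution:

    # attach a mirrored pair of write-optimized flash devices as the intent log
    zpool add tank log mirror c4t1d0 c4t2d0
    # confirm the log devices show up in the pool layout
    zpool status tank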
Non-Volatile DRAM
Adam mentioned non-volatile DRAM as an option in his personal blog entry (as well as in a now-defunct Sun blog entry.) Plug in the NV-DRAM card and back it with a UPS to get the benefits of DRAM speed, the non-volatility of Flash, and virtually unlimited writes... this seems like a winner.
What no one tells you is that your UPS becomes a more critical component than ever before. If you do not replace your batteries in time (no generator, and a truck hits a pole) - your critical data might be lost in an hour.
Network Management
Nearly all network management shops deal with high quantities of reads and writes under a steady load... with A LOT of head-stepping. This comes from the need to poll millions of distinct data points every minute, roll the data up, stick it in a database, and age the data out to keep it trim.
For environments like this, ZFS under Solaris is optimal, leveraging Read and Write optimized Flash. In a clustered environment, it may become important to keep these write optimized flash units external, on centralized infrastructure.
If performance management in network management is your life: Solaris ZFS is your future, with Readzilla and Logzilla. Nothing from any other operating system has compared for the past half-decade.
McAfee: Purchased by Intel After Fiasco
What is Virus Protection?
Operating systems like Microsoft Windows offer mechanisms to install software automatically. Sometimes the mechanism is a bug. Other times, it is a key which may have been purchased or hacked, and later leveraged to deposit viruses or spyware. Sometimes, the OS just offers too much freedom to the user, allowing them to install anything they would like (anywhere they would like), and when they install a piece of software on purpose but unwittingly pick a malicious one, the machine becomes infected.
Some consider it the computing system equivalent of the Mafia: "You want to be safe? Pay us some protection money, and you'll be safe." They work to make the computing environment more rigid because the operating system vendor (in this case, Microsoft) was too lazy.
To make the environment more rigid, inspection is done for known snippets of code in files loaded on the hard drive, coming in or out via email, or passing through tools like web browsers. These pieces of code that are searched for - basically subsets of the possible viruses or worms, which can be used to identify them - are called "signatures".
The "signatures" are distributed from central locations - from the Mafia's Godfather - to the software applications which some people choose to install on their computer, hereafter referred to as the Hitman. It is the job of the Hitman on your computer to whack the virus... or regularly encourage you to pay up if you did not pay your security bill.
Leading up to the Acquisition:
Less than half a year ago, McAfee distributed a virus signature update that identified a core Microsoft Windows file as a problem and whacked it.
The McAfee update crippled some Windows PCs by quarantining or deleting a file crucial to Windows operation, called "svchost.exe". Large segments of society - especially emergency services who were unfortunate enough to pick Microsoft Windows, in combination with McAfee and various service packs, for their core infrastructure - were scrambling for cover.
The bug, McAfee said, meant that "less than half of one per cent" of business customers, and a smaller number of consumer customers, could not use their computers. The company did not release any detailed figures, but said that the problem only occurred on machines running Windows XP Service Pack 3 in combination with a specific build of McAfee's antivirus product.
Reported victims include the Kansas City Police Department, the University of Kansas Hospital, and about a third of the hospitals in Rhode Island. PCs also went haywire at Intel, the New York Times reports, citing Twitter updates from workers at the chip giant as a source. McAfee picked up a very bad reputation after this event. First hand experiences from an Iowa community emergency response centre, ironically running a disaster recovery exercise at the time, can be found in a posting to the Internet Storm Centre here. The Register has heard from a senior security officer at a net infrastructure firm that was also hard hit by the snafu, as reported in our earlier story here.
To be fair, a virus signature is nothing more than a pattern of bits that can appear in a file at a particular set of locations, so it is amazing that after all these years, with so many virus signature creators, this had not happened earlier.
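To illustrate how simple the concept is, a naive signature scan can be written in a few lines of shell. A minimal sketch, where the hex pattern and the scanned directory are hypothetical - real scanners match thousands of signatures, often at specific offsets, not one pattern anywhere in the file:

    #!/bin/sh
    # naive signature scan: flag any file containing a known byte pattern
    SIGNATURE="deadbeefcafe"     # hex-encoded byte pattern (hypothetical)
    find /scan-me -type f | while read -r f
    do
        # dump the file as one continuous hex string and search it
        if od -An -tx1 "$f" | tr -d ' \n' | grep -q "$SIGNATURE"
        then
            echo "possible infection: $f"
        fi
    done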
The New God Father:
After a very bad spring and summer - the cost of the help desk support to repair all of those machines, lost customers who stopped using their products, bad media coverage from the mishap, and new customers who were not very interested in taking a chance on them - someone else was really needed to clean up their reputation.
Intel Corporation purchased McAfee. What surprises me is how few media outlets connected the purchase with the recent virus signature failure.
Network Management Connection:
Microsoft Windows systems are tremendous targets for viruses and worms. For network management systems, which must be located in a DMZ and connect to millions of potential end points, such platforms should be considered a virus and worm distribution system, and avoided.
The application of virus definitions to such production systems can ruin the reputation of a third party management company and put them out of business, the same way McAfee's tarnished reputation needed to be consumed by Intel.
Linux: 5 Year Old Root Exploit Finally Patched
Security Focus:
It has been over half a decade, but a Linux kernel root exploit has finally been patched. Yes, Oracle Enterprise Linux, RedHat, and others have been running around with this issue for a long time.
For the Novell fans, the SUSE distribution has been OK since 2004, but the fix had not trickled down to the other distributions, since it had not been incorporated into the official kernel until now.
Network Management:
In a world of network management, where central or even distributed systems monitor or manage millions of potential devices across many thousands of networks, a root exploit in an operating system kernel dating back over half a decade is an extremely high risk.
If it has to run and has to run securely - a generic Linux distribution may not fit the bill.
Look for Operating System vendors who have a strong record with understanding Data Centers and managing networks, not just OS vendors who can do it more cheaply.
SunRay: VDI Out of This World!
The SunRay Thin Clients can be used anywhere, even off the face of the earth, while traveling in airplanes!
This snapshot of the thin client functioning on an airplane tray is pretty amazing!
Tuesday, August 17, 2010
Solaris Express Resurrected and Other Changes
Solaris Express Resurrected
For those of you who have been following Solaris for a long time: it started as an open source operating system based upon BSD, was merged with SVR4, Sun began the process of a re-write to open-source it again with the OpenSolaris project, controversially started a binary distribution also called OpenSolaris, and canceled the Solaris Express program.
With the purchase of Sun by Oracle, the direction of the wind has changed. Solaris Express has been revived.
We will have a Solaris 11 binary distribution, called Solaris 11 Express, that will have a free developer RTU license, and an optional support plan.
Solaris 11 Express will debut by the end of this calendar year, and we will issue updates to it, leading to the full release of Solaris 11 in 2011.
Sad Changes
On the sad side, the real-time feed of source code changes to the community will end, and binary distributions of OpenSolaris will also end.
We will not release any other binary distributions, such as nightly or bi-weekly builds of Solaris binaries, or an OpenSolaris 2010.05 or later distribution.
Status Quo
On the bright side, source code will continue to be published, so existing down-stream OpenSource projects will be able to leverage it, much the same way Oracle works with RedHat's distribution of Linux today.
We will distribute updates to approved CDDL or other open source-licensed code following full releases of our enterprise Solaris operating system.
Also, upstream contributions will continue to be accepted.
We will continue active open development, including upstream contributions, in specific areas that accelerate our overall Solaris goals.
Acceptance as peers into the Solaris community (full access to real-time source code and contributions) will occur on a case-by-case basis. This is really no different from the existing community - not just any yahoo off the internet could make changes to the source code.
Examples include our activities around Gnome and X11, IPS packaging, and our work to optimize ecosystems like Apache, OpenSSL, and Perl on Solaris.
We will have a technology partner program to permit our industry partners full access to the in-development Solaris source code through the Oracle Technology Network (OTN).
This will include both early access to code and binaries, as well as contributions to us where that is appropriate.
The landscape is changing; things are returning to more of the way Sun used to be prior to Jonathan Schwartz. It looks almost like a hybrid approach between Scott McNealy and Jonathan on the software front.
Differences
Honestly, there is not that much of a difference, except that competitors will not have access to new Solaris source code as quickly, to facilitate their copying. This has the potential to make Solaris a stronger competitor in the marketplace.
Instead of OpenSolaris competing with other binary distributions, it appears to become more of an example "gold disk" with most of the bugs worked out, along with the source code for derived projects to base their work upon. If people are really serious about contributing, they still can, through a different web site.
Let's see how this hybrid development model works!
Labels:
OpenSolaris,
Solaris,
Solaris 11,
Solaris 11 Express,
Solaris Express
Thursday, August 12, 2010
Oracle-Sun 08/2010 Systems Strategy
Abstract:
John Fowler, Executive Vice President, held a web conference concerning the systems strategy for the merged Oracle & Sun company. The full PDF of slides was made available. For those who are more interested in systems-oriented news, [high resolution] screen shots are captured below with the interesting highlights.
Solaris Roadmap:
Solaris 11 is coming next year, with multiple upgrades scheduled for the next 5 years.
Solaris Features:
Oracle understands what Solaris is - if the data needs to be secure and the business must function, this is the advocated direction for the business.
SPARC Trajectory:
Systems based upon the open SPARC central processors with the Solaris operating system will continue to receive upgrades to deliver performance improvements over the next 5 years.
SPARC Roadmap:
A hardware announcement is scheduled to happen later this year, and the roadmap shows a doubling of performance before the end of the year.
SPARC Direction:
One SPARC architecture, one operating system, one systems management, one virtualization strategy, more memory, more threads, more cores, and system aggregate throughput increases of 2x every 2 years.
Storage Trajectory:
Substantial "order of magnitude" improvements over the next 5 years.
Oracle Storage Roadmap:
Industry leading now, and it will continue to be into the future.
Oracle Storage:
Oracle is number 1 or first to market on an increasing number of technologies.
Conclusion:
Oracle is the one to trust and the one who will deliver the goods for the next 5 years!
It will shortly be a very good time to build out a Network Operations Center, to get the most advanced and stable operating system on the latest next generation SPARC architecture.
Wednesday, August 11, 2010
Solaris 11: 2011 Confirmed
The Concern:
There has been a lot of concern in the OpenSolaris community about the silence from Oracle. Various community members believed that Oracle was just busy tying up Solaris 11. A new community of internal & external developers started the creation of Illumos in response to the silence. Illumos was discussed earlier in the Network Management blog.
The Confirmation:
Jeff Burt at eWeek attended a live web event at Oracle on Tuesday, August 10, 2010 with Oracle Executive Vice President of Systems, John Fowler. Jeff reported that Solaris 11 will be coming in 2011.
Oracle will release the next version of the Solaris operating system in 2011, and will double the performance of its SPARC processors every other year.
Sean Michael Kerner from ServerWatch also attended the live event, reporting that the silence in the OpenSolaris community was due to the diligent work going on with the pending Solaris 11 release.
"Solaris 11 will be a superset of what is in openSolaris"... "We've been a little quiet on the open source front," Fowler said. "It's not that we're not investing in Solaris, we're just investing to make sure that we have all the major components for the new release."
The Odd Announcement:
Jeff Burt also reported in his eWeek article that Oracle Enterprise Linux is destined for SPARC?
Oracle will continue to support Oracle VM, its virtualization technology that enables businesses to run Windows and Linux environments—including Oracle's own Oracle Enterprise Linux—on SPARC-based systems.
The meaning of this phrase seems uncertain - one might be wise to wait for the real presentation material and transcript to be made available from Oracle.
Labels:
eWeek,
Illumos,
OpenSolaris,
Oracle Enterprise Linux,
ServerWatch,
Solaris 10,
Solaris 11,
SPARC
Tuesday, August 10, 2010
Sorry Cliff Saran: UltraSPARC T3 Almost Here!
A Very Wrong Prediction
The Bizarre Prediction:
Cliff Saran, the managing editor of ComputerWeekly.com, made a terribly bizarre prediction that no one in their right mind could ever consider reasonable - that the "SPARC roadmap looked dim". How did he come to that conclusion?
The Facts Missed:
During the Hot Chips 2009 conference, there was a clear description of the architecture for the up and coming UltraSPARC T3, at that point named Rainbow Falls. There was also a presentation on the next generation crypto engines. Clearly, Cliff had not been watching the discussions from February 2010 about the 2 Billion Transistor UltraSPARC T3. Also missed was the code update in OpenSolaris adopting the official name UltraSPARC T3. Cliff apparently also missed the additional code updates in OpenSolaris from July 2010 leveraging the official naming for the UltraSPARC T3.
The Freudian Slip:
Oracle held a web conference today (Tuesday, August 10, 2010) talking about the SPARC roadmap, and slipped [typo'ed?] a piece of information. Timothy Prickett Morgan covered the announcement for The Register (thanks for the screen shots!), indicating that the current generation of UltraSPARC has 512 threads (which is only possible on an UltraSPARC T3.) It seems like something was only partially redacted from the presentation, since the UltraSPARC T3 in a 4-socket configuration should offer 64 cores with those 512 threads.
The Revelation:
Furthermore, John Fowler (formerly of Sun and now at Oracle) publicly released this rough image detailing the SPARC roadmap for the next 5 years.
The Conclusion:
For an industry managing editor to publish such a bizarre prediction in the headline of an article in his journal, discounting the information ringing through the development circles, and to be so wrong about his prediction just days before a formal announcement... indicates this editor is not an insider with a tap on reality.
Labels:
ComputerWeekly,
HotChips,
Oracle,
SPARC,
Sun,
TheRegister,
UltraSPARC T3
Monday, August 9, 2010
Transferring Files With FTP
Abstract:
FTP, with its mini macro programming language, has been around since the beginning of internet time. It has been used to transfer files around the internet for decades, avoiding lower level sockets programming. When transferring files, there is sometimes a question on the receiving "ftpd" end over whether a file has completed its transfer. This problem can be mitigated with several best practices, so the receiving end can be well aware of when a transferred file is ready for batch processing through a scheduling facility such as "cron".
Option 1 - Lock Files:
One could ask the initiator to create a lock file (send-me.cpio.gz.lock), start sending the data file (send-me.cpio.gz), and then remove the lock file upon completion of the transfer. The cron job can pick up the data file once it sees a file with no corresponding lock file.
This is helpful for transferring a single file as well as multiple files when that single file was "split" (send-me.cpio.gz.1, send-me.cpio.gz.2, send-me.cpio.gz.3, etc.) Processing for multiple files will not commence until after all the files in the batch have been sent and the lock file is removed.
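A minimal sketch of both ends, with hypothetical host, login, and file names - the sender drives a scripted ftp session, and the receiver's cron job skips any data file that still has a lock file beside it:

    # sender: create the lock, ship the data, remove the lock
    touch send-me.cpio.gz.lock
    {
        echo "user ftpuser secret"
        echo "binary"
        echo "put send-me.cpio.gz.lock"
        echo "put send-me.cpio.gz"
        echo "delete send-me.cpio.gz.lock"
        echo "quit"
    } | ftp -n receiver.example.com

    # receiver (cron): process any data file with no corresponding lock file
    for f in /inbound/*.cpio.gz
    do
        [ -f "$f" ] || continue           # nothing has arrived yet
        [ -f "$f.lock" ] && continue      # still in transit
        process-batch "$f" && rm "$f"     # "process-batch" is hypothetical
    done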
Option 2 - Suffixes:
A second option when moving files via FTP: the sender starts the transfer of the file (send-me.cpio.gz) with a separate suffix to identify that it is in transit (put send-me.cpio.gz.work), and once the file has been sent, the sender performs a rename of the file in ftp (rename send-me.cpio.gz.work send-me.cpio.gz). The rename is an atomic operation, so cron on the receiving platform can pick up files that do not have a ".work" suffix (or only pick up files which have a ".gz" suffix!)
This option is often very helpful for the occasional transfer of a single large file, where the integrity of the file is important, but people don't want to add too much complexity.
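The whole exchange fits in a short scripted session. A sketch, again with hypothetical host, login, and file names:

    # sender: upload under a working name, then atomically rename it
    {
        echo "user ftpuser secret"
        echo "binary"
        echo "put send-me.cpio.gz send-me.cpio.gz.work"
        echo "rename send-me.cpio.gz.work send-me.cpio.gz"
        echo "quit"
    } | ftp -n receiver.example.com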
Option 3 - Work Directories:
A third option, if one does not want to rename the files: one can always have the initiator place the files in a temporary directory (/temp) and then move each file to the production directory (/prod) within the ftp session. The cron job can pick up files only from the production directory, where each file is known to be completely transferred, since the move is an atomic operation.
If there are large numbers of small files which need to be transferred, this process is very helpful, since the "inode" backing the temporary or production directory may grow aggressively (slowing down all processing), requiring an occasional rebuild (rm -rf /temp && mkdir /temp) to resize the inode.
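A sketch of the sender's session for this scheme (hypothetical names again; the ftp "rename" command can move a file between directories, and the move is only atomic when both directories sit on the same filesystem):

    # sender: land the file in /temp, then move it into /prod
    {
        echo "user ftpuser secret"
        echo "binary"
        echo "cd /temp"
        echo "put send-me.cpio.gz"
        echo "rename /temp/send-me.cpio.gz /prod/send-me.cpio.gz"
        echo "quit"
    } | ftp -n receiver.example.com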
Option 4 - Multiple Files:
A fourth option deals well with transferring many files (mput) from an initiating system where the receiver wants to process them as they arrive. If there is a directory holding a large number of files (file1.Z, file2.Z, file3.Z, file4.Z, ...), the initiator can create an additional marker file with a known suffix for each (file1.Z.CoMpLeTe, file2.Z.CoMpLeTe, file3.Z.CoMpLeTe, file4.Z.CoMpLeTe, ...) and initiate the "mput". The receiver can have "cron" jobs set up looking for the suffix ("CoMpLeTe"), processing the original file name, and, upon processing completion, purging the file containing the suffix.
This is especially helpful where transfers may be overlapping from multiple sources with multiple files and the receiving end wants to process the individual files in as close to real-time as possible.
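A sketch of the receiving cron job under this scheme (the directory and the processing command are hypothetical):

    # receiver (cron): process each data file as its marker arrives
    for marker in /inbound/*.CoMpLeTe
    do
        [ -f "$marker" ] || continue           # no markers yet
        data="${marker%.CoMpLeTe}"             # file1.Z.CoMpLeTe -> file1.Z
        [ -f "$data" ] || continue             # marker arrived before data
        process-file "$data" && rm "$marker"   # "process-file" is hypothetical
    done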
Advanced Automation:
If the senders are newbies to the internet and have worked very little with FTP on the initiating or sending end, there are ways to help them along.
With "ftp", you can build macros on the sending end so the process of logging in, renaming, moving files, creating/removing lock files, or logging out can be reduced to single macro commands, to further remove complexity on the sending end.
The receiver can build the macros and just send them to the file senders, maintaining the ftp macro code centrally as well. The "ftp" protocol itself can be used to update those remote macro files, using one "rename" to swap out the old macro file and another "rename" to swap the new macro file into production.
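Macros live in the sender's $HOME/.netrc file, which ftp requires to be readable only by its owner (chmod 600) when a password is stored. A minimal sketch with hypothetical host, login, and file names; "macdef init" runs automatically after login, and the definition must be terminated by a blank line:

    machine receiver.example.com login ftpuser password secret
    macdef init
    binary
    put send-me.cpio.gz send-me.cpio.gz.work
    rename send-me.cpio.gz.work send-me.cpio.gz
    quit

With this in place, the sender's entire job collapses to typing "ftp receiver.example.com"; the macro logs in and performs the suffix-rename transfer from Option 2 automatically.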
Conclusion:
When there is a need to send files regularly from a source to a destination, the FTP protocol is a good choice when the sender cooperates with the receiver.
Monday, August 2, 2010
More Configuring VNC
The original blog posting on configuring VNC under Solaris 10 is here.
Thanks JayG for some more VNC Configuration options under Solaris!
Illumos, Licensing, Software, Hardware Updates
A New Distribution?
It appears a new distribution is on the way for followers of OpenSolaris: Illumos!
Don't miss the project opening!
Solaris Licensing Update
Third party provider terms have been getting re-written with companies such as HP and Dell. Yes, Dell & HP are reselling Solaris along with other Oracle items such as Linux and VM. Your Oracle Premier Subscription license for third-party x86/x64 hardware is here.
Hats off to Joerg for his blog post helping to sort out the confusion regarding pricing of Solaris on Oracle and non-Oracle hardware.
The short version: if you are running Solaris 10 on Oracle hardware, you get a perpetual license; if you are running Solaris 10 on non-Oracle hardware, you get subscription licensing.
Solaris & Hardware Updates
New hardware releases often accompany new software releases.
NetMgt has been blogging about hints at the next release of Solaris from Oracle in early July, as well as the next release of UltraSPARC T3 hardware from Oracle in late July - both pointing to a September release.
Another hint was blogged by Joerg in early August when he ran into some M3000 XCP 1093 release notes - also pointing to an official September release.
Conclusion
September looks like a pretty exciting time for Network Management Architects!
Labels:
Illumos,
OpenSolaris,
Solaris,
Solaris 10,
UltraSPARC T3