Thursday, March 19, 2015

Oracle: Next Generation of Engineered Systems


[Graphic courtesy Oracle Data Center Kickoff]

Oracle's Next Generation Engineered Systems

Abstract:

Larry Ellison, Executive Chairman of the Board and CTO, introduces Oracle's 5th generation of Oracle Engineered Systems, promising the highest performance and the lowest service price at the core. Oracle effectively targets Cisco UCS, HP, and EMC.

Summary of Major Announcements

Oracle Virtual Compute Appliance X5

Converged compute and storage; runs all datacenter applications. High performance and lowest purchase price. Combines compute servers, networking, and storage servers in the same box as a highly available, fully redundant system.
  • Compute Infrastructure: scalable from 2-25 nodes; Linux, Solaris, and Windows
  • Network Infrastructure: high speed, low latency, fully configured fabric; integrates with existing Ethernet & storage networks
  • Management Infrastructure: redundant management servers; virtual assembly builder with templates included
Oracle's list price is roughly half Cisco's discounted price, almost a third of Cisco's list price.

Oracle Storage Appliance X5

Twice as fast, half as much
  1. Extreme Flash Storage Server
  2. High Capacity Storage Server
12.8 TB PCIe Flash (Extreme Flash) or 6.4 TB PCIe Flash with 48 TB SAS Disks (High Capacity)

Oracle Database Appliance X5

2x Servers: 2x 18 cores; 8x 32 GB DIMMs (256 GB RAM); 2x InfiniBand; 4x 10 Gbit Ethernet. Storage: 4x 200 GB Flash for Redo Logs; 4x 400 GB Flash for ODA Accelerators; 16x 4 TB Hard Drives (Data + Temp Tables + Archive Logs).

Zero Data Loss Recovery Appliance

Fully automated, point-in-time recovery with no data loss for thousands of databases. Backup and log to another rack, another data center, or to the Oracle Public Cloud.

Big Data Appliance

Oracle Big Data SQL joins: Hadoop, NoSQL, and Oracle RDBMS

Exalogic Elastic Cloud X5-2

Private Cloud for Applications & Middleware, with portability to Oracle Cloud. Compute: 2x 18 cores, 256 GB RAM/node, 800 GB Flash/node. Network: 40 Gbit InfiniBand internal; 10 Gb or 1 Gb Ethernet external. Storage: 80 TB Disk; 256 GB Storage DRAM.

Exadata Database Machine X5

Workloads: Warehousing, OLTP, Database as a Service, In-Memory Database. Flash disks replaced high-performance disks because flash capacity increased while its price dropped. Elastic Configurations: 2x DB and 3x Storage Servers... Full Rack... Multi-Rack. Optimized for: In-Memory (maximum DRAM); OLTP (equal DB & Flash); Warehouse (high-capacity storage and compute).

Oracle SuperCluster

Two SPARC Options:
  1. SuperCluster T5-8
  2. SuperCluster M6-32
Same Storage Server and Software as Exadata X5

Data Center of the Future with Public Cloud

Options Include:
  • Logging Backups to the Cloud
  • Cloud as Backup Datacenter
  • Test and Development in Cloud with Production Local
  • Production in Cloud with Test and Development Local

The Deep Dive Sessions

The following Deep Dive sessions cover both the newly announced hardware and some existing software noted at the bottom of this section. The written summaries can help in selecting which videos to watch.
Oracle SuperCluster

Oracle's Largest, Most Advanced, and Most Secure Appliance

  • Exadata Storage Grid
  • Firmware based Hypervisor (vs re-purposed Linux OS as Hypervisor)
  • Cloud Tenant Self Service Portal
  • Rule Based Access Control with Metering and Limiting by Account for customer self-service
  • IO Domain Recipes (i.e. Small, Medium, Large selections) 
  • Templates on top of Recipe (Pre-configured Recipe with OS Patches and Application)
  • Extreme Tenant Isolation through Zone, Network Paths, and Disks
  • Automated Compliance Validation of isolation
Oracle Exadata X5-2

Exadata X5-2: Extreme Flash and Elastic Configurations

Oracle Exalogic X5

Oracle Exalogic X5-2 and Exalogic Elastic Cloud Software 12c

Engineered system designed to run the mid-tier components
  • Oracle Applications 
  • Java Applications 
  • Fusion Middleware 
Exabus technology, shared with Exadata, reduces latency between servers. Platform as a Service (software made available in a cloud) and Infrastructure as a Service deployed on the customer premises.
Virtual Compute Appliance

Oracle Virtual Compute Appliance: Simplify IT and Save Money

Goals:
  • Simplify Deployment
  • Reduce Cost
Pre-built system which is ready to use in a Data Center with a minimal number of steps
  • Compute Capability: 2 - 25 nodes
  • Software defined network with Dual Redundant InfiniBand
  • Ethernet and FibreChannel external connectivity
  • Active-Passive Management Server
  • ZFS Storage Appliance with Redundant Controllers
Self-Service
  1. Provisioning of VM's, Storage, and Network
  2. Policy Driven
  3. Metering and Chargeback
  4. RESTful Infrastructure as a Service (IaaS) interface
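The RESTful IaaS interface above invites a small sketch of what a self-service provisioning request might look like. The field names and values below are hypothetical illustrations only, not the appliance's actual API schema:

```python
import json

def build_vm_request(name, vcpus, memory_gb, network, storage_gb):
    """Assemble a JSON body for a hypothetical VM provisioning call.

    Every field name here is an illustrative assumption; the real
    appliance API defines its own schema.
    """
    return json.dumps({
        "name": name,
        "vcpus": vcpus,
        "memoryGb": memory_gb,
        "network": network,
        "storageGb": storage_gb,
    })

# A tenant portal or script would POST a body like this to the service.
body = build_vm_request("demo-vm", 4, 32, "vlan-10", 200)
print(body)
```

The point is only that a REST-style interface reduces provisioning to assembling and posting a structured document, which is what makes policy checks, metering, and chargeback straightforward to layer on top.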
Oracle Enterprise Manager drives IaaS
  • Fault Detection
  • Incident Management
  • Lifecycle Management
  • Change Management
  • Search & Compare of VM's
  • Apply Patches
  • Gold Templates
  • Compliance reporting
All software is bundled (Linux, Solaris, OEM 12c, Oracle VM, Orchestration, Oracle Virtual Networking, Oracle Trusted Partitioning)
Oracle Database Appliance

Oracle Database Appliance X5-2

Provides everything to deploy a high availability database & application
  • Wizards for simplified deployment
  • Patch Automation (Firmware, OS, Database, Storage, etc.)
  • Oracle High Availability Software Stack (Real Application Cluster or RAC)
  • Affordable with Capacity on Demand
  • Oracle Multitenant Option bundled License
  • In-Memory Database Option bundled License
  • OS and Virtualization Licenses
Refreshed hardware with higher consolidation density. Oracle Enterprise Manager Plug-In for monitoring and management, with analytics across appliances. Same software stack as Exadata, for affordable Test and Development.
Oracle FS1 Flash Storage System

Oracle FS1 Flash Storage System 

Summary of Features
  • 2 - 16 Highly Available Nodes
  • Petabytes of Flash
  • 2M 50/50 Read/Write IOPS
  • 80 GB/sec or 5 TB/minute Data Movement
Designed to leverage Flash rather than retrofit existing hard disk solutions. Supports both Flash and disk: designed for Flash, with the economies of disk.
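A quick arithmetic check shows the two quoted throughput figures are the same number, merely converted and rounded:

```python
# The quoted 80 GB/sec, converted to decimal TB per minute, is the
# source of the rounded "5 TB/minute" figure.
gb_per_sec = 80
tb_per_minute = gb_per_sec * 60 / 1000

print(tb_per_minute)   # 4.8, i.e. roughly the quoted 5 TB/minute
```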
Oracle Big Data Appliance X5-2

Big Data Appliance

Solves problems surrounding:
  • Performance
    Optimized Hardware
  • Time
    30% Quicker to Deploy
  • Cost
    21% Less Expensive to Purchase
  • Integration
    Data Transparently into the Infrastructure
Oracle Big Data SQL for simple insertion; Oracle Enterprise Manager compatibility.
Oracle for Enterprise Big Data

The Move to Big Data

Oracle Linux

Oracle: A Complete, Independent Linux Vendor

Nothing significantly new; the basic key points:
  • Oracle Linux Premier Support included with Oracle Hardware
  • Stand-Alone Oracle Linux Premier Support offered for other servers 
  • MyOracleSupport Integrated 
  • Oracle KSplice Bundled (on-line patches, immediately active)
  • Oracle Enterprise Manager included for Patching and Management 
  • Oracle Clusterware Bundled 
  • Oracle Backport Lifetime Sustaining Support (no bug fixes, new hardware support) 
  • Oracle OpenStack bundled 
  • Red Hat Binary Compatibility
Delivered on DVD with either the pure Red Hat kernel or the Oracle Unbreakable Enterprise Kernel.
OS Features
  • Oracle Unbreakable Kernel option for newer Oracle Engineered Systems.
  • DTrace Integration from Solaris for Oracle Linux
  • Isolation features: Linux Containers (LXC) similar to Solaris Zones; Docker (for Application)
  • Free to download, use, distribute, and update; pay for support on production systems
  • Oracle VM Templates
Differentiation: DTrace and KSplice

Friday, March 13, 2015

New Tab: Packaging Resources!

[Solaris Logo, formerly from Sun Microsystems, now Oracle]

Announcement:
Network Management has just released the new Packaging Tab for Solaris Community!



Packaging resources for Solaris

[http] - SunFreeware (migrating to UNIXPackages)
[http] - UNIXPackages (commercial)
[http] - OpenCSW
[http] - Solaris Multimedia
[http] - iBiblio Solaris Package Archive
[http] - Solaris 11 Packages from Oracle (commercial)

Package/Configuration Management Resources for CPE

OpenACS [Home|Source] Config Mgmt for TR069 Protocol

Sunday, March 8, 2015

Security: SuperFish and HeartBleed Vulnerabilities

Some Nice Security Testers...


There has been a lot of security discussion lately regarding SSL. Both the SuperFish software and the HeartBleed vulnerability have been in the cross-hairs.

[Dead Fish on Beach, courtesy Wikipedia]

Detecting a SuperFish Issue...


While SuperFish is not strictly a vulnerability, its poor security policy can allow SSL security to be bypassed.

Filippo.IO was kind enough to assemble a SuperFish vulnerability tester - go and test your PC here!

Detect a Bleeding Heart...
If you have a web site you commonly use, Filippo.IO also offers a HeartBleed vulnerability tester.





HP Acquires Aruba Wireless Infrastructure

[HP Logo, Courtesy Wikipedia.org]
Abstract: 
Hewlett-Packard Company, created in a garage by two electrical engineers in Palo Alto, California, started through the creation of superior test equipment. Through organic growth and acquisitions, HP grew into consumer and enterprise markets, ranging from printers, to PCs, to mini-computers, networking equipment, and software. They are preparing to split into two different companies, one based upon consumer equipment and another based upon enterprise equipment. Prior to the split, HP is filling gaps in their networking portfolio.
[Huawei-3Com Partnership Logo]

Road to Aruba:
On September 26, 2014, HP announced the launch of a Software Defined Networking Application Store. October 5, 2014 marks when HP announced the split between HP, Inc. (for printers & desktops) and Hewlett-Packard Enterprise (for networking, servers, and software). Weeks later, on October 26, 2014, HP decided to find a buyer for H3C, the networking partnership between China-based Huawei and U.S.-based 3Com, which HP received when it acquired 3Com in 2010. Clearly, HP is committed to filling out the networking portfolio of its Enterprise company while culling some partnerships.

[Aruba Networks logo, courtesy Wikipedia]
Aruba Not Soon Enough:
In March 2015, HP announced the acquisition of Aruba, a wireless network provider, filling a gap in its networking portfolio prior to the corporate split between the Desktop/Printer and Enterprise companies. Hewlett-Packard's networking division was experiencing some pain, according to The Register.
The deal will form a welcome plug to HP's sliding network biz, which fell 10.8 per cent to $562m (£365m) in the company's first quarter results last week.
Aruba posted sales of $729m (£473m) for its full year results in 2014. In its second quarter numbers last week, revenue rose 21 per cent to $212.9m (£138m) and net profit came in at $5.6m versus a net loss of $10.7m in the same quarter a year earlier.
With larger quantities of networking moving from wired to wireless, the new growth area must be accounted for in Hewlett-Packard's portfolio. The Aruba Networking acquisition is expected to be complete in Hewlett-Packard's second quarter.
[HP Split Image, courtesy Anandtech]

Divide and Conquer:
This is not the first split for HP - Agilent Technologies was created when the Test Equipment division was spun off in 1999. The split of Printers and PCs, to form HP, Inc., should complete in October 2015. The PC and Enterprise markets are very different, requiring significantly different management styles... the former requires very short innovation cycles while the latter demands long-term product viability, significant investment, and close attention to security. Aruba should make an excellent contribution to the portfolio.

Conclusions:
Hewlett-Packard Company was also famous for Network Management products, such as the formerly branded OpenView suite, which dominated the market during the 1990s as the Internet was aggressively expanding. HP's former OpenView suite, consolidated into the HP Software Division, will find a very good home in the new Hewlett-Packard Enterprise alongside network equipment vendors like Aruba. The combined 3Com, HP Networking, and Aruba portfolio will offer a reasonable platform for the Enterprise company, while the existing established Network and Systems Management suites will provide a software layer to unify the equipment for basic Fault, Performance, and Configuration Management in the Managed Services arena.

Monday, March 2, 2015

Motorola's Freescale to be acquired by NXP

[Motorola Logo, courtesy Wikipedia]

Abstract:
The Scientific, Education, Engineering, and Server microcomputer markets were once dominated by Motorola based processors. Motorola created the necessary parts for computing platforms: the power transistors required for switching power supplies, the plastic-coated low-cost semiconductor package which became an industry standard, the analog television screens needed for human interaction, the digital HDTV screens of modern-day human interaction, and the central processor unit with all of its support chips. Today we mark the day when America's innovation company, spun off by Motorola as Freescale, agreed to be acquired by its Dutch competitor NXP.

[68000 microprocessor die, courtesy Wikipedia]

History:
A short history of Motorola dating to 2009 can be seen in this PDF. It does not concentrate significantly on Motorola's contribution to the Computer Industry, so this article adds a short summary of Motorola semiconductor & microprocessor innovations.
1928 - Motorola was started in Illinois, USA as Galvin Manufacturing Corporation
1947 - Motorola developed their first Television (a requirement for computer monitors)
1949 - Motorola opened up their first Solid State research lab
1955 - The first high powered transistor (core of computer switching power supplies)
1963 - World's first rectangular Television (modern computer monitor form factor)
1965 - Developed low cost plastic semiconductor packaging (becomes industry standard)
[Motorola 6800 Microprocessor, courtesy Wikipedia]
 1974 - 6800 8-bit Microprocessor developed (for video games, computers, and cars)
[Motorola 6809, Courtesy Wikipedia]
 1978 - 6809 8/16-bit hybrid Microprocessor released (video games, small computers)
[Motorola 68000, courtesy Wikipedia]

1979 - 68K 68000 16/32 bit hybrid Microprocessor released (used in workstations & servers)
1982 - 68K 68008 8/16/32 bit hybrid Microprocessor supporting inexpensive 8 bit support chips
1982 - 68K 68010 16/32 bit hybrid Microprocessor supporting Virtual Memory
1984 - 68K 68020 true 32-bit Microprocessor released (for desktop workstations)
1987 - 68K 68030 released, integrating Memory Management unit (lower cost workstations)
1988 - 88K 88000 released, Motorola's first 32-bit RISC architecture announced
[Motorola 88100 Processor, courtesy Wikipedia]
1988 - 88K 88100 released, 32-bit RISC implementation (1-4 socket shared MMU servers)
1989 - 68K 68040 released, integrating Floating Point processor (faster workstations)
1990 - Motorola acquired General Instrument Corporation (proposed digital HDTV)
[Motorola 88110 Processor, courtesy Wikipedia]
1991 - 88K 88110 announced, 2nd generation 32-bit RISC processor (integrated MMU)
1991 - PowerPC architecture released, a partnership between Apple, IBM, and Motorola
1992 - 88K 88110 first & last processors shipped (succeeded by PowerPC)
1992 - PowerPC 601 32-bit IBM CPU, PowerPC core, on Motorola 88110 bus
[Motorola PowerPC 603, courtesy Wikipedia]
1994 - PowerPC 603 32 bit 2nd generation microprocessor released
1994 - PowerPC 604 32 bit 2nd generation microprocessor released
[68060 Microprocessor, courtesy Wikipedia]

1994 - 68K 68060 last 68K compatible processor, instructions optimized in hardware
1994 - 68K ColdFire microprocessor family released, with a simplified 68K core
1995 - 68K DragonBall microprocessor family from Hong Kong, a 68K micro-controller
[Motorola PowerPC 604e, courtesy Wikipedia]
1996 - PowerPC 604e 32 bit 2nd generation microprocessor released
1997 - PowerPC 620 64 bit 2nd generation microprocessor released
1997 - PowerPC 7xx 32 bit 3rd generation microprocessor released
2001 - i.MX microprocessor family released, abandoning 68K core for ARM core
[Freescale Semiconductor logo, courtesy Wikipedia]
2004 - Motorola spins-off Microprocessor division as Freescale Semiconductor
2010 - Kinetis microprocessor family released by Freescale, based upon ARM core
2013 - Kinetis family delivered the world's smallest ARM-based microcontroller
2015 - Motorola Semiconductor, which became Freescale, is acquired by Dutch NXP

[NXP Semiconductor logo, courtesy Wikipedia]

Conclusion:
The United States was the originator of massive computer industry change over the decades, and Motorola was one of the first major computing vendors. Motorola divested its Semiconductor division as Freescale. Freescale largely divested itself of Motorola's award-winning 68K architectures in favor of the British-owned ARM RISC architecture. Now, Freescale is gone.

Monday, January 12, 2015

M7: Next Generation SPARC and Next Generation Data Center

[Oracle SPARC M7 Die, Courtesy The Register]

Abstract:

The SPARC processor family from Sun Microsystems has existed for nearly 30 years. SPARC was an early contender in the 64 bit processing market while most commodity processors were still 32 bits or fewer. With the purchase of Sun Microsystems by Oracle, SPARC development continued and produced the fastest processor on the planet. Oracle promised a day when it would have one processor for both the T and M platforms, and the latest generation finally appears able to unify the T and M system lines. In December 2014, the conference material from the Hot Chips 26 Symposium was released to the general public, illustrating what is coming in 2015. Happy New Year!

[M7 32x Socket Interconnect, courtesy Enterprise Tech]

SPARC M7 at Hot Chips 26 in 2014:

Oracle announced during the Hot Chips conference that the new M7 processor and system would be released in 2015. Some notable reviews of the Oracle SPARC M7 included:
  • [2014-08-13] Timothy Pricket Morgan from Enterprise Tech Systems Edition
    Oracle Cranks Up The Cores To 32 With Sparc M7 Chip
    When asked about what the performance advantage would be comparing an InfiniBand or Ethernet cluster running Oracle RAC and the Sparc M7-Bixby setup using the non-coherent memory clusters, Fowler said that the difference “would not be subtle.”
  • [2014-08-18] Simon Sharwood from The Register
    Oracle reveals 32-core, 10 BEEELLION-transistor SPARC M7
    New chip scales to 1024 cores, 8192 threads, 64 TB RAM, at speeds over 3.6GHz
  • [2014-08-12]  James Niccolai  from PC World
    Oracle's Sparc M7 chip to supercharge in-memory computing
These reviews were helpful in understanding the market's take on the hardware announcement from Oracle, but the actual presentation, released in December 2014, had the details.

[Oracle 2010 SPARC Roadmap, courtesy Enterprise Tech]

Hot Chips Oracle Presentation:

Stephen Phillips, Senior Director of SPARC Architecture at Oracle, gave the presentation. He was involved in the architecture of the T2+ (Victoria Falls) and later T-Series, and of the M5, M6, and M7 processors, placing him in the crosshairs of the delivery of the 2010 roadmap above. The Hot Chips 26 - August 12, 2014 (Big Iron Presentations) session was released as Video (32:26-1:04:00) as well as a high-resolution PDF.

It should be noted that clustering the M7 produces a system capable of remote memory sharing with up to 64 sockets, making the formerly released 5-year roadmap (pictured above) astoundingly accurate and illustrating the intense fidelity that Oracle has offered the SPARC and UNIX communities.

[M7 Decompression and Query Offload Engine, courtesy EnterpriseTech]

SPARC M7 In-Silicon Enhancements:

As one watches the presentation and reviews the high resolution slides, the following short set of notes highlights what some would consider the key aspects.

Slide 3 - Recent history of SPARC
Slide 5 - M7 CPU - 20nm process; 32 Cores per Socket; cores grouped in clusters of 4; enhanced S4 core
Slide 6 - S4 Core - Dynamic Threading (1-8 threads) for speed & throughput; Faster Live migration
Slide 7 - Core Cluster - 1.5x Larger L2 Cache; 2x Greater Core Bandwidth
Slide 8 - Level 3 Cache & Network: 2.5x-5x Bandwidth; 25% Less Latency; HW Accelerator Access
Slide 9 - OVM "Aware" Solaris Process Groups by Core Cluster & L3 Cache Partition
Slide 10 - Power aware in silicon; auto-adjusts voltage & frequency according to policy
Slide 11 - 2x-3x Memory Bandwidth; Live DIMM Retirement; Memory Lane Failover;
Greater than 2x PCIe performance
Slide 12 - Performance increase ~3x over M6
Slide 13 - Live Production Data Integrity Checking (for buffer overrun protection)
Slide 14 - Fine Grain Memory Migration for Java (for concurrent operations of middleware)
Slide 15 - Virtual Memory Masking for Java Runtime (embed object state into unused 64 bit)
Slide 16 - Decompression & Query accelerators for Oracle 12c (row & column for OLTP & OLAP)
Slide 17 - 8x Fused Decompression + Query Accelerators
Slide 18 - High performance for in-memory database without OS intervention
Slide 19 - 10-to-1 Decompression improvement of 1x query pipeline to 1x T5 (S3) Thread
Slide 20 - Third Party benefit through tool-chain
Slide 21 - Glue-less 1-8 socket support, like the T5
Slide 22 - SMP Scalability comprising 32 M7 SPARC Processors; 4 socket physical domains
Slide 23 - Reliable & Secure Shared Memory Clustering; 64 M7 sockets in a cluster; >1 failure tolerance
Slide 24 - Coherent Memory Cluster comprising 64 M7 SPARC Processors; secure foreign memory
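Slide 15's "Virtual Memory Masking" is worth a note: the Java runtime can stash object state in the otherwise-unused high bits of a 64-bit pointer, and the M7 masks those bits off in hardware on dereference. A minimal software sketch of the same idea follows; the tag layout, shift width, and address value are hypothetical, and the masking is done here in Python rather than in silicon:

```python
# Pointer tagging sketch: metadata lives in the unused high bits of a
# 64-bit address. Current virtual address widths leave bits above 48
# unused, so we place a hypothetical tag there.
TAG_SHIFT = 48
ADDR_MASK = (1 << TAG_SHIFT) - 1

def tag_pointer(addr, tag):
    """Pack a small tag into the high bits of an address."""
    return (tag << TAG_SHIFT) | (addr & ADDR_MASK)

def untag_pointer(tagged):
    """Recover the raw address - what hardware masking would do."""
    return tagged & ADDR_MASK

def pointer_tag(tagged):
    """Recover the embedded object-state tag."""
    return tagged >> TAG_SHIFT

p = tag_pointer(0x7F00DEADBEEF, 0x3)   # hypothetical address and tag
assert untag_pointer(p) == 0x7F00DEADBEEF
assert pointer_tag(p) == 0x3
```

Doing this mask in hardware, as the slide describes, means the runtime pays no per-dereference instruction cost for carrying the tag.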

The use of hardware application accelerators has proven to be a massive game changer in the industry for Oracle, as SPARC continues where others have failed.

Conclusions:

In the world of Network Management, this particular hardware solution enables massive network management scalability, offers the fastest virtualization migration technology, and provides the most reliable underlying infrastructure. The SPARC processor technology underpins 2x-10x "everything" over commodity Intel platforms lashed together by various vendors. This is where high-performance data centers that need a small footprint, low power utilization, enterprise software, and massive scalability will go.

Thursday, August 14, 2014

The [Almost] Great Internet Crash: 2014-08-12 is 512K Day

The [Almost] Great Internet Crash: 2014-08-12 is 512K Day

Abstract:
Since the creation of computers, people have been attaching computer equipment together through different means. Proprietary cables and protocols were constantly being designed, until the advent of The Internet. Based upon TCP/IP, there seemed to be little to limit its growth, until the 32 bit address range started to "run out" as more people in more countries wanted to come on-line. On August 12, 2014, an event affectionately referred to as "512K Day" occurred, a direct result of the IPv4 hacks used to keep the older address scheme alive until IPv6 could be implemented.


History:
The Internet was first created with the Internet Engineering Task Force (IETF) publishing of "RFC 760" in January 1980, later replaced by "RFC 791" in September 1981. These defined the 32 bit version of TCP/IP, called "IPv4". During the first decade of The Internet, addresses were allocated in basic "classes", according to the network size of the applicant's needs.

As corporations and individuals started to use The Internet, it was realized that this was not scalable, so the IETF published "RFC 1518" and "RFC 1519" in 1993 to break the larger blocks down into finer-grained slices for allocation, called Classless Inter-Domain Routing (or "CIDR"... which was subsequently refreshed in 2006 as "RFC 4632".) Network Address Translation ("NAT") was also created.
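CIDR's finer-grained slices are easy to demonstrate with Python's standard ipaddress module: a block the size of an old class B network can be carved into sixteen /20 allocations instead of being handed out whole.

```python
import ipaddress

# CIDR replaces whole classful blocks with arbitrary power-of-two
# slices: here a /16 (a former class B) is carved into /20 allocations.
block = ipaddress.ip_network("172.16.0.0/16")
slices = list(block.subnets(new_prefix=20))

print(len(slices))   # 16 separate allocations from one former class B
print(slices[0])     # 172.16.0.0/20
```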

The Private Internet Addresses were published as part of "RFC 1918" in February 1996, in order to help alleviate the problem of "sustained exponential growth". Service Providers used NAT and CIDR to continue to facilitate the massive expansion of The Internet, using private networks hidden behind a single Internet facing IP Address.
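The three RFC 1918 ranges are exactly what a NAT gateway hides behind a single public address, and checking membership is again a one-liner with the standard ipaddress module:

```python
import ipaddress

# The RFC 1918 private ranges that NAT keeps behind one public address.
PRIVATE = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr):
    """True if addr falls inside one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE)

print(is_rfc1918("192.168.1.10"))   # True - stays behind the NAT
print(is_rfc1918("8.8.8.8"))        # False - publicly routable
```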

In 1998, the IETF formalized "IPv6", a successor protocol for The Internet based upon 128 bit addresses. The thought was that providers would move to IPv6 and sunset IPv4 with its NAT hack.
[Example of a private network sitting behind a public WAN/Internet connection]
Address Exhaustion:
Routing and system vendors had started supporting IPv6, but the vast majority of users continued to use the CIDR and NAT hacks to run the Internet. The Internet, for the most part, had run out of IPv4 addresses, a condition called Address Exhaustion.
The IP address space is managed by the Internet Assigned Numbers Authority (IANA) globally, and by five regional Internet registries (RIR) responsible in their designated territories for assignment to end users and local Internet registries, such as Internet service providers. The top-level exhaustion occurred on 31 January 2011. Three of the five RIRs have exhausted allocation of all the blocks they have not reserved for IPv6 transition; this occurred for the Asia-Pacific on 15 April 2011, for Europe on 14 September 2012, and for Latin America and the Caribbean on 10 June 2014.
Now, over a decade later, people are still using IPv4 with CIDR and NAT, trying to avoid the inevitable migration to IPv6.

[Normal outage flow with an unusual spike on 2014-08-12]

Warning... Warning... Will Robinson!
People were well aware of the problem with CIDR and NAT - address space would continue to become so fragmented over time that routing tables would eventually hit their maximums, crashing segments of The Internet.

Some discussions started around 2007 about how to mitigate this issue over the next half-decade. It was known that there was a limit to the number of routes that routing equipment could handle.
...this _should_ be a relatively safe way for networks under the gun to upgrade (especially those running 7600/6500 gear with anything less than Sup720-3bxl) to survive on an internet with >~240k routes and get by with these filtered routes, either buying more time to get upgrades done or putting off upgrades for perhaps a considerable time.

On May 12, 2014 - Cisco published a technical article warning people of the upcoming event.
As an industry, we’ve known for some time that the Internet routing table growth could cause Ternary Content Addressable Memory (TCAM) resource exhaustion for some networking products. TCAM is a very important component of certain network switches and routers that stores routing tables. It is much faster than ordinary RAM (random access memory) and allows for rapid table lookups.
No matter who provides your networking equipment, it needs to be able to manage the ongoing growth of the Internet routing table. We recommend confirming and addressing any possible impacts for all devices in your network, not just those provided by Cisco.

On June 9, 2014 - Cisco published technical article 117712 on how to deal with the 512K route limit on some of their largest equipment... when the high-speed TCAM memory segment overflows.
When a route is programmed into the Cisco Express Forwarding (CEF) table in the main memory (RAM), a second copy of that route is stored in the hardware TCAM memory on the Supervisor as well as any Distributed Forwarding Card (DFC) modules on the linecards.

This document focuses on the FIB TCAM; however, the information in this document can also be used in order to resolve these error messages:
%MLSCEF-SP-4-FIB_EXCEPTION_THRESHOLD: Hardware CEF entry usage is at 95% capacity for IPv4 unicast protocol
%MLSCEF-DFC4-7-FIB_EXCEPTION: FIB TCAM exception, Some entries will be software switched 
%MLSCEF-SP-7-FIB_EXCEPTION: FIB TCAM exception, Some entries will be software switched
Cisco's solution will steal memory from IPv6 and MPLS labels, but allocate up to 1 Million routes.

On July 25, 2014 - people started reminding others to adjust their routing cache sizes!
As many readers on this list know the routing table is approaching 512K routes.
For some it has already passed this threshold.
How do they know? Well, ordinary people have insight into this through the "CIDR Report"... yes, anyone can watch the growth of The Internet.

[PacketLife.net warning on 2014-05-06 of the 512K limit]

The Day Parts of The Internet Crashed:
Cisco published a Service Provider note "SP360", to note the event.
Today we know that another significant milestone has been reached, as we officially passed the 512,000 or 512k route mark!
Our industry has known this milestone was approaching for some time. In fact it was as recently as May 2014 that we provided our customers with a reminder of the milestone, the implications for some Cisco products, and advice on appropriate workarounds.

Both technical and business journals started noticing the issue, and people noticed The Internet becoming unstable. The Wall Street Journal published on August 13, 2014:
The problem also draws attention to a real, if arcane, issue with the Internet's plumbing: the shrinking number of addresses available under the most popular routing system. That system, called IPv4, can handle only a few billion addresses. But there are already nearly 13 billion devices hooked up to the Internet, and the number is quickly growing, Cisco said.
Version 6, or IPv6, can hold many orders of magnitude more addresses but has been slow to catch on. In the meantime, network engineers are using stopgap measures

The issue was inevitable, but what was the sequence of events?

[BGP spike shown by GBPMon]
One Blip from One Large Provider:
Apparently, Verizon released thousands of small networks into the global routing tables:
So whatever happened internally at Verizon caused aggregation for these prefixes to fail which resulted in the introduction of thousands of new /24 routes into the global routing table.  This caused the routing table to temporarily reach 515,000 prefixes and that caused issues for older Cisco routers.
Luckily Verizon quickly solved the de-aggregation problem, so we’re good for now. However the Internet routing table will continue to grow organically and we will reach the 512,000 limit soon again.
Whether this was a mistake or not is beside the point; this situation was inevitable.
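The arithmetic of de-aggregation explains why a single provider's slip could push the table over the edge. Announcing one aggregated /16 as its component /24s costs 256 routing entries where a single entry sufficed (the prefix below is illustrative, not one of the actual leaked Verizon routes):

```python
import ipaddress

# One /16 announced as /24s: every router in the global table must now
# carry 256 entries instead of 1.
aggregate = ipaddress.ip_network("10.20.0.0/16")
deaggregated = list(aggregate.subnets(new_prefix=24))

print(len(deaggregated))   # 256 routes where one aggregate sufficed
```

A handful of such leaks, on top of a table already hovering near 510,000 prefixes, is enough to cross the 512K TCAM limit.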
In Conclusion:
The damage was done, but perhaps it was for the best. People should be looking at making sure their internet connection is ready for when it happens again. People should be asking questions such as: "why are we still using NAT?" and "when are we moving to IPv6?" If your service provider is still relying upon NAT, they are in no position to move to IPv6, and are contributing to the instability of The Internet.