
Friday, September 25, 2015

IPv4: North American Addresses Exhausted

[IPv4 and IPv6: The 4 Corners of the World,  courtesy Center for Applied Internet Data Analysis]

Abstract:

The TCP/IP Internet was created around 1981, when each participant received an address from a pool of roughly 4 billion. That limit came from the 32 bit address format, chosen at a time when most people were using 8 bit computers. Internet usage is pervasive today, with everything from cell phones to light bulbs being attached, and it was just a matter of time before the pool of addresses was exhausted. Another benchmark was hit today.

[Apartment complex in Gwangju, South Korea]

A simple way to view The Internet is as an apartment complex. Each building may be a different continent, and each apartment has an address. When someone wants to live in that complex, there is a limited number of apartments. In the beginning, anyone can live anywhere, rent is cheap, large blocks of apartments are available for friends to rent together, and life is good. As time goes on, space fills up, and you have to wait until someone leaves or dies to get an address. If the population is ever increasing, there is a problem... people start to double-up or triple-up in an apartment, all sharing a single address, but perhaps adding an "a" or a "b" to the end of the number.

[NAT illustration]

Mitigation Using the Illustration

When IP Addresses on The Internet started getting "tight", providers started to make devices share a single address at each location they served. While this sharing is not optimal, it is what happens every day when people use multiple computers, televisions, tablets, etc. at their homes... the home gets a single IP Address on The Internet, and all the devices share that address through a technology called Network Address Translation (NAT), running on an Internet Router/Firewall. This delayed the problem for many years, since tens of thousands of connections could share a single public IP Address behind an Internet Gateway Router/Firewall running NAT.
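The sharing mechanism described above can be sketched as a toy port-mapping table. This is a simplified illustration, not any vendor's implementation; the addresses and port numbers are made up for the example.

```python
# Toy sketch of Network Address Translation (NAT): many private
# (address, port) pairs are mapped onto one shared public address.

class NatRouter:
    """Keeps a translation table from private endpoints to public ports."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000          # next free public-side port
        self.table = {}                 # (private_ip, private_port) -> public_port

    def translate_outbound(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.table:       # first packet from this endpoint
            self.table[key] = self.next_port
            self.next_port += 1
        # The packet leaves the router carrying the shared public address.
        return (self.public_ip, self.table[key])

nat = NatRouter("203.0.113.5")
print(nat.translate_outbound("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(nat.translate_outbound("192.168.1.11", 51000))  # ('203.0.113.5', 40001)
```

Every device in the home appears to The Internet as 203.0.113.5; only the router's table knows which private device each connection belongs to.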

Trouble with NAT: Mitigation is Not a Solution

The problem is that devices behind NAT cannot talk directly to other devices behind NAT without relaying through a system on The Internet using a real IP Address. Because devices using NAT must communicate through a well known server in The Internet "cloud", applications became more limited in their framework. Furthermore, identifying an end-point on The Internet becomes more difficult: one really does not know who is behind a public IP address, since it could be shared by dozens or thousands of devices, potentially anywhere in the world! When managing devices on The Internet, a dedicated IP Address is always preferable for troubleshooting; otherwise a physical presence may be needed to investigate a problem. Some secure management protocols break with NAT, since the source or destination address differs from what it started as and the packet must be modified along the way, which raises security concerns. For everyday people, NAT is a solution, but not without drawbacks. Public IP Addresses continue to be eaten away.

[Warning sign from Wikimedia]

The Warning:

In July of 2015, the American Registry for Internet Numbers ran out of larger blocks of addresses to provide. If you needed a presence on The Internet (i.e. an Internet Service Provider, a Web Hosting company, a banking institution deploying ATMs, etc.) and had a large project, you could only get a small number of addresses in North & Central America.

[Empty bottles courtesy The Register]

Running Dry:

As of today in September 2015, North America has officially run out of addresses, leaving large numbers of devices that need to participate on The Internet high-and-dry. North America was not the first region to run dry: Latin America and the Caribbean ran out of addresses in 2014, Europe and the Middle East in 2012, and Asia-Pacific in 2011. Only Africa still has addresses left, projected to be exhausted in 2019 at the current rate of consumption.

[Structure of IPv4 and IPv6 Packets]

The Solution:

In a world where computers, and even cell phones, are 64 bit, using a 32 bit number to define addresses for communication over The Internet is antiquated. This original address size was part of the Internet Protocol, version 4 (IPv4) definition. Over a decade ago, a newer address format was created, called IPv6, using 128 bit addresses. Movement to IPv6 is the ultimate solution: there are enough addresses in the 128 bit space to last for a very long time. Various governments in Asia, such as Hong Kong and Japan, being among the first to run out, already started the push to IPv6. Providers in Europe, like British Telecom, started the push to IPv6. Internet Service Providers, like Comcast, are deploying IPv6 in the United States.
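The difference in scale between the two address formats is simple arithmetic, which a couple of lines make concrete:

```python
# Address-space arithmetic behind the IPv4-to-IPv6 move.
ipv4_addresses = 2 ** 32    # 32 bit addresses
ipv6_addresses = 2 ** 128   # 128 bit addresses

print(f"IPv4: {ipv4_addresses:,}")     # 4,294,967,296 (~4 billion)
print(f"IPv6: {ipv6_addresses:.2e}")   # ~3.40e+38
```

Four billion addresses cannot cover a world of phones and light bulbs; 2^128 is large enough that exhaustion is no longer a practical concern.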

The Conclusion:

As providers move to IPv6, this delays the fate of companies bound to IPv4, since they may receive recycled addresses or purchase formerly assigned addresses from providers who have already moved their infrastructure to IPv6. Solution providers moving to IPv6 will gain the benefit of peer-to-peer communication over The Internet for their applications, while legacy IPv4 solution providers will incur greater costs by having to go through a central bottleneck in The Internet "cloud". If there is ever a point in time where innovation and crisis meet - this is that opportunity, don't miss it!

Thursday, August 14, 2014

The [Almost] Great Internet Crash: 2014-08-12 is 512K Day

Abstract:
Since the creation of computers, people have been attaching computer equipment through different means. Proprietary cables and protocols were constantly being designed, until the advent of The Internet. Based upon TCP/IP, there seemed to be little to limit its growth, until the 32 bit address range started to "run out" as more people in more countries wanted to come on-line. On August 12, 2014, an event affectionately referred to as "512K Day" occurred, a direct result of the IPv4 hacks used to keep the older address scheme alive until IPv6 could be implemented.


History:
The Internet was first created with the Internet Engineering Task Force (IETF) publishing of "RFC 760" in January 1980, later replaced by "RFC 791" in September 1981. These defined the 32 bit version of the Internet Protocol, called "IPv4". During the first decade of The Internet, addresses were allocated in basic "classes", according to the size of the applicant's network.

As corporations and individuals started to use The Internet, it was realized that this was not scalable, so the IETF published "RFC 1518" and "RFC 1519" in 1993 to break the larger blocks down into more fine-grained slices for allocation, called Classless Inter-Domain Routing (or "CIDR"... subsequently refreshed in 2006 as "RFC 4632".) Network Address Translation ("NAT") was also created.
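Python's standard `ipaddress` module can demonstrate the fine-grained slicing that CIDR made possible; the prefix below is a documentation block used purely for illustration:

```python
import ipaddress

# CIDR lets a registry slice one block into finer allocations,
# instead of handing out whole "classful" networks.
block = ipaddress.ip_network("198.51.100.0/24")
slices = list(block.subnets(new_prefix=26))   # four /26 allocations
print([str(s) for s in slices])
# ['198.51.100.0/26', '198.51.100.64/26', '198.51.100.128/26', '198.51.100.192/26']
```

One /24 that a classful scheme would have given to a single applicant can instead serve four smaller applicants as /26 slices.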

The Private Internet Addresses were published as part of "RFC 1918" in February 1996, in order to help alleviate the problem of "sustained exponential growth". Service Providers used NAT and CIDR to continue to facilitate the massive expansion of The Internet, using private networks hidden behind a single Internet facing IP Address.
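The RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) are recognized by the same `ipaddress` module, which makes the private/public split easy to check; 8.8.8.8 is simply a familiar public address used here for contrast:

```python
import ipaddress

# RFC 1918 addresses are reserved for private networks behind NAT;
# is_private reports whether an address falls in a reserved range.
for addr in ["10.1.2.3", "172.20.0.1", "192.168.1.10", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
```

The three private addresses can be reused in every home and office in the world, because they are never routed on the public Internet.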

In 1998, the IETF formalized "IPv6" as a successor protocol for The Internet, based upon 128 bit addresses. The thought was that providers would move to IPv6 and sunset IPv4 with its NAT hack.
[Example of a private network sitting behind a public WAN/Internet connection]
Address Exhaustion:
Routing and system vendors had started supporting IPv6, but the vast majority of users continued to use the CIDR and NAT hacks to run The Internet. The Internet, for the most part, had run out of IPv4 Addresses, a condition called Address Exhaustion.
The IP address space is managed globally by the Internet Assigned Numbers Authority (IANA) and by five Regional Internet Registries (RIRs), responsible in their designated territories for assignment to end users and local Internet registries, such as Internet service providers. The top-level exhaustion occurred on 31 January 2011. Three of the five RIRs have exhausted allocation of all the blocks they have not reserved for IPv6 transition; this occurred for Asia-Pacific on 15 April 2011, for Europe on 14 September 2012, and for Latin America and the Caribbean on 10 June 2014.
Now, over a decade later, people are still using IPv4 with CIDR and NAT, trying to avoid the inevitable migration to IPv6.

[Normal outage flow with an unusual spike on 2014-08-12]

Warning... Warning... Will Robinson!
People were well aware of the problems with CIDR and NAT - address space would continue to become so fragmented over time that routing tables would eventually hit their maximums, crashing segments of The Internet.

Some discussions started around 2007 about how to mitigate this issue over the next half-decade. It was known that there is a limited number of routes that routing equipment can handle.
...this _should_ be a relatively safe way for networks under the gun to upgrade (especially those running 7600/6500 gear with anything less than Sup720-3bxl) to survive on an internet with >~240k routes and get by with these filtered routes, either buying more time to get upgrades done or putting off upgrades for perhaps a considerable time.

On May 12, 2014 - Cisco published a technical article warning people of the upcoming event.
As an industry, we’ve known for some time that the Internet routing table growth could cause Ternary Content Addressable Memory (TCAM) resource exhaustion for some networking products. TCAM is a very important component of certain network switches and routers that stores routing tables. It is much faster than ordinary RAM (random access memory) and allows for rapid table lookups.
No matter who provides your networking equipment, it needs to be able to manage the ongoing growth of the Internet routing table. We recommend confirming and addressing any possible impacts for all devices in your network, not just those provided by Cisco.

On June 9, 2014 - Cisco published technical article 117712 on how to deal with the 512K route limit on some of their largest equipment... when the high-speed TCAM memory segment overflows.
When a route is programmed into the Cisco Express Forwarding (CEF) table in the main memory (RAM), a second copy of that route is stored in the hardware TCAM memory on the Supervisor as well as any Distributed Forwarding Card (DFC) modules on the linecards.

This document focuses on the FIB TCAM; however, the information in this document can also be used in order to resolve these error messages:
%MLSCEF-SP-4-FIB_EXCEPTION_THRESHOLD: Hardware CEF entry usage is at 95% capacity for IPv4 unicast protocol
%MLSCEF-DFC4-7-FIB_EXCEPTION: FIB TCAM exception, Some entries will be software switched 
%MLSCEF-SP-7-FIB_EXCEPTION: FIB TCAM exception, Some entries will be software switched
Cisco's workaround steals memory from IPv6 and MPLS labels, but allocates room for up to 1 million IPv4 routes.
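The failure mode is simple arithmetic: once the global table crosses the TCAM allocation, excess routes fall back to slow software switching. A hypothetical monitoring check (the function name and messages are invented for illustration; the 512,000 limit and 95% threshold come from the Cisco material quoted above) might look like:

```python
# Hypothetical alert check against a fixed hardware route limit.
TCAM_LIMIT = 512_000   # default IPv4 FIB TCAM allocation on the affected platforms
WARN_AT = 0.95         # the FIB_EXCEPTION_THRESHOLD message fires at 95% capacity

def check_fib(route_count):
    usage = route_count / TCAM_LIMIT
    if route_count >= TCAM_LIMIT:
        return "EXCEPTION: routes beyond the limit are software switched"
    if usage >= WARN_AT:
        return f"WARNING: FIB at {usage:.0%} capacity"
    return "OK"

print(check_fib(490_000))   # warning: past the 95% threshold
print(check_fib(515_000))   # exception: roughly 512K Day's peak prefix count
```

At ~515,000 prefixes the table was past the limit, which is why older routers began software-switching traffic and segments of The Internet slowed or dropped.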

On July 25, 2014 - people started reminding others to adjust their routing cache sizes!
As many readers on this list know the routing table is approaching 512K routes.
For some it has already passed this threshold.
How do they know? Anyone has insight into this through the "CIDR Report"... yes, anyone can watch the growth of The Internet.

[PacketLife.net warning on 2014-05-06 of the 512K limit]

The Day Parts of The Internet Crashed:
Cisco published a note on its Service Provider blog "SP360" marking the event.
Today we know that another significant milestone has been reached, as we officially passed the 512,000 or 512k route mark!
Our industry has known this milestone was approaching for some time. In fact it was as recently as May 2014 that we provided our customers with a reminder of the milestone, the implications for some Cisco products, and advice on appropriate workarounds.

Both technical journals and business journals started covering the issue, as people noticed The Internet becoming unstable. The Wall Street Journal published on August 13, 2014:
The problem also draws attention to a real, if arcane, issue with the Internet's plumbing: the shrinking number of addresses available under the most popular routing system. That system, called IPv4, can handle only a few billion addresses. But there are already nearly 13 billion devices hooked up to the Internet, and the number is quickly growing, Cisco said.
Version 6, or IPv6, can hold many orders of magnitude more addresses but has been slow to catch on. In the meantime, network engineers are using stopgap measures

The issue was inevitable, but what was the sequence of events?

[BGP spike shown by BGPMon]
One Blip from One Large Provider:
Apparently, Verizon released thousands of small networks into the global routing tables:
So whatever happened internally at Verizon caused aggregation for these prefixes to fail which resulted in the introduction of thousands of new /24 routes into the global routing table.  This caused the routing table to temporarily reach 515,000 prefixes and that caused issues for older Cisco routers.
Luckily Verizon quickly solved the de-aggregation problem, so we’re good for now. However the Internet routing table will continue to grow organically and we will reach the 512,000 limit soon again.
Whether this was a mistake or not is not the issue; this situation was inevitable.
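The de-aggregation BGPMon describes is easy to reproduce in miniature: contiguous /24 prefixes that should be announced as one aggregate instead appear as thousands of individual routes. The prefix below is arbitrary and chosen only for the example:

```python
import ipaddress

# De-aggregation vs. aggregation: 256 contiguous /24 announcements
# carry no more reachability information than the single covering /16.
deaggregated = list(ipaddress.ip_network("203.0.0.0/16").subnets(new_prefix=24))
print(len(deaggregated))   # 256 separate routing-table entries

collapsed = list(ipaddress.collapse_addresses(deaggregated))
print(collapsed)           # [IPv4Network('203.0.0.0/16')] - one entry
```

When a provider's aggregation fails, every router on The Internet pays the memory cost of the expanded form, which is exactly how the table briefly spiked past 512K entries.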

In Conclusion:
The damage was done, but perhaps it was for the best. People should be making sure their internet connections are ready for when it happens again, and asking questions such as: "why are we still using NAT?" and "when are we moving to IPv6?" If your service provider is still relying upon NAT, they are in no position to move to IPv6, and are contributing to the instability of The Internet.

Wednesday, December 18, 2013

Malware: Targeting Linux Platforms



[Screenshot courtesy ARS Technica]
This is not the first case of such worms targeting Internet devices; in this case, the targets are Intel based only.
http://arstechnica.com/security/2013/11/new-linux-worm-targets-routers-cameras-internet-of-things-devices/
Researchers have discovered a Linux worm capable of infecting a wide range of home routers, set-top boxes, security cameras, and other consumer devices that are increasingly equipped with an Internet connection. Linux.Darlloz, as the worm has been dubbed, is now classified as a low-level threat, partly because its current version targets only devices that run on CPUs made by Intel

[Screenshot courtesy Symantec]
A short article from Security company Symantec discussing the latest WORM targeting The Internet.
http://www.symantec.com/connect/blogs/linux-worm-targeting-hidden-devices
Symantec has discovered a new Linux worm that appears to be engineered to target the “Internet of things”. The worm is capable of attacking a range of small, Internet-enabled devices in addition to traditional computers. Variants exist for chip architectures usually found in devices such as home routers, set-top boxes and security cameras. Although no attacks against these devices have been found in the wild, many users may not realize they are at risk, since they are unaware they own devices that run Linux.

Monday, June 3, 2013

Solaris, Verizon FiOS, and The Internet


Solaris and Home Users
An interesting article hit the wires. A couple of Solaris ZFS platforms in someone's home, with 24TB of data storage each, are apparently the culprits behind a lot of bandwidth consumption. It is interesting how Solaris continues to be a motivating factor on The Internet after all these years. Verizon is not very happy with the massive data consumption on their FiOS Internet connected network.

[houkouonchi's data usage in May 2013]
How Much is Unlimited?
Houkouonchi in California apparently pays residential rates of $208/month for 2 lines. Verizon started offering a faster residential option, which he took, since his old business option had seen neither a discount nor an incremental speed upgrade. What would any reasonable home user do? Well, he switched from a fast business rate to a faster residential rate to get more bandwidth! LOL! Verizon wants him to move back to a $400/month business service.

Once the telephone company sees an outlier like this in their usage, they start asking questions. Sometimes it can be spammers or illegal pirating / file sharing activity. The primary driver of bandwidth usage for this user is hosting a server used by friends & family on the internet. As soon as this was discovered, the Terms of Service were deemed violated, and he needed to move to a Business subscription.

[Brak710 data usage in May 2013]
Not isolated:
Another user, Brak710 from Pittsburgh, PA, received the same type of inquiry. Apparently, he is purchasing 4 different FiOS network connections, all from neighboring properties. High bandwidth usage also drove his basement servers to Business rates.
Lessons Learned:
If you have equipment at home, keep in mind that Network Carriers are monitoring their networks. Regular abusive usage patterns may cause your monthly rate to rise; keep usage reasonable if you want to keep your residential access pricing!

Friday, March 23, 2012

Free 4G Wireless Internet


Free 4G Wireless Internet?

Abstract:
Wireless cellular or packet protocols are typically described by different categories, the higher the category the faster the performance. The categories are loosely defined by the International Telecommunications Union-Radio communications sector (ITU-R) and organized by Generation. The first vendor has appeared on the market to support free 4G.


Wireless History:

New wireless generations seem to be appearing regularly every 10 years since the 1980's, with the latest being 4G.
0G - Mobile Radio Telephone, appearing in 1946
1G - Analog, 22 kb/s-56 kb/s, appearing in 1981
2G - Digital, 56 kb/s-236.8 kb/s, appearing in 1992
3G - Multi-Media, 200 kb/s peak rate, appearing in 2001
4G - Packet based Internet Protocol, 1 Gb/s peak rate, 2010-2011

It should be noted: there is a wide gap between 3G and 4G as far as capacity is concerned. There are many intermediate steps, which vendors have branded 3.5G, 3.75G, or even 4G (if the technology has on its "roadmap" the ability to meet 4G specifications, as WiMAX has done.)


Internet Access:

The Internet was a term coined for access to the U.S. Department of Defense's TCP/IP network. Early on, access was provided through cooperation between different U.S. government organizations as well as through the public and private university systems within the United States.

The regular American public started gaining access to The Internet in the 1990's via dial-up access, providing 300 b/s-56 kb/s. Various corporations managed to raise enough investment to provide this access. In the late 1990's, free dial-up internet services started to become available, through corporations like NetZero and FreeServe. As users started to migrate from dial-up to broadband (see later), lawsuits started to be filed between major players in a shrinking market (like NetZero and Juno), resulting in consolidation and the creation of United Online (NetZero and Juno together formed the second largest internet access provider.) Towards the end of popular dial-up access to The Internet, major providers included: AOL, United Online, MSN, Earthlink, and AT&T Worldnet.

Performance was enhanced in the 2000's via broadband or high-speed access, commonly via DSL, Satellite, and Cable. The telco market was regulated, forcing telcos to allow access from third-party internet service providers (ISPs.) In order to encourage quicker adoption of faster technology, the regulations were loosened, consolidating internet access to several cable, several telco, and several satellite providers. Free broadband providers were never able to become profitable.


Internet Access and Wireless Convergence:

Internet access became possible via diverse wireless telco networks: as the wireless telephone companies became more diverse, wireless data access became more desirable, and the back-haul links to the cell towers became more robust. Internet access based upon cellular networks started becoming more competitive.


Free Internet Access over Wireless:

The local area network WiFi protocol has become nearly ubiquitous, with locations offering free internet access via WiFi in hotels, coffee shops, book stores, and even automobile service stations.

The drawback to this methodology is that people must remain in a fairly confined area. This restriction has been pretty reasonable for many people, just as "free beer" may only be available at a frat house.


Free internet access provider NetZero helped to pave the way for free internet in the dial-up era. United Online is now prepared to offer free internet access over 4G via its NetZero subsidiary - with the purchase of equipment and for a period of 1 year (for 200MBytes of data.) After the first year, the $9.95 plan must be purchased, providing 500MBytes of data. Using WiMAX technology, now being billed as a 4G technology, people can walk or drive around and have access to the internet.

The drawback is clear: with the purchase of the hotspot or USB dongle, Internet is only free for 1 year. No one has a right to complain about how long something is free; the consumer just needs to decide how good of a deal it is for them.

Network Management Connection:
With the rapid expansion of wireless as an access mode and the rapid cost reduction in internet access for wireless devices, inexpensive and massively scalable network management tools will become a requirement.