The Internet has become an integral part of everyday life, whether people realize it or not. From its humble origins, it has grown into an unprecedented expansion of human capacity, a pinnacle of achievement.
Like all achievements, however, the Internet is not perfect. Its core, open protocols were designed decades ago, and performance and security issues have forced many businesses and users to rely on overlay or private networks rather than the public Internet. Similarly, the advent of big tech has sparked debate over the control and controversial use of data within walled gardens.
Such shortcomings, however, have not gone unaddressed: the Internet is continually evolving.
I had a chance to speak with William B. Norton, Co-Founder of NOIA Network, who has experienced some of the Internet’s critical stages of development first-hand since the 1980s. His insights into the historical and current state of the Internet are enlightening, and his ongoing work with NOIA Network reveals a surging trend to build a better Internet.
If you’re looking for some deep insights and context into where the Internet is today, Bill’s perspective is a must-read.
Joresa Blount: You’ve been a pioneer in peering technologies and the Internet’s protocol design for decades with co-founder roles at Equinix, and now NOIA Network. Can you elaborate on your experience with the early stages of the Internet and how its development, both positive and negative, has manifested itself over the last couple of decades?
Bill Norton: The Internet has gone through a series of transitions over the past decades. I started working on the Internet in 1988 when I wrote the network management software that monitored the core of the Internet, the government-sponsored NSFNET. The friction back then was that, as a government-sponsored activity, commercial ISPs were not allowed to connect.
The NSFNET had an Authorized Use Policy (AUP), allowing only certain supercomputing centers, higher education, and research organizations to connect. As a centrally managed system, there were politics involved (Senators and Congressmen lobbied) in site expansion decisions. No Al Gore jokes here — Al Gore and Richard Boucher were strong advocates for advancing the Internet.
When it became clear the government didn’t need to own and operate the NSFNET for the National Science Foundation-funded organizations to be able to communicate, a transition plan to a commercial Internet was circulated. Instead of an NSFNET connection, regional networks were given funds to buy Internet services from one of the many competing Network Service Providers (NSPs) that had connections to Network Access Points (NAPs). These NAPs served as network interconnection points, to ensure that NSF-funded networks on one NSP network could reach NSF-sponsored networks on a competing NSP network. This mandate ensured that NSPs bought into the new architecture — otherwise, they would not qualify for receiving the NSF funds.
When the NSF-funded traffic became a small part of the commercial operator’s traffic, most NSPs pulled out of the NAPs, and the “NSP” designation was abandoned. The generic “Internet Service Provider” designation was used instead, and they interconnected with each other outside of the NAPs.
With the continued exponential growth of the Internet, and with the NSF no longer providing material traffic or funding, what could the NSF do but let the Internet evolve outside of its control? The distributed commercial Internet left the nest.
Joresa: How has the commercial Internet evolved from those early origins, and what does the modern version still need?
Bill: I chaired the NSFNET Regional Technicians meetings during the transition from NSFNET to the commercial Internet and wrote the business plan for a self-sustaining (not government-funded) North American Network Operators Group (NANOG), and I became its first chairman (1995-1998).
As NANOG chair, I worked closely with the ISPs, Internet Exchange Point Operators, and Content Providers, mostly looking for content of interest for the NANOG meetings. Unlike in the NSFNET days, where folks eagerly shared performance issues, new protocols, and new attachments to the Internet, the commercial ISPs did not share openly.
They said, “If I share my new customer attachments, my competitors will be able to cherry-pick my best customers. If I share performance problems and solutions that I find, my competitor’s salespeople will only highlight the problems to discredit my service.”
I learned that commercial operators would do that which is in their own interests, and commercial interests drive less openness. This was a problem for me because I was having a hard time finding engineers able to speak. I also knew, as I know today, that more evolution is needed, and that more light must be shed on problems in order for them to be addressed.
During the NANOG breaks, I started noticing pairs of network engineers in the back of the room huddled around their laptops. They were configuring “Peering” sessions between their respective networks. Why? They didn’t want to pay AT&T for their traffic to each other, so they configured their routers to exchange their customer traffic directly instead of sending it through AT&T. By doing so, they didn’t have to pay AT&T for that traffic exchange, and it turned out to be faster, too!
As more and more of these “Tier 2” networks peered, dependence on the largest “Tier 1” ISPs decreased.
I understood that commercial interests aren’t all bad. ISPs peer to reduce costs and to provide better connectivity. The Internet gets better connectivity because of this alignment of interests.
At this point, I finished my MBA from the Michigan Business School and went to help Jay Adelson and Al Avram launch an Internet Exchange Point company eventually named Equinix. My title was Co-Founder and Chief Technical Liaison, which meant that I spent 90% of my time on the road attending Internet conferences (including NANOG) to help evangelize peering and Equinix as a peering data center.
The largest content providers like Yahoo! started to peer around this stage. The content providers are a different species of player in the ecosystem. To them, the most important thing about peering was the improved end-user experience, whereas ISPs primarily peered to save money.
In the commercial Internet, eyeballs love content and content loves eyeballs. Aligned interests drive behavior, and the Internet, like a living organism, morphs and adapts to the stimuli applied.
This fat middle trend accelerated when the cable companies started peering with each other. At the time, peer-to-peer traffic (primarily Napster) was filling up their network pipes, and when they ordered new capacity, it was filled up almost immediately by pent-up demand. The cable companies as a species tended to peer to reduce costs and, at this stage of evolution, peered openly with other content providers and ISPs.
And so, the Internet continued along its trajectory from a centralized backbone to a hierarchy of competing backbones and on to the distributed, more “meshy” Internet topology of today.
Network “meshiness” means there are many more paths possible through the Internet, but the routing protocols still only use the one shortest path to the destination, without taking into account the quality of the network path. And so, routers will happily forward network traffic across links that are congested with other people’s traffic. We experience congestion as glitchy video, garbled audio, poor cloud gaming, slow load times for web pages, etc.
Today, with more companies depending on the public Internet for mission-critical activities, the Internet increasingly needs to be able to detect and route around areas of congestion.
Today’s Internet needs “smart” routing.
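The contrast Bill draws, between routing on the shortest path alone and routing on measured path quality, can be sketched with a toy topology. The node names and latency figures below are purely illustrative:

```python
import heapq

def best_path(graph, src, dst, weight):
    """Dijkstra's algorithm; `weight` maps an edge's metrics to a cost."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, metrics in graph[node].items():
            nd = d + weight(metrics)
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy topology: A-B-D has the fewest hops, but the A-B link is congested,
# so its effective latency (including queueing delay) is high.
graph = {
    "A": {"B": {"latency_ms": 100.0}, "C": {"latency_ms": 10.0}},
    "B": {"A": {"latency_ms": 100.0}, "D": {"latency_ms": 5.0}},
    "C": {"A": {"latency_ms": 10.0}, "E": {"latency_ms": 10.0}},
    "E": {"C": {"latency_ms": 10.0}, "D": {"latency_ms": 10.0}},
    "D": {"B": {"latency_ms": 5.0}, "E": {"latency_ms": 10.0}},
}

# Default Internet behavior: fewest hops, link quality ignored.
hop_path = best_path(graph, "A", "D", weight=lambda m: 1)
# "Smart" routing: weight each link by its measured latency instead.
smart_path = best_path(graph, "A", "D", weight=lambda m: m["latency_ms"])
```

The hop-count metric picks the congested two-hop path through B, while the latency-weighted metric routes around it through C and E, which is exactly the kind of quality-aware decision today's routing protocols don't make.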
Joresa: Can you give a high-level overview for the audience on why there is an ongoing push to revamp some of the Internet’s core protocols?
Bill: Internet traffic today is routed using “hot potato routing,” which means that Internet Service Providers pass your traffic off to the next ISP as quickly as possible. In this way, traffic is routed across the Internet using the shortest network path, not necessarily the best network path. Routers will happily forward Internet traffic over congested paths, so some of your Internet traffic may get dropped.
Some in the community believe the answer to congestion is more bandwidth. The problem is that the load and, therefore, the congestion point move around. We believe a more dynamic approach is required, one that morphs and adapts as the Internet morphs and adapts. The good news is that there are underutilized network paths that can be used, and we have developed the software to spread the load more efficiently across the rich tapestry of existing Internet Service Providers.
Joresa: NOIA Network recently released its technical paper, co-authored by you and Jonas Simanavicius, also of NOIA Network. Can you provide an overview of what NOIA Network is striving to achieve with its “Programmable Internet” using a unique blend of segment routing, IPv6, and blockchain technology?
Bill: In a nutshell, NOIA Network is building software that enables a better Internet. The software continuously measures alternative paths through the Internet using the bandwidth of others in the system. When a better network path is found, it is used instead of the default Internet routed path. You earn NOIA coins for sharing, and you spend NOIA coins when you use bandwidth. Blockchain is used as a distributed ledger to track sharing and usage to ensure fairness.
The system is enabled by the Distributed Internet Transit Exchange (DITEX), a database of all global network segments in the system along with their performance characteristics. Applications use DITEX to program a better network path into the Internet packets, so traffic is forwarded along a better network path. We call it the “Programmable Internet.”
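The selection step Bill describes, comparing the default path against alternatives from the DITEX directory, might look roughly like this sketch. The names (`Segment`, `pick_path`) and the scoring weights are my own illustration, not NOIA's actual API:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    relay: str         # Internet-attached device that can forward traffic
    latency_ms: float  # measured latency via this relay
    loss_pct: float    # measured packet loss on the segment

def score(latency_ms, loss_pct):
    # Simple quality score: packet loss is penalized heavily
    # relative to latency (the 100x weight is illustrative).
    return latency_ms + 100.0 * loss_pct

def pick_path(default_latency_ms, default_loss_pct, directory):
    """Return the relay to use, or None to keep the default Internet path."""
    best = min(directory, key=lambda s: score(s.latency_ms, s.loss_pct))
    if score(best.latency_ms, best.loss_pct) < score(default_latency_ms, default_loss_pct):
        return best.relay
    return None

# Hypothetical directory entries with continuously refreshed measurements.
directory = [
    Segment("relay-fra", 42.0, 0.0),
    Segment("relay-ams", 55.0, 0.1),
]
choice = pick_path(default_latency_ms=70.0, default_loss_pct=0.5, directory=directory)
```

Here the default routed path scores worse than the measured alternative through `relay-fra`, so traffic would be programmed onto that segment instead.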
There are two flavors of the Programmable Internet. The retail model, the public Segment Routing Wide Area Network (“public SR-WAN”), lets anyone share and use spare network capacity, earning or spending NOIA coins to do so.
The private SR-WAN model is for the enterprise market, and uses similar technology but is driven by operators using high-end routers in network-dense data centers instead of using general-purpose computers.
Joresa: NOIA recently had a software tool for segment routing incorporated into Cisco’s open-source segment routing repository, which is accessed by businesses around the world. What specifically is segment routing, and why is it gaining so much traction as a modern variant of source routing?
Bill: Segment Routing introduces traffic engineering capabilities to applications, a capability traditionally available only inside networks.
Source routing has been around for decades, but it wasn’t of much use without another Internet-attached device to send your traffic through. Segment Routers provide that function.
The other requirement is to have a directory of these Internet-attached devices. The Distributed Internet Transit Exchange (DITEX) provides that directory complete with real-time network performance characteristics.
With segment routing, the network path is specified in the packet, and the segment routers forward your traffic along a path you specify instead of the default Internet path. This project is garnering a lot of excitement because finally, all three components are available.
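The mechanics can be illustrated with a minimal simulation of segment-list forwarding. This is a sketch of the semantics only, not an SRv6 implementation; the node names are hypothetical:

```python
# The sender writes the path into the packet; each segment router forwards
# toward the next segment rather than consulting its own shortest-path table.

def forward(packet):
    """Simulate one segment-router hop: advance to the next segment."""
    if packet["segments_left"] == 0:
        return packet["destination"]  # final delivery
    packet["segments_left"] -= 1
    return packet["segment_list"][packet["segments_left"]]

# Sender chooses a path through two intermediate segment routers.
packet = {
    "destination": "server.example",
    # The segment list is stored in reverse order, echoing SRv6's
    # segment routing header, and segments_left counts down through it.
    "segment_list": ["server.example", "sr-node-2", "sr-node-1"],
    "segments_left": 3,
}

hops = []
while True:
    nxt = forward(packet)
    hops.append(nxt)
    if nxt == packet["destination"] and packet["segments_left"] == 0:
        break
```

The packet visits `sr-node-1`, then `sr-node-2`, then the destination, in the order the sender specified, regardless of what the default routed path would have been.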
Joresa: Can you elaborate on what DITEX is and how it folds into NOIA’s broader vision for a more robust Internet?
Bill: Since the Distributed Internet Transit Exchange (DITEX) provides a database of global network segments and their performance characteristics, one can construct a network using the best network segments regardless of who owns the underlying technology. With the blockchain as the distributed ledger, settlement is frictionless, allowing for monetization of underutilized network capacity. Network operators can list their segments in the DITEX along with the premium they require.
In the white paper, I cite “Spread Networks,” which sells premium network capacity between Chicago and New York’s financial district. This bandwidth is expensive since they literally drilled through mountains to provide this lower latency path for program trading. There are 12 hours outside the business day when the capacity is underutilized. A network like Spread could monetize this path by attaching our segment router code on either end of the fiber. Since paths are constructed using the lowest-latency segments, this path would naturally attract traffic and therefore earn more coins during these off-hours.
Good network segments attract more traffic, and more traffic earns more NOIA coin. The Internet is more robust because we are now able to leverage network paths that were previously unavailable, and this is pulling traffic from slower, perhaps congested paths.
You can think of the NOIA Network solution as WAZE for the Internet. As a side effect of communicating with cars, WAZE discovered highway congestion by cars not moving. WAZE then guides cars to better-performing alternative roadways. The NOIA Network software identifies when you are using a congested path across the Internet as indicated by packet loss and jitter. NOIA Network software then uses alternative paths for your traffic leveraging the better paths of others. The DITEX provides that global view.
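A minimal sketch of that detection step, assuming probe packets sent at fixed intervals, using the loss rate plus a simple jitter proxy. The thresholds and function name are illustrative, not NOIA's actual parameters:

```python
def path_is_congested(sent, received_times,
                      loss_threshold=0.02, jitter_threshold_ms=30.0):
    """sent: probes sent; received_times: arrival times (ms) of those received."""
    loss = 1.0 - len(received_times) / sent
    if len(received_times) < 2:
        return loss > loss_threshold
    # Mean absolute deviation of inter-arrival gaps as a simple jitter proxy.
    gaps = [b - a for a, b in zip(received_times, received_times[1:])]
    mean_gap = sum(gaps) / len(gaps)
    jitter = sum(abs(g - mean_gap) for g in gaps) / len(gaps)
    return loss > loss_threshold or jitter > jitter_threshold_ms

# Probes sent every 20 ms on a healthy path: no loss, steady arrivals.
smooth = path_is_congested(sent=5, received_times=[0.0, 20.0, 40.0, 60.0, 80.0])
# A congested path: one probe lost, the rest arriving erratically.
congested = path_is_congested(sent=5, received_times=[0.0, 55.0, 70.0, 130.0])
```

When either signal crosses its threshold, the software would steer traffic onto one of the alternative paths tracked in the DITEX.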
In this way, the incentives are aligned for sharing Internet bandwidth to provide a better Internet for everyone.
Joresa: Amid the flurry of recent data scandals, concerns over government surveillance, and abrogation of net neutrality, people are increasingly worried about the adverse consequences of a more centralized Internet topology. How do you envision Internet decentralization unfolding over the next 5 to 10 years when so much of big tech value capture relies on controlling user data and Internet gateway points are dominated by an exclusive set of firms and governments?
Bill: Over the next 5 to 10 years, I expect to see a lot more innovative blockchain solutions, owned by nobody or owned by everyone using it, depending on how you look at it.
The Internet ecosystem continues to morph and adapt to the stimuli placed upon it. Blockchain technology represents a new species in the ecosystem, a species set apart because it is a self-sustaining and autonomous organism. As long as computers are using the blockchain, it will continue to live on.
Unlike services that have been curtailed or shut down by authorities, there is no single owner once a blockchain is released into the wild. There is no one to coerce to hand over encryption keys. There is no one place to tap, as it is distributed and dynamic, with new nodes coming and going. That makes this organism particularly interesting from an Internet ecosystem point of view. Blockchain is answering the needs of the user base, and its use provides energy for its continued evolution.
This species is particularly well suited to today’s concerns about user privacy and user control over data.