This One Clause In The New Net Neutrality Regs Would Be A Fiasco For The Internet

POST WRITTEN BY
Nicholas Weaver

I don't trust Internet Service Providers. I've focused much of my research since 2008 on the ways the Internet fails due to ISP misbehavior, including detecting how ISPs can inject ads into content, how ISPs blocked BitTorrent, how ISPs have manipulated a key Internet protocol for ads and profit, and how network carriers inject tracking into user traffic. Those who want to measure the network for themselves can download the free (and ad-free) Android tool I helped develop, Netalyzr.

If this sordid history of ISP shenanigans doesn't make you believe in the need for some sort of common carrier regulation, I don't know what would. But the FCC's regulations include at least one "kill the Internet" clause: specifically, a single sentence stating "Disclosures must also include packet loss as a measure of network performance".

If ISPs optimize to minimize loss (the only performance metric mentioned in the press release!), this will kill interactive video like Skype, FaceTime, and Google Hangouts, and it will decimate high-interactivity online games, so say goodbye to League of Legends, CounterStrike, Call of Duty, and Defense of the Ancients. Unless the FCC removes this clause, the regulations are likely to seriously damage the Internet.

To understand why optimizing for minimum loss would be a disaster, we first need to start with some Internet basics. The Internet is a "best effort packet switched" network: it breaks your communication into little pieces (called packets) and sends them towards the destination. At each hop along the way, a router attempts to forward the packet toward its destination. But if a packet gets lost because a link was busy or full, it's no big deal: packets get "dropped" all the time, and the endpoints are expected to cope.

On top of the Internet, two primary protocols carry the bulk of the traffic: TCP and UDP. TCP is a reliable stream protocol and carries most of that traffic. It turns the Internet into a series of lossless, contiguous data flows: the sender recovers from the occasional packet loss by retransmitting data, and the receiver waits until it can deliver all data in order. All your web surfing, YouTube videos, and Netflix use TCP.
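To make the stream abstraction concrete, here is a minimal Python sketch (the local address and port are placeholders, not anything from the FCC rules or a real service): the sender just writes a megabyte of data, the operating system's TCP stack handles any retransmission and reordering, and the receiver sees a complete, in-order byte stream.

```python
# Minimal sketch: TCP hands each side a lossless, ordered byte stream.
# The kernel's TCP stack retransmits any dropped packets; the application
# never sees the loss. The address below is a placeholder for this example.
import socket
import threading

HOST, PORT = "127.0.0.1", 9000
ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                   # tell the client it is safe to connect
        conn, _ = srv.accept()
        with conn:
            total = 0
            while True:
                chunk = conn.recv(4096)
                if not chunk:         # sender closed the stream
                    break
                total += len(chunk)   # data arrives complete and in order
            print("received", total, "bytes, none missing")

t = threading.Thread(target=server)
t.start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"x" * 1_000_000)     # TCP splits this into packets and
                                      # transparently resends any that drop
t.join()
```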

UDP, on the other hand, is an unreliable datagram protocol, essentially the underlying IP protocol with a thin wrapper around it. If a packet gets dropped, it's simply gone. But if a packet gets through, there is no delay, because nothing waits for a lost packet before delivering the next one to the waiting program. UDP is the foundation for every network application that needs to respond fast. When you speak a word in Skype or fire your gun in CounterStrike, that message is sent using UDP.
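A UDP sender, by contrast, fires datagrams and moves on, as in this small sketch (again with a placeholder local address; on a loopback interface nothing will actually be lost, but the point is that a lost datagram would simply never arrive and nothing would retransmit it):

```python
# Minimal sketch of UDP's fire-and-forget behavior. Each sendto() is one
# datagram; if the network drops it, nothing is retransmitted and the
# receiver simply never sees it. The address is a placeholder.
import socket

ADDR = ("127.0.0.1", 9001)

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(ADDR)
receiver.settimeout(0.5)             # don't wait forever for a lost datagram

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    sender.sendto(f"shot {i}".encode(), ADDR)   # no handshake, no retransmit

while True:
    try:
        data, _ = receiver.recvfrom(1500)
        print("got", data.decode())
    except socket.timeout:
        break                        # a lost datagram just never shows up
```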

So what happens if a network link gets full? At first, the router holds onto some packets, storing them in buffers until they can be sent. But once the buffer fills up, the router deliberately drops packets. TCP knows that routers do this, so it responds to packet loss by reducing its sending rate. Since most Internet traffic is TCP, the individual TCP flows effectively cooperate, ensuring that everyone can still send through the congested link.
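This buffer-until-full-then-drop behavior is often called a drop-tail queue. A toy model, with an illustrative buffer size rather than any particular router's, looks like this:

```python
# Toy model of a drop-tail router queue: packets wait in a finite buffer,
# and once the buffer is full, new arrivals are simply dropped. TCP senders
# treat those drops as the signal to slow down.
from collections import deque

class DropTailQueue:
    def __init__(self, capacity_packets):
        self.buf = deque()
        self.capacity = capacity_packets

    def enqueue(self, packet):
        if len(self.buf) >= self.capacity:
            return False              # buffer full: packet is dropped
        self.buf.append(packet)
        return True                   # packet waits its turn on the link

    def dequeue(self):
        return self.buf.popleft() if self.buf else None

q = DropTailQueue(capacity_packets=100)          # illustrative size
dropped = sum(not q.enqueue(i) for i in range(150))
print(dropped, "of 150 packets dropped once the buffer filled")
```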

But in the meantime, the buffer delays the packets. If the buffer can hold a tenth of a second worth of traffic, all traffic is delayed by that tenth of a second. TCP needs some buffering to work properly, but it doesn't need much: home gateways need to buffer only about a tenth of a second worth of traffic, while core Internet routers need only about a hundredth of a second.
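The arithmetic behind those figures is simple: the worst-case delay a full buffer adds is its size divided by the link speed. A quick back-of-the-envelope calculation, using an illustrative 10 Mbit/s home uplink rather than any measured ISP figure:

```python
# Worst-case queueing delay added by a full buffer = buffer size / link rate.
# The link speed below is illustrative, not a measurement of any ISP.
def buffer_delay_seconds(buffer_bytes, link_bits_per_second):
    return (buffer_bytes * 8) / link_bits_per_second

uplink = 10e6                                    # 10 Mbit/s home uplink
print(buffer_delay_seconds(125_000, uplink))     # ~0.1 s: a reasonable buffer
print(buffer_delay_seconds(1_250_000, uplink))   # ~1.0 s: an overbuffered modem
```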

So what would happen if ISPs attempted to minimize packet loss? They would simply add more buffering, often a second or more. And the result would be a disaster, a disaster we have experienced before: an overbuffered network, when congested, slows all traffic down. TCP keeps working (although your web surfing will feel "slow"), while realtime UDP games and communication simply become unusable. Can you imagine your Skype call with a one-second delay, or a one-second delay on every shot in Call of Duty?

We've seen this in practice. Several years ago, Comcast had a problem: A few users in a neighborhood might run BitTorrent, monopolizing the upload bandwidth. At the same time, everyone's cable modems had extra large buffers, able to store several seconds worth of traffic. So when a few users fired up their uploads, the neighborhood link would saturate and everyone in the neighborhood would experience multi-second delays due to overbuffering.

These delays hurt web browsing but did not affect overall download speed or speed-test sites. Yet those who wanted to make Vonage calls or play games found the network unusable due to the added delay, and many falsely thought Comcast was deliberately targeting Vonage, since Comcast also offers local telephone service. So Comcast added a device to terminate BitTorrent connections, and the complaints about Comcast sabotaging Vonage evaporated.

Since then, Comcast has switched to a method where heavy uploaders can delay each other but don't affect others in the neighborhood, but the underlying problem of overbuffered cable modems remains: it's why a cloud backup service running at home can make the rest of your home network feel "slow", even though the network has enough bandwidth.

If the FCC persists in treating packet loss as a performance metric, the result will simply be to repeat this fiasco across the entire Internet. Every time a link encounters congestion, rather than the TCP flows simply backing off, the excessive buffering would first cause Skype to lag, CounterStrike shots to miss, and a host of other UDP protocols to fail long before the TCP flows get the message to slow down and let more traffic through.

So what can be done? The FCC must immediately remove the language about loss as a metric: the Internet is supposed to lose packets. Additionally, the negative effects of loss are already captured in bandwidth and jitter metrics: since TCP responds to loss by reducing its sending speed, a congested network offers less bandwidth, and the buffering that accompanies congestion shows up as jitter. Yet this is just a starting point.
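Jitter, for instance, can be estimated directly from how much packet inter-arrival times vary. The sketch below is a simplified illustration with made-up timestamps, not the estimator any particular measurement tool uses:

```python
# Simplified illustration: estimate jitter from the variation in packet
# inter-arrival times. The arrival timestamps are made up for the example;
# a real measurement tool would capture them from live traffic.
def mean_jitter_ms(arrival_times_ms):
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return sum(abs(g - mean_gap) for g in gaps) / len(gaps)

steady   = [0, 20, 40, 60, 80, 100]    # a packet every 20 ms: no jitter
buffered = [0, 20, 45, 95, 170, 260]   # a growing queue delays each packet more
print(mean_jitter_ms(steady))          # 0.0
print(mean_jitter_ms(buffered))        # substantially larger
```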

Because I now worry: If there is one such "kill the net" detail present in the FCC's proposed rules, what other goblins may lurk in dark places?