If your internet connection leaves you constantly waiting for streams to buffer, you probably love nothing more than to bitch and gripe about bandwidth. Easy there, tiger, because the culprit could be a much more fundamental problem that everyone seems to have forgotten about: latency.
Fortunately, Ars Technica has taken a good, long look at the subject in a wonderful feature about the forgotten and often troublesome network property. First things first, though: let’s remind ourselves what latency is, courtesy of Ars:
Just as we acknowledge what someone is telling us in conversation (with the occasional nod of the head, “uh huh,” “go on,” and similar utterances), most Internet protocols have a similar system of acknowledgement. They don’t send a continuous never-ending stream of bytes. Instead, they send a series of discrete packets. When you download a big file from a Web server, for example, the server doesn’t simply barrage you with an unending stream of bytes as fast as it can. Instead, it sends a packet of perhaps a few thousand bytes at a time, then waits to hear back that they were received correctly. It doesn’t send the next packet until it has received this acknowledgement. Because of this two-way communication, latency can have a significant impact on a connection’s throughput. All the bandwidth in the world doesn’t help you if you’re not actually sending any data because you’re still waiting to hear back if the last bit of data you sent has arrived.
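That back-and-forth puts a hard ceiling on throughput: the sender can only have so much data in flight before it has to stop and wait for an acknowledgement, so it can never move more than one window of data per round trip. A minimal sketch of that arithmetic (the window size and round-trip times below are hypothetical, just to illustrate the effect):

```python
def max_throughput(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on throughput (bytes/sec) when at most `window_bytes`
    can be in flight before the sender must wait for an acknowledgement."""
    return window_bytes / rtt_seconds

# A 64 KB window over a snappy 20 ms round trip:
fast = max_throughput(64 * 1024, 0.020)  # ~3.3 MB/s
# The same window over a sluggish 500 ms round trip:
slow = max_throughput(64 * 1024, 0.500)  # ~131 KB/s
```

Same window, same "bandwidth" in principle, but the slow round trip caps the transfer at a fraction of the speed, which is exactly why more bandwidth alone can't rescue a high-latency link.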
The problem is that there are layers upon layers of latency on the modern internet: from the link length (a function of that irksome real-world geography), through the way data is transmitted, to your own dusty router. All told, they can introduce whole seconds of delay if the network is conspiring against you. An extra second to load a web page sucks; a second of stutter on YouTube is wildly annoying; but a second of latency on Skype makes the whole thing practically useless. And none of that has anything to do with bandwidth whatsoever.