Jim Gettys has been demonstrating the seriousness of the “buffer bloat” problem; see Home Router Puzzle Piece Two — Fun with wireless, and The criminal mastermind: bufferbloat! This is mostly just to draw your attention to Jim’s work, because you can probably improve your own Internet experience by acting on his advice; but I have a related gripe of my own.

As Jim points out, old guys like him and me can remember a time when the Internet used slower connections but felt faster. The good news is that it can probably feel faster again, if certain ISPs and network-hardware engineers stop the bufferbloat abuse.

There’s another overly-fat-pipe symptom that’s been increasingly in my face. I routinely copy big files here and there around the Internet, most commonly from my laptop to tbray.org. There are lots of ways to do this, but I mostly use the good old-fashioned scp utility; one of the nice things about scp is that it gives you a real-time readout of how fast the data is flowing and the expected time to completion.

But these days, the readout is often useless. Here’s an example where I’m copying a two-megabyte file, an operation that takes several seconds. What happens is that scp gets its connection set up, and then more or less instantly displays the following big fat lie until the operation completes.

xvr27.pdf                                     100% 2083KB 694.4KB/s   00:03

No, I am not getting 700KB/sec upstream from my home DSL. What’s happening is that the two megabytes of data drop more or less instantly into some piece of fat but not terribly fast piping, and scp thinks they’ve been sent.
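The effect is easy to reproduce in miniature. A minimal sketch (this is not scp itself, just a toy demonstrating the same mechanism): a socket write returns as soon as the kernel has accepted the bytes into its send buffer, long before the other end has read anything, so any throughput number computed at write time measures the local buffer, not the network.

```python
import socket
import threading
import time

# Toy demo: "progress" computed from write-side timing counts bytes
# handed to the kernel, not bytes delivered end-to-end.
a, b = socket.socketpair()
# Make sure the kernel buffer can swallow the whole payload (assumed sizes).
a.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)

PAYLOAD_SIZE = 32 * 1024
received = []

def slow_reader():
    time.sleep(0.5)  # the peer is "slow": it reads nothing for 500 ms
    got = 0
    while got < PAYLOAD_SIZE:
        chunk = b.recv(4096)
        received.append(chunk)
        got += len(chunk)

t = threading.Thread(target=slow_reader)
t.start()

start = time.monotonic()
a.sendall(b"x" * PAYLOAD_SIZE)   # returns once the kernel has the bytes
elapsed = time.monotonic() - start

# sendall() came back long before the reader woke up; a rate computed
# here would look absurdly fast, just like the scp readout above.
print(f"sendall returned in {elapsed * 1000:.1f} ms")
t.join()
a.close()
b.close()
```

Once the payload is bigger than the buffers in the path, the write finally blocks and the measured rate starts converging on reality, which is exactly the movie-copy behavior described below.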

If I send something so big that it can’t fit into the pipe, like a movie, the scp display starts out showing some insanely fast data rate until the pipe fills up; then it drops and drops and drops, and after several minutes starts approximating the actual speed of the Internet connection.

I’ve noticed that over the last couple of years the top end of the pipe has gotten fatter and fatter, and the output from scp less and less useful. Grrr.



From: Zach (Jan 09 2011, at 19:11)

The culprit in your case is almost certainly OS X. I've noticed that exact behavior from OS X for years, while other OSes from behind the same routers to the same destination don't exhibit that pattern.


From: bill (Jan 09 2011, at 20:46)

Actually, I think this has to do with ISPs altering the upload speed dynamically for each connection. After the first few kilobytes uploaded at maximum bandwidth, they throttle the connection to some minimum. Most web users don't notice, and it has the effect of making most web surfing seem "faster". It is, however, evil.


From: james (Jan 09 2011, at 22:33)

The previous comment mentioned ISP throttling. One example is Comcast "PowerBoost," which lets them advertise their service as "(up to) 20Mbps" when the average service speed is less than one third of that, 6Mbps (768KB/s).



From: rns (Jan 09 2011, at 22:41)

So most of us have cascading buffers, which sound good in principle but actually screw up the flow control algorithms in TCP/IP — the networking technology that runs the Internet.




From: Adam (Jan 10 2011, at 06:40)


It's probably normal rate-limiting by your ISP. They allow very fast bursts of data up to your maximum bandwidth, but over longer periods, you are limited to some lower rate. It's not evil. It lets you transfer large files, or quickly buffer enough of a large video to start watching right away. You continue downloading the video, but slowly. All this while maintaining near-instant webmail and blog reading, assuming the page doesn't have ads, trackers and other crap from 15 different domains.
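The burst-then-limit scheme the comment describes is essentially a token bucket with a large burst allowance. Here is a toy simulation (the numbers — a 10 MB burst allowance, a 700 KB/s steady rate, an 8 MB/s line rate — are illustrative, not any ISP's actual parameters): the first seconds run at line rate, and once the bucket drains, throughput settles at the refill rate.

```python
def simulate(total_kb, burst_kb=10_000, steady_kbps=700, line_kbps=8_000):
    """Return the per-second send rates (KB/s) for a transfer of total_kb.

    Token-bucket model: the bucket starts full (the "boost"), refills at
    steady_kbps each second, and each second's send is capped by the
    tokens available and by the raw line rate.
    """
    tokens = burst_kb
    sent, rates = 0, []
    while sent < total_kb:
        tokens = min(burst_kb, tokens + steady_kbps)      # refill
        chunk = min(tokens, line_kbps, total_kb - sent)   # spend
        tokens -= chunk
        sent += chunk
        rates.append(chunk)
    return rates

rates = simulate(30_000)  # a 30 MB upload
# The first second bursts at line rate; by the third second the
# transfer has settled to the steady refill rate.
print(rates[0], rates[2])
```

This also explains why the speed-test sites mentioned in a later comment report the "boosted" number: a short test never outlives the bucket.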


From: Andrew. (Jan 10 2011, at 08:46)

The problem isn't large buffers but the fact that the OS and consumer router put each and every packet into a single FIFO queue. If fair queuing were implemented and each TCP flow (as defined by IP/port pairs) were queued independently, then this problem wouldn't exist. John Nagle's (yes, that John Nagle) comment describes this perfectly https://gettys.wordpress.com/2010/12/03/introducing-the-criminal-mastermind-bufferbloat/#comment-2105
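The FIFO-versus-fair-queuing distinction can be shown with a toy scheduler (a deliberately simplified round-robin over per-flow queues, not a real fair-queuing implementation): behind a single FIFO, one small interactive packet waits out an entire bulk burst, while round-robin service gets it out almost immediately.

```python
from collections import defaultdict, deque

# Arrival order: a 5-packet bulk burst, then one interactive packet.
packets = [("bulk", i) for i in range(5)] + [("ssh", 0)]

# Single FIFO: departure order is just arrival order,
# so the ssh packet leaves behind the whole burst.
fifo_order = [flow for flow, _ in packets]

# Per-flow queues served round-robin (simplified fair queuing).
queues = defaultdict(deque)
for flow, seq in packets:
    queues[flow].append(seq)

rr_order = []
while any(queues.values()):
    for flow in list(queues):       # visit each flow in turn
        if queues[flow]:
            queues[flow].popleft()
            rr_order.append(flow)

print(fifo_order.index("ssh"), rr_order.index("ssh"))
```

With the FIFO the ssh packet departs sixth; under round-robin it departs second, regardless of how deep the bulk flow's queue gets.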


From: PJ (Jan 10 2011, at 11:18)

I agree that it's likely ISP prioritization/bandwidth-limiting effects. They do this so they can advertise faster speeds and so that the various network speed testers on the net show them as 'fast'. They're "fast", however, only for the first 10 MB or so.


From: Craig McClanahan (Jan 17 2011, at 00:00)

In other words, you've discovered that scp is likely to be reporting how fast the local TCP stack is accepting data, instead of measuring end-to-end delivery. That issue has been around for a couple of decades, although I'm sure that the bandwidth issues in your use case are highlighting it more and more. But even on a high speed local network, scp reported transfer rates can be wildly inconsistent.

It seems like the right reaction would be to bitch at the scp folks for attempting to sell a "pipe" dream :-) ... or at least to ask that they accurately label what they are really measuring.

As for me, I *like* fat pipes ... I want to send my upload as quickly as possible and get on with life, and let the network take care of the ultimate delivery.


January 09, 2011


I am an employee of Amazon.com, but the opinions expressed here are my own, and no other party necessarily agrees with them.

A full disclosure of my professional interests is on the author page.