Over the last couple of years, I’ve written a lot here about concurrency and the problems it poses for developers. The proximate cause is that my employer is pushing the many-core CPU envelope harder than anyone. This last week there’s been an outburst of discussion from Nat Torkington, David Heinemeier Hansson, and Phillip Toland. They’re all worth reading, and the problem isn’t going away. But I’m worrying about it less all the time.
Here’s the thing: I’m looking at the class of problems that are best addressed by real actual shared-address-space threads, and it’s looking smaller and smaller and smaller.
That’s the world you have to live in within the operating system or the Web server or the database kernel... but for applications? These days, you’re way better off scaling out, not up. Shared memory? Hell, shared nothing if you can get away with it.
Scaling out isn’t free. You have to pay a per-process memory tax, and worry about saturating the network and (de)serializing the data and all sorts of other stuff. But for now, it seems you still win.
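Here’s a minimal sketch of the shared-nothing pattern in Python (my choice of language; the post doesn’t name one, and all the names here are made up for illustration). Workers own their state, nothing needs a lock, and every message crossing a process boundary gets pickled and unpickled, which is exactly the (de)serialization tax in question.

```python
import multiprocessing as mp

# Assumes a fork-capable POSIX platform (e.g. Linux).
ctx = mp.get_context("fork")

def worker(inbox, outbox):
    # Each worker owns its own state; nothing is shared, so no locks.
    for item in iter(inbox.get, None):   # None is the shutdown sentinel
        outbox.put(item * item)          # toy "work": square the input

def run(nums, nworkers=4):
    # Every put/get pickles and unpickles its payload: the
    # (de)serialization tax you pay for not sharing memory.
    inbox, outbox = ctx.Queue(), ctx.Queue()
    procs = [ctx.Process(target=worker, args=(inbox, outbox))
             for _ in range(nworkers)]
    for p in procs:
        p.start()
    nums = list(nums)
    for n in nums:
        inbox.put(n)
    results = sorted(outbox.get() for _ in range(len(nums)))
    for _ in procs:
        inbox.put(None)                  # one sentinel per worker
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(run(range(10)))
```

Nothing in the worker ever touches another worker’s memory; adding capacity means adding processes (or machines), not adding locks.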
Computers are still going to look more and more like those many-core SPARCs we’re shipping, but at the application level, that should be a red herring.
Which is why there’s a new Erlang book and strange words like “Haskell” and “Hadoop” echo in geeky back-rooms. In an ideal world you have a message-passing layer that figures out how to move ’em around using whatever combination of threads and processes and shared memory and bits-on-wires gets the job done.
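That idea can be sketched in a few lines of Python (again my choice, not the post’s; `pick_layer` and `echo_upper` are hypothetical names). The application talks only to a tiny message-passing layer, and the layer alone decides whether a task is a thread sharing the address space or a separate process talking over pipes:

```python
import queue
import threading
import multiprocessing

def pick_layer(use_processes):
    """Return (Task, Channel) constructors. The application code below
    is identical either way; only this layer knows whether messages move
    within one address space or across process boundaries."""
    if use_processes:
        ctx = multiprocessing.get_context("fork")  # assumes a POSIX platform
        return ctx.Process, ctx.Queue
    return threading.Thread, queue.Queue

def echo_upper(inbox, outbox):
    # The "application": receive one message, send one reply.
    outbox.put(inbox.get().upper())

def run(use_processes):
    Task, Channel = pick_layer(use_processes)
    inbox, outbox = Channel(), Channel()
    t = Task(target=echo_upper, args=(inbox, outbox))
    t.start()
    inbox.put("hello")
    reply = outbox.get()
    t.join()
    return reply

if __name__ == "__main__":
    # Same application code, same answer, from threads and from processes.
    print(run(False), run(True))
```

The point isn’t the toy protocol; it’s that `echo_upper` and `run` never mention threads, processes, or shared memory at all.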
(This also reinforces my opinion that message-queuing systems will inevitably end up at the center of everything serious; but I digress.)
It will doubtless be pointed out that on the client, threads will remain in application programmers’ faces for the foreseeable future. Which I see as another argument for doing everything you can through the Web.