On many-core, wasted cores, threads, processes, transactional memory, and the E-word.
Are Eight Cores Six Too Many? · Sun’s latest silicon, the T2, has eight cores, and a memory/thread architecture that, benchmarks suggest, can keep ’em all pumping. Well, so what? Jeff Atwood, in Choosing Dual or Quad Core, shows that for a whole lot of apps, anything more than two cores seems a waste.
It’s actually not a short-term crisis. There are plenty of Web workloads that naturally run wide rather than hot, whether you’re talking Java EE wrangling a thread pool or Apache in front of a bunch of LAMPware, PHP or Rails or whatever; there’s plenty of work to soak up about as many T2s as we can build, I’d bet. We haven’t had any trouble selling the less-muscular T1.
But I, and a lot of other people, would like to make more code run better on the new, wider, CPUs. As Jeff points out, “Unfortunately, CPU parallelism is inevitable. Clock speed can’t increase forever; the physics don’t work. Mindlessly ramping clock speed to 10 GHz isn’t an option.”
Transactional Magic Bullet? · In related news, it’s now public that the next wave of SPARCs, code-named “Rock”, will have hardware transactional memory.
So, is transactional memory, as in Rock, a magic bullet? [Heh; as I write this, the Wikipedia article on Transactional memory redirects to Software transactional memory. Not for long.] Everyone knows that heavily-threaded apps are often lock-bottlenecked, and transactional memory is a good way to do a whole lot less locking. Well, for the vast majority of programmers, Rock’s TM will be invisible; it operates in a very specific and kinda spooky way down at the instruction-set level. You’re just not gonna be wiring it into your cool new Web 2.0 app. Directly, at least.
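The flavor of the win is easy to sketch even without the hardware. Here’s a little Java illustration (the class and names are mine, nothing to do with Rock’s actual instructions): instead of grabbing a lock, you snapshot, do the work, and try to commit, retrying if somebody interfered. That read-compute-commit-or-retry shape is morally what a hardware transaction’s commit/abort does, just spelled out by hand with a CAS.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: a CAS retry loop as a software stand-in for the
// commit-or-abort behavior of transactional memory. No lock is ever held.
class OptimisticCounter {
    private final AtomicLong value = new AtomicLong();

    // Optimistic style: read a snapshot, compute, then try to commit.
    // If another thread changed the value in between, retry from the top.
    public long increment() {
        while (true) {
            long seen = value.get();       // "begin transaction": take a snapshot
            long next = seen + 1;          // do the work against the snapshot
            if (value.compareAndSet(seen, next)) {
                return next;               // "commit" succeeded: nobody interfered
            }
            // "abort": somebody else committed first; loop and retry
        }
    }

    public long get() { return value.get(); }
}
```

With a lock, every caller serializes whether there’s contention or not; with the optimistic version you only pay when two threads actually collide, which is the bet TM hardware makes at a much finer grain.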
Where this kind of thing will be useful is down in the guts of core software infrastructure, stuff that gets as close to the metal as it wants. For example, the Java VM.
But Java is still thread-centric, and for really getting the most out of systems that are increasingly going to be many clustered boxes each with many fast-switching threads, you need to abstract away from that; to quote Sam Ruby, you need a Virtual Machine that is “designed from the ground up assuming that objects typically are immutable and serializable”.
The E-Word · Erlang disruption. Erlang influence. Erlang (and Erlang and Erlang) database substrate. Erlang for C#. Erlang thoughts. Erlang for Web 2.0. A first Erlang program. Erlang influence. Erlang distributed DBMS. Erlang message passing. Erlang (and Erlang and Erlang) for Jabber and Atom and IPC.
I smell a lot of interest but no consensus. I personally don’t think we’ll all be writing Erlang next year; pardon me for being old-fashioned, but I think that the human mind naturally thinks of solving problems along the lines of “First you do this, then you do that”, and thinks that Variables are naturally, you know, variable, and has grown comfortable with living in a world of classes and objects and methods.
My bet is that someone figures out how to apply Erlang thinking in a mainstream coding context and starts a landslide, because everything just starts running faster and reasonably cool, too. Then more cores are unambiguously better, if you can swing the memory bandwidth. And Transactional Memory? Seems like just the thing, at the core of such an infrastructure.
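What might “Erlang thinking in a mainstream coding context” look like? A minimal sketch in plain old Java (all the names here are mine, not any real framework): immutable messages, no shared mutable state, and a “process” that’s just a thread draining a mailbox, exactly the shape Sam Ruby’s immutable-and-serializable objects want.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of an Erlang-style "process" in Java: the only way in is an
// immutable message dropped into the mailbox; all other state is local.
class Squarer implements Runnable {
    // Immutable message: safe to hand across threads without any locking.
    record Msg(int n, BlockingQueue<Integer> replyTo) {}

    private final BlockingQueue<Msg> mailbox = new ArrayBlockingQueue<>(16);

    // "!" in Erlang: fire a message at the process and move on.
    public void send(Msg m) { mailbox.offer(m); }

    // "receive" loop: take a message, reply, touch nothing shared.
    @Override public void run() {
        try {
            while (true) {
                Msg m = mailbox.take();
                m.replyTo().put(m.n() * m.n());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // process exits on interrupt
        }
    }
}
```

Because nothing here is shared and mutable, you can run as many of these as you have cores, or move them to another box behind a serialized queue, without rethinking the code; that’s the landslide scenario.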