I’ve been trying to avoid editorializing in this “Java-of-concurrency” series. But since people are grumbling about my biases, here are a few notes about how I see things at this (early, I hope) stage in the process.

[This is part of the Concur.next series.]

Why Me? · I think I’m well-positioned to do this research and write-up. First of all, I’ve written a bunch of highly concurrent code over the years (Web crawlers, in-memory data-viz database, one specialized network-management database kernel). So I have a bitter personal appreciation for how hard it is to get this stuff right and debug it, and consequently, how unreasonably long it takes to ship robust working code.

Second, I’m not a language or platform designer, so I don’t really have a horse in this race. Granted, my employer does ship a very nice line of highly-parallel servers, but if you think that’s just a Sun problem, you’re nuts.

Also, I have a nice little sample project around, the Wide Finder. More on that later.

History · I’m proficient in C, Java, and Perl, can get along in Ruby, have written snippets of working code in Python and Prolog and a couple of Lisps and Erlang.

As regards concurrency, I will freely admit, as I start to dig into this, the following biases:

  • In favor of problems that involve persistent stores and network communication.

  • In favor of the *n*x and Open-Source ecosystems, accompanied by a visceral aversion to .NET.

  • In favor of 80/20 point solutions (see TPSM for background), accompanied by fear of any approach with too many levels.

  • In favor of Test-Driven Development.

  • Against abstractions that try to hide too much.

  • In favor of object-orientation; history shows that programmers are on average capable of understanding it sufficiently well to produce acceptably-good software in acceptably-short timeframes.

  • Having had essentially no experience in conventional “HPC”, which is to say I’ve never been near MPI or any of its ilk. I don’t see a strong connection there to the kind of general-purpose business apps I want to run faster on general-purpose multicore business computers, but I could be wrong on that too.

Erlang · Also, in the context of the HECS languages, I’m obviously biased in favor of Erlang/OTP if only because it’s the one that I’ve actually used. Here are some of the things I like about it:

  • It’s a small language; there’s not much to learn.

  • To borrow some Rails-speak, it’s opinionated: Erlang thinks there’s exactly one way to build concurrent software, and that’s with processes and messages and no global data. If your problem can work that way, you’re in good shape with it.

  • I personally find the process/messaging model of the world fits neatly into my mind and provides a useful basis to think about solving problems. I’m not assuming this will be true for everyone; see below.

  • The implementation is remarkably solid and seems to bend gracefully under pressure rather than breaking. To the extent that there are metaphors or abstractions, the implementation seems to behave exactly in the way that they would encourage you to expect.

On the other hand, Erlang is hardly ready for adoption by the broad mainstream of developers. Out of the box, its file-handling is pathetic and its string processing facilities are putrid. Also, its syntax is irritatingly irregular; the use of comma, semicolon, and period as line-enders makes refactoring your code subject to almost-inevitable syntax errors.

And finally, I’m not quite ready to toss our decades of experience with object-orientation on the scrap heap.

Switching back to the positive, there’s the really big factor: Erlang has been shown to work on highly concurrent systems. To start with, in lots of pieces of telecom equipment. It’s in production at Facebook. I’m impressed.

Recent News · If you care about this stuff, and you haven’t already done so, find 56 free minutes sometime in the next day or so and listen to Rich Hickey’s presentation of Persistent Data Structures and Managed References; you won’t regret a minute of it. Rich walks through, in detail, how you can co-ordinate safe lock-free reasonably-simple access to shared data in a highly-concurrent situation.

Historically, my simplest-thing-that-could-possibly-work thinking has led me to “Don’t share data. Send messages.” But maybe sharing it is not so bad.

While you’re paying attention to Rick, check out his lengthy contribution to this conversation. I disagree with his assertion that I’m “focusing on same-(OS)-process concurrency”; I’m not religious on that subject at all. But Rich is definitely worth paying attention to.

I note in passing that one factor which has historically helped push certain new technologies into software’s mainstream is having an eloquent and appealing chief evangelist.

Functional Programming · Its advocates make two claims. First, that it makes truly concurrent computing tractable. I find this claim convincing.

Second, that it’s a better way to program because code which implements pure side-effect-free functions is easier to reason about and understand and re-use. Well maybe, but I don’t care. I don’t think we’re facing a huge urgent crisis in terms of being able to reason about the code we write. In fact, with the advent of TDD and higher-level languages and opinionated frameworks, we’re doing better at constructing pleasing applications in acceptable time-frames than at any other time in my lengthy sojourn in this profession.

But I think we are facing a crisis in understanding how to make our everyday code take reasonable advantage of the kinds of computers we’re now starting to build. So... Dear FP: You may have all sorts of wonderful and endearing characteristics. But I only want you for your concurrency hotness.

And when I run across things like, for example, the sideways dance Haskell goes through to generate random numbers, my eyes roll. On the other hand, I guess it’s way cool that the language lets me play with infinite lists. I guess.

Language Learning · I’m not going to say anything more about Clojure or Scala or Haskell until my understanding matures some. There’s lots of online material, and there’s the Wide Finder problem, which may not be perfect but I understand its trade-offs. There’s Scala code there now, and I guess I’ll just have to grind out some Clojure and Haskell as part of the education process.

The Berkeley Study · An extremely-rude commenter linked to a 2006 UC Berkeley technical report, The Landscape of Parallel Computing Research: A View from Berkeley (PDF), which I’d read but then filed away somewhere. A lot of it is of questionable relevance (in particular I’m unconvinced by the selection of benchmarks) but I was struck by section 5.1, “Programming model efforts inspired by psychological research”. The idea is that what we’re trying to do here is equip actual real-world programmers to solve actual real-world problems. So, as we look at our laundry list of tools and strategies, it’d be nice to, you know, actually measure how good people are at using them.

So I’ve got an action item to track down a few of their references and anything else in the space.

End Game · I don’t know at the moment. I’m envisioning a fully-worked out write-up, with lots of links, on each of the Laundry List top-level items. Then maybe a big honking grid with all the actual languages and platforms classified according to which laundry-list items they implement. Perhaps these last two items belong on a wiki.

We’ll see where this goes.



Contributions


From: Mark Volkmann (Oct 05 2009, at 03:45)

My biggest issue with the actor model is the lack of ability to coordinate the activities of multiple actors. Isn't it true that actors can retain state? If so then it seems there is a need for the ability to send messages to multiple actors within a transaction so if one fails to process a message, the others can rollback changes to their state.

[link]

From: Pete Kirkham (Oct 05 2009, at 05:40)

I quite like visualising the generator of a sequence of random numbers as a generator of a sequence of numbers which are random, rather than as a procedure with hidden state. Having all structures immutable by default is a massive win if you want destructuring pattern matching.

As soon as you have by-value message passing you don't need pure FP for concurrency as you've already removed the possibility of shared mutable state between processes.

Stackless Python is based on CSP and doesn't share state between processes (I'm not sure whether it copies or not), but still has mutation of state within a process.

Having all structures immutable by default so that in-process messages are immutable is a rather restrictive optimisation against copying; if you want to be able to distribute the system as well you will design for small message size so copying shouldn't be a burden.

[link]

From: projectshave (Oct 05 2009, at 07:04)

I think you misunderstand FP's claim that it is "easier to reason about". Removing side-effects frees the compiler to make bolder transformations of your code. This means you can write very high-level concurrent code and the compiler can more easily deduce what kind of implementation would be appropriate. It could also do powerful static checking of your code at compile-time.

In Fortran and C, by contrast, it is very difficult to automatically parallelize the code because you never know where a variable might point. Same goes for any language that is so casual with side-effects. And if you stick in some OpenMP pragmas, the compiler will dutifully do what you say even if it's completely wrong.

You skipped over dataflow too quickly. A certain sinister investment bank earns gazillions of dollars using a homegrown dataflow language in their trading systems.

Anyway, I've learned long ago to blindly follow anything Guy Steele does. He's doing Fortress now, so that must be the right answer.

[link]

From: Ashwin (Oct 05 2009, at 07:53)

Tim - you may find this useful - the Psychology of Programming Languages Interest Group, http://ppig.org/ .

[link]

From: Andrew (Oct 05 2009, at 09:41)

Processes and message passing are the one guaranteed way I know of for normal human programmers, of which I'm one (I'm also one of those rare people who think they are an average driver), to build reliable concurrent software.

The message passing approach turns us into protocol designers and implementers, which is something we have proven to be efficiently capable of. A great example of message passing is Google's map/reduce implementation. We also have existing tools and processes like UML sequence and state diagrams that can be applied to the problem.

Message passing also has the useful side-effect that the problem then becomes very decomposable such that you can apply multiple engineers to the solution as opposed to relying on one person keeping it all in their head. Building in the capability for time-to-market vs number-of-engineers trade-offs into a system is exactly the type of flexibility managers look to their technical architects to provide.

The downside of message passing is that the state machines can get complex and the overhead of the message passing can become significant. For the majority of application layer problems I think these are worthwhile costs for the robustness message passing provides but as always there will be exceptions.

Where I tend to disagree with people is that the preferred solution to concurrency has to be the one and only one available in the programming language, e.g. Erlang. I guess the presumption is that if you expose raw semaphores in the language a developer will use them to implement something more error prone. For me this is easily solvable with architecture choices and coding standards.

C++ garners many complaints that it is too powerful (for want of a better word), and I agree, but look at Google's open source C++ coding standards. They unambiguously nail C++ down to a less error prone and more maintainable version of the language (no operator overloading, no RTTI, parameter references are always const etc. etc.) yet keep much of the useful OO functionality intact. I see no reason why message passing could not be enforced the same way while still keeping the productivity that comes with modern languages and their toolchains.

For me what a general purpose language should do is provide useful capability that can be used to build a solution. If you look at LINQ in C# for example, what they did was add a bunch of capability like lambdas, type inference etc. that then came together to form LINQ as a system. For message passing the big problems to solve are around the state machines, which are essentially exercises in non-local flow control on a wide scale. This is most definitely not a solved problem; just look at the continuing debate around whether exceptions are truly a good thing or not (and if you think that debate is over, go read Google's C++ coding standards). If support for state machines were built into the language and toolchain, a big barrier to using message passing for concurrency would melt away.

[link]

From: Kevin Scaldeferri (Oct 05 2009, at 09:55)

> Second, that it’s a better way to program because code which implements pure side-effect-free functions is easier to reason about and understand and re-use. Well maybe, but I don’t care. I don’t think we’re facing a huge urgent crisis in terms of being able to reason about the code we write

I'm a little confused. Isn't one of your claims precisely that traditional multi-threaded code is too difficult to reason about?

[link]

From: dantakk (Oct 05 2009, at 12:14)

The D (2.0) language's work on integrating "pure" functional and imperative programming is enlightening. They put a lot of effort into it and it could provide a model for similar languages (ie Java) to evolve.

http://www.ddj.com/architect/217801225?pgno=5

[link]

From: Gavin Brelstaff (Oct 06 2009, at 00:54)

Tim wrote:

""I was struck by section 5.1, “Programming model efforts inspired by psychological research”.""

I think it is also a question of perception - the programmer needs to be able to "see" the beast in order to interact with it in a timely manner.

This is traditionally the territory of human factors.

Perhaps to understand in parallel you need to be able to see in parallel and thus the notion of pop-out investigated in visual research might be instrumental.

http://en.wikipedia.org/wiki/Visual_search

I am currently investigating (part-time) ways that our parallel visual search may assist BOTH

(1) doing Sudoku grids and

(2) writing Erlang programs that solve them in a way that we can easily comprehend.

Early days ...

[link]

From: Tony Fisk (Oct 07 2009, at 06:27)

Call me naive, but in what way does the problem differ from that of a server farm?

(ie a queue of requests/instructions to be handled by whichever server comes by next.)

I suppose there's a question of prioritisation:

a=b;

c=a+1;

For that to work properly, you'd better make sure that 'a' is set to 'b' before 'c' is calculated.

[link]

October 04, 2009
· Technology (77 fragments)
· · Concurrency (70 more)
