What · Technology · Concurrency

Where’s the Apple M2? · DPReview just published “Apple still hasn’t made a truly ‘Pro’ M1 Mac – so what’s the holdup?” Following on the good performance and awesome power efficiency of the Apple M1, there’s a hungry background rumble in Mac-land along the lines of “Since the M1 is an entry-level chip, the next CPU is gonna blow everyone’s mind!” But it’s been eight months since the M1 shipped and we haven’t heard from Apple. I have a good guess what’s going on: It’s proving really hard to make a CPU (or SoC) that’s perceptibly faster than the M1. Here’s why ...
[8 comments]  
Topfew+Amdahl.next · I’m in fast-follow mode here, with more Topfew reportage. Previous chapters (reverse chrono order) here, here, and here. Fortunately I’m not going to need 3500 words this time, but you probably need to have read the most recent chapter for this to make sense. Tl;dr: It’s a whole lot faster now, mostly due to work from Simon Fell. My feeling now is that the code is up against the limits and I’d be surprised if any implementation were noticeably faster. Not saying it won’t happen, just that I’d be surprised. With a retake on the Amdahl’s-law graphics that will please concurrency geeks ...
 
Topfew and Amdahl · On and off this past year, I’ve been fooling around with a program called Topfew (GitHub link), blogging about it in Topfew fun and More Topfew Fun. I’ve just finished adding a few nifty features and making it much faster; I’m here today first to say what’s new, and then to think out loud about concurrent data processing, Go vs Rust, and Amdahl’s Law, of which I have a really nice graphical representation. Apologies because this is kind of long, but I suspect that most people who are interested in either are interested in both ...
[6 comments]  
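For anyone who wants the formula behind the Amdahl’s-Law graphics mentioned in the Topfew entries above, here is the standard textbook statement, nothing specific to Topfew’s measurements: if a fraction p of a job parallelizes perfectly across N cores and the remaining 1 − p stays serial, the best achievable speedup is

\[ S(N) = \frac{1}{(1 - p) + p/N} \]

With p = 0.95, for example, 16 cores buy only about a 9.1× speedup, and no number of cores gets past 1/0.05 = 20×; the serial slice sets the ceiling, which is the shape those graphics illustrate.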
More Topfew Fun · Back in May I wrote a little command-line utility called Topfew (GitHub). It was fun to write, and faster than the shell incantation it replaced. Dirkjan Ochtman dropped in a comment noting that he’d written Topfew along the same lines but in Rust (GitHub) and that it was 2.86 times faster; the README there now says that with a few optimizations it’s 6.7x faster. I found this puzzling and annoying so I did some optimization too, encountering surprises along the way. You already know whether you’re the kind of person who wants to read the rest of this ...
[7 comments]  
Topfew Fun · This was a long weekend in Canada; since I’m unemployed and have no workaday cares, I should have plenty of time to do family stuff. And I did. But I also got interested in a small programming problem and, over the course of the weekend, built a tiny tool called topfew. It does a thing you can already do, only faster, which is what I wanted. But I remain puzzled ...
[8 comments]  
Distributed Hardness · The other day this, from Mathias Verraes, got thousands of retweets ...
[1 comment]  
Functional Programming Wisdom · I don’t often dedicate a blog entry to just a link, but this one is important. Important, that is, if you’re a computer programmer; in particular a programmer who needs to make code run faster on existing real-world hardware. Which is a minority of a minority, since it excludes most Webfolk, whose servers are fast enough and whose clients run 90% idle. But that minority really needs to be thinking about Functional Programming, and if you’re not 100% sure you know what that means, you should drop everything and go read “Uncle Bob” Martin’s Functional Programming Basics ...
[11 comments]  
WF2: That’s All, Folks · [This is part of the Wide Finder 2 series.] This should be the final entry, after a couple of years of silence. The results can be read here, for as long as that address keeps working. I’m glad I launched that project, and there is follow-on news; taking effect today in fact ...
[1 comment]  
Late Summer Tech Tab Sweep · Some of these puppies have been keeping a browser tab open since April. No theme; ranging on the geekiness scale from extreme to mostly-sociology ...
[2 comments]  
Concur.next — Hard-Core Clojure · Here’s real news: Alex Osborne of the National Library of Australia, also known as @atosborne, whom I first met on #clojure, took the Wide Finder bit between his teeth and has posted a remarkable implementation story: Widefinder 2 with Clojure. Um, 8m4.663s! If you care about any aspect of this stuff you really ought to go read it now. Grab yourself a coffee or whatever first; it’s not short ...
[1 comment]  
Concur.next · Are there any computer programs that you wish were faster? Time was, you could solve that problem just by waiting; next year’s system would run them faster. No longer; next year’s system will do more computing all right, but by giving you more CPUs, running at this year’s speed, to work with. So the only way to make your program faster is to work with more CPUs. Bad news: this is hard. Good news: we have some really promising technologies to help make it less hard. Bad news: none of them are mainstream. But I’m betting that will change ...
[29 comments]  
Concur.next & WF2 — Tuning Concurrent Clojure · I’ve been working on, and writing about, running Clojure Wide Finder code. But I was never satisfied with the absolute performance numbers. This is a write-up in some detail as to how I made the code faster and also slower, including lessons that might be useful to those working on Clojure specifically, concurrency more generally, and with some interesting data on Java garbage collection and JDK7 ...
[16 comments]  
Concur.next — Eleven Theses on Clojure · I’ve been banging away on Clojure for a few days now, and while it would obviously take months of study and grinding through a big serious real-world software project to become authoritative, I think that what I’ve learned is useful enough to share ...
[27 comments]  
Fortress · Since I’m spelunking around the new-languages caverns these days, I really ought to mention the long-ongoing and very interesting Fortress, brain-child of our own Guy Steele, who knows one or two things about designing languages ...
[5 comments]  
Concur.next — Idiomatic Clojure · I’m starting to wind down my Clojure research, but I’m feeling a little guilty about having exposed people to my klunky Lisp-newbie code, perhaps giving a false impression of how the language feels. So I’d like to show you what it looks like when it’s created by someone who’s actually part of the tribe and thinks in it more natively than I probably ever will ...
[12 comments]  
Concur.next — Tab Sweep · Being a basket of concurrency-related morsels too short to stand alone and too long to tweet ...
[4 comments]  
Concur.next — No Free Lunch · In which the actual costs of running concurrently are examined, and seem shockingly high ...
[12 comments]  
Concur.next — More Clojure I/O · I recently wrote up some Clojure-based Wide Finder work, in Parallel I/O and References. Driven mostly by ideas from commenters, I did some refactoring and knob-spinning. The results are interim at best, and on US-Thanksgiving eve almost nobody’s looking, but it’s good to get this stuff on the record ...
[4 comments]  
Concur.next — References · These, “refs” for short, are one of the three tools offered by Clojure to make concurrency practical and manageable. Herewith a walk-through of code that uses them to accomplish a simple task in a highly concurrent fashion ...
[21 comments]  
Concur.next — Parallel I/O · Conclusion first: It turns out that Clojure’s concurrency primitives allow you, with a very moderate amount of uncomplicated code, to take advantage of parallel hardware and outperform really fast software when it doesn’t take such advantage ...
[9 comments]  
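The Clojure code itself isn’t reproduced in this index, but as a hedged illustration of the general shape that entry is claiming (fan the input out to one worker per core, give each worker its own hash table, merge once at the end), here is a hypothetical Go sketch of the same divide-and-merge pattern; Go rather than Clojure, since Go is the language the Topfew entries above use. In practice you would hand each worker a large chunk of the file rather than individual lines, because per-line channel traffic becomes its own bottleneck.

package main

import (
	"bufio"
	"fmt"
	"os"
	"runtime"
	"strings"
	"sync"
)

func main() {
	workers := runtime.NumCPU()
	lines := make(chan string, 4096)
	partials := make(chan map[string]int, workers)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counts := make(map[string]int)
			for line := range lines {
				// Stand-in for the interesting per-record work:
				// tally the first field of each line.
				if f := strings.Fields(line); len(f) > 0 {
					counts[f[0]]++
				}
			}
			partials <- counts
		}()
	}

	// Single reader feeding the workers; this serial part is where
	// Amdahl's Law starts to bite as the core count grows.
	in := bufio.NewScanner(os.Stdin)
	for in.Scan() {
		lines <- in.Text()
	}
	close(lines)
	wg.Wait()
	close(partials)

	// Merge the per-worker tables; no locks were needed on the hot path.
	total := make(map[string]int)
	for m := range partials {
		for k, v := range m {
			total[k] += v
		}
	}
	fmt.Printf("%d distinct keys counted by %d workers\n", len(total), workers)
}

The serial reader and the final merge are exactly the parts that don’t parallelize, which ties back to the Amdahl’s-Law discussion in the Topfew entries above.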
Clojure N00b Tips · Clojure is the new hotness among people who think the JVM is an interesting platform for post-Java languages, and for people who think there’s still life in that ol’ Lisp beast, and for people who worry about concurrency and state in the context of the multicore future. Over the last few days I’ve been severely bipolar about Clojure, swinging from “way cool!” to “am I really that stupid?” Herewith some getting-started tips for newbies like me ...
[7 comments]  
Tail Call Amputation · This is perhaps a slight digression; just an extended expression of pleasure about Clojure’s recur statement. It neatly squashes a bee I’ve had in my bonnet for some years now; if it’s wrong to loathe “tail recursion optimization” then I don’t want to be right ...
[25 comments]  
Concur.next — Messaging · The more I look at Clojure, the more I think it’s a heroic attempt to Do The Right Thing, in fact All The Right Things, as we move toward the lots-of-not-particularly-fast-cores future. I’m still working my head around Clojure’s concurrency primitives. We come to understand the things we don’t by contrast with the things we do; so I’m finding contrast between the Clojure and Erlang approaches to messaging instructive ...
[28 comments]  
Concur.next — C vs. P · There are ripples spreading across the concurrency pond, some provoked by this series, and there’s this one issue that keeps coming up: “Concurrency” vs. “Parallelism”. It is asserted that they’re not the same thing, and that this is important ...
[12 comments]  
Concur.next — My Take · I’ve been trying to avoid editorializing in this “Java-of-concurrency” series. But since people are grumbling about my biases, here are a few notes about how I see things at this (early, I hope) stage in the process ...
[9 comments]  
Concur.next — Crosstalk · This subject seems to have hit a nerve, and there’s been outstanding feedback in the comments. Some of it makes a good case for changes in the series articles, and some of it just needs more attention than I think it would get down in the comment section ...
[8 comments]  
Concur.next — The Laundry List · There are a lot of ingredients that might or might not go into the winning formula that brings concurrent programming to the mainstream. This is a very brief run-through of as many as I can think of ...
[34 comments]  
Concur.next — Java · This series argues that there’s an opportunity for some technology to become “The Java of concurrent programming”. For background then, I’ll try to capture the factors that led to Java’s success as the first mainstream object-oriented platform for application developers ...
[10 comments]  
Meat-Grinder! · It’s days like these that make it fun working for Sun. The new server’s official name is the T5440; they call it a “mid-range” box, but to me it looks like a monster; count the numbers for cores, threads, RAM, and so on. It’s astounding what you can fit into a 4U box these days ...
[2 comments]  
Tab Sweep — Technology · I’d kind of gotten out of the habit of doing tab sweeps, largely because my Twitter feed is such a seductive place to drop interesting links. But as of now there are around 30 tabs open on my browser, each representing something I thought was important enough to think about and maybe write about. Some are over a month old. Some of them have been well-covered elsewhere. All I assert is that after I read each one of these, I didn’t want to hit command-W to make that window go away. Unifying theme? Surely you jest ...
[1 comment]  
WF2: Midsummer Update · [This is part of the Wide Finder 2 series.] We’re a few weeks in now, so I should provide an update. Those who are really interested might want to join the Wide Finder group ...
[2 comments]  
WF2: Early Results · [This is part of the Wide Finder 2 series.] The first serious results are in, and they’re eye-opening. The naive Ruby approach, remember, burned some 25 hours. There are now four other results posted, with elapsed times of 8, 9, 15, and 17 minutes. The write-ups are already full of surprises, and I’m expecting more.
[1 comment]  
WF2: Start Your Engines! · [This is part of the Wide Finder 2 series.] I have now done the first “official” run of the naive Ruby implementation of the benchmark. There is some discussion of the code here. The benchmark is described, and the naive Ruby code provided, here. I’ve started a results page here. There are already eleven other people with accounts on the Wide Finder machine, and I know there’ve been results that are hugely better than this first cut. Read on for a couple of notes on this first run ...
[9 comments]  
WF2: People At Work · [This is part of the Wide Finder 2 series.] I’m happy to report that I’ve given out a bunch of accounts on the Wide Finder 2 machine. I’ll aggregate links to others’ work in this entry ...
[3 comments]  
WF2: First Pathetic Results · [This is part of the Wide Finder 2 series.] I made a couple of little changes to the Strawman Benchmark and let ’er rip on the big 45Gig data set. The results were miserable, but instructive ...
[3 comments]  
WF2: The Benchmark · [This is part of the Wide Finder 2 series.] A bunch of people have requested the sample data. Meanwhile, over at the wiki, there’s a decent discussion going on of what benchmark we should run ...
[5 comments]  
WF2: Benchmark Strawman · [This is part of the Wide Finder 2 series.] I’ve just updated the Wide Finder wiki with a straw-man benchmark (see “Strawman”), with sample Ruby code and output. Argh... it takes Ruby over 23 seconds to process just 100K lines. I hear talk of a Solaris/SPARC optimized version, gotta track that down. Comments please on the benchmark, then we can all start bashing our heads against the big file.
[15 comments]  
Wide Finder 2 · Last fall, I ran the Wide Finder Project. The results were interesting, but incomplete; it was a real shoestring operation. I think this line of work is interesting, so I’m restarting it. I’ve got a new computer and a new dataset, and anyone who’s interested can play ...
[18 comments]  
Ruby News · This really isn’t the place to come for Ruby news. But that’s OK, because I have the pointers to where you should go. Plus, one of the news stories is making me think “Smells like Erlang.” ...
[4 comments]  
New Computers · Today, we at Sun had a server announcement, and so did IBM. Get yer hot links & pix here ...
[3 comments]  
Hard Problems · I spent quite a bit of today at the O’Reilly 2008 Concurrency Summit. It was a congenial crowd, but at the end of the day kind of a downer, because we have lots of hard concurrency problems and not too many solutions. Anyhow, two subjects that came up were REST (which is concurrent at the largest possible scale), and, unsurprisingly, Erlang. And it struck me that they’re kind of like each other ...
[4 comments]  
WF XIII: Logistical Pain · This is the thirteenth progress report from the Wide Finder Project. It’s just a scratchpad to catalogue all the problems I’ve had getting contributed code to work. Probably not of general interest, but an essential part of a complete write-up ...
[13 comments]  
WF XI: Results · This is the eleventh progress report from the Wide Finder Project; I’ll use it as the results accumulator, updating it in place even if I go on to write more on the subject, which seems very likely. [Update: Your new leader: Perl.] ...
[27 comments]  
The Wide Finder Project · In my Finding Things chapter of Beautiful Code, the first complete program is a little Ruby script that reads the ongoing Apache logfile and figures out which articles have been fetched the most. It’s a classic example of the culture, born in Awk, perfected in Perl, of getting useful work done by combining regular expressions and hash tables. I want to figure out how to write an equivalent program that runs fast on modern CPUs with low clock rates but many cores; this is the Wide Finder project ...
[31 comments]  
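The Ruby script from Beautiful Code isn’t reproduced in this index entry. As a rough, hypothetical sketch of the regex-plus-hash-table idiom it describes, here is the same job in Go (Go rather than the original Ruby, and the pattern below only approximates the one the project used to pick ongoing article fetches out of the Apache log):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	// Roughly the Wide Finder pattern: a GET for an ongoing article.
	re := regexp.MustCompile(`GET /ongoing/When/\d\d\dx/(\d\d\d\d/\d\d/\d\d/[^ .]+) `)
	counts := make(map[string]int)

	in := bufio.NewScanner(os.Stdin) // e.g. run as: widefinder < access_log
	for in.Scan() {
		if m := re.FindStringSubmatch(in.Text()); m != nil {
			counts[m[1]]++ // hash table keyed by article path
		}
	}

	// Sort article paths by descending fetch count and print the top ten.
	keys := make([]string, 0, len(counts))
	for k := range counts {
		keys = append(keys, k)
	}
	sort.Slice(keys, func(i, j int) bool { return counts[keys[i]] > counts[keys[j]] })
	for i := 0; i < 10 && i < len(keys); i++ {
		fmt.Println(counts[keys[i]], keys[i])
	}
}

Single-threaded, that is essentially the whole program; the Wide Finder question is how to keep that simplicity while spreading the work across many slow cores.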
WF XV: On Parallel I/O · This is the fifteenth progress report from the Wide Finder Project; it’s fairly uncooked musing about parallelism and I/O ...
[13 comments]  
WF XIV: My Opinion · This is the fourteenth progress report from the Wide Finder Project. I still have a bunch of Wide Finders to run, and for the time being I’ll keep trying to run what people send me; we can’t have too much data about this problem. But some conclusions are starting to look unavoidable to me ...
[7 comments]  
WF XII: Discussion · This is the twelfth progress report from the Wide Finder Project. It exists to host the excellent discussion so far from others; see the comments ...
[17 comments]  
WF X: The Parade Continues · This is the tenth progress report from the Wide Finder Project. What with Vegas and the Commies, I’m behind on a lot of things, including the Wide Finder. This is another entry just to note other people’s work, which I really absolutely will buckle down and run on the big iron and report back and shower praise on the good ones and derision on the big misses ...
[5 comments]  
WF IX: More, More, More · So, I got distracted by a server launch and a Vegas trip, but the Wide Finder implementations keep rolling in ...
[4 comments]  
The T2 Servers · These T5x20 servers we’re announcing today are a big deal. My bet is that they end up making Sun a lot of money; but on the way, they’re going to bring the whole server business (not just Sun’s piece of it) face to face with some real disruption ...
 
Testing the T5120 · This was going to be a Wide Finder Project progress report, but I ended up writing so much about the server that I’d better dedicate another fragment to the comparisons of all those implementations; especially since there are still lots more implementations to test. So this is a hands-on report on a couple of more-or-less production T5120s, the T2-based server that’s being announced today. Headlines: The chip is impressive but weird; astounding message-passing benchmark numbers; fighting the US DoD ...
[1 comment]  
WF VIII: Snapshot · This is the eighth progress report from the Wide Finder Project; a quick comment-light aggregation of work that other people have been doing in the space. I’ve managed to get access to an unannounced many-core server and have some preliminary results (summary: Vinoski’s in the lead); I’ll publish those in the, uh, very near future, when things are, uh, less unannounced ...
[5 comments]  
WF VI: The Goal · This is the sixth progress report from the Wide Finder Project, in which I try to paint a picture of what the solution should look like. The project really has two goals. First, to establish whether it’s possible to write code that solves this problem and runs much faster in a many-core environment. Second, to consider how that code should look to the programmer who has to write it ...
[16 comments]  
WF VII: Other Voices · This is the seventh progress report from the Wide Finder Project, in which I report on the work of others and turn this into kind of a contest. I’ll be back later with one more whack at Erlang, and then probably move on to other Wide Finder options. [Update: If you’re working on this but I missed you, sorry; holler at me and I’ll make sure you get covered.] ...
[12 comments]  
WF IV: The Cascade · This is the fourth progress report from the Wide Finder Project. Following on my earlier fumbling at the controls of the Erlang race-car, several others joined the conversation. For particularly smart input, check out the Erlang questions mailing list and the already-linked-to Steve Vinoski piece, Tim Bray and Erlang. Other voices have questioned the project’s sanity, and brought Scala and Haskell to bear on the problem. But let’s push all that on the stack, and ask ourselves this question: what if processes were free? Answering it turns out to be all sorts of fun; I seize the Erlang controls back again, and steer it with a firm steady hand into a brick wall at top speed ...
[9 comments]  
WF V: Roundup · This is the fifth progress report from the Wide Finder Project; an aggregation of what other people have been saying ...
[5 comments]  
WF III: Lessons · This is the third progress report from the Wide Finder Project. Given that I launched this brouhaha late on a Friday, totally the worst possible time, I’m astounded at the intensity and quality of the conversation that’s going on. I want to address two themes that have emerged, one of which seems stupid and the other smart ...
[14 comments]  
WF II: Erlang Blues · This is the second progress report from the Wide Finder Project, and a follow-on from the first, Erlang Ho! The one thing that Erlang does right is so important that I’d like to get past its problems. So far I can’t ...
[30 comments]  
WF I: Erlang Ho! · This is the first progress report from the Wide Finder Project. Erlang is the obvious candidate for a Wide Finder implementation. It may be decades old but it’s the new hotness, it’s got a PragBook (@Amazon), I hear heavy breathing from serious software geeks whichever way I listen. So, let’s give it a whirl. [Warning: Long and detailed, but the conclusion comes first.] ...
[8 comments]  
Postmodern Errors · Today’s fashionable programming languages, in particular Ruby, Python, and Erlang, have something in common: really lousy error messages. I guess we just gotta suck it up and deal with it. But today I got something really, uh, special from Erlang ...
[15 comments]  
Sideways Computing · On many-core, wasted cores, threads, processes, transactional memory, and the E-word ...
[14 comments]  
Tab Sweep — Tech · Today we have Java yielding, thread ranting, REST lecturing, and identity insight ...
[6 comments]  
Thread Herrings · Over the last couple of years, I’ve written lots here about concurrency, and its issues for developers. The proximate cause is that my employer is pushing the many-core CPU envelope harder than anyone. This last week there’s been an outburst of discussion: from Nat Torkington, David Heinemeier Hansson, and Phillip Toland. They’re all worth reading, and the problem isn’t going away. But I’m worrying about it less all the time ...
[7 comments]  
Tab Sweep · Perhaps a little more all-over-the-map even than is usual: GPLv3 clarity, Functional Pearls, raina bird-writer, Java credits, framework programmers, and hacking my Canon ...
[4 comments]  
The London Illustrated News · I spent the week in London. Fun was had, pictures were taken, I learned things. Herewith illustrated notes on transportation, energy, finance technology, businesslike drinking, women’s clothing, Groovy, excellent lamb-chop curry, and a round red anomaly ...
[8 comments]  
Berkeley on Parallelism · Anyone who cares at all about taking advantage of these nasty new microprocessors that still follow Moore’s law, but sideways, not just straight up, ought to go and read The Landscape of Parallel Computing Research: A View from Berkeley. As the title suggests, it’s an overview paper. Thank goodness we have universities, so that smart people who know about this stuff can invest a few months in surveying the landscape and report back for the rest of us who are being whipped around by the market maelstrom. Herewith way too much applause and disagreement and expansion ...
[5 comments]  
Clementson on Concurrency · That would be Bill Clementson, in Concurrent/Parallel Programming - The Next Generation. I’ve been working on so much other stuff that the concurrency’s kind of been crowded out. Which isn’t good, because the highly-parallel future hasn’t stopped getting closer, and I just haven’t heard that much exciting concurrency news recently. Except from Google, where MapReduce and Sawzall may be pointing one of the ways forward. I actually did a little fooling around with Erlang (damn, that is one heavyweight install) and there’s a lot to like, but I don’t think the world is ready to give up object-orientation. There’s low-hanging fruit out there, and lots of pieces of the solution are in plain view, and we know where we’re trying to go: to a place where ordinary application programmers’ code naturally and effortlessly takes advantage of multi-core, multi-processor, clustered, or otherwise parallel hardware deployments. Because scaling out, rather than scaling up, is still the future.
 
Statelessness · Check out this surprising piece: The unbearable lightness of being stateless, by Ariel Hendel. He starts with wise words on asceticism and the business traveler, moves through something called “Logical Domains” which can serve as “surf bum domains”. It seems to make sense and I had to go back and read it again, because it was good, even though at the end of the day he is trying to sell you a computer.
 
Those Cruel Irish · People inside Sun were gleefully emailing around Colm MacCárthaigh’s big Niagara benchmark post and I was reading and found myself laughing out loud. The synopsis is: it’s a big serious benchmark and the box did great, pretty well slaughtering both a Dell Xeon and a Dell Itanium. But jeepers, those Irish dudes are heartless, I’m surprised there weren’t smoking shards of casing and silicon on the floor. I think most Apache & *n*x geeks would find themselves gasping and snickering a bit at Colm’s write-up, but there’s some real wisdom there too about filesystem and server tuning and so on, although some of the tricks are definitely don’t-try-this-at-home. Anyhow, here are some cute samples:
“Also, in each case, the system was pretty much unusable by the time we were done!”
“... about 83,000 concurrent downloads.”
[They managed to crash Solaris with the experimental event MPM]: “Then again, it was handling about 30,000 requests at the time, with no accept mutex.”
“Of course, no server should ever be allowed to get into that kind of insane territory.”
“Note: these are stupid values for a real-world server... really only useful if you are doing some insane benchmarking and testing.”
“...5718 requests per second.”
Hey Jonathan, let ’em keep the box. [Update: They’re keeping it.]

 
Niagara Day · You can’t possibly imagine the amount of work it’s taken to get here. Richard McDougall has put together a Niagara Blogging Carnival, which is the right place to start if you’re the kind of person that the MSM (Main Stream Marketing, that stands for) isn’t aimed at; i.e., not a CEO, CIO, or journalist. My own personal favorite Niagara newsbites:
Item: Nobody gets 100% yield on their chips. I gather that for the Niagaras that don’t turn out perfect, we’ll sell ’em cheaper as 7-core, 6-core, 4-core, or whatever. Some of these configs might turn out to be the deal of the century depending on how we price them.
Item: They’re open-sourcing the hardware, too. I’m not sure exactly what that means in the big picture, and the licensing is going to matter, but it’s cool.
Item: Those eight cores, when one’s not busy, they stop it. No, they don’t idle-loop it, they stop it. Obvious when you think of it.
Item: When not to use the new stuff.
Item: How the I/O works.
Item: What makes chips wear out and fail? Lots of things, but especially heat; so low-wattage chips are RAS winners.
Item: Maximum geek-out!
Last item: When you have Java threads that map real closely onto Solaris threads that map real closely onto hardware threads, and you also have a lot of well-implemented hardware threads, this is what happens.
 
CMT Rumbles in the Distance · The party line on our Niagara technology continues to be “sometime late this year or early next year”. I’ve written lots on the concurrency issues that arrive along with Niagara (and every other chip designer is heading down the same multithreaded path); if you want more, check out CMT is coming: Is your application ready? by Richard McDougall, with tons of pointers to big, serious pieces on the subject, including one by Kunle Olokotun, the man behind Niagara.
 
The Joy of Threads · I’ve had quite a bit to say here about how concurrent software, which is getting more important, remains brutally difficult—beyond the reach, some say, of many application programmers. I’m a little worried about negative spin, because if you enjoy programming, you should give concurrency a try; some of us find it especially satisfying. I can remember, like it was yesterday, the moment in my undergrad CS course when I first understood what a “process” was, and then, a few years later, the same feeling when I really got threads. Yeah, it’s tough; you’ll find yourself debugging by print statement, and sometimes with a compile-run-think cycle time measured in minutes. But when you have the computer doing a bunch of things at once, and they all fit together and the right things happen fast, well, that’s some pretty tasty brain candy. All this came to mind during our recent long weekend in the English countryside; it seemed entirely reasonable to me to sit in a quiet corner of the pub, or with a view of the ocean, and get a few of those compile-run-think cycles in. I can understand that not everyone feels this way, but to all the coders out there: this stuff is not only good for your career, it can be its own reward.
 
Threads Redux · The June 12th On Threads piece got slashdotted (twenty thousand hits for a 2,300 word hard-tech piece, not bad), which provoked really interesting feedback from (among others) David Dagastine, Greg Wilson, and Ben Holm, along with pointers to some related work. All those pointers are worth following, and some of the points are worth a little more discussion ...
 
On Threads · Last week I attended a Sun “CMT Summit”, where CMT stands for “Chip Multi-Threading”; a roomful of really senior Sun people talking about the next wave of CPUs and what they mean. While much of the content was stuff I can’t talk about, I was left with a powerful feeling that there are some real important issues that the whole IT community needs to start thinking about now. I’ve written about this before, and of the many others who have too, I’m particularly impressed by Chris Rijk’s work. But I think it’s worthwhile to pull all this together into one place and do some calls to action, so here goes. [Ed. Note: Too long and too geeky for most.] [Update: This got slashdotted and I got some really smart feedback, thus this follow-up.] ...
 
Laptops and Servers · It just dawned on me that in a desk-side box, heat is not a major problem because it’s got a room to cool it off, usually without too many other computers in it. Laptops, however, are like back-room “mainframe” servers in that heat is a big deal. In the one case you’re worried about your users’ gonads and in the other your HVAC budget, but the problem is the same. Right now, the heat budget is a big concern for the guys at Sun who design our biggest servers. It’s no secret that our throughput computing initiative is partly about this: lower clock speeds, more cores, thread-level parallelism. Am I predicting that laptops with SPARC processors will be leaping off the shelves next quarter? I wouldn’t go that far. But I am smelling converging design spaces.
 
Software in the TLP Era · Flying over the Atlantic, I read all eight parts of Chris Rijk’s Thread Level Parallelism Design Decisions, and I wish a few more software geeks would go and read it. Herewith a few notes on software design in the era of Thread Level Parallelism ...
 