This is provoked by a monumental essay over on MSDN by Jan Gray entitled Writing Faster Managed Code: Know What Things Cost. I think people who care about performance in modern programming environments, even those who don’t plan to go near the .NET CLR, ought to read this. What follows is a bunch of reactions and observations at varying levels of meta-ness.

Metagripe: Does It Have To Be So Ugly? · I know it’s the content that matters, but Microsoft.com in general, and MSDN and this article in particular, are just eye-bleeding butt-ugly. The navigational apparatus is clunky and bloated, the typefaces are stupidly small, and all these years later, Frames usually suck, and they suck particularly badly here.

This is not to diss the author; I’m sure it’s not his fault.

Take the Pledge! · The first thing in this essay is exactly what it should be: an assertion that performance matters, and that individual developers have to pay attention, and measure, and use the right tools to make things run fast. Gray provides a pledge that I’d be happy to put my name to, and next time I start a multi-person software product, I’m going to make the programmers read it and sign up.

An example of programmers who have not taken the pledge is the team at Apple who shipped iCal and iSync, both so unusably slow in their first release that I haven’t gone back to check them out, even though they claim they can sync to my current cellphone.

Shipping slow code is profoundly bad practice; when you make a person wait for a computer, you’re implicitly asserting that the person’s time is worth less than the computer’s, which is both immoral and bad economics.

Now, on to some specifics.

Out of Order · Once you get past that pledge, the order of this essay quickly goes off the rails. Gray deals out several thousand words of detailed and useful information about performance issues in the CLR, and only halfway through the essay does he introduce the “CLR Profiler” and give advice on how to measure what your code is actually doing. This ordering is probably wrong, because I think the way to write fast code is simple and has been well-known for a long time. Here it is:

  1. Design and code your app, trying hard not to do anything really stupid, and striving for flexibility.
  2. If it’s fast enough, don’t worry any more.
  3. If it’s slow, get out your profiler and measure things until you understand where the problem is.
  4. Fix the problem, which may well require major refactoring, but that’s OK because that’s probably coming at you pretty soon anyhow with the next batch of requirements. Furthermore, you couldn’t have avoided it because nobody is smart enough to predict where the bottlenecks will be in a complex application before it’s running.

So, it’s good to know the details about method dispatches and array references and all that stuff, but it would be completely bogus, for example, to decide in the early stages of a product that you’re going to use an array instead of a linked list because of the kind of info presented here, unless you have a very powerful reason to believe that the array/list is going to be a performance bottleneck. Because, of course, your guesses in advance about performance bottlenecks are apt to be wrong.
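
To make that concrete, here’s a minimal sketch of what “measure, don’t guess” might look like for the array-versus-linked-list question. It’s in Java rather than CLR-hosted C#, but the managed-runtime point carries over; the class name, workload, and sizes are all made up for illustration.

  import java.util.ArrayList;
  import java.util.LinkedList;
  import java.util.List;

  public class ListVsArrayBench {
      // Crude timing harness; real measurement means a profiler and
      // repeated, warmed-up runs, not a single stopwatch pass.
      static long timeSum(List<Integer> list) {
          long start = System.nanoTime();
          long sum = 0;
          for (int v : list) {    // sequential traversal only; other access
              sum += v;           // patterns could reverse the outcome
          }
          long elapsed = System.nanoTime() - start;
          System.out.println(list.getClass().getSimpleName() + ": sum=" + sum
                  + " in " + elapsed / 1_000_000 + " ms");
          return elapsed;
      }

      public static void main(String[] args) {
          final int N = 2_000_000;  // arbitrary size, purely for illustration
          List<Integer> array = new ArrayList<>(N);
          List<Integer> linked = new LinkedList<>();
          for (int i = 0; i < N; i++) {
              array.add(i);
              linked.add(i);
          }
          timeSum(array);
          timeSum(linked);
      }
  }

The point is not which structure wins (the contiguous one usually does, for the cache reasons discussed below); the point is that until a profile of the whole application says this traversal is hot, the difference doesn’t matter.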

Performance = Memory Management · In the culture where Mr. Gray lives, it is so obvious that performance is all about memory management that he doesn’t even come out and say it specifically. A huge proportion of this essay discusses the memory-allocation costs of different approaches, along with tricks and techniques to keep things under control.

This is consistent with my experience; despite the fact that we’re now measuring RAM in gigabytes on our personal computers, the world contains a lot of programmers, and the one thing that programmers do is burn memory, and so there’s never enough and it’s never OK to be sloppy about how you use it.
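
As a hedged illustration of how fast allocation costs pile up in a managed runtime, consider this sketch, again in Java rather than C#, with an arbitrary workload: the concatenating version allocates a fresh string object on every pass, so the garbage collector ends up doing vastly more work than in the buffer-reusing version.

  public class AllocDemo {
      // Allocation-heavy: each += allocates a brand-new String and copies
      // everything so far: O(n^2) work, and n short-lived objects for the
      // collector to clean up.
      static String concat(int n) {
          String s = "";
          for (int i = 0; i < n; i++) {
              s += i + ",";
          }
          return s;
      }

      // Allocation-light: one growable buffer, amortized O(n).
      static String build(int n) {
          StringBuilder sb = new StringBuilder();
          for (int i = 0; i < n; i++) {
              sb.append(i).append(',');
          }
          return sb.toString();
      }

      public static void main(String[] args) {
          int n = 100_000;  // arbitrary; large enough to feel the difference
          long t0 = System.nanoTime();
          concat(n);
          long t1 = System.nanoTime();
          build(n);
          long t2 = System.nanoTime();
          System.out.println("concat: " + (t1 - t0) / 1_000_000 + " ms, build: "
                  + (t2 - t1) / 1_000_000 + " ms");
      }
  }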

Storage Hierarchy Delay Times · When your program’s running, your data moves back and forth between disks and RAM and cache, and of course the relative performances of these things keep changing, and not linearly with each other either.

But over the years, a couple of things have remained true, and Mr. Gray provides some scary-huge orders-of-magnitude difference numbers that show they’re still true:

  • Page faults totally suck. You really need the important stuff to be in RAM and stay in RAM.
  • The cache/RAM speed difference is big enough that tight inner loops and small data structures continue to pay for themselves big-time.

(Put another way, all CPUs wait for data at the same speed.)
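
Here’s a minimal sketch of that second bullet, the cache/RAM gap, once more in Java with arbitrary sizes: both loops touch exactly the same elements and do the same arithmetic, but the row-order loop walks memory sequentially while the column-order loop strides across it, so cache misses dominate the slow one.

  public class CacheDemo {
      public static void main(String[] args) {
          final int N = 4_096;        // arbitrary size, for illustration only
          int[][] m = new int[N][N];  // each row is a contiguous int[]

          long t0 = System.nanoTime();
          long sumRows = 0;
          for (int i = 0; i < N; i++)      // row order: sequential access,
              for (int j = 0; j < N; j++)  // every loaded cache line fully used
                  sumRows += m[i][j];

          long t1 = System.nanoTime();
          long sumCols = 0;
          for (int j = 0; j < N; j++)      // column order: strided access,
              for (int i = 0; i < N; i++)  // roughly one cache miss per element
                  sumCols += m[i][j];

          long t2 = System.nanoTime();
          System.out.println("row order:    " + (t1 - t0) / 1_000_000
                  + " ms (sum=" + sumRows + ")");
          System.out.println("column order: " + (t2 - t1) / 1_000_000
                  + " ms (sum=" + sumCols + ")");
      }
  }

On typical hardware you should see the column-order loop run several times slower, which is the whole argument for tight inner loops and small, densely packed data structures.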

Weirdly enough, those things were true when I was processing the 572MB Oxford English Dictionary on 16MB-RAM Sun workstations in 1987. This is no guarantee that they’ll still be true next decade, but I’d probably take a bet on it.

