The Atlantic published “The Coming Software Apocalypse” by James Somers, which is full of dire warnings and strong claims. Here’s one: “Since the 1980s, the way programmers work and the tools they use have changed remarkably little.” My first programming job was in 1979, I still construct software, and I can testify that that assertion is deeply wrong, as is much else in the piece.

I would very much like to place an alternative view of my profession before the people who have consumed Mr Somers’, but I wouldn’t know how, so I’ll just post it here; maybe an Atlantic reader or two will stumble across it.

Oops · When I read this piece I tweeted “Reading non-coders’ words about coding is… weird.” That was wrong, because there’s plentiful evidence that he’s an accomplished developer. So, apologies. But he’s still wrong.

Wrong, you say? · First of all, the people Somers describes, who write the matter-of-life-and-death logic at the center of the systems that dispatch 911 calls and drive cars and fly planes, are a tiny minority; it’s like a dog-care piece focused on wilderness search-and-rescue dogs. There’s nothing wrong with that kind of dog, nor with the people who program safety-critical systems, but I’ve never met one, and I’ve been doing this for almost four decades.

There’s another problem with Somers’ piece: its claim that writing code is passé, that we’ll be moving away from that into a world of models and diagrams and better specifications and direct visual feedback. This is not exactly a novel idea; the first time I encountered it was in a computing magazine sometime around 1980.

Yes, the notion that you build complex interactions between computing devices and the real world by editing lines of code feels unnatural and twisted, and in fact is slow and expensive in practice. We’ve been looking for a better way since I got into this game; but mostly, we still edit lines of code.

And as for the sensible-sounding proposal that we just write down our requirements, not in code, but in something much higher level, in such a way that a computer can understand them as written and execute them? That’s another old and mostly-failed idea.

So, Somers is wrong twice. First, in asserting that software is moving away from being built on lines of code (it isn’t), and second, in asserting that the craft of constructing software isn’t changing and getting better (it is).

So, what do you actually do, then? · Glad you asked. All sorts of things! We developers are now some millions strong worldwide; almost certainly more than ten million and, I suspect, fewer than fifty; but it’s hard to measure.

As in most professions, most of the work is strikingly pedestrian: discovering what our co-workers need their computers to do, and also what their managers want, and trying to arrange to keep these tribes happy and at peace with their computers and each other.

To a huge extent, that involves acquiring, deploying, and configuring software that was created by others. Thus, a lot of time in meetings, and then even more figuring out how to make the travel or scheduling or amortization app do what people need done.

On the other hand, some of us write software for rockets, for music synthesizers, for Pixar movies; all these things have an obvious cool factor. And others (surprisingly, among the most-admired) write “low-level” software, useful only to programmers, which underlies all the software that is useful to actual humans. There are many kinds of this stuff: for example “Operating Systems”, “Database kernels”, “Filesystems”, “Web frameworks”, and “Message brokers”.

Software is getting better · Let me be more specific: compared to back when I was getting started, we build it faster, and when we’re done, it’s more reliable.

The reasons are unsubtle: we build it faster because we have better tools, and it’s more reliable because we’re more careful, and because we test it better.

Reviewing · The big software builders (for example Amazon Web Services, where I work) have learned to follow simple practices with big payoffs. First, those lines of code: they never get put to work until they’ve been reviewed by a colleague. In the vast majority of cases, the colleague finds problems and requests changes, arguments break out, and the new code goes through several revisions before being given the green light. For major pieces of infrastructure code, approval required from two or more reviewers, and ten or more revision cycles, aren’t terribly uncommon.

Unit Testing! · Software is constructed of huge numbers of (mostly) very small components; we use names like “functions”, “routines”, and “methods”. They are the units that Unit Testing tests. The unit tests are other pieces of software that feed in many different pieces of data and check that what comes out is as expected. There are commonly more lines of code in the unit tests than in the software under test.
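To make that concrete, here is a sketch of what a unit test looks like, using Python’s standard `unittest` module. The amortization function and the test names are invented for this example; the point is the shape: feed data in, check what comes out.

```python
import unittest

def monthly_payment(principal, annual_rate, months):
    """Fixed-rate amortization payment (a hypothetical function under test)."""
    if months <= 0:
        raise ValueError("months must be positive")
    if annual_rate == 0:
        return principal / months
    r = annual_rate / 12  # monthly interest rate
    return principal * r / (1 - (1 + r) ** -months)

class MonthlyPaymentTest(unittest.TestCase):
    """Each test feeds in a piece of data and checks the expected result."""

    def test_zero_interest_divides_evenly(self):
        self.assertEqual(monthly_payment(1200, 0, 12), 100)

    def test_rejects_nonpositive_term(self):
        with self.assertRaises(ValueError):
            monthly_payment(1200, 0.05, 0)

    def test_total_paid_exceeds_principal_when_rate_positive(self):
        payment = monthly_payment(100_000, 0.06, 360)
        self.assertGreater(payment * 360, 100_000)

if __name__ == "__main__":
    # exit=False keeps the test runner from calling sys.exit()
    unittest.main(argv=["monthly_payment_test"], exit=False)
```

Note the proportions: three tests, each a few lines, already outweigh the function they exercise, which matches everyday experience.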

We have loads and loads of tools specifically set up to support Unit Testing; among other things, when you look at those lines of code, there’ll be a vertical bar in the margin that’s green beside lines of code that have been exercised by the unit tests, red beside the others.

These days, we don’t always demand 100% coverage (some code is just too routine and mundane), but we expect anything nontrivial to be covered well by the tests. I think the rise of unit testing, starting sometime not too long after 2000, has yielded the single biggest boost to software quality in my lifetime.

There are other kinds of testing (“Integration”, “Smoke”, “Fuzz”) and we use them all, along with tools that read your code and find potential problems, just as Microsoft Word highlights your spelling mistakes.
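Of those, fuzz testing is probably the least familiar: generate lots of random inputs and check that some invariant holds for every single one. A toy sketch in Python; the run-length encoder and the round-trip invariant are invented for this example:

```python
import random

def rle_encode(s):
    """Toy run-length encoder: 'aaab' -> [('a', 3), ('b', 1)]."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs):
    """Inverse of rle_encode."""
    return "".join(ch * n for ch, n in runs)

def fuzz(trials=1000, seed=42):
    """Throw random strings at the encoder; decoding must round-trip exactly."""
    rng = random.Random(seed)  # seeded, so failures are reproducible
    for _ in range(trials):
        length = rng.randrange(0, 20)
        s = "".join(rng.choice("ab c") for _ in range(length))
        assert rle_decode(rle_encode(s)) == s, repr(s)

fuzz()
```

Real fuzzers are far smarter about generating adversarial inputs, but the principle is the same: volume and randomness find the cases no human thought to write a test for.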

Night and day · It doesn’t sound like much. But seriously, it’s like night and day. Does it sound a little tedious? In truth, it is. But also, our tools have been getting better year over year; programming in 2017 is really a lot more pleasant than it was in 2007, 1997, or 1987.

It’s like this: you sit down to improve a piece of software, make a couple of changes, and suddenly a lot of unit tests are failing, leaving ugly red trails on your screen. (In fact, if you made changes and didn’t break unit tests, you worry that something’s wrong.) But then you dig into them one by one, and after not too long, it’s all back to green; which is really a good feeling.

I’m not going to argue that the advanced methods Somers enumerates (being model-driven, state machines, things like TLA+) are useless, or that they’re not being used; I personally have made regular use of state-machine technology. But by and large they’re side-shows. We build software better than we ever have, and it’s just a matter of reviewing and testing, and testing and testing, and then testing some more.
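For what state-machine technology looks like in miniature: you write down a table of the legal (state, event) transitions, and anything not in the table is rejected loudly instead of silently doing something wrong. A toy sketch in Python; the file-handle states and events are invented for this example:

```python
# Table of legal transitions: (current state, event) -> next state.
# Any (state, event) pair not listed here is illegal by construction.
TRANSITIONS = {
    ("closed", "open"): "open",
    ("open", "write"): "open",
    ("open", "close"): "closed",
}

def step(state, event):
    """Advance the machine by one event, rejecting illegal ones."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}") from None

state = "closed"
for event in ("open", "write", "write", "close"):
    state = step(state, event)
# The machine is back in "closed"; step("closed", "write") would raise ValueError.
```

The payoff is that the whole behavior is in one auditable table, which is also why protocol handlers and parsers lean on this style so heavily.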

We’re not perfect. But we’re really a lot more grown-up than we used to be. And, relax: there’s no apocalypse on the horizon.



From: Rob Graves (Dec 01 2017, at 03:39)

I read the Atlantic piece some weeks back and it annoyed me. This fragment is a far better rebuttal than I could ever put together. Thank you - I hope non-coding Atlantic readers do stumble across it.

I’ve coded on and off for 30 years, and worked in teams developing 4th generation languages in the 80s - which were going to ‘solve everything’ in a similar sort of conceptual model to the one the article discussed. (They didn’t.)

I could only suggest to the Atlantic author to review software history before attempting to reinvent programming.


From: William Payne (Dec 01 2017, at 03:42)

I think that the promise of model based engineering in software remains almost entirely unrealised.

First of all, let me address one potential misconception:

Model based engineering isn't about diagrams. Models can also be text. Hardware engineers long ago moved away from line-and-box diagrams towards languages like Verilog and VHDL. The availability of good textual diff and merge tools, in my opinion, makes text a natural 'primary' representation for models. So we can write models in MATLAB or Python or even C++ if we wanted to.

So if a model isn't (necessarily) a diagram, then what is it? What is the difference between a model and any other piece of software?

The answer is in the way that it is used, and to understand this we need to understand the social problems in other engineering disciplines that models help to resolve.

In a multidisciplinary engineering effort, the hardware engineer in charge of the structure of the system and responsible for its weight may find himself in conflict with the engineer in charge of the thermal budget, who is responsible for heat dissipation; or with the electrical engineer in charge of the power budget; or with the systems engineer selecting various electronic components and sensors.

Each engineer not only has different priorities, but as they come from different disciplines they may not even share the same technical concepts and language with which to discuss and resolve their conflicting priorities.

Each one has a different model that they use to describe the part of the system for which they are responsible -- which may be a mental model, or it may be an explicit piece of documentation -- or, as is more likely, it will be a piece of executable code that shows the impact of various design decisions and design parameters.

By using a common modeling tool with support for different types of models in different domains, the impact of a design decision made from the perspective of one discipline can be set against the design decisions made in the other disciplines, and a joint optimization performed which takes account of the parameters and 'red lines' from each discipline.

In other words -- the point of modeling is to provide a facility that supports multiple partial representations of the problem and/or solution; each of which is founded on a different conceptual basis -- and to then permit those representations to interact with one another to identify inconsistencies and to allow them to be resolved through negotiation.

Back to software engineering.

As software engineering becomes more mature, sub-disciplines are emerging. We have front-end developers; back-end developers; database specialists; security specialists; test engineers; build engineers; dev-ops guys; algorithms engineers; machine learning specialists; machine vision specialists... and the tools and terminology for each of these nascent disciplines is diverging and growing more disparate with every passing year.

If we don't need model based software engineering yet -- then it is only because these sub-disciplines have not yet diverged to a level where communication and agreement becomes an issue ...


From: Dennis Doubleday (Dec 01 2017, at 07:30)

Those "higher level requirement specification languages" inevitably lack the precision necessary to specify the actual interactions, which is why programming languages were invented in the first place.


From: Eric H (Dec 01 2017, at 10:37)

Hey, this post just improved my code at work. You mentioned that there are tools that display coverage info and I just realized that I use a tool like that all day every day, but hadn't been paying much attention to the coverage info ... I looked at the info for my most recent code-review, and eww! It was all red. And I remembered that I had thought "I should really write some tests for this change" ...

... which I just did. Thanks!

(Of course, now I can't tell the green bars from the red bars because I'm color-blind, but that's a separate issue...)


From: Paul Clapham (Dec 01 2017, at 13:47)

I've been programming for about as long as you have and I recognize everything you say. The idea that programming can be simplified goes right back to the 1960's, when the decision was made to design COBOL so that it could be read and understood by managers.

One thing which you didn't really mention was this: programming has been able to address more and more complex requirements as time goes on. Back in the 70's it was ground-breaking to attach a truck scale to a computer so that the grain elevator managers could account for the farmer's deposits automatically. But now it seems there's nothing that programming can't address -- for better or for worse.


From: Ivan Sagalaev (Dec 01 2017, at 14:47)

Speaking of big boosts to software quality, along with unit tests I would also name the adoption of VCSes, which were not all that common before about the same point in the early 2000s. And the development of all the really important core libraries by the community, with source code in the open, instead of relying on them being delivered and supported by companies.


From: Andrew Reilly (Dec 01 2017, at 17:39)

I haven't been coding for quite as long as you, but nearly. I also agree with you that the original Atlantic article was annoying and that programming has new and shinier tools and techniques, and that code is in some sense "much better" than it was, but I'm afraid that in my opinion, that isn't saying very much. It's not as though much code is actually "good". We can produce vast, vast quantities of software product that "works" in the sense that under certain, benign conditions it will perform as advertised, but we are so far from regularly producing code that works in the sense of "can't fail". The complexity of the multi-component interaction space is so far beyond comprehension that unexpected behavior is essentially inevitable, and will remain so until we can apply something like mathematical rigor to the correctness problem.


From: David (Dec 01 2017, at 23:56)

Thanks for this illuminating article. It would be a big mistake for someone with 12 years of experience to contradict what a 30-year veteran has seen with his own eyes, even when we are talking about the same thing. So: my admiration, and a respectful distance.

I’d like to raise one aspect I detected in the article that seems to lack “contrast”.

This is about the “boundaries”, the “interfaces offered”, and, when necessary, the experts charged with helping you cross them.

Maybe we can’t build an “all in one” machine or a universal jargon, though that has very often been attempted. My opinion can be simplified with an example: every country has its borders, and often similar bridges or procedures are used to cross them (also often not, but I’m trying to explain where a possible weak point could be improved).


From: Mark (Dec 05 2017, at 09:54)

You're correct that programming and the related tools have really improved. However, I think you miss the points that (1) although the amount of software which can have terrible consequences when it fails is a minority today, this proportion is consistently increasing, and (2) programmers' aversion to spending time on modelling and "requirements" is the main blocker to moving to a much higher quality paradigm for software development.

To be deliberately controversial, do you think that problems like the leftpad debacle are down to individual rogue programmers being bad, and that the tools and approach are all fine, the same way that mass shootings are down to individual rogue gun owners, and gun ownership for all is completely fine? :-)


From: Doug K (Dec 05 2017, at 15:57)

thank you, I had a similar reaction to the article. The dreaming and speculations about "moving away from code to a world of models and diagrams and better specifications and direct visual feedback" were particularly amusing, given the long history of such systems', let us say, incomplete success. Rational UML, anyone?

Another good laugh was "a celebrated programmer of JavaScript" finds that it is hard to teach coding JavaScript. That's mostly because JavaScript is a horrible language, with horrible tools.

Another thing that is making software better is Devops and ideas like the Netflix Chaos Monkey. Rapid deployment and large-scale testing produce much more robust software. It's wonderful to be part of this, to see how we (who is "we", paleface?) are in fact getting better at the hard job of making software.


From: Blaine Osepchuk (Dec 06 2017, at 19:31)

Well, I agree with you a little and disagree with you a little.

I'm a software developer with almost 20 years of experience. I don't develop safety critical systems but I have read and listened to experts and they don't share your sense that everything is okay.

Watch this video of Dr Nancy Leveson (an expert on safety-critical systems):

And this one by Martyn Thomas (another expert):

Leveson thinks the problem with safety in software is so bad that she wrote a book and released it for free:

Yes, our tools are better. And we're doing more code reviews and unit testing. That's totally awesome and I'm fully in favor of these efforts and more. I totally agree that writing software in 2017 is a much better experience than it was in 2000. My static analysis tools made me look like an absolute fool the first time I ran them on an old code base. It was quite humbling. But software quality still sucks (see the link below for my evidence). You don't have to write safety-critical software to endanger people or property. I wrote about that here:

So, I know this is the internet and many people are more interested in scoring points than learning something. But if you're reading this and you actually want to learn something, follow the links in this post. Watch the videos and download Leveson's book. I guarantee you'll learn something.


From: Gerard (Dec 07 2017, at 01:12)

Given that you've never met anyone working on safety critical software, I'm not sure you're qualified to talk about how well the process of developing it works .....


November 27, 2017

I am an employee of, but the opinions expressed here are my own, and no other party necessarily agrees with them.

A full disclosure of my professional interests is on the author page.