The Atlantic published The Coming Software Apocalypse by James Somers, which is full of dire warnings and strong claims. Here’s one: “Since the 1980s, the way programmers work and the tools they use have changed remarkably little.” My first programming job was in 1979, I still construct software, and I can testify that the assertion is deeply wrong, as is much else in the piece.

I would very much like to place an alternative view of my profession before the people who have read Mr Somers’ piece, but I wouldn’t know how, so I’ll just post it here; maybe an Atlantic reader or two will stumble across it.

Oops · When I read this piece I tweeted “Reading non-coders’ words about coding is… weird.” That was wrong because there’s plentiful evidence that he’s a well-accomplished developer. So, apologies. But he’s still wrong.

Wrong, you say? · First of all, the people Somers describes, who write the matter-of-life-and-death logic at the center of the systems that dispatch 911 calls and drive cars and fly planes, are a tiny minority — it’s like a dog-care piece focused on wilderness search-and-rescue dogs. There’s nothing wrong with that kind of dog, nor with the people who program safety-critical systems, but I’ve never met one, and I’ve been doing this for almost four decades.

There’s another problem with Somers’ piece: its claim that writing code is passé, that we’ll be moving away from that into a world of models and diagrams and better specifications and direct visual feedback. This is not exactly a novel idea; the first time I encountered it was in a computing magazine sometime around 1980.

Yes, the notion that you build complex interactions between computing devices and the real world by editing lines of code feels unnatural and twisted, and in fact is slow and expensive in practice. We’ve been looking for a better way since I got into this game; but mostly, we still edit lines of code.

And as for the sensible-sounding proposal that we just write down our requirements, not in code, but in something much higher level, in such a way that a computer can understand them as written and execute them? That’s another old and mostly-failed idea.

So, Somers is wrong twice. First, in asserting that software is moving away from being built on lines of code (it isn’t), and second, in claiming that the craft of constructing software isn’t changing and getting better (it is).

So, what do you actually do, then? · Glad you asked. All sorts of things! We developers are now some millions strong worldwide — almost certainly more than ten million and, I suspect, fewer than fifty million; but it’s hard to measure.

As in most professions, most of the work is strikingly pedestrian: discovering what our co-workers need their computers to do, and also what their managers want, and trying to keep these tribes happy and at peace with their computers and each other.

To a huge extent, that involves acquiring, deploying, and configuring software that was created by others. Thus, a lot of time in meetings, and then even more figuring out how to make the travel or scheduling or amortization app do what people need done.

On the other hand, some of us write software for rockets, for music synthesizers, for Pixar movies; all these things have an obvious cool factor. And others (surprisingly, among the most-admired) write “low-level” software, useful only to programmers, which underlies all the software that is useful to actual humans. There are many kinds of this stuff: for example “Operating Systems”, “Database kernels”, “Filesystems”, “Web frameworks”, and “Message brokers”.

Software is getting better · Let me be more specific: Compared to back when I was getting started, we build it faster and when we’re done, it’s more reliable.

The reasons are unsubtle: We build it faster because we have better tools, and it’s more reliable because we’re more careful, and because we test it better.

Reviewing · The big software builders (for example Amazon Web Services, where I work) have learned to follow simple practices with big payoffs. First, those lines of code: They never get put to work until they’ve been reviewed by a colleague; in the vast majority of cases, the colleague finds problems and requests changes, arguments break out, and the new code goes through several revisions before being given the green light. For major pieces of infrastructure code, required approval from two or more reviewers, and ten or more revision cycles, aren’t terribly uncommon.

Unit Testing! · Software is constructed of huge numbers of (mostly) very small components; we use names like “functions”, “routines”, and “methods”. They are the units that Unit Testing tests. The unit tests are other pieces of software that feed many different pieces of data in and check that what comes out is as expected. There are commonly more lines of code in the unit tests than in the software under test.
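To make that concrete, here’s a minimal sketch of a unit test using Python’s built-in unittest module. The normalize_phone function and the behaviors it’s tested for are invented for illustration, not taken from any real system:

```python
import unittest

def normalize_phone(raw):
    """Reduce a North American phone number to ten digits (hypothetical example)."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # drop the leading country code
    if len(digits) != 10:
        raise ValueError("not a 10-digit number: %r" % raw)
    return digits

class NormalizePhoneTest(unittest.TestCase):
    def test_plain_digits(self):
        self.assertEqual(normalize_phone("6045551234"), "6045551234")

    def test_punctuation_stripped(self):
        self.assertEqual(normalize_phone("(604) 555-1234"), "6045551234")

    def test_country_code_dropped(self):
        self.assertEqual(normalize_phone("+1 604 555 1234"), "6045551234")

    def test_garbage_rejected(self):
        with self.assertRaises(ValueError):
            normalize_phone("555-1234")  # only seven digits

if __name__ == "__main__":
    unittest.main()
```

Notice that the tests already outnumber the lines in the little function they exercise, which is exactly the ratio described above.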

We have loads and loads of tools specifically set up to support Unit Testing; among other things, when you look at those lines of code, there’ll be a vertical bar in the margin that’s green beside lines of code that have been exercised by the unit tests, red beside the others.
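Those margin bars are driven by coverage tools. For Python code, one real example is coverage.py; here’s a sketch (assuming the coverage package is installed) of driving it programmatically to run the tests above and report which lines were never reached. In practice most people just run the coverage command-line tool and let their editor paint the bars:

```python
import unittest
import coverage  # the coverage.py package, installed separately

cov = coverage.Coverage()
cov.start()

# Discover and run the unit tests in the current directory
# (for example, the NormalizePhoneTest sketch above).
suite = unittest.defaultTestLoader.discover(".")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report(show_missing=True)  # lists the line numbers the tests never exercised
```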

These days, we don’t always demand 100% coverage (some code is just too routine and mundane) but we expect anything nontrivial to be covered well by the tests. I think the rise of unit testing, starting sometime not too long after 2000, has yielded the single biggest boost to software quality in my lifetime.

There are other kinds of testing (“Integration”, “Smoke”, “Fuzz”) and we use them all, along with tools that read your code and find potential problems, just as Microsoft Word highlights your spelling mistakes.
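Fuzz-style testing can be sketched in a few lines too. Here’s a hypothetical example using the Hypothesis property-based-testing library: it throws large numbers of randomly generated strings at the made-up normalize_phone function from the earlier sketch and insists that the only acceptable outcomes are ten clean digits or a well-behaved ValueError:

```python
from hypothesis import given, strategies as st

# normalize_phone is the invented function from the unit-testing sketch above.

@given(st.text())
def test_normalize_phone_never_misbehaves(raw):
    # Property: for any input string whatsoever, we either get exactly
    # ten digits back or a clean ValueError; never a crash, never junk.
    try:
        result = normalize_phone(raw)
    except ValueError:
        return
    assert len(result) == 10 and result.isdigit()
```

Run under a test runner like pytest, Hypothesis generates the inputs and automatically shrinks any failing case down to a minimal example.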

Night and day · It doesn’t sound like much. But seriously, it’s like night and day. Does it sound a little tedious? In truth, it is. But also, our tools have been getting better year over year; programming in 2017 is really a lot more pleasant than it was in 2007, 1997, or 1987.

It’s like this: You sit down to improve a piece of software, make a couple of changes, and suddenly a lot of unit tests are failing, leaving ugly red trails on your screen. (In fact, if you made changes and didn’t break unit tests, you worry that something’s wrong.) But then you dig into them one by one, and after not too long, it’s all back to green, which is really a good feeling.

I’m not going to argue that the advanced methods Somers enumerates (being model-driven, state machines, things like TLA+) are useless, or that they’re not being used; I personally have made regular use of state-machine technology. But by and large they’re side-shows. We build software better than we ever have, and it’s just a matter of reviewing and testing, and testing and testing, and then testing some more.
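For what it’s worth, the state-machine style I’m talking about doesn’t require exotic tooling; it can be as plain as a table of legal transitions. Here’s a hypothetical sketch in Python; the states and events are invented, not drawn from any real protocol:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    CONNECTING = auto()
    CONNECTED = auto()
    CLOSED = auto()

# The whole protocol, spelled out as (current state, event) -> next state.
# Any pair missing from this table is illegal, and we find out immediately.
TRANSITIONS = {
    (State.IDLE, "connect"): State.CONNECTING,
    (State.CONNECTING, "ack"): State.CONNECTED,
    (State.CONNECTING, "timeout"): State.CLOSED,
    (State.CONNECTED, "close"): State.CLOSED,
}

class Connection:
    def __init__(self):
        self.state = State.IDLE

    def handle(self, event):
        try:
            self.state = TRANSITIONS[(self.state, event)]
        except KeyError:
            raise RuntimeError(f"illegal event {event!r} in state {self.state.name}")
        return self.state
```

Because every legal transition is enumerated in one place, the logic is easy to review and trivial to unit-test exhaustively, which loops right back to the previous point.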

We’re not perfect. But we’re really a lot more grown-up than we used to be. And, relax: There’s no apocalypse on the horizon.



Contributions

From: Rob Graves (Dec 01 2017, at 03:39)

I read the Atlantic piece some weeks back and it annoyed me. This fragment is a far better rebuttal than I could ever put together. Thank you - I hope non-coding Atlantic readers do stumble across it.

I've coded on and off for 30 years, and worked in teams developing 4th generation languages in the 80s - which were going to 'solve everything' with a similar sort of conceptual model to the one the article discussed. (They didn't.)

I could only suggest to the Atlantic author to review software history before attempting to reinvent programming.

From: William Payne (Dec 01 2017, at 03:42)

I think that the promise of model based engineering in software remains almost entirely unrealised.

First of all, let me address one potential misconception:

Model based engineering isn't about diagrams. Models can also be text. Hardware engineers long ago moved away from line-and-box diagrams towards languages like Verilog and VHDL. The availability of good textual diff and merge tools, in my opinion, makes text a natural 'primary' representation for models. So we can write models in MATLAB or Python or even C++ if we wanted to.

So if a model isn't (necessarily) a diagram, then what is it? What is the difference between a model and any other piece of software?

The answer is in the way that it is used, and to understand this we need to understand the social problems in other engineering disciplines that models help to resolve.

In a multidisciplinary engineering effort, the hardware engineer in charge of the structure of the system and responsible for its weight may find himself in conflict with the engineer in charge of the thermal budget and responsible for heat dissipation, or with the electrical engineer in charge of the power budget, or the systems engineer selecting various electronic components and sensors.

Each engineer not only has different priorities, but as they come from different disciplines they may not even share the same technical concepts and language with which to discuss and resolve their conflicting priorities.

Each one has a different model that they use to describe the part of the system for which they are responsible -- which may be a mental model, or it may be an explicit piece of documentation -- or, as is more likely, it will be a piece of executable code that shows the impact of various design decisions and design parameters.

By using a common modeling tool with support for different types of models in different domains, the impact of a design decision made from the perspective of one discipline can be set against the design decisions made in the other disciplines, and a joint optimization performed which takes account of the parameters and 'red lines' from each discipline.

In other words -- the point of modeling is to provide a facility that supports multiple partial representations of the problem and/or solution; each of which is founded on a different conceptual basis -- and to then permit those representations to interact with one another to identify inconsistencies and to allow them to be resolved through negotiation.

Back to software engineering.

As software engineering becomes more mature, sub-disciplines are emerging. We have front-end developers; back-end developers; database specialists; security specialists; test engineers; build engineers; dev-ops guys; algorithms engineers; machine learning specialists; machine vision specialists... and the tools and terminology for each of these nascent disciplines are diverging and growing more disparate with every passing year.

If we don't need model based software engineering yet -- then it is only because these sub-disciplines have not yet diverged to a level where communication and agreement becomes an issue ...

From: Dennis Doubleday (Dec 01 2017, at 07:30)

Those "higher level requirement specification languages" inevitably lack the precision necessary to specify the actual interactions, which is why programming languages were invented in the first place.

From: Eric H (Dec 01 2017, at 10:37)

Hey, this post just improved my code at work. You mentioned that there are tools that display coverage info and I just realized that I use a tool like that all day every day, but hadn't been paying much attention to the coverage info ... I looked at the info for my most recent code-review, and eww! It was all red. And I remembered that I had thought "I should really write some tests for this change" ...

... which I just did. Thanks!

(Of course, now I can't tell the green bars from the red bars because I'm color-blind, but that's a separate issue...)

From: Paul Clapham (Dec 01 2017, at 13:47)

I've been programming for about as long as you have and I recognize everything you say. The idea that programming can be simplified goes right back to the 1960s, when the decision was made to design COBOL so that it could be read and understood by managers.

One thing which you didn't really mention was this: programming has been able to address more and more complex requirements as time goes on. Back in the 70s it was ground-breaking to attach a truck scale to a computer so that the grain elevator managers could account for the farmers' deposits automatically. But now it seems there's nothing that programming can't address -- for better or for worse.

From: Ivan Sagalaev (Dec 01 2017, at 14:47)

Speaking of big boosts to software quality, along with unit tests I would also name the adoption of VCSes, which weren't all that commonly used before about the same point in the early 2000s. And the development of all the really important core libraries by the community, with source code in the open, instead of relying on them being delivered and supported by companies.

From: Andrew Reilly (Dec 01 2017, at 17:39)

I haven't been coding for quite as long as you, but nearly. I also agree with you that the original Atlantic article was annoying and that programming has new and shinier tools and techniques, and that code is in some sense "much better" than it was, but I'm afraid that in my opinion, that isn't saying very much. It's not as though much code is actually "good". We can produce vast, vast quantities of software product that "works" in the sense that under certain, benign conditions it will perform as advertised, but we are so far from regularly producing code that works in the sense of "can't fail". The complexity of the multi-component interaction space is so far beyond comprehension that unexpected behavior is essentially inevitable, and will remain so until we can apply something like mathematical rigor to the correctness problem.

From: David (Dec 01 2017, at 23:56)

Thanks for this illuminating article. It would be a big mistake for someone with 12 years of experience to contradict someone with 30 years of experience about things his own eyes haven't seen, even if we are talking about the same field. So: my admiration, and a respectful distance.

I'd like to talk about one aspect, detected throughout the article, which lacks “contrast”.

This is about the “boundaries”, the “interfaces offered”, and, when necessary, the experts charged with helping you cross them.

Maybe we can't build an “all in one” machine or a universal common language, though that has very often been tried. My opinion can be simplified with an example: every country has its boundaries, and often similar bridges or procedures are used to cross them (often not, too, but I'm trying to explain how a possible weak point could be improved).

From: Mark (Dec 05 2017, at 09:54)

You're correct that programming and the related tools have really improved. However, I think you miss the points that (1) although the amount of software which can have terrible consequences when it fails is a minority today, this proportion is consistently increasing, and (2) programmers' aversion to spending time on modelling and "requirements" is the main blocker to moving to a much higher quality paradigm for software development.

To be deliberately controversial, do you think that problems like the leftpad debacle are down to individual rogue programmers being bad, and that the tools and approach are all fine, the same way that mass shootings are down to individual rogue gun owners, and gun ownership for all is completely fine? :-)

From: Doug K (Dec 05 2017, at 15:57)

thank you, I had a similar reaction to the article. The dreaming and speculations about "moving away from code to a world of models and diagrams and better specifications and direct visual feedback" were particularly amusing, given the long history of such systems', let us say, incomplete success. Rational UML, anyone?

Another good laugh was that "a celebrated programmer of JavaScript" finds it hard to teach coding in JavaScript. That's mostly because JavaScript is a horrible language, with horrible tools.

http://www.commitstrip.com/en/2015/09/16/how-to-choose-the-right-javascript-framework/

Another thing that is making software better is DevOps and ideas like the Netflix Chaos Monkey. Rapid deployment and large-scale testing produce much more robust software. It's wonderful to be part of this, to see how we (who is "we", paleface?) are in fact getting better at the hard job of making software.

From: Blaine Osepchuk (Dec 06 2017, at 19:31)

Well, I agree with you a little and disagree with you a little.

I'm a software developer with almost 20 years of experience. I don't develop safety critical systems but I have read and listened to experts and they don't share your sense that everything is okay.

Watch this video of Dr Nancy Leveson (an expert on safety critical systems): https://youtu.be/WBktiCyPLo4

And this one by Martyn Thomas (another expert): https://youtu.be/E0igfLcilSk

Leveson thinks the problem with safety in software is so bad that she wrote a book and released it for free: https://www.dropbox.com/s/dwl3782mc6fcjih/8179.pdf?dl=0

Yes, our tools are better. And we're doing more code reviews and unit testing. That's totally awesome and I'm fully in favor of these efforts and more. I totally agree that writing software in 2017 is a much better experience than it was in 2000. My static analysis tools made me look like an absolute fool the first time I ran them on an old code base. It was quite humbling.

But...software quality still sucks (see the link below for my evidence).

And...you don't have to write safety-critical software to endanger people or property. I wrote about that here: https://smallbusinessprogramming.com/great-power-comes-great-responsibility/

So, I know this is the internet and many people are more interested in scoring points than learning something. But if you're reading this and you actually want to learn something, follow the links in this post. Watch the videos and download Leveson's book. I guarantee you'll learn something.

From: Gerard (Dec 07 2017, at 01:12)

Given that you've never met anyone working on safety critical software, I'm not sure you're qualified to talk about how well the process of developing it works .....
