Another pile-on story; this time on David Heinemeier Hansson’s Dependency injection is not a virtue. I agree with every word DHH writes here, but I think I have a better example. Tl;dr:
Statically-typed languages can make unit testing hard, so
People adopt dependency injection to work around this, and
In a sort of Stockholm-syndrome effect, people argue that DI is A Good Thing and over-use it, to harmful effect.
Another Example ·
DHH’s example is slick, but the publish! method includes enough deep-Ruby idioms that I bet it’s opaque to a lot of perfectly smart developers who think in Java or C# or ObjC or whatever.
Let’s do a more meat-&-potatoes example: Unit testing a basic HTTP call. The context is my dinky little google-id-token gem (where “gem” is Ruby for a standard library package). To validate a Google ID Token, you have to fetch Google’s OAuth 2 public keys and check the digital signature.
So your unit tests obviously have to check the cases where the key retrieval works and doesn’t work. If you were a Java programmer you’d probably type something like mock httpurlconnection or unit test httpurlconnection into Google. Frankly, the answers are not that great, and probably DI is in your future.
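To make concrete what “DI is in your future” means: you’d stop calling the HTTP machinery directly and instead hand the dependency in from outside, so a test can hand in a fake. Here’s a minimal Ruby sketch of that constructor-injection pattern — all the class and method names are invented for illustration; this is not the gem’s actual code.

```ruby
# Hypothetical sketch of constructor-style dependency injection: the
# HTTP transport is passed in rather than hard-coded, purely so that a
# test can substitute a stub. Names here are invented for illustration.
class Validator
  def initialize(fetcher)
    @fetcher = fetcher  # anything that responds to #get(uri)
  end

  def certs
    @fetcher.get('https://example.invalid/oauth2/certs')
  end
end

# In a test, the "injected" dependency is a trivial stand-in:
class FakeFetcher
  def get(_uri)
    '{"kid1": "fake-cert"}'
  end
end

result = Validator.new(FakeFetcher.new).certs
puts result
```

Note the cost: every caller now has to construct and thread through a fetcher object, for the sole benefit of the test suite.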
I’m not going to display the Ruby code from my cert-fetching method; it’s a straightforward call to the Net::HTTP standard library.
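For readers who want to see the shape of such a call, here’s a minimal sketch — not the gem’s actual code — of a straightforward Net::HTTP fetch. The local throwaway server exists only so the example runs offline; the real method hits Google’s OAuth 2 certs endpoint, and the JSON body below is fake data.

```ruby
require 'net/http'
require 'socket'

# A minimal sketch (not the gem's actual code): one Net::HTTP GET,
# returning the body on success and nil otherwise.
def fetch_certs(uri)
  res = Net::HTTP.get_response(URI(uri))
  res.is_a?(Net::HTTPSuccess) ? res.body : nil
end

# Throwaway one-shot HTTP server so the sketch is self-contained.
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]
Thread.new do
  client = server.accept
  # Consume the request headers, then send a canned 200 with a fake cert.
  while (line = client.gets) && line != "\r\n"; end
  body = '{"kid1":"-----BEGIN CERTIFICATE-----..."}'
  client.write("HTTP/1.1 200 OK\r\nContent-Length: #{body.bytesize}\r\n" \
               "Connection: close\r\n\r\n#{body}")
  client.close
end

certs = fetch_certs("http://127.0.0.1:#{port}/certs")
puts certs
```

The point is that the application code stays this simple; no fetcher interface, no factory, no injection.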
So I typed ruby unit test net::http into Google, and there were a lot of different options, including a variety of general-purpose mocking tools; for some reason I decided to use fakeweb (which doesn’t require you to change your app source).
Here’s a chunk of my test code:
it 'should complain if unable to fetch Google tokens' do
  FakeWeb::register_uri(:get, CERTS_URI,
                        :status => ["404", "Not found"],
                        :body => 'Ouch!')
  t = GoogleIDToken::Validator.new
  t.check('whatever', 'whatever').should == nil
  t.problem.should =~ /Unable to retrieve.*keys/
end

it 'should successfully validate a good token against good certs' do
  FakeWeb::register_uri(:get, CERTS_URI,
                        :status => ["200", "Success"],
                        :body => @certs_body)
  jwt = @validator.check(@good_token, @token_aud, @token_cid)
  jwt.should_not == nil
  jwt['aud'].should == @token_aud
  jwt['cid'].should == @token_cid
end
The Dependency-Injection Big Picture · DI, like Wikipedia says, “allows removing hard-coded dependencies and making it possible to change them, whether at run-time or compile-time”. Surely that has to be a good thing, right? Since the solution to every problem in Computer Science is another level of indirection, right?
Obviously, if you have to have DI to have unit testing, then you have to have DI. But in practice, when I try to read application code based on DI frameworks like Guice or Spring, it feels like there’s a whole lot of stuff between me and the business logic, and I have to understand it all to understand anything.
When a veteran Java-head or Rubyist sees calls into well-known standard libraries, they instantly know a whole lot about what’s going on; obfuscating them with factories and implementors and injectors and so on significantly impairs readability, and that’s a big deal. One of the nicest things about Java is its huge repertoire of well-known well-documented battle-tested APIs for more or less anything; they add significantly not just to its capabilities, but its expressiveness and (especially) readability.
Also, call me old-fashioned, but I think that, as far as statically-typed languages go, Java’s “interface” mechanism offers just the right level of indirection, semantically: I don’t care what it is, I care what it does. Too bad you have to pile DI on top of that.
This is another reason why dynamically-typed languages are usually a better choice for implementing application programs.
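The underlying trick that makes tools like fakeweb possible — and injection unnecessary — is that Ruby classes are open, so a test can redefine library behavior globally while the application keeps its plain hard-coded call. A minimal sketch of the idea (a deliberately crude stand-in, not how fakeweb is actually implemented):

```ruby
require 'net/http'

# A deliberately crude sketch of the open-classes trick: a test file
# redefines Net::HTTP.get_response to return a canned failure, and the
# application code under test never knows the difference.
FakeResponse = Struct.new(:code, :body)

module Net
  class HTTP
    def self.get_response(_uri)
      FakeResponse.new('404', 'Ouch!')
    end
  end
end

# "Application" code: still a plain, hard-coded call, no injection.
res = Net::HTTP.get_response(URI('https://example.invalid/certs'))
puts "#{res.code}: #{res.body}"
```

In a statically-typed language you can’t reach in and rewire a library class this way, which is exactly why the DI scaffolding gets built.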