Should Unit Tests Verify Requirements Only?
Categories: Programming, Architecture
There is a very interesting presentation by Ian Cooper titled “TDD - Where Did It All Go Wrong”. If you are interested in Test-Driven Development (TDD), whether for or against it, you should watch it. Regardless of your opinion, Ian makes some very thoughtful points. In summary: Ian is definitely in favour of TDD, but thinks many people (including himself, originally) are not doing it right.
Actually, the assertion he makes is not exclusive to Test-Driven Development: the claim is that tests should verify the externally visible behaviour of modules, and not their implementation. In effect (my phrasing), any test should check behaviour that is specified in a requirements document, change ticket, etc., and NOT lower-level details. Such tests are closer to integration-tests than unit-tests, with some practical compromises that ensure they run fast enough to give developers feedback without breaking their concentration. This seems to lead in the direction of Behaviour-Driven Development (BDD).
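To make that distinction concrete, here is a minimal sketch (the names Basket and PriceCalculator, and the discount rule, are invented for illustration - they are not from the talk). The test asserts a requirement - “orders over 100 get a 10% discount” - via the module’s public API; an implementation-coupled test would instead assert, say, that some internal discount-strategy method was invoked.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

class Basket {
    private final List<Double> prices = new ArrayList<>();
    void add(double price) { prices.add(price); }
    double subtotal() { return prices.stream().mapToDouble(Double::doubleValue).sum(); }
}

class PriceCalculator {
    // The threshold check is an implementation detail: it is free to change
    // as long as the externally visible behaviour below is preserved.
    double totalFor(Basket basket) {
        double subtotal = basket.subtotal();
        return subtotal > 100.0 ? subtotal * 0.9 : subtotal;
    }
}

class PricingBehaviourTest {
    @Test
    void ordersOverOneHundredGetATenPercentDiscount() {
        Basket basket = new Basket();
        basket.add(120.00);
        // Asserts the requirement itself, not how PriceCalculator computes it.
        assertEquals(108.00, new PriceCalculator().totalFor(basket), 0.001);
    }
}
```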
Ian argues (well) that the common industry practice of testing classes/methods in isolation is an anti-pattern; we should be writing far fewer tests, and those we do write should test end-to-end behaviour (or near to it). And with this change in testing approach, one of the major reasons for using dependency-injection frameworks disappears(!) - or at least the number of resources injected via DI drops significantly.
I’m not quite sure what to think about this yet. On one hand, we should only be writing code to support required behaviour of the top-level API; if code doesn’t affect that API then it shouldn’t exist. This in turn implies that all useful code is testable via the API (together with stubbed¹ behaviour of the interfaces connecting to external systems). On the other hand, this seems to require complicated test fixtures, makes it harder to understand the cause of a failed test, and generally calls into question a lot of conventions I’ve been following for many years.
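As a sketch of what “testable via the API plus stubbed external interfaces” might look like (OrderService and PaymentGateway are hypothetical names; this is my reading, not an example from the talk): only the port to the external system is faked, and everything inside the module boundary runs for real.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

interface PaymentGateway {              // the port to an external system
    boolean charge(String account, double amount);
}

class OrderService {                    // the top-level API under test
    private final PaymentGateway payments;
    OrderService(PaymentGateway payments) { this.payments = payments; }

    String placeOrder(String account, double amount) {
        return payments.charge(account, amount) ? "CONFIRMED" : "REJECTED";
    }
}

class OrderServiceTest {
    @Test
    void orderIsRejectedWhenPaymentFails() {
        // Stub only the external boundary; internals can be refactored
        // freely without this test ever needing to change.
        PaymentGateway alwaysDeclines = (account, amount) -> false;
        assertEquals("REJECTED",
                new OrderService(alwaysDeclines).placeOrder("acc-1", 50.0));
    }
}
```

Note that OrderService takes a single constructor argument: the only thing injected is the interface to the external system, which matches the claim that the number of DI-managed resources drops significantly.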
Martin Thwaites says some similar things (no surprise, as he references Ian Cooper at the end of his presentation) - including confirming that this actually is a kind of BDD. However he goes even further, effectively recommending doing only integration-tests, with (at least) a real database, and verifying behaviour only via network requests to the exposed API of the codebase - not something that Ian proposes. I’m a big fan of isolating business logic (eg see hexagonal architecture) but Martin’s approach tests the (very important) business behaviour only indirectly, via one (or more) of its technical interfaces; what if multiple interfaces exist, or the interface changes (eg REST -> GraphQL)? In addition, the tests are written in a “language” that isn’t part of the domain’s “ubiquitous language” (in DDD terms) and so isn’t something that can be naturally discussed with business experts. Testing at the domain level instead of the interface level solves these problems - but (a) potentially tests behaviour that no interface needs (something we wanted to avoid), and (b) raises the question of how to then test that interface.
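A minimal sketch of that contrast (LoanPolicy and its lending rule are invented for illustration): the domain-level test below reads in something close to the ubiquitous language and would survive a REST -> GraphQL migration untouched, whereas an interface-level test of the same rule would be phrased in terms of HTTP requests and would need rewriting against each new adapter.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;

class LoanPolicy {                      // pure domain object: no I/O, no framework
    private static final int MAX_LOANS = 5;
    boolean mayBorrow(int currentLoans) { return currentLoans < MAX_LOANS; }
}

class LoanPolicyTest {
    @Test
    void aMemberWithFiveLoansMayNotBorrowAnotherBook() {
        // Phrased in domain terms, so it can be discussed with business
        // experts; no knowledge of REST, GraphQL or JSON is required.
        assertFalse(new LoanPolicy().mayBorrow(5));
    }
}
```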
And here’s J.B. Rainsberger arguing exactly the opposite view: ‘Integration Tests are a Scam’!
Only one thing is sure: No single damn thing about software development practice seems to be universally agreed on. Sigh.
Footnotes
1. Stubs are test dummies which return “emulated data”. Mocks extend this by also remembering which methods are called on them, so that test code can then assert that specific methods were called the expected number of times with the expected parameters. The use of test stubs is well accepted, but mocks (and the associated assertions) are more controversial.
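A hand-rolled illustration of the distinction (the Mailer interface is hypothetical; a library such as Mockito automates the “mock” half of this):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

interface Mailer { void send(String to, String body); }

class MailerStub implements Mailer {
    // Stub: merely satisfies the dependency; tests assert nothing about it.
    public void send(String to, String body) { /* do nothing */ }
}

class MailerMock implements Mailer {
    // Mock: additionally records calls so the test can assert on them.
    int sendCount = 0;
    String lastRecipient;
    public void send(String to, String body) { sendCount++; lastRecipient = to; }
}

class MailerMockTest {
    @Test
    void exactlyOneMailIsSent() {
        MailerMock mailer = new MailerMock();
        mailer.send("alice@example.com", "hello");
        // The controversial part: these assertions couple the test to *how*
        // the code under test talks to its collaborator.
        assertEquals(1, mailer.sendCount);
        assertEquals("alice@example.com", mailer.lastRecipient);
    }
}
```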