When we’re implementing a feature, we start by writing an acceptance test, which exercises the functionality we want to build. While it’s failing, an acceptance test demonstrates that the system does not yet implement that feature; when it passes, we’re done. When working on a feature, we use its acceptance test to guide us as to whether we actually need the code we’re about to write—we only write code that’s directly relevant. Underneath the acceptance test, we follow the unit level test/implement/refactor cycle to develop the feature.
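To make that concrete, here is a rough sketch of what such an end-to-end acceptance test could look like. It is only an illustration, not code from the book: it assumes a hypothetical web front end deployed at http://localhost:8080 with a search box named "query", a button with id "search-button", and a results element with id "results", driven through the browser with Selenium WebDriver and JUnit 4.

import static org.junit.Assert.assertFalse;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class SearchAcceptanceTest {
    private WebDriver browser;

    @Before
    public void startBrowser() {
        // Drive the deployed system through its UI, not its internal API.
        browser = new ChromeDriver();
    }

    @Test
    public void userCanSearchTheCatalogue() {
        browser.get("http://localhost:8080");                  // hypothetical deployment URL
        browser.findElement(By.name("query")).sendKeys("tdd"); // hypothetical search box
        browser.findElement(By.id("search-button")).click();
        String results = browser.findElement(By.id("results")).getText();
        assertFalse("expected some search results", results.isEmpty());
    }

    @After
    public void stopBrowser() {
        browser.quit();
    }
}

While this test fails, the feature isn’t done; once it passes, it keeps demonstrating the behaviour from the user’s point of view.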
We have had the question come up a couple of times about whether we should be writing both unit tests and acceptance tests, and what happens if they overlap. It seems like the book is advocating that you always do both. You write the acceptance test to guide you on whether you actually need the code you are about to write, and the acceptance tests also show demonstrable progress on customer features. The unit tests support the developers by helping them write high-quality code and by protecting against regressions while refactoring.
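Underneath the acceptance test, the unit tests cover the same feature at a much finer grain. A minimal sketch in JUnit 4, assuming a hypothetical Basket class (inlined here so the example stands on its own):

import static org.junit.Assert.assertEquals;

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class BasketTest {
    // Hypothetical class under test, included so the sketch is self-contained.
    static class Basket {
        private final List<BigDecimal> prices = new ArrayList<>();
        void add(BigDecimal price) { prices.add(price); }
        BigDecimal total() {
            return prices.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
        }
    }

    @Test
    public void totalsThePricesOfAllItems() {
        Basket basket = new Basket();
        basket.add(new BigDecimal("10.00"));
        basket.add(new BigDecimal("2.50"));
        assertEquals(new BigDecimal("12.50"), basket.total());
    }
}

The acceptance test above and a unit test like this do overlap in the behaviour they exercise, but they serve different purposes: one measures progress on the feature, the other supports refactoring the code that implements it.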
The book also recommends that acceptance tests be end-to-end tests, meaning they exercise the system through its UI rather than calling directly into the code. On every check-in, they do the following (a rough sketch of the pipeline follows the list):
1) check out latest version
2) compile code
3) run unit tests
4) integrate and package system
5) perform a production-like deploy into a realistic environment
6) exercise system through external access points by running acceptance tests
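As a sketch only, that pipeline could be scripted along these lines. The Maven goals, the deploy.sh script, and the acceptance-tests profile are assumptions for illustration, not the book’s actual build setup.

import java.io.IOException;

public class CheckInBuild {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Each step mirrors the list above; a non-zero exit code stops the build.
        run("git", "pull", "--ff-only");            // 1) check out the latest version
        run("mvn", "compile");                      // 2) compile the code
        run("mvn", "test");                         // 3) run the unit tests
        run("mvn", "package");                      // 4) integrate and package the system
        run("./deploy.sh", "staging");              // 5) production-like deploy (hypothetical script)
        run("mvn", "verify", "-Pacceptance-tests"); // 6) acceptance tests through external access points (hypothetical profile)
        System.out.println("Check-in build passed.");
    }

    private static void run(String... command) throws IOException, InterruptedException {
        Process process = new ProcessBuilder(command).inheritIO().start();
        if (process.waitFor() != 0) {
            throw new IllegalStateException("Build step failed: " + String.join(" ", command));
        }
    }
}

The important property is that the acceptance tests in the last step run against a deployed, packaged system through its external access points, not against classes loaded directly into the test process.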