Behaviour driven development in action

Have you ever heard about automated acceptance testing? No? In short, these tests sit in the middle between developers and the business (the people who specify requirements). They are written in a language that humans (not only developers) can read, and in fact they describe all the features that belong to the software.

Cucumber

And what is Cucumber? Cucumber is a tool that provides its own structure for this "human readable" language, together with the technical glue and everything else needed to execute these tests against the working software and verify that it behaves as expected.
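
For readers who have never seen such a test, here is a minimal sketch. The withdrawal feature, the amounts and the AccountSteps class are made up for illustration, and the step definitions assume a current Cucumber-JVM version (io.cucumber.* packages) with JUnit assertions:

    Feature: Account withdrawal
      Scenario: Withdraw an amount within the balance
        Given an account with balance 100
        When the customer withdraws 40
        Then the account balance is 60

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    public class AccountSteps {

        // For this sketch the "system under test" is just an int; in a real
        // project the steps would drive the deployed application instead.
        private int balance;

        @Given("an account with balance {int}")
        public void anAccountWithBalance(int initialBalance) {
            balance = initialBalance;
        }

        @When("the customer withdraws {int}")
        public void theCustomerWithdraws(int amount) {
            balance -= amount;
        }

        @Then("the account balance is {int}")
        public void theAccountBalanceIs(int expected) {
            assertEquals(expected, balance);
        }
    }

Cucumber matches each human-readable line of the scenario to an annotated Java method, so the same feature file works both as documentation and as an executable test.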

These days there is a push to replace manual testers with these tests and/or to replace some kind of requirements specification document. The fact is that software changes every day, and it is very hard to keep documents up to date with every change. So in an ideal world, when a business specifier requests a feature, he writes these tests, which fail at the start and pass once the developer finishes the implementation.

OK, that is the end of the theory - and what about real life? I worked on two projects with this type of testing, and after two years of experience here is my commented list of positives and negatives:

Identified advantages

  1. The best thing for newcomers to the project.

    • The tests work great as documentation, a reference guide, and a reference for how the software is used.

  2. Tool support in IDEs.

  3. Easy to run against various environments.

    • It’s easy to set up .properties files and profiles in your build system to run the tests against various servers (test, stage, local etc.) - see the sketch after this list.

  4. Easy to discuss and write down requirements with business people.

    • When requirements travel from the business to developers, there is always some misunderstanding. When the business sees all the scenarios in written form, it’s easier to check that all parties are on the same page.

  5. Much better than standalone documents.

    • Who is happy searching through a 100+ page Microsoft Word document or PDF, right?

  6. Testers have more time to do exploratory testing.

    • See next point.

  7. No more manual regression testing.

    • In most projects, acceptance tests are added after the software has been developed, but the tests covering the basic functionality are usually written first. From this we can conclude that testers don’t have to test that functionality manually and have free hands for something more useful.

  8. Easier refactoring of older code.

    • If you have a comprehensive suite of tests, you can sleep well after a refactoring as well :).

  9. Supports continuous delivery, because you are not blocked by your tester’s availability.

    • Sometimes it is not easy to test something by hand (scenario setup, test data, calls to external services), so if you don’t have this covered by automated scenarios, continuous delivery is not possible at all.
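
To illustrate point 3 about environments: on the Java side the switch can be as simple as loading a different .properties file per target server. The -Denv system property, the file names and the EnvironmentConfig class below are my own invention for this sketch, not a prescription:

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public final class EnvironmentConfig {

        // Loads e.g. stage.properties when the build is started with -Denv=stage
        // (typically wired to a build profile), falling back to local.properties.
        public static Properties load() throws IOException {
            String env = System.getProperty("env", "local");
            Properties config = new Properties();
            try (InputStream in = EnvironmentConfig.class
                    .getResourceAsStream("/" + env + ".properties")) {
                if (in == null) {
                    throw new IOException("Missing " + env + ".properties on the classpath");
                }
                config.load(in);
            }
            return config;
        }
    }

The step definitions then take the server URL and credentials from the loaded properties instead of hard-coding them, so the same scenarios run unchanged against test, stage or a local machine.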

Drawbacks

  1. Business people are only able to read acceptance tests, not write them.

    • After countless attempts, the acceptance test scenarios delivered by the business were never sufficient. There were always problems - step definitions that didn’t match the already developed ones, unnecessary steps, overly complex steps, steps that weren’t prescriptive, etc. In the end, the developers write these scenarios and the business only verifies that they are right.

  2. Tests don’t follow a project dictionary - dev and business terms are not the same terms.

    • A missing project dictionary (and, indeed, a missing ubiquitous language) is always a problem, and not only for acceptance testing. If your business analyst talks about e.g. “loans” and you talk about “imprest”, it’s a problem, because over time the confusion grows for everyone not included in your separate discussion.

  3. Huge library of undocumented step definitions - hard to maintain and find steps.

    • Each developer solves his problem on his own. Even when you have a convention, there is still some grey area between developers, and if a step doesn’t work for someone, he immediately develops another one with the same meaning. Or some configuration is different in your case and the current step doesn’t take care of it. Working with such a rapidly growing library is not pleasant, and after some time it becomes unmaintainable, like any other code without proper care. Well-parameterized steps help here - see the sketch after this list.

  4. Untestable areas in software - not fully covered software.

    • This applies mainly to tests added to already working software. There is no simple way to cover all the existing scenarios; it’s hard, expensive and not bulletproof. After several attempts on one project we ended up covering all the basics, plus every new feature got its own acceptance test. Having that was still a huge success, but we still needed manual testers because some logic was not yet covered.

  5. Teams working in the same repository write tests in different ways.

    • Enforcing rules and coding standards, a common approach to creating step definitions, and code review for the tests is also a hard part.

  6. Huge amount of infrastructure work needed.

    • Finding a way to deploy the applications to several environments so that there is a working copy for running the acceptance tests is not easy. Maintaining that infrastructure takes a lot of resources.

  7. Developers are not able to write these tests before implementation.

    • This is somewhat my personal feeling. Just as with convincing developers about TDD, it’s not easy to convince devs to write the tests, and to prepare the infrastructure that lets a test pass, before the implementation exists. At the start there is always the problem of choosing appropriate steps, choosing a strategy for how the test will interact with the software, and negotiating with the business as well.

  8. Quality metrics on the test codebase were not the same as for production code.

    • Static analysis, Checkstyle - nothing is set up for the test codebase by default. The problem is that this way your test codebase will smell like any other unmaintainable code.

  9. Testing of unnecessary features.

    • If you are adding tests to working software, you will discover features that nobody uses. But usually nobody wants to remove something that works. And in the worst case somebody is so diligent that he writes a test to cover it. Just wrong.

  10. Long-running and fragile tests.

    • Acceptance tests are often very time consuming, so there is constant pressure to execute them in parallel. If you are not able to run your tests in parallel, they are written badly and will be fragile in the build environment.

  11. The learning curve for the infrastructure is anything but easy.

    • Contacting third-party providers, writing stubs for them, execution on the continuous integration server … it takes newcomers a long time to become familiar with all of that.

  12. Tests were not executed before pushing changes to master.

    • The biggest problem so far. If you are not able to execute all the tests before your implementation and its tests land in the master branch, you are in big trouble. See this one about a read-only master branch.

  13. You need to document how to execute the tests and how to run them against a given environment.

    • Wiki pages are becoming the standard, so documenting all the information about your test codebase is necessary.
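
To the point about the ever-growing step library (drawback 3): one way to keep the duplication down is to parameterize steps instead of letting near-duplicates pile up. A hedged sketch - the step wording and the in-memory map standing in for the real application driver are illustrative only:

    import java.util.HashMap;
    import java.util.Map;

    import io.cucumber.java.en.When;

    public class OrderSteps {

        // Placeholder for the real application driver; an in-memory map keeps
        // the sketch self-contained.
        private final Map<Integer, String> orders = new HashMap<>();

        // One parameterized step instead of near-duplicates such as
        // "the user adds an order", "the admin adds a new order", ...
        @When("the {word} adds an order with id {int}")
        public void addOrder(String role, int orderId) {
            orders.put(orderId, role);
        }
    }

A small, documented catalogue of such steps, reviewed like any other code, goes a long way against the "every developer writes his own step" problem.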

In conclusion

Automated tests are a really outstanding thing. Of course there are many things that are not easy, and adding these tests to already working software in particular is extremely hard. But in the end they decrease the time needed to deliver new features. They decrease the time needed for manual testing and continuously improve the testability of the whole software. On one project it was almost a rule that implementing the acceptance tests for a new feature took two (or three) times longer than the implementation itself - but in the end it was still much less than several rounds of manual testing by a human on various environments.

Small set of lessons learnt:

  1. Dedicate a person to be the architect/lead developer of the tests - it improves maintainability, documentation and usage.

  2. Ensure that all the tests are executed before deploying to production.

  3. If you are integrating with third-party software, create stubs for it (see the first sketch after this list).

  4. Don’t use fragile assertions - use assertOrderWithIdAdded(12) instead of assertOrderAdded(1, orders.count()) (compare the second sketch after this list).

  5. Set up static code analysis and other code-quality tools for the test codebase as well.

  6. Use pull requests to defend the stability of the codebase.

  7. Create documentation and an approach for writing new tests that is visible to both developers and business analysts.

  8. If you are testing the API of your application, don’t create any artificial API used only by the tests.
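
To lesson 3: any HTTP stubbing tool will do; the sketch below assumes WireMock, and the port, URL and payload are placeholders, not taken from a real project:

    import com.github.tomakehurst.wiremock.WireMockServer;

    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

    public class ThirdPartyStub {

        // Starts a local fake of the third-party service on port 8089, so the
        // acceptance tests don't depend on the real provider being available.
        public static WireMockServer startExchangeRateStub() {
            WireMockServer server = new WireMockServer(8089);
            server.start();
            server.stubFor(get(urlEqualTo("/exchange-rate"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"rate\": 1.23}")));
            return server;
        }
    }

The application under test is then pointed at http://localhost:8089 via the environment properties instead of at the real provider.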
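
And to lesson 4: the same idea expressed with plain JUnit assertions over a hypothetical list of order ids (assertOrderWithIdAdded and assertOrderAdded stand for whatever helpers your suite has):

    import java.util.List;

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    public class OrderAssertions {

        // Fragile: depends on how many orders other scenarios (or parallel runs)
        // have created, so it breaks as soon as the suite grows or runs in parallel.
        static void assertOneOrderAdded(List<Integer> idsBefore, List<Integer> idsAfter) {
            assertEquals(idsBefore.size() + 1, idsAfter.size());
        }

        // Robust: checks only the order this scenario created, identified by its id.
        static void assertOrderWithIdAdded(List<Integer> ids, int expectedId) {
            assertTrue(ids.contains(expectedId), "Order " + expectedId + " was not added");
        }
    }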

Do you have automated acceptance tests in your suite as well?

P.S. If you enjoyed reading this blog post, could you do me a favor and tweet it and/or leave a comment? Thanks!
