1. Choose the “low-hanging fruit”: automate test cases that represent the most popular use cases and look easiest to automate.
2. Go “breadth first” rather than “depth first”: it is better to have some basic coverage for the majority of areas in the project than very in-depth automation for only one area:
- If no automation exists for the project yet, start by creating automation for the single most obvious use case in each major area (skipping areas where creating automation is significantly more complicated). This type of automation answers the question of whether the functionality works at all. At this stage, don’t worry about reusable libraries or making the tests extensible: you are likely to make enough mistakes to want to rewrite them later. But by making those mistakes, you will also learn what you actually need and how you could organize things better.
- Once you have basic automation for the majority of the features, look at:
- Most popular features
- Most popular use cases (both the positive ones and the basic error paths)
- “Cheapest” areas to extend automation
- Target at least 50-70% time savings as the exit criterion for this step.
- If you still have some time and no other project at hand, look at the remaining functionality and at how the automation is organized:
- Are there any areas where existing automation could be extended to cover all or most of the known test cases?
- Can you improve the organization of your automation to allow “one click” runs, as well as binary (pass/fail) result reports?
- Is your automation organized in a way that allows other people to pick it up and use it effectively?
This is also a good time to invest in test infrastructure, reusable libraries, etc. Ideally, while working on this stage, you gradually get rid of the primitive tests you created before and replace them with more sophisticated ones.
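The breadth-first starting point above can be sketched as one smoke test per area. A minimal Python illustration, where `ProductClient` and its methods are hypothetical stand-ins for your real application, not an actual API:

```python
# Hypothetical stand-in for the application under test; in a real
# project these methods would drive the actual product (UI, API, CLI).
class ProductClient:
    def login(self, user, password):
        return user == "demo" and password == "demo"

    def search(self, query):
        return ["first result"] if query else []

    def checkout(self, cart):
        return {"status": "ok"} if cart else {"status": "empty"}

# One obvious use case per area: each test only answers the question
# "does this feature work at all?" -- no shared libraries, no depth yet.
def test_login_smoke():
    assert ProductClient().login("demo", "demo")

def test_search_smoke():
    assert ProductClient().search("phone")

def test_checkout_smoke():
    assert ProductClient().checkout(["item"])["status"] == "ok"
```

Tests in this shape can be run with any standard runner (e.g. `pytest`); the point at this stage is breadth of coverage, not sophistication.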
3. Maximize time savings and minimize maintenance time:
- How much time will the automation save per day/week/iteration/month/release/year?
- How many times are you likely to run it? How many times must you run it, and how many additional times would you like to run it (for extra confidence, for example)?
- How long will the automation take to run compared to the equivalent manual test?
- How long does it take to create the automation?
- How much additional work is expected each time before the automation can run?
- How much maintenance will be required if the tested feature changes (slightly or significantly), and how likely is the feature to change in the near future?
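The questions above reduce to simple break-even arithmetic. A sketch with illustrative figures (all numbers here are assumptions for the example, not recommendations):

```python
def automation_roi(manual_minutes, automated_minutes, runs_per_month,
                   creation_hours, maintenance_hours_per_month):
    """Return (net hours saved per month, months until the automation
    has paid back its creation cost)."""
    saved_per_run_h = (manual_minutes - automated_minutes) / 60.0
    gross_saved_h = saved_per_run_h * runs_per_month
    net_saved_h = gross_saved_h - maintenance_hours_per_month
    breakeven_months = (creation_hours / net_saved_h
                        if net_saved_h > 0 else float("inf"))
    return net_saved_h, breakeven_months

# Illustrative scenario: a 30-minute manual test automated down to
# 2 minutes, run 20 times a month, 16 hours to create, 1 hour/month
# of upkeep -- roughly 8.3 net hours saved/month, ~2 months to break even.
net, months = automation_roi(30, 2, 20, 16, 1)
```

If `net_saved_h` comes out negative, the maintenance cost eats the savings and the candidate test is better left manual, which is exactly the trade-off step 3 asks you to weigh.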