Strategy --> Plan --> Implement --> Maintain "Automation Testing"

Strategy
First, be clear about what your strategy for automation is. (And don't say "automate everything." The people who think you can automate everything are just as wrong as the people who think you can manually test everything. Automation can help, but you need to know its strengths and weaknesses.)

Plan
Secondly, you should decide what success might look like. Most teams will do a lot better if they aim for something close to the testing pyramid: developers write a lot of unit tests and targeted integration tests against dependencies; a combination of developers and testers write service-level and API tests, but in a smaller proportion; and lastly, UI tests should probably be kept few and focused on critical areas of the software, because they are so slow.
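The pyramid's layers can be sketched in code. This is a minimal illustration, not a real suite: `parse_price` and `OrderService` are hypothetical examples standing in for your own units and services.

```python
def parse_price(text):
    """Parse a price string like '$4.20' into integer cents."""
    return round(float(text.lstrip("$")) * 100)

# Base of the pyramid: fast, isolated unit tests. Write many of these.
def test_parse_price_unit():
    assert parse_price("$4.20") == 420

# Middle layer: service/API tests exercising a component boundary.
# Fewer of these, since each one covers more code and runs slower.
class OrderService:
    def __init__(self):
        self._orders = {}

    def place(self, order_id, price_text):
        self._orders[order_id] = parse_price(price_text)
        return self._orders[order_id]

def test_order_service_api():
    svc = OrderService()
    assert svc.place("o-1", "$4.20") == 420

# Top of the pyramid: a handful of UI tests covering only critical
# flows (omitted here -- they would drive a real browser and are
# by far the slowest to run and maintain).
```

The point is the proportion: many tests like the first, fewer like the second, and only a handful at the UI level.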

Implement
Another thing to consider: automation gets slower to develop the further it sits from the code it actually tests. This means that while your testers may be focused purely on testing efforts, you're going to have to cross-train developers on how to do some of the automation that's needed too.

Manual testing won't go away: it's cheap, disposable, and very easy to use in spurts, even on agile teams. Use it to inform where you need automation, and use it where automation can't reliably reproduce consistent results.

Maintain
One more thing: automation is NEVER free. It is a cost center. It costs to design it properly, it costs to build it properly, and it costs to maintain it over time. In my experience, as your framework matures you'll spend more than a third of your time on maintenance.

This is not always because of brittle tests. The automation around a product will eventually take on a character similar to the product it tests, which means inconsistencies in the application will eventually surface as what appear to be brittle tests. A lot of time will be spent investigating tests that seem to fail for inexplicable reasons, and then pass when run again. Sometimes the tests really are at fault, but too many people assume that to be true. Sometimes it's a result of the increasingly asynchronous nature of software today: things we think are timely actually aren't as timely as we expect, and a small performance hit in the app will impact the tests.
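One common defense against this kind of timing-driven flakiness is to replace fixed sleeps with polling: wait for the asynchronous condition itself rather than guessing how long it takes. A minimal sketch (the `job.done` usage is a hypothetical example):

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or the timeout expires.

    Returns True on success and False on timeout, so a failing
    assertion points at the real problem (the condition never held)
    rather than at an arbitrary sleep that was simply too short.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one final check at the deadline

# Instead of:  time.sleep(2); assert job.done
# Poll:        assert wait_until(lambda: job.done, timeout=10)
```

A test written this way absorbs small performance variations in the app up to the timeout, instead of failing the moment something runs slightly slower than it did yesterday.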

Enhancement
So when you think about automation, I'd add some basic performance testing as well. Look for ways to record the speed of various parts of the application and establish benchmarks; then you can know whether changes in the application are adversely impacting performance, even slightly. (But if you don't test it, you will never know.)
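A basic version of this can be bolted onto an existing suite: time the call, then compare it against a recorded benchmark with some tolerance. The `search` call and the 0.200s benchmark below are hypothetical examples.

```python
import time

def timed(fn, *args, **kwargs):
    """Run `fn`, returning (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def assert_within_benchmark(elapsed, benchmark, tolerance=0.25):
    """Fail if `elapsed` exceeds the recorded benchmark by more than
    `tolerance` (25% by default) -- this is what catches the gradual
    regressions that no one notices from release to release."""
    limit = benchmark * (1 + tolerance)
    assert elapsed <= limit, (
        f"took {elapsed:.3f}s, benchmark {benchmark:.3f}s "
        f"(+{tolerance:.0%} allowed)")

# Example usage against a hypothetical search endpoint whose
# previously recorded benchmark was 0.200 seconds:
# result, elapsed = timed(search, "widgets")
# assert_within_benchmark(elapsed, benchmark=0.200)
```

The benchmark values themselves would come from earlier recorded runs; the point is simply that once a number exists, every build can be checked against it.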
