Test automation can bring substantial benefits: increased test coverage (test more), reduced time-to-market (test faster), the ability to repeat testing often (increased agility) and the avoidance of costly upstaffing for peaks in test effort (reduced test cost). On the other hand, there is also a cost. Test automation needs to be designed and implemented. Depending on the technology and solution, there may be licensing and infrastructure costs. An often underestimated cost factor is the maintenance effort, which can be significant, especially when the application under test or the platform changes frequently.
Both the implementation and maintenance effort are directly correlated with the complexity of the test automation. A complex automation solution requires additional design, coding and testing (yes, test automation needs testing as well). Higher complexity also makes the test results harder to analyze. Especially when an automated test has failed, we need to verify whether the error is in the automation or in the application. If the automation is complicated, (1) there is a higher probability that the error is indeed in the automation, (2) the assessment will take more time and effort, and (3) the error will be harder to correct (and re-test).
So, what do we mean by complex test automation? Let’s take the example of an eCommerce application that produces huge amounts of data. There is a module in the solution that applies complex business logic to trigger automatic orders (based on inventory data and sales trend information). Failures in this module would trigger incorrect orders, with a direct financial impact on the organization. Testing needs to assure that this risk is mitigated. Not only is the business logic very complicated (with many calculations, decision rules and triggers), we should also anticipate that the input data will be very diverse. As an eCommerce application changes frequently, the testing of the automatic ordering module should be automated so that it can be re-run frequently.
The test scenarios consist of a set of input data that is submitted to the ordering module and a verification that the output of the decision logic is correct (i.e. which articles are or are not ordered, and in what quantity). Manual testing would happen by submitting the test data to the system, manually calculating or simulating the business logic for this test data, and then comparing the actual result with this expected result. While this might be a valid approach for manual testing, it is not suitable for automation: it would require automation code that performs the same calculations and business logic. This means we would actually be recreating the application that we are testing. The effort to create this automation would be comparable to building the actual application. Moreover, we would run the risk of making the same errors that we made during the implementation of the solution.
An alternative solution is to perform an initial run of the tests with a given set of input data (chosen such that it maximizes test coverage). We can then validate the business logic manually (preferably with input from the business stakeholders) and store the verified result of processing this input test data set. This is commonly referred to as a golden data set.
The automated test then consists of (1) submitting the same input test data set, (2) triggering the processing and (3) comparing the actual result with the verified golden data set. This automation solution is much simpler and therefore more robust, reliable and usable. Of course, it requires a clean set-up of the application so that it can run again and again with the same test data. This may take some effort, but it is easily compensated by the reduced effort for the test automation itself.
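The three steps above can be sketched in a few lines of code. This is a minimal illustration, not a real framework: `process_order` is a hypothetical stand-in for the ordering module under test, and the golden data set is stored as a simple JSON file. The key point is that the automation only submits data and diffs results; it contains none of the business logic itself.

```python
import json
import tempfile

# Hypothetical stand-in for the application's ordering logic
# (in a real test this would be a call into the system under test).
def process_order(record):
    # Re-order whenever stock falls below the reorder level.
    qty = max(0, record["reorder_level"] - record["stock"])
    return {"article": record["article"], "order_qty": qty}

def run_golden_test(input_records, golden_path):
    """(1) Submit the input data, (2) trigger processing,
    (3) compare the actual result with the stored golden data set."""
    actual = [process_order(r) for r in input_records]
    with open(golden_path) as f:
        golden = json.load(f)
    # Report every record where the actual output deviates from the golden one.
    return [(a, g) for a, g in zip(actual, golden) if a != g]

# One-time setup: after manual validation by the business stakeholders,
# store the verified output as the golden data set.
inputs = [
    {"article": "A1", "stock": 3, "reorder_level": 10},
    {"article": "A2", "stock": 20, "reorder_level": 10},
]
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump([process_order(r) for r in inputs], f)
    golden_file = f.name

# Every subsequent automated run is just submit, process, compare.
mismatches = run_golden_test(inputs, golden_file)
print(mismatches)  # an empty list means the test passed
```

Note that the comparison is a plain equality check: all the intelligence went into choosing and validating the golden data set once, so the recurring automation stays trivially simple.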
Every time we observe complexity in the test automation (replication of business logic, complex asserts, dynamic validation of output data…), we should ask ourselves whether this complexity is needed and whether we are willing to pay the additional cost for it. Often, a simpler solution exists that is not only cheaper but may also result in much more reliable testing outcomes.
About Bart Vanparys
Bart has carried many titles in his 20-year career. He’s been an analyst, tester, quality assurance consultant, test manager, project manager, BI developer, quality manager, change manager, CoE lead, program quality lead… A constant has been his search for ways to deliver value through IT solutions in a controlled and safe manner.
More on Bart Vanparys.