When thinking about test automation, the first thing to remember is that testing involves much more than a set of repetitive tasks. Therefore, not everything can (or should) be automated. For example, key testing tasks such as defining test objectives or designing a test plan are engineering tasks that form the basis of testing, regardless of whether the tests are executed manually or automatically. On the other hand, some test case components or test types do not need to be repeatable, while others (load tests, regression tests, etc.) can truly maximize the benefits of automation.
The decision on whether or not to automate test cases needs to be supported by an analysis of the expected Return on Investment (ROI). This analysis considers several aspects, such as the effort to create the automated tests, their execution time, the feedback they provide, and the maintenance effort implied by expected changes. In other words, we cannot limit the analysis to the notion that automation is a one-shot task, because the resulting test scripts need to be maintained.
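As a back-of-the-envelope illustration of such an ROI analysis, the break-even point can be estimated by comparing the one-time creation effort against the per-run saving over manual execution. The function and all figures below are hypothetical, and real estimates must also account for maintenance effort over time:

```python
import math

def break_even_runs(creation_effort, manual_per_run, automated_per_run):
    """Smallest number of executions after which automating a test case
    becomes cheaper than repeated manual execution (all values in hours).
    Returns None if automation never pays off per run."""
    saving_per_run = manual_per_run - automated_per_run
    if saving_per_run <= 0:
        return None
    return math.ceil(creation_effort / saving_per_run)

# Hypothetical figures: 16 h to script the test, 1.0 h per manual run,
# 0.1 h per automated run (including amortized maintenance).
print(break_even_runs(16, 1.0, 0.1))  # -> 18
```

With these (purely illustrative) numbers, automation starts to pay off from the 18th execution onward, which is why frequently repeated tests, such as regression suites, are the prime candidates.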
It is well known that in software engineering, the maintenance effort may be significantly greater than that required to develop new functionality. We also know that maintainability relies on the ability to change or modify the implementation, and that this ability depends on the architecture, which organizes the code and makes it more or less modular, changeable, understandable and robust. This is exactly what happens in automation: not every automation approach leads to the same ROI. This is why, in an automation project, we need to align the objectives and the environment characteristics (frequency of software changes, data availability and integrity, etc.) with the definition of a suitable architecture, aimed at obtaining well-organized and structured test implementations that minimize the risks (maintenance effort, data variability, requirements changes).
This discussion poses a question: is test automation simply a 'recording &amp; reproducing' process, or a more complex engineering process? There exist tools that help us record scripts, interacting with graphical user interfaces and reproducing the exact recorded interactions. However, what happens if something changes? Do we record all the scripts again, or do we modify the generated code, script by script, to adapt it to the changes? And if we modify the code (maintenance), would it not be worth implementing it (with the assistance of recording to generate code chunks) on top of a modular, understandable and maintainable architecture? The report – The Forrester Wave: Modern Application Functional Test Automation Tools, Q2 2015 – reinforces this idea and states that "Fast feedback loops frequently break UI-only test suites". Consequently, it suggests "Ratchet down UI test automation in favor of API test automation. Modern applications require a layered and decoupled architecture approach."
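A common way to obtain such a modular architecture for UI tests is the Page Object pattern, in which each page of the application is wrapped by one class so that locator and layout changes touch a single place instead of every recorded script. The following is a minimal sketch; `FakeDriver` stands in for a real browser driver, and all names (`LoginPage`, the locator strings) are illustrative only:

```python
class FakeDriver:
    """In-memory stand-in for a browser driver: elements keyed by locator."""
    def __init__(self, elements):
        self.elements = dict(elements)

    def type(self, locator, text):
        self.elements[locator] = text

    def click(self, locator):
        return self.elements.get(locator)


class LoginPage:
    """Page object: the only class that knows the login page's locators.
    If the UI changes (e.g. a renamed field id), only these lines move."""
    USER_FIELD = "id=username"
    PASS_FIELD = "id=password"
    SUBMIT_BTN = "id=login-button"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER_FIELD, user)
        self.driver.type(self.PASS_FIELD, password)
        return self.driver.click(self.SUBMIT_BTN)


# A test script talks to the page object, never to raw locators:
driver = FakeDriver({"id=login-button": "welcome"})
page = LoginPage(driver)
assert page.login("alice", "secret") == "welcome"
```

In this structure, a recording tool can still help generate the low-level interaction chunks, but the test scripts themselves remain decoupled from the UI details, which is precisely the layered approach the Forrester report advocates.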