In this third blog on enablers of the 3rd era of Test Automation (TA), I’d like to put the AI ones in a broader and more holistic quality context, which requires some retrospective elaboration. I’ve always chosen my path in life looking ahead and thought history was boring, but I’ll try to make a brief exception here 😉
In my world of optimized quality in software deliverables, all team members would contribute to improved quality, and there would be ‘no-but-allowing-code’ TA that is quick and easy to build and maintain, and adaptable to change. That is, not only at the unit and system test levels but also at the integration and acceptance test levels, while sessions of exploratory testing would be set up for more subjective testing by business SMEs or Test Analysts. I wouldn’t mind if APIs, the central backend of most applications, were test-covered according to real functional UI usage. Upscaling the TA into automated performance testing is also in demand.
AI Applied to Automation Merged Test Design
At most organizations, insufficiently specified requirements or user stories lacking Specific, Measurable, Attainable, Relevant, and Time-boxed (SMART/INVEST) unambiguous detailing cause code mistakes through unintended coding and testing. In addition, while test design is crucial to providing clear advice on quality and risk, many organizations seem to ignore the fact that successful solution design requires considerable joint effort in terms of shared imagination of potential business workflow scenarios in unambiguous detail. Adequate test design methodologies should be part of a sustainably structured framework, support an improved co-creating and scenario-imagining dialogue, give the desired coverage, and facilitate test case writing and TA.
During the 2nd TA era, Model Based Testing (MBT) techniques were applied to test design in TA tooling, through which the desired outcomes of those methodologies could be realized. Visualizing user stories in BPMN (Business Process Model & Notation) diagrams/flowcharts alone could provide a shared “minimum-documented” reference for co-creation and scenario imagination. Almost 10 years ago, one tool provided calculation and highlighting of desired path coverage in such diagrams, auto-generation of manual test cases from test steps applied to BPMN steps, and regression test analysis based on diagram versioning. Others correlated MBT-empowered test design and flowchart steps with TA script steps, hence providing semi-automated TA.
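To make the path-coverage idea concrete, here is a minimal sketch of how a tool could compute it over a BPMN-like flowchart. The flowchart is modeled as a plain directed graph; the node names and the checkout story are my own illustrative assumptions, not any specific vendor’s model.

```python
# Minimal MBT-style path coverage over a BPMN-like flowchart,
# modeled as a directed graph (the checkout flow is hypothetical).
FLOW = {
    "start": ["enter_cart"],
    "enter_cart": ["choose_payment"],
    "choose_payment": ["pay_card", "pay_invoice"],  # decision gateway
    "pay_card": ["confirm"],
    "pay_invoice": ["confirm"],
    "confirm": [],                                  # end node
}

def all_paths(graph, node="start", path=None):
    """Enumerate every start-to-end path; each path is one candidate test case."""
    path = (path or []) + [node]
    if not graph[node]:          # terminal node reached
        yield path
        return
    for nxt in graph[node]:
        yield from all_paths(graph, nxt, path)

paths = list(all_paths(FLOW))
covered = {tuple(paths[0])}                 # suppose only one path has a test case
coverage = len(covered) / len(paths)
print(f"path coverage: {coverage:.0%}")     # highlights the untested branch
```

A real MBT tool would of course work on the versioned diagram itself, but the principle – enumerate paths, mark which ones have test cases, highlight the gap – is the same.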
Disruptive to traditional test design in the 3rd TA era is the use of much further evolved AI. Change-adaptable Machine Learning (ML) can take in not only single code objects and their many properties for self-healing recognition, but can also identify models of real user journeys with recurrent failures to improve the prioritization of test cases capable of actively detecting anomalies. Utilizing ML, AI can improve correlations between code, metadata, and testing to identify, (de)prioritize, and create automated test cases.
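The prioritization idea can be sketched without any ML library: rank test cases by how often the user journeys they cover have failed, weighting recent failures higher. This is my own toy scoring, not any vendor’s algorithm; the test names and run history are invented.

```python
# Illustrative prioritization sketch: recent, recurrent failures push a
# test case up the execution order (history and names are hypothetical).
from collections import Counter

# (test_case, failed?) per run, oldest first
history = [
    ("login_flow", False), ("checkout_flow", True),
    ("search_flow", False), ("checkout_flow", True),
    ("login_flow", True),
]

def priority_scores(runs, decay=0.8):
    """Weight recent failures higher; older runs decay geometrically."""
    scores = Counter()
    weight = 1.0
    for test, failed in reversed(runs):   # newest first
        if failed:
            scores[test] += weight
        weight *= decay
    return scores

ranked = [t for t, _ in priority_scores(history).most_common()]
print(ranked)  # ['checkout_flow', 'login_flow'] — checkout failed most and most recently
```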
Holistically speaking, quality is not just a matter of testing code, as in a traditional SDET focus. Such efforts are wasted when there’s a leak in co-creation all the way from requirements creation. We shouldn’t forget that unambiguous SMART/INVEST requirements and futuristic MBT processing nurture further shift-left quality improvements, so that design, code, and test match sufficiently matured requirements. Only then, and in relation to that, can AI and ML working on code provide reliable and successful TA.
AI of Computer Vision, Real End-User Behavior and Self-Automation
Prior to the 3rd TA era, it was the nature of automated UI tests to run slowly, be technically complex, break fast, and be hard to maintain. As its own type of AI, code-independent computer vision aims to teach computers to see, comprehend, and interpret digital images as humans do. That helps reduce test flakiness in UI automation and increase automation speed and accuracy.
Computer vision scans the application UI for objects and adjusts testing to changes in the UI or environment. As long as the test flow remains unchanged, the test case needs no maintenance even if objects change, because the algorithm isn’t dependent on an object’s implementation or properties. That also allows for in-sprint TA prep activities, since a mockup PDF or a hand-written note could virtually act as a user interface.
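A toy version of that idea: locate a UI element by its pixels rather than by its implementation properties (object IDs, DOM attributes). Real computer-vision tooling is far more tolerant of scaling and noise; this exact-match sketch on a tiny pixel grid is only meant to show why the locator survives implementation changes.

```python
# Toy computer-vision locator: find a "button" image in a "screen" by pixel
# content alone — no object IDs or properties involved (data is invented).
def find_template(screen, template):
    """Return (row, col) of the top-left match of template in screen, or None."""
    th, tw = len(template), len(template[0])
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None

# 0 = background, 1 = button pixels; the button matches wherever it is drawn
screen = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
button = [[1, 1], [1, 1]]
print(find_template(screen, button))  # (1, 1) — click target found visually
```

Rename the button in code, change its CSS class, or even render it from a mockup image – as long as it still looks like the template, the locator still finds it.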
Among the pain points of functional regression testing are loss of relevance, obscure coverage, and maintenance effort out of proportion. TA tooling with ML-based AI exists that reads system logs to determine and build usage-driven regression testing – no additional code should be needed.
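The core of usage-driven regression can be sketched as simple log mining: count how often each flow is actually exercised in production and let the most-used flows drive the regression suite. The log format and flow names below are hypothetical.

```python
# Sketch of usage-driven regression selection: mine system logs for the
# most exercised flows (log lines and format are invented for illustration).
from collections import Counter

log_lines = [
    "2024-01-05 INFO user=7 action=checkout",
    "2024-01-05 INFO user=3 action=search",
    "2024-01-05 INFO user=9 action=checkout",
    "2024-01-06 INFO user=2 action=login",
    "2024-01-06 INFO user=7 action=checkout",
    "2024-01-06 INFO user=4 action=login",
]

def top_flows(lines, n=2):
    """Rank flows by real usage; the top n become regression priorities."""
    usage = Counter(line.rsplit("action=", 1)[1] for line in lines)
    return [flow for flow, _ in usage.most_common(n)]

print(top_flows(log_lines))  # ['checkout', 'login'] — regression follows real usage
```

Real tooling does this with ML over far richer telemetry, but the payoff is the same: regression scope stays relevant because it mirrors what users actually do.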
While most UI TA tooling still doesn’t consider the API layer, at least one tool is moving towards autonomous testing: its AI is now capable of creating TA tests and detecting anomalies in this real-user-activity-based BDD process:
- train the AI to establish a baseline of UI and API requests/responses
- run a new AI blueprint at each new build, allowing the AI to learn on its own what changed and to create flows based on the models and limits provided in training – UX, API, validation, and fault issues should be found by the AI itself
- fine-tune the flows and concatenate them generically by business logic, based on learning from standard API production logs and upscaling those API requests to UI actions.
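The first two steps above – learn a baseline, then flag deviations on the next build – can be sketched in miniature. The endpoints, statuses, and field sets below are my own invented stand-ins for what a trained AI would actually record.

```python
# Minimal baseline/blueprint sketch: record API behavior during training,
# then flag anomalies when a new build deviates (all data is hypothetical).
baseline = {  # learned during training runs
    "GET /api/cart": {"status": 200, "fields": {"items", "total"}},
    "POST /api/pay": {"status": 201, "fields": {"receipt_id"}},
}

new_build = {  # observed on the next build
    "GET /api/cart": {"status": 200, "fields": {"items", "total", "discount"}},
    "POST /api/pay": {"status": 500, "fields": set()},
}

def detect_anomalies(base, observed):
    """Compare each baseline endpoint against the new build's behavior."""
    findings = []
    for endpoint, expected in base.items():
        got = observed.get(endpoint)
        if got is None:
            findings.append(f"{endpoint}: missing in new build")
        elif got["status"] != expected["status"]:
            findings.append(f"{endpoint}: status {expected['status']} -> {got['status']}")
        elif got["fields"] != expected["fields"]:
            findings.append(f"{endpoint}: schema changed")
    return findings

for finding in detect_anomalies(baseline, new_build):
    print(finding)
```

The tool in question would additionally decide on its own which deviations are intended changes versus faults – that judgment is exactly where the trained models and limits come in.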
AI Applied to Performance Testing
Lots of separate tools exist for API, functional TA, or performance testing. Some vendors provide all of it in one tool, so that the performance testing part smoothly leverages test definitions made in the API and/or functional part(s).
The tool mentioned earlier doesn’t just upscale its fine-tuned, API-production-log-based blueprint into UI activities. Any test definition should be executable in parallel for up to 3 million concurrent virtual users.
AI’s Role in Practicing the ‘No-but-allowing-Code’ 3rd era of Test Automation
What role does AI play in practicing the ‘no-but-allowing-code’ 3rd era of TA?
AI is paving the way for codeless and faster TA. AI solutions will increasingly become more accessible, not only to SDETs but also to Test Analyst techies. The solutions require less manual training and only very little code knowledge, and in one AI pre-setup the AI rules were seen to be configurable. Vendors are trying to automate or reduce the time required to train algorithms.
What role will AI play in future in practicing the ‘no-but-allowing-code’ 3rd era of TA?
TA tool vendors will look to link unit, API, UI, end-to-end, and even performance tests via faster, multi-parallel test execution and analytics on test results, UI, and log output. To identify untested or inadequately covered test activities across all phases of testing, AI will apply or integrate coverage features.
With time, AI algorithms will be able to predict deviations and help developers prevent them. Predictive rather than prescriptive approaches save time by reducing back and forth between development and QA for anomaly detection and resolution.
Moreover, AI and ML algorithms will be used more extensively to automate software testing support functions such as test environment, release candidate evaluation, and business impact prediction.
Don’t just take my words for the whole truth – this blog was partly inspired by the elaborated contents of the report ‘State of AI applied to Quality Engineering 2021-22’. I’ve added to that and welcomed Business and Test Analyst techies into the era of AI-applied TA – personally encouraged to proceed ‘no-but-allowing-code’.