Traditionally testers have mitigated development risk by checking the quality of software prior to deployment into production environments, helping prevent defects from impacting the end-users of systems. Historically this checking was done manually, and those purely manual roles are now rapidly dying out.
Over time the risks testers are responsible for mitigating have morphed from internal systems to public-facing interfaces, where a missed defect now has the potential to damage a business’s reputation, and therefore its revenue, all the more so for those firms which have made their mark by upsetting the status quo using technology.
From personal experience, I can recall a defect triage where the decision on how to progress rested largely on the phrase “It’ll never happen in production”. However true that may have been, the service against which we had to perform acceptance testing to achieve accreditation prior to release disagreed. I can also remember my response of “Release it and you own the risk” to pressure from project management to “get it done”; it bought me a little breathing space at a time when testing was being squeezed because everyone else was late or was judged to have minimal impact on the critical path to production.
I mention these two examples because ultimately, in the old waterfall world, test was frequently seen as a bottleneck, slowing the release process down and delaying delivery. Checking that the software did what it was supposed to do and, even more so, checking that the software was built right were seen as costs in terms of time and money. The reality, I feel, was slightly different. The earlier stages of the waterfall usually over-ran, or looked deceptively easy (a one-line code change may still require considerable testing), and often failed to meet requirements, leaving us as gatekeepers to production, mitigating the risk of something seriously bad happening. A defect found in live was more a case of “missed by the testers” than “carelessness by the developers or business analysts”.
The very same advances in technology which have enabled start-up businesses have introduced us to new ways of creating software, enabling us to move away from waterfall to more agile software delivery lifecycles. It is not for this article to dwell on the benefits of agile methods over waterfall; instead it focuses on the implications of those benefits:
Shift Left creates a renewed focus on requirements, where an increased understanding of them prevents deviation throughout the software development life cycle. The idea is that we “get it right the first time”, reducing the need for expensive rework to correct mistakes.
We also now have increased collaboration, whereby developers, testers and (business) users work together, creating a constant feedback loop into the SDLC to maintain direction and velocity, as opposed to conducting Lessons Learned after the horse has bolted, and more often than not ignoring the recommendations.
And finally we have introduced a Look Right aspect, turning agile into DevOps, where we have come to realise it is not just the business which dictates our requirements, but also the technical requirements arising from the detail needed to support the day-to-day operation of our software in our customer-facing production environment.
Which brings me back to the “just put it live” and “It’ll never happen in production” conversations: what do testers need to do if we genuinely got it right the first time? Quality control, or testing, becomes an exercise in spending time finding nothing wrong. Personally I strongly believe abandoning it would be a foolish approach, but there are genuinely agile team structures where the role of tester no longer exists. As a performance tester, some of my best days were when we reported that the system worked just fine under load, yet stakeholders often heard that message as “performance testing is a waste of time and money”.
Newer technologies enable us not just to build and test software faster and cheaper but also at lower risk, or higher quality. And quality, for far too long the ignored corner of the PRINCE2 triangle, is suddenly at the very heart of the whole SDLC through culture, process, and technology.
This is done through quality assurance, whereby a series of joined-up processes work to enforce quality gates, using technology to expedite quality control activities through automation.
Testers, therefore, need to work in a different way. They need to be able to articulate acceptance criteria. They need to be able to identify, install, and configure tools to do what they used to do. Examples of this include defining and enforcing coding standards via SonarQube, and working with virtualization to create and deploy test harnesses to support automation. These are skills not traditionally associated with the role of a tester. Gone are the days of waiting at the end of the SDLC production line, testing finished products. Gone are the days of being able to wait for others. Now we must collaborate from the outset. We can’t blame the business analysts for poor requirements – because we collaborated with them. We can’t blame the developers, fun as it was, for not understanding the requirements and building the wrong thing – because we collaborated with them. We can’t whinge about the environment being delayed and not working properly for the first few days – because we collaborated with the experts, and we have even been empowered to provision our own environments as and when we need them.
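As a concrete illustration of the tooling side, here is a minimal sketch of what a SonarQube scanner configuration might look like when coding standards are enforced as an automated quality gate; the project key, name, source path, and server URL are hypothetical placeholders, and the exact properties needed will depend on your own SonarQube setup:

```properties
# Minimal sonar-project.properties sketch (hypothetical project values)
sonar.projectKey=my-org:my-service          # placeholder project identifier
sonar.projectName=My Service
sonar.sources=src                           # where the scanner looks for code
sonar.host.url=https://sonarqube.example.com
# Ask the scanner to wait for the quality gate result and fail the build
# if the gate is not passed, turning the gate into an automated check
# rather than a manual review step
sonar.qualitygate.wait=true
```

Wired into a build pipeline, a failed quality gate then stops the release in the same way a failed functional test would, which is the sense in which the tester’s old checking work becomes encoded in tooling.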
We have to change to keep ourselves relevant, to keep ourselves employed in the software development life cycle. Otherwise we become blockers to innovation, blockers to velocity, and blockers to quality and success. The future of testers is in defining and governing quality processes, and in employing technology to industrialise our old, manual ways. At Sogeti we are already embracing this change, leading the charge with technical testing and automation, and our experience can help organizations transform.