Confidence in Upscaled Quality

Sogeti Labs
April 15, 2022

What justifies any test variety activity?

Confidence in quality does, and TMAP agrees. Each test variety a project defines must therefore measure and indicate a level of multiple quality characteristics, so that the right tests get done accurately. Risk assessment and risk mitigation should also contribute to increased confidence in quality, and they double as a means of planning the extent and effort of testing within a test variety. Practicing the ‘no-but-allowing-code’ 3rd era of Test Automation (TA) can provide any such testing not only at speed but, in the case of Performance Testing (PT), also at scale.

The DevOps Monitoring activity is often supported by PT. Response times and fault/failure patterns measured by PT should be E2E across everything involved, including DB CPU usage, DB retrieval, UI rendering, and error payloads. By letting PT-upscaled TA tooling handle all of that during execution, reverse performance engineering may be able to provide elapsed-time measures and eventually drill down to a particular fault entry, while still allowing for digital inclusion and professional diversity for Test/Business Analysts without a coding background.
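To make that drill-down idea concrete, here is a minimal Python sketch, assuming a hypothetical trace format with named segments and elapsed milliseconds: it sums the time spent per segment of one E2E transaction and points at the dominant contributor – the kind of answer reverse performance engineering should eventually hand a non-coding analyst.

```python
# Minimal sketch: break one E2E transaction into segments and report the
# dominant contributor. The segment names and timings are hypothetical.
from collections import defaultdict

trace = [
    {"segment": "UI-rendering",  "elapsed_ms": 120},
    {"segment": "API-gateway",   "elapsed_ms": 35},
    {"segment": "DB-CPU",        "elapsed_ms": 310},
    {"segment": "DB-retrieval",  "elapsed_ms": 240},
    {"segment": "error-payload", "elapsed_ms": 5},
]

def dominant_segment(trace):
    """Sum elapsed time per segment and return the slowest one."""
    totals = defaultdict(int)
    for span in trace:
        totals[span["segment"]] += span["elapsed_ms"]
    worst = max(totals, key=totals.get)
    return worst, totals[worst], sum(totals.values())

segment, ms, total_ms = dominant_segment(trace)
print(f"E2E elapsed: {total_ms} ms; dominant segment: {segment} ({ms} ms)")
```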

The Performance Testing Test Variety

In fact, PT is a test variety many DevOps teams seem to ignore or deprioritize, and when it is done, it is often handled by tool-dependent, code-skilled SDETs. Performing PT requires an extensive amount of test data, and it must run in a test environment that is an (often down-)scaled and masked version of production. Not every organization has the capacity to provide that basis for PT, while others focus strictly on developers’ performance engineering.

However, that may change. I am a tech-savvy Senior Test Analyst who approaches PT with black- or grey-box thinking, while challenging infrastructure, architecture design, NFRs (Non-Functional Requirements), and the logging and monitoring of backends: DBs, APIs, batch jobs, events, cloud, data centers, and other networks. The TA tooling mentioned here supports that approach, and more test varieties besides.

In other words, the confidence in upscaled quality that PT should provide is the ability to run Prod-like scenarios as simulated multi-scenarios under high-load, load-balanced and/or stressed usage. One option current TA tools offer is to start by (re-)capturing API tests, let ML-driven, self-healing, change-adaptable tooling upscale them to UI and functional usage, and finally arrive at PT.

PT lifecycles are typically iterative in nature and require continuous tuning, execution overhead, and analysis until the exit criteria are met. Why not have that replaced by TA tooling that enables self-automated creation and tuning (self-healing) based on AI and ML?
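As a minimal sketch of that iterative cycle – with run_load_step() as a hypothetical stand-in for whatever your TA tooling actually executes, and the p95 threshold as an assumed NFR value – the loop below steps up the virtual-user count until the exit criterion is breached:

```python
# Minimal sketch of an iterative PT cycle: step up the load, measure a latency
# percentile, and stop once the exit criterion (an assumed NFR) is breached.
import random

def run_load_step(virtual_users: int) -> list[float]:
    """Placeholder load step: returns simulated response times in seconds."""
    return [random.uniform(0.1, 0.1 + virtual_users / 500) for _ in range(200)]

NFR_P95_SECONDS = 0.8          # exit criterion taken from the NFRs (assumed value)

users = 50
while True:
    samples = sorted(run_load_step(users))
    p95 = samples[int(len(samples) * 0.95)]
    print(f"{users:>5} virtual users -> p95 = {p95:.3f} s")
    if p95 > NFR_P95_SECONDS:
        print(f"Exit criterion breached above {users} virtual users.")
        break
    users *= 2                 # tune the load for the next iteration
```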

‘No-but-allowing-code’ Merged Performance Testing

PT has long been seen as a separate TA discipline – but does that have to be the case when practicing the ‘no-but-allowing-code’ 3rd era of TA with new AI-/ML-/MBT-driven technology enablers?
I don’t think so. In my 3rd blog, I mentioned: “Some vendors provide all-in-one tools, so that the performance testing part leverages test definitions made in the API and/or functional test part(s) in a smooth way.” Let your PT be based on the results of other test varieties done by your Business/Test Analyst techies – and let them do your PT.

To give PT a more hands-on, practical flavor, I will share a couple of stories from my own background. The rise of configurable ‘no-but-allowing-code’ tools is also what kept my PT work within the TA sphere, and as a Senior Test Analyst I have now reached the point of combining all of my acquired test variety experience. My PT experience includes:

  • A few years ago, during a PoC of an API test tool, I used REST requests against webservice JSON endpoints with successful responses; the tool’s integrated load-test add-on made it easy to configure, run, and repeat those requests over time for hundreds of concurrent virtual users (the first sketch after this list shows the idea). The graphs reported load, performance, and single-endpoint request/response elapsed durations, including CPU processing over time.
  • Over a decade ago, at a national critical infrastructure provider for card payments, I orchestrated and ran thousands of card transactions per hour in a down-scaled production test environment using PT-tailored tools, and then manually created business-simulated payment and ATM transactions using physical test cards and terminals. The legacy IBM system was also set up to generate notifications for business partners at various end-user events.
    This load testing built up a huge notification queue which took quite some time to process – considerably slowing down (or preventing) the “real-time” processing of card transactions. Therefore, a mechanism for prioritized processing was developed and applied (the second sketch after this list illustrates the principle). PT then came in very handy for creating confidence in quality – with the high load of transactions generating notifications and, in parallel, stressing the “real-time” card transactions.
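The first sketch below mirrors the API-tool story: it fires a GET request against a JSON endpoint from many concurrent “virtual users” and reports per-request elapsed times. The endpoint URL, user count, and request count are hypothetical placeholders, and a real PT run would of course add ramp-up, think time, and richer reporting.

```python
# Minimal sketch: concurrent "virtual users" hitting one endpoint and
# collecting per-request elapsed times. All parameters are hypothetical.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

ENDPOINT = "https://example.com/api/health"   # hypothetical JSON endpoint
VIRTUAL_USERS = 100
REQUESTS_PER_USER = 10

def virtual_user(_: int) -> list[float]:
    """One virtual user: repeat the request and record elapsed seconds."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urlopen(ENDPOINT, timeout=10) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    per_user = pool.map(virtual_user, range(VIRTUAL_USERS))
    all_timings = sorted(t for user in per_user for t in user)

print(f"requests: {len(all_timings)}, "
      f"median: {all_timings[len(all_timings) // 2]:.3f} s, "
      f"p95: {all_timings[int(len(all_timings) * 0.95)]:.3f} s")
```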
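The second sketch illustrates the prioritization principle from the card-payment story, assuming illustrative queue contents: “real-time” card transactions are drained before the backlog of partner notifications, so a notification flood cannot starve payment processing.

```python
# Minimal sketch: a priority queue where real-time card transactions always
# outrank deferred partner notifications. Contents are illustrative only.
import heapq

REALTIME, BATCH = 0, 1        # lower number = higher priority

queue = []
for i in range(3):
    heapq.heappush(queue, (BATCH, f"notification-{i}"))
heapq.heappush(queue, (REALTIME, "card-transaction-4711"))
heapq.heappush(queue, (REALTIME, "atm-withdrawal-0815"))

while queue:
    priority, item = heapq.heappop(queue)
    label = "real-time" if priority == REALTIME else "deferred"
    print(f"processing ({label}): {item}")
```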

DevOps teams are well known for their adherence to the idea that no specialty should depend on a single person – and that has recently become relevant for the PT variety too, enabled by modern tooling in the 3rd era of TA. I’m bold enough to take up the challenge – are you?

About the author

SogetiLabs gathers distinguished technology leaders from around the Sogeti world. It is an initiative explaining not how IT works, but what IT means for business.
