
How Many Defects Are Too Many?

Sogeti Labs
October 29, 2014

11 thoughts on “How Many Defects Are Too Many?”

  1. A great point I see in this article is the need for metrics. People can argue about the desired number of defects all day, but unless you know where you are now, you will have a very hard time getting to where you want to be. Clearly, critical programs that impact lives have a vastly different view of risk than a program whose failure risk is solely monetary. But a risk is a risk and needs to be weighed against the cost of potential failure. And the best way to have the information needed to assess those risks and costs is to know your current standing. I cannot think of anyone who would plan a road trip without knowing their starting point, yet very often we see a push to reduce defects with no true way to measure the results. Knowing where we are starting from not only shows a clearer path to where we want to be, it also gives us an idea of how quickly we are improving.

  2. Nice article, Greg. In my opinion, many defects occur because requirements get lost in translation on their way to the developer. The inability of BAs to think through the whole range of scenarios, combined with developers’ varied experience and knowledge of the domain/application, leads to assumptions filling the gaps in the requirements, which in turn leads to defects. Again, I am not sure you can assign a linear relationship between lines of code and defects.

    1. While I would agree that BAs can leave gaps in the requirements, that is no excuse for building on insufficient requirements. As the receiver of the requirements document, the developer is responsible for approving it. If there are gaps, the developer should reject the document until it is right instead of making assumptions about the gaps.
      Indeed, the relationship between lines of code and defects is not linear, because it depends on coding style and language. However, if developers follow Clean Code practices, where methods are no longer than 20 lines of code, the relationship becomes far more consistent; a simple check for that guideline is sketched below.
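      A minimal sketch of how that 20-line guideline might be checked automatically, assuming a Python codebase (the script is illustrative, not a standard tool):

        import ast
        import sys

        MAX_LINES = 20  # the Clean Code guideline discussed above

        def long_functions(source: str):
            """Yield (name, length) for every function longer than MAX_LINES."""
            tree = ast.parse(source)
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    length = node.end_lineno - node.lineno + 1
                    if length > MAX_LINES:
                        yield node.name, length

        if __name__ == "__main__":
            # usage: python check_length.py some_module.py
            source = open(sys.argv[1]).read()
            for name, length in long_functions(source):
                print(f"{name}: {length} lines (over the {MAX_LINES}-line guideline)")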

  3. There are known bugs and then there are unknown bugs. The known bugs aren’t the ones you worry about.
    I’m not sure it would be easy to create reasonable analytic measurements. Even the measurement ‘lines of code’ is problematic when applied to anything higher than assembly language. I often see whitespace or braces counted as if that were meaningful, while a single LINQ query or regular expression can contain dozens of points of failure. For the purposes of measurement, a line of JavaScript counts for far more than a line written in a lower-level compiled language. Elegantly written professional code reduces the number of lines by gathering functionality into reusable packets. Trying to measure lines of code is likely to punish people for good coding practices while rewarding large blocks of dense, linear, fall-through code that carries a high non-bug maintenance cost even when theoretically ‘correct’ and bug-free for existing requirements. (The sketch after this comment shows how strongly the count depends on what you decide to count.)
    ‘Defects’ is likewise a vague measurement. Actually measuring it requires a level of requirements documentation that simply isn’t going to exist in 95% of your projects, and how many defects each coding error counts as depends on how the requirements are written and how the defects are recorded. A requirement that says something like, “Each button, once pushed, will become inactive until the function indicated by the button completes”, generates fewer defects than, “The save button once clicked is disabled until the save is complete”, “The delete button once clicked is disabled until….”, and so forth. And it’s not unusual to see a QA defect reopened for a completely different problem within the same workflow.
    From a purely economic level, the problem with trying to measure defects analytically is that the cost of manipulating the numbers is lower than the cost of improving them. Business processes that are cheaper to evade than to actually follow tend to be evaded rather than followed.
    I think far more valuable than the actual measurements would be the rigor you’d encourage by trying to measure them. QAs and BAs seldom have a technical understanding of requirements and defects, but they would benefit from treating them as technical language that ought to be as rigorous as the code required of the developer. Developers seldom have a technical understanding of what makes code maintainable. In my experience, the biggest determinant of the cost of a defect is not even when the defect is detected, but the structure of the code it is detected in. Code with few assumptions, single repositories of data, few dependencies, no remote side effects, well-structured inputs and outputs, and a well-thought-out order of operations is cheap to fix, particularly if you have a flexible business organization that prioritizes purpose over process. Code lacking these features is expensive to fix at every stage.
    I’ve seen 20-minute turnarounds from the point a client contacts the organization to resolution in deployed, enterprise-scale production code, not because the defect was ‘trivial’ but because the CSR knew the product, the product robustly logged the right information, and the code itself was well structured and amenable to modification. And I’ve seen man-days devoted to simple text changes in internal-facing, in-house software because you had process over purpose, magic strings, remote side effects, poor understanding of inputs, outputs, and order of operations, and so forth.
    Which is to say, some of the real defects in your code never show up as failures to meet requirements at all, and get vaguely lumped in as ‘technical debt’ to the extent your organization even understands that there is a problem at all.
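    To illustrate how much a ‘lines of code’ figure depends on arbitrary counting rules, here is a minimal sketch in Python; the tiny JavaScript sample and the filtering rules are invented for the example:

      import re

      # Six physical lines of JavaScript, two of which do any work.
      SAMPLE = "\n".join([
          "function save() {",
          "",
          "    // disable the button until the save completes",
          "    btn.disabled = true;",
          "",
          "}",
      ])

      def physical_loc(source: str) -> int:
          """Naive count: every line, including blanks and lone braces."""
          return len(source.splitlines())

      def logical_loc(source: str) -> int:
          """Skip blanks, comments, and lines that are only braces/punctuation."""
          kept = 0
          for line in source.splitlines():
              stripped = line.strip()
              if not stripped or stripped.startswith("//"):
                  continue
              if re.fullmatch(r"[{}();]+", stripped):
                  continue
              kept += 1
          return kept

      print(physical_loc(SAMPLE), logical_loc(SAMPLE))  # 6 vs 2: same code, three times the count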

  4. Measure defects? Rate a BA based on requirements defects?
    Or, maybe measure by client satisfaction instead?
    Do we want zero bugs in the software at huge cost, or should we instead deliver what the client wants to pay for?

  5. The number of defects is completely meaningless!!!! SERIOUSLY…
    I could have 10,000 defects of negligible importance in a given project and be fine, yet have one defect that bankrupts the company.
    Also, the importance of a given defect is highly subjective along many dimensions. Consider a gradient (color) in a UI that is slightly off from what the UX designer specified. If this is an internal LOB application, it is probably meaningless. But if it is an “app store” product, where word of mouth and first impressions make the difference between an app that might not make $100 and one that provides a full income, it can be fatal. A severity-weighted score, like the sketch below, captures this far better than a raw count.
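    A rough sketch of that point in Python, with entirely hypothetical severity weights (the ordering matters, not the exact numbers): a weighted score ranks the two situations opposite to a raw count.

      # Hypothetical weights; any real values would be project-specific.
      WEIGHTS = {"negligible": 0.01, "minor": 1, "major": 20, "critical": 500}

      def risk_score(defects_by_severity: dict) -> float:
          """Weight each defect by severity instead of counting all defects equally."""
          return sum(WEIGHTS[sev] * count for sev, count in defects_by_severity.items())

      print(risk_score({"negligible": 10_000}))  # 100.0 -- huge count, modest risk
      print(risk_score({"critical": 1}))         # 500   -- one defect, five times the risk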

  6. David,
    Saying that the number of defects doesn’t matter is like saying the number of automated tests doesn’t matter. While it is true that some tests are more important than others, having automated tests is a good thing: they give general visibility into the project. Conversely, having a lot of defects in a project is a bad thing. I would agree that defects range in severity from low to critical.
    If somehow there were a project with 10,000 open defects, all of them negligible, that would tell me there are way too many testers on the project. Most companies don’t do enough testing, so I view this scenario as highly unlikely.

  7. Hi Greg,
    Do I need to consider only code/development defects when calculating defect density?
    Thanks
    Arjun
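    For reference, defect density is conventionally computed as defects per thousand lines of code (KLOC). A minimal sketch with made-up inputs; which defect types to include in the numerator is exactly the judgment call the question raises:

      def defect_density(defect_count: int, lines_of_code: int) -> float:
          """Defect density in its usual form: defects per thousand lines of code."""
          return defect_count / (lines_of_code / 1000)

      # e.g. 42 confirmed defects found in a 15,000-line module:
      print(defect_density(42, 15_000))  # 2.8 defects per KLOC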
