
How Many Defects Are Too Many?

By Greg Finzer · October 29, 2014

There once was a project manager who pitted departments against each other. The Business Analysts were rated on their requirements defects. The testers were rated on could-not-reproduce defects. The developers were rated on coding defects found by QA (quality assurance) testing. As you can imagine, the development team had the highest number of defects. The project manager was always disappointed with the number of coding defects; he wanted zero defects. Obviously this is an apples-to-oranges comparison. Although the teams worked together on the same project, the activities, complexity levels, and defects are completely different.

So how many coding defects are too many? According to Steve McConnell’s book Code Complete, the “Industry Average [is] about 15–50 errors per 1000 lines of delivered code.” This is known as defect density, measured in defects per KLOC (1,000 lines of code). He goes on to say that “Microsoft Applications: about 10–20 defects per 1000 lines of code during in-house testing, and 0.5 defect per KLOC in production.” It is possible to achieve zero defects, but it is also costly. NASA was able to achieve zero defects for the Space Shuttle software, but at a cost of thousands of dollars per line of code. If people will die because there are bugs in the software, then that kind of cost makes sense. Most projects simply cannot afford the same level of testing as NASA.
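As a rough illustration of those rates, here is a short Python sketch. The 20 KLOC release size is an invented example, not a figure from the article or from Code Complete:

```python
# Expected-defect range for a release, using the industry-average
# rates quoted above from Code Complete (15-50 defects per KLOC).

def expected_defect_range(kloc, low_rate=15, high_rate=50):
    """Return (low, high) expected defect counts for `kloc` thousand lines."""
    return (kloc * low_rate, kloc * high_rate)

low, high = expected_defect_range(20)  # hypothetical 20 KLOC release
print(f"20 KLOC release: expect roughly {low} to {high} defects")
```

So even an average-quality 20 KLOC release should be expected to ship with hundreds of defects found somewhere along the pipeline, which is why "zero coding defects" is an unrealistic target for most teams.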

The important thing is to start measuring defect density and graph it over time. Find out how many lines of code were added or changed for a release. Then figure out how many defects were found during quality assurance testing, user acceptance testing, and production. Once you have these numbers, you can predict future defect quantities for QA, UAT, and production for a release based on its lines of code and the historic defect density. Find your defect density, and then set a goal to do better.
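The measurement loop described above can be sketched in Python. All release names, line counts, and defect counts below are invented for illustration:

```python
# Derive a per-phase defect density (defects per KLOC) from past
# releases, then project defect counts for an upcoming release.
from collections import defaultdict

# Hypothetical history: (release, phase) -> defects found
past_defects = {
    ("R1", "QA"): 120, ("R1", "UAT"): 30, ("R1", "Production"): 8,
    ("R2", "QA"): 90,  ("R2", "UAT"): 25, ("R2", "Production"): 5,
}
# Lines of code added/changed per past release
lines_changed = {"R1": 8000, "R2": 6000}

def phase_densities(defects, lines):
    """Defects per KLOC for each phase, pooled across past releases."""
    totals = defaultdict(int)
    for (release, phase), count in defects.items():
        totals[phase] += count
    total_kloc = sum(lines.values()) / 1000
    return {phase: count / total_kloc for phase, count in totals.items()}

def predict(densities, kloc):
    """Projected defect counts for a release of `kloc` thousand lines."""
    return {phase: round(rate * kloc) for phase, rate in densities.items()}

densities = phase_densities(past_defects, lines_changed)
print(predict(densities, 10))  # projection for a 10 KLOC release
```

Graphing the per-phase densities release over release then shows whether the "do better" goal is actually being met.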

Greg Finzer

About

Greg Finzer is the Custom Application Development Community of Practice Lead for the Sogeti Columbus Region. His duties include identifying technology trends, facilitating access to training & certifications, developing architecture expertise, supporting sales & delivery, and increasing participation in the local developer community.



  1. Heather Crawford · October 30, 2014

    A great point I see in this article is the need for metrics. Many can argue about the desired number of defects all day, but unless you know where you are now, you will have a very hard time getting to where you want to be. Clearly, critical programs that impact lives have a vastly different view of risk than a program whose failure risk would be solely monetary. But a risk is a risk and needs to be weighed against the cost of potential failure. And the best way to have the information needed to assess the risks and costs is to know your current standing. I cannot think of anyone who would plan a road trip without knowing their starting point, yet very often we see a push to reduce defects with no true way to measure the results. If we know where we are starting from, we can not only see a clearer path to where we want to be, but we can also get an idea of the speed at which we improve.

  2. Seshaprasad K · October 30, 2014

    Nice article, Greg. In my opinion, more defects occur because requirements are lost in translation on their way to the developer. The inability of BAs to think through the whole range of scenarios, combined with developers’ varied experience and knowledge of the domain/application, leads to assumptions filling in the gaps in the requirements, which in turn leads to defects. Again, I am not sure you can assign a linear relationship between lines of code and defects.

    • Greg Finzer · December 16, 2014

      While I would agree that BAs can leave gaps in the requirements, that is no excuse for insufficient requirements. As the receiver of the requirements document, it is the developer’s responsibility to approve it. If there are gaps, the developer should reject the requirements document until it is right instead of making assumptions about the gaps.

      Indeed, the relationship between lines of code and defects is not linear, due to coding style and language. However, if developers follow Clean Code practices, where methods are no longer than 20 lines of code, the codebase will have greater consistency.

  3. Matthew Reynolds · October 30, 2014

    There are known bugs and then there are unknown bugs. The known bugs aren’t the ones you worry about.

    I’m not sure that it would be easy to create reasonable analytic measurements. Even the measurement ‘lines of code’ is problematic when applied to anything higher than assembly language. I often see whitespace or braces measured as if that were meaningful. A single LINQ query or regular expression can contain dozens of points of failure. For the purposes of measurement, a line of JavaScript counts the same as a line in a lower-level compiled language, though they do very different amounts of work. Elegantly written professional code reduces the number of lines by gathering functionality into reusable packets. Trying to measure lines of code is likely to punish people for good coding practices while rewarding large blocks of dense, linear, fall-through code that has a high non-bug maintenance cost even when theoretically ‘correct’ and bug-free for existing requirements.

    ‘Defects’ is likewise a vague measurement. Actually measuring it requires a level of requirements documentation that simply isn’t going to exist in 95% of your projects, and how many defects each coding error counts as depends on how the requirements are written and how the defects are recorded. A requirement that says something like, “Each button, once pushed, will become inactive until the function indicated by the button completes”, generates fewer defects than “The save button, once clicked, is disabled until the save is complete”, “The delete button, once clicked, is disabled until….”, and so forth. And it’s not unusual to see a QA defect reopened for a completely different problem within the same workflow.

    On a purely economic level, the problem with trying to measure defects analytically is that the cost of manipulating the numbers is lower than the cost of improving them. Business processes that are cheaper to evade than to actually follow tend to be evaded rather than followed.

    I think far more valuable than the actual measurements would be the rigor you’d encourage if you tried to measure them. QAs and BAs seldom have a technical understanding of requirements and defects, but they would benefit from treating them as technical language, which ought to be as rigorous as the code required of the developer. Developers seldom have a technical understanding of what makes for maintainable code.

    In my experience, the biggest determinant of the cost of a defect is not even when the defect is detected, but the structure of the code it is detected in. Code with few assumptions, single repositories of data, few dependencies, no remote side effects, well-structured inputs and outputs, and a well-thought-out order of operations is cheap to fix – particularly if you have a flexible business organization that prioritizes purpose over process. Code lacking these features is expensive to fix at every stage. I’ve seen 20-minute turnarounds, from the point a client contacts the organization to resolution, in deployed enterprise-scale production code – not because the defect was ‘trivial’ but because the CSR knew the product, the product robustly logged the right information, and the code itself was well structured and amenable to modification. And I’ve seen man-days devoted to simple text changes in internal-facing in-house software because you had process over purpose, magic strings, remote side effects, poor understanding of inputs, outputs, and order of operations, and so forth.

    Which is to say, some of the real defects in your code never show up as failures to meet requirements; they get vaguely lumped in as ‘technical debt’, to the extent your organization even understands that there is a problem at all.

  4. Björn Dahlberg · November 14, 2014

    Measure defects? Rate a BA based on requirements defects?
    Or, maybe measure by client satisfaction instead?
    Do we want 0 bugs in the software at huge cost, or should we instead deliver what the client wants to pay for?

    • Greg Finzer · December 16, 2014

      How would you suggest measuring client satisfaction other than the industry standard of ensuring the application meets the requirements?

      • Yavor · April 4, 2017

        By customer surveys; subjective measures are the only and best way to measure customer satisfaction.

  5. David V. Corbin · January 14, 2015

    The number of defects is completely meaningless!!!! SERIOUSLY…

    I could have 10,000 defects that are of negligible importance in a given project and be fine; yet have one defect that bankrupts the company.

    Also, the meaning of a given defect is highly subjective along many dimensions. Consider a gradient (color) in a UI that is slightly off from what the UX designer specified. If this is an internal LOB application, it is probably meaningless. But if it is an “app store” product, where word of mouth and first impressions make the difference between an app that might not make $100 and one that provides a full income, it can be fatal.

  6. Greg Finzer · April 5, 2017

    David,

    Saying that the number of defects doesn’t matter is like saying the number of automated tests doesn’t matter. While it is true that some tests are more important than others, having automated tests is a good thing; it gives general visibility into the project. Conversely, having a lot of defects in a project is a bad thing. I would agree that there are different levels of defect severity, from low to critical.

    If somehow there were a project with 10,000 open defects, all of them negligible, that would tell me there are way too many testers on the project. Most companies don’t do enough testing, so I view this scenario as highly unlikely.

*Opinions expressed on this blog reflect the writer’s views and not the position of the Sogeti Group