There once was a project manager who pitted departments against each other. The business analysts were rated on their requirements defects, the testers on "cannot reproduce" defects, and the developers on coding defects found during QA (quality assurance) testing. As you can imagine, the development team had the most defects, and the project manager was always disappointed with the number of coding defects. He wanted zero defects. Obviously this is an apples-to-oranges comparison: although the teams worked together on the same project, their activities, complexity levels, and defect types are completely different.
So how many coding defects are too many? According to Steve McConnell's book Code Complete, the "Industry Average: about 15 – 50 errors per 1000 lines of delivered code." This metric is known as defects per KLOC (thousand lines of code). He goes on to say that "Microsoft Applications: about 10 – 20 defects per 1000 lines of code during in-house testing, and 0.5 defect per KLOC in production." It is possible to achieve zero defects, but it is also costly. NASA achieved zero defects for the Space Shuttle software, but at a cost of thousands of dollars per line of code. If people will die when the software fails, that kind of cost makes sense. Most projects simply cannot afford the same level of testing as NASA.
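The defects-per-KLOC calculation itself is simple division. As a minimal sketch (the defect and line counts below are made-up illustrations, not figures from McConnell's book):

```python
def defects_per_kloc(defects, lines_of_code):
    """Defect density: number of defects per 1000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# Example: 120 defects found in 8,000 delivered lines of code.
print(defects_per_kloc(120, 8000))  # 15.0, the low end of the industry average
```

A result of 15.0 would sit at the low end of the 15 – 50 industry-average range quoted above.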
The important thing is to start measuring defect density and to graph it over time. Find out how many lines of code were added or changed for a release, then count how many defects were found during quality assurance (QA) testing, user acceptance testing (UAT), and production. With those numbers, you can predict future defect quantities for QA, UAT, and production based on a release's lines of code and the historic defect density. Measure your defect density, then set a goal to do better.
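The measure-then-predict workflow above can be sketched as follows. This is a minimal illustration with invented release data; the phase names and numbers are assumptions, not real project figures:

```python
# Hypothetical release history: lines of code added/changed per release,
# and defects found in each phase (QA, UAT, production).
history = [
    {"loc": 12000, "qa": 150, "uat": 30, "prod": 6},
    {"loc": 8000,  "qa": 110, "uat": 18, "prod": 4},
    {"loc": 15000, "qa": 200, "uat": 35, "prod": 9},
]

def historic_density(phase):
    """Historic defects per KLOC for one phase, pooled across releases."""
    total_loc = sum(r["loc"] for r in history)
    total_defects = sum(r[phase] for r in history)
    return total_defects / (total_loc / 1000)

def predict_defects(loc, phase):
    """Predicted defect count for a release of `loc` changed lines."""
    return round(historic_density(phase) * (loc / 1000))

# Predict defect counts for a planned 10,000-line release:
for phase in ("qa", "uat", "prod"):
    print(phase, predict_defects(10000, phase))
```

Pooling all releases into one density keeps the sketch simple; in practice you would also graph per-release density over time, as the paragraph suggests, so you can see whether the trend is improving.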