The indicators used to measure software quality are usually classified into two groups - end-product quality metrics and in-process quality metrics. The relationship between in-process and end-product quality metrics is the main focus of software quality engineering. Unsurprisingly, numerous studies conducted at Motorola and IBM have found that improvements in in-process quality almost always translate into higher end-product quality.
End-Product Quality metrics consist of:
- Defect density
- Mean time to failure
- Customer problems
- Customer satisfaction
Defect density is a measure of the number of defects relative to the size of the software, typically expressed in lines of code (LOC). The standard definition of LOC used in software engineering is any line of program text that is not a comment or a blank line, regardless of the number of statements or fragments of statements on the line.
A simplistic definition of defect density is the number of defects discovered over a specific period of time divided by the size of the software expressed in thousands of lines of code (KLOC).
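For illustration, the sketch below counts LOC per this definition (assuming `#`-style comments, a Python convention not specified in the text) and computes defect density per KLOC:

```python
# Illustrative sketch (an assumption, not from the text): count LOC
# per the definition above for '#'-style comments, then compute
# defect density in defects per KLOC.

def count_loc(lines):
    """Count lines that are neither blank nor pure comment lines."""
    return sum(1 for line in lines
               if line.strip() and not line.strip().startswith("#"))

def defect_density(defects_found, loc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (loc / 1000.0)

source = [
    "def add(a, b):",
    "    # return the sum",   # comment line: not counted
    "    return a + b",
    "",                       # blank line: not counted
]
print(count_loc(source))                            # 2
print(defect_density(defects_found=6, loc=20_000))  # 0.3 defects/KLOC
```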
The mean time to failure (MTTF) is the average length of time the software runs before it fails ("crashes").
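A minimal sketch of how MTTF might be estimated (assuming a plain average of observed failure-free run times; real reliability engineering uses more sophisticated statistical models):

```python
# Illustrative sketch: estimate MTTF as the mean of observed
# failure-free run times. Production reliability work uses
# statistical reliability-growth models rather than a plain average.

def mean_time_to_failure(run_hours):
    """Average uptime (in hours) between consecutive failures."""
    if not run_hours:
        raise ValueError("need at least one observed failure interval")
    return sum(run_hours) / len(run_hours)

# Hypothetical uptimes (hours) logged between crashes during test.
observed = [120.0, 340.0, 95.0, 410.0]
print(f"MTTF = {mean_time_to_failure(observed):.1f} hours")  # 241.2
```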
Defect density and MTTF are correlated - a program with a higher defect density is likely to crash more often than one with a lower defect density. It is important that defects (or bugs) associated with a low MTTF are identified and removed early. Defect density also matters for another reason: it is one of the factors used in estimating the cost and resource needs of the maintenance phase of the software life cycle.
Customer problems - while defect density and MTTF are useful for driving quality improvement from the development engineer's point of view, it is equally important to consider the customer's perspective. All problems reported by the customer, not just valid defects, are counted as customer problems. Usability problems, unclear documentation, and duplicates of valid defects (i.e., defects reported by customers for which fixes or workarounds are already available, but of which those customers are unaware) make up the problem space from the customer's perspective. This metric is expressed as Problems per User Month (PUM): the total number of customer problems reported in a specific time period divided by the number of users, divided by the number of months in the reporting period. PUM is usually calculated every month after the software is released and averaged over the year.
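In code, the PUM formula is a simple ratio; the sketch below uses hypothetical figures to make the arithmetic explicit:

```python
# Illustrative sketch of the PUM formula:
#   PUM = total problems reported / (number of users * months in period)

def problems_per_user_month(total_problems, num_users, months):
    """Customer problems normalized per user per month."""
    return total_problems / (num_users * months)

# Hypothetical reporting period: 250 problems, 4,000 users, 3 months.
pum = problems_per_user_month(total_problems=250, num_users=4000, months=3)
print(f"PUM = {pum:.4f}")  # 0.0208 problems per user-month
```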
To reduce the value of PUM:
- Reduce defect density and increase MTTF.
- Improve usability.
- Provide clear documentation.
- Improve customer training and support.
The fourth metric - customer satisfaction - is measured by customer survey data via a typical 5-point scale:
- Very satisfied
- Satisfied
- Neutral
- Dissatisfied
- Very dissatisfied
Based on the 5-point scale, several types of analysis can be conducted, such as:
- Percent of completely satisfied customers (very satisfied)
- Percent of satisfied customers (very satisfied and satisfied)
- Percent of dissatisfied customers (dissatisfied and very dissatisfied)
- Percent of unsatisfied customers (neutral, dissatisfied, and very dissatisfied)
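These percentages are straightforward tallies over survey responses; the sketch below, using made-up survey data, shows one way to compute them:

```python
# Illustrative sketch: tally 5-point satisfaction survey responses
# into the four percentages listed above. The survey data is made up.
from collections import Counter

def satisfaction_breakdown(responses):
    counts = Counter(responses)
    total = len(responses)

    def pct(*levels):
        return 100.0 * sum(counts[level] for level in levels) / total

    return {
        "completely satisfied": pct("very satisfied"),
        "satisfied": pct("very satisfied", "satisfied"),
        "dissatisfied": pct("dissatisfied", "very dissatisfied"),
        "unsatisfied": pct("neutral", "dissatisfied", "very dissatisfied"),
    }

responses = (["very satisfied"] * 30 + ["satisfied"] * 45 +
             ["neutral"] * 10 + ["dissatisfied"] * 10 +
             ["very dissatisfied"] * 5)
for name, value in satisfaction_breakdown(responses).items():
    print(f"{name}: {value:.0f}%")  # 30%, 75%, 15%, 25%
```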
In-Process Quality metrics are less formally defined and vary depending on the make-up of the software engineering team - the higher the skill level and domain knowledge of the team, the less it relies on in-process quality metrics. However, the goal is to learn how to engineer quality into the process so that end-product quality is consistently high.
In most cases, in-process quality measurement means the detection and tracking of defects (bugs) during software testing. This is costly and not very effective at increasing end-product quality. However, several organizations with well-established, mature processes conduct inspections at every phase of the software development life cycle (SDLC), not just during the testing phase. A defect detected in a later phase is much more expensive to fix than one that is prevented, or detected and fixed, in an earlier phase of the SDLC. Inspections are carried out specifically to identify defects in architecture, interfaces, logic, and documentation at every phase - inception, elaboration, construction, and testing - of the SDLC by reviewing the high-level design, low-level design, code, and documentation.
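To make the cost argument concrete, the sketch below applies hypothetical relative cost-to-fix multipliers to per-phase defect counts; the specific multipliers are illustrative assumptions, since published figures vary by study:

```python
# Illustrative sketch: compare the total cost of defects found per
# SDLC phase using relative cost-to-fix multipliers. The multipliers
# are assumptions for illustration; published figures vary by study.

RELATIVE_COST = {        # cost to fix one defect, relative to
    "inception": 1,      # catching it at inception
    "elaboration": 5,
    "construction": 10,
    "testing": 50,
}

def total_fix_cost(defects_by_phase):
    """Weighted fix cost of all defects, in 'inception-fix' units."""
    return sum(RELATIVE_COST[phase] * count
               for phase, count in defects_by_phase.items())

# Hypothetical defect profiles: most defects caught early vs. late.
early_heavy = {"inception": 40, "elaboration": 30,
               "construction": 20, "testing": 10}
late_heavy = {"inception": 10, "elaboration": 20,
              "construction": 30, "testing": 40}

print(total_fix_cost(early_heavy))  # 890
print(total_fix_cost(late_heavy))   # 2410
```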
Studies such as those cited above, once again, confirm the old adage - quality cannot be inspected into a product!