In the rush to get software products to market quickly, there is a growing trend of releasing them for public use before they have been thoroughly tested, simply by branding them as “beta”. Gmail, for example, was in “beta” for almost five years. Beta software is, by definition, expected to have defects, so I suppose we had better get used to software that doesn’t work as expected, and that is one of the main sources of public frustration.
Peter Denning, one of the great computer scientists of our time, known for his work on OS memory management, said some time ago that public dissatisfaction with the IT industry is at an all-time high. The CHAOS survey by the Standish Group says that one of the primary reasons for this dissatisfaction is that customers find that what they get is not what they asked for. In other words, the software doesn’t do what the customer expects it to do. This is usually the result of inadequate quality control during the development of the software application.
One of the most common and dangerous misconceptions is that quality can be improved by testing or inspecting the software after the application has been built. Testing will certainly uncover defects, but it will do nothing to improve the quality of the software unless a root cause analysis is performed and the underlying cause of the defect, not just the symptom, is fixed. The later in the software development life cycle a defect is found, the more expensive it is to fix, especially if the problem stems from architectural deficiencies rather than coding bugs.
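To make the cost escalation concrete, here is a toy calculation. The phase multipliers are illustrative assumptions in the spirit of commonly cited industry ranges, not measurements from any specific study:

```python
# Illustrative relative cost of fixing the same defect, depending on the
# phase in which it is found. The multipliers are assumptions chosen for
# the sake of the example, not data from a specific study.
RELATIVE_FIX_COST = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "production": 100,
}

def fix_cost(base_cost, phase_found):
    """Estimated cost of fixing a defect discovered in the given phase."""
    return base_cost * RELATIVE_FIX_COST[phase_found]

# A defect that costs $200 to correct during requirements review could
# cost a hundred times that once the system is in production.
print(fix_cost(200, "requirements"))  # 200
print(fix_cost(200, "production"))    # 20000
```

Under these assumed multipliers, a design-level flaw that survives into production is two orders of magnitude more expensive than one caught on paper, which is the point of the argument above.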
So, what is software quality? According to the American Society for Quality (ASQ), it is conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics such as maintainability, operability and modularity. There are many other definitions of quality. The IEEE 610.12 standard glossary, for instance, defines quality as the degree to which a system meets specified requirements and customer or user expectations. I think the best definition was given by Philip Crosby, the “Quality Guru”, who said simply that “Quality is conformance to requirements.”
From a customer’s perspective, the benefit of software quality is increased satisfaction through fewer errors, better usability and software that meets functional expectations. From an organization’s perspective, the benefits are increased customer satisfaction plus reduced maintenance and operational expenses thanks to a more stable application.
You cannot manage software quality if you cannot measure it, and you cannot measure something that has not been defined. Quality metrics and predictive control processes therefore have to be clearly defined for every software application before development starts, not as an afterthought. Before I go any further, let me address a myth perpetuated by many: that “following predictive processes in software development increases cost and takes a project much longer to finish.” Several studies show that this is simply not true. In fact, companies that have embraced the Capability Maturity Model (CMM) have, according to published reports, improved software quality by as much as 130% and increased productivity by 62%. That means reduced development and operational costs, and better customer satisfaction over the lifetime of the software application.
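As a minimal illustration of what “defining metrics before development starts” can look like, here is a sketch of two widely used quality metrics. The metric formulas are standard; the target thresholds and the release-gate idea are my own illustrative assumptions:

```python
# Sketch: two common software quality metrics, defined up front so they
# can be tracked from the first build onward. Thresholds are illustrative.

def defect_density(defects_found, kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

def defect_removal_efficiency(found_before_release, found_after_release):
    """Fraction of all known defects caught before the software shipped."""
    total = found_before_release + found_after_release
    return found_before_release / total if total else 1.0

density = defect_density(defects_found=45, kloc=30)   # 1.5 defects/KLOC
dre = defect_removal_efficiency(90, 10)               # 0.9

# An illustrative release gate, agreed on before coding begins:
# ship only if both metrics meet their targets.
assert density <= 2.0 and dre >= 0.85
```

The point is not these particular numbers but that the targets exist, in writing, before the first line of code, so “done” is measurable rather than negotiable.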
As I mentioned earlier, it is quite expensive to find and fix defects after a software application has been built. How can we change this?
1. Get detailed requirements and document them in a language that is universally understood before a single line of code is written. The Unified Modeling Language (UML), from the Object Management Group, is the notation I would recommend for documenting the results of the requirements-gathering phase, regardless of the methodology used to gather or implement the requirements.
2. Create a prototype that fully demonstrates the user experience and let the customer test-drive it. This dramatically reduces the chance of discovering usability defects after the application has been built. Plenty of tools are available for creating such prototypes quickly.
3. Create a detailed design of all the application layers before starting the implementation phase. If the design is inherently flawed, even the best-written code cannot improve the quality of the software. Writing code is meaningless before a detailed architectural design of the system has been conceived and thoroughly reviewed to ensure that it meets the functional and operational requirements of the product. In fact, there are many tools that automatically generate code from design specifications.
4. Get QA (quality assurance) people involved at the beginning of the project. This ensures that proper metrics, measurement and validation processes are in place to define the QC (quality control) tasks that will have to be incorporated into the project plan.
5. Run the project under a seasoned manager who has the management and technical skills to execute, monitor and manage the project according to the plan.
6. Most importantly, make sure that the plan includes a series of interim deliverables that can be tested and validated not by programmers but by the customer or end user. In other words, transparency and keeping the customer involved throughout the project are essential.
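One way to make an interim deliverable testable by the customer rather than by programmers is to phrase the acceptance checks in the customer’s own vocabulary. A minimal sketch, in which the order-entry scenario, function names and prices are all hypothetical:

```python
# Hypothetical acceptance checks for an interim deliverable of an
# order-entry system, named so a customer can read and confirm them.
# Prices are in integer cents to avoid floating-point surprises.

def place_order(items):
    """Stand-in for the deliverable under test (hypothetical)."""
    if not items:
        raise ValueError("an order must contain at least one item")
    return {"status": "confirmed",
            "total_cents": sum(price for _, price in items)}

def test_customer_can_place_an_order():
    order = place_order([("widget", 999), ("gadget", 2000)])
    assert order["status"] == "confirmed"
    assert order["total_cents"] == 2999

def test_empty_orders_are_rejected():
    try:
        place_order([])
        assert False, "empty order should have been rejected"
    except ValueError:
        pass  # rejected, as the customer expects

test_customer_can_place_an_order()
test_empty_orders_are_rejected()
```

Because each check reads as a sentence the customer would say (“I can place an order”, “empty orders are rejected”), sign-off on an interim deliverable becomes a concrete, repeatable event rather than a demo impression.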
Someone once said that “in the computer world, hardware is anything you can hit with a hammer; software is something you can only curse at when it doesn’t work.” Hopefully, with a planned approach to the science of software development, we won’t have to curse as often.