Software Quality in Growing Development Organizations (Part 1)
“We never used to have problems with software quality - I don’t know what happened…”
We frequently get asked how to manage software quality. Often this question comes up after a successful development team has grown. It’s relatively easy to manage the quality of small teams (say, 10 or fewer developers) delivering one or two projects: just staffing the team with good talent goes a long way toward managing quality. As teams grow and multiple projects are delivered simultaneously, managing quality becomes a much more active task. Quality always begins with a talented team, but even the most talented and well-intentioned developers may deliver a low-quality product without the right support structures in place.
As a result, “managing quality” means having a system of controls and metrics in place that helps you measure quality over time. These metrics, used in conjunction with an examination of your processes and the inputs to the development process (usually requirements or user stories), will help you uncover the root cause of quality problems. Understanding the root cause is critical to solving quality problems, because the root cause is often different from the symptom. Poor code quality may sometimes be caused by an unmotivated or poorly trained developer. More often, though, it’s a symptom of a deeper problem like an understaffed project or an unrealistic schedule.
Where To Start
Software quality has to be managed throughout the delivery life cycle, and it is best managed through a combination of tools and metrics.
There are three key points at which quality can be managed: during development, at the point where code is checked in, and during the QA process. It’s important to note that tools play only a supporting role in quality. The most expensive golf club can’t turn a beginning golfer into a pro, and software tools are no different. With that in mind, let’s take a look at a couple of key metrics that help manage quality. In this post, I’ll start by discussing how to measure opportunities for defects over time. In my next post, I’ll cover code quality, QA metrics, and putting the whole picture together.
Measuring Opportunities for Defects over Time
In order to measure quality, you need to be able to quantify defects relative to the opportunity for those defects to occur. Manufacturing analogies are often inappropriately applied to making software, but this is one area where they make sense: if someone told you a factory production line created 10,000 defective units, what does that mean? It sounds like a big number, but is it 10,000 units out of 20,000 (a 50% defect rate) or out of a billion (a 0.001% defect rate)? Were those units all produced in one (maybe particularly bad) day or over the span of multiple weeks? The answers all depend on knowing the number of opportunities for defects, and each answer changes what the data means.
Measuring opportunities for defects to occur in software development is more challenging because there is no perfect direct measurement. Instead, we use proxies based on effort, such as:
- Development hours spent
- Estimated story points
- Total delivery hours (development + infrastructure + QA + other supporting resources)
The goal of these measurements is not for each individual measurement to be precise but for their meaning to be relatively consistent over time. Whatever the effort metric, using it as the denominator over time allows you to develop a trendline by measuring the number of defects divided by the number of opportunities for those defects to occur. Usually it is best to measure these opportunities for defects on some coarse-grained timescale, like development hours spent per sprint or per release.
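To make the trendline idea concrete, here is a minimal sketch in Python. It assumes you already track defect counts and development hours per sprint (the sprint names and numbers below are purely illustrative), and uses development hours as the opportunity denominator, normalized to defects per 1,000 hours so the figures are easy to compare across sprints of different sizes.

```python
# Sketch: a defect-density trendline using development hours as the
# proxy for opportunities for defects. All data below is illustrative.
sprints = [
    {"name": "Sprint 1", "defects": 12, "dev_hours": 400},
    {"name": "Sprint 2", "defects": 9,  "dev_hours": 360},
    {"name": "Sprint 3", "defects": 18, "dev_hours": 450},
]

def defect_density(defects, dev_hours, per=1000):
    """Defects per `per` development hours (the opportunity denominator)."""
    return defects / dev_hours * per

# One point per sprint; plotted over time, these form the trendline.
trendline = [
    (s["name"], round(defect_density(s["defects"], s["dev_hours"]), 1))
    for s in sprints
]
print(trendline)  # e.g. [('Sprint 1', 30.0), ('Sprint 2', 25.0), ('Sprint 3', 40.0)]
```

The same calculation works with story points or total delivery hours as the denominator; what matters is picking one effort metric and applying it consistently from sprint to sprint, so that movement in the trendline reflects real changes in quality rather than changes in how you measured.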