Reflections on Using Quality Views

When you work in engineering or operations, you become intimately familiar with the challenges of technical debt. It can be difficult, however, to communicate the full cost of technical debt to others in the organization, particularly to people who are non-technical. Communicating the latent risks of software systems that are not directly customer-facing is especially hard. For these systems, it is easy to overlook the technical debt until it becomes crippling to the business.

Near the end of last year, I detailed my approach to using quality views to communicate software quality and evolution. The approach that I developed, inspired in part by Michael Feathers, is a way to represent complex systems, or systems that are not customer-facing—like infrastructure platforms—to people who are non-technical, or not intimately familiar with the software. My goal was to represent the system holistically, evaluate its effectiveness in meeting business requirements, and demonstrate how it was changing over time. This technique has been particularly effective for me in highlighting and tackling technical debt, through improved communication.

Earlier this year, I gave a presentation on quality views at QCon London, in the Dark Code: The Legacy/Tech Debt Dilemma track hosted by Michael Feathers. The more visual and interactive format of a conference presentation allowed me to elaborate on some of the examples that I provided in my original blog post. I was also recently on the InfoQ Culture and Methods podcast, to have a follow-on discussion about quality views. My original blog article, as well as my presentation, generated some valuable discussion and feedback. In addition, I have now had the luxury to observe a few other people adopt this technique. This article will expand on some of the questions and discussions that surfaced at QCon, as well as provide some commentary based on my observations of others adopting this technique.

Quality Versus Features

A reaction that I have encountered a number of times—and one that downright confuses me—is the idea that quality and features are in conflict. Some people believe that it is either-or, zero-sum. These people no doubt believe that improving the quality of a software system is a noble initiative, but they make comments like, "We need to balance these quality objectives with business objectives, aspects of delivery, shipping a certain amount of product, or some measure that management cares about". Some people suggest calling out features or business value in a separate category of the quality view.

My intention was never to pit quality against features. My intention was for the technique to represent the system holistically, to highlight technical debt, and to ensure that the risks associated with it are balanced against the product-development objectives of the organization. The word feature, funnily enough, is a synonym of the word quality.

The ISO 8402-1986 standard defined quality as:

The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.

This standard was superseded by the ISO 9000:2015 standard, which defines quality as the:

Degree to which a set of inherent characteristics of an object fulfils requirements.

Note that I did not invent the name quality views; it was inspired by Michael Feathers.[1] Perhaps quality views is not the best name—the term quality may be too loaded. Regardless, my intent in using the name was in the spirit of the holistic definitions above, rather than some narrow definition of quality that is ignorant of business objectives and risks, or customer requirements.

I think that the most comprehensive description for the objective of quality views is the following paragraph from Michael Feathers' writing on symbiosis, which I read a few months after publishing my original article:

We can grade areas of our systems and see how they change over time. We can have continual conversations about the quality and readiness of our systems not just at the development level, but at the organization level. We can feed that information into our work and make better choices.

I am always baffled when people believe that our focus must be squarely on delivering new features and that investing in quality will compromise product development and delivery. The opposite is true. As Uncle Bob Martin says:

The only way to go fast is to go well.

Another objection that I have heard is, "Why would we invest in improving the quality of a component that delivers little or no value, or that only a few customers use?" I have never suggested that one invest in improving the quality of such a component. In fact, if no value is being derived from the component, focus on eliminating it, rather than improving its quality. This does not mean that representing the component as being of poor quality is of no value. In fact, the key example in my original blog post—the moment when I knew quality views were an effective technique—was a valuable discussion of the risks associated with a poor-quality, legacy component, a component that we chose not to invest in improving. Continuing to highlight the poor quality of such a component is effective for reminding everyone of these explicit trade-offs.

Quality views are not about features versus quality. They arise from the understanding that in order to deliver high-quality systems, business objectives and customer requirements must be met, or exceeded. It is fine to keep focusing on features, but if those features cannot be realized reliably by customers, then nothing else matters.

Objective Criteria

Many people appreciate the value of using quality views, but rather than embrace qualitative measures to contrast and evaluate different components—which is the approach that I used—people are eager to apply more quantitative measures, like the percentage of code with unit-test coverage. I understand the motivation—people imagine objective criteria can be applied more uniformly and, therefore, used as a means for comparing different parts of the system, or even making comparisons across projects and teams, such that the most impactful work can be prioritized. However, I think using this approach is as futile and dangerous as comparing other quantitative metrics—like velocity, defect count, or lines of code produced—across disparate projects and teams.

Objective measures are no panacea. It is not hard to envision a component with exhaustive unit-test coverage that is nevertheless difficult to evolve and deploy, and that fails to meet business requirements for performance, high availability, or security. This is why quality needs to be represented more comprehensively, often with qualitative measures. Some criteria can, perhaps, be applied more uniformly, like evaluating systems with the STRIDE technique commonly used in security threat modeling, which I discussed in my QCon talk. But even seemingly uniform measures like this require evaluation and discussion. For example, one serious security threat in a critical component of the system may be deemed more important to address than a large number of flaws in a component that is being deprecated and is only used by a small number of customers, even though the quality view represents the more critical component as being of higher overall quality.
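To make this concrete, here is a minimal sketch, in Python, of one possible way to model grading a single component across several qualitative dimensions, rather than reducing it to one quantitative score. The component, dimensions, and grades are all hypothetical illustrations, not part of my original approach.

```python
from enum import Enum


class Grade(Enum):
    """Coarse, ordinal grades; the discussion behind them matters
    more than the values themselves."""
    POOR = 1
    ADEQUATE = 2
    GOOD = 3


# A hypothetical component, graded across hypothetical dimensions.
billing = {
    "testing": Grade.GOOD,      # exhaustive unit-test coverage...
    "evolution": Grade.POOR,    # ...yet difficult to change
    "deployment": Grade.POOR,   # ...and difficult to deploy
    "performance": Grade.POOR,  # fails business requirements
    "security": Grade.ADEQUATE,
}
```

Judged by test coverage alone, this component looks healthy; graded across dimensions, it clearly is not.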

People are often attracted to objective measures because they believe that, "If you can’t measure it, you can’t manage it." This quote is attributed to W. Edwards Deming, an engineer, statistician, management consultant, and author, most famous for his impact on the manufacturing industry in Japan after World War II, through his methods for continual quality improvement. The quote, however, is taken out of context. What Deming actually said was the exact opposite:[2]

It is wrong to suppose that if you can’t measure it, you can’t manage it—a costly myth.

I think the most valuable aspects of employing quality views are the discussions that take place in forming the quality dimensions, evaluating the various components, and prioritizing the work. It is the qualitative aspects, involving our intuition and experience, that are more valuable than any set of quantitative metrics. This is not to say that quality views are a replacement for quantitative metrics. Quality views are about facilitating discussions around business value and preparedness. Measures like cycle time or the size of the work backlog may still be valuable quantitative metrics in other contexts.

Lastly, I have observed some misunderstanding with regard to the criteria that I used for evaluation. The categories that I developed are not prescriptive—they were simply the aspects most important to my team at the time. The criteria may be completely different for another system or objective, and they may also evolve over time. As I emphasized above, having an ongoing discussion around the evaluation criteria is a valuable exercise in itself. As my colleague insightfully observed:

Quality views should be the start of a discussion, not the end of the conversation.

There is also nothing preventing one from implementing multiple views of quality, each with a unique set of criteria, to represent different aspects of the product, component, service, or deliverable.
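Continuing the hypothetical sketch from above, and again purely as an illustration, multiple views are simply separate sets of criteria applied to the same component; for example, a security-focused view using the six STRIDE categories alongside a delivery-focused view:

```python
# Reuses the hypothetical Grade enum from the earlier sketch.
security_view = {  # criteria drawn from the six STRIDE categories
    "spoofing": Grade.ADEQUATE,
    "tampering": Grade.POOR,
    "repudiation": Grade.ADEQUATE,
    "information disclosure": Grade.POOR,
    "denial of service": Grade.ADEQUATE,
    "elevation of privilege": Grade.GOOD,
}

delivery_view = {  # a different set of criteria for the same component
    "testing": Grade.GOOD,
    "deployment": Grade.POOR,
    "documentation": Grade.ADEQUATE,
}
```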

Implementation

Finally, I want to provide some observations on implementing quality views within a team or organization. I have seen some managers intrigued by this technique and keen to introduce it into their organizations. For this technique to ultimately have an impact, it certainly needs to be sponsored, or recognized as valuable, by management. But if managers impose it on the engineering team, the team will be skeptical, or even offended, and will severely discount the initiative, making it much less effective, or even a total failure.

Since quality views are ultimately about facilitating an ongoing discussion across different parts of the organization, they cannot be developed by an engineering team in isolation. However, I think they will be much more effective if they are implemented bottom-up, rather than top-down. Quality views, in some ways, are a means to empower the engineering team to highlight concerns to the broader organization—concerns that management may not have considered, or even understand.

If you do want to take a more objective approach to applying uniform evaluations, I think these are best performed by an external and impartial team that supports or complements other teams within the organization, rather than imposed on a project as a fixed set of evaluation criteria. This auxiliary team could be a team focused on quality assurance, operations, maintenance, or security assessments. Any one of these teams would be considered a peer of the engineering team in terms of evaluating quality.

A question that I have encountered a few times is, "When should I update the quality view?" I would not necessarily focus on sprints or business quarters. I would instead focus on significant milestones—perhaps meeting a contractual service-level agreement, or standardizing on a technology for deployment and runtime—and update the quality views in relation to these milestones.
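As an illustration only, milestone-driven updates could be recorded as dated snapshots of the view, so that change over time lines up with events the whole organization recognizes. The milestones, dates, and data model below are invented, building on the earlier hypothetical sketch:

```python
from datetime import date

# Each snapshot captures the grades of every component at a milestone.
# Here, the deployment grade improves once the second milestone is met.
history = [
    (date(2017, 3, 1), "initial assessment",
     {"billing": dict(billing)}),
    (date(2017, 9, 30), "standardized on deployment technology",
     {"billing": {**billing, "deployment": Grade.GOOD}}),
]
```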

As a final point on implementation, the quality of a component will not necessarily be ever-improving. From time to time, I would expect components in the quality view to regress. For example, a new business objective may be introduced to move some components to a different platform, perform certain static analyses before deployment, or meet more stringent service-level agreements. Perhaps you even represent a component that is not at the latest patch level, or has not been deployed recently, as having regressed in quality, as a means of encouraging the routine application of security patches and the regular exercising of deployment capabilities, ideas that Rob Witoff discussed in his excellent QCon London presentation.
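A rule like that might look like the following sketch, once more reusing the hypothetical Grade enum from above; the 90-day threshold is an assumption for illustration, not a recommendation:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # assumed threshold; tune to your context


def deployment_grade(last_deployed: date, today: date) -> Grade:
    """Regress the grade when deployments are not exercised regularly."""
    return Grade.POOR if today - last_deployed > STALE_AFTER else Grade.GOOD
```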

Conclusion

Using quality views has continued to be an effective technique for representing the systems that I work on holistically, facilitating conversations around business priorities and risks, and making progress on technical debt. It has been very rewarding to see other people adopt this technique and find it valuable. If you have experimented with quality views and are interested in sharing your experiences, please reach out to me.


  1. When I asked Michael about the origin of the name quality views, he did not remember coining it, but see my QCon presentation for evidence that he was the source of the name. ↩︎

  2. A variant of this quote also seems to be misattributed to the management consultant, educator, and author Peter Drucker, in a similar context. ↩︎