Cutter-IT Advisory Notes

Measurement Is No Silver Bullet

December 19, 1999

When it comes to measurement, the IT industry acts strangely. While other industries depend on measurement, tracking, and control as keys to profitability, the IT industry has yet to embrace measurement on a widespread basis. Even when it recognizes the merits of software measurement, the expectations for it are often unrealistic. Software practitioners want a silver-bullet metric that can answer any development question and do it to several-decimal-point accuracy. Predictably, software measurement doesn’t match these expectations and, thus, is usually abandoned before it can deliver a return on investment.

There are a number of things measurement practitioners can do to realign expectations and gain software measurement benefits for their organization:
1. Follow the Goal-Question-Metric approach to software measurement introduced by Victor Basili of the University of Maryland. This approach forces companies to clearly identify their strategic goals for metrics and to pose questions that will track whether or not those goals are being met. Only then are the metrics needed to answer the questions identified, and data-collection mechanisms put into place. The resulting metrics necessarily depend on the specific goals and questions of the organization. The SEI Capability Maturity Model for Software contains a number of Level 3 key process areas that can form the basis of an organization's goals, questions, and metrics.

(Note: A new book featuring a foreword by Victor Basili was recently published by McGraw-Hill: “The Goal/Question/Metric Method” by Rini van Solingen and Egon Berghout.)
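The goal-first discipline of GQM can be sketched in code: metrics are never picked directly, but derived from the questions attached to a goal. The goal statement, questions, and metric names below are hypothetical examples for illustration, not prescribed by the GQM method itself.

```python
# Minimal sketch of the Goal-Question-Metric hierarchy.
# All goal/question/metric names here are made-up examples.
from dataclasses import dataclass, field

@dataclass
class Goal:
    statement: str
    # Each question maps to the metric names needed to answer it.
    questions: dict = field(default_factory=dict)

    def metrics(self):
        """Metrics fall out of the questions; they are never chosen first."""
        return sorted({m for ms in self.questions.values() for m in ms})

goal = Goal(
    statement="Reduce post-release defects in the billing subsystem",
    questions={
        "How many defects escape to production per release?":
            ["defects_found_post_release", "releases_shipped"],
        "Is defect density improving release over release?":
            ["defects_found_post_release", "functional_size_fp"],
    },
)

print(goal.metrics())
```

Note that the same metric (here, post-release defect counts) can serve several questions, but no metric appears without a question that needs it.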

2. Communicate that there is no silver-bullet software metric, just as there is no silver-bullet accounting metric. Defects, functional size, project duration, and work effort each measure a different aspect of software development, and they are not interchangeable. No single measure or combination metric will satisfy all goals or answer all measurement questions — one must choose the metric suitable for each specific question.

3. Learn about the available metrics and what they mean before implementing them in an organization. For example, work effort is a function of many variables, including software size, implementation technology, development tools, skills, hardware platforms, degree of reuse, and tasks to be done, among others. As such, no single variable can accurately predict work effort, yet there is often an expectation that a single variable (for example, degree of reuse) can do so.

4. Plan a measurement program by using metrics and measures in the manner for which they are intended, and ensure that there is a common understanding of the chosen measures. For example, functional size reflects the size of the software based on its functional user requirements, not the physical size of the software. (Physical size is often expressed in lines of code.) Together with other variables, functional size can be used as a technology-independent measure of software size to predict effort in software estimation models. However, it is not the right measure for predicting storage device needs — those depend on the technology, the physical space taken up by the software, and the volume of data.
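The role of functional size in effort estimation can be illustrated with a simple power-law model in the spirit of COCOMO-style estimators, where size is one input among several. The coefficients and parameter names below are invented for the example; a real model would calibrate them from the organization's own project data.

```python
# Illustrative only: functional size combined with other variables to
# predict effort. All coefficients are hypothetical, not calibrated.

def estimate_effort_hours(size_fp: float,
                          productivity_factor: float = 8.0,
                          scale_exponent: float = 1.05,
                          reuse_fraction: float = 0.0) -> float:
    """Estimate work effort from functional size plus other variables.

    size_fp             -- functional size in function points
    productivity_factor -- hours per function point at this site (assumed)
    scale_exponent      -- >1 models a diseconomy of scale on large projects
    reuse_fraction      -- share of functionality delivered through reuse
    """
    effective_size = size_fp * (1.0 - reuse_fraction)
    return productivity_factor * effective_size ** scale_exponent

# Reuse lowers the effective size, and therefore the estimate.
print(estimate_effort_hours(100))
print(estimate_effort_hours(100, reuse_fraction=0.2))
```

The point of the sketch is structural: functional size alone does not determine effort; productivity, scale, and reuse (and in practice many more variables) enter the model alongside it.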

5. Remember that the accuracy of a metric is limited by the least accurate measure it involves. For example, if defects are reported as whole numbers and are used in defect density (defects divided by the software size in function points), the result cannot be accurate to several decimal places.
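This precision limit is easy to demonstrate. In the sketch below (the numbers are made up for illustration), a change of a single defect — the smallest change the input can express — moves the density by a full 1/size step, so any digits finer than that step carry no real information.

```python
# Sketch: defect counts are whole numbers, so defect density inherits
# their coarseness. Example figures are hypothetical.

def defect_density(defects: int, size_fp: int) -> float:
    """Defects per function point; both inputs are whole numbers."""
    return defects / size_fp

low = defect_density(12, 250)
high = defect_density(13, 250)

# The smallest resolvable step is 1/250 = 0.004, so reporting the
# density as, say, 0.04800 overstates the precision of the inputs.
print(low, high, high - low)
```

A sensible convention is to report the density rounded to the resolution of its coarsest input rather than to arbitrary decimal places.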

6. Use common sense and statistics to correlate collected data, and question figures that seem out of line. Don’t accept data at face value without verifying its consistency or accuracy. Many companies collect work effort data on completed projects, but the definition of project work effort can vary widely across teams (e.g., overtime recorded or not recorded, resources included, work breakdown structure, commencement and finish points, etc.). Be careful not to compare data that appears comparable because of common units (e.g., hours) but is actually based on different measurement criteria. For example, two projects may report 100 development hours, but one included overtime and user training hours while the other did not. Although the units are the same, the hours are not comparable.
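One way to guard against this trap is to record the collection criteria alongside each effort figure and refuse comparisons when the criteria differ. The record fields below are illustrative assumptions; a real scheme would carry whatever criteria the organization's effort definition actually varies on.

```python
# Sketch: effort figures carry their measurement criteria, and hours are
# only compared when the criteria match. Field names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class EffortRecord:
    hours: float
    includes_overtime: bool
    includes_user_training: bool

def comparable(a: EffortRecord, b: EffortRecord) -> bool:
    """Same units are not enough; the collection criteria must match too."""
    return (a.includes_overtime == b.includes_overtime
            and a.includes_user_training == b.includes_user_training)

# Two projects, both reporting 100 hours -- but under different criteria.
p1 = EffortRecord(hours=100, includes_overtime=True, includes_user_training=True)
p2 = EffortRecord(hours=100, includes_overtime=False, includes_user_training=False)

print(comparable(p1, p2))
```

Making the criteria part of the data, rather than tribal knowledge, is what turns "100 hours" from an ambiguous number into something a cross-team comparison can safely use.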

These are a few of the factors, both human and technical, that can lead to software measurement success. There is a great deal to be gained by tracking and controlling software development through measurement — if only companies would consider what various measures can provide, rather than seeking a non-existent silver bullet to solve all their measurement needs.