Benchmark secret #2: bad standards = pointless comparisons
- Andrew Gibson
- Nov 1, 2019
- 2 min read

I have a benchmark study on my desk. It's not one I created; it's one I contributed to as a client some years ago. The data request landed in my inbox with a requirement for over 100 separate measures of performance. I'm good at wrangling data, and while I have forgotten the details in the intervening years, I can guarantee that these metrics were not precisely specified and had to be pulled from multiple systems with differing scope. I gave it my best shot at returning a set of aggregate-level metrics that fairly represented the business unit, but I failed.
How do I know I failed? When the benchmark study came back and I examined the charts (which would deservedly rate an F in any data viz class), I could see a number of metrics for which my business unit's values were not just outliers but could not possibly have been built to the same definition. There were similar issues in other metrics for our industry competitors. Such is the consistency of "industry standard" metrics.
Get three demand planners together to talk forecast accuracy and they will all talk in percentages. This gives the high-level impression that they must be reporting the same thing, but dig deeper and you will find that they forecast at different levels of geography, at different levels of the product hierarchy, over different time horizons, and use different calculations to get the aggregate result. So what value is there in comparing the end results? Absolutely none.
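To make this concrete, here is a small illustrative sketch (the SKUs, numbers, and formulas are my own assumptions, not from the study): the same forecasts and actuals scored three common ways, each of which gets reported as a single "forecast accuracy %".

```python
# Hypothetical data: three SKUs, one forecast and one actual each.
forecast = {"sku_a": 100, "sku_b": 50, "sku_c": 10}
actual   = {"sku_a":  80, "sku_b": 60, "sku_c": 30}

# Definition 1: accuracy = 1 - MAPE, averaging the percentage error per SKU.
ape = [abs(forecast[k] - actual[k]) / actual[k] for k in actual]
acc_mape = 1 - sum(ape) / len(ape)

# Definition 2: accuracy = 1 - WMAPE, weighting errors by actual volume.
abs_err = sum(abs(forecast[k] - actual[k]) for k in actual)
acc_wmape = 1 - abs_err / sum(actual.values())

# Definition 3: accuracy computed at the aggregate level only,
# so over- and under-forecasts cancel out before scoring.
tot_f, tot_a = sum(forecast.values()), sum(actual.values())
acc_agg = 1 - abs(tot_f - tot_a) / tot_a

print(f"1 - MAPE:   {acc_mape:.0%}")   # 64%
print(f"1 - WMAPE:  {acc_wmape:.0%}")  # 71%
print(f"aggregate:  {acc_agg:.0%}")    # 94%
```

Same data, three defensible calculations, and the reported "accuracy" spans 64% to 94% — before geography, hierarchy level, or horizon even enter the picture.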
Get three transportation managers to talk on-time delivery and you will also get percentages that sound like they describe the same measure. But what constitutes on-time for an individual load? How do they get this data (given that they are not there when the load arrives)? How do they then aggregate to a single value? All of these can vary.
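The same ambiguity shows up at the level of a single load. A quick sketch, with an invented appointment and arrival time and three plausible "on-time" rules:

```python
from datetime import datetime, timedelta

# Hypothetical load: 2:00 pm appointment, truck checks in at 2:20 pm.
appointment = datetime(2019, 11, 1, 14, 0)
arrival     = datetime(2019, 11, 1, 14, 20)

# Rule 1: strictly at or before the appointment time.
on_time_exact = arrival <= appointment

# Rule 2: within a +/- 30 minute window of the appointment.
on_time_window = abs(arrival - appointment) <= timedelta(minutes=30)

# Rule 3: arrived on the scheduled calendar day.
on_time_sameday = arrival.date() == appointment.date()

print(on_time_exact, on_time_window, on_time_sameday)  # False True True
```

One load, three answers — and that is before asking whose clock recorded the arrival, or how the per-load flags get rolled up into one percentage.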
The key mistake here is to let companies self-report poorly defined metrics at a high level. There is no need to assume malicious intent or incompetence for this to turn out badly: even with good intent on the part of the company supplying data, there is no true industry standard for most performance metrics of interest. The only way around this is to work with clear data standards for low-level data and let the benchmarking analysts do the math for everyone.
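As a sketch of what "let the analysts do the math" could look like (the schema, window, and company data below are illustrative assumptions): each company submits low-level records to an agreed layout, and one shared definition is applied to everyone.

```python
from datetime import datetime, timedelta

# Agreed low-level schema: one (appointment, arrival) pair per load.
submissions = {
    "company_a": [(datetime(2019, 10, 1, 9, 0), datetime(2019, 10, 1, 9, 45)),
                  (datetime(2019, 10, 2, 13, 0), datetime(2019, 10, 2, 12, 50))],
    "company_b": [(datetime(2019, 10, 1, 8, 0), datetime(2019, 10, 1, 8, 10)),
                  (datetime(2019, 10, 3, 15, 0), datetime(2019, 10, 3, 16, 30))],
}

def on_time(appointment, arrival, window=timedelta(minutes=30)):
    """The single definition every company is scored against."""
    return arrival <= appointment + window

# The analyst computes the benchmark metric the same way for everyone.
for company, loads in submissions.items():
    rate = sum(on_time(appt, arr) for appt, arr in loads) / len(loads)
    print(f"{company}: {rate:.0%} on time")
```

The companies never compute the percentage themselves; the definition lives in one place, so the resulting comparison is actually apples to apples.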