As more tools become available to businesses for making data-driven decisions, many people are tempted to invent their own metrics or KPIs to measure success and progress. This is especially true at software companies, where many leaders have an engineering mindset and enjoy devising metrics of their own.
While many people are aware of fallacies to watch out for when analyzing data, few pay attention to them when designing new metrics to guide their teams. Here are three I’ve learned to watch out for.
Wikipedia defines survivorship bias as “the logic error of concentrating on the people or things that made it past some selection process and overlooking those that did not, typically because of their lack of visibility”. The canonical example is from WWII — someone looking at the damage on warplanes that flew back to base might falsely conclude that the most heavily damaged parts of the planes are the areas that require fortification.
In software engineering, we might be tempted to measure the number of bugs reported by customers alone as an indication of the quality of the product. However, there’s a survivorship bias built into this: only customers who were not frustrated by the initial experience and have found sufficient value in the product will raise bugs. Many users may abandon whatever workflows they had in mind, or worse, switch to another product before they would ever file a bug report. The areas of the product with the most reported bugs might not actually be the areas that require the most focus.
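A quick simulation can make this bias concrete. The sketch below is purely illustrative — the area names, defect rates, and reporting probabilities are invented for the example, not drawn from real product data. It models two product areas where the buggier area frustrates users into churning before they report anything, so raw ticket counts point at the wrong area.

```python
import random

random.seed(0)

# Hypothetical illustration of survivorship bias in bug-report counts.
# "onboarding" has a higher true defect rate, but users who hit a bug
# there tend to churn silently; invested users of "reports" file tickets.
AREAS = {
    "onboarding": {"bug_rate": 0.30, "report_if_hit": 0.10},
    "reports":    {"bug_rate": 0.10, "report_if_hit": 0.60},
}

actual = {area: 0 for area in AREAS}    # bugs users actually hit
reported = {area: 0 for area in AREAS}  # bugs that become tickets

for _ in range(10_000):  # simulated user sessions per area
    for area, p in AREAS.items():
        if random.random() < p["bug_rate"]:
            actual[area] += 1
            if random.random() < p["report_if_hit"]:
                reported[area] += 1

# "onboarding" generates roughly 3x the real bugs, yet "reports"
# produces roughly 2x the tickets — the metric inverts the priority.
print("actual bugs hit:", actual)
print("reported bugs:  ", reported)
```

The ticket counts alone would send the team to polish the area that is already healthiest, which is exactly the warplane mistake in product form.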
According to Wikipedia, the “cobra effect occurs when incentives designed to solve a problem end up rewarding people for making it worse.”
If engineering leaders define metrics to track the performance of their team without the appropriate…