It’s relatively easy to collect information, aggregate it and apply algorithms to derive insights, but much harder to understand where the data comes from, what type of analysis is being applied and what types of error are involved. Insights, even when powered by impressive technology, never come easily. To get full value from data, we must understand its limitations.
Yet one study found that 65% of a retailer’s inventory data was inaccurate. In other cases, products are in the store, but not where they are supposed to be. These discrepancies often lead to inaccurate assessments of product demand, because customers are looking for products that they can’t find. The world can be a very messy place.
With such sophisticated systems, it seems unbelievable that these kinds of errors would be so pervasive. However, when you treat people as mere data points, pay them poorly, neglect their working conditions and cut back on training to save money, data quality suffers. Is it any wonder that overworked, poorly treated, ill-trained employees make mistakes?
Cognitive scientists call this problem the availability bias, and it is data’s Achilles heel. We tend to give more weight to information that is readily available, such as numbers floating across a computer screen back at headquarters, and less to information that is harder to come by, like the day-to-day realities of store operations.
Yet with all the wonderful tools that we now have to capture, store and analyze data, we often forget that we still need to do our part. The great gift of the digital age is that we can now collaborate effectively with immensely powerful machines and extend our human faculties farther than McLuhan ever imagined. Still, all of that is for naught if we cut ourselves out of the process.
The faster and more acrobatically the computer can perform, the greater the necessity that its presumed beneficiaries must first think about what it’s all for. (Thinking About Management)