Reflections on how and why…we work and live.

One of the defenses that corporations have used when being sued is something referred to as the “IT burden defense”, the basic premise being that the effort required to find information sought by the court is an unreasonable burden on the defendant’s presumably beleaguered IT departments. It’s an interesting premise.

Driven at least partly, one presumes, by Sarbanes-Oxley, a company named Index Engines has created a “Unified Discovery Platform” that indexes all the information across an enterprise at a rate of roughly “1TB per hour per node.”

Index over a billion objects per system? It seems that the need to clean out the digital garage is disappearing – just keep everything.

Still, it seems to me that there may be some good reasons for not saving everything in my unbounded virtual garage.

The foundational assumption behind the efforts to “save everything” is that if the “saving” is cheap and the “finding” is easy, why not save everything? Which seems reasonable, until you start thinking a little deeper about the third leg of this information stool: looking (I view “searching” as a special case of “looking”).

When searching in the virtual world you’ll always find something – whether it’s the something you want is a slippery question, in part because of the curious characteristics of looking.

If I index a few billion “objects” and what I’m looking for is an “object” and I understand the indexing system, chances are good that eventually I’ll find what I’m looking for – probably along with some stuff I’m not looking for. (Which makes me think of the difficulty of throwing out those things in the garage I didn’t know I had, and wasn’t expecting to find.)

It’s the nature of looking that every once in a while the something I wasn’t looking for becomes a critical insight into what I want to know – which is why I’m “looking” in the first place.

Everything gets reduced to “sort and filter”? How do you sort and filter 100 (or 1000) virtually identical things? When I Google “eDiscovery” I see that their search engine finds over a million entries (in a mind-boggling 0.29 seconds). In some manner that is hidden from me, Google decides which of those million entries to display on the first couple screens – which is as far as almost anyone will look. Google often seems to ‘know’ what I’m looking for before I do. That’s creepy.

Makes one wonder about the value of finding something you’re not looking for? Does Google know that too?



This book review was written for the Manufacturers Alliance.

“How to Measure Anything” – Douglas W. Hubbard, 2nd edition, published April 2010

“It’s better to be approximately right than to be precisely wrong.” –Warren Buffett

Douglas Hubbard’s underlying philosophy about “measurement” is that, in his own words, “…it never really meant an exact quantity.” That perspective begins to pry loose some of the skepticism that the title of his book, “How to Measure Anything – Finding the Value of Intangibles in Business”, invites. In a manufacturing world where the precision of six sigma seems pervasive, his idea of the “quality of information” is appealing, if sometimes confusingly recursive – a measurement of a measurement. Still, there is much to reflect on regarding Hubbard’s views of collecting and using information.

Hubbard is the inventor of something he calls Applied Information Economics (AIE), a counter view to traditional accounting-style risk/benefit analysis that focuses on measuring risk and the monetary value of information – a topic we’ll return to. He is the president of Hubbard Decision Research and the author of several books and numerous articles on risk analysis and information modeling. Much of his early work centered on quantifying the value of large IT projects. His web site offers additional information and downloadable samples of material referred to in his book.

“How to Measure Anything” isn’t really about measurement; it’s about making better decisions. The title of his book may beguile executives and managers, but to actually read it requires either the mindset of a statistician or an acceptance that getting the gist of things will have to suffice. I relied mainly on the latter.

The book is broken up into four sections. The first section deals with the concept of measurement itself and maps out the basis for what is to follow. The second section has insights into what to measure, and the third section talks about how to measure. The final section, entitled “Beyond the Basics”, ventures into interesting territory, including a chapter where he talks about “homo absurdus” and behavioral biases any application of measurement results must deal with.

Hubbard’s view is that the most valuable use of measures is to reduce uncertainty in the service of making better business decisions. Questions with high levels of uncertainty (many of the “intangibles” referred to in the title) are often susceptible to simple techniques to reduce that uncertainty. If one thinks about measurements in terms of ranges of values within a “confidence interval”, the link between “intangible” and “immeasurable” begins to seem less inevitable.
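One of the simplest of those uncertainty-reducing techniques is what Hubbard calls the “Rule of Five”: take just five random samples from any population, and there is a 93.75% chance the population median lies between the smallest and largest of them. The arithmetic is easy to check (the interval misses only when all five samples land on the same side of the median), and a quick simulation sketch confirms it:

```python
import random

# Hubbard's "Rule of Five": with 5 random samples from any population,
# there is a 93.75% chance the true median falls between the smallest
# and largest sample. The interval misses only if all 5 samples land
# on the same side of the median: 2 * (1/2)**5 = 1/16 = 0.0625.
random.seed(42)

population = [random.gauss(100, 15) for _ in range(100_000)]
true_median = sorted(population)[len(population) // 2]

trials = 20_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(f"theoretical coverage: {1 - 2 * 0.5**5:.4f}")
print(f"simulated coverage:   {hits / trials:.4f}")
```

The population here is an arbitrary bell curve for illustration; the rule holds for any distribution, which is what makes it useful when you know almost nothing going in.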

It’s generally accepted among statisticians that almost any model is an improvement over expert human decision-making, and using multiple models provides even further improvement. Hubbard makes a crucial argument that “expert opinions” can be calibrated and usually improved. In his book are several practice examples to demonstrate how you can calibrate your own “90% Confidence Interval” and get some practice improving your application of the concept.
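What “calibration” amounts to in practice can be sketched simply: if your 90% confidence intervals are well calibrated, roughly 90% of them should contain the true answer. The toy question set below is invented for illustration (Hubbard’s book provides proper calibration tests); the scoring logic is the point:

```python
# Toy calibration check (data invented for illustration): each entry
# is an expert's stated 90% confidence interval (low, high) and the
# true answer. A calibrated expert captures the truth ~90% of the
# time; Hubbard reports that uncalibrated experts often score far lower.
estimates = [
    ((1800, 1900), 1876),  # e.g. "what year was the telephone patented?"
    ((300, 600), 384),
    ((10, 50), 71),        # a miss: truth falls outside the interval
    ((5, 25), 12),
    ((100, 400), 250),
]

hits = sum(low <= truth <= high for (low, high), truth in estimates)
hit_rate = hits / len(estimates)
print(f"{hits}/{len(estimates)} intervals contained the truth ({hit_rate:.0%}); "
      f"a calibrated 90% CI would be near 90%")
```

The remedy for a low hit rate, per Hubbard, is not more data but wider intervals – practicing until your stated confidence matches your actual accuracy.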

Perhaps the most revealing aspect of Hubbard’s experience is his reflection that business leaders seldom ask the right questions or measure the right things. What gets measured tends to be what has always been measured or what’s easiest to measure in traditional ways. One of his primary tenets is that it is imperative to quantify the value of the information you’re trying to capture as well as the cost of capturing it. Once you’ve determined if the value of the information is worth the effort, careful analysis reveals that most of that value can be realized via one or two key measures – often not the measures initially assumed as most valuable.
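The “value of the information” idea can be made concrete with a toy decision (all numbers invented): you must launch or pass on a project whose payoff is uncertain, and the worth of measuring first is capped by how much expected loss a perfect measurement would let you avoid. A rough Monte Carlo sketch:

```python
import random

random.seed(1)

# Toy decision (numbers invented): launch a project with uncertain net
# payoff, or pass (payoff 0). The expected value of perfect information
# (EVPI) is the most a measurement could be worth.
def payoff_draw():
    # Uncertain payoff: bell curve, mean 20, sd 50 (can go negative).
    return random.gauss(20, 50)

draws = [payoff_draw() for _ in range(100_000)]

# Decide now, without measuring: launch iff the expected payoff is positive.
ev_without_info = max(sum(draws) / len(draws), 0.0)

# Decide knowing the outcome: for each scenario, take the better of
# launching (the payoff) or passing (0).
ev_with_info = sum(max(d, 0.0) for d in draws) / len(draws)

evpi = ev_with_info - ev_without_info
print(f"EV without measurement:  {ev_without_info:6.2f}")
print(f"EV with perfect info:    {ev_with_info:6.2f}")
print(f"EVPI (ceiling on what measurement is worth): {evpi:6.2f}")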

If you tend to get excited by z-scores, regression analysis, Monte Carlo simulations, and Bayesian inversions, you’ll find much to keep you up late at night reading “How To Measure Anything”. If your interest falls more towards the bottom-line effect of informed decision-making, you’ll get to sleep earlier but you’ll still find much to challenge your existing worldview of what can and should be measured within your organization.

For most, Hubbard’s book is more of an uncomfortable paradigm shift than a simple how-to manual. It’s an important work that deserves serious reflection. As in many areas of inquiry, the right answers are often simpler to find than the right questions.

facebooktwittergoogle_plusredditpinterestlinkedinmailby feather
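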