Hindsight is 50/50: Valuing consumer research in business

In my third year of college I took a macroeconomics course taught by a visiting professor from China. He was a wonderful lecturer, but English was not his first language and he would often mangle his idioms. One day, while describing a time he misread macroeconomic indicators and missed out on a lucrative growth sector, he sighed and said, “well, as they say, hindsight is 50/50”.

Hindsight is 50/50

As I nodded in silent agreement, it slowly dawned on me that this is not the saying at all; it should be “hindsight is 20/20”. I chuckled and moved on, taking his words as intended: when you view the past from a point beyond the outcome, the path to that outcome appears perfectly clear.

Over the years, I have used this malformed phrase for comedic effect, but one day, I realized the true brilliance of his words when I was once again confronted by a question that has hounded my career for years…

How do you value the impact of your Consumer Insights, UXR, or Analytics work?

-Every manager, everywhere, at some time

I bet you have received some form of this question yourself. If so, count yourself among the blessed few if you happen to work in an industry or position where that research directly and overtly impacts the top line.

In this case, quantification comes more easily. As long as similar Pre and Post periods exist, where intervening variables are few or analytically manageable, you can measure impact by comparing key metrics in the Pre period to the same metrics in the Post period (a minimal sketch follows the list below). This works wonderfully for A/B tests and other binary actions where the insight-driven action is applied in a controlled and immediate fashion relative to the Pre period. For example:

  1. App updates
  2. Website updates
  3. CRM communications
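
To make the Pre/Post idea concrete, here is a minimal sketch in Python, assuming a binary conversion metric and hypothetical column names (`period`, `converted`) that are not from any specific project, just illustrative toy data:

```python
# A minimal sketch of a Pre/Post comparison for a binary conversion metric.
# The DataFrame `events` and its columns are hypothetical placeholders.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

events = pd.DataFrame({
    "period":    ["pre"] * 500 + ["post"] * 500,
    "converted": [0, 1] * 250 + [0, 1, 1, 1] * 125,   # toy data: 50% vs 75%
})

# Conversions and sample sizes per period, ordered pre then post.
summary = events.groupby("period")["converted"].agg(["sum", "count"])
counts = summary.loc[["pre", "post"], "sum"].to_numpy()
nobs = summary.loc[["pre", "post"], "count"].to_numpy()

# Two-proportion z-test on the pre vs. post conversion rates.
stat, p_value = proportions_ztest(counts, nobs)
lift = counts[1] / nobs[1] - counts[0] / nobs[0]
print(f"Pre vs. post conversion lift: {lift:.1%} (p = {p_value:.3f})")
```

The same pattern generalizes to any key metric with a comparable Pre window, provided the intervening variables really are few or manageable.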

However, what about executions with a longer roll-out period? Like:

  1. Retail shelf planogram optimization
  2. Product line extensions
  3. Marketing campaigns

For these types of projects, many intervening variables can occur along the path from original research to final execution. As I have often said to my teams…

A lot happens between concept and execution

-Researchador

Along this road, many discoveries may emerge. From capability deficits, to resource constraints, to emerging competitive pressures and macroeconomic changes, a horrible parade of unforeseen circumstances can debilitate an otherwise viable concept. So, how do we ascribe value to our work when the final sales (or lack thereof) may better reflect that unanticipated parade of horribles than our original research?

Unfortunately, we only have one data point – the particular outcome of this project. In hindsight, we may have perfect 20/20 clarity on the impact of the decisions we made along the way, but we do not know the outcome had we taken different steps. Since we cannot see the exact paths to those alternate outcomes, and every one of those steps could have branched toward a different present, hindsight reduces to a series of unknown coin flips.

Wouldn’t it be great if we had some sort of simulator where we could model all potential outcomes, even without prior data as guidance? This science fiction would be similar to what Jean-Luc Picard experienced in “The Inner Light” episode of Star Trek: The Next Generation.

“The Inner Light” (Star Trek: The Next Generation)

Captain Picard experiences a profound journey after being struck by an alien probe, living a full life as a village scientist named Kamin on a planet threatened by its dying sun, before returning to the Enterprise, where only 25 minutes had passed.

While AI and predictive models promise this sort of simulation research, if we could do this, then our competitors could too. With all actors having access to the same simulators, the unanticipated may be the only viable lever on outcomes, and that still reduces hindsight to a coin toss. Knowing this leads many great leaders to say something akin to “fail early”. There exist many versions of this quote across the years, but I like this version best…

Fail fast, succeed sooner

-David Kelley (founder of IDEO)

Of course, the unstated hard work here is that one has to learn from the failure and adjust to move forward. In that light, perhaps this quote better captures the spirit…

Fail early, fail often, but always fail forward

-John C. Maxwell (leadership expert and author)

Maybe we can simplify it to “Fail fast, fail forward”. In any case, the point remains: one must learn and adjust to make new strides forward.

So, how does one adjust from a research perspective? You can either field new research or re-field past research, both with the goal of collecting more information based on the current state of the market, the concept, and other factors. You can also follow the advice offered to Dr. Laszlo Kreizler by his old college professor in the TV series adaptation of Caleb Carr’s magnificent The Alienist.

Look at your bird, Laszlo, look at your bird

Super vague, I know. For context, they reminisce over a project the professor gave Laszlo when he first began college, wherein Laszlo had to repeatedly return to a specimen of a bird, a Hildebrandt’s Starling, each time seeing more than he had the time before. The lesson was not the bird itself but learning to see. And now, Kreizler fears he can only see what he sees, what is already known to him. What can I do, he asks? “Look at your bird, Laszlo. Look at your bird.”

Essentially, you can constantly revisit your previous data, your previous assumptions, and your previous interpretations, and look at them fresh through the perspective of current considerations. This presents a much faster, and perhaps more robust, process for gleaning new insights. Since the current state of affairs may be in part the result of a series of 50/50 choices made from the original interpretations of this dataset, removing those decisions as options may positively impact the probability of success moving forward (much like the Monty Hall problem).
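
For readers who want the analogy made concrete, here is a small, purely illustrative Python simulation of the Monty Hall effect (not part of any research workflow described here): once a known-bad option is removed, abandoning the original choice wins roughly twice as often as sticking with it.

```python
# A minimal Monty Hall simulation: eliminating a known-bad door and switching
# roughly doubles the win rate, much as discarding decisions already shown
# not to work can sharpen the remaining choices.
import random

def play(switch: bool, n_doors: int = 3) -> bool:
    prize = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    # Host opens a door that is neither the prize nor the current choice.
    opened = random.choice([d for d in range(n_doors) if d not in (prize, choice)])
    if switch:
        # Move to the one remaining unopened door.
        choice = next(d for d in range(n_doors) if d not in (choice, opened))
    return choice == prize

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ≈ {wins / trials:.3f}")
```

The parallel is loose, of course, but the mechanism is the same: pruning options you now know to be dead ends changes the odds on the options that remain.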

Getting back to value. As researchers trying to ascribe value to our work, we face these issues that negatively impact the ability to use end results as appropriate metrics of value:

  1. We do not know what the outcome would have been had we NOT done the research
  2. The execution may not accurately reflect the tested concept
  3. Market conditions may have significantly changed since initial research

However, if we shift the interpretation of research value from product success to time saved then we can assess the impact in the “fail fast, fail forward” paradigm via…

  1. Time to identify failure
  2. Time to develop new tactics
  3. Time to evaluate alternative decision options
  4. Speed to decision making

Much like Laszlo, when you learn to see things through this new perspective, not only will you be able to answer the painful question of Research Value, but you will also start to craft your research in anticipation of these needs, which will in turn make your work more efficient, longer lasting, and seemingly more predictive.

As my professor prophetically pronounced many years ago, hindsight truly is 50/50. Looking back only gives insight into what led us here; it fails to clearly describe where we could have gone. To move forward, we need to constantly revisit our data sets, re-examine our insights, and adjust for current perspectives and conditions. Field new research when appropriate, and seek to build in alternative decision options in anticipation of the organization seeking to fail forward.