Market studies become more useful when it is acknowledged that they are flawed and cannot provide precise answers.

Nobel laureate economist Kenneth Arrow told of his time during the Second World War, when he was assigned to the Army weather bureau and tasked with using mathematics to build long-term weather forecasts. After about two years, he suggested the operation be shut down because it had yet to produce an accurate forecast; the numbers were always wrong. He was told that although the Army knew the forecasts were never correct, they were essential for making battle plans and would continue to be needed. The need for certainty was so strong that even information known to be wrong was relied upon heavily.

I have often been struck by a similar need for certainty among marketers. Certainty, of course, is not something the world provides. Still, when it comes to marketing research or modeling, people in the industry seem to demand it. Many vendors obligingly supply “certainty” by delivering estimates out to three decimal places, or by making sure a projected share or score ends in a digit other than “5” or “0” so that it seems precise.

Using Fuzzy Numbers

I recently discovered that this kind of false precision has a name: disestimation. The term was coined by Charles Seife, whose book Proofiness (Viking, 2010) is a must-read. Seife defines disestimation as “the act of taking fuzzy numbers way too seriously,” and I believe it is a serious problem among marketers in every industry, but particularly so in pharma and other technology industries.

Disestimation is a particular problem in technology industries because the natural sciences, such as chemistry, physics and biology, follow specific and fixed rules about relationships and the way things work. The results of tests and experiments in these areas can be replicated fairly consistently.

But when it comes to studying human behavior, which is a social science, replicating findings is difficult because conditions that are not factored into the research can (and almost always do) change what you are trying to measure. A slight difference in the wording of a question can produce markedly different results, and so can one new piece of information (or disinformation) in the minds of the respondents.

Because there are so many factors that cannot be measured in social science research, we acknowledge a much larger range of error. But unlike the results of a clinical trial or other study in the natural sciences, marketers usually prefer that confidence intervals and error ranges not be stated in reports. Apparently they would rather be precisely wrong than roughly right! Rather than acknowledging the problems with the data and building plans on a real and practical understanding of the way the market operates, most marketers appear to prefer to take the falsely precise numbers and build all their plans around them.
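To make the unreported error concrete, here is a minimal Python sketch, assuming a hypothetical survey, of the 95% confidence interval around a simple stated-intent proportion. The sample size and intent share are invented for illustration, and the calculation is just the standard normal approximation, not anything from an actual study.

# Minimal sketch of the error range that usually goes unreported:
# a 95% confidence interval around a survey proportion.
# The sample size and stated-intent share are hypothetical.
import math

n = 150      # physicians surveyed (assumed)
p = 0.30     # share saying they would adopt the new product (assumed)

margin = 1.96 * math.sqrt(p * (1 - p) / n)   # normal approximation
print(f"Stated intent: {p:.0%} +/- {margin:.0%} "
      f"(roughly {p - margin:.0%} to {p + margin:.0%})")

Even in this tidy hypothetical, the “precise” 30% is really a range of more than ten percentage points, which is exactly the kind of context that rarely makes it into the report.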

Examples abound in our industry of cases where this false precision has led to bad decisions and disappointments. Understanding that any estimate contains elements of fact, actual error and randomness should lead everyone to the conclusion that any research involving people making estimates, whether a market research study or a forecast developed by one person using secondary data, is subject to a great deal of uncertainty and should never be taken as an absolute.

I know that people will say their bosses require certainty, but promising certainty from data that is filled with errors sets expectations that are simply unrealistic. If senior managers don’t understand that part of their job is dealing with uncertainty, then perhaps they should reconsider their career choices.

Despite “advances” in study designs and statistics, our markets are becoming more complex and less predictable, which makes projections and estimates far more uncertain than in years past. New methods for measuring growing levels of uncertainty and randomness may be cool, but they are not really more accurate, however much it might make you feel better to believe they provide something new.

The areas of greatest concern when it comes to disestimation and the problems it can cause are forecasting and pricing, and given the importance of these two endeavors for planning and eventual commercial success, a company simply can’t afford to be naïve in dealing with them. A forecast that is precise but based on a single wrong assumption is worse than useless; it can be downright dangerous to the future of the company.

My favorite examples of disestimation in forecasts come from the market for psoriasis. Many companies have staked their futures on drugs for this disorder, but the data on psoriasis are all over the place, with as much as a 300% swing in estimates of the number of patients with moderate to severe disease. The higher estimates invariably come from patient groups and organizations focused on supporting people with the disease. Build a patient-based forecast on such questionable numbers and you can end up almost anywhere, and most people try to make the market as big as possible to justify the product. If bad numbers are all you have, it is reasonable to use them as the basis for a forecast. It is just not reasonable to expect the forecast to even approach accuracy.
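As a rough illustration, here is a hedged Python sketch of how a wide swing in prevalence estimates travels straight through a patient-based forecast. Every figure (patient counts, treatment rate, expected share, price) is hypothetical and chosen only to show the arithmetic, not to describe the actual psoriasis market.

# Sketch of how a swing in prevalence estimates propagates through a
# patient-based forecast. All inputs are hypothetical.

low_patients, high_patients = 1_500_000, 4_500_000  # wide spread in moderate-to-severe estimates
treatment_rate = 0.40     # share of those patients on advanced therapy (assumed)
expected_share = 0.10     # assumed peak share for the new product
annual_price = 20_000     # assumed net price per patient per year

for label, patients in (("low", low_patients), ("high", high_patients)):
    revenue = patients * treatment_rate * expected_share * annual_price
    print(f"{label} prevalence estimate: ${revenue / 1e6:,.0f}M peak-year revenue")

The output looks precise either way, yet the two answers differ by billions of dollars depending solely on which prevalence number you chose to start with.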

The problem can be even worse when you ask respondents to provide their own estimates of what they will do in the future, simply because individuals are among the worst sources of information on their own future behavior. Every market researcher knows this, whether they admit it or not. Most firms have standard “discount rates” they apply to reduce the stated intended use of a new drug down to a level that reflects “reality.” These standard rates are usually derived by comparing the bad projections from previous studies to actual usage once a product is launched.

Taking some sort of average of the gap in those bad estimates, we arrive at an adjustment factor or “discount rate” to fix the knowingly inaccurate estimates. I will leave it to you to decide whether this approach is appropriate, but you cannot argue that there is any precision in the final estimates it produces. Yet these numbers are usually reported without acknowledging the error that is built into them, or that they were built upon.
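For clarity, here is a minimal Python sketch of how such a discount rate might be derived from past projection-versus-actual gaps and then applied to a new stated-intent estimate. The launch data and figures are invented for illustration; the point is that the spread in the historical ratios carries straight into the “adjusted” number.

# Sketch of deriving and applying a "discount rate" to stated intent.
# All figures are hypothetical.

# Projected share (from stated intent) vs. actual first-year share for past launches.
past_launches = [
    {"projected": 0.22, "actual": 0.09},
    {"projected": 0.15, "actual": 0.07},
    {"projected": 0.30, "actual": 0.11},
]

# The average actual-to-projected ratio becomes the adjustment factor.
ratios = [x["actual"] / x["projected"] for x in past_launches]
discount_rate = sum(ratios) / len(ratios)

# Apply the factor to a new survey-based intent estimate.
new_stated_intent = 0.25
adjusted = new_stated_intent * discount_rate

# The spread of the historical ratios shows how imprecise the "fix" is.
low, high = new_stated_intent * min(ratios), new_stated_intent * max(ratios)
print(f"Discount rate: {discount_rate:.2f}")
print(f"Adjusted estimate: {adjusted:.2f} (plausible range {low:.2f} to {high:.2f})")

Reporting only the single adjusted figure, as is common practice, hides the fact that the historical gaps themselves vary widely from launch to launch.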

This does not mean that we should ignore the results of market studies; it is just that we cannot accept them without acknowledging that they are flawed and cannot provide precise answers. This is why it is crucial that those investigating the market understand its structure completely. Medical markets are not neat and precise entities that can be understood like machines, whose parts can be examined and whose roles are usually very apparent.

In markets, or any other organic system (like the weather), the roles played by the individual components are often not readily apparent and cannot be neatly described in a model. Those who believe the results of a survey alone can provide accurate estimates of future product use, price sensitivity, or any number of other crucial aspects of the market are simply fooling themselves, choosing to wrap themselves in the false precision of disestimation. It may be the cheap and easy alternative to real understanding and careful planning, but that is a very high price to pay to save money and effort.

 
