Saturday, March 15, 2008

Those Dastardly Statistics

There is a recent article titled "Customer Feedback: Are You Putting It to Good Use?" that makes some simple claims and, in doing so, reminds me of the deadly misuse and abuse of statistics. Here are the statistics from the article:

1. A recent Gartner report revealed that 95% of firms surveyed collected customer information, but only 35% of these used the insight gathered from this information in any way.
2. A second study conducted by CDC Respond in 2005 and 2006 surveyed 130 of the top banks and insurance companies and found 95% reported they were collecting information, but only 41% of the firms actually use the information to alert staff to problems and drive change.

The article appears to draw the following conclusion:

"The main reason feedback tends not to be put to productive use is that companies generally don't have a clear and high-level vision for why they're collecting feedback, and no formal business process to ensure that the feedback collected is actionable."

While I basically agree with the conclusion, I think it is a little too simplistic. Let me add the other items that I think would also contribute to the lack of use. They are:

3. Most customer information gathering occurs without the use of a survey plan. The purpose of the plan is to define the objective and demonstrate that the information collected will meet some pre-defined goal that has been agreed to by top management. Without a survey plan, once the information is collected, no one knows how to use the data or for what reason it was gathered in the first place.
4. There is a DRAMATIC lack of knowledge of statistics, and one obvious result of their misuse is that the analysis does not produce the intended results.

Let me give an example of each of the two items I have suggested.

Several years ago I was given a consulting assignment for a company that had just spent about one quarter million dollars collecting customer information for one year. Before they were to renew their contract to spend another quarter million dollars, they asked me to look at the information collected. They were told that their customer satisfaction level was increasing and they should be happy with the work they were doing. When I looked into the data, I found that, indeed, their smaller customers were exceedingly happy and their satisfaction level was soaring. At the same time I found their larger customers were ready to leave them as soon as possible. The problem that came up was that the additional information needed to determine why the larger customers were ready to leave was not collected. There was no survey plan to guide them in defining what was to be collected. Hence, although they had the general idea that the smaller customers were happy, and the larger customers were furious, they had no place to go from there. Of course, the survey company was more than willing to go and collect that additional data for a significant incremental charge (a charge that would have been ZERO if it had been collected with the original data).

The misuse and abuse of statistics is a subject on which I could write a book based solely on my experiences as a consultant. Oh boy, the troubles I've seen!!! The worst part is that some of the most egregious errors I have seen have been in textbooks showing how to analyze customer satisfaction information. Let me give a few examples of what I have seen (some of which are pervasive in the industry).

Example #1 - Everybody uses the arithmetic mean to compute average satisfaction scores. Unfortunately, satisfaction scores come from an ordinal scale, and an ordinal scale does not provide sufficient information to calculate a mean. There are two serious violations of math and simple logic. The first is that the satisfaction scale is assumed to be linear (equal "energy" required to move from one score to the next): does it take the same effort to move a customer from a 3 to a 4 as it does to move that same customer from a 4 to a 5? Maybe, but probably not - hence the linearity assumption is violated. The second problem is that an ordinal scale only assigns order, and from that order we assign numbers. When a 5-point scale (or any numeric scale) is used, the numbers lose their arithmetic meaning. For example, one could ask whether a score of 4 is equal to twice a score of 2 (do two dissatisfied customers equal one satisfied customer?). The answer is emphatically NO.
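To see how the mean can mislead on ordinal data, consider two illustrative sets of 1-5 ratings (the products and scores here are made up for demonstration). The means are identical even though the customers behind them are very different; the median and a simple frequency breakdown preserve more of what the ordinal scale can legitimately tell you:

```python
from statistics import mean, median
from collections import Counter

# Hypothetical 1-5 satisfaction ratings for two products (illustrative data only).
product_a = [1, 1, 5, 5, 5]   # polarized: some furious customers, some delighted
product_b = [3, 3, 3, 4, 4]   # uniformly lukewarm-to-positive

# The arithmetic mean treats the labels as equal-interval numbers...
print(mean(product_a), mean(product_b))    # 3.4 3.4 -- indistinguishable

# ...while the median and the full distribution rely only on the ordering.
print(median(product_a), median(product_b))            # 5 3
print(Counter(product_a))                              # Counter({5: 3, 1: 2})
print(Counter(product_b))                              # Counter({3: 3, 4: 2})
```

The identical means hide exactly the situation from my consulting story above: one group of customers delighted, another ready to walk out the door.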
Example #2 - Everybody wants to compare the score from one time period to another, or between two products or two geographical areas. Someone with almost no knowledge of statistics will suggest a statistical test of the hypothesis that the scores of the two groups differ. They suggest the Student's t-test, since it can be used when the population standard deviation is not known. GREAT - however, the Student's t-test requires two VERY important assumptions to be met: the data must be normally distributed and, perhaps even more importantly, the data must NOT be ordinal. (There is that word again.) Of course, almost all customer data is highly skewed and not normal, and of course it is all ordinal.
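For comparisons like this, a rank-based test such as the Mann-Whitney U (Wilcoxon rank-sum) test makes no normality assumption and uses only the ordering of the scores, which is exactly what an ordinal scale provides. Here is a minimal sketch of computing the U statistic in plain Python; the function name and the sample scores are illustrative, and a real analysis would still convert U to a p-value via tables or a normal approximation:

```python
# Illustrative 1-5 satisfaction scores from two time periods (made-up data).
period_1 = [2, 3, 3, 4, 5]
period_2 = [1, 2, 2, 3, 3]

def mann_whitney_u(x, y):
    """Return the Mann-Whitney U statistic for sample x versus sample y.

    Uses only the rank order of the pooled scores, so it is legitimate
    for ordinal data and requires no normality assumption.
    """
    combined = sorted(x + y)
    # Assign each distinct value the average of the rank positions it
    # occupies (the standard treatment of tied scores).
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    rank_sum_x = sum(ranks[v] for v in x)
    return rank_sum_x - len(x) * (len(x) + 1) / 2

u = mann_whitney_u(period_1, period_2)
print(u)   # 20.0 -- close to the maximum n1*n2 = 25, so period_1 tends higher
```

In practice one would reach for a library routine (e.g., `scipy.stats.mannwhitneyu`) rather than hand-rolling the ranks, but the point stands: the right tool for ordinal data ranks the scores instead of averaging them.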

The bottom line is that companies do perform better when they have good information. I suggest you read the article in The Business Renaissance Quarterly (Summer 2007) by my colleague Dr. Darrol Stanly and me, which demonstrates statistically that companies with higher levels of customer satisfaction have better financial performance than those with lower levels of customer satisfaction. The key is that companies must plan properly to get the correct information, and then they MUST analyze it correctly. Using the wrong statistics to make decisions is somewhat akin to trying to drive a car with the steering wheel disconnected.

Here are the downside risks to data problems and analysis errors:
1. The companies using the data will give up on the data because it is not helping them and continue to fly "blind"; that is, they will not see customer trends as they occur.
2. The companies may make strategic errors based on the improper analysis and, as noted in the previous risk, miss a business trend, a product problem, or even a personnel problem.
3. The survey company may lose business.
4. The survey company may be partially liable for either providing bad data or incorrect analysis and interpretation of the information.
