Many pundits, journalists, and interested laymen confuse two important but very different scientific concepts: substantive and statistical significance. They are hardly to blame, since scientists often make it very difficult for outsiders to understand what they do with statistics. Sometimes this is deliberate obfuscation, but often it is the result of ignorance or sloppiness.
What is at stake? Simply put, substantive significance is about the size of a relationship or an effect, whereas statistical significance is about measurement precision (usually based on a sample). The two concepts answer two very different questions: substantive significance asks how much, statistical significance asks how sure we are about it. If you think about it, statistical significance is much less interesting for the general public, because it essentially tells you only one thing: the 'finding' a statistician identified is unlikely to be a fluke. We still don't know whether the finding is interesting; it may be far too small to bother with!
The two concepts are sometimes related, with substantive significance often implying statistical significance, but they need not go together. Stephen Ziliak and Deirdre McCloskey have written a fascinating book on the topic, showing how confusing the two can have fatal consequences. They give examples from medical research in which a statistically significant reduction in depression is substantively too small to justify buying the more expensive drug. In another example, pharma-funded research even suppressed reports of fatalities simply because there were too few of them to count as statistically significant.
But it is not only about health; the confusion crops up in all kinds of research, from the natural to the social sciences, from engineering to linguistics. In my own field, political science, it seems very frequent, and I don't deny that I myself sometimes get confused. Media coverage is full of examples. It often starts with 'new research shows X'. For instance, new research shows that home teams in football are more likely to win. For a statistician it is important to show that the home advantage is precisely measured and identified. For a club owner this is only of secondary importance. True, he also needs to be sure that the statistician did the job well. But it is much more important for him to know how large the home advantage is, and how safely he can rely on it. Do I have a 50% higher chance of winning, or just 5%? In both cases the home advantage may be statistically significant, but the substantive difference can decide between championship and relegation.
Test yourself. How would you interpret the following sentence, full of statistical jargon? 'In a randomized controlled trial we find that the difference between income levels of those who participated in a microfinance program and those who did not was statistically significant at the 1% level.' If you answered something like 'microfinance makes a huge difference for people', you were wrong, because we cannot know that on the basis of this sentence alone; but you are clearly not alone, as this happens to many. What the sentence should have said is something like this: 'We compare income levels between those who participate in a microfinance program and those who don't. We find that the average difference is large, about 200 Euros, which is roughly half the average wage in society X. We have also found that this average difference is unlikely to be due to chance, i.e. it is statistically significant, and not due to other distortions. Hence, we trust our findings.'
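For readers who like to see the point in numbers, here is a minimal simulation sketch in Python. All the figures are made up for illustration (they come from no real microfinance study): two groups whose true average incomes differ by a tiny amount, measured on a very large sample. The t statistic comes out comfortably 'significant at the 1% level', yet the effect itself is a small fraction of the mean, so significance alone tells you nothing about whether the program is worth anything.

```python
import math
import random

def welch_t(a, b):
    """Welch's t statistic for the difference of two sample means."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of the difference
    return (mb - ma) / se, mb - ma

random.seed(42)
n = 40_000  # a very large sample makes even tiny effects 'significant'
control = [random.gauss(100.0, 15.0) for _ in range(n)]   # mean income 100
treated = [random.gauss(101.0, 15.0) for _ in range(n)]   # true effect: +1 unit

t, diff = welch_t(control, treated)
print(f"estimated effect: {diff:.2f} units")
print(f"t statistic: {t:.1f}  (|t| > 2.58 means significant at the 1% level)")
print(f"effect as a share of the control mean: {diff / 100:.1%}")
```

The t statistic is huge, so the difference is measured very precisely; but the substantive effect is about one percent of average income, which a reader might reasonably consider negligible. Precision and importance are simply different things.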
What to do? Pundits, journalists, and policy experts need to sharpen their critical reading of the scientific literature. They need to press scientists for quantities of interest that are easy to interpret. And we scientists, for our part, need to communicate our findings more clearly.