Research has a well-established vocabulary…
If I had a pound for every time I had heard the phrases ‘getting under the skin’, ‘the whites of their eyes’ or ‘bring to life’, I would be a rich man, and believe me, I have considered a quant cliché swear jar on more than one occasion. But there is one quant phrase in particular that echoes permanently around market research meeting rooms across the globe, and that, if monetised, could give Elon a run for his money. That phrase is ‘significant difference’, or, for those looking to save on syllables, ‘sig diff’. In short, a sig diff denotes a real difference between two or more data points, one that is very unlikely to be down to chance or sampling error. It validates the insight being observed, rubber-stamping the difference as true. Now that’s a pretty crucial part of the job, and as such the literal naming convention has become cemented in research parlance.
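For the statistically curious, here is a minimal sketch of what that rubber stamp usually amounts to in practice: a significance test whose p-value falls below a conventional threshold (5% here). The choice of a two-proportion z-test, the 95% confidence level and the example figures are illustrative assumptions, not a description of any particular survey or of how any one agency runs the numbers.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided pooled z-test for the difference between two sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # combined proportion under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of the difference
    z = (p1 - p2) / se
    p_value = 2 * norm.sf(abs(z))                         # two-sided p-value
    return z, p_value

# Hypothetical figures: 45% of 200 respondents agree with claim A, 52% of 200 agree with claim B.
z, p = two_proportion_z_test(90, 200, 104, 200)
print(f"z = {z:.2f}, p = {p:.3f}, sig diff at 95%: {p < 0.05}")
# p comes out around 0.16, so this seven-point gap would NOT earn a sig diff footnote.
```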
At their worst, sig diffs are translated by researchers as insights in their own right, rather than just differences. At their best, they become sign-offs to good insights: ‘and yes, that’s significant’. At d+m, we think of them a bit more holistically, and to do so we had to flex our quant vocabulary to better accommodate them. We think of sig diffs as contributing to high-resolution insights: insights that are observable with such clarity and certainty that you can practically see the whites of their eyes (£1, ker-ching). If an insight is built on a framework of significantly different data points, we can see that insight in high resolution. But when those data points aren’t significant, we don’t walk away; we build what we call ‘low-resolution’ insights. The data has pointed us towards something observable and understandable, but, whether through sample size or survey method, we just haven’t crossed into statistical certainty.
This small change in vocabulary has led to big changes in our approach to unlocking strategic recommendations; not by dismissing the importance of statistical differences, but by reframing their use in leading us to high- and low-resolution insights, both of which matter for different levels of objective. To see why this is important to planners and marketers, we need to step back for a moment and consider the two easiest ways to get significant differences in your data:
- Spend more: You invest heavily in the research and gather a large sample. The greater the number of participants, the more likely you are to observe statistically significant differences (the sketch after this list shows the effect). We’re here for this!
- Be manipulative: Game your survey to maximise the differences where you need them, tricking participants into giving you the answers you want to hear. You’ll need another partner for this.
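To see the first point in action, the rough sketch below runs the same kind of two-proportion test (here via statsmodels’ proportions_ztest) on an identical observed gap, a hypothetical 45% vs 52%, at two different sample sizes. The numbers are made up for illustration; the point is that only the larger sample crosses the conventional significance threshold.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical gap: 45% vs 52%, measured once with 200 respondents per cell, once with 1,000.
for n in (200, 1000):
    counts = np.array([round(0.45 * n), round(0.52 * n)])  # respondents agreeing in each cell
    nobs = np.array([n, n])                                 # respondents asked in each cell
    z, p = proportions_ztest(counts, nobs)
    verdict = "sig diff" if p < 0.05 else "no sig diff"
    print(f"n = {n:>4} per cell: z = {z:.2f}, p = {p:.3f} -> {verdict} at 95%")

# The observed gap is identical in both runs; only the sample size changes,
# yet it only crosses the conventional 5% threshold at the larger n.
```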
Neither of these is ideal, assuming you have a finite budget and/or your integrity intact, and so you are left only creating, testing or validating things that are head and shoulders above everything else. Your hands are tied: you can only put concepts and campaigns into research that are so great, or so divisive, that participants will have no choice but to tell you how outstanding they are. If only it were that easy.
There is, then, one other option: to think about research differently. Respect the statistical value of significant differences, but at the same time view your data set through high- and low-resolution insights. Learn to scale your confidence based on the clarity and quality of the insights you are observing, rather than just the presence of a sig diff footnote.
Featured image: Yan Krukau / Pexels