If AI took over making marketing decisions, would anyone notice?

Introducing Jenni Romaniuk's new column, Just In Case You Missed It...

Welcome! At the Ehrenberg-Bass Institute for (*gulp*) many decades now, I have had the privilege of seeing a wide variety of great research, some of which is underappreciated because it was ahead of its time and/or missed getting the publicity it deserved. My aim for this column is to spotlight some cool past research and show how this knowledge can help us with today’s problems.

Hype/angst about AI taking over jobs is rampant, and marketing is no exception. Tasks such as creating digital content, direct marketing emails, or desk market research are now routinely done by AI products. With tasks that are repetitive or involve synthesising vast amounts of information, human brains are competing with (and often losing against) the ChatGPTs, Bards, Geminis and Claudes of the world.

While this causes angst amongst the Assistant Social Media Managers of the world, experienced marketers are feeling more secure, trusting that their experience makes them less easily replaceable by an AI tool. But how safe should experienced marketers feel?

No better than a coin toss?

When thinking about this question I remembered research by Dr Nicole Hartnett, a Senior Marketing Scientist at the Ehrenberg-Bass Institute. She tested whether marketers could pick the ‘winner’ when presented with two advertisements, one of which had generated better sales. She recently shared with me that this research was born of the concern that identifying effective advertising creativity was treated as more art than science, and so relied on marketer judgement/intuition. A choice that had a clear winner and loser in sales, rather than just award winners, was a crucial part of this study. As Dr Hartnett said in a conversation we had about this research: ‘you talk to any brand manager, the commercial objective is king’.

Each participant saw a randomised set of five pairs of ads; in each pair, one ad generated sales and one did not, and participants simply had to pick which of the two had generated sales for the brand. The study had an impressive sample size, with over 600 marketers taking part, leading to 1,909 pair assessments.

And the result? Let’s first set the context. When one of two options is correct, a coin toss or random guessing will achieve 50% accuracy on average. Marketers, with all their wealth of experience, came in at a whopping 52%. The best performers were actually from the insights department, with 61% accuracy.

Forget AI; at this stage perhaps we could replace marketers with a coin toss when selecting sales-generating ads?

A finding that surprised me was that Dr Hartnett found length of time employed as a marketer did not affect accuracy. This result calls into question the value of experience as an indicator of expertise. These two words are often mistakenly treated as synonyms. However, experience is what happens to you; expertise is what you (hopefully) learn from that experience.

In marketing we experience two types of decisions:

  • Decisions that happen so often or so quickly that we are on autopilot, looking to get things done as efficiently as possible. We usually don’t take time to reflect upon these decisions and learn from the outcomes. 
  • Decisions that happen so infrequently and are so situation-specific that even if you learn from the experience, instances do not accumulate quickly enough to see the elements that contributed to success or opportunities for improvement.

This makes it hard to ‘learn on the job’ as a marketer, and easier for AI to make better decisions, as it can absorb wide swaths of information. However, it doesn’t have to be that way: we can take steps to build our own expertise.

How can you build the expertise to make better decisions?

To gain the expertise needed before you are replaced by an AI decision-making tool, the onus is on you to learn more actively and effectively. Here are three steps you can take.

1. Self-reflect on all decisions in the areas where you want to build expertise

Pick an area, such as advertising effectiveness, and take the time to critically review your prior decisions, regardless of whether they worked or not. This will help you avoid treading a well-worn, comfortable path and missing more effective options. Establishing a peer-review group to re-examine decisions might help you see your own decisions in a more objective light and learn from others in similar situations.

2. Stop relying on case studies to uncover insights

Case studies are poor ways to discover new insights about the world. Each case is so situation-specific that it is dangerous to attribute success or failure to any specific aspect. When trying to extract knowledge from a case study it’s easy to draw the wrong conclusion, such as concluding that the lesson of Snow White is never to eat apples.

Case studies can illustrate knowledge that was generated elsewhere. However, knowledge that is likely to be helpful for you in the future needs to be generated from many different studies under different conditions, not a single story anchored in a specific brand and context.

3. Learn from better-quality sources

Our knowledge is only as good as the quality of the sources we learn from. This holds true for people, and for AI too. Rather than wait to naturally experience all the different conditions we need for broad learning (which might take many lifetimes), we can turn to experts for help. Right now, AI has been trained on such a broad swath of information that it lacks the discrimination to distinguish good-quality from poor-quality inputs. For example, I get a very different answer on advertising effectiveness if I ask ChatGPT the general question ‘what makes for effective advertising?’ than if I put some quality control constraints around the response by asking ‘what does the Ehrenberg-Bass Institute say makes for effective advertising?’ Our current advantage is that we can implement quality control on our sources of learning, so look for advertising effectiveness knowledge that is built from sound science-based principles.

Featured image: Ryan Arya / Pexels

Jenni Romaniuk, Research Professor of Marketing and Associate Director (International) at the Ehrenberg-Bass Institute

Professor Jenni Romaniuk is a Research Professor of Marketing and Associate Director (International) at the Ehrenberg-Bass Institute. Jenni is the key architect behind the Ehrenberg-Bass approaches to Distinctive Asset, Category Entry Point and Mental Availability measurement. She has written three books: Building Distinctive Brand Assets, which helps marketers to future-proof their brand’s identity; How Brands Grow Part 2, which builds on the knowledge revolution started in How Brands Grow; and Better Brand Health, which provides a valuable resource for those looking to get the most out of their brand health tracking. A past editor of the Journal of Advertising Research, Jenni now sits on the Journal’s Senior Advisory Board.
