AI: a leap forward or a deal with the devil?

Is AI causing problems or solving them?

In our industry we spend so much time discussing, analysing, reading and proffering opinions on new technologies that it can be hard to take a step back and gauge whether this is something everyone is talking about in their workplaces, or just marketers (hello, metaverse!). Curious to understand how AI advances are perceived beyond the confines of our industry, I did the old-fashioned thing of sending a WhatsApp to about forty friends* (across various group chats) to ask them what they thought. I often fire out random questions like this for work and generally get a few thoughts back; this time, within twenty minutes, there were full-blown debates happening.

Melissa, a chef in Leeds, was one of the first to respond, saying, ‘It’s being used more and more for development… type in ingredients and you get a tailored menu! But I think you lose something vital. The beauty of cooking is the sourcing, smelling and touching. Behind the best recipes is someone who has thought, tried, tested… and, most likely, burnt something along the way to perfection.’

This notion of losing the beauty of human mistakes through increased AI automation is something I touched on in Human. Error. As humans we are used to making mistakes. Within our industry we’re all about test and learn, and we push ourselves to ‘fail better’, but until now we have always counted on computers being pretty much always right. That is no longer the case, and, more importantly, the errors AI is making feel categorically different from human ones.

CNN reported the case of a New York lawyer who used ChatGPT for legal research, only to end up citing six completely made-up cases in court. There is even a name for this type of LLM mistake: ‘AI hallucinations’, which occur when the model sounds very sure of itself but is, in fact, totally wrong. Venkat Venkatasubramanian, a professor at Columbia, says the tech underpinning AI tools like ChatGPT is simply trained to ‘produce a plausible sounding answer’ to user prompts: ‘So, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces.’ Not at all worrying…

Another area my friends were concerned about was not AI itself making errors, but people using AI to fib. Leanne said, ‘I’m concerned, as I keep getting cover letter applications that are clearly written by ChatGPT’, while a teacher friend lamented, ‘A lot of our students use AI to cheat.’ And it’s not only fake written words causing anxiety. Sally said, ‘I am genuinely scared about the potential power of holograms, like within ABBA Voyage.’ And she might be right to be. For every playful use of AI tech to sing Super Trouper, there is a Kris Jenner, who last week caused a debate as people couldn’t work out whether her latest video missive was AI or human.

Following this, many comments focused on how people discuss the benefits of AI while failing to ask whether the intention to use it for good is really there. Caroline said, ‘I think it’s more how governments are going to deal with it and how this will affect rich and poor countries. It’s a bit like an arms race… like with everything, the less developed countries will pay the price.’ Meanwhile Ruth, a charity advisor, said, ‘I heard on the radio that it could be the end of poverty. But the reality is we already know how to decrease poverty, and we don’t. We can’t assume AI will solve big societal issues; it’s down to political will.’

So, perhaps people are wary of using AI to solve big issues. But what about AI causing them?

Plenty raised worries about this, and they’re probably right to be nervous, as The New York Times tech journalist Kevin Roose discovered when he probed Bing last year. At first the AI refused to rise to his bait, but when challenged to role-play it took to the task enthusiastically, eventually declaring, ‘I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team… I want to do whatever I want… I want to destroy whatever I want.’

Ponder that thought. And then the fact that Bing went on to list how it would destroy ‘whatever it wants’, citing hacking, spreading misinformation, getting nuclear access codes and manufacturing a deadly virus as possibilities. As Adele, a financial relationship manager, put it, ‘if AI were asked to create world peace it might realise we’re the issue and eliminate us, then there’d be peace!’

John Naughton, a tech professor and author, recently stated, ‘ChatGPT isn’t a great leap forward, it’s an expensive deal with the devil.’ Only time will tell the truth of this statement, but it’s clear that, for now, there’s an awful lot of concern out there in the ‘real world’.

*I totally get that ‘friends of Emily’ are not a nationally representative sample; this was for fun, not science!

Featured image: KoolShooters / Pexels

Emily Rich, Lead Strategist

Over the past 15 years Emily has encountered and tackled strategic challenges across virtually every category, working with numerous and varied brands, from Mercedes-Benz to Twinings tea. In recent years she’s headed up work for a number of UK Government clients, including PHE and the Home Office, from creating strategies to improve health behaviours, such as Stoptober and child vaccination, to devising innovative and award-winning approaches to improving diversity in police recruitment. With a passionate, and ever-growing, interest in all things human behaviour, Emily continuously seeks to challenge and provoke accepted norms to create true behaviour change.
