Human. Error.

Proust, empaths, and the fallibility of AI

I introduced my 8-year-old daughter to ChatGPT the other month…

After tiring of writing silly poems about her grandad, she turned her attention to her current obsession, Keeper of the Lost Cities, a ten-book extravaganza. If I tell you it involves elves, empaths and first kisses, you’ll get the picture.

Keen to know what ChatGPT thought about KOTLC, she asked, ‘what does trust the empath mean?’ It replied with a well-written paragraph about how Sophie Foster (the main character of the series) is an empath, and that this phrase is ‘a reminder to trust in her intuition and empathetic abilities.’ My daughter gasped; she couldn’t believe it. ‘Sophie,’ she practically spat, ‘is a telepath, not an empath.’ I put this monumental error to ChatGPT, which immediately replied with ‘my apologies for the mistake. You are correct. Thank you for your clarification.’

I had forgotten about ChatGPT’s fallibility until I came across an article in The Guardian recently. In it, author Elif Batuman told the intriguing story of how, when she asked ChatGPT for a half-remembered quote from Proust’s In Search of Lost Time, she received false information and paraphrased passages, and even entirely made-up ones.

ChatGPT: I apologize for any confusion earlier. The passage I provided might not be an exact quote from “In Search of Lost Time.” My previous response contained a paraphrased excerpt that aimed to capture the essence of Proust’s themes about memory and love affairs. It’s possible that my response wasn’t accurate to the specific wording in the original text.

It seems we cannot expect AI, in this case ChatGPT, to always provide accurate information

It could just be paraphrasing, or making things up. Which is weird, because these are mistakes and behaviours we usually associate with humans, not machines. Machines, up to this point, could always be counted on to be right.

And, interestingly, the mistakes ChatGPT is making feel categorically different to human ones. A human expert in Proust would be able to pinpoint that passage easily and without error. My 8-year-old KOTLC expert could certainly tell you Sophie is NOT an empath (a ridiculous notion). The mistakes are different, but are the outcomes too?

Digital AI mistakes perhaps don’t have the same kind of romance about them that human ones can have. Anyone who paid attention in science class knows that the discovery of penicillin was a beautiful and life-enhancing yet human mistake. But did you know that Post-it notes were too? And the Slinky, which engineer Richard James created accidentally while trying to design stabilising springs for naval ships, only to devise something a whole lot more fun instead.

Imperfections have been described, incongruously, as the perfection of humans

This is also why AI is a double-edged sword when it comes to avoiding mistakes. We often assume automation makes things safer and more efficient, which it usually does. But can we trust it too much?

An article in The Conversation points out potential new concerns caused by AI automation, citing the fact that airline pilots log fewer true flying hours today, thanks to the efficiency of autopilot systems. Which is fine until the autopilot fails, and the pilot has less hands-on experience to draw on when rectifying a potentially dangerous situation. In the same vein, the first of a new breed of oil platform (Sleipner A) sank because engineers trusted software calculations. In fact, the model was wrong, but it had ‘presented the results in such a compelling way that they looked reliable.’

It’s like a far more serious version of ChatGPT’s embarrassing empath/telepath mistake. We’re realising that AI can, and does, make mistakes. But unlike human mistakes, they’re less likely to be the beautiful kind.

Featured image: cottonbro studio / Pexels

Emily Rich, Lead Strategist

Over the past 15 years Emily has encountered and tackled strategic challenges across virtually every category, working with numerous and varied brands, from Mercedes-Benz to Twinings tea. In recent years she’s headed up work for a number of UK Government clients including PHE and the Home Office, from creating strategies to improve health behaviour, such as Stoptober and Child Vaccination, to devising innovative and award-winning approaches for improving diversity in Police Recruitment. With a passionate, and ever-growing, interest in all things human behaviour, Emily continuously seeks to challenge and provoke accepted norms to create true behaviour change.
