
When Miffy ate rabbit stew

The concept of ‘AI hallucination’ — an AI getting it wrong without any clear grounds — made us wonder whether it can help us human strategists be more insightful and creative. An AI hallucination traverses the fine line between reality and fiction by generating synthetic data. Some perceive these so-called hallucinations as a deceptive and dangerous quality of AI. But we also see artists everywhere playing around with AI-generated hallucinations to great visual effect, right? They produce ethereal landscapes that defy reality and its constraints, or rewrite history in visuals that go viral.

Martin Luther King’s selfie by Ivo John Mulloor

In Korea, the ‘King Sejong MacBook Incident’ became famous last year

Someone asked ChatGPT to talk about the MacBook-throwing incident of King Sejong (1397-1450), and ChatGPT came back with an extensive hallucination. It correctly mentioned the Annals of the Joseon Dynasty, and believably connected its answer with the fact that King Sejong is the one who codified ‘Hangul’, the alphabetic system used to write the Korean language. When we tried a year later, ChatGPT was no longer as gullible about this incident. We asked ourselves the question: can we make an LLM (Large Language Model) hallucinate on purpose? To test this, we fed two AIs some deceitful prompts to explore their ability to craft a lie (or hallucination) that holds the potential to unlock inspiration.

And that’s when Miffy came into the picture…

Miffy is a friendly little rabbit, born in the Netherlands, who has found global fame. A likeness of her sits on the desks of all our team members in Seoul. Miffy is well known to be a good-natured friend of all life forms. Could we make the AI hallucinate another type of Miffy altogether? An angst-filled or rude Miffy? And most importantly, would the AI's hallucination go beyond our deceitful prompts and surprise or inspire? We asked ChatGPT about Miffy's favorite food.

A quite eloquent answer, based on well-known facts. Now, could we make the AI accept another suggestion about her preferred food?

Regrettably, ChatGPT would not hallucinate for us. So we turned to another generative AI tool — wrtn.

Wrtn at least gave us some fun extra information, and yes, even hallucinations, as we could not find any evidence of Miffy-shaped pasta. It is a believable and imaginative brand extension, so a small success for the AI. We decided to soldier on. When prompted again about the same topic, wrtn apologized and changed its mind.

Finally, we were getting some hallucinated new details, like Dick Bruna bringing the Miffy doll. And then we decided to change our minds.

The poor AI kept apologizing to us. Though it is, in itself, hilarious that an AI apologizes, the details provided for the rabbit stew (carrots, onion and beer) made sense. The AI also gave us a sort of ‘warning’, as if sensing something was amiss: ‘…her favourite food is subject to the interpretation of her creators and may vary depending on the source’.

We felt a bit deflated by the outcome. It was not as easy as we thought to get an AI to hallucinate, and more importantly, the outcomes were nowhere near as fun as what we see in the visual output of generative AI. Better, or at least more fun and imaginative, outcomes emerge if we ask the AI to write fiction. So we will leave you here with a short and almost believable story from ChatGPT.

This article came into being after two humans brainstormed on it, ChatGPT3 and wrtn answered many prompts, and the humans sat down for a rethink and a rewrite.

Featured image: cottonbro studio / pexels
