AI companions: why brands should tread carefully

Joel Stein explores the risks, from safety to wellbeing to privacy

What do you look for in a friend? What about a partner? Increasingly, people are turning to AI to meet these needs. Millions of humans are building parasocial — and romantic — relationships with AI. Google searches for ‘AI girlfriend’ now far outstrip searches for ‘dating advice’. Whilst ChatGPT has been monopolising attention around conversational AI, ‘companion’ apps like Replika and Anima have been steadily growing, as I’ve explored in detail. And anyone can now create their own chatbot-with-personality, thanks to custom GPTs and services like Character.AI.

We’re about to start seeing more of these experiences offered by brands from outside the AI space too. Some are already dabbling with AI-powered chat as a new way to pull people into their platforms and engage them. 

Meta is testing celebrity-fronted, character-driven chatbots 

Meta is rolling out a bunch of new AI experiences, including a cast of chatbot characters. These include MrBeast as ‘Zach, a big brother who will roast you,’ Kendall Jenner as ‘Billie, a ride-or-die older sister,’ and Snoop Dogg as an ‘adventurous storyteller’ called the Dungeon Master. The company has paid up to $5m to each celebrity to secure their involvement; clearly they think this has big potential. 

According to Mark Zuckerberg, ‘every creator is going to want an AI assistant, something that can help them build their community.’ 

Will every brand want one too? 

Sunscreen brand Bask is launching a friendly ‘pool boy’ chatbot

US-based Bask is launching a GPT-powered ‘Bask Pool Boy’ chatbot this month. As reported in Glossy, this is intended to be more than a functional customer service bot; it’s designed for ‘well-rounded conversations’. Bask’s CEO wants it to be something customers ‘can get lost within’, adding: ‘I would love it if 80% of the conversation had nothing to do with sunscreen’.

Why that’s a useful goal is unclear. 

Alexa’s LLM revolution 

Meanwhile, Alexa is moving towards full LLM-powered functionality. Brands will soon be able to develop lifelike, character-driven voice experiences on Alexa, much closer to what was portrayed in the movie Her than anything seen before. As someone who works with words, a part of me is excited about these opportunities to communicate with customers in new ways. Conversational interfaces are a whole new canvas for bringing a brand personality to life, but also a path full of ethical and PR potholes. 

Let’s take a look at some of the real risks and concerns:

How might branded AI companions backfire?

Brand safety

AI-powered conversations are open-ended and unpredictable. They’re a completely different beast to the scripted flows and stock answers we’ve seen in earlier chatbot interfaces. No matter how selective your training data, great care needs to be taken in designing guardrails and monitoring usage. The risks range from releasing something that’s just not that helpful to releasing something actively harmful. 

User wellbeing

Last year, a Belgian man took his own life after chatting with an AI chatbot on an app called Chai. Also last year, Replika signposted suicide hotlines within its app after changes to its conversational model led to users’ digital companions emotionally shutting off from them. With great (artificial) intimacy comes great responsibility. But as things stand, companies can change how their companions work on a whim, with potentially dangerous consequences. And there’s no code of practice for how chatbots should respond to users who indicate they may be a danger to themselves or others. 

As well as making users emotionally vulnerable, this tech can also simply be addictive. We need to get better at defining the boundaries between fun interaction and addictive or unhealthy behaviour. How can brands help users engage mindfully with these experiences? Should they encourage habitual users to disconnect? 

Privacy and consent

What kinds of revealing behavioural or psychological insights might brands gain about their customers through these conversations, and how should they be allowed to use or sell that data? What constitutes informed consent from users? 

Wider society

Finally, let’s consider the broader implications of the rise of AI companions for society. How might they warp human relations in the real world? In a recent AI Breakfast poll with over 2,000 responses, 60% of people said that digital companions will ultimately be ‘horrible for humanity’. And sociologists studying AI share these concerns; the respected MIT professor Sherry Turkle argues that ‘what is at stake here is our capacity for empathy.’ For brands wondering ‘how might we get closer to our customers in 2024?’, I don’t think building a friendly AI chatbot who just wants to get to know you should be high on the list.

Better ideas include: investing in in-person qualitative research (1 in 5 strategists don’t do this) and making it easy for customers to speak to a human when they need to.

I’ll leave you with this exchange from a public GPT called Empath Engine (‘I build deep friendships through empathy and understanding’). Draw your own conclusions. 

Featured image: Replika / Instagram

Joel Stein, Freelance writer and creative consultant

Joel Stein is a freelance writer and creative consultant. He worked in marketing and design agencies for 11 years, and now works directly with founders and senior teams to nail brand voice, messaging, and copy. When he's not writing for MediaCat or clients, Joel writes a newsletter called Weirdness Wins, which explores the fuzzy edges of culture, communication, and consciousness.
