AI and The Human Narrative

Dr Beth Singler explores the narratives around artificial intelligence and how humans and machines are morphing into each other

As our lives become increasingly intertwined with new and seemingly disruptive technologies, our diverse reactions to artificial intelligence (AI) reveal a varying set of potentials and perils for different people. Dr Singler, Junior Research Fellow in AI at the University of Cambridge, suggests it is not just our lives that are changing, but also who we are as humans, as we adapt to the systems with which we interact.

The human and the machine

One side of this change is robomorphisation: seeing the human as non-human, which runs the risk of making us more like data than human beings. As an anthropologist, Singler is attentive to the complexities of human nature. “We do try and produce theories in part to explain what humans do, but we also understand that the human is never captured entirely by the theory. When you treat human beings as robotic forms, completely rational, likely to behave in particular ways, characterisable by certain identity markers, you lose a lot of that interesting messiness.”

The counter to robomorphisation is anthropomorphisation: seeing the non-human, such as an AI system, as more human-like than it is. With our natural human tendency to anthropomorphise, we tend to fill in the gaps in AI’s abilities. “We think they are capable of abilities that humans have before they are actually capable of them,” says Singler.

We do not yet know whether it will ever be possible to develop something we could identify as a core emotion in a robot. However, Singler says we already live in a world where, to an extent, the simulation of emotions by technologies is possible. For instance, even something as simple as an AI assistant simulates someone polite and always happy to talk to you.

Empathetic responses

The caveat is that humans have a great capacity for empathy, even where there is no simulation of emotion. The Boston Dynamics videos circulating on the Internet are a good example of this: footage of robots being shoved with a stick or kicked to test their mobility. Parodies exaggerating the mistreatment of the robots, such as Corridor Digital’s Bosstown Dynamics videos, caused some people to have empathetic reactions, as if these machines really were being maltreated. Notably, The Jack of All Nerds Show even released a video that sets the phrase “every hour a robot is beaten or abused” to the kind of music often used in charity appeals.

While these videos did not set out to provoke empathetic responses from us, the fact that they did is why Singler thinks we need to be careful. “Because our empathetic response is easy to simulate. And the simulation of emotions by corporations for particular ends is happening already.”

Even for people who may have greater digital literacy, a potential peril might be going from “this AI sounds human” to “this AI has the same consciousness as a human.” While those two things may be miles apart in our understanding and our technology, “they are extremely close together in our dreaming and our anthropomorphisation of the non-human other,” says Singler.

“There was footage of robots being shoved with a stick or kicked to test their mobility. Some people had empathetic reactions as if these machines were being maltreated.”

Science fiction vs. science fact

Assumptions about AI’s current capabilities often stem not from education but from science fiction or representations in the popular press, because these give a “general impression of what AI is capable of to the general public.” Singler points to newspapers illustrating stories about even quite banal advances in AI with pictures of The Terminator as an example of such misleading representations.

“The assumption that anything that would be like us would turn on us, I think that’s quite an interesting reflection on what we think of ourselves,” she says, “but the stories we engage with also inform our assumptions and our opinions about what might happen in the future.”

The relationship between our science fiction and our fears is a cyclical one: how we feel informs the science fiction that gets written, and those narratives in turn reflect where we stand. While Singler is a science fiction fan herself and believes there is a space to enjoy dystopian stories, what concerns her is when people can’t tell science fiction from science fact, and those boundary lines become blurred. “We have to be very careful of the stories that we tell ourselves; that is the key thing.”

The apocalyptic narrative

There has been a trend in the broader conversation around AI’s risks to focus on a potential artificial general intelligence (AGI) posing an existential threat. Singler thinks some of the discussion can be characterised by a fear of being replaced: “if we imagine the pyramid of intelligence and we are at the top, like the apex predator, what happens when we create something smarter than ourselves?”

Many of the people leading the conversation who describe narratives of an apocalypse or a disastrous AGI “also fulfil the criteria of the type of people you might think would be scared of being replaced by a superintelligence,” says Singler. From prominent figures like Elon Musk, Nick Bostrom and the late Stephen Hawking to self-proclaimed movements such as transhumanism, the majority tend to be male and highly educated. And while there are variations and exceptions to that, “perhaps it says something that those people who are interested in AI and its detrimental effects are also the ones who consider themselves to be some of the most intelligent people there are.”

One of the things that drives Singler’s research is to communicate that while there are concerns with AI, “it is not all Skynet like it can sometimes appear in the mainstream media.”

Human risk and more immediate concerns

A more immediate risk might be that the more we interact with artificial systems, the more we assume that humans have to fit into those systems and reduce themselves to do so. Singler cites Amazon factory workers who, having to work around robotic systems for picking and packing items, went on strike for proper breaks and better treatment with banners that read “I am not a robot.”

The A-Level scandal in the UK is another example: with exams cancelled due to the pandemic, students’ grades were predicted by an algorithm, and grades were marked down for students from schools that historically had not done as well. Individual success was thus limited by reducing people to data. While the blame was laid on the algorithm, Singler says that a certain understanding underpinned the setting up of the algorithm, the data it was fed and the outcomes that were assumed: our assumptions about class, economics, and how capable people are of escaping their situations.

“We don’t necessarily have to go as far as the Robo-apocalypse people think about with conscious scary robots; people are already suffering mini Robo-apocalypses. We can see so many examples already in diagnosis, in predicting recidivism for parole hearings, in responding to people of different minority groups because they don’t fit what the creators of the algorithm thought was the ideal or the normative type of human. Perhaps we need to see more stories from other cultural groups, ethnic groups, gender groups to get more of a sense of what are everyone else’s concerns and not just simply the very dystopian and the very utopian.”

Defne Saricetin

Defne Saricetin is a writer, creative consultant and freelance journalist based in London. She has worked on projects for Vogue, Semaine and the New York International Screenplay Awards, and is interested in the arts, storytelling, social psychology, and the cultural and creative industries.
