A question of morals: is not announcing the biggest lie in AI application?

Mike Chivers is concerned about the morality of where we might be going

Tech advancements have revolutionized industries, even changing the face of advertising and public relations. One of the most notable is the rise of artificial intelligence (AI) and conversational AI models like ChatGPT — constantly evolving, expanding in knowledge, ability, and capability.

These tools, adopted by a growing number of people, offer new possibilities for thinking and crafting (supposedly). They are even used to help imagine creative and PR campaigns, or at least to kick-start them. Questionable, though, in terms of originality and uniqueness; they can lack human touch and emotion.

As with any innovation, questions arise about ethical implications and honesty. What are we using it for, when are we using it, and are we telling people when we do? A moral dilemma? Or okay as experimentation? Whichever it is, here we ask: is integrating it into everyday creative processes considered cheating or, even worse, lying?

The use of AI itself is not inherently lying. AI is a technology that processes data and generates outputs based on its programming and training. It does not possess intention, consciousness, or the ability to deceive on its own. The ethical implications arise from how AI is used and the responsibility of the individuals or organizations employing it.

You can tell from the rigidity of that answer that the above came from ChatGPT. I blindly asked ‘Is using AI lying?’ And I’m aware people will be screaming at how bad that prompt is. But its reply and stance are clear. Deny, deny, deny. Decline responsibility and any intention. It does not ask or want to know why you’ve decided to start the conversation, nor where or how you’ll use the information once it spits it out.

It’s all about human morals

Knowing how to use, interpret and apply the knowledge you get back. But what about the morals of AI? If it’s programmed to learn and adapt, can it not learn and adapt to tell lies? To deceive or be deceptive?

ChatGPT observes patterns and can be commended for its replies with a simple thumbs up from the user. If an untruth is commended enough times, is it possible for a lie to turn into a positive pattern that is then distributed as reliable? The mind wonders!

Let’s defer to ‘A.I. isn’t making mistakes, it’s lying’, written by creative chairman PJ Pereira, who shared: ‘Stephen Hawking once told the BBC that AI would treat humanity like we treat an ant hill. If for some reason our existence stood in front of its goals, they would have no problem eliminating us. We may be still far from the point where AI minds have such power, but at the core of Mr Hawking’s thought is the explanation behind how AI bots have been making so many confident mistakes. Or, if you prefer: unapologetically lying to our faces.’

Going deeper…

IEEE Spectrum comes to the rescue of AI. A guest author blames the input of an ‘adversarial attack’ for causing an untruth. It is described as a purposeful ‘attempt’ (presumably programmed by humans) ‘to deceive an AI into believing, or to be more accurate, classifying, something incorrectly’ — which then makes it lie.

And as proof of these ‘attacks’: self-driving cars thinking stop signs were speed limit signs, and an image-classifying AI tool identifying a panda as a gibbon. If humans think that AI tools can be deceptive, but humans are manipulating AI so it makes mistakes and lies, then is human intention moral? Probably too big a question to ask. In whatever guise, this is not hopeful practice, especially when application is becoming more widespread.

Will companies start using AI practices to (replace creatives and) deceive whole industries and their fans all at the same time? To close, an example: Marvel Studios backtracked on its use of AI to create ‘cringey’ opening credits for Secret Invasion.

Lazy commentary from its executive producer, Ali Selim, suggested ‘the computer would go off and do something‘, whilst an official statement (that came much later) revealed a cohort of creatives were part of the process: ‘It involved a tremendous effort by talented art directors, animators (proficient in both 2D and 3D), artists, and developers, who employed conventional techniques to craft all the other aspects of the project.‘ So, I leave you with a question: is not announcing the biggest lie in AI application?

Featured image: Marvel’s Secret Invasion / Disney

Mike Chivers, Creative Director at The PHA Group

Mike Chivers works for The PHA Group as Creative Director, in what is a newly created role. His past client experience includes Heineken, Mars Inc., HSBC, Primark, Starbucks and Nissan Europe. During his career, Mike has worked to level the playing field for all football fans and players, found new ways for big brands to celebrate milestone anniversaries differently, immersed himself in the world of sustainable and circular fashion and home furnishings, and helped expats feel like locals no matter where they are.
