The principles are designed to ensure that the industry embraces AI in an ethical way that protects both consumers and those working in the creative sector. They cover issues around transparency, intellectual property rights, human oversight and more.
The twelve principles are:
- AI should be used responsibly and ethically.
- AI should not be used in a manner that is likely to undermine public trust in advertising (for example, through the use of undisclosed deepfakes, or fake, scam or otherwise fraudulent advertising).
- Advertisers and agencies should ensure that their use of AI is transparent where it features prominently in an ad and is unlikely to be obvious to consumers.
- Advertisers and agencies should consider the potential environmental impact when using generative AI.
- AI should not be used in a manner likely to discriminate or show bias against individuals or particular groups in society.
- AI should not be used in a manner that is likely to undermine the rights of individuals (including with respect to use of their personal data).
- Advertisers and agencies should consider the potential impact of the use of AI on intellectual property rights holders and the sustainability of publishers and other content creators.
- Advertisers and agencies should consider the potential impact of AI on employment and talent. AI should be additive and an enabler – helping rather than replacing people.
- Advertisers and agencies should perform appropriate due diligence on the AI tools they work with and only use AI when confident it is safe and secure to do so.
- Advertisers and agencies should ensure appropriate human oversight and accountability in their use of AI (for example, fact and permission checking so that AI generated output is not used without adequate clearance and accuracy assurances).
- Advertisers and agencies should be transparent with each other about their use of AI. Neither should include AI-generated content in materials provided to the other without the other’s agreement.
- Advertisers and agencies should commit to continual monitoring and evaluation of their use of AI, including any potential negative impacts, not limited to those described above.
The AI Safety Summit, which took place on 1-2 November, saw delegates from 28 governments around the world, as well as leaders from top AI companies, gather to address risks related to AI.
On Wednesday, the 28 countries, including the US and China, agreed to work together to contain the potentially ‘catastrophic’ risks posed by advances in artificial intelligence.
Richard Lindsay, Director of Legal & Public Affairs at the IPA, said: ‘The use of AI has grown exponentially in all industries, bringing with it huge opportunities as well as a wealth of new legal, regulatory and ethical challenges that need to be understood and addressed.
‘The importance of AI is evidenced by the Government’s bringing together of world leaders and tech giants at this month’s AI Safety Summit.’
Featured image: Pixabay