MUNICH, February 16, 2024: About 40 countries across the globe are expected to hold elections in 2024. Prominent technology companies, concerned about possible misuse of Artificial Intelligence (AI) in these elections, have agreed at the Munich Security Conference to counter ‘deceptive AI content’ that might interfere with voting processes.
Major global technology companies are part of this initiative. AI has likely already been used in some recent elections, and it remains to be seen what happens on the ground in large democracies when voters are confronted by AI-generated content. The issue is concerning because many voters may not be AI-literate.
The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters. Signatories pledge to work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps. It also includes a broad set of principles, including the importance of tracking the origin of deceptive election-related content and the need to raise public awareness about the problem. The accord is one important step to safeguard online communities against harmful AI content, and builds on the individual companies’ ongoing work.
A media release from aielectionaccord.com says:
Digital content addressed by the accord consists of AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote. As of today, the signatories are: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.
Participating companies agreed to eight specific commitments:
• Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate
• Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content
• Seeking to detect the distribution of this content on their platforms
• Seeking to appropriately address this content detected on their platforms
• Fostering cross-industry resilience to deceptive AI election content
• Providing transparency to the public regarding how the company addresses it
• Continuing to engage with a diverse set of global civil society organisations and academics
• Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

These commitments apply where they are relevant to the services each company provides.
“Elections are the beating heart of democracies. The Tech Accord to Combat Deceptive Use of AI in 2024 Elections is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices,” said Ambassador Dr. Christoph Heusgen, Munich Security Conference Chairman. “MSC is proud to offer a platform for technology companies to take steps toward reining in threats emanating from AI while employing it for democratic good at the same time.”