Big tech companies at the Munich Security Conference pledged to prevent deceptive AI content from interfering with global elections. The signatories include Adobe, Amazon, Arm, Google, IBM, LinkedIn, Meta, Microsoft, OpenAI, Snap Inc., TikTok, and more. They pledged to build detection tools, raise public awareness, protect election integrity, and safeguard democracy against false information and the weaponisation of AI.
Recently, at the Munich Security Conference (MSC), big technology companies pledged to help prevent deceptive AI content from interfering with this year's global elections, in which more than four billion people in over 40 countries will vote. Elections are set to be held in 2024 in countries including India and the US. With that in mind, signatories pledged to work collaboratively on tools to detect and address the online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps. The accord also includes a broad set of principles, including the importance of tracking the origin of deceptive election-related content and the need to raise public awareness about the problem.
The accord is one important step towards safeguarding online communities against harmful AI content, and builds on the individual companies' ongoing work. The digital content under close watch includes AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote.
As of today, the signatories are: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.
“Democracy rests on safe and secure elections,” said Kent Walker, President, Global Affairs, Google. “Google has been supporting election integrity for years, and today’s accord reflects an industry-side commitment against AI-generated election misinformation that erodes trust. We can't let digital abuse threaten AI's generational opportunity to improve our economies, create new jobs, and drive progress in health and science.”
“Disinformation campaigns are not new, but in this exceptional year of elections – with more than 4 billion people heading to the polls worldwide – concrete, cooperative measures are needed to protect people and societies from the amplified risks of AI-generated deceptive content,” said Christina Montgomery, Vice President and Chief Privacy & Trust Officer, IBM. “That's why IBM today reaffirmed our commitment to ensuring safe, trustworthy, and ethical AI.”
“With so many major elections taking place this year, it's vital we do what we can to prevent people being deceived by AI-generated content,” said Nick Clegg, President, Global Affairs at Meta. “This work is bigger than any one company and will require a huge effort across industry, government and civil society. Hopefully, this accord can serve as a meaningful step from industry in meeting that challenge.”
“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponised in elections,” said Brad Smith, Vice Chair and President of Microsoft. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”