Twenty leading technology companies, including giants such as Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X, have pledged to join forces to combat the deceptive use of artificial intelligence (AI) in the 2024 elections. The collaboration was announced at the Munich Security Conference (MSC) and marks a significant step towards safeguarding democratic processes worldwide from AI-generated content weaponized to misinform voters, including in the upcoming US presidential election.
Key points of the accord

The Tech Accord to Combat Deceptive Use of AI in 2024 Elections outlines a series of commitments aimed at detecting and countering harmful AI content designed to deceive voters. These commitments include:
- Collaborative efforts: Signatories have agreed to work together on developing tools to detect and address the online distribution of deceptive AI content[1].
- Educational campaigns: There will be a drive to educate the public on the risks associated with AI-generated content and how to identify it[1].
- Transparency: Companies have pledged to provide transparency regarding their efforts to address deceptive AI content.
Deepfakes are a growing concern that we need to address. The Tech Accord announced at the @MunSecConf today represents a decisive step, bringing 20 tech companies together with concrete voluntary commitments at a vital time to help protect the elections. Here are the immediate…
— Brad Smith (@BradSmi) February 16, 2024
As of today, the signatories include Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.
What is considered “deceptive AI election content”?
Brad Smith, Vice Chair and President of Microsoft, explains why deceptive AI election content is so dangerous in a post on Microsoft’s “On the Issues” blog:
“AI is bringing a new and potentially more dangerous form of manipulation that we’ve been working to address for more than a decade, from fake websites to bots on social media. In advance of the New Hampshire primary, voters received robocalls that used AI to fake the voice and words of President Biden.”
Deceptive AI election content is any AI-generated audio, video, or imagery that falsely represents political candidates, election officials, or other key figures in democratic elections, or that disseminates incorrect information about voting procedures.
The initiative also includes a commitment to support public awareness and resilience against deceptive AI content, recognizing that an informed public is a strong defense against the threat of deepfakes in elections.
