
Deceptive AI election content: Big U.S. tech companies sign accord combating the massive problem in 2024

Written by Dave W. Shanahan

February 17, 2024

Twenty leading technology companies, including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X, have pledged to join forces to combat the deceptive use of artificial intelligence (AI) in the 2024 elections. The collaboration was announced at the Munich Security Conference (MSC), marking a significant step toward safeguarding democratic processes from AI-generated content weaponized to misinform voters, including in the upcoming US presidential election.

Key points of the accord

(Image: Microsoft)

The Tech Accord to Combat Deceptive Use of AI in 2024 Elections outlines a series of commitments aimed at detecting and countering harmful AI content designed to deceive voters. These commitments include:

  1. Collaborative efforts: Signatories have agreed to work together on developing tools to detect and address the online distribution of deceptive AI content[1].
  2. Educational campaigns: There will be a drive to educate the public on the risks associated with AI-generated content and how to identify it[1].
  3. Transparency: Companies have pledged to provide transparency regarding their efforts to address deceptive AI content.

As of today, the signatories include Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.

What is considered “deceptive AI election content”?


Brad Smith, Vice Chair and President of Microsoft, explains why deceptive AI election content is so dangerous in a post on Microsoft’s “On the Issues” blog:

“AI is bringing a new and potentially more dangerous form of manipulation that we’ve been working to address for more than a decade, from fake websites to bots on social media. In advance of the New Hampshire primary, voters received robocalls that used AI to fake the voice and words of President Biden.”

Deceptive AI election content is any AI-generated audio, video, or imagery that falsely represents political candidates, election officials, or other key figures in democratic elections, or that disseminates incorrect information about voting procedures.

The initiative also includes a commitment to support public awareness and resilience against deceptive AI content, recognizing that an informed public is a strong defense against the threat of deepfakes in elections.



I'm Dave W. Shanahan, a Microsoft enthusiast with a passion for Windows, Xbox, Microsoft 365 Copilot, Azure, and more. I started MSFTNewsNow.com to keep the world updated on Microsoft news. Based in Massachusetts, you can email me at davewshanahan@gmail.com.