AI-Generated Fraud Poses Significant Threat to Voters Ahead of 2024 Elections

Experts Warn of Growing Risk as Artificial Intelligence Evolves, Targeting Electoral Processes

In the wake of a fraudulent robocall that targeted New Hampshire residents with a fabricated message from President Biden, experts caution that voters may face a wave of AI-generated content designed to interfere with the 2024 primary and presidential elections. As threat actors leverage AI to enhance their attacks, researchers are racing to develop defensive capabilities, recognizing the “significant threat” posed by generative AI.

AI’s Potential Impact on Elections:

James Turgal, Optiv Vice President of Cyber Risk, says AI’s greatest impact may be its potential to compromise the security of party election offices, volunteers, and state election systems. Threat actors aim to alter vote totals, undermine confidence in electoral outcomes, or incite violence, and AI gives them the ability to do so at massive scale.

Mitigating the Threat:

To counteract the threat, Turgal recommends:

1. Policies Against Social Engineering Attacks:
– Election offices should establish policies to defend against social engineering attacks.

2. Deepfake Video Training:
– Staff must undergo deepfake video training to recognize attacks across channels, including email, text messages, and social media platforms.

3. Responsibility of AI Tool Developers:
– Private sector companies developing AI tools, particularly large language model chatbots, have a responsibility to ensure their products provide accurate information about elections. AI models must acknowledge their limitations and redirect users to authoritative sources.

Challenges and Investigations:

– The fraudulent New Hampshire robocall, which featured a cloned version of President Biden’s voice, is under investigation. Tracing the origin of such calls is difficult because voice replication applications are widely available.

– Chris Mattmann, NASA Jet Propulsion Laboratory Chief Technology and Innovation Officer, acknowledges the challenge of discerning AI-generated content that reaches high levels of authenticity. The rapid advancement of voice cloning software, aided by data collected from virtual voice assistants, poses a significant concern.

Addressing Privacy Concerns:

– The accelerated development of voice cloning software is attributed to data collected by virtual voice assistants like Amazon Alexa and Google Assistant, raising privacy concerns about the use of individuals’ voice snippets without informed consent.

– Although experimentation with labels identifying AI-generated campaign products is ongoing, regulations on the deceptive use of AI in politics have yet to be implemented.

Political Landscape and AI Manipulation:

– Political groups and politicians have actively used AI to target opponents in political attacks. Instances include AI-generated images in attack advertisements and depictions of migrant camps in national parks.

– The federal government has urged organizations to experiment with labels for AI-generated campaign products, but adoption is pending.

Future Challenges and Preparedness:

– Mattmann highlights the need for tools that can discern AI-generated content in political campaigns. Detection methodologies exist, but access to them is limited, and campaigns often lack the technical expertise to apply them.

– Anticipating rapid change across the audio, text, and video realms, Mattmann stresses the importance of developing detection tools that are ready for future election cycles.

The growing intersection of AI and electoral processes prompts a call for heightened awareness, defensive strategies, and regulatory measures to safeguard the integrity of democratic processes.