It’s Election Day 2024, and a woman — let’s call her Dorothy — answers the phone and hears Joe Biden’s voice. The prerecorded call from the president informs her that militant groups have threatened to disrupt the polls — which is why the White House has extended the window for mail-in ballots. She’s directed to a website for more details. Dorothy stays away from the polls, not realizing that the voice, advice, and website are all fake.
The 2024 elections are going to be upended by rampant disinformation like this, supercharged by artificial intelligence. We were responsible for helping to oversee election protection under both Democratic and Republican presidents, and in our judgment, America is woefully unprepared.
That’s also the view of leading experts we convened in recent weeks alongside researchers from Georgetown University’s McCourt School of Public Policy. We consulted with experts from technology platforms, AI companies, think tanks, and civil society groups, as well as former national security officials from the Bush, Obama, Trump, and Biden administrations.
The primary takeaway was that AI technologies will make it easier for bad actors to dramatically scale previous methods of election interference and deploy new ways of manipulating public opinion. For instance, deepfake content will go from the low-quality nuisance it was in previous election cycles to a flood of disinformation with manipulated images, audio, and videos that are indistinguishable from reality.
Some of these tactics have already been deployed in elections in Argentina, India, and particularly Slovakia, where a recording of one of the leading candidates conspiring to rig the vote was leaked two days before the election last month — and may have affected the outcome. The audio was fake.
Here in the United States, it’s a foregone conclusion that we will see AI-powered disinformation deployed at enormous scale, whether it’s manufactured candidate scandals, purported footage of dirty tricks, misleading voting instructions, doctored documents, or believable images of violence and voter intimidation.
Worse still, these techniques are likely to be hyperpersonalized and targeted at specific voters. Our team has run simulations in which we created convincing voicemail messages for would-be voters that sounded as if they came from friends, family members, community leaders, or work colleagues. Each was crafted to affect the target’s behavior. What once would have taken weeks and sophisticated teams now takes minutes and a mobile phone.
The White House and key agencies have been briefed on these findings.
Cybersecurity experts are also worried that AI will make it easier for nefarious actors to tamper with voting systems. For instance, AI-enabled hacking tools could help attackers probe the networks of election administrators, scan for digital weaknesses, socially engineer phishing attacks, and conduct intrusions.
Such networks — and voting machines themselves — are better protected than ever and often completely disconnected from the internet, but even the perception that tampering has increased could have catastrophic consequences, particularly given the worries Americans already have about the integrity of elections.
More than anything, the security professionals we canvassed share the belief that Americans are unprepared for the impact of AI-fueled attacks on the election.
In fact, the experts themselves aren’t ready. We briefed a group of officials responsible for election integrity about the coming surge in misinformation and told them about a scandal in Indiana. A photo of a masked man entering a high school went viral, caused the building to be evacuated, and resulted in a student being arrested at his home by an armed SWAT team. The group of experts weighed in on the controversy, including whether law enforcement went too far.
The only problem is that the scandal never happened. We concocted all of it — the security footage, the headlines, the student’s photo — using readily available AI tools. Yet the officials didn’t question the story’s veracity before developing strong opinions about the student, the school system, and the actions of the Indiana police.
Public officials and voters will soon be exposed to much more nefarious, more believable, and more consequential deepfakes.
For all the looming dangers of these new technologies, AI itself may also provide some of the best defenses for protecting elections.
A dizzying array of new AI applications offers previously unavailable opportunities to thwart attackers, including novel ways to patrol digital systems for intruders, counter disinformation at scale, detect deepfakes, fact-check at machine speed, educate the public in real time, and more.
Yet the government agencies responsible for protecting American elections will readily admit they’re behind when it comes to leveraging AI to defend the vote, partly because bureaucracies have been wary of the technology and partly because they lack the technical talent they need.
The head of one national-security agency told us that tech talent is leaving in droves for the AI gold rush in the private sector, where they can make up to 10 times their government salaries. And agencies are wary of engaging directly with AI companies because of how politicized debates in Washington have become around “Big Tech” and democracy.
But our country can’t afford to dither.
First, we are calling on industry and civil society to fill the gaps.
Specifically, these groups can form standing “red teams” to examine how AI tools could undermine the 2024 election and to continually hunt for ways to deploy the technology to defend against disinformation and digital attacks. Those findings should be shared regularly with the agencies responsible for protecting our elections, along with actionable ways to patch vulnerabilities.
We also propose the establishment of a reserve corps of tech-savvy volunteers who can advise election officials and who stand ready in the lead-up to Election Day to develop rapid solutions to potential crises.
Second, we urge the White House to help agencies fund tests of new AI tools for election defense and to help make those tools available to state and local officials.
The administration should also engage with states that are looking to crack down on deepfakes and with Congress on a draft law that would make it illegal to distribute deepfake content with the intent of fraudulently influencing a federal election. Such legislation needs to be shaped carefully so it prohibits malicious uses of the technology rather than the technology in its entirety.
Finally, America needs a massive public awareness campaign about what’s coming. If AI attacks arrive as an “October surprise,” they will sow public panic — but if voters are ready for an unprecedented explosion of “fake news” and junk disinformation, it’s less likely to undermine confidence in the vote.
On this front, there are lessons from the fight against terrorism.
While some roll their eyes at “See Something, Say Something” campaigns, public reporting has helped stop deadly terrorist attacks time and again. We need a similar effort so the public can identify and report illegal voter suppression that uses AI-generated deception.
That would raise the chances that when Dorothy answers her phone on Election Day 2024 and hears Joe Biden telling her to avoid the polls, she will hang up, report the suspicious robocall, and confidently go cast her vote.
Dan Prieto is an adviser to The Future U.S. policy studio in Washington, D.C. He was director for cybersecurity policy on the staff of the National Security Council at the White House during the Obama administration. Miles Taylor is cofounder of The Future U.S. and was the chief of staff in the Department of Homeland Security during the Trump administration.