When Russia interfered in the 2016 U.S. presidential election, spreading divisive and inflammatory posts online to stoke outrage, its posts were brash and riddled with spelling errors and strange syntax. They were designed to get attention by any means necessary.

“Hillary is a Satan,” one Russian-made Facebook post read.

Now, eight years later, foreign interference in U.S. elections has become far more sophisticated, and far more difficult to track.

Disinformation from abroad — particularly from Russia, China and Iran — has matured into a consistent and pernicious threat, as the countries test, iterate and deploy increasingly nuanced tactics, according to U.S. intelligence and defense officials, tech companies and academic researchers. The ability to sway even a small pocket of Americans could have outsize consequences for the presidential election, which polls generally show to be a neck-and-neck race.

Russia, according to U.S. intelligence assessments, aims to bolster the candidacy of former President Donald Trump, while Iran favors his opponent, Vice President Kamala Harris. China appears to have no preferred outcome.

But the broad goal of these efforts has not changed: to sow discord and chaos in hopes of discrediting American democracy in the eyes of the world. The campaigns, though, have evolved, adapting to a changing media landscape and the proliferation of new tools that make it easy to fool credulous audiences.

One of the most significant adaptations is that political disinformation is now essentially everywhere.

Russia was the primary architect of American election-related disinformation in 2016, and its posts ran largely on Facebook.

Now, Iran and China are engaging in similar efforts to influence American politics, and all three are scattering their efforts across dozens of platforms, from small forums where Americans chat about local weather to messaging groups united by shared interests.

There are hordes of Russian accounts on Telegram seeding divisive, sometimes vitriolic videos, memes and articles about the presidential election. Hundreds more accounts from China mimicked students to inflame tensions on American campuses this summer over the war in the Gaza Strip.

Russian operatives have also tried to support Trump on Reddit and forum boards favored by the far right, targeting voters in six swing states along with Hispanic Americans, video gamers and others identified by Russia as potential Trump sympathizers, according to internal documents disclosed in September by the Department of Justice.

One campaign linked to China’s state influence operation, known as Spamouflage, ran accounts under the name Harlan to create the impression that the source of its conservative-leaning content was an American, posting across four platforms: YouTube, X, Instagram and TikTok.

The new disinformation being peddled by foreign nations is also far more targeted. It is aimed not just at swing states, but also at specific districts within them, and at particular ethnic and religious groups within those districts. The more targeted the disinformation is, the more likely it is to take hold, according to researchers and academics who have studied the new influence campaigns.

Iran has spent its resources setting up covert disinformation efforts to draw in niche groups. A website titled “Not Our War,” which aimed to draw in U.S. military veterans, interspersed articles about the lack of support for active-duty soldiers with virulently anti-American views and conspiracy theories.

Other sites included “Afro Majority,” which created content aimed at Black Americans, and “Savannah Time,” which sought to sway conservative voters in the swing state of Georgia. In Michigan, another swing state, Iran created an online outlet called “Westland Sun” to cater to Arab Americans in suburban Detroit.

“That Iran would target Arab and Muslim populations in Michigan shows that Iran has a nuanced understanding of the political situation in America and is deftly maneuvering to appeal to a key demographic to influence the election in a targeted fashion,” said Max Lesser, a senior analyst at the Foundation for Defense of Democracies.

Recent advances in artificial intelligence have boosted disinformation capabilities beyond what was possible in previous elections, allowing state agents to create and distribute their campaigns with more finesse and efficiency.

OpenAI, whose ChatGPT tool popularized the technology, reported this month that it had disrupted more than 20 foreign operations that had used the company’s products between June and September. They included efforts by Russia, China, Iran and other countries to create and fill websites and to spread propaganda or disinformation on social media — and even to analyze and reply to specific posts.

“AI capabilities are being used to exacerbate the threats that we expected and the threats that we’re seeing,” Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency, said. “They’re essentially lowering the bar for a foreign actor to conduct more sophisticated influence campaigns.”

The countries are also becoming better at covering their tracks.

Last month, Russia was caught obscuring its attempts to influence Americans by secretly backing a group of conservative American commentators employed through Tenet Media, a digital platform created in Tennessee in 2023.

The company served as a seemingly legitimate facade for publishing scores of videos with pointed political commentary as well as conspiracy theories about election fraud, COVID-19, immigrants and Russia’s war with Ukraine. Even the influencers who were covertly paid for their appearances on Tenet said they did not know the money came from Russia.

In an echo of Russia’s scheme, Chinese operatives have been cultivating a network of foreign influencers to help spread Beijing’s narratives, creating a group described as “foreign mouths,” “foreign pens” and “foreign brains,” according to a report last fall by the Australian Strategic Policy Institute.

The new tactics have made it harder for government agencies and tech companies to find and remove the influence campaigns — all while emboldening other hostile states, said Graham Brookie, the senior director at the Atlantic Council’s Digital Forensic Research Lab.