WASHINGTON >> Over the past 11 months, someone created thousands of fake, automated Twitter accounts — perhaps hundreds of thousands of them — to offer a stream of praise for Donald Trump.

Besides posting adoring words about the former president, the fake accounts ridiculed Trump’s critics from both parties and attacked Nikki Haley, the former South Carolina governor and U.N. ambassador who is challenging her onetime boss for the 2024 Republican presidential nomination.

When it came to Ron DeSantis, the bots aggressively suggested that the Florida governor couldn’t beat Trump, but would be a great running mate.

As Republican voters size up their candidates for 2024, whoever created the bot network is using online manipulation techniques pioneered by the Kremlin to sway the conversation about the candidates while exploiting Twitter’s algorithms to maximize their reach.

Spotting the bots

The sprawling bot network was uncovered by researchers at Cyabra, an Israeli tech firm. While the identity of those behind the network of fake accounts is unknown, Cyabra’s analysts determined that it was likely created within the U.S.

To identify a bot, researchers will look for patterns in an account’s profile, its follower list and the content it posts. Human users typically post about a variety of subjects, with a mix of original and reposted material, but bots often post repetitive content about the same topics.

That was true of many of the bots identified by Cyabra.

“One account will say, ‘Biden is trying to take our guns; Trump was the best,’ and another will say, ‘Jan. 6 was a lie and Trump was innocent,’” said Jules Gross, the Cyabra engineer who first discovered the network. “Those voices are not people. For the sake of democracy I want people to know this is happening.”
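In code, that heuristic is simple to sketch. The short Python example below scores how similar an account’s recent posts are to one another and flags the account if they look unusually repetitive; the tokenizer, the similarity measure and the 0.6 cutoff are illustrative assumptions, not Cyabra’s actual method or data.

```python
# Minimal sketch of the repetitiveness heuristic researchers describe: bots
# tend to repeat near-identical talking points, while humans mix varied topics.
# The tokenizer, similarity measure and 0.6 threshold are illustrative
# assumptions, not Cyabra's actual method.
import re
from itertools import combinations

def tokens(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two word sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a or b else 0.0

def repetitiveness(posts: list[str]) -> float:
    """Average pairwise similarity across an account's recent posts."""
    pairs = list(combinations(map(tokens, posts), 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

def looks_automated(posts: list[str], threshold: float = 0.6) -> bool:
    """Flag accounts whose posts are unusually similar to one another."""
    return repetitiveness(posts) >= threshold

# Made-up examples: a human-like mix of topics vs. recycled talking points.
human = ["Great hike up the canyon this weekend!",
         "Anyone else watching the game tonight?",
         "Trying a new pasta recipe, wish me luck."]
bot = ["Trump was the best president ever. Biden is trying to take our guns!",
       "Trump was the best president ever! Biden wants to take our guns.",
       "Biden is trying to take our guns. Trump was the best president ever."]

print(looks_automated(human))  # False: varied subjects, little word overlap
print(looks_automated(bot))    # True: the same talking points, lightly reworded
```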

Bots, as they are commonly called, are fake, automated accounts that gained notoriety after Russia employed them in an effort to meddle in the 2016 election.

The new pro-Trump network is actually three different networks of Twitter accounts, all created in huge batches in April, October and November 2022. In all, researchers believe hundreds of thousands of accounts could be involved.

The accounts each feature a name and a personal photo of the purported account holder. Some of the accounts posted their own content, often in reply to real users, while others reposted content from real users, helping to amplify it further.

One way of gauging the impact of bots is to measure the percentage of posts about any given topic generated by accounts that appear to be fake. The percentage for typical online debates is often in the low single digits. Twitter itself has said that less than 5% of its active daily users are fake or spam accounts.
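The arithmetic behind that measurement is straightforward: count how many posts about a topic come from accounts already flagged as likely fake, then divide by the total. The sketch below runs the calculation on made-up data; the account names and the 3% result are hypothetical, not Cyabra’s or Twitter’s figures.

```python
# A back-of-the-envelope version of the measurement above: what share of posts
# on a topic come from accounts already flagged as likely fake? The accounts,
# posts and flagged set below are made up for illustration.

def fake_share(posts: list[dict], flagged: set[str]) -> float:
    """Percentage of posts whose author is in the flagged-account set."""
    if not posts:
        return 0.0
    from_fakes = sum(1 for post in posts if post["author"] in flagged)
    return 100.0 * from_fakes / len(posts)

# Hypothetical sample: 100 posts about a candidate, 3 from flagged accounts.
posts = [{"author": f"user{i}", "text": "..."} for i in range(100)]
flagged = {"user4", "user27", "user61"}

print(f"{fake_share(posts, flagged):.0f}% of posts came from suspected fakes")  # 3%
```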

When Cyabra researchers examined negative posts about specific Trump critics, however, they found far higher levels of inauthenticity. Nearly three-fourths of the negative posts about Haley, for example, were traced back to fake accounts.

DeSantis as VP?

The network also helped popularize a call for DeSantis to join Trump as his vice presidential running mate — an outcome that would serve Trump well and allow him to avoid a potentially bitter matchup if DeSantis enters the race.

The same network of accounts shared overwhelmingly positive content about Trump and contributed to an overall false picture of his support online, researchers found.

A message left with a spokesman for Trump’s campaign was not returned.

Most bots aren’t designed to persuade people, but to amplify certain content so more people see it, according to Samuel Woolley, a professor and misinformation researcher at the University of Texas whose most recent book focuses on automated propaganda.

When a human user sees a hashtag or piece of content from a bot and reposts it, they’re doing the network’s job for it, and also sending a signal to Twitter’s algorithms to boost the spread of the content further.