AI-Driven Misinformation Campaigns Threaten Democracy

“We are entering a new era of informational warfare on social media, where advancements in technology have rendered traditional bot strategies obsolete,” states Jonas Kunst, a professor of communication at BI Norwegian Business School and one of the report’s co-authors.
For those who have dedicated years to monitoring and counteracting disinformation efforts, the findings in this paper paint a grim picture of the future.
“What if AI wasn’t merely fabricating information, but instead thousands of AI chatbots collaborated to create an illusion of grassroots support that didn’t exist? This is the future outlined in this paper—Russian troll farms taken to an extreme,” remarks Nina Jankowicz, who served as the disinformation czar under the Biden administration and is now CEO of the American Sunlight Project.
The researchers say they cannot be sure whether this strategy is already in use, because the existing systems built to monitor and identify coordinated inauthentic behavior are not equipped to detect such swarms.
“Due to their deceptive ability to imitate human behavior, it’s challenging to properly identify them or assess their prevalence,” Kunst explains. “Access to most social media platforms is increasingly restricted, making insights difficult. Technically, it’s definitely feasible. We are fairly confident it’s being tested.”
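To see why detection is hard, consider how a classic coordination detector works: it flags accounts that act in near-lockstep, for example by posting the same link within seconds of one another. The minimal Python sketch below captures that heuristic; the account names, URLs, and time window are all invented for illustration and reflect the general technique, not any specific platform's system.

```python
from collections import defaultdict
from itertools import combinations

# Toy posting events: (account, shared_url, unix_timestamp).
events = [
    ("acct_a", "example.com/story", 1700000000),
    ("acct_b", "example.com/story", 1700000004),
    ("acct_c", "example.com/story", 1700000007),
    ("acct_d", "example.com/other", 1700005000),
]

WINDOW = 30  # seconds; the same URL posted this close together looks coordinated

def coordinated_pairs(events, window=WINDOW):
    """Count how often each pair of accounts posts the same URL within `window` seconds."""
    by_url = defaultdict(list)
    for account, url, ts in events:
        by_url[url].append((account, ts))
    pair_counts = defaultdict(int)
    for posts in by_url.values():
        for (a1, t1), (a2, t2) in combinations(posts, 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                pair_counts[tuple(sorted((a1, a2)))] += 1
    return dict(pair_counts)

print(coordinated_pairs(events))
# {('acct_a', 'acct_b'): 1, ('acct_a', 'acct_c'): 1, ('acct_b', 'acct_c'): 1}
```

A swarm of LLM-driven accounts that paraphrases its content, staggers its timing, and rotates which accounts participate leaves no such lockstep signature, which is the evasion problem Kunst describes.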
Kunst adds that these systems likely still involve some degree of human oversight in their development. He expects that while they may not significantly influence the 2026 US midterms in November, they will likely be deployed to disrupt the 2028 presidential election.
Accounts that appear indistinguishable from actual humans on social media are just one concern. The capability to analyze social networks on a large scale, the researchers assert, will allow those orchestrating disinformation campaigns to target specific communities, maximizing their impact.
“Armed with such capabilities, swarms can position for optimal influence and customize messages to resonate with the beliefs and cultural nuances of each community, allowing for more precise targeting than prior botnets,” they write.
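The kind of network analysis they have in mind does not require exotic tooling. As a rough sketch, and not the paper's method, off-the-shelf community detection plus a centrality ranking is enough to tell an operator which clusters a network contains and which accounts sit at their hubs; the graph below is a standard toy example:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# A standard toy social graph, standing in for scraped platform data.
G = nx.karate_club_graph()

# Partition the graph into communities, then rank each community's members by
# degree centrality: the hubs are the natural entry points for tailored messages.
centrality = nx.degree_centrality(G)
for i, community in enumerate(greedy_modularity_communities(G)):
    hubs = sorted(community, key=centrality.get, reverse=True)[:3]
    print(f"community {i}: {len(community)} members, top hubs {hubs}")
```

Swap the toy graph for a scraped follower network and the same few lines surface the communities, and the influential accounts within them, that tailored messaging would target.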
These systems could, in effect, improve themselves, using feedback from responses to refine their messaging. “With enough signals, they could conduct millions of micro A/B tests, amplifying the most effective versions at machine speed and iterating far more swiftly than humans can,” the researchers note.
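Mechanically, that feedback loop is the same multi-armed-bandit machinery behind routine marketing experiments. In the epsilon-greedy sketch below, the message variants and their engagement rates are invented; the point is only how quickly traffic concentrates on whatever works:

```python
import random

# Hypothetical message variants with engagement rates the system cannot see directly.
variants = ["version_a", "version_b", "version_c"]
true_rates = {"version_a": 0.02, "version_b": 0.05, "version_c": 0.03}

shown = {v: 0 for v in variants}
engaged = {v: 0 for v in variants}
EPSILON = 0.1  # fraction of posts spent exploring the weaker variants

def pick_variant():
    """Mostly repost the best performer so far; occasionally explore."""
    if random.random() < EPSILON or not any(shown.values()):
        return random.choice(variants)
    return max(variants, key=lambda v: engaged[v] / shown[v] if shown[v] else 0.0)

for _ in range(100_000):  # each iteration: one post plus its observed feedback
    v = pick_variant()
    shown[v] += 1
    engaged[v] += random.random() < true_rates[v]

print({v: shown[v] for v in variants})  # traffic concentrates on version_b
```

Run thousands of such loops in parallel, one per community and message theme, and the result is the millions of micro A/B tests the researchers warn about.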
To address the threat posed by AI swarms, the researchers recommend creating an “AI Influence Observatory” made up of individuals from academic institutions and NGOs, aimed at “standardizing evidence, enhancing situational awareness, and enabling quicker collective responses rather than enforcing top-down reputational penalties.”
Notably absent from this group are executives from social media companies, as the researchers believe that these organizations prioritize engagement above all else, leaving them little incentive to identify such swarms.
“Imagine if AI swarms become so prevalent that people lose trust and start leaving the platform,” Kunst says. “That directly threatens the business model. If engagement increases, platforms might see it as more positive not to expose this issue, since it appears there’s more interaction, leading to more ad views, which could boost valuations.”
In addition to the lack of action from platforms, experts contend that there is minimal motivation for governments to intervene. “The current geopolitical climate may not favor the establishment of ‘Observatories’ that essentially monitor online conversations,” Olejnik observes, a sentiment echoed by Jankowicz: “What’s most concerning about this potential future is the scant political will to tackle the harmful effects of AI, meaning [AI swarms] could soon become a reality.”
