How the Internet Undermined Our Ability to Spot Deception

Lego-style propaganda videos alleging war crimes are flooding online platforms, mirroring the White House's own shift toward cryptic teaser clips and meme-native visuals. This isn't mere content drift; it is a new front in the information war, where speed, ambiguity, and algorithmic reach matter as much as accuracy.
One Iran-linked outlet, Explosive News, can reportedly produce a two-minute synthetic Lego segment in roughly 24 hours. Speed is the point: synthetic media doesn't need to hold up indefinitely; it only needs to circulate before verification catches up.
Last month, the White House added to this confusion by posting two vague “launching soon” videos, which were later taken down after online investigators and open-source researchers began analyzing them.
The reveal was anticlimactic: a promotional push for the official White House app. But the episode showed how deeply official communication has absorbed the aesthetics of leaks, virality, and platform-native intrigue. When even official accounts court that ambiguity, the only defensive move left is to ask of every record whether it is authentic or synthetic.
Real vs. Synthetic: The New Friction
A zero digital footprint once signaled authenticity. Now it can mean the opposite: the absence of a trace no longer implies an original; it may mean the image was never captured by a camera at all. The signal has flipped. Engagement leads; truth trails behind.
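As a concrete sketch of what that footprint check looks like in practice, the snippet below compares a suspect image's perceptual hash against a local archive of previously indexed images. It assumes the Python libraries Pillow and ImageHash are available; the archive directory, filenames, and distance threshold are hypothetical placeholders, not details from any real investigation.

```python
# Minimal sketch of a "digital footprint" check: does this image resemble
# anything we have seen before? Uses perceptual hashing (pHash), so near
# duplicates (recompressed, resized, lightly cropped) still match.
# Assumes: pip install Pillow ImageHash. Paths and threshold are illustrative.
from pathlib import Path

from PIL import Image
import imagehash

ARCHIVE_DIR = Path("archive")  # hypothetical folder of previously indexed images
MAX_DISTANCE = 8               # Hamming-distance cutoff; lower = stricter match


def footprint_matches(candidate_path: str) -> list[tuple[str, int]]:
    """Return archive images whose perceptual hash is close to the candidate's."""
    target = imagehash.phash(Image.open(candidate_path))
    matches = []
    for prior in ARCHIVE_DIR.glob("*.jpg"):
        # Subtracting two ImageHash objects yields their Hamming distance.
        distance = target - imagehash.phash(Image.open(prior))
        if distance <= MAX_DISTANCE:
            matches.append((prior.name, distance))
    return sorted(matches, key=lambda m: m[1])


if __name__ == "__main__":
    hits = footprint_matches("suspect.jpg")  # hypothetical input file
    print(hits if hits else "No footprint found.")
```

The interpretive step is what has flipped: an empty result once argued for an original photograph, but it is now equally consistent with an image that has no prior life because a model produced it minutes ago.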
Automated traffic now accounts for an estimated 51 percent of internet activity and is growing eight times faster than human traffic, according to the 2026 State of AI Traffic & Cyberthreat Benchmark Report. These systems don't just disseminate content; they privilege low-quality virality, ensuring the synthetic record spreads while verification is still catching up.
Open source investigators are holding their ground, but they are fighting a war of volume. The rise of hyperactive “super sharers,” often backed by paid verification, adds a layer of false authority that traditional open source intelligence (OSINT) must now cut through.
“We’re always playing catch-up to those who press repost without a moment’s thought,” says Maryam Ishani, an OSINT journalist covering the conflict. “The algorithm favors that instinct, and our information is perpetually one step behind.”
At the same time, the boom in war-monitoring accounts is starting to interfere with reporting itself. Manisha Ganguly, visual forensics lead at The Guardian and an OSINT specialist investigating war crimes, points to the false certainty bred by the flood of aggregated content on Telegram and X.
“Open source verification starts to generate false certainty when it ceases to be an inquiry method—through confirmation bias, or when OSINT is used to superficially validate official accounts or is intentionally misapplied to sync with ideological narratives instead of scrutinizing them,” Ganguly explains.
Meanwhile, access to the verification toolkit itself is narrowing. On April 4, Planet Labs, one of the most trusted commercial satellite providers for conflict journalism, announced that at the request of the US government it would indefinitely withhold imagery of Iran and the wider Middle East conflict zone, retroactive to March 9.
US defense secretary Pete Hegseth’s response to concerns about the delay was clear: “Open source is not the place to determine what did or did not happen.”
This shift matters. With less access to primary visual evidence, the capacity to independently verify events shrinks. And in that narrowing gap, something else expands: generative AI doesn't just fill the silence; it works to shape what is seen in the first place.
Generative AI Is Getting Harder to Spot
Generative AI platforms have been learning from past mistakes. Henk van Ess, an investigative trainer and verification specialist, notes that many of the classic tells (incorrect finger counts, distorted protest signs, garbled text) have largely been fixed in the latest models. Tools like Imagen 3, Midjourney, and DALL·E have improved prompt comprehension, photorealism, and text-in-image rendering.
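To show how brittle those classic tells have become, here is a minimal sketch of a garbled-text check: run OCR over the image, then flag a high share of implausible tokens. It assumes Python with Pillow and pytesseract (which requires a local Tesseract install); the plausibility heuristic, filename, and threshold are deliberate simplifications for illustration, not a production detector.

```python
# Minimal sketch of one classic tell: garbled text-in-image.
# Assumes: pip install Pillow pytesseract, plus a local Tesseract binary.
# The "plausible word" heuristic and the 0.5 cutoff are illustrative only.
import re

from PIL import Image
import pytesseract


def gibberish_ratio(image_path: str) -> float:
    """Fraction of OCR'd tokens that don't look like plausible English words."""
    text = pytesseract.image_to_string(Image.open(image_path))
    tokens = text.split()
    if not tokens:
        return 0.0  # no readable text; the heuristic simply doesn't apply
    alphabetic = re.compile(r"^[A-Za-z]{2,}$")

    def looks_real(token: str) -> bool:
        # Crude proxy: purely alphabetic, two-plus letters, contains a vowel.
        return bool(alphabetic.match(token)) and any(v in token.lower() for v in "aeiou")

    junk = sum(1 for token in tokens if not looks_real(token))
    return junk / len(tokens)


if __name__ == "__main__":
    score = gibberish_ratio("protest_sign.jpg")  # hypothetical input file
    if score > 0.5:
        print(f"Suspicious text-in-image (gibberish ratio {score:.0%}).")
```

The point of the sketch is its obsolescence: a check like this flagged earlier model generations reliably, but as van Ess notes, current systems render signage cleanly enough to pass it.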
The harder problem, however, is what van Ess calls the hybrid.
