AI-Generated Content is Creating a False Sense of Online Joy

For anyone with a heartbeat and a smartphone, it’s clear that the internet faces a significant AI content problem. That problem has intensified since the launch of ChatGPT in 2022, leading to an influx of AI-generated text across social platforms. Now, there’s data to support these observations.

A recent preprint study from researchers at Imperial College London, Stanford University, and the Internet Archive indicates that around 35 percent of new websites are either AI-generated or AI-assisted. The study also discovered that online writing is becoming “increasingly sanitized and artificially cheerful.” Essentially, AI is contributing to a façade of happiness on the internet.

The research team employed four different methods for detecting AI content before opting for tools from Pangram Labs, which provided the most reliable outcomes. Although the tool performed well, it’s important to highlight that no AI detection system is flawless. To gather a representative sample of websites, the team utilized the Internet Archive’s Wayback Machine, which archives webpage snapshots. This study not only quantified the reliance on AI-generated text among sites created between 2022 and 2025 but also explored six different theories regarding the traits of this content.

One test that examined artificial cheerfulness focused on how AI influences the tone of online writing. By using sentiment analysis to categorize words as positive, neutral, or negative, it revealed that “the average positive sentiment score of AI-generated or AI-assisted content was 107 percent higher than that of non-AI websites.” The researchers interpret this increase in artificial positivity as a “symptom” of the “sycophantic and overly optimistic disposition of current LLMs.” Thus, AI writing tools’ tendency to cater to their human users impacts the overall tone of online content, making it seem overly sweet.
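The article doesn’t describe the study’s actual sentiment pipeline, but the basic idea of word-level sentiment scoring can be sketched with a toy lexicon-based approach (the word lists and scoring formula below are illustrative assumptions, not the researchers’ method):

```python
# Toy lexicon-based sentiment scorer -- a simplified sketch only.
# The study's real sentiment-analysis tooling is not specified in the article.
POSITIVE = {"great", "amazing", "wonderful", "happy", "delightful", "love"}
NEGATIVE = {"bad", "terrible", "awful", "sad", "hate", "poor"}

def sentiment_score(text: str) -> float:
    """Return (positive - negative) as a fraction of sentiment-bearing
    words, giving a score in [-1, 1]; 0.0 means neutral."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    if total == 0:
        return 0.0  # no sentiment-bearing words found
    return (pos - neg) / total
```

With a scorer like this, comparing the average score of AI-flagged pages against human-written pages would surface exactly the kind of gap the study reports.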

Another investigation assessed whether the surge in AI-generated writing limits “the range of unique ideas and diverse viewpoints” available. The researchers concluded that AI does diminish the ideological diversity of the internet, with AI-generated sites scoring about 33 percent higher on tests measuring “semantic similarity” compared to human-created sites.
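The article doesn’t say how the researchers computed semantic similarity; one standard way to measure it is cosine similarity between text vectors. Below is a minimal bag-of-words version (real studies typically use learned embeddings rather than raw word counts):

```python
# Cosine similarity over bag-of-words vectors -- a hedged sketch of how
# "semantic similarity" between two pages might be scored. The study's
# actual representation (likely embeddings) is not given in the article.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Return cosine similarity in [0, 1] between two texts' word counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)          # overlap of shared words
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Averaging pairwise scores like this across a corpus of AI-flagged sites versus human-written sites would yield the kind of aggregate comparison the study describes: higher average similarity means a narrower range of ideas.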

While these two tests supported the researchers’ suspicions about AI, the remaining four hypotheses did not hold up. Notably, the team had anticipated that AI would drive an increase in misinformation, but the data did not support this. They also expected AI writing to link to external sources less often and to exhibit a more generic style than human writing; unexpectedly, neither assumption was substantiated either.

The analysis revealed that while the ideas expressed in AI writing were more uniform and, specifically, more positively slanted, the writing style itself was not confirmed to be less varied. This outcome surprised the researchers, who had anticipated a noticeable trend toward blandness. “Everyone on the team expected that to be true,” says Stanford researcher Maty Bohacek. “But we just don’t have significant evidence for that.”

Before running their analysis, the research team surveyed public perceptions of AI. Comparing the survey responses with their findings, they discovered they were not alone in having their expectations challenged: many commonly held beliefs about AI writing turn out to be incorrect.

Similar to the researchers, most surveyed individuals also presumed there would be a surge in fake news as AI-generated websites became more prevalent. A significant majority of respondents believed that AI writing would stop linking to external content and that it would adopt an increasingly uniform voice. “It’s interesting to see that people tended to expect the worst outcomes,” notes Bohacek.

This study is just the beginning in understanding AI’s impact on the internet. “We just wanted to lay the groundwork,” says Bohacek, who sees this as a launching point for further research. As a snapshot of AI content’s effects, it provides a uniquely human perspective: Sometimes, predicting outcomes can be quite challenging.
