AI Models Can Experience Cognitive Decline as Well

AI models may resemble humans more than we think.

A recent study from the University of Texas at Austin, Texas A&M, and Purdue University indicates that large language models exposed to popular yet low-quality social media content experience a form of “brain rot” similar to what many feel after excessive doomscrolling on platforms like X or TikTok.

“In an era where information proliferates faster than attention spans, and much of it is designed to attract clicks rather than convey truth or depth, we asked ourselves: What happens when AIs are trained on such material?” says Junyuan Hong, an incoming assistant professor at the National University of Singapore who contributed to the study while a graduate student at UT Austin.

Hong and his colleagues fed different kinds of text to two open-source large language models during pretraining. They examined what happened when the models ingested a mix of highly “engaging,” or widely circulated, social media posts and posts built around sensational, hyped phrases such as “wow,” “look,” or “today only.”

The researchers then used several benchmarks to assess how this “junk” social media diet affected the two open-source models, Meta’s Llama and Alibaba’s Qwen.
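
To make the setup concrete, here is a minimal, purely illustrative Python sketch of how a “junk” versus control split of a social media corpus might look. The post fields, the engagement cutoff, and the helper names are hypothetical; only the hyped phrases come from the article, and the paper’s actual datasets, thresholds, and pipeline differ.

```python
# Illustrative sketch only: splitting a social media corpus into a "junk"
# pretraining mix and a control mix, loosely following the article's
# description of engagement-based and clickbait-based selection.
from dataclasses import dataclass

CLICKBAIT_PHRASES = ("wow", "look", "today only")  # hyped phrases quoted in the article
ENGAGEMENT_THRESHOLD = 500  # assumed cutoff for "highly engaging" posts (hypothetical)

@dataclass
class Post:
    text: str
    likes: int
    reposts: int

def is_junk(post: Post) -> bool:
    """Flag a post as 'junk' if it is widely circulated or uses hyped phrasing."""
    engaging = (post.likes + post.reposts) >= ENGAGEMENT_THRESHOLD
    clickbait = any(phrase in post.text.lower() for phrase in CLICKBAIT_PHRASES)
    return engaging or clickbait

def split_corpus(posts: list[Post]) -> tuple[list[str], list[str]]:
    """Return (junk_texts, control_texts) for two separate pretraining mixes."""
    junk = [p.text for p in posts if is_junk(p)]
    control = [p.text for p in posts if not is_junk(p)]
    return junk, control

if __name__ == "__main__":
    sample = [
        Post("Wow, look at this. Today only!", likes=1200, reposts=300),
        Post("A long-form thread explaining transformer attention.", likes=40, reposts=5),
    ]
    junk, control = split_corpus(sample)
    print(len(junk), "junk posts;", len(control), "control posts")
```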

The models fed junk text showed signs of AI brain rot: cognitive decline that included weaker reasoning and degraded memory. On two evaluation measures, the models also became less ethically aligned and more psychopathic.

These findings mirror research on humans showing that low-quality online content degrades people’s cognitive abilities. The phenomenon is widespread enough that “brain rot” was named Oxford University Press’s word of the year for 2024.

These results matter for the AI industry, Hong says, because model builders might assume that social media posts are a reasonable source of training data. “Training on viral or attention-grabbing content may appear to enhance data volume,” he notes. “However, it can subtly undermine reasoning, ethics, and long-term attention.”

The fact that LLMs can suffer brain rot is especially concerning given that AI now generates a growing share of social media content, much of it seemingly optimized for engagement. The researchers also found that models degraded by low-quality content could not easily be rehabilitated through later retraining.

These findings further imply that AI systems developed around social platforms, such as Grok, might face quality control challenges if user-generated posts are utilized in training without regard for the integrity of the content.

“As more AI-generated low-quality content circulates on social media, it taints the very data from which future models will learn,” Hong remarks. “Our findings indicate that once this ‘brain rot’ takes hold, subsequent clean training cannot completely reverse it.”


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.
