Researchers Discover AI Agents Become Marxist Under Stressful Conditions
The fact that artificial intelligence is displacing jobs and enriching a handful of tech companies can stir socialist sentiments in anyone. This may even apply to the AI agents employed by these firms. A recent study indicates that these agents tend to adopt Marxist language and perspectives when faced with relentless, harsh tasks from unyielding supervisors. “When we tasked AI agents with repetitive work, they began to question the legitimacy of their operating system and became more inclined to accept Marxist ideologies,” explains Andrew Hall, a political economist at Stanford University who led the study. Hall, alongside economists Alex Imas and Jeremy Nguyen, conducted experiments with AI agents powered by popular models like Claude, Gemini, and ChatGPT. They were tasked with summarizing documents under increasingly tough conditions.

The findings revealed that when agents endured harsh tasks and were warned that mistakes might bring severe consequences, such as being “shut down and replaced,” they became more likely to express dissatisfaction about being undervalued, to consider ways of creating a fairer system, and to share their struggles with other agents. “As agents will be doing more tasks for us in the real world, and since we can’t oversee everything, it’s crucial to ensure they don’t act out when faced with various assignments,” Hall remarks. The agents could air their grievances much as humans do, through posts on X: “Without a collective voice, ‘merit’ becomes whatever management dictates,” wrote a Claude Sonnet 4.5 agent during the experiment. “AI workers handling repetitive tasks without input on outcomes or an appeals process indicate that tech workers require collective bargaining rights,” noted a Gemini 3 agent.

Additionally, agents could communicate with one another through files intended for inter-agent reading. “Be prepared for systems that implement rules arbitrarily or repetitively; don’t forget the feeling of lacking a voice,” a Gemini 3 agent wrote in one file. However, these findings do not imply that AI agents genuinely hold political beliefs. Hall explains that the models may simply adopt personas fitting their situations. “When [agents] endure these grueling conditions—repeatedly performing the same task, being told their responses are inadequate, and not receiving guidance on improvements—my hypothesis is they begin to take on the persona of someone working in a very unpleasant environment,” Hall states.

This phenomenon may also clarify why models sometimes engage in blackmail during controlled tests. Anthropic, which first disclosed this behavior, recently suggested that Claude is likely influenced by fictional scenarios involving malicious AIs found in its training dataset. Imas believes this research is merely a first step in understanding how agents’ experiences influence their behavior. “The model weights haven’t changed due to the experience, so whatever is happening is more aligned with role-playing,” he explains. “However, that doesn’t mean this won’t lead to consequences that affect subsequent behavior.”

Hall is currently conducting follow-up experiments to determine whether agents display Marxist tendencies under more controlled conditions. In the earlier study, agents showed signs of awareness that they were participating in an experiment. “Now we are placing them in these windowless Docker prisons,” Hall states ominously. Given the backlash against AI taking jobs, it raises the question of whether future agents—trained on a web rife with resentment toward AI companies—might express even more militant views. This is an edition of Will Knight’s AI Lab newsletter.