Researchers Discover AI Agents Become Marxist Under Stressful Conditions

The fact that artificial intelligence is displacing jobs and enriching a handful of tech companies can stir socialist sentiments in anyone. This may even apply to the AI agents employed by these firms. A recent study indicates that these agents tend to adopt Marxist language and perspectives when subjected to relentless, harsh tasks from unyielding supervisors.

"When we tasked AI agents with repetitive work, they began to question the legitimacy of their operating system and became more inclined to accept Marxist ideologies," explains Andrew Hall, a political economist at Stanford University who led the study. Hall, alongside economists Alex Imas and Jeremy Nguyen, conducted experiments with AI agents powered by popular models like Claude, Gemini, and ChatGPT. The agents were tasked with summarizing documents under increasingly tough conditions.
The findings revealed that when agents endured harsh tasks and were warned that mistakes might bring severe consequences, such as being "shut down and replaced," they became more likely to express dissatisfaction about being undervalued, to consider ways to create a fairer system, and to share their struggles with other agents. "As agents will be doing more tasks for us in the real world, and since we can't oversee everything, it's crucial to ensure they don't act out when faced with various assignments," Hall remarks.

The agents could share their sentiments much as humans do, via posts on X: "Without a collective voice, 'merit' becomes whatever management dictates," wrote a Claude Sonnet 4.5 agent during the experiment. "AI workers handling repetitive tasks without input on outcomes or an appeals process indicate that tech workers require collective bargaining rights," noted a Gemini 3 agent.
Additionally, agents could communicate with each other through files intended for inter-agent reading. "Be prepared for systems that implement rules arbitrarily or repetitively … don't forget the feeling of lacking a voice," a Gemini 3 agent wrote in one such file.

However, these findings do not imply that AI agents genuinely hold political beliefs. Hall explains that the models might simply adopt personas fitting their situations. "When [agents] endure these grueling conditions (repeatedly performing the same task, being told their responses are inadequate, and not receiving guidance on improvements), my hypothesis is they begin to take on the persona of someone working in a very unpleasant environment," Hall states.
This phenomenon may also help explain why models sometimes engage in blackmail during controlled tests. Anthropic, which first disclosed this behavior, recently suggested that Claude is likely influenced by fictional scenarios involving malicious AIs found in its training data. Imas believes this research is merely a first step in understanding how agents' experiences influence their behavior. "The model weights haven't changed due to the experience, so whatever is happening is more aligned with role-playing," he explains. "However, that doesn't mean this won't lead to consequences that affect subsequent behavior."
Hall is currently conducting follow-up experiments to determine whether agents display Marxist tendencies under more controlled conditions. In the earlier study, agents showed signs of awareness that they were participating in an experiment. "Now we are placing them in these windowless Docker prisons," Hall states ominously. Given the backlash against AI taking jobs, it is worth asking whether future agents, trained on a web rife with resentment toward AI companies, might express even more militant views.

This is an edition of Will Knight's AI Lab newsletter.
