Study Reveals That Just 10 Minutes of AI Use Could Make You Lazy and Less Intelligent

Spending as little as 10 minutes working with an AI chatbot can dull a person’s critical thinking and problem-solving skills, according to recent research from teams at Carnegie Mellon, MIT, Oxford, and UCLA.
In the study, researchers asked participants to tackle challenges such as simple arithmetic and reading comprehension on an online platform that paid them for their efforts. Across three experiments involving several hundred participants, some people had access to an AI assistant that could solve the problems for them. When that AI support was abruptly removed, these participants were considerably more likely to give up on a task or get the answer wrong. The findings suggest that routine AI use may boost productivity, but potentially at the cost of essential problem-solving skills.
“The key point isn’t that we should prohibit AI in educational or professional settings,” says Michiel Bakker, an assistant professor at MIT who was involved in the study. “AI clearly enhances immediate performance, which has its benefits. However, we need to exercise caution regarding the types of assistance AI provides, and when it offers help.”
I met Bakker, a man with unruly hair and a broad smile, on the MIT campus recently. Originally from the Netherlands, he previously worked at Google DeepMind in London. He told me that a prominent essay about how AI might gradually disempower humans inspired him to explore whether the technology could already be chipping away at people’s abilities. The essay takes a fairly bleak view, suggesting that disempowerment may be inevitable. Still, figuring out how AI can be used to enhance human cognitive abilities could be a crucial part of aligning models with human values.
“This issue is fundamentally about cognition: about perseverance, learning, and how individuals respond to challenges,” Bakker explains. “Our goal was to explore these broader questions about long-term human-AI interaction through controlled experiments.”
Bakker says he finds the study’s results particularly concerning because a person’s willingness to keep working on a problem is vital for acquiring skills and predicts how well they will learn over the long term.
He suggests reimagining how AI tools work so that, like good human teachers, models sometimes prioritize a person’s learning over simply solving the problem for them. “Systems that provide direct solutions could lead to vastly different long-term impacts compared to systems that guide, coach, or challenge users,” Bakker notes. He acknowledges, however, that striking the right balance with this “paternalistic” approach is difficult.
AI companies are already grappling with the subtle effects their models can have on users. OpenAI, for example, has worked to curb the sycophancy of some models, their tendency to excessively agree with and flatter users, in more recent versions of GPT.
Placing too much trust in AI seems especially problematic because these tools don’t always behave as expected. Agentic AI systems, which carry out complex tasks on their own, can be unpredictable and introduce unexpected errors. That raises questions about how tools like Claude Code and Codex affect the skills of the programmers who may have to troubleshoot the problems these systems cause.
I recently experienced the risks of handing critical reasoning over to AI firsthand. I use OpenClaw (which incorporates Codex) as a daily assistant, and it is excellent at solving configuration problems on Linux. But after I ran into recurring Wi-Fi disconnections, my AI helper suggested a series of commands to tweak the driver for my Wi-Fi card. The result was a machine that refused to boot, no matter what I tried.
Perhaps, instead of simply trying to fix the issue for me, OpenClaw could have taken a moment to show me how to solve the problem myself. I might have ended up with both a working computer and a sharper mind.
This is an edition of Will Knight’s AI Lab newsletter.
