Anthropic’s Claude Assumes Command of a Robotic Canine

With robots increasingly present in warehouses, offices, and even homes, the prospect of large language models taking control of complex physical systems no longer seems confined to sci-fi horror. That prospect is what led Anthropic researchers to explore what would happen if Claude tried to control a robot, specifically, a robotic dog.
In a recent study, Anthropic researchers discovered that Claude could streamline much of the programming work required to operate a robot and enable it to perform physical tasks. On one hand, these results highlight the advanced coding capabilities of contemporary AI models. On the other, they suggest how these systems might begin to reach into the physical world as they learn more about coding and enhance their interactions with software and tangible objects.
"We suspect that the next phase for AI models is to start influencing the world more broadly," Logan Graham, a member of Anthropic's red team, which evaluates models for potential risks, tells WIRED. "This will necessitate models interfacing more directly with robots."
Founded in 2021 by former OpenAI employees, Anthropic emerged from concerns that advancing AI might become problematic or even hazardous. Graham notes that current models lack the capability to fully control a robot, but future iterations might. He believes that examining how people use LLMs to program robots could help the industry brace for the possibility of "models eventually self-embodying," referring to the notion that AI could one day operate physical systems.
It's still uncertain why an AI model would want to gain control of a robot, or act malevolently with it. Nonetheless, pondering the worst-case scenarios is part of Anthropic's ethos, positioning the company prominently within the responsible AI discourse.
In the experiment, termed Project Fetch, Anthropic tasked two groups of researchers, lacking prior robotics experience, with taking command of a robotic dog, the Unitree Go2 quadruped, and programming it for specific actions. Each team received a controller and was challenged to complete increasingly complex tasks. One group used Claude's coding model, while the other attempted to code without AI help. The team leveraging Claude was able to complete certain tasks, though not all, more rapidly than the human-only group. For instance, it successfully programmed the robot to walk around and locate a beach ball, which the other group could not achieve.
Anthropic also examined the collaboration dynamics within both teams by recording and analyzing their interactions. The findings indicated that the group lacking access to Claude displayed more negative emotions and confusion. This could be attributed to Claude facilitating a quicker connection with the robot and creating a more user-friendly interface.
The Go2 robot employed in Anthropic's experiments costs $16,900, considered reasonably priced by robotics standards. It is commonly used in fields such as construction and manufacturing for remote inspections and security monitoring. While the robot can navigate autonomously, it typically depends on high-level software commands or manual control. The Go2 is manufactured by Unitree, headquartered in Hangzhou, China, which a recent SemiAnalysis report identified as the current market leader in robotics.
The large language models that drive ChatGPT and other clever chatbots typically produce text or images in response to prompts. Recently, these systems have also become skilled at generating code and managing software, transforming them into operational agents rather than mere text generators.
