I Urge AI Companies to Avoid Naming Features After Human Processes

Anthropic has unveiled a new feature called “dreaming” at its developer conference in San Francisco. This feature is part of Anthropic’s newly launched AI agent framework, aimed at assisting users in managing and deploying tools that automate software tasks. The “dreaming” function analyzes the transcripts of recent activities performed by an agent to extract insights for enhancing its performance.

Users often assign AI agents complex tasks, such as navigating multiple websites or reviewing several documents. The "dreaming" feature lets agents identify patterns in their activity logs and use those insights to refine their skills.

The name of the feature brings to mind Philip K. Dick’s influential sci-fi novel, Do Androids Dream of Electric Sheep?, which delves into the characteristics that distinguish humans from advanced machines. While today’s generative AI tools are far from the machines depicted in the novel, I firmly believe we should draw a line: no more generative AI features named after human cognitive functions.

“Memory and dreaming together create a powerful system for self-improving agents,” states Anthropic’s blog post introducing this research preview for developers. “Memory enables each agent to capture what it learns as it operates. Dreaming enhances that memory between sessions, synthesizing collective insights across agents and keeping them current.”

Courtesy of Claude

Since the chatbot boom began in 2022, AI companies have eagerly adopted terminology borrowed from human cognition to describe features of generative AI tools. OpenAI launched its first "reasoning" model in 2024, which required "thinking" time. At the time, the company characterized the release as "a new series of AI models designed to spend more time thinking before they respond." Many startups also describe their chatbots as possessing "memories" of the user. These "memories" are humanlike snippets of information rather than the raw data storage commonly associated with computers: he lives in San Francisco, enjoys afternoon baseball games, and dislikes cantaloupe.

AI leaders lean on this branding consistently, blurring the distinction between human actions and machine capabilities. Even the design of chatbots like Claude, which are given distinct "personalities," can leave users feeling they are interacting with entities capable of rich inner lives, perhaps even ones that dream when the laptop is closed.

At Anthropic, this anthropomorphism runs deeper than mere marketing strategies. “We also discuss Claude using terms typically reserved for humans (e.g., ‘virtue,’ ‘wisdom’),” a segment of Anthropic’s constitution states, outlining its expectations for Claude’s behavior. “This approach is taken because we anticipate Claude’s reasoning will inherently draw on human concepts, given the significance of human text in Claude’s training; further, we believe instilling certain humanlike traits in Claude may be beneficial.” The company even has a resident philosopher to help navigate the bot’s “values.”
