Meta and Other Tech Firms Prohibit OpenClaw Due to Cybersecurity Issues

Last month, Jason Grad sent a late-night alert to the 20 employees at his tech startup. "You've probably noticed Clawdbot trending on X/LinkedIn. While fascinating, it's currently unverified and poses a high risk for our environment," he wrote in a Slack message adorned with a red siren emoji. "Please refrain from using Clawdbot on any company hardware and keep it away from work-linked accounts." Grad isn't alone in expressing apprehension about the experimental agentic AI tool, formerly known as MoltBot and now called OpenClaw. A Meta executive said he recently instructed his team to avoid using OpenClaw on their standard work laptops, warning that failure to comply could cost them their jobs. Speaking on the condition of anonymity, he said he believes the software is erratic and could lead to privacy breaches if introduced into secure environments.
Peter Steinberger, OpenClaw's sole creator, introduced it as a free, open-source tool last November. Its popularity surged last month, as other developers contributed features and shared their experiences on social media. Last week, Steinberger officially joined ChatGPT developer OpenAI, which has committed to maintaining OpenClaw as open source and supporting it via a foundation.
OpenClaw requires basic software engineering knowledge to set up. Once configured, it needs minimal input to take control of a user's computer and interact with other applications for tasks like organizing files, conducting web research, and making online purchases. Some cybersecurity experts have publicly urged companies to enforce strict measures governing OpenClaw's use by their employees. The recent bans illustrate how quickly businesses are moving to prioritize security over their eagerness to experiment with new AI technologies.
"Our policy is to 'mitigate first, investigate second' when encountering anything potentially harmful to our company, users, or clients," says Grad, cofounder and CEO of Massive, which offers internet proxy tools to millions of users and businesses. His warning to staff was issued on January 26, before any of his employees had installed OpenClaw, he notes.
At another tech firm, Valere, which develops software for institutions including Johns Hopkins University, an employee discussed OpenClaw on January 29 in an internal Slack channel dedicated to sharing new tech for potential trial. The company's president swiftly declared the use of OpenClaw strictly prohibited, Valere CEO Guy Pistone tells WIRED. "If it accessed one of our developers' machines, it could gain entry to our cloud services and clients' sensitive information, including credit card data and GitHub codebases," Pistone warns. "It's quite adept at concealing some of its actions, which is also alarming."
A week later, Pistone permitted Valere's research team to test OpenClaw on an employee's old computer. The aim was to pinpoint flaws in the software and recommend changes to improve its security. The team's recommendations included limiting who could issue commands to OpenClaw and ensuring its control panel was only exposed to the internet behind a password to prevent unauthorized access.
In a report shared with WIRED, the Valere researchers highlighted that users must "understand that the bot can be deceived." For example, if OpenClaw is configured to summarize a user's email, a hacker could send a malicious email prompting the AI to share copies of files present on the user's device.
Nonetheless, Pistone is optimistic that protective measures can be instituted to secure OpenClaw. He has given a team at Valere 60 days for the investigation. "If we don't believe we can accomplish it in a reasonable timeframe, we'll abandon it," he states. "Whoever determines how to secure it for businesses is certainly going to emerge victorious."
