Meta, other tech firms put restrictions on use of OpenClaw over security fears

Security experts have cautioned users to be wary of the agentic AI tool OpenClaw, which they say is wildly unpredictable. OpenClaw, formerly known as Clawdbot and then Moltbot, is an autonomous, free and open-source artificial intelligence (AI) tool created by the software developer Peter Steinberger. The tool is said to be able to execute tasks via large language models, using messaging platforms as its main user interface, but it still lacks the necessary safeguards for accuracy and safety. Jason Grad, co-founder and CEO of the browsing-infrastructure company Massive, last month warned employees at his tech startup to keep the AI agent away from the company’s hardware.

“You’ve likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment. Please keep Clawdbot off all company hardware and away from work-linked accounts,” Grad wrote in a Slack message with a red siren emoji.

Similarly, other tech executives have raised concerns with staff about the experimental agentic AI tool. A Meta executive said he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive, who spoke on the condition of anonymity, told reporters he believed the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments.

OpenClaw, launched as a free, open-source tool last November by Steinberger, saw its popularity surge last month as other coders contributed features and began sharing their experiences using it on social media. Last week, Steinberger joined ChatGPT developer OpenAI, which says it will keep the agentic tool open source and support it through a foundation.

OpenClaw requires basic software engineering knowledge to set up. After that, it needs only limited direction to take control of a user’s computer and interact with other apps, assisting with tasks such as organizing files, conducting web research, and shopping online. Some cybersecurity professionals have publicly urged companies to strictly control how their workforces use OpenClaw.

The recent bans show how quickly companies are moving to prioritize security over their desire to experiment with emerging AI technologies. “Our policy is, ‘mitigate first, investigate second’ when we come across anything that could be harmful to our company, users, or clients,” Grad said.
