With its latest update, Amazon-backed Anthropic’s Claude AI tool can control your computer. The idea is for Claude to “use computers the way people do,” but some AI and security experts warn that this could facilitate cybercrime or affect user privacy.
The feature, called “computer usage”, means Claude can autonomously perform tasks on your computer by moving the cursor, opening web pages, typing text, downloading files and completing other activities. It was initially released to developers via the Claude API and is included in the Claude 3.5 Sonnet beta, but may be added to more models in the future. Anthropic cautions that this new feature may be buggy or make mistakes, however, as it’s still in its early stages.
Anthropic says companies like Asana, Canva and DoorDash are already testing this new feature, asking Claude to complete jobs that normally take “dozens, and sometimes hundreds, of steps to complete.” This could mean a more automated US economy as workers automate tasks at work, helping them meet deadlines or get more done. But it could also lead to fewer jobs if more projects are shipped faster.
Claude may refuse to perform certain tasks that may fully automate your social media and email accounts. One coder, however, claims he has been able to create a “wrapper” that bypasses those limitations.
“I’m breaking out in a sweat thinking how cybercriminals can use this tool.”
From a security perspective, Jonas Kgomo, founder of AI security group Equianas Institute, called using Claude’s computer “untested AI security territory” and stressed that cyberattacks are entirely possible with the new tool.
Parrot AI founder Paul Morville tells PCMag in a message that while Anthropic’s advice to only use the new feature when you can supervise it is wise, “there is a huge potential for intentional and unintentional security issues” and one day can be used to help hackers. deploy autonomous remote access trojans (AI RATs).
Rachel Tobac, self-described hacker and CEO of cybersecurity firm SocialProof Security, said she is “breaking out in a sweat thinking how cybercriminals can use this tool.”
“This easily automates the task of getting a machine to go to a website and download malware or provide secrets, which can scale attacks (more hacked machines in a shorter period of time),” Tobac wrote of on tuesday. “I’m also imagining that websites may have malicious signals visible to the AI tool that hijack the required AI task!”
Tobac listed a number of possible scenarios where Claude’s use of the computer could go wrong. This could result in less human accountability and oversight, meaning people may be able to claim they are not responsible for AI actions if it leads to a cyber attack or causes a data breach, for example . Attackers can also design websites knowing the tool exists and inject malicious code or requests that can bypass the AI and force it to download malicious files or execute an attack.
“I’m crossing my fingers that Anthropic has massive handrails,” adds Tobac. “This is serious stuff.”
But as Datasette creator Simon Willison points out, Anthropic is warning users that it has no such guardrails because it can’t stop AI from hijacking in certain situations.
Recommended by our Editors
This tweet is currently unavailable. It may be loading or it has been removed.
“Our Trust and Security teams have conducted extensive analysis of our new computing usage patterns to identify potential vulnerabilities,” Anthropic wrote in a post. “One concern they have identified is ‘instant injection’—a type of cyberattack where malicious instructions are given to an AI model, causing it to either override its previous instructions or perform unintended actions that deviate from the intent original user. Since Claude can interpret screenshots from computers connected to the Internet, it is possible that he is exposed to content that includes instant injection attacks.”
Anthropic justifies the release of the feature, however, maintaining that such a tool is inevitable. He argues that it’s better to release it now while the AI models aren’t as powerful as they ultimately could be rather than later, in a hypothetical future.
“When future models require level 3 or 4 AI security safeguards because they pose catastrophic risks, the use of computing can exacerbate those risks,” he said. “We judge that computer usage is likely to be introduced now, while the models still only need AI security level 2 protection. This means we can start to tackle any security issues before the stocks are very high, rather than adding computing skills for the first time to a model with far more serious risks.”
Will Ledesma, a senior director at cybersecurity firm Adlumin, tells PCMag in a message that Claude’s computer usage is cause for concern given Anthropic’s usage guidelines and how Claude may store or share data. sensitive. “Recommending a virtual machine means they’re already worried about what it can do. [But] Exploding VMs or even containers to access root systems has not been impossible,” said Ledesma.
“Also, there is a concern about where they are storing this sensitive data, such as screenshots [as] they stated that they would hand over the screenshots if required by law. This can be weaponized,” Ledesma continued. “For example, if a bad guy gains access to this software, they can use it to monitor individuals. The trade-off here is ease of use against privacy. Many are willing to give up their privacy in order to do something ‘easier’, but that’s the risk. Law enforcement can also abuse this if they use it against an endpoint they have legal rights to monitor.”
Like what you’re reading?
Register for Security Watch newsletter for our best privacy and security stories delivered straight to your inbox.
This newsletter may contain advertisements, deals or affiliate links. Subscribing to a newsletter indicates your consent to our Terms of Use and Privacy Policy. You can unsubscribe from newsletters at any time.