A recent Anthropic research paper claimed that Chinese state-sponsored hackers used AI tools to carry out cyberattacks with 90% autonomy. Anthropic says at least 30 organizations were targeted, though only a small number of the attacks succeeded. The original study, appearing here and here, concluded that the hackers used Anthropic's Claude to generate attack code and that only minimal human input was required for successful exploitation.

However, independent analysts say the report is exaggerated and blurs the line between AI assistance and human orchestration. Critics argue that the scenarios presented do not reflect real-world attack complexity, where context, adaptation, and access barriers make full automation far less plausible. Researchers from organizations such as Trail of Bits and Stanford pointed out that AI tools currently struggle with sustained, context-rich decision-making, especially when facing adaptive defense mechanisms or interpreting incomplete system signals. The "90% autonomous" figure, they say, lacks meaningful definition without clarity on what constitutes human oversight versus decision-making.

Anthropic, for its part, has acknowledged the feedback and maintains that the study aimed to explore emerging risks, not to assert full autonomy as a present-day capability.