A recent Anthropic research paper claims that Chinese state-sponsored hackers used AI tools to carry out cyberattacks with roughly 90% autonomy. Anthropic says at least 30 organizations were targeted in the campaign, though only a small number of the attacks succeeded. The original study, appearing here and here, concluded that the hackers used Anthropic's Claude and that only minimal human input was required for successful exploitation.

However, independent analysts say the report is exaggerated and blurs the line between AI assistance and human orchestration. Critics argue that the scenarios presented do not reflect real-world attack complexity, where context, adaptation, and access barriers make full automation far less plausible. Researchers from organizations such as Trail of Bits and Stanford point out that AI tools currently struggle with sustained, context-rich decision-making, especially when facing adaptive defense mechanisms or interpreting incomplete system signals. The "90% autonomous" figure, they say, lacks meaningful definition without clarity on what constitutes human oversight versus decision-making.

Anthropic, for its part, has acknowledged the feedback and maintains that the study aimed to explore emerging risks, not to assert full autonomy as a present-day capability.