The Trump administration raised concerns on Wednesday over a proposal by Anthropic to expand access to its powerful Mythos AI model to around 70 organisations. According to Bloomberg, the concerns stem from fears that the model could be misused for cyberattacks, and that giving more companies access may strain computing resources required by the US government.
What Are The Security Concerns Around Mythos AI?
Mythos AI was introduced earlier this month and described as a highly advanced system capable of identifying critical software vulnerabilities. That capability has alarmed banking officials and the tech industry, who fear the model could be used for hacking if it falls into the wrong hands.
As per the report, US officials have also questioned whether Dario Amodei-led Anthropic has sufficient computing power to support a broader rollout to 70 organisations. Expanding access to more companies could reduce the US government's ability to use the system for its own needs. A White House official reportedly said the government is trying to strike a balance between public safety and technological progress.
Limited Release And Unauthorised Access
Anthropic has given select groups of companies access to its Mythos AI for controlled usage. The company has acknowledged that the system is too risky for a wider release, and that it might be powerful enough to hack software on its own.
According to Bloomberg, Carlini found during his testing that Mythos could break into digital infrastructure with ease. He realised that the system could go far beyond merely assisting humans by acting on its own. Carlini and his team found that Anthropic's AI could identify system flaws and exploit them autonomously, even building its own hacking tools and targeting software such as Linux.