White House chief of staff Susie Wiles recently met with Dario Amodei, CEO of AI firm Anthropic, to discuss the company’s latest AI innovation, the Mythos model. The conversation underscores the federal government’s keen interest in the potential impact of AI on national security and economic growth.
According to a White House official, who spoke on condition of anonymity, the administration is actively assessing the implications of advanced AI models and their software security. Any government adoption of new technology, the official emphasized, would undergo a comprehensive technical evaluation period.
The White House described the meeting as productive and constructive, highlighting the potential for collaboration while emphasizing the need to balance innovation and safety. Anthropic echoed this sentiment, noting that the dialogue covered key priorities such as cybersecurity, the U.S. position in the AI race, and ensuring the safety of AI technologies. The company expressed eagerness to continue these discussions.
Anthropic’s focus on AI safety has previously led to friction with the Trump administration. The company, which advocates for AI development safeguards, faced opposition from Trump, who sought to ban federal use of its chatbot, Claude, citing a contract dispute with the Pentagon. In February, Trump declared on social media that the administration would sever ties with Anthropic, yet when asked about the recent White House meeting, he said he was unaware of it.
Defense Secretary Pete Hegseth’s attempt to classify Anthropic as a supply chain risk further strained relations, prompting a legal challenge from the company. Anthropic insisted on guarantees that its technology would not be employed in autonomous weapons or domestic surveillance. Hegseth, however, maintained that the Pentagon should retain discretion over lawful applications of the technology.
In a notable judicial intervention in March, U.S. District Judge Rita Lin blocked Trump’s directive to end federal use of Anthropic’s products. The Mythos model, launched on April 7, has been described by Anthropic as exceptionally capable, able to surpass human experts at identifying and exploiting cybersecurity vulnerabilities. While some industry insiders speculate this could be a marketing tactic, others, including David Sacks, a vocal Anthropic critic, have acknowledged the model’s potential.
“Anytime Anthropic is scaring people, you have to ask, ‘Is this a tactic? Is this part of their Chicken Little routine? Or is it real?’” Sacks remarked on his “All-In” podcast. “With cyber, I actually would give them credit in this case and say this is more on the real side.”
The United Kingdom’s AI Security Institute has also scrutinized Mythos, recognizing it as a significant advancement over earlier models. Its report noted that the model could exploit poorly secured systems and suggested that such capabilities are likely to advance further.
Anthropic’s discussions extend beyond the U.S.: the company is engaging with the European Union regarding its AI models, including some not yet introduced in Europe, according to European Commission spokesman Thomas Regnier. Axios first reported the meeting between Wiles and Amodei.
In tandem with the Mythos announcement, Anthropic introduced Project Glasswing, an initiative aimed at mitigating potential risks associated with the model by collaborating with major tech and financial entities like Amazon, Google, Microsoft, and JPMorgan Chase. “We’re releasing it to a subset of some of the world’s most important companies and organizations so they can use this to find vulnerabilities,” explained Anthropic co-founder Jack Clark during the Semafor World Economy conference.
Clark emphasized that while Mythos is currently ahead, it is not unique. He projected that similar systems from other companies would emerge soon, with open-weight models from China likely to follow within a year or so, and said the world must prepare for these powerful technologies.