The U.S. National Security Agency (NSA) has quietly integrated Anthropic’s latest artificial intelligence model, Claude Mythos Preview, into its operational workflow. The move, first reported by Axios, signals a significant pivot for an agency that operates at the intersection of high-stakes signals intelligence and emerging technology.

The adoption is particularly noteworthy given the historical friction between the federal government and Anthropic. Previous restrictions, established during the Trump administration, sought to limit the use of the company's technology for military and defense purposes following disagreements over the ethical boundaries of AI deployment. Anthropic has long positioned itself as a "safety-first" developer, a stance that has occasionally clashed with the more aggressive requirements of national security infrastructure.

This integration suggests a pragmatic softening of those barriers as the intelligence community races to keep pace with rapid advancements in large language models. For the NSA, the utility of Claude’s sophisticated reasoning and linguistic capabilities appears to have outweighed lingering policy hesitations, marking a new chapter in the complex relationship between Silicon Valley’s AI pioneers and the American defense establishment.

With reporting from Exame Inovação.
