Sources: DOD asked Boeing and Lockheed Martin to assess their reliance on Claude, a first step toward blacklisting Anthropic; Lockheed confirms it was contacted

The Pentagon has initiated a process that could lead to blacklisting Anthropic, the AI company behind Claude, by designating it a "supply chain risk." This unprecedented move stems from a dispute over Anthropic's refusal to lift safeguards that prevent Claude's use in sensitive military applications, specifically mass surveillance of Americans and the development of autonomous weapons. The Defense Department views these restrictions as unworkable and has given Anthropic a Friday deadline to comply with its terms. As a preliminary step, the Pentagon has asked major defense contractors, including Boeing and Lockheed Martin, to assess their reliance on Claude. Such a designation could significantly disrupt the military's use of advanced AI: Claude is reportedly the only AI model currently operating within classified systems and has been used in critical operations. The action highlights the growing tension between national security imperatives and ethical considerations in AI deployment, particularly for military use cases.

Key Details

The Pentagon's request that Boeing and Lockheed Martin assess their reliance on Anthropic's Claude model is a significant first step toward potentially labeling the AI firm a "supply chain risk." This designation, typically reserved for entities from adversarial nations, marks a critical escalation in the dispute between the Defense Department and Anthropic. The military's current dependence on Claude, including its use in sensitive operations such as the capture of Nicolás Maduro, underscores the potential impact of such a designation. Lockheed Martin's confirmation of the Pentagon's inquiry validates the seriousness of these actions and indicates that broader outreach to the major defense contractors known as "the primes" is imminent.

Market implications of this potential blacklisting are substantial. While the immediate impact is on defense contractors utilizing Claude, a broader consequence could be the forced diversification of AI models within the defense sector. Competitors like Google (with Gemini) and Elon Musk's xAI, which are already in talks to enter classified systems under less restrictive terms, could see accelerated adoption. Anthropic, despite its robust funding and market penetration in the commercial sector, faces a significant challenge to its government contracts and its reputation as a reliable partner for national security applications. However, the company may also leverage its stance on ethical AI as a differentiator.

Technically, the Pentagon's frustration centers on Anthropic's refusal to allow Claude to be used for "all lawful purposes," particularly surveillance and autonomous weapons. The military finds the requirement to clear individual use cases with Anthropic unworkable for time-sensitive operations. The Defense Department's threat to invoke the Defense Production Act or impose the supply chain risk designation underscores its determination to gain unfettered access to AI capabilities. This clash highlights a fundamental tension: the need for cutting-edge military AI versus the ethical boundaries and safety protocols championed by AI developers like Anthropic.

Moving forward, the crucial development to watch is the outcome of the Friday deadline set by Defense Secretary Pete Hegseth. The immediate future hinges on whether Anthropic concedes to the Pentagon's demands, leading to a potential modification of its AI safeguards for military use, or the Pentagon proceeds with the supply chain risk designation or a Defense Production Act invocation. The Pentagon's willingness to "make them pay a price" suggests a firm resolve, but Anthropic's consistent stance indicates it may be prepared to fight any punitive measures, potentially through legal channels, further complicating the landscape of AI in defense.

Covered by: techmeme