A new Trump administration contracting clause would require AI companies like Anthropic to make their technology available to federal agencies “for any lawful government purpose” — even for uses their systems are designed to prevent.

The Trump administration is quietly advancing a sweeping new contracting clause that would empower federal officials to force artificial intelligence companies to scrap safety protocols, eliminate privacy protections, and follow other Trump directives as a condition of doing business with the government, according to draft text reviewed by The Lever.
The new proposal comes just as the Trump administration faces off in court against the major AI company and military contractor Anthropic, which has objected to government directives mandating the use of its algorithms for mass surveillance and fully autonomous weapons.
The proposed government-wide change to procurement policy would require all artificial intelligence vendors to make their technology available to federal agencies “for any lawful government purpose” — even those that the companies object to or that their systems are engineered to prevent. This would enshrine the same demand that Anthropic is contesting in court in its ongoing battle against the Pentagon.
Under the new guidance, this “key portion” of the Anthropic contract under dispute would become “a required standard clause for all government contracts and solicitations for any AI systems,” said Quinn Anex-Ries, a senior policy analyst at the Center for Democracy and Technology, an advocacy group that has intervened in the Anthropic case.
The language is so broad that even a former Trump official who wrote the administration’s policy proposal for artificial intelligence, known as the AI Action Plan, called the provision legally “unworkable” in a public comment.
The clause was initially given only a two-week comment period, a “very short window” made possible by a nontraditional rulemaking process, according to one law firm tracking the proposal. Officials extended the comment period this week at the AI industry’s request.
Anthropic did not respond to a request for comment.
“For Any Lawful Government Purpose”
On March 6, the General Services Administration, the federal government’s primary contracting authority, proposed new rules for all AI vendors working with federal agencies. The framework covers a broad range of issues, from data protection and portability to incident reporting, and includes requirements for departments to work with “American AI” companies.
But buried in the proposal is a provision requiring vendors to grant the federal government an “irrevocable” license for their software, which the government could use for any lawful purpose. As a condition of the license, the vendor would be barred from refusing to “produce data outputs or conduct analyses based on the Contractor’s or Service Provider’s discretionary policies.”
As Anex-Ries noted, requiring vendors to allow their technology to be used for anything so long as it is technically legal is an “extremely broad” carveout.
“‘Lawful purpose’ might sound nice on the surface but actually would include a bunch of pretty harmful use cases that a vendor would rightfully not want to agree to,” Anex-Ries said.
Such “legal” applications could, for instance, include mass surveillance or fully autonomous weapons, the two red lines that Anthropic has drawn in its battle with the Pentagon over the use of its technology. The US military is currently using Anthropic’s AI systems in its war against Iran, a conflict that has killed over two thousand people.
The proposal has garnered pushback even from Trump allies. In public comments submitted on Friday to the General Services Administration, Dean Ball, a former Trump official who played a key role in writing the administration’s AI Action Plan last year, wrote that the proposal suffers from “several serious deficiencies,” calling the “no-refuse” provision an “unworkable and legally unstable mandate.” He argued that the clause could lead to the “elimination of all model-level and system-level safeguards” by AI companies.
The proposed amendment to federal contracting rules shares aims with national AI legislative guidelines announced on Friday by the White House, which create a light-touch regulatory framework that explicitly overrides state AI regulations on matters like safety and privacy.
Battle Over AI Safety
Anthropic has cultivated a reputation as the “AI safety” company for using relatively rigorous internal risk assessments on its models and lobbying federal and state regulators to adopt similar protocols. The company is also a major government contractor, first striking a deal in 2024 with the Biden administration and then signing a $200 million deal with the Pentagon in 2025 for the use of its technology.
In negotiations over the renewal of its contract with the federal government, which have been ongoing for months, Anthropic pressed for several additional safety provisions. The company sought contractual guardrails to block the use of its technology for domestic mass surveillance and autonomous warfare capabilities, neither of which would technically be prohibited under federal law.
The Defense Department refused to accept these terms, arguing that a private company should not be able to dictate extralegal terms of use to the government. When Anthropic objected, Defense Secretary Pete Hegseth blacklisted the company from all government contracting and labeled it a “supply chain risk,” a designation that could damage its commercial business.
Anthropic warned on March 5 that it was planning to sue the government over the dispute. The following day, the General Services Administration released its new AI proposal mandating that all such technology could be used for any lawful purpose.
Anthropic followed through with its threat three days later, suing the Defense Department over what it called “unprecedented and unlawful” actions in violation of the company’s First Amendment rights.
“Weaponizing the Procurement Process”
If enacted, the proposed new AI clause would give federal agencies the authority to supersede any AI contractor’s terms of use that don’t align with federal laws on matters such as privacy, surveillance, and safety. It would apply to both subcontractors and commercial vendors whose technology is used under a federal contract.
The proposal also requires AI systems to “not refuse to produce data outputs or conduct analyses” based on any “discretionary policies.”
In addition to granting the government an irrevocable license to the technology, the proposal has several other concerning elements, said Anex-Ries. That includes a clause barring companies from favoring “ideological dogmas such as diversity, equity, inclusion,” while at the same time requiring all vendors to be ostensibly “unbiased.”
“We have a number of instances where they’re trying to score political points by weaponizing the procurement process to pressure vendors into changing their practices or their design of tools,” he said.
This article was first published by The Lever, an award-winning independent investigative newsroom.