
Court temporarily blocks US government from labeling Anthropic as a 'supply chain risk'
The court has granted Anthropic’s request for a preliminary injunction, preventing the government from banning its products for federal use and from formally labeling it a “supply chain risk,” at least for now. If you’ll recall, things turned sour between the company and the Trump administration when Anthropic refused to change the terms of its contract to allow the government to use its technology for mass surveillance and the development of autonomous weapons.

In response to Anthropic’s refusal, the president ordered federal agencies to stop using Claude and the company’s other services. The Defense Department also officially labeled it a supply chain risk, a designation typically reserved for entities based in US adversary nations like China that threaten national security. In addition, department secretary Pete Hegseth warned companies that if they want to work with the government, they must sever ties with Anthropic. The AI company challenged the designation in court, calling it unlawful and a violation of its free speech and due process rights. It also asked the court to put a pause on the ban while the lawsuit is ongoing.

In a court filing, the Defense Department said giving Anthropic continued access to its warfighting infrastructure would “introduce unacceptable risk” to its supply chains. But Judge Rita F. Lin of the District Court for the Northern District of California said the measures the government took “appear designed to punish Anthropic.” Lin wrote in her decision that it seems Anthropic is being punished for criticizing the government in the press. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” she continued. The judge also said that the supply chain risk designation is contrary to law, arbitrary and capricious.

She added that the government argued that Anthropic showed its subversive tendencies by “questioning” the use of its technology. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government,” she wrote.

Anthropic told The New York Times that it’s “grateful to the court for moving swiftly” and that it’s now focused on “working productively with the government to ensure all Americans benefit from safe, reliable AI.” The company’s lawsuit is still ongoing, and the court has yet to issue its final decision. Judge Lin said, however, that Anthropic “has shown a likelihood of success on its First Amendment claim.”

This article originally appeared on Engadget at https://www.engadget.com/ai/court-temporarily-blocks-us-government-from-labeling-anthropic-as-a-supply-chain-risk-083857528.html?src=rss

