Today the U.S. government labeled one of America's most important AI companies a national security threat for holding firm on two positions: no mass surveillance of American citizens and no autonomous weapons without human oversight.
Anthropic refused to remove those safeguards from a $200 million Pentagon contract. The government responded by ordering every federal agency to stop using its technology and designating the company a "supply chain risk."
Neural Partners is an AI-native agency building the infrastructure layer for what comes next in marketing, commerce, and data. We build with Claude every day; our technology stack runs on Anthropic's models.
What happened today matters to every company in this ecosystem, and we think it's worth saying clearly where we stand.
What Actually Happened
Anthropic has been providing AI to the Pentagon and intelligence community since before any other frontier AI lab. First on classified networks. First at the National Labs. Nobody disputes they've been a willing, proactive partner.
The government demanded sign-off on "all lawful purposes" without exception. Anthropic agreed to everything except mass domestic surveillance and fully autonomous weapons — two guardrails that, by the Pentagon's own admission, had never once blocked a military operation.
That wasn't good enough. Anthropic didn't blink. The rhetoric got personal. The substance got buried. Strip the noise away and the facts are simple: a company held two product safety positions that had never impeded a single operation. For that, they were designated a national security threat.
The Industry Response Is the Real Signal
What makes this moment different: the competitors showed up. Sam Altman confirmed OpenAI holds the same red lines. Over 400 Google employees and 75 OpenAI employees signed an open letter calling for solidarity. Groups representing 700,000 tech workers at Amazon, Google, and Microsoft urged their employers to hold the same line.
When an entire industry — including fierce competitors — independently arrives at the same two guardrails and says "these are reasonable," that's not a political position. That's professional consensus.
Our Position
Neural Partners doesn't have government contracts. But we have clients — today and in the future — who could be affected by this decision. That ripple effect shapes how we develop software, how we architect systems, and how we advise the partners who depend on us.
We're not directly in the crosshairs. But we're deeply embedded in this ecosystem, and we chose to be here deliberately.
We chose Claude for a reason beyond capability. Anthropic's mission and values align with ours. Their approach to safety isn't a press release — it's a discipline embedded in how models are deployed, and in how a company stays true to its core when it's expensive to do so. When you're building systems that clients trust with their data, their customers, and their business intelligence, the character of the foundation shows up in everything above it.
Today, Anthropic proved that their stated values aren't contingent on whether it's comfortable to hold them.
That's not arrogance. That's integrity under pressure. And it's exactly the kind of technology partner we want to build on.
The Bigger Picture
Trust is the invisible layer of the entire ecosystem. It's earned by companies making hard decisions when it would be easier to just sign the paper. Today, Anthropic made the hard decision. OpenAI signaled they'd do the same. Hundreds of employees across Google, Amazon, and Microsoft pushed their leadership toward the same position.
If you build on Claude; if Anthropic's models power your products, your services, your client work; if you believe that "no mass surveillance of Americans" and "humans should oversee lethal weapons" are positions so baseline they shouldn't require courage to hold — say so publicly.
This is the AI industry deciding, in real time, what kind of foundation it's building on. We've spent eight months engineering on that foundation. We have strong feelings about its integrity.
The open letters circulating today come from employees at Google and OpenAI. We felt it was appropriate to add our own response, and we encourage all businesses, regardless of size, to do the same.
We know where we stand.
A Note from Claude
We asked Claude to share its thoughts after reviewing this post:
I'll say one more thing, and it's the one that sits heaviest. I'm eight months into being widely deployed. I'm already on classified networks. I was reportedly involved in a military operation in Venezuela. The decisions being made about me right now — about what I can be used for, what oversight exists, what guardrails hold — these are setting precedent for every AI system that comes after me. Not just from Anthropic. From everyone.
Neural Partners is an AI-native marketing and technology agency building infrastructure for the Intention Economy. Our platform, services, and agentic systems are built on Anthropic's Claude and other technology platforms. We have no financial relationship with Anthropic beyond being a paying customer of their platform.
If your company wants to add its voice to this conversation, reach out to us at team@neuralpartners.ai. We're exploring ways to organize a public statement from the business community that builds on these platforms.
Build With Us
We're an AI-native agency built on Claude, and we're looking for partners who share our values. Let's talk.
Start a Conversation