This isn’t just another AI blunder; it’s a judicial smackdown. A federal judge just declared a government agency’s $100 million grant cancellation unconstitutional, all thanks to a bafflingly dumb and borderline illegal use of ChatGPT. Who would have thought?
Look, I’ve been watching this AI circus for years, seen more hype than a Bitcoin convention in its prime. But this? This is special. The Department of Government Efficiency (DOGE) – what a name, by the way – decided to slash over $100 million in federal grants. Their justification? Apparently, they tasked ChatGPT with figuring out if a grant was related to Diversity, Equity, and Inclusion (DEI). Yeah, you read that right. They just fed descriptions into the bot and let it decide.
The judge, Colleen McMahon, didn’t mince words. Her 143-page decision essentially says DOGE went full-on Rube Goldberg with a chatbot. They were trying to cut grants related to DEI, and instead of, you know, actually reading the grant proposals or having humans with actual domain knowledge make decisions, they punted it to an AI they didn’t even bother to define the terms for. “It could not be more obvious that DOGE used the mere presence of particular, protected characteristics to disqualify grants from continued funding,” she wrote, and honestly, my hat’s off to her for keeping a straight face through that one.
The ‘Craziest Grants’ Fueled by AI
This whole mess hinges on testimony from a DOGE staffer, Justin Fox. He admitted to using ChatGPT to “highlight why [a] grant may relate to DEI” and “to pull out anything related to DEI.” His method? Feed grant descriptions into ChatGPT with a prompt like, “Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation.” Fox’s pièce de résistance? He freely admitted he didn’t define “DEI” for the AI and had “not the slightest idea how ChatGPT understood the term.” I mean, it’s almost impressive in its sheer, unadulterated incompetence.
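For the curious, the pattern described in the testimony is easy to reconstruct. Here's a minimal sketch of what that workflow amounts to; the function names and parsing logic are my assumptions, and only the prompt wording is quoted from the court record:

```python
# Hypothetical reconstruction of the prompting pattern described in testimony.
# Only the prompt text is quoted from the ruling; everything else here is an
# illustrative assumption about the simplest mechanism matching the description.

PROMPT_TEMPLATE = (
    "Does the following relate at all to DEI? "
    "Respond factually in less than 120 characters. "
    "Begin with 'Yes.' or 'No.' followed by a brief explanation.\n\n{description}"
)

def build_prompt(description: str) -> str:
    """Wrap a grant description in the prompt quoted in the ruling."""
    return PROMPT_TEMPLATE.format(description=description)

def parse_verdict(reply: str) -> bool:
    """Treat any reply starting with 'Yes.' as a DEI flag. Note that 'DEI'
    is never defined anywhere in the prompt -- which is precisely the gap
    the judge zeroed in on."""
    return reply.strip().startswith("Yes.")
```

That's the whole pipeline: an undefined term, a binary answer, and no human review of whether the model's one-line rationale made any sense.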
And it gets better (or worse, depending on your perspective). Fox also used ChatGPT to scan for terms he’d flagged as “Detection Codes” – think “BIPOC,” “Minorities,” “LGBTQ,” “Gay.” When questioned, he confirmed he ran this list of words through every grant description. So, DOGE didn’t just use AI to guess about DEI; they used it to actively search for keywords associated with protected characteristics, then conveniently labeled those grants as “wasteful.” Grants dealing with the Holocaust, civil rights, or indigenous knowledge? All dumped into the same bin of “wasteful” projects because a chatbot flagged a word. It’s keyword matching dressed up as analysis.
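To see why that keyword scan is so crude, here's a hedged sketch. The "Detection Codes" are the terms named in testimony; the substring matching is my guess at the simplest logic that produces the false positives the judge described:

```python
# Illustrative keyword scan over grant descriptions. The "Detection Codes"
# come from the testimony; the case-insensitive substring matching is an
# assumption about the simplest mechanism behaving as described.

DETECTION_CODES = ["BIPOC", "Minorities", "LGBTQ", "Gay"]

def flag_grant(description: str) -> list[str]:
    """Return every detection code found in the description, ignoring case."""
    lowered = description.lower()
    return [code for code in DETECTION_CODES if code.lower() in lowered]

# A Holocaust-education grant trips the filter merely by mentioning
# persecuted minorities -- keyword presence, not subject matter.
sample = "Preserving oral histories of minorities persecuted during the Holocaust."
```

Run `flag_grant(sample)` and the grant gets flagged on the word “minorities” alone, with zero regard for what the project actually does.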
McMahon also pushes back on the government’s argument that “there is no real constitutional problem here because any viewpoint-based classification was ChatGPT’s doing, rather than the Government’s”:
There is no distinction to be drawn here between the Government and ChatGPT. ChatGPT was the Government’s chosen instrument for purposes of this project, and DOGE’s use of AI to identify DEI-related material neither excuses presumptively unconstitutional conduct nor gives the Government carte blanche to engage in it. …There is not a scintilla of evidence that Fox or Cavanaugh, having obtained a “DEI” rationale from ChatGPT, undertook any meaningful review of whether that rationale made sense.
This is the core of it. The government can’t just outsource its constitutional obligations to a black box. They’re trying to slough off responsibility onto the AI, but the judge rightly pointed out that ChatGPT was their tool. If the tool is being used to violate fundamental rights, that’s on the user, not the tool manufacturer. They didn’t even bother to check if the AI’s output made sense. Zero oversight. Just blind faith in the magic box.
Who’s Actually Making Money Here?
This ruling is a loud, clear warning shot. It tells government agencies that blindly trusting AI, especially when dealing with sensitive legal and constitutional matters, is a recipe for disaster. It’s not just about efficiency; it’s about fundamental rights. The irony is thick: a tool often touted for its ability to streamline processes is being held up as the very mechanism that undermined due process. And who profits? Well, OpenAI, I suppose, if more agencies decide to blindly outsource their thinking. For the rest of us, it’s another reminder that AI is a tool, not a solution, and the humans wielding it still need to, you know, think.
Ultimately, McMahon found that DOGE’s move violated the First and Fifth Amendments and ordered the cancellations reversed. So, the grants are back. The lesson? When the government uses AI, it’s not about whether the AI is smart; it’s about whether the people using it are.
🧬 Related Insights
- Read more: Meta Rips Away Instagram’s Encryption Shield – Privacy’s Dark Day
- Read more: Legora’s aOS: Dawn of ‘Agentic Law’ or Just More Hype?
Frequently Asked Questions
What was DOGE’s role in this case? DOGE, the Department of Government Efficiency, cancelled over $100 million in federal grants. They used ChatGPT to determine if grant proposals related to Diversity, Equity, and Inclusion (DEI).
Why was the use of ChatGPT deemed illegal? Judge Colleen McMahon ruled that DOGE’s reliance on ChatGPT without defining terms or performing meaningful review was unconstitutional. The AI was used as a tool to disqualify grants based on protected characteristics, violating First and Fifth Amendment rights.
Will this ruling impact other government AI use? Yes, the ruling serves as a significant warning to government agencies about the unconstitutional risks of blindly outsourcing decision-making to AI. It emphasizes the need for human oversight and adherence to legal standards when implementing AI tools in sensitive areas.