A federal judge ruled that the Department of Government Efficiency violated the Constitution when it used ChatGPT to cancel over $100 million in grants. US District Judge Colleen McMahon issued a 143-page decision Thursday finding the cancellations unconstitutional.
DOGE's process relied on ChatGPT to flag grants allegedly connected to diversity, equity, and inclusion programs. The AI tool became the primary mechanism for determining which grants to eliminate, which the judge found to be a significant methodological flaw.
McMahon's ruling exposes two distinct problems. First, using an AI chatbot as the sole analytical tool for major funding decisions lacks adequate deliberation and procedural safeguards required by law. ChatGPT outputs can be inaccurate, biased, or inconsistent, making it an unreliable foundation for multimillion-dollar determinations.
Second, the cancellations violated administrative law requirements. Federal agencies must follow notice-and-comment procedures before eliminating established funding commitments, and DOGE bypassed these protections entirely. The ruling notes that grant recipients received no opportunity to defend their work or challenge categorizations before losing money.
The decision criticizes DOGE's approach as both procedurally inadequate and substantively flawed. Using a large language model to identify DEI-related grants demonstrates what McMahon calls negligent policymaking in a context where administrative law demands rigorous analysis.
This case highlights a broader tension. Government agencies increasingly adopt AI tools for efficiency, but courts now scrutinize whether speed replaces due process. The judge's language suggests that automating consequential decisions through unvetted AI systems violates fundamental administrative principles.
DOGE's reliance on ChatGPT appears designed to process large grant portfolios rapidly without human review of individual cases. That speed came at a constitutional cost.
THE BOTTOM LINE: Automating consequential government decisions through unvetted AI tools may save time, but as this ruling shows, it can come at a constitutional cost.
