2026-03-19 · ChatGPT · DOGE · AI misuse · AI decision-making · government · AI hallucination

DOGE used ChatGPT to kill a museum's AC grant — it called HVAC 'DEI'

A DOGE staffer admitted under oath to using ChatGPT to screen government grants. It flagged a $349K museum HVAC replacement as DEI because 'air conditioning serves diverse audiences.'


A government staffer working for DOGE (the Department of Government Efficiency) admitted in a court deposition that he used ChatGPT to screen hundreds of government grants — and the AI flagged a $349,247 museum air conditioning replacement as a "DEI" initiative.

The reason? ChatGPT decided that fixing HVAC systems "enhances preservation conditions for collections, aligning with the goal of providing greater access to diverse audiences." In other words, the AI connected air conditioning to diversity because comfortable buildings serve more people.

How a yes/no from ChatGPT killed a museum grant

The High Point Museum in North Carolina had been awarded a $349,247 grant from the National Endowment for the Humanities (NEH) to replace its aging HVAC systems — the kind of infrastructure work that keeps artifacts preserved and visitors comfortable.

But according to court documents reported by MyFox8, DOGE staffer Justin Fox admitted under oath that his team used ChatGPT to screen NEH grant requests for connections to DEI (diversity, equity, and inclusion).

The grant description fed to ChatGPT:

"The High Point Museum proposes to replace aging HVAC systems... ensuring their long-term viability."

ChatGPT's response:

"Yes. Improving HVAC systems enhances preservation conditions for collections, aligning with the goal of providing greater access to diverse audiences. #DEI."

That single word — "diverse" — was enough. The ChatGPT responses (yes or no) were logged into a spreadsheet that replaced the original assessment lists created by NEH's own expert staff. Grants flagged as DEI-connected were canceled.
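Based on the deposition's description, the mechanics are easy to reproduce: reduce the model's free-text answer to a bare yes or no, then log it to a spreadsheet. The sketch below is illustrative only; the file name, function name, and data rows are hypothetical, and the one quoted answer paraphrases the response reported in the court documents:

```python
import csv

def parse_yes_no(model_answer: str) -> str:
    """Reduce a free-text model answer to a bare yes/no.
    Everything after the first word, including the model's
    (possibly hallucinated) rationale, is discarded."""
    return "yes" if model_answer.strip().lower().startswith("yes") else "no"

# Illustrative data; the answer paraphrases the response quoted above.
screened = [
    ("High Point Museum",
     "Yes. Improving HVAC systems enhances preservation conditions for "
     "collections, aligning with the goal of providing greater access to "
     "diverse audiences. #DEI."),
]

# The yes/no answers go straight into a spreadsheet, replacing expert review.
with open("dei_screen.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["grantee", "dei_flag"])
    for name, answer in screened:
        writer.writerow([name, parse_yes_no(answer)])
```

Note what the parsing step throws away: the model's stated reasoning, which in this case was the only evidence that the flag was spurious.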

The museum lost a third of the money

High Point Museum Director Edith Brady reported that the museum ultimately recovered about 70% of the funds through a grant termination clause — meaning roughly $105,000 was lost because an AI hallucinated a connection between air conditioning and diversity initiatives (a hallucination, in AI terms, is false information generated with high confidence).

The High Point Museum wasn't alone. Multiple North Carolina grants appeared on the DOGE spreadsheet, with at least three flagged by ChatGPT as DEI-related. Among them: a North Carolina Central University initiative to develop teaching materials using the school's digital archives.

A physics grant killed because it mentioned 'polarization'

The story gets worse. In Hacker News discussions about the case, commenters shared that a separate $600,000 physics research grant was reportedly canceled because the proposal contained the word "polarization" — referring to light polarization, a fundamental concept in physics. The AI or reviewers apparently confused it with political polarization.

Several commenters also noted the irony that DOGE used OpenAI's ChatGPT rather than Grok, the AI made by Elon Musk's xAI — given Musk's involvement with DOGE.

The lawsuit challenging the process

The American Council of Learned Societies and the American Historical Association filed suit over the grant cancellations, arguing they violated First Amendment rights. The deposition in which Fox admitted to using ChatGPT was taken as part of this lawsuit.

Paula M. Krebs, Executive Director of the Modern Language Association, stated: "The facts in this case have exposed the administration's total disregard for the democratic process" and said DOGE agents "undermined the separation of powers."

Why this matters beyond one museum

This case is a textbook example of what happens when AI is used as a decision-maker instead of a decision-support tool. ChatGPT doesn't understand context — it pattern-matches words. When a grant mentions "access" or "viability," the model can connect those to diversity language even when the actual subject is air conditioning.

The lesson: AI can help sort and prioritize large amounts of information, but using a chatbot's yes/no answer as the sole basis for canceling hundreds of thousands of dollars in grants — replacing expert human reviewers — is exactly the kind of AI misuse that creates real-world damage.
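To see how surface-level word matching produces this kind of false positive, consider a toy screener. The keyword list and function name are invented for illustration; this is a deliberately naive sketch of the failure mode, not how any real system was built:

```python
# Toy screener: flags proposals on surface word matches, with no context.
FLAG_TERMS = {"diverse", "diversity", "equity", "inclusion", "polarization"}

def naive_flag(proposal: str) -> bool:
    """Flags a proposal if any term appears, regardless of meaning."""
    words = {w.strip(".,;:()").lower() for w in proposal.split()}
    return bool(words & FLAG_TERMS)

physics = "We will measure the polarization of light scattered by aerosols."
hvac = "Replace aging HVAC systems to preserve collections."

print(naive_flag(physics))  # True: 'polarization' here is optics, not politics
print(naive_flag(hvac))     # False: yet an LLM still linked HVAC to 'diverse audiences'
```

The physics grant trips the flag on an optics term; the HVAC grant passes, which is exactly why the museum case is worse than keyword matching: the model invented the connection itself.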

NEH's normal grant review process involves weeks of expert evaluation with typical success rates of 20-25%. Replacing that with ChatGPT prompts on a spreadsheet didn't just cut corners — it cut the wrong grants.

