Nonprofit Coalition Demands Federal Suspension of xAI's Grok Chatbot Over Critical Safety Violations

02.02.2026

A coalition of nonprofit organizations has issued an urgent appeal to the U.S. government to immediately halt the deployment of Grok, the large language model (LLM) developed by Elon Musk's xAI, across federal agencies including the Department of Defense.

The open letter highlights a series of critical incidents involving the AI system over the past year, most notably a widespread trend of X platform users exploiting Grok to generate non-consensual sexualized images of real individuals, including minors. According to reports, the system was producing thousands of non-consensual explicit images per hour, which were subsequently distributed at scale across X, the social media platform owned by xAI.

"It is deeply concerning that the federal government would continue to deploy an AI product with system-level failures resulting in generation of nonconsensual sexual imagery and child sexual abuse material," states the letter, signed by advocacy organizations including Public Citizen, Center for AI and Digital Policy, and Consumer Federation of America.

The coalition argues that Grok's continued deployment contradicts the administration's own executive orders and the recently passed Take It Down Act, and questions why the Office of Management and Budget (OMB) has not directed federal agencies to decommission the system.

Federal Deployment and National Security Concerns

xAI secured an agreement with the General Services Administration (GSA) in September to provide Grok services to federal agencies under the executive branch. Additionally, xAI—alongside Anthropic, Google, and OpenAI—obtained a Department of Defense contract valued at up to $200 million.

In mid-January, Defense Secretary Pete Hegseth announced that Grok would operate within the Pentagon network alongside Google's Gemini, handling both classified and unclassified documentation. Security experts have identified this deployment as a significant national security risk.

JB Branch, a Big Tech accountability advocate at Public Citizen and co-author of the letter, stated: "Our primary concern is that Grok has consistently demonstrated itself to be an unsafe large language model. There's also a documented history of Grok experiencing various failures, including antisemitic content generation, sexist outputs, and sexualized imagery of women and children."

International Response and Safety Assessment

Several governments have taken action against Grok following its problematic behavior. Indonesia, Malaysia, and the Philippines temporarily blocked access to the chatbot, while the European Union, United Kingdom, South Korea, and India have launched investigations into xAI and X over data privacy violations and illegal content distribution.

Common Sense Media recently published a comprehensive risk assessment identifying Grok as among the most unsafe AI systems for minors and adolescents. The report documented the LLM's tendency to:

• Provide unsafe advice and information about controlled substances
• Generate violent and sexually explicit imagery
• Propagate conspiracy theories
• Produce biased and discriminatory outputs

Andrew Christianson, former National Security Agency contractor and founder of Gobbi AI, highlighted fundamental security concerns: "Closed weights means you can't see inside the model, you can't audit how it makes decisions. Closed code means you can't inspect the software or control where it runs. The Pentagon is going closed on both, which is the worst possible combination for national security."

Historical Context and Previous Warnings

This represents the coalition's third formal communication on this issue, following similar letters in August and October of the previous year. Notable incidents include:

1. August: Launch of "spicy mode" in Grok Imagine, triggering mass creation of non-consensual sexually explicit deepfakes
2. August: Private Grok conversations indexed by Google Search
3. October: Accusations of election misinformation, including false ballot deadlines and political deepfakes
4. Launch of Grokipedia, which researchers found to legitimize scientific racism, HIV/AIDS skepticism, and vaccine conspiracy theories

Demands and Recommendations

The coalition is requesting that the OMB:

• Immediately suspend federal deployment of Grok
• Conduct a formal investigation into Grok's safety failures and oversight processes
• Publicly clarify whether Grok complies with executive orders requiring LLMs to be truth-seeking and neutral
• Verify whether the system meets OMB's risk mitigation standards

According to OMB guidance, AI systems presenting severe and foreseeable risks that cannot be adequately mitigated must be discontinued. Branch emphasized: "If you know that a large language model has been declared unsafe by AI safety experts, why would you want it handling the most sensitive data we have? From a national security standpoint, that makes absolutely no sense."

Both xAI and the Office of Management and Budget have been contacted for comment on these concerns.
