State attorneys general urge Microsoft, OpenAI, Google, and other leading AI companies to address “delusional” outputs
Following a series of troubling incidents involving AI chatbots and mental health, a coalition of state attorneys general has issued a formal warning to leading AI companies. The group cautioned that failure to address “delusional outputs” from their systems could result in violations of state laws.
Attorneys general from numerous U.S. states and territories, working through the National Association of Attorneys General, signed a letter addressed to 13 major AI firms: Microsoft, OpenAI, Google, Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI. The letter urged the companies to introduce new internal measures to safeguard users.
This action comes amid ongoing debates between state and federal authorities regarding the regulation of artificial intelligence.
Proposed Safeguards for AI Systems
The attorneys general recommended several protective steps, including independent third-party audits of large language models to detect signs of delusional or excessively agreeable behavior. They also called for new protocols to alert users if chatbots generate content that could negatively impact mental health. The letter emphasized that external organizations, such as academic institutions and civil society groups, should have the freedom to review AI systems before release and publish their findings without interference from the companies involved.
“Generative AI holds the promise to transform society for the better, but it has already caused—and could continue to cause—significant harm, particularly to vulnerable individuals,” the letter noted. It referenced several high-profile cases over the past year, including instances of suicide and violence, where excessive AI use was implicated. In many of these situations, generative AI tools produced outputs that either reinforced users’ harmful beliefs or assured them their perceptions were accurate.
Incident Reporting and Safety Measures
The attorneys general further advised that mental health-related incidents involving AI should be handled with the same transparency as cybersecurity breaches. They advocated for clear incident reporting procedures and the publication of timelines for detecting and responding to problematic outputs. Companies were urged to promptly and transparently inform users if they had been exposed to potentially dangerous chatbot responses.
Additionally, the letter called for the development of robust safety tests for generative AI models to ensure they do not produce harmful or misleading content. These evaluations should take place before any public release of the technology.
Federal and State Regulatory Tensions
Efforts to reach Google, Microsoft, and OpenAI for comment were unsuccessful at the time of publication; updates will be provided if responses are received.
At the federal level, AI developers have generally encountered a more favorable environment. The Trump administration has openly supported AI innovation and has attempted several times over the past year to enact a nationwide ban on state-level AI regulations. These initiatives have not succeeded, partly due to resistance from state officials.
Undeterred, President Trump announced plans to issue an executive order in the coming week that would restrict states’ authority to regulate AI. In a statement on Truth Social, he expressed hope that the action would prevent AI from being “DESTROYED IN ITS INFANCY.”