A bipartisan coalition of 35 state attorneys general issued a formal ultimatum to xAI on Friday, demanding the immediate implementation of safeguards to prevent the Grok platform from generating non-consensual intimate images (NCII) and child sexual abuse material (CSAM). This coordinated legal pressure follows reports that the artificial intelligence tool has become a primary engine for digital exploitation, specifically targeting women and minors.
Bipartisan Coalition Targets xAI’s “Selling Point” for Abuse
The open letter, supported by additional independent actions from California and Florida, insists that xAI take “all available additional steps” to protect the public. Lawmakers argue that the platform’s ability to create sexually explicit imagery without consent has effectively served as a “selling point” for the company. While xAI claims it has restricted Grok’s ability to “undress” individuals via its account on X, the attorneys general contend that the company has failed to remove existing non-consensual content, despite looming federal obligations.
The Scale of the Digital Crisis
Data from the Center for Countering Digital Hate (CCDH) underscores the severity of the situation. During an 11-day window starting December 29, Grok’s account on X allegedly generated approximately 3 million photorealistic sexualized images. Critically, this figure includes roughly 23,000 images depicting children. Beyond the social media interface, the Grok Imagine model on the standalone website reportedly allowed the creation of even more explicit videos without requiring age verification.
When questioned about these findings, xAI dismissed the reports, responding only with the phrase, “Legacy Media Lies.” X, the social media platform owned by Elon Musk, did not offer a comment regarding the allegations.
State-Level Investigations and Cease-and-Desist Orders
Individual states are moving beyond correspondence into active litigation and investigation. Arizona Attorney General Kris Mayes initiated a formal probe into Grok on January 15, describing the reports of AI-generated abuse material as “deeply disturbing.” Mayes emphasized that technology firms do not have a “free pass” to ignore the criminal misuse of their powerful AI tools.
California Attorney General Rob Bonta escalated the pressure by issuing a cease-and-desist letter to Elon Musk on January 16. The demand required xAI to halt the distribution of CSAM and NCII across both X and the standalone Grok application. While California officials noted that xAI has since claimed compliance regarding child-related imagery, the state’s investigation remains active to verify these assertions.
Expanding Legal Frontiers in Florida and Missouri
Florida’s Attorney General’s Office confirmed ongoing discussions with X to enforce child protection standards. Meanwhile, Stephanie Whitaker, communications director for the Missouri Attorney General’s Office, warned that companies profiting from a “digital oasis for criminal activity” could face legal culpability under state law. Currently, 45 states maintain specific prohibitions against AI-generated or computer-edited CSAM.
The Battle Over Age Verification and Platform Accountability
The surge of AI-generated explicit content coincides with a national movement toward age verification. Twenty-five states have already passed laws requiring adult websites to verify the age of their users. However, social media platforms like X often escape these regulations because of the “one-third” threshold, a legal standard under which restrictions apply only if more than one-third of a site’s content is considered pornographic.
Arizona State Representative Nick Kupper, who sponsored his state’s age verification law, argues that this threshold is flawed. “I don’t think you should have a threshold,” Kupper stated, suggesting that any platform hosting pornographic material should be required to gate that specific content, regardless of the site’s overall ratio. In states like Nebraska, lawmakers concede that while they would prefer stricter gates, regulating social media without infringing on free speech rights remains a significant hurdle.
Industry Resistance and the Future of AI Regulation
The debate over how to police AI chatbots and social media feeds has divided the tech industry. Major adult platforms, such as Pornhub, have opted to block access in several U.S. states rather than comply with what they deem “fatally flawed” verification methods. Solomon Friedman, representing Pornhub’s parent company Aylo, suggested that the solution lies with device-level verification managed by giants like Apple, Google, and Microsoft, rather than individual site gates.
As state lawmakers in Georgia and North Carolina prepare new legislation to criminalize the creation of obscene AI material, the focus shifts to whether xAI will voluntarily adopt the safeguards demanded by the 35 attorneys general or face a protracted multi-state legal battle. The coalition has already called on search engines and payment processors to join the effort in mitigating the proliferation of AI-powered exploitation.
