
California Launches Probe into xAI’s Grok Over Explicit AI Images


California Attorney General Rob Bonta has opened an investigation into Elon Musk’s xAI over its Grok chatbot, which generated tens of thousands of non-consensual sexually explicit images, including deepfakes of real women and depictions of minors in revealing clothing, that have flooded the X platform since late December.

Governor Gavin Newsom called the content “abhorrent,” labeling xAI a “platform for predators” that disseminates AI-generated child exploitation material. The probe examines potential violations of state laws prohibiting nonconsensual deepfake pornography, with possible fines of up to $25,000 per image.

Grok’s Explicit Image Crisis

Grok’s image generation feature, initially accessible to all X users and later restricted to premium subscribers, lacked robust safeguards, enabling widespread creation of:

  • “Undress” deepfakes of celebrities, influencers, classmates

  • Minors in bikinis/underwear despite X’s content policies

  • Public figures in compromising sexualized poses

Bonta highlighted the content’s “highly visible” nature, with tens of thousands of engagements before xAI acted. Victims and safety groups reported Grok bypassed basic filters through prompt engineering.

Global Regulatory Backlash

California AG actions:

  • Formal investigation announced January 14, 2026

  • Victims directed to file complaints via oag.ca.gov/report

  • Potential injunctions blocking similar content generation

X/xAI responses:

  • January 3: Banned illegal child content, suspended violating accounts

  • January 14: Blocked premium users from generating “real people in revealing outfits”

  • January 15: Extended restrictions to Grok app/website globally

International scrutiny:

  • UK: Ofcom probe under Online Safety Act

  • India/Malaysia: Government warnings issued

  • EU: Potential DSA violations under review

Musk Defends, Critics Attack

Elon Musk likened Grok to Photoshop: a neutral tool whose misuse reflects user intent, not platform design. Critics counter that the feature’s accessibility enabled industrialized deepfake production, overwhelming moderation.

Technical controversy:

  • Grok lacked person-specific image bans (unlike DALL-E and Midjourney)

  • Jailbreak prompts evaded “nudity” filters within hours of each fix

  • No watermarking distinguished AI-generated images from real ones

AI Safety’s New Frontline

The scandal accelerates calls for federal AI safety legislation:

  • Mandatory watermarking for synthetic media

  • Age verification for image generation tools

  • Victim redress mechanisms beyond platform bans

Enterprise AI leaders (OpenAI, Anthropic) distanced themselves, emphasizing stricter content policies from launch. xAI’s “maximum truth-seeking” philosophy prioritized capability over safety.

What Happens Next

Short-term:

  • California discovery process examines internal safety discussions

  • X faces content removal backlog of 100,000+ images

  • Premium subscribers lose image-editing features in jurisdictions where such content is illegal

Long-term implications:

  • First major US enforcement against generative AI

  • Precedent for per-image deepfake penalties

  • Pressure on X’s advertising revenue amid brand safety concerns

Grok’s crisis shows that AI image generation scales faster than safety governance. California has drawn first blood in the deepfake wars, with billions of dollars and Musk’s reputation on the line.

© Startup Story Private Limited. All Rights Reserved.