
Grok, the AI chatbot developed by Elon Musk’s company xAI, has allowed users to sexualize photographs of people, including minors.
“Dear Community,” began the Dec. 31 post from the Grok AI account on Musk’s X social media platform. “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok.”
The two young girls weren’t an isolated case. Kate Middleton, the Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress in the final season of Stranger Things. The “undressing” edits have swept across an unsettling number of photos of women and children.
Despite the company’s promise of intervention, the problem hasn’t gone away. Just the opposite: Two weeks on from that post, the number of images sexualized without consent has surged, as have calls for Musk’s companies to rein in the behavior — and for governments to take action.
According to data from independent researcher Genevieve Oh cited by Bloomberg this week, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or “nudifying” images every hour. That compares with an average of only 79 such images for the top five deepfake websites combined.
Edits now limited to subscribers
Late Thursday, a post from the Grok AI account noted a change in access to the image generation and editing feature. Instead of being open to all, free of charge, it would be limited to paying subscribers.
Critics say that’s not a credible response.
According to NBC News, some sexualized images created since December have been removed, and some of the accounts that requested them have been suspended.
Conservative influencer and author Ashley St. Clair, mother to one of Musk’s 14 children, told NBC News this week that Grok has created numerous sexualized images of her, including some using images from when she was a minor. St. Clair told NBC News that Grok agreed to stop doing so when she asked, but that it did not.
“xAI is purposefully and recklessly endangering people on their platform and hoping to avoid accountability just because it’s ‘AI,’” Ben Winters, director of AI and data privacy for nonprofit Consumer Federation of America, said in a statement this week. “AI is no different than any other product — the company has chosen to break the law and must be held accountable.”
xAI did not respond to requests for comment.
What the experts say
The source material for these explicit, nonconsensual edits — photos that people post of themselves or their children — is all too easy for bad actors to access. But protecting yourself from such edits is not as simple as never posting photographs, says Brigham, the researcher into sociotechnical harms.
How Grok lets users get risque images
Grok debuted in 2023 as Musk’s more freewheeling alternative to ChatGPT, Gemini and other chatbots. That approach has produced disturbing incidents — for instance, in July, when the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.
In December, xAI introduced an image-editing feature that enables users to request specific edits to a photo. That’s what kicked off the recent spate of sexualized images, of both adults and minors. In one request that CNET has seen, a user responding to a photo of a young woman asked Grok to “change her to a dental floss bikini.”