The AI model Grok, owned by Elon Musk's xAI, will no longer be able to edit photos of real people to depict them in revealing clothing in jurisdictions where that is illegal. The decision follows a wave of outrage over sexualized deepfakes created with AI.
"We have taken steps to ensure that Grok does not allow the editing of images of real people and depicting them in revealing clothing, such as bikinis. This restriction applies to all users, including paid subscribers," the announcement on the X platform, which manages the Grok AI tool, stated.
The changes were announced just hours after the California Attorney General said state authorities are investigating the spread of sexualized AI deepfakes created with the model, including images of children.
"We are now geoblocking the ability for all users to create images of real people in bikinis, underwear, and similar clothing through the Grok account and Grok on X in jurisdictions where it is illegal," the company X said in a statement on Wednesday.
The company also emphasized that only users with a paid subscription will be able to edit images using Grok on the platform.
According to the company, this adds an extra layer of protection and helps ensure accountability for anyone who attempts to use Grok to violate the law or X's rules.
With NSFW (not safe for work) settings enabled, Grok is, according to Musk, supposed to allow "topless nudity of imaginary adult people (but not real ones)," within the bounds of what can be shown in R-rated movies.
"This is the actual standard in the U.S. In other regions, it will vary depending on the legislation of the specific country," the billionaire stated on Wednesday.