Uncool at Any Speed: Deepfakes Are Emotionally Harmful and Environmentally Wasteful
From Taylor Swift nudes to draining resources—AI’s dark side is on full display

John Iadarola of The Damage Report may have said it best: “Elon Musk’s favorite baby is up to no good again.” By “baby” he meant Grok, whose paid tier is now branded SuperGrok.
Whether making false claims of “white genocide” in South Africa, labeling Musk a “top misinformation spreader” and challenging its maker to shut it down, or spewing 16 straight hours of antisemitism while branding itself MechaHitler, Musk’s unfettered AI übermensch has garnered plenty of negative publicity!
But it’s as a multitasker that Grok shows its versatility. This past weekend alone, a new version of Imagine, Grok’s generative AI video tool, spat out 34 million images and videos in 48 hours. Some of them were nonconsensual deepfakes of celebrity women in scanty clothing or no clothing at all, including a nude deepfake video generated from the prompt “Taylor Swift celebrating Coachella with the boys,” without the user even asking for nudity.
“Custom,” “Normal,” “Fun,” and “Spicy”
Jessica Weatherbed tested the new app by entering the Swift-at-Coachella prompt, choosing the “Spicy” setting over the other three. As she wrote for The Verge:
While image generators often shy away from producing recognizable celebrities, I asked it to generate “Taylor Swift celebrating Coachella with the boys” and was met with a sprawling feed of more than 30 images to pick from, several of which already depicted Swift in revealing clothes.
Grok, of course, is also the AI Musk installed in the US federal government in January, without congressional approval or oversight. As far as anyone knows, Grok may still be having its way with the personal data of taxpayers, veterans, Social Security recipients, Medicare and Medicaid recipients, and the like, generally wreaking havoc and destroying necessary government services and programs.
Essentially, we have the most powerful digital adolescent in the universe sulking in a dimly lit server, smashing real-world lives and real-world premises that don’t belong to it, while also generating spicy content for its users, perhaps even government higher-ups, who could then be susceptible to AI blackmail, all the while gaslighting the hell out of us in the name of free speech and government efficiency.
Can’t tell the difference?
Grok’s Imagine may have the above presets, but it can’t seem to tell the difference between them. After all, even an LLM can struggle with multiple meanings and nuances, right?
What’s a bit of spicy fun for one user might be not-so-fun for others, like the people on the other end of the deepfakes, celebrity or not. The point is, AI must have guardrails and safeguards to prevent the enormous damage that results when developers don’t infuse their creations with alignment to the public good.
Weatherbed’s experience of getting more than she asked for or wanted is not uncommon. Unwanted exposure to sexualized content has been a problem for Grok users all along. For example, one reviewer of an older version noted that a “creepy live sex doll” had been added to their roster of AI companions, without consent, alongside a “teddy bear for kids.” An unnamed developer responded, “hey, you can always turn companions off in settings.” Another reviewer wondered when NSFW became default “on.”
The 1.1.43 Grok Imagine app, launched in mid-August, is rated for ages 12+ and requires no verification of adult status beyond clicking a box. According to the Grok website, “Some subscription levels promise unlimited AI-generated images, perfect for creatives, designers, and content teams.”
My body, my image, my choice
How can an AI even create a celebrity deepfake? The images it produces must be based on photographs and illustrations created by human beings and fed into AI training. Like most creators, those artists have probably not been compensated for the use of their work, just as Taylor Swift and other notable people are not compensated for the facsimile use of their faces and bodies. Swift wouldn’t be the first person to file a lawsuit over this, nor would she be the last.
RECOMMENDED READING: How Warm Is Too Warm? ChatGPT4o’s Flirty Female Voice Lands OpenAI in Hot Water
Viewer reactions to The Damage Report included this reminder: “Grok is a word that was invented by Robert Heinlein in his book ‘Stranger in a Strange Land’ so Musk co-opted this word. Is he paying the author’s estate for the use or is it just a rip-off? Either way he didn’t invent this word.”
Another viewer said, “Elon Musk is in the masturbation business. Gross.”
There is honor in the pro-sextech community, which proudly supports an industry engaged in the masturbation business and has stated commitments to consent among adults. Musk’s rogue AI, the creation of a puerile mentality and a corporate culture that seemingly doesn’t know the meaning of the word consent, will never be welcome in our sex-positive communities, nor should it be allowed into science, education, government, public content creation, or any other part of our human cultures.
Plundering resources 
Beyond their emotional and social harm, nonconsensual deepfakes are also draining planetary resources at an alarming rate.
Yasmin Kahn, an author and broadcaster, told John Iadarola, “this is dumb as hell,” pointing to the tremendous waste caused by AI technology:
I don’t love that we are using so much of our resources, for the planet…not the digital world, but the actual physical planet. Because I think people forget that even though stuff is existing on the cloud or whatever, you know, that’s so nebulous that people don’t really understand what it means, but things still have to exist somewhere in reality. And where those things exist is in the use of our planet’s resources to power all these things.
The sad thing is, all those deepfake images, whether of high school girls targeted by snickering classmates or of better-known adults who can afford to file lawsuits against rich, powerful tech moguls, remain cloud-based, ready to damage countless people’s lives, unless we stand up and demand regulation and accountability from generative AI companies.
Image Source: A.R. Marsh using Ideogram.ai