Pixels and Parameters: Why AI Art Needs Moderation
While AI art is arguably the next creative frontier, what role do the human conventions of consent play?
Keeping calm in the digital age
Wherever there are tools, people will make pornography. And whenever people make pornography, some take it too far. Mainstream access to AI art generation tools such as DALL-E, Midjourney, and Stable Diffusion has opened up opportunities and fears alike.
The potential benefits are real: lighter workloads and better access to the arts for anyone with an internet connection. However, there are concerns from all corners.
Workers fear being pushed into unforgiving unemployment queues. Consumers worry about the volume of data fed to opaque algorithms, and about the potential for new forms of sexual misconduct. And legislators? They’re still struggling with video game loot boxes. Legislation is not ready for AI.
Then again, are any of us truly ready for AI?
AI is different
AI art generation is different from any other artistic medium. Machine learning applies finely tuned mathematical algorithms to develop its procedures for generating art. The AI ‘learns’ by sampling libraries of existing art, receiving human feedback, and generating images.
The process is similar to how we learn art. We are given tools and knowledge. We take inspiration from existing work and listen to feedback. Then, we practice our techniques to refine them.
The difference lies in the machine’s ability to process an incredible volume of data, with less of the intuition a human brings. Last year, I tried to coax Midjourney into giving me a dog portrait. It produced countless images more detailed than anything I could draw. Except all of them had too many eyes, or eyes in frankly disturbing places.
AI is magnificent in its potential, but its lack of moral reasoning makes it prone to abuse. It needs stringent oversight from its developers. Otherwise, it can unwittingly produce sexual content depicting children, or unauthorized facsimiles of real people. The latter is worrisome enough as a vector for misinformation, but it can just as easily be used to create hyper-realistic sexual blackmail material.
The risks of harm with AI – both predictable and unknown – are too high to turn machine learning loose without foresight. The need for moderation and restrictions on machine learning is evident.
People worry about ‘deepfake’ impersonation making sexual extortion easier than ever before. One of the greatest fears surrounding AI relates to children. An indifferent machine can merge images of real people with fictional bodies to create new child sexual abuse material.
Elsewhere, artists face the reality that AI is trained on billions of pieces of their work before replacing them outright. The first major class action lawsuit filed on behalf of affected artists is already underway. The reporting centers on the experiences of mainstream visual artists, but sexual content creators doubtless feel the jaws of obsolescence closing on them too.
All it takes is one highly proficient machine learning system with no restrictions on adult content.
These aren’t the fears of a SkyNet-esque doomsday machine. They are the realities of human greed and immorality backed by new tools.
Everything in moderation
No AI company wants to be remembered for instigating a socio-cultural disaster. The teams behind AI art generation are responding to social and legal pressure.
Stability AI has filtered sexual content out of Stable Diffusion’s training library and weakened the model’s ability to replicate individuals, following concerns about deepfaked pornography featuring real people. Midjourney bans users who repeatedly prompt with prohibited terms. This policy has controversially been applied to political content as well.
AI art communities are the second line of moderation, and enforce their own brands of harm reduction. Subreddits like /r/sdnsfw and /r/AdultDiffusion are dedicated to skirting the rules of appropriateness, but still feature two cardinal rules:
- No deepfaked images of real people.
- No content featuring underage persons.
These rules may be self-preservation, a way to keep the fate of the banned, unrestricted Unstable Diffusion project at bay. They may come from a place of genuine integrity and care. The answer probably lies somewhere in between.
Despite these efforts, internal moderation isn’t enough. AI companies are interested in expansion and seldom impose restrictions that hurt their bottom line. Online communities and social media sites can moderate themselves, but can’t prevent AI from generating scantily-clad versions of users without consent.
AI is becoming a reality for everyone with an internet connection. It’s everyone’s opportunity and everyone’s problem. Moderation and regulation are important for supporting consumers, sex workers, artists, and children alike.
Moderation isn’t there to stop a fanciful robot apocalypse. It’s needed to protect people from extensions of the very human qualities of greed, dishonesty, and criminality.
Image Sources: AI Generated Image with Stable Diffusion Web UI, Pexels