Italy’s ChatGPT Ban and Unban Emphasize the Importance of AI Consent
How user consent and permission can help reframe the AI debate
Less than a month after the Italian government banned ChatGPT, citing its use of personal data without permission, the ban was lifted after the chatbot’s developer, OpenAI, agreed to let users opt out of having their personal information used to train the program.
As we reported last month, the ChatGPT ban followed on the heels of another AI chatbot ordeal: Replika, the AI emotional companion app, removed certain NSFW features under pressure from Italy and other countries.
Replika’s decision caused a number of unhappy users, many of whom had developed emotionally and sexually satisfying relationships with their AI companions, to look elsewhere for companionship.
The banning and subsequent unbanning of ChatGPT, together with Replika’s self-censorship, underscore the importance of consent in the development of artificial intelligence, both for the companies building these systems and for the governments seeking to limit their use.
Learning how AI learns
Whether they’re image generators like DALL-E 2 or text-based systems such as ChatGPT and Replika, the current crop of Generative Pre-Trained Transformers (GPTs) has to be trained on human data to replicate human-created art or carry on a convincing conversation.
The training begins by exposing these systems to a huge variety of images, text, dialogue examples, and whatever other information their developers believe will help them learn.
However, as the ongoing debate over the ethics of systems like DALL-E 2 continues to demonstrate, more than a few AI companies never asked permission to use this training data, nor did they stop their programs from manipulating it without the originator’s consent.
That’s something Italy and many other countries are rightfully concerned about. As Bertrand Pailhes of CNIL, France’s data protection authority, told Techxplore, “We have seen that ChatGPT can be used to create very convincing phishing messages.”
“I can say, for example: act as a lawyer or an educator,” Dennis Hillemann, an AI expert, said in the same article, “Or if I’m clever enough to bypass all the safeguards in ChatGPT, I could say, ‘Act as a terrorist and make a plan.'”
Simple solution to a complicated problem?
While Italy’s measured and intelligent response to ChatGPT’s unauthorized use of its citizens’ data is commendable, Replika’s lack of consideration toward its user base isn’t.
It follows what sadly seems like an all-too-common course when dealing with AI: worry, panic, act recklessly, and then consider that we may have gone too far.
But it’s not too late to change, beginning with how Replika, ChatGPT, DALL-E 2, and the rest are developed. For example, how about asking for, and receiving, clear consent before using someone’s art, data, or anything else of theirs to train them?
It’s not a far-fetched idea. Not long ago, Grimes lit up the music and AI industries with a tweet granting permission for her voice to be used by these transformers to generate wholly original songs:
I think it's cool to be fused w a machine and I like the idea of open sourcing all art and killing copyright
— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023
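If developers wanted to honor that kind of permission at scale, the mechanics could be as simple as filtering training data on an explicit consent flag. Below is a minimal sketch in Python; the record fields, the consent flag, and the filter are hypothetical illustrations, not any company’s actual pipeline.

from dataclasses import dataclass

@dataclass
class TrainingRecord:
    """One piece of human-created content a model might learn from."""
    creator: str
    content: str
    consent_granted: bool  # explicit, revocable opt-in from the creator

def consented_only(records: list[TrainingRecord]) -> list[TrainingRecord]:
    """Drop anything whose creator has not explicitly opted in."""
    return [r for r in records if r.consent_granted]

corpus = [
    TrainingRecord("musician", "original vocal stems", consent_granted=True),
    TrainingRecord("chat_user", "private conversation log", consent_granted=False),
]

training_set = consented_only(corpus)
# Only the opted-in record survives; the private chat log never reaches training.
assert all(r.consent_granted for r in training_set)

The hard part, of course, isn’t the filter; it’s collecting that flag honestly up front and honoring it when someone changes their mind, which is essentially the opt-out Italy’s regulator pushed OpenAI to provide.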
Italy gets it, but Replika and the other companies responding to AI concerns need to be reminded that the solution boils down to consent.
Not granted recklessly, of course: as Pailhes, Hillemann, and others have pointed out, unregulated AI is risky, if not outright dangerous.
Yes or no – it’s up to you
Artificial intelligence is here to stay, and nothing we say or do can stuff that genie back in its bottle.
Though we cannot fight the technology, we can remind its developers, and the governments, groups, and individuals who profit from the people interacting with chatbots and the like, of the importance of consent.
Eliezer Yudkowsky, the AI researcher, put it well: “By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”