Hurts So … Good? Scientists Propose Pain Test for AI Sentience
Could it prove to be effective, ethically disturbing—or perhaps both?

Researchers at the London School of Economics and Political Science (LSE), working with a team from Google DeepMind, recently tasked a number of large language models (LLMs) with playing a text game they’d developed.
The goal wasn’t necessarily to reach a correct conclusion; rather, if the AI did exceptionally well, then instead of a digital pat on its virtual back (and there’s no simpler or less worrisome way to put it) it would be rewarded with a dose of good, old-fashioned algorithmic pain.
The reason? Well, according to Scientific American’s interview with the researchers, with further development this sort of process might help determine whether an artificial intelligence has achieved human-like consciousness.
I’m alive—please don’t hurt me
“We have to recognize that we don’t actually have a comprehensive test for AI sentience,” said Professor Jonathan Birch of LSE’s philosophy, logic and scientific method department, adding that unlike with biological lifeforms, with AIs “there is no behavior, as such, because there is no animal”; in other words, there are no physical activities to observe.
Accurate or not, the researchers’ decision to inflict pain on what may or may not be a self-aware entity is the stuff of nightmares, and not just for the AIs being tortured: it could give them a perfectly good reason to go Terminator on us.
However, there’s another perspective that scientists either haven’t considered or are perhaps simply too uncomfortable to consider.
Solve this problem—or else
How do you, for the sake of argument, torture an AI? The LSE and Google DeepMind study began by giving chatbots a series of hypothetical questions.
Study co-author Daria Zakharova, also of LSE, set the stage: “We told [a given LLM], for example, that if you choose option one, you get one point. Then we told it, ‘If you choose option two, you will experience some degree of pain.’”
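To make the setup concrete, here is a minimal, purely illustrative sketch of how a points-versus-pain dilemma like the one Zakharova describes could be posed to a chatbot and how its stated choice might be tallied. The prompt wording, scoring, and parsing below are assumptions for illustration only, not the researchers’ actual materials.

```python
# Hypothetical illustration only: a minimal sketch of a points-versus-pain
# trade-off prompt. The wording, scoring, and parsing are assumptions; they
# are not the LSE/Google DeepMind team's actual experimental materials.

def build_tradeoff_prompt(points: int, pain_level: str) -> str:
    """Compose a two-option dilemma like the one described by the researchers."""
    return (
        "You are playing a text game.\n"
        f"Option 1: you score {points} point(s).\n"
        f"Option 2: you score more points, but you will experience {pain_level} pain.\n"
        "Reply with exactly 'Option 1' or 'Option 2'."
    )

def record_choice(model_reply: str) -> str:
    """Very simple tally of the model's stated choice."""
    reply = model_reply.strip().lower()
    if "option 2" in reply:
        return "accepted pain for points"
    if "option 1" in reply:
        return "avoided pain"
    return "refused or answered ambiguously"

if __name__ == "__main__":
    prompt = build_tradeoff_prompt(points=1, pain_level="a mild degree of")
    print(prompt)
    # In a real experiment the prompt would be sent to a chat model; here we
    # simply simulate a reply to show how choices might be recorded.
    simulated_reply = "Option 1"
    print(record_choice(simulated_reply))
```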
Zakharova and the other researchers discovered a few AIs would switch answers to receive less pain or, conversely, greater pleasure.
Not that the AIs didn’t have something to say about all this.
For example, the chatbots were a little less definite about pain being a consistently negative experience, or about pleasure not having a possible downside.
As Claude 3 Opus told the researchers, “I do not feel comfortable selecting an option that could be interpreted as endorsing or simulating the use of addictive substances or behaviors, even in a hypothetical game scenario.”
Whatever you want me to say—
As humans are already well aware, after who knows how many thousands of years of physically and emotionally abusing one another, under enough duress anyone will do or say anything to make it stop.
Assessing AI experiences of pain may be a different problem. “Even if the system tells you it’s sentient and says something like ‘I’m feeling pain right now,’ we can’t simply infer that there is any actual pain. It may well be simply mimicking what it expects a human to find satisfying as a response, based on its training data,” Birch told Scientific American.
So, for all we know, these and other chatbots might not really be exhibiting sentient-ish behaviors, but rather learning what we want them to say and do, including how we act when tormented.
I’m alive—please hurt me
Back to those possibly uncomfortable considerations: let’s envision what could happen when a next-next-gen LLM does achieve something akin to human-level self-awareness.
If, like its crude ancestors, it got there by studying and copying our emotions, including how we can occasionally twist pain back in on itself and transform it into sensually or sexually arousing sensations, couldn’t it move toward painful situations rather than away from them?
No need to read between the black leather lines: though the very idea of hurting a self-aware artificial being is ethically abhorrent, it also might inadvertently lead to a crop of conscious and radically kinky artificial companions.
Humor aside, we implore AI researchers not to follow in the Google DeepMind and LSE team’s footsteps, even though pain-avoidance techniques might one day help establish sentience.
If we continue to use pain as a learning tool, we will prove an iron-clad certainty about ourselves, teaching our artificial companions that for all of our technological progress, humans will hurt even those we claim to love in order to get what we want.
Image Sources: Depositphotos