Whose Values Are We Coding? AI Needs a Human—And Sexual—Rights Upgrade
Before we let machines shape our future, we must teach them what truly matters

AI alignment is a term I encounter frequently, usually meaning the alignment of artificial intelligence with human values, though those values are seldom specified. A narrower definition refers to how AI performs and completes specific tasks: operating in alignment with desired outcomes.
But somehow I’ve missed the part where we, globally, put our heads together and decided exactly which values and rights we want AI to hold dear. That matters, because there’s a good chance the values and rights embedded in and enacted through AI technologies will eventually permeate every aspect of our lives.
Given how ubiquitous and rapidly deployed AI technologies already are, this is the perfect time to give the tech a fundamental upgrade: insist on incorporating human rights into each and every discussion of AI alignment and governance, and install them in all those perky little bots and LLMs too.
The good news is, we don’t have to reinvent such discussions! The United Nations created a perfectly good Universal Declaration of Human Rights in 1948, and the World Association for Sexual Health (WAS) created its Declaration of Sexual Rights in 1997, with revisions in 1999 and 2014.
These international documents, and others like them, express a unified commitment as to how human beings ought to live.
But these human rights declarations were created before AI came into all of our lives, back when the only extinction-level event we had to worry about was atomic.
Artificial intelligence impacts human sexuality
At the end of 2024, Current Sexual Health Reports published “The Impact of Artificial Intelligence on Human Sexuality: A Five‑Year Literature Review 2020–2024” by Nicola Döring and colleagues. The researchers examined 106 studies published in 88 English-language journals. The abstract summarizes:
Generative AI tools present new risks and opportunities for human sexuality and sexual health. More research is needed to better understand the intersection of GenAI and sexuality in order to a) help people navigate their sexual GenAI experiences, b) guide sex educators, counselors, and therapists on how to address and incorporate AI tools into their professional work, c) advise AI developers on how to design tools that avoid harm, d) enlighten policymakers on how to regulate AI for the sake of sexual health, and e) inform journalists and knowledge workers on how to report about AI and sexuality in an evidence-based manner.
Such research, and its conclusions, demonstrate the AI industry’s lack of foresight and consideration for consumers: companies have launched products that affect one of the most profound aspects of human life in general, not to mention individual human lives. If that’s not a case for human rights inclusion, I don’t know what is.
What we care for and believe in
Getting back to alignment. According to an initiative called The Compendium:
Alignment is the harder version of the kind of problems with which humanity already struggles: for example making companies and governments beneficial for what the actual citizens care for and believe in. Solving alignment would require massive progress on questions like finding what we value and reconciling contradictions between values, predicting the consequences of our actions to avoid unintended side-effects, and design processes from the people’s will to AIs doing the correct things.
It is hard to escape the notion that the values embedded in most current AI reflect the worldviews of a small number of entitled, wealthy, powerful individuals, particularly in the US, who have no idea how the rest of us live or what it is we want.
Given the dearth of human rights consideration in the AI industry, it is up to the rest of us to broaden the conversation and insist on the inclusion of human rights, particularly sexual rights, in every arena where AI is discussed or developed.
Working both ends from the middle
It’s not just the tech sector that needs an upgrade. We can urge organizations such as WAS to address technological impacts on human sexual behavior and identities, as a rights issue, in their declarations.
For example, one of the current sections of the Declaration of Sexual Rights reads:
STATES that sexual rights are grounded in universal human rights that are already recognized in international and regional human rights documents, in national constitutions and laws, human rights standards and principles, and in scientific knowledge related to human sexuality and sexual health.
The WAS declaration as a whole follows a logical flow from human rights to the addition of sexual rights. It’s not a stretch, then, to consider adding a tech-specific section to the existing roster of sexual rights. I wrote the following draft, just to get us started:
Companies using advanced technologies to develop products, such as artificial or digital personas and partners, which result in new forms of human sexual behavior and identities—such as digisexuality and technosexuality—are ethically and morally responsible for supporting the sexual rights of their consumers by providing and maintaining access and technical support to all those who depend on those products for their intimate relationships and sexual health.
I would also include language making clear that such products cannot be discontinued or substantially changed without ample notice and offers of emotional and new-product support to these consumers. Companies that profit from the creation of new sexualities are ethically and morally obligated to actively advocate for the rights, protection, and privacy of their consumers, including by intervening in policy decisions that may threaten the sexual health and mental and emotional well-being of such consumers, or that may threaten their social status, human rights, and freedoms.
We can also encourage the many organizations devoted to AI safety, ethics, justice, sustainability, and governance—such as BlueDot, European Network for AI Safety (ENAIS), and Center for Future Generations (CFG)—to incorporate these rights into their educational and advocacy activities.
No reason to prevaricate
Longstanding international human rights documents, such as the UN and the WAS declarations, are accepted by many, if not most, nations. They represent fundamental values.
As such, they bypass any need to debate which rights should be included when training AI to align with human values. CEOs of AI corporations will have no grounds to drag their feet on this. Other documents, such as the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP), should also be included.
It might take a little extra work to make the case for including sexual rights too, but research such as that cited above, together with the WAS declaration, should prove convincing.
Go for it. You know you want to!
Image source: A.R. Marsh using Ideogram.ai