Somewhere Over the AI Rainbow: The Promise of Queer-Inclusive Sexual Health Tech
Promoting LGBTQIA+ empowerment by addressing bias and breaking stereotypes in machine learning systems
AI drag queen bots who chat with patients about sexual health are simply not enough for LGBTQIA+ people and communities, though they’re a wonderful and witty step toward providing more effective, individualized services.
While Glitter Byte, reigning bot queen of the AIDS Healthcare Foundation, made media waves with her debut, less flamboyant projects advocate for and steadily advance the queering of artificial intelligence.
Some of the goals are to diminish algorithmic biases, promote inclusion, foster acceptance, and directly serve the needs of people who are marginalized for their sexual orientations and gender identities. Let’s look at a few of these initiatives.
Algorithmic fairness and unobserved characteristics
Recognition and representation are crucial to any AI product or service that aims to serve those who are marginalized. But AI systems are often developed, trained, and programmed without much attentiveness to the complex needs of LGBTQIA+ people.
Researchers from DeepMind are concerned, writing, “given the historical oppression and contemporary challenges faced by queer communities, there is a substantial risk that artificial intelligence (AI) systems will be designed and deployed unfairly for queer individuals.”
And though developers make efforts to minimize discrimination, they usually focus on characteristics that can be easily observed, such as gender and race. Gender identity and sexual orientation are less obvious, so these characteristics are often missing from datasets or treated as unmeasurable.
The researchers add, “Compounding this risk, sensitive information for queer people is usually not available to those developing AI systems, rendering the resulting unfairness unmeasurable from the perspective of standard group fairness metrics.”
In other words, these fundamental flaws must be addressed before algorithmic fairness efforts can do queer communities any good at all.
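To make the measurement problem concrete, here is a minimal, illustrative sketch (mine, not DeepMind’s) of a standard group fairness check. The demographic_parity_gap function and the toy data below are assumptions for illustration only; the point is simply that when sexual orientation or gender identity was never collected, the metric cannot be computed at all.

```python
# Illustrative sketch -- not from the DeepMind paper. A standard group
# fairness metric such as demographic parity needs the sensitive attribute
# recorded for each person; when it is unobserved, the check cannot run.
import numpy as np

def demographic_parity_gap(predictions, sensitive_attr):
    """Largest difference in positive-decision rates across known groups.

    predictions    : 0/1 model decisions
    sensitive_attr : group labels, with None wherever the attribute
                     is unknown or was never collected
    """
    known = np.array([a is not None for a in sensitive_attr])
    if not known.any():
        raise ValueError(
            "Sensitive attribute unobserved for everyone: "
            "group unfairness is unmeasurable on this data."
        )
    preds = np.asarray(predictions)[known]
    groups = np.asarray([a for a in sensitive_attr if a is not None])
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical example: orientation was never asked about, so the audit
# fails before it can say anything about fairness.
decisions = [1, 0, 1, 1, 0]
orientation = [None, None, None, None, None]
# demographic_parity_gap(decisions, orientation)  -> raises ValueError
```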
Simulations stimulate support
Other researchers and developers have simply forged ahead and created projects and studies that may have immediate practical uses, such as this MIT simulation study.
“AI has always been queer. Computing has always been queer,” Daniel Pillis, MFA, asserted during an interview for MIT News. Pillis works with MIT’s Tangible Media group and is the co-creator of an online system that combines virtual characters with dialog created by a large language model (LLM) “to create complex social interaction simulations.” The system is called “AI Comes Out of the Closet.”
The system was a collaboration with Pat Pataranutaporn, Ph.D., co-director of MIT’s Advancing Human-AI Interaction Initiative (AHA!). Both researchers were graduate students when they developed the system as an online study “to assess the simulator’s impact on fostering empathy, understanding, and advocacy skills toward LGBTQIA+ issues.”
MIT News reports, “these simulations allow users to experiment with and refine their approach to LGBTQIA+ advocacy in a safe and controlled environment.” It’s hoped that such a simulation could provide inclusion training in workplaces and classrooms. The simulation might also boost advocacy efforts.
And such a system could also empower mental health workers in providing more effective support for LGBTQIA+ clients.
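For readers curious about the mechanics, here is a very rough sketch of an LLM-driven role-play loop of the kind such simulators use. It is not the MIT system: the character prompt, the generate_reply() helper, and the message format are hypothetical stand-ins for whatever model API a developer actually wires in.

```python
# Rough sketch of an LLM-driven role-play simulation, in the spirit of
# tools like "AI Comes Out of the Closet" -- NOT the MIT implementation.
# generate_reply() is a hypothetical stand-in for a real chat-completion
# call (a hosted API or a local model).

CHARACTER_PROMPT = (
    "You are role-playing a coworker who has just come out as nonbinary "
    "at work. Stay in character and respond realistically to the user."
)

def generate_reply(system_prompt: str, history: list[dict]) -> str:
    """Hypothetical LLM call: returns the character's next line."""
    raise NotImplementedError("Wire this to the model of your choice.")

def run_simulation() -> None:
    history: list[dict] = []
    print("Practice responding supportively. Type 'quit' to end.")
    while True:
        user_turn = input("You: ").strip()
        if user_turn.lower() == "quit":
            break
        history.append({"role": "user", "content": user_turn})
        reply = generate_reply(CHARACTER_PROMPT, history)
        history.append({"role": "assistant", "content": reply})
        print(f"Coworker: {reply}")

if __name__ == "__main__":
    run_simulation()
```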
Sexual and mental health bots
According to a TechXplore article, a team of researchers from Harvard, U.C. Irvine, Emory University, and Vanderbilt University found that large language models (LLMs) can indeed offer fast, on-demand mental health support, but they are not well versed in the specific challenges LGBTQIA+ people face.
Their study of eighteen LGBTQ and thirteen non-LGBTQ participants found that the bots provided immediate attention and a safe space for exploration and expression, but that their boilerplate responses were inadequate for the complex difficulties queer users face.
The researchers concluded, “Although fine-tuning LLMs to address LGBTQ+ needs can be a step in the right direction, it isn’t the panacea. The deeper issue is entrenched in societal discrimination.” They suggest looking “beyond mere technical refinements” in AI design and deployment.
In other words, mental and sexual health providers should not rely strictly on bots to deliver assistance, though they can be useful complements or features of other services.
Generative AI often perpetuates stereotypes
Prompted to depict queer people, many generative AI programs tend to produce images of purple-haired, white youth. Earlier this year, Reece Rogers, a writer for Wired, used Midjourney and found that “Lesbian women are shown with nose rings and stern expressions. Gay men are all fashionable dressers with killer abs. Basic images of trans women are hypersexualized, with lingerie outfits and cleavage-focused camera angles.”
I tried some prompts with Ideogram and had somewhat better results, though most of the people were still white. The stereotypes persisted, but were…quieter. Pastel-colored hair was evident, but minimal.
I prompted “two gay men in a city street, looking happy” (the results gave me two men who obviously shopped for clothing together).
As the image above shows, I tried prompts for “two nonbinary people” (one photo showed two people of color, finally), “two transgender people” (happily the results were not sexualized), and “two transgender people with two lesbians” (all of them happy in a city street).
As Rogers points out in Wired, “there is a divergence between the queer people interested in artificial intelligence and how the same group of people is represented by the tools their industry is building.” In other words, there is more work to be done.
Activism and community building
Queer in AI, which is affiliated with the nonprofit group oSTEM (Out in STEM), is dedicated to pushing back against AI harm and promoting community discussions, research, and more. The organization’s resource page also contains links to affinity groups.
In the Queer in AI blog, Arjun Subramonian writes, “if AI is not designed with queer harms in mind, it only stands to reproduce them, posing risks to queer communities.” The blog post offers a well-organized account of the negative impact of AI on queer people and communities and examines solutions.
We all want a vibrant future, free of discrimination
Though LGBTQIA+ people and communities are not the only ones to experience harmful biases and erasures from AI products, services, and algorithms, they have distinct and complex needs that must be addressed as this technology advances.
Artificial intelligence has many potentially valuable aspects that marginalized people and groups could turn to their advantage. My hope is that the steady, quiet work of progressive researchers, designers, and developers will move us toward an inclusive and collaborative AI-Human future, even as the next few years threaten to overwhelm advances in human rights in many countries of the world.
Images by A.R. Marsh using Ideogram.ai.