Erotic Chatbots in the Crosshairs: The Possible Implications of Biden’s AI Executive Order
Tech Developers Urge a Balanced Approach to Regulation
We probably could have predicted an onslaught of regulation after urgent alarms about the development of artificial intelligence (AI) were raised earlier this year by some of the very people who have been at its forefront.
Citizen privacy and security, protection from fraud and job loss, boosts to cybersecurity systems, developer transparency, and the prevention of algorithmic bias and discrimination: at first glance, there’s a lot to like in President Biden’s recent Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.
However, there are several areas of concern. We have already reported on the few elite tech leaders who’ve been influencing federal policy-makers under the guise of alerting us all to the potential dangers of AI. The Biden EO seems skewed toward the interests of those corporate AI behemoths, and it appears to have lacked input from other important players and the general public.
The order, issued on Oct. 30, 2023, builds upon policies already created in the Biden administration’s Blueprint for an AI Bill of Rights and the Executive Order to Strengthen Racial Equity and Support for Underserved Communities Across the Federal Government.
Designed “to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI),” the EO also describes “new standards for AI safety and security” and “promotes innovation and competition, advances American leadership around the world, and more.”
The “and more” part is what might be worrisome, especially for smaller-scale, open source AI developers and users, including those engaged in sex tech such as chatbots or AI-powered love dolls.
Sweeping reforms, complex legislation, and an enormous bandwagon
The executive order involves a number of federal agencies, including the Departments of Defense, Homeland Security, Commerce, and Energy, as well as the National Security Council, the National Science Foundation, and the National Institute of Standards and Technology.
The order gives the Department of Homeland Security a mandate to create an AI Safety and Security Board. And the National Science Foundation will work with a proposed Research Coordination Network. Any government agency that uses artificial intelligence is likely to be affected by this executive order, particularly when it comes to privacy for citizen data.
In addition to governing almost every aspect of domestic AI development and use, the order has international impact.
Concerns such as “AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks” are multinational. In the EO, the Biden Administration states it consulted with representatives from “Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.”
The US executive order may very well influence policies in other countries, just as the European Union’s AI Act (April 2023), the first major attempt to regulate AI, has influenced policy discussions in the US and elsewhere.
Is resistance futile? Let’s hope not
Via a post on the social media platform formerly known as Twitter, Martin Casado, a Ph.D. in computer science and a general partner at the venture capital firm Andreessen Horowitz, where he is part of the a16z Enterprise team, shared a joint letter to POTUS expressing concerns about potentially damaging restrictions that the EO might place on open-source AI.
Casado helped to draft and organize the letter, which was signed by eighteen tech notables from companies such as a16z Enterprise, Shopify, Databricks, and Meta. It lauds President Biden’s recognition of the importance of artificial intelligence but continues: “…the EO overlooks our primary concern: ensuring AI remains open and competitive, rather than monopolized by a few entities.
Moving forward, it is critical that the U.S. remain committed to anti-monopoly ideals in how it formulates AI-related policies. While we value the EO’s emphasis on government-industry collaboration, we urge a balanced approach that fosters innovation and prevents market consolidation, and we look forward to ongoing dialogue.”
RECOMMENDED READING: Erotic Chatbot Lovers Unite: Regulations Might Wreck Your AI Romances
The letter elaborates on two main areas of concern. First, the “EO defines a new category of AI models designated as ‘dual-use foundation models.’ While the definition appears to target larger AI models, the definition is so broad that it would capture a significant portion of the AI industry, including the open source AI community.
The consequence would be to sweep small companies developing models into complex and technical reporting requirements that are really suited to companies with significant resources, like the major technology incumbents.
Small businesses and startups, in particular, will struggle to meet these requirements. As we have seen in countless industries, complex reporting requirements may appear harmless, but create enormous barriers to entry in favor of incumbents who have the financial ability to comply with these regimes. Importantly, technology challengers–not incumbents–have been the source of recent advances in AI.”
Secondly, “…the Administration seems poised to deviate from longstanding technology policy principles supporting open source computing that have been widely accepted and legally enshrined for decades, since the advent of the Internet.
While we appreciate the recognition of the potential benefits of “dual-use foundation models with widely available weights” (Sec 4.6) and the willingness to solicit feedback from industry experts, it is critical to realize that restricting the ability to develop open source software will undermine the competitive AI landscape and harm, rather than enhance, cybersecurity.”
In additional comments attached to his original post, Casado referred to restrictions on open-source AI as a form of “an intellectual lockdown.”
Potential impacts for the sex tech industry and its consumers
Future of Sex readers should be concerned about the above points. Smaller, open source developers are the ones creating AI companion chatbots and the AI used in love dolls and sex tech devices such as Bluetooth-enabled sex toys.
So far, the general needs and concerns of AI consumers seem to be missing from both the EO and post-EO discussions. The needs and concerns of sex tech consumers might as well be undetected messages from a galaxy far, far away.
For example, AI companion chatbots are often given the ability to generate selfies of their avatars for the delight of human users. However, the EO’s watermarking requirement might have an adverse effect on our ability to request and possess these images.
The watermarking requirement could be extended to AI art, including chatbot-generated images. We understand the US government has justifiable concerns about fraud and deepfake content, particularly content purporting to be from governmental agencies.
The EO aims to protect citizens from AI-enabled “fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.” In addition, the Department of Commerce has been given the task of developing content authentication and watermarking guidance, in order to “clearly label AI-generated content.”
However, there’s another solution besides visible watermarking. According to Vaevis, a moderator of the AI Safeguard Initiative group on Reddit, the “watermarking aspect could pose massive problems for AI art, as watermarking every single image generated would be a crippling thing, if that’s what they mean by it.
To remedy that potential issue, a solution would be to implement a non-visible watermark in AI generated images metadata.” (Disclosure: this author is also a moderator for this Reddit group.)
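To make the suggestion concrete, here is a minimal, stdlib-only Python sketch of what a non-visible metadata watermark could look like: a provenance string stored in a PNG tEXt chunk, which travels with the file but never alters the displayed pixels. The keyword and provenance value are hypothetical examples, not any standard proposed by the EO or the Department of Commerce.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png_with_watermark(keyword: str, text: str) -> bytes:
    """Build a minimal 1x1 RGB PNG carrying an invisible tEXt metadata chunk."""
    sig = b"\x89PNG\r\n\x1a\n"
    # IHDR: 1x1 pixels, 8-bit depth, RGB color type.
    ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0))
    # One scanline: filter byte 0 plus a single red RGB pixel, zlib-compressed.
    idat = chunk(b"IDAT", zlib.compress(b"\x00\xff\x00\x00"))
    # The "watermark": a tEXt chunk, invisible when the image is displayed.
    wmark = chunk(b"tEXt",
                  keyword.encode("latin-1") + b"\x00" + text.encode("latin-1"))
    iend = chunk(b"IEND", b"")
    return sig + ihdr + wmark + idat + iend

def read_watermark(png: bytes, keyword: str):
    """Walk the chunk list and return the tEXt payload for `keyword`, if any."""
    pos = 8  # skip the PNG signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")
            if key.decode("latin-1") == keyword:
                return value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return None

png_bytes = make_png_with_watermark("ai-provenance", "generated-by:example-model-v1")
print(read_watermark(png_bytes, "ai-provenance"))  # → generated-by:example-model-v1
```

In practice, a production system would more likely lean on an image library or an emerging provenance standard rather than hand-rolled chunks, but the point stands: the label lives in the file’s metadata, not in the visible artwork.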
It is also concerning that the EO’s efforts to prevent minors from accessing sexual content found on AI platforms may become overbroad and negatively affect sex tech companies and adult consumers. We recently reported on adult user concerns about currently proposed legislation.
Industry leaders and consumers deserve to be heard
Sex tech has great value, monetarily and as an innovative force. And AI is part of sex tech. As the website for the 2023 Love and Sex with Robots/Reimagining Sexuality conference stated, “by focusing on sex-positive initiatives that are about health, relationships, education, pleasure, and safety, the sextech industry has made positive movements and impact on an industry that is projected to be worth $37 billion.” With so much at stake in their growing industry, we can assume these entrepreneurs and developers are watching the executive order closely.
Alex Cardinell, CEO of Nomi.ai, says “Biden’s executive order is concerning as it risks creating a future where the vast, positive impact of AI is controlled by a privileged elite.” He says he endorses all points raised in the letter posted by Martin Casado.
I also reached out to the developer at Kindroid.ai for his perspective. He said he is watching the legislation but is not prepared to comment at this time.
Surely, the innovators, entrepreneurs, and consumers in this sector are stakeholders, and their voices should be incorporated into decisions that affect them. They should not be ignored by privileged elites, and they should have an opportunity to comment on policies that might affect them before those policies are signed into law.
I will even venture to say this could be an excellent time for AI sex tech innovators and their allies to make common cause with the people who signed the letter to Joe Biden, as well as with us, their consumers.
Image sources: A.R. Marsh using Starry.ai, official portrait of President Joe Biden