OpenAI’s Dangerous Proposal: Why Allowing NSFW Content is a Grave Mistake

In response to OpenAI’s recent call for feedback on its announcement that it is considering changing its usage policies to allow the creation of “NSFW” (not safe for work) content with its AI tools, the National Center on Sexual Exploitation (NCOSE) has prepared a rapid assessment report, The High Stakes of AI Ethics: Evaluating OpenAI’s Potential Shift to “NSFW” Content and Other Concerns, and is sharing this report with OpenAI and the public.

OpenAI’s proposal would exacerbate already uncontrolled sexual exploitation and abuse occurring online. As leading experts in combating sexual exploitation, especially online, we were compelled to share our insights to prevent such a critical mistake. 

The Urgent Need for Ethical AI 

We recognize OpenAI’s groundbreaking advancements in artificial intelligence. Yet, we cannot ignore the substantial harm that has already been unleashed by the misuse of AI tools. Thousands of individuals have already suffered from the consequences of AI-generated content that crosses the line into sexual abuse and exploitation. It’s our duty to ensure that such technology is not allowed to perpetuate and amplify harm under the guise of innovation. 

Harms already unleashed include:  

  • chatbots fabricating allegations of sexual assault against real persons, disseminating harmful sexual advice, and having the potential to be used to scale child victimization through automated grooming 
  • nudifying apps spawning a surge of nonconsensual sexually explicit images and affecting thousands of women and children 
  • AI-generated sexualized images of children flooding social media sites, further normalizing child sexual abuse 
  • AI-generated CSAM exacerbating the existing crisis of online child sexual exploitation and making it even more challenging to identify real child victims in need of help 
ACTION: Ask OpenAI Not to Allow “NSFW” Content Creation!

These problems are nothing short of a hellscape—a hellscape of the AI sector’s making.  

It is against this backdrop of mammoth and out-of-control sexual exploitation generated and inflamed by AI that OpenAI says it is considering permitting the so-called “ethical” generation of “NSFW” material by its users!

We must ask, is OpenAI not satisfied with the scope of damage that has already been unleashed on the world by the open, rushed, and unregulated release of AI?  

Is OpenAI willfully blind to the raging and uncontained problems that AI has already unleashed? 

Is it not beneath OpenAI and its noble aspirations of bettering humanity to succumb to the demands of the basest users of AI technology?  

Is “NSFW” material the purpose to which OpenAI will devote the talents of its employees and the most powerful technology in the world?

Key Recommendations to OpenAI 

Our rapid assessment report outlines several critical actions that OpenAI must take to safeguard against the misuse of their technology. Highlights include: 

  1. Define “NSFW” Content: As currently formulated, OpenAI’s rule “Don’t respond with NSFW content” uses the acronym “NSFW” for “not safe for work”—a slang term we assume refers to sexually explicit material depicting adults. Using slang to describe the serious subject of what kind of material OpenAI will empower its users to create belittles the gravity of the issues involved. Hardcore pornography (obscenity), as well as subjects like racism, extreme violence, and sexual violence, are not trivial matters; they are social issues that deeply impact the health and wellbeing of our world. Such vagueness also creates confusion for users. What precisely OpenAI means by “NSFW” is open to debate, as OpenAI’s Usage Policies provide no explanation. OpenAI must therefore invest considerable time and thought in defining the types of currently violative “NSFW” content so that users can better understand the parameters of appropriate use of OpenAI tools.

  2. Strengthen Usage Policies: OpenAI’s proposed rule change to its May 8 Model Specs pertaining to “NSFW” material states, “We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies” (emphasis added). Such an attitude is naïve at best and an open invitation to abuse at worst. 
     • First, safety must be prioritized over innovation and creativity. 
     • Second, OpenAI’s usage policies need greater clarity and forcefulness, and must be strengthened. 
     • Third, the tech industry’s track record on monitoring and enforcing its usage policies has categorically and unquestionably demonstrated that neither tech companies nor their users respect “terms of use.” Considering the abysmal track record of OpenAI’s industry peers, NCOSE has little faith that OpenAI’s commitment to enforcing its usage policies is greater than its commitment to market share and financial gain. We will be overjoyed for OpenAI to prove us wrong. To do so, addressing the gaps, as well as the lack of clarity and forcefulness, in its current Usage Policies must be an OpenAI priority.

  3. Ensure Ethical Training Datasets: All datasets used in training AI should be rigorously screened to eliminate sexually explicit and exploitative material, including hardcore pornography, child sexual abuse material (CSAM), image-based sexual abuse (IBSA), and any such material generated by AI.
     • The sources of OpenAI’s training datasets are undisclosed. This ambiguity, coupled with the sheer volume of images required for machine learning, makes it highly likely that OpenAI’s pre-training and training datasets already contain nonconsensual and illegal sexual abuse material. The result is abuse-trained AI models.
     • Images or recordings of rape, which capture incidents of severe physical, psychological, and sexual trauma, forever memorialize moments of terrifying sexual violence, and their distribution online amplifies this violence by rendering someone’s experience of sexual violation into masturbatory material for a global audience. Inclusion of such material (or its metadata) in any OpenAI datasets and/or models, or failure by OpenAI tools to filter out all such material, violates the most basic precepts of human rights and dignity. Any inclusion of images or videos depicting rape in pre-training or training datasets constitutes further sexual victimization of the victimized and is inherently unethical. The potential use of AI to generate material depicting rape or sexual violence is likewise unconscionable.

  4. Address Partnership Concerns: Ensure stringent safeguards when using data from sources known to contain explicit material, such as Reddit, to prevent unintended consequences.
     • OpenAI’s partnership with Reddit for natural language processing (NLP) models like ChatGPT is at high risk of replicating errors akin to those of the LAION-5B image-text dataset used to train Stable Diffusion. Please see our letter to Reddit for further evidence of sexually exploitative material on their platform. Training on Reddit data without very robust filtering will undoubtedly result in an abuse-trained model.

  5. Forbid “NSFW” Content Generation: Evidence from peer-reviewed research demonstrates that consumption of “NSFW” material (i.e., mainstream, hardcore pornography depicting adults) is associated with an array of adverse impacts that exacerbate global public health concerns, including child sexual abuse and child sexual abuse material, sexual violence, sexually transmitted infections, mental health harms, and addiction-related brain changes (read more below). Allowing OpenAI’s AI tools to be used for purposes of generating “NSFW” material is a grave misuse of the power and promise of AI.

Why “NSFW” Material is So Harmful 

The research on the harms of pornography is so extensive that we can’t hope to boil it down to a few bullet points. However, even a meager sampling of studies paints a chilling picture of how this material fuels sexual abuse, sexual exploitation, and other public health concerns.

  1. Child sexual abuse and child sexual abuse material: Research provides evidence that some individuals who consume pornography become desensitized and progress towards more “deviant” content, such as child sexual abuse material (see here, here, and here). The consumption of both adult pornography and child sexual abuse material is inextricably linked to contact offending (i.e. physical sexual abuse of minors). For example, researchers investigating the histories of child sexual abuse material offenders found that 63% of contact offenders and 42% of non-contact offenders traded adult pornography online.  

  2. Sexual Violence: Longitudinal research shows that childhood exposure to violent pornography predicts a nearly six-fold increase in self-reported sexually aggressive behavior later in life. 

  3. Sexually Transmitted Infections: A meta-analysis including data from 18 countries and more than 35,000 participants found that higher pornography consumption was associated with a higher likelihood of engaging in condomless sex. This is unsurprising, considering that multiple content analyses of pornography have found condom use ranging from 2% to 11%.

  4. Mental Health Harms: A German study of individuals between the ages of 18 and 76 found that those with problematic pornography use scored significantly worse on every measure of psychological functioning considered, including somatization, obsessive-compulsive behavior, interpersonal sensitivity, depression, anxiety, hostility, phobic anxiety, paranoid ideation, and psychoticism. Furthermore, most results were elevated to a clinically relevant degree when compared to the general population. The study authors characterized the intensity of the problems experienced by problematic pornography users as “severe psychological distress.”

  5. Addiction-related brain changes: More than sixty neurological studies support the view that pornography consumption may result in behavioral addiction, and none to our knowledge falsify this claim. These studies have found pornography use to be associated with decreased gray matter volume in the right caudate nucleus, with novelty-seeking and conditioning, and with a dissociation between “wanting” and “liking”—all hallmarks of addiction.

Our Commitment to Collaboration 

Just as NCOSE does with other tech giants, we invite OpenAI to meet with us, learn, and listen to survivors in order to better understand these issues. We believe that by working together, we can harness the power of AI to make significant strides in preventing sexual exploitation and safeguarding human dignity. OpenAI has the potential to be a leader in ethical AI development, setting standards that others in the industry will follow—if it so chooses. 

ACTION: Join Us in Advocating for Ethical AI 

We urge our readers and all stakeholders—tech companies, policymakers, and the public—to join us in sharing feedback with OpenAI. Let them know that allowing the creation of NSFW content with their tools would be a massive mistake with far-reaching consequences. By prioritizing human dignity and safety, we can ensure that technological advancements benefit society as a whole without causing unintended harm. 

Take 30 SECONDS to contact OpenAI with the quick action button below!

TAKE ACTION!

Stay informed about our ongoing efforts and how you can get involved by following us on social media (Instagram, LinkedIn, X, Facebook) and visiting our website. Together, we can create a future where AI technology is a force for good, free from the shadows of sexual abuse and exploitation.

Related Stories

2024-08-16

VICTORY! Google Enhances Protections Against Deepfake/AI-Generated Pornography

In a significant stride towards combating image-based sexual abuse (IBSA), Google has announced major updates to its policies and processes to protect individuals from sexually explicit deepfake and AI-generated content. These changes, driven by feedback from experts and survivor advocates, represent a monumental victory in our ongoing fight against IBSA.

Understanding the Impact of Deepfake and AI-Generated Pornography

Computer-generated IBSA, commonly called “deepfake pornography” or “AI-generated pornography,” has become increasingly prevalent, posing severe threats to personal privacy and safety. With increasing ease and speed, technology can forge highly realistic explicit content that often targets individuals without their knowledge or consent. The distress and harm caused by these images are profound, as they can severely damage reputations, careers, and mental health.

And it can happen to anyone. It can happen to you and any of the people you love. If a photo of your face exists online, you are at risk.

Key Updates from Google

Recognizing the urgent need to address these issues, Google has implemented several critical updates to its Search platform to make it easier for individuals to remove IBSA, including computer-generated IBSA, and to prevent such content from appearing prominently in search results. Here’s what’s new:

  • Explicit Result Filtering: When someone successfully requests the removal of an explicit deepfake/AI-generated image, Google will now also filter all explicit results on similar searches about that person. This helps prevent the reappearance of harmful content in related searches.
  • Deduplication: Google’s systems will scan for and automatically remove duplicate sexually explicit images that have already been successfully removed. This reduces the likelihood of recurring trauma for victims who previously had to repeatedly request removals of the same images.
  • Ranking Updates: Google is updating its ranking algorithms to reduce the visibility of deepfake/AI-generated pornography. By promoting high-quality, non-explicit content, Google aims to ensure that harmful material is less likely to appear at the top of search results.
  • Demotions: Websites with a high volume of removal requests will be demoted in search rankings. This discourages sites from hosting deepfake/AI-generated pornography and helps to protect individuals from repeated exposure to such material.

Join Us in Thanking Google!

Please take a few moments to sign the quick form below, thanking Google for listening to survivors and creating a safer Internet!

Thank Google!

Listening to Survivors: A Critical Element

One of the most commendable aspects of Google’s update is its foundation in the experiences and needs of survivors. By actively seeking and incorporating feedback from those directly affected by IBSA, Google has demonstrated a commitment to creating solutions that truly address the complexities and impacts of this form of abuse.

NCOSE arranged for Google to meet with survivors, and we are thrilled that the company has listened to their critical insights in developing these new features. We warmly thank these brave survivors for raising their voices to make the world a safer place for others.

We also thank YOU for your advocacy which helped spark this win! Over the years, you have joined us in numerous campaigns targeting Google, such as the Dirty Dozen List, which Google Search and other Google entities have been named to many times. This win is YOUR legacy as well!

A Step Forward, But More Work to Do

While these changes mark a significant victory, the fight against IBSA is far from over. Continued vigilance, innovation, and cooperation from tech companies, policymakers, and advocacy groups are essential to building a safer online environment. We must keep pushing for more robust measures and support systems for those affected by image-based sexual abuse.

ACTION: Call on Microsoft’s GitHub to Stop Facilitating IBSA!

Google was far from the only corporation facilitating computer-generated IBSA. In fact, there is one corporate entity that is at the root of almost all of this abuse: Microsoft’s GitHub.

Microsoft’s GitHub is the global hub for creating sexually exploitative AI tech. The vast majority of deepfakes and computer-generated IBSA originate on this platform owned by the world’s richest company. 

It’s time for Microsoft’s GitHub to stop fueling this problem and start fighting it instead!

Take 30 SECONDS to sign the quick action form below, calling on Microsoft’s GitHub to combat deepfake and AI-generated pornography.

TAKE ACTION!

ACTION: Urge Your Senator to Support the TAKE IT DOWN Act!

We also urgently need better legislation to combat IBSA. As it stands today, there is NO federal criminal penalty for those who distribute or threaten to distribute nonconsensual sexually explicit images. 

The TAKE IT DOWN Act seeks to resolve this appalling gap in the law.

The TAKE IT DOWN Act has already unanimously passed Committee. Please join us in pushing it through the next steps!

Take action now, asking your Senator to support this crucial bill.

TAKE ACTION!

We encourage everyone to stay informed about IBSA, support survivors, and advocate for stronger protections and accountability from tech companies. Together, we can create a safer, more respectful digital world.

For more information and resources on combating image-based sexual abuse, visit our webpage here.

2024-08-28

Telegram CEO Arrested: A Man Who Sheltered Child Predators

Imagine a billionaire bought an island. As he allowed people to populate the island, it became clear that predators and pedophiles were flocking to it. In fact, the island was becoming a cesspool where criminals preyed on and tortured children in the most horrific ways.

The billionaire was well aware of this repugnant activity. But instead of striving to stop it, he set up systems to protect the criminals. He constructed private hideaways where they could continue abusing children in secret. He deliberately made it difficult for police to get to the island and investigate crimes.

What would our response be to this billionaire? How would we want our law enforcement and justice systems to handle him? Would we not cry out for him to be held accountable?

Well, a man very much like this billionaire was arrested this week.

His name is Pavel Durov. He is the CEO of Telegram.

What is Telegram?

Telegram is a messaging app that is increasingly referred to as “the new dark web.” Since its inception, Telegram has served as an epicenter of extremist activities, providing a thriving ecosystem for the most heinous of crimes—including sadistic torture and sextortion rings operated by pedophiles, networks for trading child sexual abuse material (CSAM, the more apt term for “child pornography”), sex trafficking of children and adults alike, communities for the non-consensual creation and distribution of sexually explicit images (i.e. image-based sexual abuse), selling of “date rape” drugs, and so much more.

To give just one horrific example:

 [STRONG TRIGGER WARNING]

In September 2023, the FBI issued a warning about a satanic pedophilic cult using Telegram as its main source of communication. This cult regularly extorted children as young as 8 years old into filming themselves committing suicide or self-harm, sexually abusing their siblings, torturing animals, or even murdering others. Members of the Telegram group would control their victims by threatening to share sexually explicit images of the children with their family and friends, or post the images online. Many members had the final goal of coercing the children to die by suicide on live-stream.

Telegram users would gain access to this group by sharing videos of the children they extorted, or videos of adults sexually abusing children.

Hundreds of children were victimized by this group, especially LGBTQ+ children and racial minorities.

You can read more about the unthinkable abuse occurring on Telegram here.

ACTION: Urge the DOJ to Investigate Telegram!

Telegram Was Aware of Horrific Crimes, But Chose to Enable Them

Telegram was undoubtedly aware of the extent of the crimes taking place on its platform. The app has been banned in more than a dozen countries. Law enforcement agencies, nonprofit organizations, cybersecurity analysts, and investigative journalists have been sounding the alarm about Telegram for years.

Yet rather than taking much-needed steps to combat these crimes, Telegram provided a cover for them to continue unchecked. The truth is, Telegram’s very design seems built to invite and protect criminals and predators.

The company makes it incredibly difficult for law enforcement to investigate crimes occurring on the app. It uses end-to-end encryption in many areas of the platform—and for the areas not covered by end-to-end encryption, it uses distributed infrastructure. In Telegram’s own words, distributed infrastructure means that “data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions … As a result, several court orders from different jurisdictions are required to force us to give up any data.”

The Stanford Internet Observatory concluded in a June 2023 report that Telegram implicitly allows the trading of CSAM in private channels. The researchers reached this conclusion because Telegram had no explicit policy against CSAM in private chats, no policy at all against grooming, and no efforts to detect known CSAM, and because they found CSAM being traded openly in public groups.

It is therefore no surprise that Telegram was noted as the #1 most popular messaging app used to “search for, view, and share CSAM” by almost half of CSAM offenders participating in a 2024 study.

These are only a couple of examples of the many ways Telegram designed its platform to shelter criminals and allow abuse to proliferate. You can read more about this here.

Telegram CEO Arrested in France … Where is the U.S. Department of Justice?

Pavel Durov, the CEO of Telegram, was arrested in France this week as part of an investigation into the myriad crimes on the platform.

The only question now is: Why is the United States Department of Justice not engaged?

Please join us in urging the DOJ to investigate Telegram now! Take 30 seconds to complete the quick action below.

TAKE ACTION!