In response to OpenAI’s recent call for feedback on its announcement that the frontier AI company is considering changing its usage policies to allow the creation of “NSFW” (not safe for work) content with its AI tools, the National Center on Sexual Exploitation (NCOSE) has prepared a rapid assessment report, The High Stakes of AI Ethics: Evaluating OpenAI’s Potential Shift to “NSFW” Content and Other Concerns, and is sharing this report with OpenAI and the public.
OpenAI’s proposal would exacerbate already uncontrolled sexual exploitation and abuse occurring online. As leading experts in combating sexual exploitation, especially online, we were compelled to share our insights to prevent such a critical mistake.
We recognize OpenAI’s groundbreaking advancements in artificial intelligence. Yet, we cannot ignore the substantial harm that has already been unleashed by the misuse of AI tools. Thousands of individuals have already suffered from the consequences of AI-generated content that crosses the line into sexual abuse and exploitation. It’s our duty to ensure that such technology is not allowed to perpetuate and amplify harm under the guise of innovation.
Harms already unleashed include:
It is against this backdrop of mammoth and out-of-control sexual exploitation, generated and inflamed by AI, that OpenAI says it is considering permitting the so-called “ethical” generation of “NSFW” material by its users!
We must ask, is OpenAI not satisfied with the scope of damage that has already been unleashed on the world by the open, rushed, and unregulated release of AI?
Is OpenAI willfully blind to the raging and uncontained problems that AI has already unleashed?
Is it not beneath OpenAI and its noble aspirations of bettering humanity to succumb to the demands of the basest users of AI technology?
Is “NSFW” material the purpose to which OpenAI will devote the talents of its employees and the most powerful technology in the world?
Our rapid assessment report outlines several critical actions that OpenAI must take to safeguard against the misuse of their technology. Highlights include:
The research on the harms of pornography is so extensive that we can’t hope to boil it down to a few bullet points. However, even a meager sampling of studies paints a chilling picture of how this material fuels sexual abuse, exploitation, and other public health concerns.
Just as NCOSE does with other tech giants, we invite OpenAI to meet with us, learn, and listen to survivors in order to better understand these issues. We believe that by working together, we can harness the power of AI to make significant strides in preventing sexual exploitation and safeguarding human dignity. OpenAI has the potential to be a leader in ethical AI development, setting standards that others in the industry will follow—if it so chooses.
We urge our readers and all stakeholders—tech companies, policymakers, and the public—to join us in sharing feedback with OpenAI. Let them know that allowing the creation of NSFW content with their tools would be a massive mistake with far-reaching consequences. By prioritizing human dignity and safety, we can ensure that technological advancements benefit society as a whole without causing unintended harm.
Take 30 SECONDS to contact OpenAI with the quick action button below!
TAKE ACTION!
Stay informed about our ongoing efforts and how you can get involved by following us on social media (Instagram, LinkedIn, X, Facebook) and visiting our website. Together, we can create a future where AI technology is a force for good, free from the shadows of sexual abuse and exploitation.
Related Stories
2024-08-16
VICTORY! Google Enhances Protections Against Deepfake/AI-Generated Pornography
In a significant stride towards combating image-based sexual abuse (IBSA), Google has announced major updates to its policies and processes to protect individuals from sexually explicit deepfake and AI-generated content. These changes, driven by feedback from experts and survivor advocates, represent a monumental victory in our ongoing fight against IBSA.
Computer-generated IBSA, commonly called “deepfake pornography” or “AI-generated pornography,” has become increasingly prevalent, posing severe threats to personal privacy and safety. Technology can now forge highly realistic explicit content with ever-greater ease and speed, often targeting individuals without their knowledge or consent. The distress and harm caused by these images are profound, as they can severely damage reputations, careers, and mental health.
And it can happen to anyone. It can happen to you and any of the people you love. If a photo of your face exists online, you are at risk.
Recognizing the urgent need to address these issues, Google has implemented several critical updates to its Search platform to make it easier for individuals to remove IBSA, including computer-generated IBSA, and to prevent such content from appearing prominently in search results. Here’s what’s new:
Please take a few moments to sign the quick form below, thanking Google for listening to survivors and creating a safer Internet!
Thank Google!
One of the most commendable aspects of Google’s update is its foundation in the experiences and needs of survivors. By actively seeking and incorporating feedback from those directly affected by IBSA, Google has demonstrated a commitment to creating solutions that truly address the complexities and impacts of this form of abuse.
NCOSE arranged for Google to meet with survivors, and we are thrilled that the company has listened to their critical insights in developing these new features. We warmly thank these brave survivors for raising their voices to make the world a safer place for others.
We also thank YOU for your advocacy which helped spark this win! Over the years, you have joined us in numerous campaigns targeting Google, such as the Dirty Dozen List, which Google Search and other Google entities have been named to many times. This win is YOUR legacy as well!
While these changes mark a significant victory, the fight against IBSA is far from over. Continued vigilance, innovation, and cooperation from tech companies, policymakers, and advocacy groups are essential to building a safer online environment. We must keep pushing for more robust measures and support systems for those affected by image-based sexual abuse.
Google was far from the only corporation facilitating computer-generated IBSA. In fact, there is one corporate entity that is at the root of almost all of this abuse: Microsoft’s GitHub.
Microsoft’s GitHub is the global hub for creating sexually exploitative AI tech. The vast majority of deepfakes and computer-generated IBSA originate on this platform owned by the world’s richest company.
It’s time for Microsoft’s GitHub to stop fueling this problem and start fighting it instead!
Take 30 SECONDS to sign the quick action form below, calling on Microsoft’s GitHub to combat deepfake and AI-generated pornography.
We also urgently need better legislation to combat IBSA. As it stands today, there is NO federal criminal penalty for those who distribute or threaten to distribute nonconsensual sexually explicit images.
The TAKE IT DOWN Act seeks to resolve this appalling gap in the law.
The TAKE IT DOWN Act has already unanimously passed Committee. Please join us in pushing it through the next steps!
Take action now, asking your Senator to support this crucial bill.
TAKE ACTION!
We encourage everyone to stay informed about IBSA, support survivors, and advocate for stronger protections and accountability from tech companies. Together, we can create a safer, more respectful digital world.
For more information and resources on combating image-based sexual abuse, visit our webpage here.
2024-08-28
Telegram CEO Arrested: A Man Who Sheltered Child Predators
Imagine a billionaire bought an island. As he allowed people to populate the island, it became clear that predators and pedophiles were flocking to it. In fact, the island was becoming a cesspool where criminals preyed on and tortured children in the most horrific ways.
The billionaire was well aware of this repugnant activity. But instead of striving to stop it, he set up systems to protect the criminals. He constructed private hideaways where they could continue abusing children in secret. He deliberately made it difficult for police to get to the island and investigate crimes.
What would our response be to this billionaire? How would we want our law enforcement and justice systems to handle him? Would we not cry out for him to be held accountable?
Well, a man very much like this billionaire was arrested this week.
His name is Pavel Durov. He is the CEO of Telegram.
Telegram is a messaging app that is increasingly referred to as “the new dark web.” Since its inception, Telegram has served as an epicenter of extremist activities, providing a thriving ecosystem for the most heinous of crimes—including sadistic torture and sextortion rings operated by pedophiles, networks for trading child sexual abuse material (CSAM, the more apt term for “child pornography”), sex trafficking of children and adults alike, communities for the non-consensual creation and distribution of sexually explicit images (i.e. image-based sexual abuse), selling of “date rape” drugs, and so much more.
To give just one horrific example:
[STRONG TRIGGER WARNING]
In September 2023, the FBI issued a warning about a satanic pedophilic cult using Telegram as its main source of communication. This cult regularly extorted children as young as 8 years old into filming themselves engaging in self-harm or attempting suicide, sexually abusing their siblings, torturing animals, or even murdering others. Members of the Telegram group would control their victims by threatening to share sexually explicit images of the children with their family and friends, or to post the images online. Many members had the final goal of coercing the children to die by suicide on live-stream.
Telegram users would gain access to this group by sharing videos of the children they extorted, or videos of adults sexually abusing children.
Hundreds of children were victimized by this group, especially LGBTQ+ children and racial minorities.
You can read more about the unthinkable abuse occurring on Telegram here.
ACTION: Urge the DOJ to Investigate Telegram!
Telegram was undoubtedly aware of the extent of the crimes taking place on its platform. The app was banned in more than a dozen countries. Law enforcement agencies, nonprofit organizations, cybersecurity analysts, and investigative journalists have been sounding the alarm about Telegram for years.
Yet rather than taking much-needed steps to combat these crimes, Telegram provided a cover for them to continue unchecked. The truth is, Telegram’s very design seems built to invite and protect criminals and predators.
The company makes it incredibly difficult for law enforcement to investigate crimes occurring on the app. It uses end-to-end encryption in many areas of the platform—and for the areas not covered by end-to-end encryption, it uses distributed infrastructure. In Telegram’s own words, distributed infrastructure means that “data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions … As a result, several court orders from different jurisdictions are required to force us to give up any data.”
The Stanford Internet Observatory concluded in a June 2023 report that Telegram implicitly allows the trading of CSAM in private channels. The researchers reached this conclusion because Telegram had no explicit policy against CSAM in private chats, no policy at all against grooming, and no efforts to detect known CSAM, and because they found CSAM being traded openly in public groups.
It is therefore no surprise that Telegram was noted as the #1 most popular messaging app used to “search for, view, and share CSAM” by almost half of CSAM offenders participating in a 2024 study.
These are only a couple of examples of the many ways Telegram has designed its platform to shelter criminals and allow abuse to proliferate. You can read more about this here.
Pavel Durov, the CEO of Telegram, was arrested in France this week as part of an investigation into the myriad crimes on the platform.
The only question now is: Why is the United States Department of Justice not engaged?
Please join us in urging the DOJ to investigate Telegram now! Take 30 seconds to complete the quick action below.
TAKE ACTION!