Consumer advocacy group urges OpenAI to pull video app Sora over privacy, misinformation concerns

Nonprofit consumer advocacy group Public Citizen demanded in a Tuesday letter that OpenAI withdraw its video-generation software Sora 2 after the application raised concerns about spreading misinformation and privacy violations.

The letter, addressed to the company and CEO Sam Altman, accused OpenAI of rushing the release of the app so it could launch before competitors.

“OpenAI appears to have a persistent and dangerous pattern of coming to market with a product that is either inherently insecure or lacks necessary guardrails,” the watchdog group said.

The letter says Sora 2 shows a “reckless disregard” for product safety and for people’s rights to their own likeness, and argues that the app contributes to a broader undermining of public trust in the authenticity of online content.

The group also sent a letter to the US Congress.

OpenAI did not immediately respond to a request for comment on Tuesday.

More responsive to complaints about celebrity content

Sora’s videos are designed to be entertaining enough to be clicked on and shared across platforms like TikTok, Instagram, X and Facebook.

It could be the late Queen Elizabeth II rapping, or something more mundane and believable. A popular Sora style shows fake doorbell-camera footage capturing something strange – say, a boa constrictor on the porch or an alligator approaching a startled child – that ends with a mildly shocking twist, such as a grandmother screaming as she swats the animal with a broom.

LISTEN | The Current (24:17): New AI video app Sora is here: Can you tell what’s real?

Whether it’s your best friend riding a unicorn, Michael Jackson teaching math, or Martin Luther King Jr. dreaming of selling vacation packages – it’s now easier and faster to turn those ideas into realistic videos using the new AI app, Sora. The company behind it, OpenAI, promises guardrails to protect against violence and fraud — but many critics worry that the app could promote misinformation… and pollute society with even more “AI slop.”

Public Citizen has joined a growing chorus of advocacy groups, academics and experts warning about the dangers of letting people create AI videos of almost anything, fuelling the spread of non-consensual imagery and realistic deepfakes in a sea of less harmful “AI slop.”

OpenAI has cracked down on AI depictions of public figures like Michael Jackson, Martin Luther King Jr. and Mister Rogers – but only after protests from their estates and an actors’ union.

“Our biggest concern is the potential threat to democracy,” Public Citizen tech policy advocate JB Branch said in an interview.

“I think we’re entering a world where people can’t really trust what they see. And we’re starting to see strategies in politics where the first image, the first video that’s released, that’s what people remember.”

Guardrails haven’t stopped harassment

Branch, who wrote the letter Tuesday, also sees broader threats to people’s privacy and says these could disproportionately impact certain groups.

WATCH | How Denmark is trying to stop unauthorized deepfakes:

AI-generated videos are everywhere online, but what happens when your image or voice is replicated without your permission? CBC’s Ashley Fraser explains how Denmark is trying to reshape digital identity protection and how Canada’s laws compare.

The app blocks nudity, but Branch said that “women are finding themselves being harassed online in other ways.”

Niche fetish content has also slipped through the app’s restrictions: news outlet 404 Media reported on Friday that Sora-made videos of women being strangled were flooding the platform.

OpenAI introduced its new Sora app on iPhones more than a month ago. It was launched last week on Android phones in the US, Canada and several Asian countries including Japan and South Korea.

The most forceful reaction against it has come from Hollywood and other entertainment interests, including the Japanese manga industry.

OpenAI announced its first major changes just days after release, stating that “extreme moderation is extremely frustrating for users” but that it is important to be conservative “while the world is still adjusting to this new technology.”

Then came publicly announced agreements: one with the family of Martin Luther King Jr. on October 16, halting any “offensive portrayal” of the civil rights leader while the company worked on better safeguards, and another on October 20 with Breaking Bad actor Bryan Cranston, the SAG-AFTRA union and talent agencies.

“It’s all good if you’re famous,” Branch said. “This is a pattern of OpenAI, where they’re willing to respond to the outrage of a very small population. They’re willing to release something and apologize later. But a lot of these issues are design choices they could have made before release.”

WATCH | AI-generated ‘actress’ Tilly Norwood faces criticism in Hollywood:

European AI production company Particle6 says its AI creation Tilly Norwood has generated a lot of interest, but Hollywood actors including Emily Blunt, Melissa Barrera and Whoopi Goldberg, as well as the SAG-AFTRA union, have come out against the AI character.

Lawsuits are ongoing against ChatGPT

OpenAI has faced similar complaints about its flagship product, ChatGPT. Seven new lawsuits filed in California courts last week claim the chatbot drove people to suicide and harmful delusions, even when they had no pre-existing mental health problems.

The lawsuits, filed by the Social Media Victims Law Center and the Tech Justice Law Project on behalf of six adults and a teenager, claim that OpenAI knowingly released GPT-4o prematurely last year, despite internal warnings that it was dangerously flattering and psychologically manipulative. Four of the victims died by suicide.

Public Citizen was not involved in the lawsuits, but Branch said he saw parallels in the way Sora was released.

“A lot of this seems predictable,” he said. “But they want to get a product out there, get people downloading it, get people addicted to it, rather than doing the right thing and stress-testing these things beforehand and worrying about the plight of everyday users.”

OpenAI responds to anime creators, video game makers

OpenAI spent last week responding to complaints about Sora from a Japanese trade association representing famed animation studios like Hayao Miyazaki’s Studio Ghibli, as well as video game makers Bandai Namco, Square Enix and others.

OpenAI defended the app’s extensive ability to create fake videos based on popular characters, saying that many anime fans want to interact with their favorite characters.

But the company also said it has put up guardrails to prevent famous characters from being generated without the consent of those who hold the copyrights.

“We are engaging directly with studios and rights holders, listening to feedback and learning how people are using Sora 2, including in Japan, where cultural and creative industries are highly valued,” OpenAI said in a statement about the trade group’s letter last week.
