
Protections for ChatGPT and AI could be shaped by the Supreme Court's YouTube ruling.

A U.S. Supreme Court decision expected in the coming months on whether to weaken a powerful legal shield protecting internet companies could affect rapidly evolving technologies such as ChatGPT, an artificial intelligence chatbot.

The justices are due to rule by the end of June on whether Alphabet Inc.’s YouTube can be sued over its video recommendations to users. The case tests whether a U.S. law that shields technology platforms from liability for content their users post online also applies when companies use algorithms to target users with recommendations.

The court’s ruling on those questions matters beyond social media platforms. According to technology and legal experts, the decision could influence the emerging debate over whether companies that develop generative AI chatbots, such as ChatGPT from OpenAI, in which Microsoft Corp (MSFT.O) is a major investor, or Bard from Alphabet’s Google, should be protected from legal claims such as defamation or privacy violations.

That is because, according to the experts, the algorithms that power generative AI tools like ChatGPT and its successor GPT-4 operate in a somewhat similar way to those that recommend videos to YouTube users.

“The debate is really about whether the organization of information available online through recommendation engines is so significant to shaping the content as to become liable,” said Cameron Kerry, a visiting fellow at the Washington-based Brookings Institution think tank and an expert on AI. “You face the same problems with a chatbot,” he said.

Requests for comment from OpenAI and Google representatives went unanswered.

During oral arguments in February, the Supreme Court justices expressed uncertainty about whether to narrow the legal protections contained in Section 230 of the Communications Decency Act of 1996. Although the case does not directly concern generative AI, Justice Neil Gorsuch noted that AI tools that generate “poetry” and “polemics” probably would not enjoy similar legal protections.

The case is only one facet of a growing debate over whether Section 230 immunity should extend to AI models that are trained on vast amounts of public data yet can produce original works.

Sen. Ron Wyden, a Democrat who helped draft the law while serving in the House of Representatives, said the liability shield should not apply to generative AI tools because they “create content.”

“Section 230 focuses on safeguarding users and the websites that host and coordinate user speech. It shouldn’t shield businesses from the repercussions of their own decisions and output,” Wyden said in a statement to Reuters.

The technology industry has fought to preserve Section 230 despite widespread bipartisan opposition to the immunity. Some in the sector have argued that tools like ChatGPT operate much like search engines, directing users to existing content in response to a query.

“AI doesn’t actually produce anything. It involves taking already-existing content and presenting it in a different way or format,” said Carl Szabo, vice president and general counsel of NetChoice, a trade association for the tech sector.

A weakened Section 230, Szabo said, would make it difficult for AI developers to defend themselves against a flood of lawsuits that could stifle innovation.

Some experts predict that courts may take a middle ground, examining the context in which an AI model produced a potentially harmful response.

In cases where the AI model appears to paraphrase existing published sources, the shield may still apply. But chatbots like ChatGPT have been known to fabricate responses that appear unconnected to information available elsewhere online, a situation experts said would likely not be protected.

Hany Farid, a technologist and professor at the University of California, Berkeley, said it defies logic to argue that AI developers should be shielded from lawsuits over models they “programmed, trained, and deployed.”

When companies are held legally accountable for harms caused by the products they make, they produce safer products, Farid said. “And they produce less safe products when they’re not held accountable.”

The case now before the Supreme Court involves an appeal by the family of Nohemi Gonzalez, a 23-year-old college student from California who was killed in a 2015 attack by Islamist militants in Paris, of a lower court’s decision to dismiss their lawsuit against YouTube.

The lawsuit accused Google of providing “material support” for terrorism and asserted that YouTube’s algorithms improperly recommended videos by the Islamic State militant group, which claimed responsibility for the Paris attacks, to certain users.
