Google AI pioneer Geoffrey Hinton has left the company to speak openly about the potential dangers of the technology and its implications for society. He believes the public should be aware of the risks and rewards of AI.

Google AI pioneer claims he left to talk openly about the “dangers” of technology.

An artificial intelligence pioneer claimed he left Google in order to speak openly about the risks posed by the technology after realizing that computers could surpass human intelligence much sooner than he and other specialists had anticipated.

Geoffrey Hinton stated on Twitter, “I left so that I could warn about the hazards of AI without considering how this impacts Google.”

He said, “It is difficult to see how you can stop the bad actors from using it for bad things.”

The technology might swiftly replace workers and grow more dangerous as it picks up new habits.

Hinton expressed his concern in an interview with the New York Times on the ability of AI to produce convincing false visuals and words, leading to a society in which people “will not be able to know what is true anymore.”


“Some people did think that this technology would eventually become smarter than humans,” he told the New York Times. “But most people believed that was a long way off. And I believed it was a long way off. I assumed it would be more than 50 years away. Evidently, I no longer hold that opinion.”

In a tweet, Hinton said that Google had “acted very responsibly” and rejected the notion that he had left the company in order to criticize his former employer.

Google, a subsidiary of Alphabet Inc., did not immediately respond to a request for comment.

“We remain committed to a responsible approach to A.I.,” Jeff Dean, Google’s chief scientist, was quoted as saying in a statement by The Times. “We are always learning to innovate bravely while also understanding growing threats.”

Since the November release of ChatGPT by Microsoft-backed (MSFT.O) startup OpenAI, an increasing number of “generative AI” applications that can generate text or images have raised questions about how the technology will be regulated in the future.

According to Dr. Carissa Veliz, an associate professor of philosophy at the Institute for Ethics in AI at the University of Oxford, “Policymakers should be alarmed that so many experts are speaking up about their concerns regarding the safety of AI, with some computer scientists going as far as regretting some of their work.” She added, “Now is the time to regulate AI.”
