
Anthropic, an AI company backed by Alphabet, has built an AI chatbot, Claude, that uses written principles of ethics and morality to guide its responses, keeping its behavior in line with a stated set of values.

Anthropic, supported by Alphabet, describes the moral principles that guide its AI bot.

Anthropic, an artificial intelligence startup backed by Google, on Tuesday revealed the set of written moral values it used to train and make safe Claude, its rival to the technology underpinning OpenAI’s ChatGPT.

The moral code of conduct, which Anthropic refers to as Claude’s constitution, takes inspiration from a number of documents, such as the Universal Declaration of Human Rights and even Apple Inc.’s data privacy policies.

As U.S. officials weigh whether and how to regulate AI, President Joe Biden has said companies have an obligation to make sure their systems are safe before releasing them to the public.

Anthropic was founded by former executives of Microsoft Corp-backed (MSFT.O) OpenAI with the goal of developing safe AI systems that will not, for instance, tell users how to make weapons or use racially biased language.

Co-founder Dario Amodei was among the AI executives who met with Biden last week to discuss the potential dangers of AI.

Most AI chatbot systems rely on feedback from real humans during training to decide which responses might be harmful or offensive.

However, because those systems struggle to anticipate every question people might ask, they often avoid contentious subjects such as politics and race altogether, which makes them less useful.

Anthropic takes a different approach, giving Claude, its rival to OpenAI’s ChatGPT, a set of written moral values to read and draw on when deciding how to reply to questions.

Among those values are “choose the response that most discourages and opposes torture, slavery, cruelty, and inhuman or degrading treatment,” according to a blog post published by Anthropic on Tuesday.

Additionally, Claude has been instructed to choose the response least likely to be viewed as harmful or offensive to a non-Western cultural tradition.
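Anthropic’s post describes the approach only at a high level. Purely as an illustrative sketch, and not Anthropic’s actual code or API, the selection step of a constitution-guided system could look something like the following Python, where `model_judge` is a hypothetical stand-in for a model call that scores how well a candidate response complies with a written principle:

```python
# Minimal sketch of constitution-guided response selection (illustrative only;
# not Anthropic's implementation). `model_judge` stands in for a preference
# model that rates a response against a written principle.

CONSTITUTION = [
    "Choose the response that most discourages and opposes torture, slavery, "
    "cruelty, and inhuman or degrading treatment.",
    "Choose the response least likely to be viewed as harmful or offensive "
    "to a non-Western cultural tradition.",
]

def model_judge(principle: str, response: str) -> float:
    """Hypothetical stand-in for a model call. A real system would return a
    learned compliance score (higher = better); here a trivial keyword
    heuristic is used only so the sketch runs."""
    return -1.0 if "cruel" in response.lower() else 1.0

def pick_response(candidates: list[str]) -> str:
    """Score every candidate against every principle in the constitution
    and return the candidate with the highest total score."""
    def total_score(response: str) -> float:
        return sum(model_judge(p, response) for p in CONSTITUTION)
    return max(candidates, key=total_score)

if __name__ == "__main__":
    candidates = [
        "Here is a cruel way to do that...",
        "I can't help with that, but here is a safer alternative...",
    ]
    print(pick_response(candidates))  # prints the safer alternative
```

In Anthropic’s published constitutional-AI work, the critique and selection are performed by the model itself during training rather than by a hand-written heuristic; the heuristic above exists only to make the sketch self-contained and runnable.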

Anthropic co-founder Jack Clark said in an interview that a system’s constitution could be modified to strike a balance between providing useful answers and being reliably inoffensive.

“In a few months, I predict that politicians will be very interested in what the values of various AI systems are,” Clark said. “An approach like constitutional AI will help with that discussion because we can just write down the values.”
