
The US starts looking into potential regulations for AI like ChatGPT

As concerns about artificial intelligence (AI) systems’ potential effects on national security and education grow, the Biden administration said on Tuesday that it is looking for feedback from the public on potential accountability measures for these systems.

U.S. politicians have taken notice of ChatGPT, an AI program that recently drew public attention for its ability to generate quick responses to a wide variety of queries. With more than 100 million monthly active users, ChatGPT is the fastest-growing consumer application in history.

There is “increasing regulatory interest” in an AI “accountability mechanism,” according to the National Telecommunications and Information Administration, a Commerce Department organization that advises the White House on telecommunications and information policy.

If there are steps that could be taken to ensure “that AI systems are lawful, effective, ethical, safe, and otherwise trustworthy,” the agency wants to know about them.

According to NTIA Administrator Alan Davidson, AI systems could have enormous positive effects if their negative effects and unintended consequences are addressed, but companies and customers must be able to trust these technologies for them to perform to their full potential.

Last week, President Joe Biden said it was too early to tell whether AI posed a risk, but added that before releasing their products to the public, tech companies “have a responsibility to make sure their products are secure.”

ChatGPT, developed by California-based OpenAI with support from Microsoft Corp., has impressed some users with its speedy responses to queries while upsetting others with its mistakes.

NTIA said its report on “efforts to guarantee AI systems perform as advertised – and without causing harm” will inform the Biden Administration’s ongoing work to “ensure a cohesive and comprehensive federal government response to AI-related risks and potential.”

Calling GPT-4 “biased, deceptive, and a risk to privacy and public safety,” the Center for Artificial Intelligence and Digital Policy asked the U.S. Federal Trade Commission to stop OpenAI from issuing further commercial releases of the model.

