ChatGPT fever spreads to US workplace, sounding alarm for some

Many workers across the U.S. are turning to ChatGPT to help with basic tasks, a Reuters/Ipsos poll found, despite concerns that have led employers such as Microsoft and Google to limit its use.

Companies around the world are weighing how best to make use of ChatGPT, a chatbot program that uses generative AI to hold conversations with users and answer a wide range of prompts. Security firms and businesses have raised concerns, however, that it could result in leaks of intellectual property and strategy.

People have reportedly used ChatGPT to help with day-to-day work such as drafting emails, summarizing documents, and doing preliminary research.

An online survey on artificial intelligence (AI) conducted between July 11 and July 17 found that 28% of participants regularly use ChatGPT at work, whereas just 22% said that their employers explicitly permit such external tools.

The Reuters/Ipsos poll of 2,625 people across the United States had a credibility interval, a measure of precision, of about 2 percentage points.

Some 25% of those surveyed did not know whether their employer permitted use of the technology, while 10% said their bosses explicitly banned external AI tools.

ChatGPT became the fastest-growing app in history after its launch in November. It has generated both excitement and alarm, bringing its developer, OpenAI, into conflict with regulators, particularly in Europe, where the company’s mass data collection has drawn criticism from privacy watchdogs.

Researchers have found that similar AI models can regurgitate data they absorbed during training, a potential risk for sensitive information. Human reviewers from other companies may also read any of the generated chats.

“People do not understand how the data is used when they use generative AI services,” said Ben King, vice president of customer trust at corporate security company Okta.

Many AI services are free and provided without a contract, King noted, and “this is critical for businesses because corporates won’t have run the risk through their usual assessment process.”

An employee of Tinder in the United States said that staff members still used ChatGPT for “harmless tasks” like writing emails, despite the company’s explicit policy against it.

“They are standard emails. Very unimportant things, like making amusing calendar invites for team events and farewell emails when someone is departing,” said the employee, who wished to remain anonymous since they were not permitted to speak with reporters, adding that they also use it for general research.

Despite Tinder’s “no ChatGPT rule,” the employee claimed that people still use it in a way that “doesn’t reveal anything about us being at Tinder.”

Reuters was unable to independently verify how Tinder staff were using ChatGPT. Tinder said it offers “regular guidance to employees on best security and data practices.”

Samsung Electronics banned staff globally from using ChatGPT and similar AI tools in May, after discovering that one of its employees had uploaded sensitive code to the platform.

According to a statement released by Samsung on August 3, “We are reviewing steps to create a secure environment for generative AI usage that increases employees’ productivity and efficiency.”

However, “we are temporarily limiting the use of generative AI through company devices until these measures are ready,” the statement added.

Reuters reported in June that Alphabet had cautioned employees about how they use chatbots, including Google’s Bard, even as it markets the program globally.

Google said that while Bard can make undesirable code suggestions, it still helps programmers, adding that it aimed to be transparent about the limitations of its technology.

Some businesses told Reuters they are embracing ChatGPT and similar platforms while keeping security in mind.

A Coca-Cola representative in Atlanta, Georgia, said, “We’ve started testing and learning about how AI can enhance operational effectiveness,” adding that data remains within its firewall.

“Internally, we recently launched our enterprise version of Coca-Cola ChatGPT for productivity,” the spokesperson said, adding that Coca-Cola plans to use AI to improve the effectiveness and productivity of its teams.

Meanwhile, Tate & Lyle’s Chief Financial Officer Dawn Allen told Reuters that the company was testing ChatGPT after “finding a way to use it in a safe way.”

“Through a series of experiments, several teams are deciding how they want to use it. Is it appropriate for investor relations? Is it appropriate for knowledge management? How can we use it to complete tasks more quickly?” she said.

Some workers claim they are completely unable to use the platform on workplace PCs.

An employee at Procter & Gamble (PG.N), who wished to remain anonymous because they were not permitted to speak to the press, said, “It’s completely banned on the office network, like it doesn’t work.”

P&G declined to comment. Reuters could not independently verify whether employees at P&G were unable to use ChatGPT.

Companies should be cautious, according to Paul Lewis, chief information security officer at cybersecurity firm Nominet.

“Everyone benefits from that increased capability, but the information isn’t completely secure and it can be engineered out,” he said, citing “malicious prompts” that can be used to make AI chatbots disclose information.

Lewis stated, “A blanket ban isn’t yet warranted, but we need to tread carefully.”
