
AI use rising in influence campaigns online, but impact limited - US cyber firm

Use of AI in online influence efforts is rising, but its impact remains limited, according to Google-owned cyber firm Mandiant

Mandiant, a U.S.-based cybersecurity company owned by Google, reported on Thursday that it had observed increasing use in recent years of artificial intelligence (AI) to carry out deceptive information campaigns online, though the technology has so far played only a limited role in other types of digital intrusions.

Researchers at the Virginia-based firm have, since 2019, identified “many instances” of AI-generated content, such as fake profile photographs, being used in politically driven online influence campaigns.

According to the report, these included initiatives by organizations supporting the governments of Russia, China, Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador, and El Salvador.

The report comes as generative AI models such as ChatGPT have surged in popularity, making it far easier to produce convincing fake text, images, video, and computer code. Security experts have warned that cybercriminals could exploit these models.

According to Mandiant's researchers, generative AI would enable groups with limited resources to produce higher-quality content for influence operations at scale.

Sandra Joyce, vice president of Mandiant Intelligence, said a pro-China information campaign dubbed Dragonbridge has grown “exponentially” across 30 social platforms and in 10 different languages since it first began targeting pro-democracy protesters in Hong Kong in 2019.

Even so, such campaigns have had only a modest effect. “Not a lot of victories there from an effectiveness perspective,” Joyce noted. “They haven’t yet significantly altered the threat landscape, in my opinion.”

China has previously denied U.S. claims that it participated in such influence operations.

Mandiant, which helps both public and private organizations respond to cyberattacks, said it had not yet observed AI playing a significant role in threats from North Korea, Russia, Iran, or China. According to its researchers, the use of AI in digital intrusions is expected to remain limited in the near future.

“So far, we haven’t seen a single incident response where AI played a role,” Joyce said. “They haven’t actually been put to any kind of practical use that goes beyond what is possible using commonly available tools, according to our observations.”
