Microsoft: State-supported Hackers Using Its AI Tools

05:41 February 18, 2024

Microsoft says state-supported online attackers from Russia, China, and Iran have been using its OpenAI tools to possibly trick targets and gain information.

Microsoft said in a report released Wednesday that it had tracked online attackers, or hacking groups, that work with several states. They include Russia’s military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments. The company said the hackers were trying to improve their campaigns using large language models like OpenAI’s ChatGPT. Those computer programs use artificial intelligence. They use huge amounts of information from the internet to create human-sounding writing.

The company said it would ban state-backed hacking groups from using its AI products. It said the ban would apply regardless of whether any rules had been broken.

"Independent of whether there's any violation of the law or any violation of terms of service, we just don't…want them to have access to this technology," Microsoft Vice President for Customer Security Tom Burt told Reuters.

Russian, North Korean, and Iranian diplomatic officials did not immediately return requests for comment on the claims.

However, China’s U.S. embassy spokesperson Liu Pengyu said China opposed groundless attacks against it and supports the safe, dependable, and controllable use of AI technology to help “all mankind.”

The claims that state-backed hackers have been caught using AI tools to support spying activities are likely to increase concerns about the spread of the technology and its possible abuse. Internet security officials in Western countries have been warning since last year that bad actors were abusing AI tools.

OpenAI and Microsoft described the hackers’ use of their AI tools as “early-stage” and “incremental.” Burt said neither company had seen online spies have big successes.

“We really saw them just using this technology like any other user,” he said. The report described hacking groups using large language models in different ways.

Microsoft said hackers suspected of working for the Russian military spy agency, widely known as the GRU, used the models. The company said the hackers researched satellite and military technologies that might relate to military operations in Ukraine.

Microsoft said North Korean hackers used the models to create content that could be used to trick area experts into giving up information. Iranian hackers also used the models to write better emails, Microsoft said. The company said the Iranian group aimed to trick feminist leaders into visiting a dangerous website.

Microsoft added that Chinese state-backed hackers were also experimenting with large language models. For example, they have asked questions about enemy intelligence agencies, online security issues, and “notable individuals.”

Neither Burt nor OpenAI security official Bob Rotsted said how much activity had been found or how many users had been banned. Burt defended the ban on hacking groups even though Microsoft’s search engine Bing has no such ban. Burt noted that AI was new and a cause for concern.

"This technology is both new and incredibly powerful," he said.

I’m Gena Bennett.
