The artificial intelligence (AI) tool ChatGPT was released in late 2022. Since then, the possibility that artificial intelligence might take over the world has worried people more than ever.
A new report from New York University’s Stern Center for Business and Human Rights identifies eight risks of generative AI. Some of those risks especially concern reporters and news media organizations.
Disinformation, computer attacks, privacy violations, and the weakening of news media are among the risks the team reports.
Stern Center deputy director Paul Barrett was a co-writer of the report. He told VOA that people are confused about what risks AI presents now and in the future.
Barrett said: “We shouldn’t get paralyzed by the question of, ‘Oh my God, will this technology lead to killer robots that are going to destroy humanity?’”
The systems being released right now are not going to lead to the extreme future danger some worry about, Barrett explained. The report urges lawmakers to face some of the existing problems with AI.
Among the biggest concerns with AI are the dangers it presents for reporters and activists.
The report says AI makes it much easier to dox reporters online. Doxxing is the public posting of a person’s private information, such as their home address.
Disinformation is another problem, as AI makes it easier to create propaganda. The report noted Russia’s involvement in the 2016 U.S. presidential election. It said use of AI could have widened and deepened Russia’s interference with the process.
Barrett said AI “is going to be a huge engine of efficiency, but it’s also going to make much more efficient the production of disinformation.”
Disinformation could also be dangerous for news reporters because it could lead the public to trust them less.
And AI could worsen financial problems for news media groups. People are less likely to seek out news reports, the researchers say, because they can get answers from ChatGPT instead. The report says that could shrink traffic on news sites, causing losses in their advertising revenue.
However, AI could also be helpful for the news industry. The technology can examine data, fact-check sources, and produce headlines speedily.
The report urges the government to supervise AI companies more in the future.
“Congress, regulators, the public – and the industry, for that matter – need to pay attention to the immediate potential risks,” Barrett said.
I’m Caty Weaver.