Major media organizations are calling for new laws to protect their content from being used to train artificial intelligence (AI) tools.
The request for new laws, or regulations, was contained in an open letter released earlier this month. The leaders of several large news and publishing organizations signed the document.
They include officials of the Associated Press (AP), Gannett and the News Media Alliance, which represents hundreds of media publishers. Representatives from Getty Images, the National Press Photographers Association and Agence France-Presse also signed the document.
The organizations stated their support for the “responsible” development and deployment of AI systems. One kind of AI tool, known as a “chatbot,” has demonstrated the ability to produce human-quality writing based on short, written commands. Such tools are also known as “generative AI” or “large language models.”
But the letter also expresses the need to develop regulations “to protect the content that powers” the increasing number of AI tools in development.
Media companies are worried about AI developers using their published content without permission. The media content enjoys copyright protections and the companies want governments to enact rules to restrict unapproved use of their material.
AI developers need huge amounts of data to train systems that aim to produce human-level results. Developers often collect this data from publicly available websites in a process called “scraping.”
The letter states that the process of scraping gives AI developers free use of media companies’ content. The developers can then use this data to create language models that strengthen their AI tools and businesses.
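To picture what “scraping” means in practice, here is a minimal, illustrative Python sketch. The URL is a placeholder and the simple text extraction is only an assumption for demonstration; it does not represent any particular developer’s actual data pipeline.

```python
# Minimal sketch of scraping: fetch a public web page and collect its visible text.
# The URL below is a placeholder, not a real data source.
import urllib.request
from html.parser import HTMLParser


class TextCollector(HTMLParser):
    """Collects the visible text found in an HTML page."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)


url = "https://example.com/news-article"  # placeholder address
with urllib.request.urlopen(url) as response:
    html = response.read().decode("utf-8", errors="replace")

parser = TextCollector()
parser.feed(html)

# This collected text is the kind of material that could end up in a training dataset.
print("\n".join(parser.chunks))
```

Real collection efforts repeat this kind of step across millions of pages, which is why the letter’s signers want rules governing permission before their pages are gathered.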
A few media companies have already signed licensing agreements giving AI developers permission to scrape their content. Last month, the Associated Press completed a deal with chatbot developer OpenAI to license the news agency’s large news story collection.
Other news organizations have taken steps designed to block developers from collecting their content. The New York Times, for example, recently changed its “Terms of Service” agreement to include new guidelines for AI.
The new policy requires AI developers to have written permission to use any of its content to train language models. The policy covers different forms of content, including written text, images and audio and video material.
The open letter also describes another big problem with chatbots and other AI tools: the production of false or misleading information presented as truth. It calls on AI developers to build tools into their systems to prevent such falsehoods.
News and media content created by AI tools “can distort facts and leave the public with no basis to discern what is true and what is made up,” the letter states. It adds that many language models can also produce results that include long-standing biases against minority and underrepresented communities.
Many news companies are currently experimenting with generative AI tools. They hope to learn how the tools can best serve the news production process. Several major news organizations announced last month they had teamed up with AI developers to find new tools to help reporters do their work.
The AP recently issued new guidelines for the use of AI tools across all departments at the news agency. The guidelines ban the use of such tools “to create publishable content and images for the news service,” the AP said in a story about its new rules.
However, the news agency said that it is also urging its employees to study the technology to learn how its use might improve their work within the rules.
I’m Bryan Lynn.