Ask the new artificial intelligence (AI) tool ChatGPT to write about the cause of the American Civil War and you can watch it produce a school report in just a few seconds.
The technology is so good that it can write sentences like ones written by a human. And it is also free.
The tool has been in use since November. But it is already raising tough questions about the future of AI in education, the tech industry, and a number of professions.
New York City school officials recently started blocking the writing tool on school devices and networks. The decision by the largest school system in the United States could affect how other school systems deal with the technology.
Teachers are now trying to find out how to prevent students from using the AI tool to cheat. And the creators of ChatGPT also say they are looking for ways to detect misuse.
What is ChatGPT?
ChatGPT launched last November as part of a larger set of technologies developed by the San Francisco-based company OpenAI.
It is part of a new generation of AI systems that can have a discussion and create written work. These systems can even produce new images and video based on what they have learned from a large database of digital books, online writings and other media.
But unlike earlier versions of these "large language models," ChatGPT is available for free to anyone on the internet. It is also designed to be more user-friendly. It works like a written conversation between the AI system and the person asking it questions.
Millions of people have played with the tool over the past month. They used it to write poems or songs. Some tried to trick it into making mistakes. Others used it to write email. All of those requests are helping it to get smarter.
What are the possible issues?
Like similar systems, ChatGPT can produce strong writing. But that does not mean what it says is factual or makes sense.
Its launch came with little guidance on how to use it. But the program will admit when it is wrong. It will also question "incorrect premises" and reject requests meant to produce offensive answers.
Its popularity has led its creators to try to lower some people's expectations.
Sam Altman is the head of OpenAI. He said on Twitter in December that ChatGPT is very limited, but good enough at some things to make people think it is great. He added that it should not be used for “anything important right now.”
Many school systems in the U.S. are still deciding how to set policies on whether and how AI programs can be used.
The New York City education department said it is restricting use of ChatGPT because it is worried about negative impacts on student learning, as well as “concerns regarding the safety and accuracy of content.”
But there is no stopping a student from using ChatGPT from home or on a personal device.
Human or AI?
Jenna Lyle is a spokesperson for New York schools. She said the tool may be able to provide quick and easy answers to questions.
But she told The Associated Press: “it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”
When VOA asked ChatGPT whether the program could be used to write school papers, it said that using it for writing papers is "cheating" and does not help students.
ChatGPT then provided an answer very similar to Lyle's, saying, "using such a tool doesn't build critical-thinking and problem-solving skills which are essential for academic and lifelong success."
The Associated Press asked ChatGPT how to know if something was written by a human or AI.
The program said, “To determine if something was written by a human or AI, you can look for the absence of personal experiences or emotions.” It noted that AI writings could also contain unnecessary words or repeated sentences.
In a human-written statement, OpenAI told the AP that it plans to work with educators as it learns how people are experimenting with ChatGPT in the real world.
“We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system,” the company said.
I'm Dan Novak.
Dan Novak adapted this story for VOA Learning English based on reporting by The Associated Press.