Artificial intelligence (AI) became a popular subject in 2023. Still, the technology has a long way to go to meet people’s futuristic expectations of human-like machines.
ChatGPT was central to this year’s attention on AI. The chatbot showed the world recent developments in computer science, although not everyone quite understood how it works or what to do with it.
AI scientist Fei-Fei Li suggested that 2023 would be remembered for the great changes in technology as well as the public awakening. It was a year for people to figure out “what this is, how to use it, what’s the impact — all the good, the bad and the ugly,” she added.
Concerns over AI
The first AI concerns of 2023 began soon after New Year’s Day. That is when classrooms reopened and schools from Seattle to Paris started blocking ChatGPT. Students were already asking the chatbot — released in late 2022 — to write papers and answer take-home tests.
AI large language models behind technology such as ChatGPT work by predicting the next word in a sentence. The models make such predictions after having “learned” the structure of a huge number of human-written works. Large language models often get facts wrong. But their results sound so natural that they have created interest in the next AI developments, along with concern about possible uses for trickery and deception.
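The next-word idea can be shown in a few lines of code. The sketch below is a simplified illustration, not how ChatGPT itself is built: it assumes the small, openly available “gpt2” model and the Hugging Face transformers library, and it prints the five words the model rates most likely to come next.

# A minimal sketch of next-word prediction, the core idea behind chatbots like ChatGPT.
# It assumes the open "gpt2" model from the Hugging Face transformers library,
# which is far smaller than the models behind ChatGPT but works the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The cat sat on the"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # scores for every possible next token

next_token_scores = logits[0, -1]        # look only at the position after the last word
top = torch.topk(next_token_scores, 5)   # the five highest-scoring continuations

for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), float(score))

Run on the example sentence, a small model like this typically suggests ordinary continuations such as “floor” or “bed.” A chatbot repeats this one-word-at-a-time step many times to build whole answers.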
Worries grew as this new group of generative AI tools produced not just words but also images, music and voices. They seemed to threaten the jobs of anyone who writes, draws, makes music or writes computer code. Concerns about AI tools fueled strikes by Hollywood writers and actors and legal disputes brought by artists and writers.
Some of the most respected AI scientists warned that the technology’s progress was marching toward outsmarting humans and possibly threatening their existence. Yet other scientists called the warnings overblown and brought attention to more immediate risks.
By spring, AI-created videos known as deepfakes had appeared in U.S. election campaigns. Deepfakes are realistic-looking videos in which people’s actions and speech have been digitally changed. One deepfake falsely showed Donald Trump embracing the nation’s former top infectious disease expert. The technology made it increasingly difficult to tell the difference between real and fake videos of the wars in Ukraine and Gaza.
By the end of the year, AI-related turmoil had reached ChatGPT’s maker, OpenAI. The San Francisco-based company led by chief executive Sam Altman was nearly destroyed by disagreements over its mission.
AI debates also led to new rules from the European Union and to consideration of similar measures by others, including the United States Congress.
Too much excitement?
AI products released in 2023 have brought technology achievements not possible in earlier times. But the market research company Gartner says they arrive with “inflated expectations” and “massive claims” about their abilities.
Gartner analyst Dave Micko said leading AI developers are pushing the latest technology with their current line of products, including search engines and workplace productivity software.
He said, “As much as Google and Microsoft and Amazon and Apple would love us to adopt the way that they think about their technology and that they deliver that technology, I think adoption actually comes from the bottom up.”
It is easy to forget that this is not the first appearance of AI in business. The technology has been used to help guide self-driving cars, identify objects and individual faces, and recognize speech in software like Siri and Alexa.
Tom Gruber helped create the voice assistant Siri, which Apple bought and launched on the iPhone in 2011. At that time, it was the only major use of AI that most people had ever experienced.
But Gruber believes what’s happening now is the “biggest wave ever” in AI, launching new possibilities as well as dangers.
The dangers could come fast in 2024. Major national elections in the U.S., India and elsewhere could get flooded with AI-created deepfakes.
In the longer term, AI technology’s rapidly improving language, visual sensing and step-by-step planning abilities could create a kind of digital assistant — but only if given access to the “inner loop of our digital life stream,” Gruber said.
“They can manage your attention as in, ‘You should watch this video. You should read this book. You should respond to this person’s communication,’” Gruber said. He added, “That is what a real executive assistant does. And we could have that, but with a really big risk of personal information and privacy.”
I’m John Russell. And I’m Ashley Thompson.