The social media site X has blocked some searches for Taylor Swift in reaction to an online spread of pornographic deepfake images of the singer.
A “deepfake” is an image, sound recording, or video created to make it appear that people in it are saying or doing things that they did not say or do.
Attempts to search for her name without quotation marks on the site Monday resulted in a failure message. It asked users to retry their search. The words “Don’t fret — it’s not your fault” also appeared.
However, searching for her name with quotation marks showed posts in which her name appeared.
The images of Swift began spreading widely on X last week. She is now the most famous victim of a problem that technology companies and anti-abuse groups have struggled to fix.
“This is a temporary action and done with an abundance of caution as we prioritize safety on this issue,” Joe Benarroch, head of business operations at X, said in a statement.
After the images began spreading, the singer’s loyal fan base, called “Swifties,” quickly came to her defense. They launched a protective movement and flooded the site with positive images of her. They used the hashtag #ProtectTaylorSwift on their posts. Some said they were reporting accounts that were sharing the deepfakes.
The deepfake-detection group Reality Defender said it tracked many posts containing pornographic material depicting Swift. Many of the posts were found on X, formerly known as Twitter. Some images also made their way to Facebook and other social media sites.
The researchers found at least 20 different images created by artificial intelligence (AI).
Researchers have said the number of pornographic deepfakes has grown in the past few years. The increase has come as the technology used to produce such images has become more available and easier to use.
In 2019, a report released by the AI company DeepTrace Labs showed these images were mainly weaponized against women. Most of the victims, it said, were Hollywood actors and South Korean singers.
I’m Jill Robbins.