Taylor Swift Deepfake Images on X (Formerly Twitter) Alarming to White House, Congress

Deepfakes of celebrities, politicians, and athletes have proliferated across the internet in recent years. But when deepfake pornographic images of Taylor Swift started circulating on X (formerly known as Twitter), a red line seemed to have been crossed: the U.S. Congress, the White House, and countless lawmakers voiced their concern that generative AI had become a disgusting tool for those with malicious intent. The incident also reignited calls for regulating AI across "big tech" companies such as OpenAI, Google, Amazon, and Microsoft.


White House press secretary Karine Jean-Pierre said the Biden administration was "alarmed" by the spread of explicit AI-generated images of Taylor Swift.

Jean-Pierre also said social media companies should create and enforce their own rules to prevent the spread of misinformation and of images like the ones of Swift.

Many companies, including OpenAI, have already pledged to ban the use of their generative AI tools in political campaigns and election contexts. Most proprietary generative AI models are built and released under each company's responsible AI principles, which serve as industry best practices. But jailbreaking models and creative prompt engineering can still evade those safeguards.

Taylor Swift deepfakes difficult to stop on social media

Before X pulled the Taylor Swift deepfake images and blocked searches on its platform for phrases such as "Taylor Swift AI" and "Taylor Swift deepfake," the crude and pornographic images had been viewed more than 45 million times and had drawn 24,000 reposts and hundreds of thousands of likes and bookmarks.

Once deepfake images and videos are public on the internet, they appear impossible to erase. At best, platforms such as X can enforce controls that automatically detect and reject uploads matching images they have flagged or hashed as banned. But individually operated file repositories, Mastodon servers, and private message boards are just a few examples of venues where removing the content is beyond any single platform's control.
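The kind of automated control described above is often built on perceptual hashing: the platform keeps hashes of banned images and rejects uploads whose hash is close to one on the list. Below is a minimal, hypothetical sketch of that idea using a simple average hash over an 8x8 grayscale grid; production systems (e.g., PhotoDNA-style matchers) use far more robust hashes, and the function names here are illustrative, not any platform's actual API.

```python
# Hypothetical sketch: hash-based banned-image matching.
# Assumes images are pre-scaled to 8x8 grayscale grids (lists of lists of
# 0-255 ints); real pipelines hash full images with robust perceptual hashes.

def average_hash(pixels):
    """Compute a 64-bit average hash: each bit is 1 if that pixel is
    at or above the grid's mean brightness."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def is_banned(pixels, banned_hashes, max_distance=5):
    """Reject an upload whose hash is within max_distance bits of any
    banned hash -- tolerant of small edits like recompression."""
    h = average_hash(pixels)
    return any(hamming_distance(h, b) <= max_distance for b in banned_hashes)
```

Because matching is by distance rather than exact equality, lightly altered copies of a banned image (cropped borders, recompression, minor pixel edits) can still be caught, which is what makes this approach more useful than comparing file checksums.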

The threat of deepfakes is so concerning that many influencers, athletes, and celebrities, such as Tom Hanks, are already fighting back, warning fans about convincing but entirely fabricated deepfake productions of their likeness.

Lawmakers call for AI regulation and deepfake porn federal charges

Artists, celebrities, and athletes aren't the only ones concerned with the disturbing and damaging effects that deepfake porn of real people can cause. Lawmakers are aggressively moving to bring federal charges against users of AI tools who create or share nude deepfake photos of real people.

In November 2023, several girls at Westfield High School in New Jersey unknowingly had real, innocuous images of themselves shared on social media altered through AI tools to produce convincing deepfake pornographic images of their likeness. The images were created by a group of boys and spread virally throughout the school.

The incident's invasion of privacy, deception, and harm mark a new frontier for cybersecurity and AI safety.

In January, Rep. Joseph Morelle (D., N.Y.) re-proposed the “Preventing Deepfakes of Intimate Images Act,” which would outlaw the nonconsensual sharing of AI-created or altered intimate images. He had previously introduced the bill but added Rep. Tom Kean, a Republican from New Jersey, as a co-sponsor. Kean introduced a bill in November called the AI Labeling Act of 2023, requiring AI-generated content to be clearly identified and labeled as such.

Today, about 10 states, including Georgia, Hawaii, Texas, and Virginia, have laws that criminalize nonconsensual deepfake porn.

Deepfakes can also be used to break into sensitive online accounts, such as financial or investment accounts that rely on voice authentication. Voice-cloning tools such as ElevenLabs – reportedly the same tool used to create the New Hampshire deepfake robocall of President Biden – produced a clone convincing enough to trick both the family and the bank of Joanna Stern, a tech reporter at The Wall Street Journal.
