Last week, AI-generated images showing superstar Taylor Swift in sexually suggestive and explicit positions spread across the internet, sparking horror and condemnation. Experts say it is a wake-up call that shows we need real regulation of AI now.
Mohit Rajhans, technology and media consultant at Think Start, told CTV News Channel on Sunday that “we have become the wild west online” when it comes to generating and disseminating AI content.
“The train has left the station, artificial general intelligence is here and now it will be up to us to figure out how we are going to regulate it.”
It reportedly took 17 hours to remove the fake images circulating on X.
The terms “Taylor Swift,” “Taylor Swift AI” and “Taylor AI” currently return errors if a user tries to search for them on X. The company has said this is a temporary measure while it evaluates security on the platform.
But fake pornographic images of the singer were viewed tens of millions of times before social media sites took action. Deepfakes are AI-generated images and videos of fake situations featuring real people. The big danger is that they are much more realistic than a Photoshopped image.
“There is a huge potential for harassment and misinformation to spread if this technology is not regulated,” Rajhans said.
The attack on Swift is part of a disturbing trend of using AI to generate pornographic images of people without their consent, a practice known as “revenge porn” that is predominantly used against women and girls.
While AI has been misused for years, Rajhans said there is definitely a “Taylor effect” in making people sit up and pay attention to the problem.
“What’s happened is…because of Taylor Swift’s image being used to do everything from selling products she’s not affiliated with to altering her (image) into various sexual acts, more people have become aware of how rampant this technology is,” he said.
Even the White House is paying attention, commenting Friday that action needs to be taken.
In a statement Friday, White House press secretary Karine Jean-Pierre said the spread of fake nude images of Swift was “alarming” and that legislative measures were being considered to better address such situations in the future.
“Obviously there should be legislation to address this issue,” she said, without specifying which legislation the administration supports.
SAG-AFTRA, the union that represents thousands of actors and artists, said in a statement Saturday that it supports legislation introduced by U.S. Rep. Joe Morelle last year, called the Preventing Deepfakes of Intimate Images Act.
“The development and dissemination of false images, especially those of a lewd nature, without someone’s consent must be made illegal,” the union said in the statement.
At the White House briefing, Jean-Pierre added that social media platforms “have an important role to play in enforcing their own rules” to prevent the spread of “non-consensual intimate images of real people.”
Rajhans said Sunday that it is clear that social media companies must step up to deal with deepfakes.
“We need to hold social media companies accountable,” he said. “There have to be some big fines associated with some of these social media companies. They’ve made a lot of money off of people using social media.”
He noted that if people upload a song that doesn’t belong to them, there are ways to flag it on social media sites.
“So why aren’t they using this technology now in an effort to moderate social media so that deepfakes can’t penetrate?” he said.
A 2023 report on deepfakes found that 98 percent of all deepfake videos online were pornographic in nature, and that 99 percent of the people targeted by deepfake pornography were women. South Korean singers and actresses were disproportionately targeted, making up 53 percent of the victims.
The report highlighted that there is now technology that allows users to create a 60-second deepfake pornographic video for free and in less than half an hour.
The sheer speed of progress occurring in the world of AI is working against us in terms of managing the impacts of this technology, Rajhans said.
“It’s becoming so vulgar that you and I can just make memes and share them and no one can tell the difference between (whether) it’s a real event or it’s something that’s been recreated,” he said.
“This isn’t just about Taylor Swift. This is about harassment, this is about sharing fake news, this is about an entire culture that needs to be educated about how this technology is used.”
It is unclear how long it could take for Canadian legislation to curb deepfakes.
The Canadian Security Intelligence Service called deepfakes a “threat to Canada’s future” in a 2023 report, which concluded that “collaboration between partner governments, allies, academics and industry experts is essential both to maintain the integrity of globally distributed information and to address malicious applications of evolving AI.”
Although a proposed regulatory framework for AI systems in Canada, called the Artificial Intelligence and Data Act (AIDA), is currently being considered in the House of Commons, it is not expected to come into force this year. If the bill receives royal assent, a consultation process will be launched to clarify the AIDA, with the framework coming into force no earlier than 2025.