AI brings deepfake porn to the masses, as Canadian laws catch up

Brieanna Charlebois, Canadian Press

Published Saturday, February 3, 2024 5:40 pm EST

Underage Canadian high school girls are targeted using artificial intelligence to create fake explicit photos that spread online. Google searches show multiple free websites capable of “undressing” women in a matter of minutes. The world’s biggest pop star is the victim of a deepfake pornographer, whose images are viewed tens of millions of times.

This is the new era of artificial pornography for the masses.

The technology needed to create convincing fake pornography has existed for years, but experts warn that it is faster and more accessible than ever, creating an urgent challenge for Canadian policymakers.

Advances in artificial intelligence have made it possible to do with a cell phone what would previously have required a supercomputer, said Philippe Pasquier, a professor of creative AI at Simon Fraser University in British Columbia.

Pasquier stated that society has “lost the certainty” of what is real and what is altered.

“Technology has improved a little in the lab, but mostly the quality of technology that everyone has access to has improved,” he said.

“If you increase the accessibility of technology, that means that good and bad actors will be much more numerous.”

Across Canada, lawmakers have been trying to keep up. Eight provinces have enacted laws on intimate images, but only half of them refer to altered images.

B.C. recently became the latest to cover altered images, joining Prince Edward Island, Saskatchewan and New Brunswick.

British Columbia’s law, which came into force on January 29, allows people to go to the Civil Resolution Tribunal to have intimate images removed, regardless of whether they are real or fake, and to pursue perpetrators and internet companies for damages.

Individuals can be fined up to $500 per day and websites up to $5,000 per day if they do not comply with orders to stop distributing images posted without consent.

Premier David Eby said the recent sharing of fake images of pop star Taylor Swift showed that no one was immune to such “attacks.”

Attorney General Niki Sharma said in an interview that she is concerned that people do not come forward when they are victims of non-consensual sharing of intimate images, whether real or not.

“Our legal systems need to improve when it comes to the impacts of technology on society and individuals, and this is one part of that,” she said of the new legislation.

The province said it could not provide specific data on the extent of AI-altered images and deepfakes.

But cases have occasionally come to light elsewhere.

In December, a Winnipeg school notified parents that AI-generated photos of underage students were circulating online.

At least 17 photographs taken from students’ social media accounts were altered using artificial intelligence to make them sexually explicit. School officials said they had contacted police and made support available to students directly or indirectly affected.

“We are grateful for the courage of the students who brought this to our attention,” Christian Michalik, superintendent of the Louis Riel School Division, said in a letter to parents that was also posted on Facebook by a school division administrator.

Manitoba has intimate image laws, but they do not address doctored images.

Brandon Laur is the CEO of White Hatter, an internet security company based in Victoria.

The firm recently conducted an experiment and found that it took only a few minutes using free websites to virtually strip an image of a fully clothed woman, something Laur called “shocking.”

The woman used in the experiment was not real; she was also created with AI.

“It’s pretty surprising,” Laur said in an interview. “We’ve been dealing with cases (of fake sexual images) since the early 2010s, but back then it was all Photoshop.

“Nowadays, it’s much easier to do it without any skills.”

White Hatter’s experiment used Google to find seven easily accessible and easy-to-use websites and apps capable of creating so-called “deep nudes.”

In the original photo, a young woman dressed in a blue long-sleeved shirt, white pants, and sneakers walks toward the viewer. In the following scenes, she is nude, partially nude, or wearing lingerie; White Hatter censored the resulting images with black bars.

LEGAL AVENUES, NEW AND OLD

Angela Marie MacDougall, executive director of Battered Women’s Support Services, said her organization was consulted about the British Columbia legislation.

She said Swift’s case underscores the urgent need for comprehensive legislation to combat deepfakes on social media, and applauded the province for making it a priority.

But the legislation targets the non-consensual distribution of explicit images, and the next “crucial step” is to create legislation targeting creators of non-consensual images, she said.

“It’s very necessary,” she said. “There’s a gap there. There are other possibilities that would require access to resources, and the women we work with wouldn’t be able to hire a lawyer and pursue a civil legal process around the creation of images … because, of course, it costs money to do that.”

But there may be other legal avenues for victims.

Suzie Dunn, assistant professor of law at Dalhousie University in Halifax, said there were several laws that could apply to deepfakes and doctored images, including those related to defamation and privacy.

“There’s a new social problem that’s emerging with AI-generated content, image generators, and deepfakes, where there’s this kind of new social harm that doesn’t fit neatly into any of these existing legal categories that we have,” she said.

She said some forms of fakery, such as satire, might merit exceptions.

“As technology evolves, the law has to constantly catch up and this worries me a little bit, that there may be some delay with this generative AI.”

Pablo Tseng, an intellectual property lawyer in Vancouver, said deepfakes are “accelerating” a problem that has existed for decades: misrepresentation.

“There has always been a body of law targeting misrepresentation that has been around for a long time, and that is still very applicable to deepfakes today, (including) the torts of defamation, misrepresentation or false information, and the tort of misappropriation of personality.”

But he said specific laws, like British Columbia’s legislation, are steps in the right direction to continue combating the problem, in conjunction with existing laws.

Tseng said he was aware of a case from Quebec that showed how the misuse of deepfake technology could fall under child pornography laws. That case led to a prison sentence of more than three years for a 61-year-old man who used artificial intelligence to produce falsified child pornography videos.

But Tseng said he was not aware of any lawsuits in which the technology is cited in the context of misrepresentation.

“It’s clear that just because no judgment has been made doesn’t mean it’s not happening all around us. Taylor Swift is just the latest example in a series of other examples where faces, personalities and portraits of celebrities have simply been misused,” he said.

Dunn said she believed content moderation by websites was probably the best way forward.

She called on search engines like Google to deindex websites primarily focused on creating sexual deepfakes.

“At a certain point, I think some people just give up, even people like Scarlett Johansson or Taylor Swift, because there’s so much content being produced and so little opportunity for legal recourse, because every person who shares it would have to be sued,” Dunn said.

She said that while most deepfake videos involve celebrities, there are cases of “ordinary women” being targeted.

“All you need is a still image of a person, and you can feed it into these nude image generators and it just creates a still image that looks like they’re naked, and most of that technology only works on women.”

‘PAINFUL AND DEHUMANIZING’

Australian activist Noelle Martin is well aware of the danger.

The 29-year-old woman said in an interview that she did a reverse search of a photo of herself on Google about 10 years ago.

Her curiosity turned to mortification when she found fake, sexually explicit photographs of herself.

“It’s the most shocking, painful, dehumanizing experience I’ve ever gone through,” she said.

“To see yourself depicted in all these different positions and circumstances, in the most graphic and degrading way, is disgusting.”

She went to the police, but since there were no laws against it at the time, she said they told her to contact the websites to try to remove them. Some did, but others did not respond and the fake images (and eventually videos) continued to multiply.

Martin said she still doesn’t know who attacked her or why.

She began speaking publicly, advocating for a national Australian law that would fine companies thousands of dollars if they did not comply with takedown orders. The law was passed in 2018.

Martin, who now works as a legal researcher at the University of Western Australia, said a global approach was needed to combat the problem given the “borderless” nature of the internet, but it had to start locally.

Although recent conversations about the misuse of AI have focused on public figures, Martin said she hopes the focus will be on “everyday women.”

“Not only do we not have laws in some jurisdictions, but in many of those that do, they are not enforced. When you put it in the context of this becoming so easy and fast for people, it’s scary because I know exactly what it will be like,” she said.

“It’s not going to be the experience we’re seeing, for example, in the case of Taylor Swift. The world is not going to rally around a regular person and help them remove the images, and it’s not going to respond in a way that protects them.”
