Liberals intend to include sexually explicit deepfakes in online harms bill

OTTAWA – The Liberal government plans to ensure sexually explicit “deepfakes,” like the images of Taylor Swift that generated global headlines last month, are addressed in upcoming online harms legislation.

If passed, the long-delayed bill would establish new rules to govern certain categories of online content, including the nonconsensual sharing of intimate images.

It is not clear whether fake videos created by artificial intelligence would fall within the existing Criminal Code definition of intimate images.

“Keeping our children and young people safe online is a legislative priority for our government, especially given the changing capabilities of AI,” Justice Minister Arif Virani said in an emailed statement.

The statement highlighted deepfakes as content that can “exacerbate forms of online exploitation, harassment and cyberbullying.”

Deepfake videos feature remarkably realistic simulations of celebrities, politicians or other public figures and their voices; sexual deepfakes include nude or pornographic content.

The government intends to address deepfakes in the upcoming bill, said a government source familiar with the legislation who was not authorized to discuss details that are not yet public.

The source, who spoke on condition of anonymity, stopped short of confirming the plan directly, citing parliamentary privilege – the rule requiring the House of Commons to be the first to know the details of government legislation.

Celebrities are not the only victims of this type of AI-generated content, said Conservative MP Michelle Rempel Garner.

“The Taylor Swift example is a high-profile case, but there are examples in Canada of women already facing this: women who don’t have the resources that Taylor Swift has,” Rempel Garner said.

She cited a case last year in Winnipeg where a school notified parents that AI-generated photos of underage students were being shared online.

Most provinces have laws that deal with the distribution of intimate images, and several of them specifically address doctored images, said Roxana Parsa, a lawyer with the Women’s Legal Education and Action Fund.

Cases are handled through civil resolution courts where victims can seek help to have photographs removed and potentially receive compensation, Parsa said.

However, at the national level the law remains unclear.

The Criminal Code “does not specify altered images,” Parsa said, and there have been too few legal cases on the matter to provide much additional clarity.

That’s because most of these laws were developed “before deepfakes were a major concern,” said Kristen Thomasen, an assistant professor at the University of British Columbia’s law school who also worked with Parsa’s group.

There is also uncertainty over whether “altered images” applies to deepfakes, since they can be generated entirely from scratch rather than by modifying pre-existing images, Thomasen said.

The posting of fake images of Taylor Swift has led several lawmakers around the world to propose laws specifically addressing sexually explicit deepfakes.

Canadian lawmakers should do the same, Thomasen said, and the long-promised Online Harms Bill is the place to do it.

“To me, it seems very obvious that the damage is there,” she said.

“Many of the same or similar harms are exacerbated by the creation of images using artificial intelligence, as well as by the distribution of real images.”

Some say it would be easier to pass a single criminal law amendment to address the issue, rather than fold it into a broader bill that is likely to be complex and controversial.

Peter Menzies, former vice-chair of the Canadian Radio-television and Telecommunications Commission, said it would be a faster, non-partisan approach.

“I think you should always take the quickest and most efficient path to a solution, if it’s available, and I think that’s easy to do,” said Menzies, long a vocal critic of the Liberals’ previous attempts to regulate the online giants.

“You’ll probably only have to change about four or five words.”

There is a risk that including it in online harms legislation will politicize the issue, he added: “I wouldn’t want this to become something that is used for political purposes.”

Previous attempts to regulate online platforms have not worked well for the ruling Liberals.

The first version of an online harms bill, introduced in 2021, drew widespread criticism. The government has already far exceeded its own deadline to resurrect the bill.

The Online Streaming Act, which updated broadcasting laws to include online platforms, suffered years of delay amid heated debate. And the Online News Act generated its own controversy.

“I would like the government to treat this issue urgently and with importance, and not confuse it with a bill that could follow the spirit” of those previous bills, Rempel Garner said.

The Criminal Code definition could be updated so that a genuine intimate photo and a similar image generated by artificial intelligence receive the same treatment under the law, she said.

“There is the same potential for harm, so we should extend the same principle.”

But Parsa warned that such an amendment should not be seen as a “complete answer to the deepfake problem.”

A simple amendment could create a false sense of security that the problem has been solved, she maintained.

She says the government must make a broader effort to “better hold platforms accountable for facilitating the distribution of deepfakes and other forms of technology-enabled gender-based violence.”

This report by The Canadian Press was first published Feb. 7, 2024.
