AI is already permeating politics and it’s time to set rules, experts say

A woman in a grayish-brown shirt is sitting next to a man and appears to be listening intently to someone out of frame. She has her arms crossed on a table, but also a third arm, dressed in plaid, supporting her chin.

Voters did a double take when they looked at a Toronto mayoral candidate’s platform last summer and saw an image of the mysterious three-armed woman.

It was evident that Anthony Furey’s team had used artificial intelligence and, amid much public laughter, they confirmed it.

The mistake was a prominent example of how AI is coming into play in Canadian politics.

But it can also be used in much more subtle ways. Without rules in place, we won’t know to what extent it is being used, says the author of a new report.

“We’re still in the early days, and we’re in this strange period where there are no rules on the disclosure of uses of generative AI,” says University of Ottawa professor Isabel Dubois.

“We don’t necessarily know everything that’s going on.”

In a report released Wednesday, Dubois outlines the ways AI is being used in both Canada and the United States: to conduct polls, predict election results, help prepare lobbying strategies, and detect abusive social media posts during campaigns.

Generative AI, or technology that can create text, images and videos, was popularized with the launch of OpenAI’s ChatGPT in late 2022.

Many Canadians are already using the technology in their daily lives, including to create political content such as campaign materials. Last year in the United States, the Republican Party launched its first AI-generated attack ad.

Sometimes it is obvious that AI has been used, as in the case of the three-armed woman.

When the Alberta Party shared a video of a man’s endorsement online in January 2023, people on social media quickly pointed out it wasn’t a real person, the report says. The video was taken down.

But if the content appears real, Dubois says it may be difficult to trace.

The lack of established rules and regulations around the use and disclosure of AI is a “real problem,” she says.

“If we don’t know what’s happening, then we can’t make sure it happens in a way that supports fair elections and strong democracies, right?”

Nestor Maslej, research director at the Institute for Human-Centered Artificial Intelligence at Stanford University, agrees that this is a “completely valid concern.”

One way AI could do real damage in elections is through deepfake videos.

Deepfakes, or fake videos that make it look like a celebrity or public figure is saying something they never said, have been around for years.

Maslej cites high-profile examples of fake videos of former US President Barack Obama saying derogatory things and a fake video of Ukrainian President Volodymyr Zelenskyy surrendering to Russia.

Those examples “occurred in the past, when the technology was not as good or as capable, but the technology will continue to improve,” he says.

Maslej says technology is progressing rapidly and newer versions of generative AI make it harder to tell if images or videos are fake.

There is also evidence that humans have a hard time identifying synthetically generated audio, he notes.

“Deepfake voices are something that have tended to trip up many humans before.”

This is not to suggest that AI cannot be used responsibly, Maslej notes, but it can also be used in election campaigns for malicious purposes.

Research shows that “it is relatively easy and not that expensive to establish disinformation channels using AI,” he says.

Not all voters are equally vulnerable.

People who aren’t as familiar with these types of technologies are probably “much more susceptible to getting confused and taking something that’s actually fake as real,” Maslej says.

One way to set some guardrails is with watermarking technology, he says: it automatically marks AI-generated content as such so people don’t mistake it for real.

Whatever solution authorities decide on, “now is the time to act,” says Maslej.

“It seems to me that the environment in Canadian politics has become a little more complicated and confrontational at times, but hopefully there is still an understanding among all participants that something like AI disinformation can create a worse atmosphere for everyone.”

It is time for government agencies to identify the risks and harms that AI can pose in electoral processes, he says.

“If we wait until there’s, I don’t know, some kind of deepfake of (Prime Minister Justin) Trudeau that causes the Liberals to lose the election, then I think we’re going to open up a very nasty can of political worms.”

Some malicious uses of AI are already covered by current legislation in Canada, Dubois says.

The use of deepfake videos to impersonate a candidate, for example, is already illegal under electoral law.

“On the other hand, there are potential uses that are novel or that do not clearly fit within the boundaries of existing standards,” she says.

“And so initially it has to be kind of case-by-case, until we figure out the limits of how these tools could be used.”

This report by The Canadian Press was first published Feb. 1, 2024.
