Artificial intelligence | “The urgency to regulate AI has increased tenfold”

How should artificial intelligence (AI) be regulated? As examples of misuse multiply, countries and territories around the world are trying to answer this question.

When AI clones a voice


Fernand Boissonneault, 91, was tricked by fraudsters who used artificial intelligence to clone his son’s voice.

A phone call featuring a highly realistic imitation of his son’s voice, supposedly being detained by police, nearly pushed a Trois-Rivières man to pay a fake $5,800 bail to fraudsters last month. It is potentially a case of a deepfake produced with AI. This shows once again that AI technology is evolving very quickly and now has the power to influence very real behavior, says Benjamin Prud’homme, Vice-President, Public Policy, Society and Global Affairs at Mila – Quebec Artificial Intelligence Institute. “The examples now number in the thousands. With generative AI, we can create videos, audio… That means someone can create false messages, make fraud more effective, create texts, create fake images that show politically explosive things. The urgency (to regulate AI) has increased tenfold,” he says.

2024, a pivotal year


Artificial intelligence was a prominent topic at the World Economic Forum held in mid-January in Davos, Switzerland.

Mr. Prud’homme notes that 2023 saw a sharp acceleration in the priority governments give to regulating AI, suggesting a pivotal moment in 2024 and 2025. “The European Union’s AI Act is on its way, the United States adopted a White House executive order last October, and Canada is in the process of developing AI legislation, so things are moving.” AI technologies have benefits, including improving the health system and addressing the climate crisis, he says. “But since they also pose risks, including to national security, we can no longer ignore them. As the capabilities of AI systems increase, so do the opportunities and the risks.”

Chasing innovation


Samsung Electronics Co. Vice President Han Jong-hee discusses artificial intelligence technologies during a press conference at the Consumer Electronics Show (CES) in Las Vegas on January 8.

Pierre Larouche, professor at the faculty of law at the University of Montreal, notes that governments give the impression of lagging behind technological developments while seeking to show citizens that they are on top of what is happening. “While we are not in a complete legal vacuum, the current framework is far from ready to effectively regulate artificial intelligence (AI),” he says. He notes that commercial interests want as much room for maneuver as possible. “One element of the legal framework that could be adapted quickly is civil liability,” says Mr. Larouche. “It would send a clear message to companies working on these services that neglecting regulations could have serious financial consequences.”

Regulate like pharmaceuticals?


Regulation is being requested by the industry itself, which shows that it is not capable of regulating itself, says Pierre Larouche, professor at the faculty of law at the University of Montreal.

Mr. Larouche gives the example of the pharmaceutical industry, which is heavily regulated, with certain powerful drugs sold only by prescription. “The pharmaceutical industry knows that a rushed product release can have serious financial consequences, consequences that can run into billions of dollars. With AI, the risks could be even greater, prompting companies to think more carefully about their actions.” Regulation is being requested by the industry itself, which shows that it is not capable of regulating itself, he notes. “This gives the impression that governments are unable to act, which reflects a certain indecision.”

A culture to change


Products are sometimes launched without sufficient consideration of the consequences.

So far, the bill before the Canadian Parliament appears to lack incentives to push the industry to change, says Mr. Larouche. “It’s an industry that is often left to its own devices, experimenting and seeing what works.” The limits of this attitude have become evident in recent years with social networks, where products are sometimes launched without sufficient consideration of the consequences. “The big challenge lies in changing the mentality of the technology industry. We must encourage it to think about the consequences of its actions. Mistakes must have consequences, and businesses must proceed with caution. In that sense, I am not sure the Canadian government is clear enough on this point.”

International ramifications


The global nature of AI means that it must be regulated as internationally as possible.

The global nature of AI means that it must be regulated as internationally as possible, notes Benjamin Prud’homme. “It will take an international agreement and national regulations. Otherwise, there could be AI evasion, much like tax evasion. There is also the whole geopolitical aspect: if China and the United States are not part of the equation, it will have less impact.” A UN conference on AI and human rights is to be held in February, he notes. “The UN will have to make many compromises to get nearly 200 countries to agree. International negotiation takes a long time. In the meantime, nation states must move forward. The EU can raise the bar. American AI giants will have to comply with what the EU decides if they want access to this market of 400 million people. So it could benefit everyone, even if it won’t solve all the problems.”
