Killer robots | The “Oppenheimer moment” of artificial intelligence

If governments want to control the emergence of a new generation of artificial-intelligence (AI) weapons, it is one minute to midnight. That was the warning issued to them on Monday.

Land-based autonomous weapons systems are already proliferating – in Ukraine and Gaza in particular – and algorithms and drones are already helping militaries decide whether to strike targets.

These life-and-death decisions could soon be entrusted entirely to machines.

Killing machines

“This is the Oppenheimer moment of our generation,” says Austrian Foreign Minister Alexander Schallenberg, referring to J. Robert Oppenheimer, the father of the atomic bomb, who after 1945 advocated limiting the proliferation of nuclear weapons.

Civilian, military and technology leaders from around a hundred countries met in Vienna on Monday to discuss ways to govern the application of AI to military technologies – two sectors that excite investors and where stock market valuations reach historic levels.

The proliferation of conflicts and the profits tied to AI development are hampering efforts to control the advent of killer robots, says Estonia’s Jaan Tallinn, one of the first investors in DeepMind Technologies, Alphabet’s AI laboratory, and co-founder of the Centre for the Study of Existential Risk at the University of Cambridge.

“The incentives that drive Silicon Valley may not be aligned with the interests of the rest of humanity.”

— Jaan Tallinn, AI investor and co-founder of the Centre for the Study of Existential Risk

Governments around the world are collaborating with companies that are integrating AI into defense. The Pentagon is funding small AI firms to the tune of millions of dollars. On April 26, the European Union awarded Thales SA a contract to build an image database for evaluating military targets.

Assassination targets and bombings

According to the Tel Aviv-based magazine +972, Israel uses an artificial-intelligence program called “Lavender” to select assassination targets. The report, which Israel disputes, says the system played a “central role in the unprecedented bombardment of the Palestinians”.

“The future of slaughterbots is now,” says Anthony Aguirre, a physicist who anticipated the technology in a 2017 short film seen by more than 1.6 million people. “We need a treaty to control these weapons under the auspices of the United Nations.”

But advocates of a diplomatic solution are likely to be disappointed, at least in the short term, says Alexander Kmentt, Austria’s top disarmament official and the architect of this week’s conference. “The classic approach to arms control does not work: this is not a single weapons system, but a combination of technologies for civilian and military use,” Mr. Kmentt said in an interview.

In the absence of a new comprehensive treaty, countries may have to make do with existing legal tools, Mr. Kmentt believes: export controls and humanitarian law could help maintain some control over the proliferation of weapon systems that integrate AI.

In the longer term, as the technology becomes available to non-state actors and perhaps terrorists, countries will be forced to develop new rules, predicts Arnoldo André Tinoco, Costa Rica’s foreign minister.

“Access has become easy,” he said. “Before, only a small number of countries could afford to enter the autonomous arms race. Those barriers have fallen. Today, students with a 3D printer and basic programming skills can build drones capable of causing large numbers of casualties. Autonomous weapon systems have forever changed the concept of international stability.”

Read the +972 magazine article on AI and the war in the Gaza Strip (in English)


reference: www.lapresse.ca
