Imagine an artificial-intelligence-driven military drone capable of autonomously patrolling the perimeter of a country or region and deciding who lives and who dies, without a human operator.
Now do the same with tanks, helicopters and biped or quadruped robots. Welcome to the not-so-distant future of LAWS, or lethal autonomous weapon systems. The UN conference on regulating LAWS in warfare, held this August in Geneva, concluded not with an outright ban but with an agreement to revisit the topic in November. The stall was initiated by the U.S., Russia, Israel, South Korea and Australia. Until that follow-up meeting, one thing is sure — AI-controlled robotic warfare isn’t too far off.
Not everyone shares this sentiment, though. In July, 2,400 leading artificial-intelligence (AI) researchers, including Tesla CEO Elon Musk, signed a pledge against killer robots, promising not to participate in the development or manufacture of machines that can identify and attack people without human oversight. That may sound encouraging, but countries can easily source the know-how and tools needed to build their lethal “tin men” even without these researchers joining the team.