Western governments are racing each other to set up artificial intelligence (AI) safety institutes. The UK, US, Japan and Canada have all announced such initiatives, and the US Department of Homeland Security added an AI Safety and Security Board to the mix only last week. Given this heavy emphasis on safety, it is remarkable that none of these bodies governs the military use of AI. Meanwhile, the modern-day battlefield is already demonstrating clear AI safety risks.
According to a recent investigation by the Israeli magazine +972, the Israel Defence Forces (IDF) have used an AI-enabled program called Lavender to flag targets for drone attacks. The system combines data and intelligence sources to identify suspected militants. The program allegedly flagged tens of thousands of targets, and the resulting bombing in Gaza reportedly caused excessive collateral deaths and damage. The IDF denies several aspects of the report.