Military is the missing word in AI safety discussions

Attempts by governments to regulate the technology must look at its use on the battlefield.

The Israel Defense Forces have used an AI-enabled program called Lavender to flag targets for drone attacks.

PHOTO: EPA-EFE

Marietje Schaake

Western governments are racing each other to set up artificial intelligence (AI) safety institutes. The UK, US, Japan and Canada have all announced such initiatives, while the US Department of Homeland Security added an AI Safety and Security Board to the mix only last week. Given this heavy emphasis on safety, it is remarkable that none of these bodies governs the military use of AI. Meanwhile, the modern-day battlefield is already demonstrating the potential for clear AI safety risks. 

According to a recent investigation by the Israeli magazine +972, the Israel Defence Forces have used an AI-enabled program called Lavender to flag targets for drone attacks.

The system combines data and intelligence sources to identify suspected militants. The program allegedly identified tens of thousands of targets, and the resulting bombing in Gaza caused excessive collateral damage and civilian deaths. The IDF denies several aspects of the report.