Why some AI research may be too dangerous to share


A recent feature in the Financial Times magazine about the British spy agency GCHQ revealed one especially surprising tidbit: The agency had been inspired to build an Internet surveillance system by an academic paper from DeepMind, the Google-owned artificial intelligence (AI) company, describing an artificial chess grand master.

A GCHQ official said: "The people who did this at DeepMind, they published all the work, it's out there, anybody can access it. So we should make use of it."



A version of this article appeared in the print edition of The Straits Times on June 25, 2019, with the headline "Why some AI research may be too dangerous to share".