In regulating AI, we may be doing too much and too little

Success will mean staying focused on concrete problems like deep fakes.

Done correctly, with an eye towards the present, regulation might protect the vulnerable and promote broader and more salutary innovation. PHOTO: REUTERS
When US President Joe Biden signed his sweeping executive order on artificial intelligence (AI) last week, he joked about the strange experience of watching a “deep fake” of himself, saying: “When the hell did I say that?”

The anecdote was significant, for it linked the executive order to an actual AI harm that everyone can understand – human impersonation. Another example is the recent boom in fake nude images that have been ruining the lives of high school girls. These everyday episodes underscore an important truth: The success of the United States government’s efforts to regulate AI will turn on its ability to stay focused on concrete problems like deep fakes, as opposed to getting swept up in hypothetical risks like the arrival of our robot overlords.