In regulating AI, we may be doing too much and too little
Success will mean staying focused on concrete problems like deep fakes.
Done correctly, with an eye towards the present, regulation might protect the vulnerable and promote broader and more salutary innovation.
PHOTO: REUTERS
Tim Wu
When US President Joe Biden signed his sweeping executive order on artificial intelligence (AI) in October 2023, he joked about the unsettling experience of watching a deep fake of himself.
The anecdote was significant, for it linked the executive order to an actual AI harm that everyone can understand – human impersonation. Another example is the recent boom in fake nude images that have been ruining the lives of high school girls. These everyday episodes underscore an important truth: The success of the United States government’s efforts to regulate AI will turn on its ability to stay focused on concrete problems like deep fakes, as opposed to getting swept up in hypothetical risks like the arrival of our robot overlords.