In regulating AI, we may be doing too much and too little

Success will mean staying focused on concrete problems like deep fakes.

Done correctly, with an eye towards the present, regulation might protect the vulnerable and promote broader and more salutary innovation.

PHOTO: REUTERS

Tim Wu

When US President Joe Biden signed his sweeping executive order on artificial intelligence (AI) last week, he joked about the strange experience of watching a “deep fake” of himself, saying: “When the hell did I say that?”

The anecdote was significant, for it linked the executive order to an actual AI harm that everyone can understand – human impersonation. Another example is the recent boom in fake nude images that have been ruining the lives of high school girls. These everyday episodes underscore an important truth: The success of the United States government’s efforts to regulate AI will turn on the government’s ability to stay focused on concrete problems like deep fakes, as opposed to getting swept up in hypothetical risks like the arrival of our robot overlords.
