US Securities and Exchange Commission chief warns of AI risks

Fraudsters may use artificial intelligence on an individualised basis, preying on personal vulnerabilities, said United States Securities and Exchange Commission chair Gary Gensler. PHOTO: REUTERS

WASHINGTON – Wall Street’s top regulator warned that artificial intelligence (AI) could heighten financial fragility by promoting “herding” – a behaviour observed in financial markets where “individual actors make similar decisions because they are getting the same signal from a base model or data aggregator”.

Speaking to an audience at the National Press Club in Washington on Monday, Mr Gary Gensler, chair of the United States Securities and Exchange Commission (SEC), noted that if a trading platform’s AI system considers the interests of both the platform and its customers, this “can lead to conflicts of interest”.

“Current model risk management guidance – generally written prior to this new wave of data analytics – will need to be updated, it will not be sufficient,” he said.

“In finance, conflicts may arise to the extent that advisers or brokers are optimising to place their interests ahead of their investors’ interests,” he warned.

The SEC is working on establishing potential rules on conflicts of interest in the use of predictive analytics, machine learning and similar technologies as they apply to investor interactions, he said.

On the potential for fraud, Mr Gensler, who among other roles has been professor of the practice of global economics and management at the Massachusetts Institute of Technology, said: “Since antiquity, bad actors have found new ways to deceive the public, (and) of course with AI, fraudsters have a new tool.”

Fraudsters may use AI on an individualised basis, preying on personal vulnerabilities, he added.

“We used to get spam, but we’d all get the same spam. Now, communications can be efficiently individualised. More seriously, bad actors may seek to use AI to influence elections, the capital markets or spook the public.”

But, he added: “Make no mistake though, under the securities law, fraud is fraud.”

The SEC is focused on identifying and prosecuting any form of fraud that may threaten investors, capital formation, or the markets more broadly, whether it involves generative AI or not, he said.

He noted that AI is the most transformative technology of our time, on a par with past game changers such as the mass production of automobiles or the Internet.

Mr Gensler’s remarks came amid a series of hearings Congress has held in recent months on the risks and advantages of AI.

At one such hearing at the Senate in May, lawmakers across party lines called for tougher regulation and greater disclosure of how AI is used.

But lawmakers are also trying to balance the need for regulation with the need for the US to develop and use AI.

“It’s like fire,” one former regulator told The Straits Times, asking not to be named.

“You can do all kinds of things using fire, and it warms you – but it can also burn you.”

At another hearing in the House on June 22, Oklahoma Representative Frank Lucas, chair of the House Committee on Science, Space and Technology, said: “It is in our national interest to ensure the United States has a robust innovation pipeline that supports fundamental research, all the way through to real-world applications.

“The country that leads in commercial and military applications will have a decisive advantage in global economic and geopolitical competition.”

But, he added: “While it is critical the US supports advances in AI, these advances do not have to come at the expense of safety, security, fairness, or transparency.”

Last week in New York, in a discussion following a special screening of Oppenheimer – a biopic about the “father of the atomic bomb” during World War II – director Christopher Nolan compared AI with the creation of the bomb.

As he witnessed the detonation of the first atomic bomb on July 16, 1945, American theoretical physicist J. Robert Oppenheimer, the wartime head of the Los Alamos Laboratory, home of the Manhattan Project that developed the weapon, famously recalled a line from the Hindu scripture Bhagavad Gita: “Now I am become Death, the destroyer of worlds.”

“AI systems will go into defensive infrastructure ultimately,” Mr Nolan said. “They’ll be in charge of nuclear weapons.”

“To say that that is a separate entity from the person wielding, programming, putting that AI to use, then we’re doomed. It has to be about accountability. We have to hold people accountable for what they do with the tools that they have.”

On Monday, in the context of companies and financial markets, Mr Gensler said: “We at the SEC are technology neutral.”

He added: “We focus on the outcomes rather than the tools.”  

Still, he warned: “Given that we’re dealing with literally the automation of human intelligence, the gravity of these challenges is real.”
