BASEL - The use of artificial intelligence (AI) and machine learning could lead to greater efficiency and interconnectedness between financial markets but only if the potential risks are properly managed, an international financial body has warned.
The Financial Stability Board (FSB), which monitors and makes recommendations about the global financial system, noted in a report on Wednesday that institutions are already actively using AI and machine learning in areas such as assessing credit quality, pricing and marketing insurance contracts, and automating client interactions.
And both public and private sector institutions could use these technologies for regulatory compliance, surveillance, data quality assessment and fraud detection.
These developments could benefit the financial system, the FSB noted.
For example, AI and machine learning could lead to more efficient processing of information on credit risks and lower-cost customer interaction.
The internal, or back office, applications of AI and machine learning could also improve risk management, fraud detection and compliance with regulatory requirements, potentially at lower cost.
However, it added: "As with any new product or service, there are important issues around appropriate risk management and oversight."
One risk is that the use of AI and machine learning could create "black boxes" in decision-making, raising complicated issues.
"In particular, it may be difficult for human users at financial institutions - and for regulators - to grasp how decisions, such as those for trading and investment, have been formulated," the FSB said.
"Moreover, the communication mechanism used by such tools may be incomprehensible to humans, thus posing monitoring challenges for the human operators of such solutions."
The network effects and scalability of new technologies could also give rise to third-party dependencies.
"This could in turn lead to the emergence of new systemically important players that could fall outside the regulatory perimeter," it warned.
As with any new product or service, it will be important to assess uses of AI and machine learning in view of their risks, including adherence to relevant protocols on data privacy, conduct risks, and cybersecurity, the FSB noted.
"Adequate testing and 'training' of tools with unbiased data and feedback mechanisms is important to ensure applications do what they are intended to do."