Using artificial intelligence (AI) and machine learning could make financial markets far more efficient, but only if the potential risks are properly managed, an international financial body has warned.
The Financial Stability Board (FSB), an international body which monitors the global financial system, noted in a report yesterday that financial institutions are already actively using AI and machine learning.
Current applications include assessing credit quality, pricing and marketing insurance contracts, and automating client interactions.
Both public-sector and private-sector institutions could use these technologies for regulatory compliance, surveillance, data quality assessment and fraud detection.
These developments could benefit the financial system, the FSB noted. For example, AI and machine learning could lead to more efficient processing of information on credit risks and lower-cost customer interaction.
The internal, or back office, applications of these technologies could also improve risk management, fraud detection and compliance with regulatory requirements, potentially at lower cost.
Finally, regulators and banking supervisors could use AI and machine learning to increase supervisory effectiveness and perform better systemic risk analysis in financial markets, the FSB said.
However, it added: "As with any new product or service, there are important issues around appropriate risk management and oversight. One risk is that the use of AI and machine learning could create 'black boxes' in decision-making that could give rise to complicated issues.
"In particular, it may be difficult for human users at financial institutions - and for regulators - to grasp how decisions, such as those for trading and investment, have been formulated," the FSB said.
"Moreover, the communication mechanism used by such tools may be incomprehensible to humans, thus posing monitoring challenges for the human operators of such solutions."
The network effects and scalability of new technologies could also give rise to third-party dependencies. "This could in turn lead to the emergence of new systemically important players that could fall outside the regulatory perimeter," it warned.
As with any new product or service, it will be important to assess uses of AI and machine learning in view of their risks, including adherence to relevant protocols on data privacy, conduct risks and cyber security, the FSB noted.
"Adequate testing and 'training' of tools with unbiased data and feedback mechanisms are important to ensure applications do what they are intended to do."