BRANDED CONTENT

How investors can nudge companies to use artificial intelligence responsibly

Increased adoption of AI technology is raising questions about whether companies are well prepared to mitigate risks that could hurt their reputations and even harm society

The adoption of artificial intelligence has opened up many opportunities for companies and investors, but these systems also bring about destructive risks. PHOTO: AXA INVESTMENT MANAGERS

Cyberattacks, privacy invasion and misinformation campaigns. Malicious uses of technology are increasing alongside rapid advances in artificial intelligence (AI), but many companies are not doing enough to mitigate the risks.

This is a potential concern for responsible investors, who are looking to do good with their investments while targeting sustainable long-term financial returns.

As AI becomes a big part of various business functions, a lack of risk mitigation could result in cybersecurity breaches that damage a company’s reputation. Some applications of AI, such as the tracking of individuals and content moderation, also raise legal and ethical concerns.

“We are convinced that mitigating the risks associated with AI systems – and addressing regulatory considerations – are closely related to the ability of companies to deliver long-term value creation with these technologies,” says Mr Theo Kotula, an ESG analyst at AXA Investment Managers (AXA IM).

“We also believe that AI systems can better provide long-term and sustainable opportunities when responsible AI is practised,” he adds.

Responsible AI refers to business practices that use AI in a fair, ethical and transparent manner while maintaining human oversight over the activities of the AI systems.

Opportunities and risks of AI

Today’s AI systems can mimic human problem-solving and decision-making abilities and have wide-ranging real-world applications, such as customer-service automation, risk modelling and analytics, as well as fraud detection.

Businesses have gained real value from the use of AI, according to the latest survey on the state of AI by consultancy McKinsey. In the report, released in December 2021, 27 per cent of respondents attributed at least 5 per cent of their earnings before interest and taxes (EBIT) to AI. This is up from 22 per cent of respondents in the previous survey.

AI may also be used to help reduce harmful environmental effects. Potential examples include using algorithms to detect possible oil spills, better modelling of climate change impact and processing large amounts of satellite images to monitor and classify vegetation to better understand the extent and causes of biodiversity changes.

The proliferation of AI over recent decades has opened up many opportunities for companies and investors. But AI systems also bring destructive risks that are harder to define and measure, in addition to cybersecurity risks.

For example, AI systems could exacerbate bias that exists in the underlying data used to automate processes. In financial services, a bias could put certain communities at a disadvantage when seeking loans; in recruitment, the automated shortlisting process could lead to certain groups being overlooked.
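To make this risk concrete, here is a minimal Python sketch of one common fairness check, the so-called four-fifths rule, applied to loan approvals. The data, group labels and threshold below are hypothetical illustrations, not figures drawn from any company or survey cited in this article.

# Minimal sketch of a disparate-impact check on automated loan approvals.
# All data below is hypothetical; a real audit would use actual decision records.

def selection_rate(decisions):
    # Share of applicants approved (1 = approved, 0 = denied).
    return sum(decisions) / len(decisions)

# Hypothetical historical outcomes for two applicant groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75 per cent approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 37.5 per cent approved

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# The "four-fifths rule" of thumb flags a ratio below 0.8 as a sign that
# the process, and any model trained on its outcomes, may disadvantage a group.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Approval rates: A {rate_a:.0%}, B {rate_b:.1%}, ratio {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: biased outcomes risk being automated.")

A model trained to replicate these historical approvals would learn the same skew, which is why transparency about training data and outcomes matters for the risk-mitigation efforts discussed below.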

Awareness about the risks of AI systems has increased. But managing these risks “remains an area for significant improvement” for many companies, according to McKinsey’s AI survey.

The top risk cited by survey respondents was cybersecurity, with 50 per cent of firms in developed economies and 36 per cent of those in emerging economies saying they were working to mitigate this risk.

Other risks garnered much less attention from survey respondents: regulatory compliance (39 per cent of respondents in developed economies and 24 per cent of those in emerging economies said they were working on mitigation), organisational reputation (24 per cent in developed economies, 15 per cent in emerging economies), and equity and fairness (21 per cent in developed economies, 16 per cent in emerging economies).

A separate study, the 2022 Big Tech Scorecard by Ranking Digital Rights, an independent research program at think tank New America, graded large technology companies poorly in terms of transparency about their algorithms.

The transparency assessment covers how much these companies disclose about how they use algorithms to curate, recommend and rank content; how they handle user data; and their human rights policies.

The role of investors in AI risk mitigation

There are several reasons why companies are not doing more to mitigate all the risks relating to AI.

Respondents in the McKinsey survey said they had to prioritise because they lacked the capacity to address the full range of risks. Some indicated they were unsure how exposed they were to these risks, were waiting for clearer regulations, or did not have the leadership buy-in to dedicate resources to AI risk mitigation.

Both the McKinsey survey and the 2022 Big Tech Scorecard point to the need for investors to step in to nudge companies to use AI more responsibly, says Mr Kotula.

“We think these results help demonstrate the need for responsible investors, such as AXA IM, to engage companies in discussions over responsible AI and indicate how we might put that into practice,” he says.

As part of its sustainable or responsible investing approach, AXA IM holds discussions with investee companies that have significant investment plans in AI development. The asset manager’s recommendations urge companies to:

  • Ensure fair, ethical and transparent use of AI, and maintain human oversight over the activities of the AI systems;
  • Increase disclosure and transparency around AI system development; and
  • Have their board and senior executives oversee a responsible AI framework.

AXA IM’s responsible approach to AI is in line with the European Commission’s guidelines, which aim to help companies use AI in a lawful and ethical way.

The guidelines say AI systems should be human-centric; neither cause nor exacerbate harm to human beings; ensure an equal distribution of benefits and costs; and not lead to people being discriminated against, deceived or unjustifiably impaired in their freedom of choice. AI processes should also be transparent, with their capabilities and purpose openly communicated to those directly or indirectly affected.

“We think that companies will be able to mitigate AI risks and deliver sustainable value creation if they practise responsible AI policies – benefiting from the deployment of AI systems whilst building trust and improving customer loyalty,” says Mr Kotula.

“The bottom line is that responsible AI is not only about avoiding risks but ensuring that these technologies can be harnessed to the advantage of people and businesses alike.”

This publication is issued by AXA Investment Managers Asia (Singapore) Ltd. (Registration No. 199001714W) for general circulation and informational purposes only. It has been prepared without taking into account the specific personal circumstances, investment objectives, financial situation or particular needs of any particular person and may be subject to change without notice. It does not constitute an offer to buy or sell any investments, products or services and should not be considered as a solicitation or as investment advice. Please consult your financial or other professional advisers if you are unsure about the information contained herein. Investment involves risks. Be aware that investments may increase or decrease in value and that past performance is no guarantee of future returns; you may not get back the amount originally invested. You should not make any investment decision based on this publication alone. This advertisement or publication has not been reviewed by the Monetary Authority of Singapore. © 2022 AXA Investment Managers. All rights reserved.
