SINGAPORE - Should a self-driving car veer off the road and onto the sidewalk - potentially hitting pedestrians - to avoid a collision with another vehicle and protect its own driver and passengers?
If a company's chatbot starts spewing racist or offensive messages because online trolls fed it such content, should the company be held responsible? And should training data be supplied only by the company, rather than left open to the public?
These ethical questions and scenarios, which arise with the growing use of artificial intelligence (AI) in today's world, could be among the issues that a new Advisory Council on the Ethical Use of AI and Data has been tasked to examine.
The council, appointed by the Ministry of Communications and Information, will work with the Infocomm Media Development Authority (IMDA) on the ethical and legal aspects of the development and deployment of AI.
"To encourage the adoption of AI, we will adopt the same progressive regulatory stance as we have with other innovative technologies," said Minister for Communications and Information S. Iswaran on Tuesday morning (June 5).
"Innovative technologies bring economic and societal benefits, as well as attendant ethical issues. Thus, good regulation is needed to enable innovation by building public trust," he said.
He was speaking at the opening ceremony of InnovFest Unbound, the anchor event of the Smart Nation Innovations Week held at Marina Bay Sands Expo and Convention Centre.
In this context, AI refers to technologies that attempt to simulate human intelligence and thinking processes such as learning, reasoning and problem solving. This is done through software algorithms that let machines "learn" for themselves when given large amounts of data to study.
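To make the "learning from data" idea concrete, here is a toy sketch (all data, labels and function names are invented for illustration, not any real system): a trivial classifier that "learns" which words tend to signal offensive messages simply by counting them in labelled examples - and which, like the chatbot scenario above, will absorb whatever biases its training data contains.

```python
from collections import Counter

# Toy illustration (all data invented): the model "learns" by
# counting how often each word appears under each label in the
# labelled training examples it is given.
def train(examples):
    counts = {"ok": Counter(), "offensive": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    # Score each label by how many times the message's words
    # were seen under that label during training.
    scores = {label: sum(c[w] for w in text.lower().split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train([
    ("have a nice day", "ok"),
    ("thanks for your help", "ok"),
    ("you are stupid", "offensive"),
    ("stupid and useless", "offensive"),
])
print(classify(model, "that was stupid"))  # prints "offensive"
```

The behaviour comes entirely from the training data, not from hand-written rules - which is exactly why the quality and provenance of that data raise the ethical questions the council is meant to examine.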
The AI council will be led by Senior Counsel V. K. Rajah, a former attorney-general. Mr Rajah also sits on the 10-member Fairness, Ethics, Accountability and Transparency Committee formed by the Monetary Authority of Singapore in April 2018, which has been tasked to develop a guide on the responsible and ethical use of AI and data analytics by financial institutions.
The full list of other council members will be made available at a later date, said the IMDA. The council will be made up of experts in AI and big data from local and international companies, academia and consumer advocates.
It will advise the Government on ethical and related issues arising from AI use in the private sector, and work with the ethics boards of commercial companies to minimise their ethical and sustainability risks. This will be done through discussion papers, advisory guidelines and codes of practice for businesses to adopt voluntarily.
Two other projects have been set up to support the council's work.
The Personal Data Protection Commission put up a discussion paper on Tuesday on what a potential AI and data governance framework for industries could look like. This is intended as a baseline for discussion among AI users on common definitions, objectives and ethical implications.
The paper recommends two major principles: first, that decisions made by or with the assistance of AI be explainable, transparent and fair to consumers; and second, that AI systems, robots and decisions put the interests of the human user first.
The Singapore Management University has also set up a five-year research programme on AI and data use to conduct academic research on the policy, legal, regulatory, ethical and other issues relating to AI.
The programme, funded by a $4.5 million grant from the IMDA and National Research Foundation, includes the setting up of a new research centre under the university's School of Law to undertake related projects.
The research will also involve collaborations with AI practitioners through symposiums, conferences and seminars.