Should a self-driving car, in an attempt to protect everyone in the vehicle, veer off the road and onto the sidewalk - potentially knocking into pedestrians - to avoid a collision with another car?
If a chatbot run by a company starts spewing racist or offensive messages because it was exposed to online trolls, should the company be held responsible? And should the training data then be supplied only by the company, rather than sourced from the public?
These ethical questions and scenarios arising from the growing use of artificial intelligence (AI) could be some of the issues at the forefront for the new Advisory Council on the Ethical Use of AI and Data.
The council, appointed by the Ministry of Communications and Information, will work with the Infocomm Media Development Authority (IMDA) on the ethical and legal development as well as the deployment of AI.
"To encourage the adoption of AI, we will adopt the same progressive regulatory stance as we have with other innovative technologies," Minister for Communications and Information S. Iswaran said yesterday.
"Innovative technologies bring economic and societal benefits, as well as attendant ethical issues. Thus, good regulation is needed to enable innovation by building public trust," he said.
He was speaking at the opening ceremony of InnovFest Unbound, the anchor event of the Smart Nation Innovations Week held at the Marina Bay Sands Expo and Convention Centre.
AI refers to technologies that attempt to simulate human intelligence and thinking processes such as learning, reasoning and problem solving. This is done through software algorithms that let machines "learn" for themselves from vast amounts of data.
The AI council will be led by Senior Counsel V. K. Rajah, a former attorney-general. Mr Rajah also sits on the 10-member Fairness, Ethics, Accountability and Transparency Committee formed by the Monetary Authority of Singapore in April, which is developing a guide on the responsible and ethical use of AI and data analytics by financial institutions.
The council will comprise experts in AI and big data from local and international companies and academia, as well as consumer advocates. More details, including its other members, will be made available at a later date, said IMDA.
Among other things, the council will advise the Government on ethical and related issues arising from AI use in the private sector. It will also hold discussions with the ethics boards of commercial companies and come up with advisory guidelines and codes of practice for businesses to adopt voluntarily.
Two other projects in support of the council's work are already under way.
The Personal Data Protection Commission put up a discussion paper yesterday on a potential AI and data governance framework for industries. The paper was intended to serve as a baseline for discussion among AI users on common definitions, objectives and ethical implications.
It made two major recommendations: that decisions made by or with the assistance of AI be explainable, transparent and fair to consumers; and that AI systems, robots and decisions put the benefit of the human user first.
The other project involves the Singapore Management University, which has set up a five-year research programme to conduct academic research on policy, legal, regulatory, ethical and other issues relating to AI and data use.
The programme, funded by a $4.5 million grant from IMDA and the National Research Foundation, includes the setting up of a new research centre under the university's School of Law to undertake related projects.