Ride-hailing firm Grab uses an artificial intelligence (AI) algorithm to reduce trip cancellations by drivers, taking into account their preferred trip types and where they start and end their day.
The process requires no human involvement due to the high volume of trip allocations - over 5,000 a minute - and also because there is little or no harm done when assigned trips are cancelled.
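The allocation scheme described above can be sketched as a simple preference-aware matcher. Grab's actual algorithm is not public; the scoring rules, field names and weights below are invented purely for illustration.

```python
# Hypothetical sketch of preference-aware trip assignment, in the
# spirit of the scheme described above. All rules and weights here
# are illustrative assumptions, not Grab's actual system.

def match_score(driver, trip):
    """Higher score = lower assumed chance the driver cancels."""
    score = 0.0
    # Favour trips of a type the driver prefers.
    if trip["type"] in driver["preferred_types"]:
        score += 1.0
    # Favour trips ending near where the driver finishes the day.
    if trip["end_zone"] == driver["home_zone"]:
        score += 0.5
    return score

def assign(trip, drivers):
    """Assign the trip to the best-scoring available driver."""
    return max(drivers, key=lambda d: match_score(d, trip))

drivers = [
    {"id": "a", "preferred_types": {"short"}, "home_zone": "east"},
    {"id": "b", "preferred_types": {"long"}, "home_zone": "west"},
]
trip = {"type": "long", "end_zone": "west"}
print(assign(trip, drivers)["id"])  # → b
```

A production system handling over 5,000 allocations a minute would run a learned model rather than hand-written rules, but the structure - score every candidate driver, pick the best, no human in the loop - is the same.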
A stock recommendation algorithm, on the other hand, may not be able to run independently of human reviewers due to the risk of many investors dumping or buying the same shares and rocking the stock market.
These are examples fleshed out in the second edition of Singapore's voluntary framework on how AI can be ethically and responsibly used, launched yesterday at the 50th annual meeting of the World Economic Forum (WEF) in Davos, Switzerland.
It builds on the first edition, announced at the same meeting last year, by providing real-world use cases for the first time and clarifying, among other things, that AI models must produce consistent results with little margin of error.
Such clarity is needed amid heightened discourse in AI ethics and governance over the past year, driven by rapid advances in the field.
"For instance, we saw the emergence of next-generation AI-powered natural text generators like GPT-2, which can generate coherent passages difficult to distinguish from human writing," said Minister for Communications and Information S. Iswaran in the Model AI Governance Framework document released yesterday.
Singapore also opened almost 1,000km of public roads - about a tenth of its entire road network - for companies to conduct tests with autonomous vehicles, he added.
"There are concerns about how (AI) will be used, and whether people can have trust in AI when it is used," said Mr Iswaran, adding that the framework aims to build such trust by providing guidance to organisations.
At least 15 organisations - including Grab, DBS Bank, HSBC and American multinational pharmaceutical firm Merck Sharp & Dohme - have adopted the guidelines outlined in the framework, he said.
AI refers to a set of technologies that seek to simulate human traits such as reasoning, problem solving, learning, planning and predicting.
Its use today can be found in banks, insurance firms, healthcare providers, retailers and social media platforms for everything from reducing fraud and detecting the early onset of diseases to predicting behaviour and recommending actions.
In one of several illustrative use cases highlighted in a new compendium to Singapore's AI framework, DBS has automated its money laundering detection system, while still involving human supervisors when necessary.
The system first flags suspicious transactions. An AI model then rates the likelihood that the flagged activities are criminal by analysing historical trends. The bank's human supervisors need to review only the cases with high risk ratings.
Human involvement satisfies both the AI framework and the Monetary Authority of Singapore's requirement for accountable decision making.
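The two-stage triage described above - rule-based flagging, AI risk scoring, then human review of only the high-risk cases - can be sketched as follows. All rules, thresholds and field names are hypothetical, not DBS's actual system.

```python
# Illustrative sketch of a two-stage human-in-the-loop triage flow.
# Every rule and threshold below is an invented placeholder.

def flag_suspicious(txn):
    """Stage 1: simple rule-based flagging (hypothetical rules)."""
    return txn["amount"] > 10_000 or txn["country"] in {"XX", "YY"}

def risk_score(txn):
    """Stage 2: stand-in for an AI model scoring flagged activity.
    A real system would use a model trained on historical trends."""
    score = 0.0
    if txn["amount"] > 50_000:
        score += 0.5
    if txn["country"] in {"XX", "YY"}:
        score += 0.4
    return min(score, 1.0)

def triage(transactions, threshold=0.7):
    """Route only high-risk flagged transactions to human reviewers."""
    return [
        txn for txn in transactions
        if flag_suspicious(txn) and risk_score(txn) >= threshold
    ]

txns = [
    {"id": 1, "amount": 5_000, "country": "SG"},   # not flagged
    {"id": 2, "amount": 60_000, "country": "SG"},  # flagged, low risk
    {"id": 3, "amount": 60_000, "country": "XX"},  # flagged, high risk
]
print([t["id"] for t in triage(txns)])  # → [3]
```

The design choice here is the accountability point the framework highlights: automation narrows the queue, but the final judgment on each high-risk case remains with a human reviewer.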
Also announced yesterday was a self-assessment tool to help organisations get a sense of how aligned their practices are with the Singapore AI framework.
Jointly developed by the Infocomm Media Development Authority and the WEF Centre for the Fourth Industrial Revolution, the tool distils the principles promoted in the framework - explainable, transparent and fair decision-making and human-centric solutions - into a questionnaire checklist.
Questions asked include: "Did your organisation consider whether the decision to use AI for a specific use case is consistent with its core values and/or societal expectations?"
For instance, although it is generally accepted that AI may be used to identify medical conditions, a human doctor will still make the final decision on diagnosis and treatment.