Google has just marked another significant milestone in the fast-evolving field of artificial intelligence. Faced with an uproar from its staff over a Pentagon drone image analysis project, it came out on Thursday with a set of principles, promising not to allow its AI to be used in weapons, illegal surveillance or technologies that cause "overall harm". "How AI is developed and used will have a significant impact on society for many years to come," its CEO Sundar Pichai wrote on his blog. "As a leader in AI, we feel a special responsibility to get this right."

Google's response is significant on several counts. It is the strongest stance so far from Big Tech on tackling the potential harm from a new and powerful technology with wide-ranging applications. The initial response to Google's move is also instructive. Some fear the US could lose out to China, where AI is undergoing unfettered development. Others think the company has not gone far enough. But above all, the reactions point to the pressing need for guard-rails in a field where many legal and ethical issues have been raised. There are none at the moment.
So it is timely that Singapore announced last week that it will explore some of the thorny issues arising from the increasing use of AI. The effort is crucial because AI is advancing rapidly but lacks the equivalent of the physicians' Hippocratic Oath of doing no harm.

Nor is it just about arms research. If we are increasingly allowing machines to make important decisions on crime fighting, mortgage approvals and staff recruitment, how do we ensure that these decisions are free of bias? Yet proprietary algorithms give companies a competitive edge in the marketplace, so how do regulators strike a balance between oversight and corporate interests? Then there are issues of legal accountability - who is responsible if a driverless car kills someone, for instance - and ethical ones that techies must face up to in an age of data manipulation and "deepfake" videos. Patent laws, too, have to consider robot "inventors" and AI-generated innovations.

There is also much to be gained in this burgeoning field, where AI provides medical diagnoses and chatbots offer legal advice. Finance Minister Heng Swee Keat observed recently that AI could create US$3.5 trillion (S$4.7 trillion) to US$5.8 trillion in annual value for the global economy. Having rules of the road will encourage public acceptance and allow the technology to flourish.

It is an endeavour that others, like the UK, are pursuing as well. Lord Clement-Jones, chair of the House of Lords Select Committee on AI, said taking an ethical approach ensures the public will trust the technology and see its benefits. This is where Singapore can weigh in. The Republic may not be at the cutting edge of AI development, but it can play to its strength and reputation in the regulatory sphere to shape a much-needed international framework of rules in this area.