SINGAPORE - People, not the technology, should be at the centre of artificial intelligence (AI) development, and they ought to be treated fairly and well represented when their input is used to train AI systems, said experts on Tuesday (April 6).
When looking into technologies such as AI, governments should understand what their citizens' interests are and regulate based on that, said Mr Vilas Dhar, president of the Patrick J. McGovern Foundation.
He said tech companies should also "treat consumers as participants in the creation and design of these technologies".
If these are in place and people have a real say about tech, both individually and through civil society, this can lead to “a much more fundamental and new social compact for the AI age”, said Mr Dhar, whose foundation is a United States philanthropic organisation that promotes the use of AI and data solutions for an equitable future.
He was speaking at a panel discussion on Shaping The Future Of Artificial Intelligence at the virtual Global Technology Governance Summit organised by the World Economic Forum (WEF).
The panel included speakers such as Dr Haniyeh Mahmoudian, a global AI ethicist with AI platform DataRobot. It was moderated by Straits Times tech editor Irene Tham.
Discussions at the virtual meeting will feature in talks when the WEF meets in Singapore in August.
Another issue raised on Tuesday was the importance of ensuring that the data used to train AI systems is fair and representative of people.
Panellist Mark Brayan, chief executive of Australian AI data company Appen, said that if the data collected for training AI is biased, the AI will become biased too.
For example, if only male voices are used to build a speech-recognition product, the resulting AI developed for the product will not work as well when it hears female voices.
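The kind of representation check Mr Brayan describes can be sketched in a few lines. The sketch below is hypothetical: it assumes a training set whose clips carry speaker-gender labels, and a chosen threshold below which a group is flagged as underrepresented; the data and threshold are illustrative, not from the panel.

```python
from collections import Counter

# Hypothetical metadata for a speech-recognition training set:
# each audio clip is tagged with the speaker's reported gender.
clips = ["male"] * 900 + ["female"] * 100

counts = Counter(clips)
total = sum(counts.values())
shares = {group: n / total for group, n in counts.items()}

# Flag any group that falls below an (assumed) representation threshold.
THRESHOLD = 0.3
underrepresented = [g for g, s in shares.items() if s < THRESHOLD]

print(shares)            # {'male': 0.9, 'female': 0.1}
print(underrepresented)  # ['female']
```

A model trained on this set would see nine male voices for every female one, which is how the imbalance in the example above surfaces as weaker performance on female speech.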
"The completeness and the fairness of the data sets are an important contributor to AI," said Mr Brayan. "But there's work to be done in this area because it's very inconsistent across the globe and across jurisdictions."
Fairness could also be one of several factors considered in AI regulations.
On what is needed for governments to develop laws to protect people from the detrimental impact of AI, Dr Jason Matheny, deputy assistant to US President Joe Biden for technology and national security, pointed to the AI principles set out by the Organisation for Economic Co-operation and Development (OECD).
Dr Matheny said the OECD holds that the principles of positivity, fairness, transparency, robustness and accountability are each necessary but, individually, insufficient.
"Collectively, though, they describe what we want our AI systems to achieve or to aspire to," he said, adding that the White House and governments elsewhere are looking into AI regulations.
"The most important work is how we harmonise our efforts to embody these principles in law."
International collaboration is also needed to develop technical standards for safe AI systems, which do not yet exist, he added.
But, at the end of the day, there is perhaps one basic consideration when it comes to AI: children.
Giving the session's closing remarks, Mr Fayaz King, deputy executive director for field results and innovation at the United Nations Children's Fund (Unicef), said that his organisation has published policy guidance on AI for children, developed with Finland.
Among other things, it sets out how AI policies and systems should aim to protect children.
"It's encouraging to see that other governments are picking up on that, and adopting this and putting this into the national policies," said Mr King. "If AI works for children, it will work for everyone."