The development of artificial intelligence (AI) should centre on citizens, not the technology, and people ought to be treated fairly and well represented when their input goes into the training of AI systems, said experts yesterday.
When looking into technologies like AI, governments should understand what their citizens' interests are and regulate based on that, said Mr Vilas Dhar, president of the Patrick J. McGovern Foundation.
He said tech companies should also "treat consumers as participants in the creation and design of these technologies".
If these are in place and people have a real say about tech, both individually and through civil society, this can lead to "a much more fundamental and new social compact for the AI age", said Mr Dhar, whose foundation is a United States philanthropic organisation that promotes the use of AI and data solutions for an equitable future.
He was speaking at a panel discussion on shaping the future of AI at the virtual Global Technology Governance Summit organised by the World Economic Forum (WEF).
The panel included speakers such as Dr Haniyeh Mahmoudian, a global AI ethicist with AI platform DataRobot. It was moderated by Straits Times Tech Editor Irene Tham.
Discussions at the meeting will feature in talks when the WEF meets in Singapore in August.
Another issue raised yesterday was the importance of ensuring that the data used to train AI is fair and representative of people.
Panellist Mark Brayan, chief executive of Australian AI data company Appen, said that if the data collected for training AI is biased, the AI will become biased too.
"The completeness and the fairness of the data sets are an important contributor to AI," said Mr Brayan. "But there's work to be done in this area because it's very inconsistent across the globe and across jurisdictions."
Fairness could also be one of several factors considered in AI regulations.
On what is needed for governments to develop laws to protect people from the detrimental impact of AI, Dr Jason Matheny, deputy assistant to US President Joe Biden for technology and national security, pointed to the AI principles set out by the Organisation for Economic Co-operation and Development (OECD).
The OECD said the principles of inclusive growth, fairness, transparency, robustness and accountability are necessary but are individually insufficient, he noted.
"Collectively, though, they describe what we want our AI systems to achieve or to aspire to," he said, adding that the White House and governments elsewhere are looking into AI regulations.
"The most important work is how we harmonise our efforts to embody these principles in law."
But there is perhaps one basic consideration for AI - children.
Giving the session's closing remarks, Mr Fayaz King, deputy executive director for field results and innovation at the United Nations Children's Fund (Unicef), said his organisation has developed - with Finland - policy guidance on AI for children. Among other things, it sets out how AI policies and systems should aim to protect children.
"It's encouraging to see that other governments are picking up on that, and adopting this and putting this into the national policies," he said. "If AI works for children, it will work for everyone."