The science fiction trope of artificial intelligence (AI) turning against human beings has been creeping into real life, with some businesses engaging in "robo firings".
AI ethics experts said firing workers using AI is problematic, partly because algorithms cannot yet fully model human thinking or replace human intelligence.
The Verge reported in 2019 that American e-commerce giant Amazon could automatically fire warehouse workers based on productivity metrics.
A Californian law kicked in this year prohibiting warehouse bosses from imposing productivity quotas that prevent staff from, for instance, taking a break. Amazon declined to say what changes it would make to comply, Bloomberg reported last December.
Separately, in a judgment published last April, a Dutch court ordered American firm Uber to reinstate some drivers struck off its ride-hailing app for fraud "based solely on automated processing, including profiling". Uber sought to contest the judgment.
In the business world, many people are beginning to think that human intelligence can be replaced by AI in decision-making, said Professor David De Cremer, director of the Centre on AI Technology for Humankind (AiTH) at the National University of Singapore Business School. But machine intelligence has neither intuition nor consciousness, he said.
At Amazon, for instance, assembly-line staff work according to a machine-like schedule, he said. If staff cannot keep up with the pace, algorithms can fire them without human intervention.
"That means people have to work in very consistent ways," he said. "But sometimes, as a human, you have a bad day or a good day."
Prof De Cremer said that cutting costs by using AI to replace human decision-making is problematic if jobs that tap human strengths are not created in their place. People could end up unemployed or stuck in jobs that lack meaning, he said.
On the flip side, AI is now being used to fix problems that algorithms themselves created, such as AI tools that coach call centre staff to be more empathetic. Workers lost that human touch in the first place while trying to keep up with performance metrics, AI experts said.
"We measure humans by the standards that are appropriate for machines and then we tell them we need technology to make them more human. It's perverse," said Professor Shannon Vallor, the Baillie Gifford chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh.
Speaking at a recent panel discussion on AI, she said technology should be about enhancing people's capabilities and experiences. But, increasingly, she is seeing AI being designed to advance its performance, "and humans are being twisted into knots in order to make that possible".
The call centre case was raised in a manifesto AiTH released in December, which sets out recommendations on how society and organisations should approach AI to promote human interests and well-being. They include how machines should serve people instead of the other way around, and that the ultimate responsibility for technology-augmented decisions must remain in human hands.
Professor Lim Sun Sun, head of humanities, arts and social sciences at the Singapore University of Technology and Design, agreed that Big Tech firms are obsessed with using technology to solve most problems encountered in society and business.
"We really need to think about rewiring science, technology, engineering and mathematics education so that, upstream, we already have technologists and engineers thinking about human values and who understand what the cultural norms are," she said.