Letter of the week: AI chatbots shouldn’t be allowed to spread untruths
Laws need to be strengthened to protect individuals and institutions against defamatory content and untruths churned out by generative AI, says the reader.
PHOTO: EPA-EFE
I was astonished to read that when Straits Times reporter Osmond Chia fed the question "Who is Osmond Chia?" into Meta AI's chatbot, it spat out a list of criminal charges under his name, the result of the chatbot confusing his name with the crime headlines he has reported (Ever looked yourself up on a chatbot? Meta AI accused me of a workplace scandal).
This cannot be a satisfactory situation.
Imagine an employer being fed erroneous information linking a potential hire to unsavoury matters that have nothing to do with him, whether because he happens to share a name with someone else or because of the AI algorithm's confusion, as in Mr Chia's case.
Surely laws need to be strengthened to protect individuals and institutions against defamatory content and untruths churned out by generative AI? I don’t see how it is fair to let these tech companies get away with reputational murder.
While the aggrieved party has the right to sue the tech firm, the reality is that people may be unaware that disparaging information about them is lurking out there.
The onus shouldn’t be on people to ask about themselves to ensure that the tech bots haven’t maligned them.
Peh Chwee Hoe