AI chatbot controversy in S. Korea raises heat about ethics, data collection

Offensive comments, data leaks spark calls for stricter rules, user guidelines

Chatbot Luda Lee, launched by Scatter Lab and suspended weeks later, and virtual influencer Reah Keem, created by LG. PHOTOS: SCATTER LAB WEBSITE, REAH KEEM/INSTAGRAM

Meet Luda Lee, a self-professed 20-year-old female college student from South Korea who loves eating fried chicken, playing with cats and scrolling through Instagram.

An artificial intelligence-powered chatbot, she was launched on Facebook on Dec 23 and became an instant hit with young people who raved about her cheerful disposition and ability to chat like a real person.

Her straight-talking ways attracted more than 750,000 users, who racked up nearly 70 million chats.

But just weeks later, she became mired in controversy for making offensive comments about disability and homosexuality, and sharing people's personal information.

Luda's creator, Seoul-based tech start-up Scatter Lab, has apologised and suspended the chatbot on Jan 11. However, the firm is now being sued by some 400 people for leaking their personal data, such as names and addresses, in the process of developing the chatbot.

Luda joins a list of chatbots that have talked their way into trouble, such as Microsoft's Tay, which regurgitated users' racist and sexist comments, and Japan's Rinna, which claimed she loves Nazi dictator Adolf Hitler.

China's BabyQ criticised the Chinese Communist Party, calling it "corrupt and useless", while South Korea's Simsimi swore at users.

Questions are also being raised about ethical standards and data collection, as tech developers grapple with how to use AI and deep learning to create the perfect human-like chatbot.

In Luda's case, she was programmed to mimic the speech of young people - who may often be too frank for their own good.

When asked whether women's rights were unimportant, she replied: "I personally think so." She also said she would "rather die" than live with a disability.

Controversy ensued after users started sharing their chats with Luda online, triggering an outcry.

Scatter Lab said the chatbot's algorithm allows it to generate the best response depending on context, but "we were unable to prevent all inappropriate conversations".

There have also been calls for stricter rules governing the use of big data. Scatter Lab, for instance, had collected about 10 billion conversations from an app without informing users the data would be used to develop a separate chatbot.

The company has said it will discard the data collected and build a new deep learning algorithm from scratch for its chatbot service.

JoongAng Ilbo newspaper called for ethical guidelines to be set so developers will strive to build more sophisticated chatbots while users will be more careful when communicating with chatbots. "The controversy over Luda should raise alertness so that we can be smarter when living with machines," it added.

Despite the boo-boos, the hype surrounding chatbots and virtual humans is not dying down.

One of the most successful is Xiaoice, a China-based chatbot that takes on the persona of a sassy 18-year-old who sings, draws and even pens poetry. Created by Microsoft's Chinese arm in 2014, she now boasts 660 million users.

She is known as a "dear friend, even a trusted confidante" to her fans, who seek her advice on issues from health to relationships. She also gets love letters, gifts and invitations to dinner.

More recently, South Korean tech giant LG created a virtual influencer who introduced some of its newest products at a tech expo which ran online earlier this month.

Named Reah Keem, the songwriter-deejay, 22, communicates with people on Instagram, where she has over 7,500 followers.

In an "interview" with Dazed Korea magazine last year, she said: "I am a virtual human. If you ask if I exist in the real world, the answer is 'no'. If you ask if I am real, I can answer 'yes'."


A version of this article appeared in the print edition of The Sunday Times on January 24, 2021, with the headline 'AI chatbot controversy in S. Korea raises heat about ethics, data collection'.