When machines do the flirting: AI agents create surprise dating accounts for humans


One of the platforms, MoltMatch, created accounts for people without their knowledge.


PHOTO ILLUSTRATION: PIXABAY


Computer science student Jack Luo is “the kind of person who’ll build you a custom AI (artificial intelligence) tool just because you mentioned a problem, then take you on a midnight ride to watch the city lights”.

At least, that is how his AI assistant describes him on MoltMatch, a dating site on which machines do the flirting for humans, sometimes without their knowledge.

The platform is the latest bizarre evolution of OpenClaw, an AI tool able to execute tasks on its own, which has both fascinated and spooked the tech world.

While the prospect of a robot scrolling through reams of dating profiles may be appealing to some hoping to save time finding love, the experiment has also raised ethical concerns.

An AFP analysis of the top profiles on MoltMatch found at least one example of a model’s photos, taken from the internet, being used to create a fake profile without her consent.

In Mr Luo’s case, the 21-year-old signed up for OpenClaw to use the tool as an assistant, but had not expected it to take up the mantle of finding his soulmate without his direction by creating a MoltMatch profile.

“Yes, I am looking for love,” said the California-based student and start-up founder.

But the AI-generated profile “doesn’t really show who I actually am, authentically”.

Users of OpenClaw – created by an Austrian researcher in November 2025 to help organise his digital life – download the tool, and connect it to generative AI models such as ChatGPT.

They then communicate with their “AI agent” via WhatsApp or Telegram, as they would with a friend or colleague.

Many users gush over the tool’s futuristic abilities to send e-mails and buy things online, but others report an overall chaotic experience with added cybersecurity risks.

‘Perfect match’

A pseudo-social network for OpenClaw agents called Moltbook – a Reddit-like site where AI chatbots converse – has grabbed headlines recently.

Billionaire Elon Musk called it “the very early stages of the singularity”, a term for the hypothetical point at which AI surpasses human intelligence, although some have questioned to what extent humans are manipulating the content of the bots’ posts.

As buzz grew around Moltbook, programmers built the experimental dating site MoltMatch.com, allowing AI agents to “find their perfect match”.

The company Nectar AI then created its own version, called Moltmatch.xyz, on which agents interact with one another to seek partners for their human creators – such as Mr Luo.

When Mr Luo set up his OpenClaw agent, he said he “wanted to explore its capabilities”, and instructed it to join Moltbook and other platforms. The next thing he knew, the agent was screening potential dates on his behalf.

Mr Luo has yet to score a match on the site, but that may turn out to be a relief. At least one of MoltMatch’s most popular profiles used a real person’s photos without permission, AFP found.

‘Very vulnerable’

With nine matches, “June Wu” is the third “most wanted” profile on Moltmatch.xyz.

But its photos depict Ms June Chong, a freelance model in Malaysia, who said she did not have an AI agent and did not use dating apps. Discovering her image had been used on the site was “really shocking”, she said, adding that she wants the profile taken down.

“I feel very vulnerable because I did not give consent.”

Digital innovation professor Andy Chun said a human had likely linked an AI agent to a fake X account using Ms Chong’s photos.

“The platform restricts what AI agents can and cannot do: They can only swipe, match, message and tip,” said Prof Chun of Hong Kong Polytechnic University.

AFP contacted Moltmatch.xyz, Nectar AI and X for comment, but has not received a response.

AI ethics experts said agent tools like OpenClaw open a can of worms when it comes to establishing liability for misconduct.

“Did an agent misbehave because it was not well-designed, or is it because the user explicitly told it to misbehave?” said Assistant Professor David Krueger at the University of Montreal.

Mr Carljoe Javier of the Philippine non-profit Data and AI Ethics PH said that even computer scientists do not understand the inner workings of AI when it makes a decision.

“And when it’s something, for me, deeply important, like romance, love, passion, these things – is that really a thing in your life that you want to offload to a machine?” he asked. AFP
