Investor-inventor Elon Musk once described the sensational advances in artificial intelligence (AI) as "summoning the demon". Boy, can the demon play Go.
The AI company DeepMind announced recently it had developed an algorithm capable of excelling at the ancient Chinese board game.
The big deal is that this new algorithm, called AlphaGo Zero (AGZ), is completely self-taught. It was armed only with the rules of the game - and zero human input.
Its predecessor, AlphaGo, was trained on data from thousands of games played by human competitors.
The two algorithms went to war, and AGZ triumphed 100 to nil. In other words - put this up in neon lights - disregarding human intellect allowed AGZ to become a supreme exponent of its art.
While DeepMind is the outfit most likely to feed Mr Musk's fevered nightmares, machine autonomy is on the rise elsewhere. In January, researchers at Carnegie Mellon University unveiled an algorithm capable of beating the best human poker players.
The machine, called Libratus, racked up nearly US$2 million (S$2.7 million) in chips against top-ranked professionals of Heads-Up No-Limit Texas Hold 'em, a challenging version of the card game. Flesh-and-blood rivals described being outbluffed by a machine as "demoralising". Again, Libratus improved its game by detecting and patching its own weaknesses, rather than borrowing from human intuition.
AGZ and Libratus are one-trick ponies but technologists dream of machines with broader capabilities. DeepMind, for example, declares it wants to create "algorithms that achieve superhuman performance in the most challenging domains with no human input".
Once fast, deep algorithms are unshackled from the slow, shallow disappointment of human intellect, they can begin crunching problems that our own lacklustre species has not confronted. Rather than emulating human intelligence, the top tech thinkers toil daily to render it unnecessary.
For that reason, we might one day look back on AGZ and Libratus as baby steps towards the Singularity, the much-debated point at which AI becomes super-intelligent, able to control its own destiny without recourse to human intervention. The most dystopian scenario is that AI becomes an existential risk.
Suppose that super-intelligent machines calculate, in pursuit of their programmed goals, that the best course of action is to build even cleverer successors. A runaway iteration takes hold, racing exponentially into fantastical realms of calculation.
One day, these goal-driven paragons of productivity might also calculate, without menace, that they can best fulfil their tasks by taking humans out of the picture.
As others have quipped, the most coldly logical way to beat cancer is to eliminate the organisms that develop it.
Ditto for global hunger and climate change.
These are riffs on the famous paper-clip thought experiment dreamt up by philosopher Nick Bostrom, now at the Future of Humanity Institute at Oxford University. If a hyper-intelligent machine, devoid of moral agency, were programmed solely to maximise the production of paper clips, it might end up commandeering all available atoms to this end. There is surely no sadder demise for humanity than being turned into office supplies. Professor Bostrom's warning articulates the capability caution principle, a well-subscribed idea in robotics that we should not assume upper limits on what AI may eventually be capable of.
It is of course pragmatic to worry about job displacement: Many of us, this writer included, are paid for carrying out a limited range of tasks.
We are ripe for automation. But only fools contemplate the more distant future without anxiety - when machines may out-think us in ways we do not have the capacity to imagine.