Effective altruism’s role in the OpenAI chaos explained
Bengaluru – The expulsion and swift reinstatement of Mr Sam Altman as leader of OpenAI exposed a rift at the heart of the artificial intelligence (AI) start-up.
On one side were the commercial ambitions of Mr Altman and OpenAI’s major partner, Microsoft.
1. What is effective altruism?
It is a movement that aims to use research and reasoning to solve the most pressing global problems for the benefit of the maximum number of people.
It reflects the ideas of moral philosopher and professor of bioethics at Princeton University Peter Singer, who argues that people should spend their resources saving as many lives as possible, especially in parts of the world where a life can be saved for a relatively low cost.
Effective altruism is related to the concept of utilitarianism, an ethical theory that emphasises maximising the net good in the world.
By the early 2010s, effective altruism had sparked several non-profits that directed donors to causes such as buying malaria nets in sub-Saharan Africa, donating kidneys to the dying and distributing medical supplies in under-developed countries.
2. How did AI get involved?
Over the past decade, effective altruism has broadened its mission towards preventing future scenarios in which humans could go extinct, such as nuclear war and pandemics.
Also on that list: an AI apocalypse.
Since the early 2000s, a few AI theorists have posited that the emergence of powerful AI could spell danger or even doomsday for humanity.
The notion spawned the field of AI safety, which aims to prevent disastrous outcomes from the work of building AI.
AI safety was embraced as an important cause by big-name Silicon Valley figures who believe in effective altruism, including PayPal co-founder Peter Thiel, Tesla chief executive Elon Musk and Mr Sam Bankman-Fried, the founder of crypto exchange FTX, who was convicted in early November of a massive fraud.
3. What’s the worry about AI?
Effective altruists fear that the power of artificial general intelligence will one day surpass that of humans, leading the technology to turn on mankind.
The decisions that entrepreneurs and developers make today will irrevocably shape the course of humanity, they believe, and so the industry must recognise the stakes of this moment and make wise choices.
These far-flung causes fall under a branch of effective altruism called “longtermism”, a belief that stresses the moral worth of future people and this generation’s obligation to protect their interests.
“Neartermism” describes causes that affect people living today, such as preventing the spread of disease.
4. How did this affect what happened at OpenAI?
Founded with a mission to “ensure that AI benefits all of humanity”, OpenAI was supposed to be a counterweight to the profit-driven efforts within labs of technology giants such as Alphabet’s Google.
Members of OpenAI’s governing board had ties, both past and present, to the effective altruism movement.
The start-up began as a non-profit organisation but added a for-profit subsidiary so that it could raise the vast amounts of money it needed to operate the technology that fuels ChatGPT.
OpenAI attracted billions of dollars from Microsoft and, as at October, was valued at US$86 billion (S$115 billion).
That led to tension between OpenAI’s commercial ambitions – driven by Mr Altman and Microsoft – and worries from some board members about pushing AI development too fast. BLOOMBERG

