Ethics 101: Bringing morality to computer science

Stanford and other top US universities are offering courses to ensure students understand the ethical issues related to computing


The medical profession has an ethic: First, do no harm.

Silicon Valley has an ethos: Build it first and ask for forgiveness later.

Now, in the wake of fake news and other troubles at tech companies, universities that helped produce some of Silicon Valley's top technologists are hustling to bring a more medicine-like morality to computer science.

This semester, Harvard University and the Massachusetts Institute of Technology (MIT) are jointly offering a new course on the ethics and regulation of artificial intelligence. The University of Texas at Austin just introduced a course titled "Ethical Foundations of Computer Science" - with the idea of eventually requiring it for all computer science majors.

And at Stanford University, the academic heart of the industry, three professors and a research fellow are developing a computer science ethics course for next year. They hope several hundred students will enrol.

The idea is to train the next generation of technologists and policymakers to consider the ramifications of innovations - like autonomous weapons or self-driving cars - before those products go on sale. "It's about finding or identifying issues that we know in the next two, three, five, 10 years, the students who graduate from here are going to have to grapple with," said computer science professor Mehran Sahami at Stanford, who is helping to develop the course.

He is renowned on campus for bringing Facebook co-founder Mark Zuckerberg to class. "Technology is not neutral," said Prof Sahami, who formerly worked at Google as a senior research scientist. "The choices that get made in building technology then have social ramifications."

The courses are emerging at a moment when big tech firms have been struggling to handle the side effects - fake news on Facebook, fake followers on Twitter, lewd children's videos on YouTube - of the industry's build-it-first mindset. They amount to an open challenge to a common Silicon Valley attitude that has generally dismissed ethics as a hindrance.

"We need to at least teach people that there's a dark side to the idea that you should move fast and break things," said post-doctoral fellow Laura Noren at the Centre for Data Science at New York University, who began teaching a new data science ethics course this semester. "You can patch the software, but you can't patch a person if you, you know, damage someone's reputation."

Computer science programmes are required to make sure students have an understanding of ethical issues related to computing in order to be accredited by ABET, a global accreditation group for university science and engineering programmes. Some computer science departments have folded the topic into a broader class, and others have standalone courses.

But until recently, ethics did not seem relevant to many students. "Compared to transportation or doctors, your daily interaction with physical harm or death or pain is a lot less if you are writing software for apps," said MIT Media Lab director Joi Ito.

One reason that universities are pushing tech ethics now is the popularisation of powerful tools like machine learning - computer algorithms that can autonomously learn tasks by analysing large amounts of data. Because such tools could ultimately alter human society, universities are rushing to help students understand the potential consequences, said Professor Ito, who is co-teaching the Harvard-MIT ethics course.
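The article's one-line description of machine learning - algorithms that learn a task from data rather than from explicit rules - can be made concrete with a minimal sketch. The example below is illustrative only and is not drawn from any course mentioned in the article: the program is never told the rule y = 2x + 1, it infers it from examples.

```python
# Minimal illustration of "learning from data": the program is given
# example (x, y) pairs and infers the underlying rule y = 2x + 1
# by repeatedly nudging its parameters to reduce prediction error.

examples = [(x, 2 * x + 1) for x in range(10)]  # hypothetical training data

w, b = 0.0, 0.0          # model parameters, start with no knowledge
learning_rate = 0.01

for _ in range(2000):
    for x, y in examples:
        prediction = w * x + b
        error = prediction - y
        # Gradient descent step: adjust parameters against the error.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned rule: y ~ {w:.2f}x + {b:.2f}")  # converges close to y = 2x + 1
```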

"As we start to see things, like autonomous vehicles, that clearly have the ability to save people but also cause harm, I think that people are scrambling to build a system of ethics," he said.

Cornell University introduced a data science course where students learnt to deal with ethical challenges - such as biased data sets that include too few lower-income households to be representative of the general population.
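The Cornell example about unrepresentative data can be illustrated with a toy calculation. The sketch below uses made-up numbers - the group sizes, incomes and sampling rates are assumptions for illustration only - to show how a survey that under-samples lower-income households overstates the population average.

```python
import random

random.seed(0)

# Hypothetical population: 40% lower-income, 60% higher-income households.
population = ([30_000] * 4_000) + ([90_000] * 6_000)

# A biased collection process that reaches lower-income households
# only 10% of the time, but higher-income households 80% of the time.
def biased_sample(pop):
    sample = []
    for income in pop:
        reach = 0.10 if income == 30_000 else 0.80
        if random.random() < reach:
            sample.append(income)
    return sample

sample = biased_sample(population)

true_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)

print(f"true average income:   {true_mean:,.0f}")    # 66,000
print(f"biased-sample average: {sample_mean:,.0f}")  # roughly 85,000
```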

Students also debated the use of algorithms to help automate life-changing decisions such as hiring or college admissions. "It was really focused on trying to help them understand what in their everyday practice as a data scientist they are likely to confront, and to help them think through those challenges more systematically," said Dr Solon Barocas, an assistant professor in information science who taught the course.

In another Cornell course, Dr Karen Levy, also an assistant professor in information science, is teaching her students to focus more on the ethics of tech firms. "A lot of ethically charged decision-making has to do with the choices a company makes: what products they choose to develop, what policies they adopt around user data," she said. "If data science ethics training focuses entirely on the individual responsibility of the data scientist, it risks overlooking the role of the broader enterprise."

The Harvard-MIT course, which has 30 students, focuses on the ethical, policy and legal implications of artificial intelligence. It was spurred and financed in part by a new artificial intelligence ethics research fund whose donors include Mr Reid Hoffman, a co-founder of LinkedIn, and the Omidyar Network, the philanthropic investment firm of eBay founder Pierre Omidyar.

The curriculum also covers the spread of algorithmic risk scores that use data - such as whether a person was ever suspended from school, or how many of his or her friends have arrest records - to forecast whether someone is likely to commit a crime.
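To make the idea of a risk score concrete: such systems typically combine features into a single number, for example with a weighted sum or a trained classifier. The toy function below is purely illustrative - the features are the ones the article names, while the weights and threshold are invented - and it shows why the curriculum's fairness questions arise: any choice of inputs and weights encodes judgments about people.

```python
# Toy illustration of an algorithmic risk score. The weights and
# threshold here are invented for this sketch; real systems are
# usually statistical models trained on historical records, which
# is exactly where questions about biased data enter.

def risk_score(ever_suspended: bool, friends_with_arrest_records: int) -> float:
    score = 0.0
    score += 2.0 if ever_suspended else 0.0      # arbitrary weight
    score += 0.5 * friends_with_arrest_records   # arbitrary weight
    return score

def flagged_as_high_risk(score: float, threshold: float = 3.0) -> bool:
    # The threshold is a policy choice, not a mathematical fact.
    return score >= threshold

print(risk_score(ever_suspended=True, friends_with_arrest_records=3))  # 3.5
print(flagged_as_high_risk(3.5))                                       # True
```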

Prof Ito hopes the course will spur students to ask basic ethical questions, such as: Is the technology fair? How do you make sure the data is not biased? Should machines be judging humans?

Some universities offer such programmes in their information science, law or philosophy departments. At Stanford, the computer science department will offer the new ethics course, tentatively titled "Ethics, Public Policy and Computer Science".

The expectations for the course are running high in part because of Prof Sahami's popularity on campus. About 1,500 students take his introductory computer science course every year. The new ethics course covers topics such as artificial intelligence and autonomous machines; privacy and civil rights; and platforms like Facebook.

Stanford political science professor Rob Reich, who is helping to develop the course, said students would be asked to consider those topics from the point of view of software engineers, product designers and policymakers.

Students will also be assigned to translate ideal solutions into computer code. "Stanford absolutely has a responsibility to play a leadership role in integrating these perspectives, but so does Carnegie Mellon and Caltech and Berkeley and MIT," said Stanford political science professor Jeremy Weinstein, who co-developed the ethics course. "The set of institutions that are generating the next generation of leaders in the technology sector have all got to get on this train."

NYTIMES


A version of this article appeared in the print edition of The Sunday Times on February 18, 2018, with the headline Ethics 101: Bringing morality to computer science.