An OpenAI-funded project aims to develop algorithms that can predict human moral judgments.
OpenAI Inc., OpenAI’s nonprofit organization, disclosed in an IRS filing that it awarded a grant to Duke University researchers for research on AI morality. Reached for comment, an OpenAI spokesperson said the award is part of a larger, three-year, $1 million grant to Duke professors studying “making moral AI.”
Little else is publicly known about the research. Walter Sinnott-Armstrong, the Duke professor of practical ethics who leads the study, declined to share details about the work. The grant runs through 2025.
Sinnott-Armstrong and his colleague Jana Borg have produced notable work in this area before, including a book on AI’s potential to serve as a “moral GPS” that helps humans make better judgment calls. They have also created an algorithm to help decide who receives kidney donations, and they have studied the circumstances in which people would trust AI to make moral decisions.
The goal of the OpenAI-funded work is to train algorithms to predict how humans would judge moral questions in medicine, law, and business. But building a system that reliably predicts human moral judgments is far from straightforward, and some question whether it’s even possible with today’s technology.
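To make the framing concrete, here is a deliberately simplified sketch of what “predicting human moral judgments” can look like as a machine-learning problem: supervised text classification, mapping a described scenario to a predicted human verdict. Nothing below reflects the Duke team’s actual methods or data, which haven’t been disclosed; the scenarios, labels, and model choice are all invented for illustration.

```python
# Hypothetical illustration only: NOT the Duke team's method or data,
# just a minimal supervised-learning framing of the task.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, invented training data: scenario descriptions paired with a
# majority human judgment label.
scenarios = [
    "A doctor lies to a patient to spare their feelings.",
    "A lawyer shreds evidence that would incriminate a client.",
    "A nurse breaks hospital policy to save a patient's life.",
    "A manager takes credit for a subordinate's work.",
]
labels = ["unacceptable", "unacceptable", "acceptable", "unacceptable"]

# Bag-of-words classifier: crude, but it shows the basic shape of the
# problem -- turning a described situation into a predicted judgment.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, labels)

print(model.predict(["An executive hides a product defect from regulators."]))
```

Even this toy version hints at the difficulty: the labels encode whatever the annotators happened to believe, and a model trained this way can only echo those judgments back, which is precisely the kind of limitation critics point to.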