Imagine you’re driving home. Up ahead, you see a busy crosswalk, so you hit the brakes, but your brakes don’t work. If you keep going forward, you’ll hit a woman who has just entered the crosswalk. Your only other option is to swerve, which means you’ll hit five men instead. What do you do?

Would your answer be different if the woman was elderly? What if the five men were criminals? How about if we replace the woman with a dog and the five men with a cat?

Now what if instead of driving the car, you were programming a self-driving vehicle? How would you want the car to be programmed?

These are some of the questions posed by the Moral Machine, an online experiment conducted by MIT. People’s responses have shed an interesting light on the problems involved in programming morality, as well as the hierarchy of human values.

What Would You Do?

The Moral Machine is designed as a game-like poll. Participants, who can include anyone with internet access, are given two possible scenarios and have to pick the one they consider preferable. Either way, someone will die.

  • Should you try to save as many people as possible?
  • Should you let the car mow down jaywalkers?
  • Should you let criminals die in order to save pets?
  • Should you take a person’s profession and age into account?

If you haven’t taken the poll yet, you can check it out now.

What Do People Value Most?

After you go through a few scenarios, you’ll be given a summary of your decisions, and you’ll be able to compare this against the choices made by others.

People around the world have already submitted millions of responses, which have been analyzed in a paper published in Nature.

According to a summary in Jalopnik, babies in strollers were spared most often, followed closely by girls, boys, and pregnant women. Cats came in dead last – quite literally. Interestingly, criminals were spared more often than cats but less often than dogs.

What Does This Mean for Self-Driving Cars?

Some of the scenarios presented in the experiment seem pretty far-fetched. For example, would you really be able to identify a bank robber as they crossed the street in front of you? It’s also possible that people who chose to kill a criminal in order to let a dog live in an online game might not make the same decision in real life.

Despite these shortcomings, the experiment raises some serious issues. When human drivers make life-and-death decisions, they do so in the moment. Programming a self-driving vehicle to make these decisions feels very different. How should companies developing self-driving cars address these dilemmas? Should they program cars to take details like a person’s age and gender into account?
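To see why programming these choices in advance feels so different from deciding in the moment, consider a purely hypothetical sketch of what a pre-committed priority rule might look like. Nothing here reflects how any real self-driving system works; the categories, weights, and function below are invented solely for illustration.

    # Hypothetical illustration only: a toy "priority policy" that a programmer
    # would have to commit to in advance, unlike a human deciding in the moment.
    # The categories and weights are invented for this example.

    PRIORITY_WEIGHTS = {
        "stroller": 1.0,
        "child": 0.9,
        "pregnant_woman": 0.9,
        "adult": 0.7,
        "elderly": 0.6,
        "criminal": 0.3,
        "dog": 0.2,
        "cat": 0.1,
    }

    def choose_outcome(stay_course, swerve):
        """Return the action whose potential victims carry the lower total weight.

        stay_course, swerve: lists of category strings describing who would be hit.
        """
        cost_stay = sum(PRIORITY_WEIGHTS.get(v, 0.5) for v in stay_course)
        cost_swerve = sum(PRIORITY_WEIGHTS.get(v, 0.5) for v in swerve)
        return "stay_course" if cost_stay <= cost_swerve else "swerve"

    # The opening scenario: one woman ahead, five men if the car swerves.
    print(choose_outcome(["adult"], ["adult"] * 5))  # -> "stay_course"

Even this toy version makes the discomfort obvious: every number in that table is a moral judgment that someone had to write down ahead of time.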

There were also regional differences. Overall, most people chose to save young people, but respondents in Asian countries were more likely to spare the elderly. Should self-driving cars be programmed differently depending on local values?
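If policies ever did vary by region, one way to express that, again entirely hypothetically, would be regional overrides layered on top of a default policy. The region names and numbers below are invented; no real system is known to work this way.

    # Hypothetical sketch of region-specific overrides on a default policy.
    # Regions and numbers are invented for illustration only.

    DEFAULT_WEIGHTS = {"child": 0.9, "adult": 0.7, "elderly": 0.6}

    REGIONAL_OVERRIDES = {
        "region_a": {},                # keep the defaults
        "region_b": {"elderly": 0.8},  # weight the elderly more heavily
    }

    def weights_for(region):
        """Merge a region's overrides onto the default weights."""
        return {**DEFAULT_WEIGHTS, **REGIONAL_OVERRIDES.get(region, {})}

    print(weights_for("region_b"))  # {'child': 0.9, 'adult': 0.7, 'elderly': 0.8}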

As people hand over the keys to self-driving cars, these questions will become increasingly important.