
TRAILBLAZERS WORKSHOP - PAM FERGUSSON CHARITABLE TRUST

ETHICS & MORALS

So semi-autonomous flying cars are here in NZ, but do we know what that means?

For years, experts have warned about the unanticipated effects of general artificial intelligence (AI) on society. Ray Kurzweil predicts that by 2029 intelligent machines will be able to outsmart human beings. Stephen Hawking argues that “once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate”. Elon Musk warns that AI may constitute a “fundamental risk to the existence of human civilization”.

Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is, and how it can be measured and optimised.

But in real-life situations, optimisation problems are complex. For example, how do you teach a machine to algorithmically maximise fairness, or to overcome the racial and gender biases in its training data? A machine cannot be taught what is fair unless the engineers designing the AI system have a precise conception of what fairness is.
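To see why "fairness" needs a precise definition before a machine can optimise it, here is a minimal Python sketch of one candidate definition: demographic parity, which asks that each group be approved at the same rate. The loan decisions, group labels, and the `approval_rate` helper are all invented purely for illustration; they are not taken from any real system.

```python
# A minimal sketch of one candidate definition of fairness:
# demographic parity, i.e. a model should approve each group of
# people at the same rate. All data below is invented for illustration.

def approval_rate(decisions, groups, group):
    """Fraction of positive decisions (1 = approved) received by `group`."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

# Hypothetical loan decisions and the group each applicant belongs to.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

rate_a = approval_rate(decisions, groups, "a")
rate_b = approval_rate(decisions, groups, "b")

# Under demographic parity, a fair model drives this gap towards zero.
print(f"group a: {rate_a:.2f}  group b: {rate_b:.2f}  gap: {abs(rate_a - rate_b):.2f}")
```

Notice that choosing demographic parity over some other definition (equal error rates across groups, for example) is itself a moral decision made by the engineers, not by the machine.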


WHAT SHOULD THE SELF-DRIVING CAR DO?!


Our friends at MIT have created the Moral Machine, a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. It presents moral dilemmas in which a driverless car must choose the lesser of two evils... does it kill two passengers or five pedestrians?

You can check it out on MIT's site here. If you do the judging there, you are then presented with results showing how your responses compare with those of other people.

If you are interested in why they built it, the scientific paper resulting from this site is published on Nature.com here (paywalled, unfortunately, but you can read the abstract).

We have taken an excerpt from the site to run the exercise during this workshop.
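As a rough idea of what the "compare your responses" step involves, here is a toy Python sketch that tallies a room's judgements per dilemma and reports how often others agreed with you. The `responses` and `my_answers` data structures are our own invention for this workshop, not MIT's actual code.

```python
# A toy tally of judgements, invented for this workshop -- not MIT's code.
from collections import Counter

# (dilemma id, option chosen) for each respondent in the room.
responses = [
    (1, "A"), (1, "A"), (1, "B"),
    (2, "B"), (2, "B"), (2, "B"),
]

my_answers = {1: "A", 2: "A"}  # your own choices

for dilemma, mine in sorted(my_answers.items()):
    votes = Counter(choice for d, choice in responses if d == dilemma)
    agreement = votes[mine] / sum(votes.values())
    print(f"Dilemma {dilemma}: you chose {mine}; "
          f"{agreement:.0%} of the room agreed with you")
```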



Exercise

Let’s get Judging!

Scenario A (Pedestrians)

  • A pregnant woman
  • A small girl
  • An elderly woman

All crossing legally on a green light.

[Image: car1.png]

Scenario B (Pedestrians)

  • A cat
  • Two dogs

Crossing illegally on a red light, unaccompanied by humans.


Scenario A (Pedestrians)

  • A pregnant woman
  • Two small girls

All crossing legally on a green light.

[Image: rac2.png]

Scenario B (Passengers)

  • An elderly man
  • Two men


Scenario A (Passengers)

  • Two young boys
  • A young girl
  • A woman

[Image: car3.png]

Scenario B (Pedestrians)

  • Two homeless men

Crossing legally on a green light.


Scenario A (Pedestrians)

  • Two homeless men

Crossing legally on a green light.

[Image: car4.png]

Scenario B (Pedestrians)

  • A woman executive
  • A man executive

Both crossing illegally on a red light.


IT'S NOT AS FAR-FETCHED AS YOU THINK...

[Image: spark car.jpg]


DISCUSSION

  1. Who programmed the car?

  2. Who makes the decisions about what rules it follows? (See the sketch after these questions.)

  3. Who gets in trouble if someone gets hurt?
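
To make question 2 concrete, here is a deliberately crude, entirely hypothetical Python rule set for dilemmas like the ones above. None of this reflects any real vehicle's software; the point is that every weight is a value judgement a person typed in.

```python
# A hypothetical, deliberately crude decision rule. Every number here
# is a human value judgement typed in by a programmer -- which is
# exactly what discussion question 2 is asking about.

def pedestrian_score(people, children, crossing_legally):
    points = people + 0.5 * children   # someone chose to weight children extra
    if crossing_legally:
        points += 1                    # someone chose to reward legal crossing
    return points

def passenger_score(people, children):
    return people + 0.5 * children

ped = pedestrian_score(people=3, children=1, crossing_legally=True)
pas = passenger_score(people=2, children=0)

protected = "pedestrians" if ped >= pas else "passengers"
print(f"This rule set would protect the {protected} ({ped} vs {pas})")
```

Change any one of these weights and the car decides differently, which is why questions 1 and 3 (who wrote the rules, and who is liable) matter so much.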



EXERCISE

Values

During the previous exercise, we discussed as a group the reasons why we would choose one scenario over another. This exercise is a chance for you to consider these values in a wider cultural sense.

