Programming Self-Driving Cars to Make Ethical Decisions

Engineers at Stanford University shed some light on what goes into programming self-driving cars to make ethical decisions.


It’s no secret that before self-driving cars become mainstream - whenever that may be - a slew of ethical dilemmas will need to be resolved.

How will self-driving cars make life-or-death decisions?

Should self-driving cars protect the occupants at all costs?

Should self-driving cars always minimize the loss of life?

Researchers need to teach self-driving cars how to make safe driving decisions, and engineers at Stanford University recently shed some light on what goes into programming them.

In the video above, Stanford uses the example of a self-driving car encountering an obstacle in the middle of its lane, and all the possibilities that need to be accounted for in the programming stages.

“We can treat that as a very hard, strict constraint and the vehicle will have to come to a complete stop to avoid the obstacle,” said Sarah Thornton, a PhD candidate in Stanford’s Dynamic Design Lab. “Another option would be to minimize how much it violates the double yellow line and veer very closely to the obstacle - very uncomfortable for the occupant in the passenger seat. The third scenario is to enter the oncoming traffic lane to give more space to the obstacle as you maneuver around it.”
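Thornton’s three options map onto a common pattern in motion planning: the obstacle is always a hard constraint, while the double yellow line can be treated either as another hard constraint or as a soft penalty traded off against clearance and occupant comfort. Below is a minimal, illustrative sketch of that idea in Python; it is not Stanford’s code, and every name, distance, and weight in it is an assumption made purely for illustration.

    # Illustrative sketch only: scoring candidate swerve paths when an obstacle
    # blocks the lane. The obstacle is always a hard constraint; the double
    # yellow line can be hard or soft. All values here are assumptions.
    OBSTACLE_Y = 0.0        # lateral position of the obstacle (m), centered in our lane
    BOUNDARY_Y = 1.8        # lateral position of the double yellow line (m)
    MIN_CLEARANCE = 0.3     # minimum acceptable gap to the obstacle (m)

    def path_cost(offset_m, boundary_is_hard, boundary_weight=0.5):
        """Cost of a candidate path passing offset_m meters to the left of the obstacle."""
        clearance = offset_m - OBSTACLE_Y
        if clearance < MIN_CLEARANCE:
            return float("inf")                       # obstacle is always a hard constraint
        cost = 2.0 / clearance                        # prefer more clearance (less discomfort)
        violation = max(0.0, offset_m - BOUNDARY_Y)   # how far the path crosses the double yellow
        if violation > 0.0:
            if boundary_is_hard:
                return float("inf")                   # hard constraint: never cross the line
            cost += boundary_weight * violation       # soft constraint: crossing is penalized
        return cost

    candidates = [0.0, 1.5, 2.5]  # stay on the original path, hug the line, enter the oncoming lane
    for hard in (True, False):
        costs = {c: path_cost(c, hard) for c in candidates}
        feasible = {c: v for c, v in costs.items() if v != float("inf")}
        choice = min(feasible, key=feasible.get) if feasible else "come to a complete stop"
        print(f"double yellow treated as hard constraint={hard}: {costs} -> choose {choice}")

In this toy setup, treating the double yellow line as a hard constraint forces the car to hug the line (or, if no candidate is feasible, to stop), while relaxing it to a soft penalty lets the planner buy extra clearance by briefly entering the oncoming lane - the same trade-off Thornton describes.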

Stanford developed a self-driving car named “Shelley” that can hit speeds up to 120 miles per hour. The custom Audi TTS has been tested at the three-mile Thunderhill Raceway in California, where Shelley averaged between 50 and 70 MPH and reached speeds of 110-120 MPH on the faster sections of the track.

Shelley was developed to study how the car adjusts its throttle and braking, and to record data from those maneuvers that can be used to improve collision avoidance software.



Comments

Steve Crowe · August 31, 2016 · 12:43 pm

You’re fired :) There’s gotta be some sort of logic to how programmers are making these decisions. I’ll try to do some digging and see what I can find.

Brett Pipitone · August 31, 2016 · 11:25 am

I’ve asked around, and the hierarchy I mentioned was apparently either informal or the product of one group within NASA. It’s been many years since I was there, but the folks I talked to immediately recognized it; they just didn’t think it was official. So I guess I somewhat overstated it in my first comment.

Brett Pipitone · August 25, 2016 · 9:35 am

I can’t seem to find a link either. I still know some folks doing that kind of work and have old paper documents I’ll look into. I’ll report back with findings.

Steve Crowe · August 24, 2016 · 1:42 pm

Hey Brett, do you have a link to that hierarchy? I did a quick search but didn’t find it. I’m sure others who don’t know it are intrigued, too.

Brett Pipitone · August 24, 2016 · 11:48 am

I’ve always been surprised that self driving cars don’t use the accepted risk hierarchy developed for aviation:
1. People on the ground
2. Passengers in the vehicle
3. Vehicle pilot
4. Property on the ground
5. Vehicle itself.
NASA has used this for decades.



