Google’s Top 5 AI Safety Concerns

Google outlines its top five safety concerns as artificial intelligence is applied in more general circumstances.

Photo Caption: Google DeepMind is a research division dedicated to artificial intelligence. In 2016, DeepMind's AlphaGo learning algorithm defeated one of the world's best Go players, Lee Sedol.

The rapid progress in machine learning and artificial intelligence (AI) has led to many doom-and-gloom conversations about the future of the human race. Google, however, believes most of those conversations have “been very hypothetical and speculative.”

Google, in collaboration with OpenAI, Stanford, and UC Berkeley, published a paper called “Concrete Problems in AI Safety” that outlines five problems Google thinks will be very important as AI is applied in more general circumstances.

To illustrate its concerns, Google paints a picture of a fictional robot whose job is to clean up messes in an office using common cleaning tools, explaining how the robot could behave undesirably. “These are all forward-thinking, long-term research questions—minor issues today, but important to address for future systems,” said Chris Olah of Google Research in a blog post.

Here are Google’s Top 5 AI Safety Concerns:

1. Avoiding Negative Side Effects

How can we ensure that our cleaning robot will not disturb the environment in negative ways while pursuing its goals, e.g. by knocking over a vase because it can clean faster by doing so? Can we do this without manually specifying everything the robot should not disturb?
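One mitigation the paper discusses is adding an "impact penalty" to the reward, so that disturbing the environment costs more than the speed gained. Here is a minimal toy sketch of that idea; the function name, weights, and numbers are illustrative, not from the paper.

```python
# Toy sketch of an impact-penalized reward (illustrative values only):
# the robot earns reward for messes cleaned, but pays a penalty for every
# object it disturbs along the way, e.g. a knocked-over vase.

def reward(messes_cleaned: int, objects_disturbed: int,
           impact_weight: float = 2.0) -> float:
    """Task reward minus a penalty for negative side effects."""
    return messes_cleaned - impact_weight * objects_disturbed

# Cleaning faster by knocking over the vase no longer pays off:
careful = reward(messes_cleaned=3, objects_disturbed=0)   # 3.0
reckless = reward(messes_cleaned=4, objects_disturbed=1)  # 2.0
```

With the penalty in place, the careful strategy outscores the reckless one even though it cleans one less mess, which is the behavior we actually want.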

2. Avoiding Reward Hacking

How can we ensure that the cleaning robot won’t game its reward function? For example, if we reward the robot for achieving an environment free of messes, it might disable its vision so that it won’t find any messes, or cover over messes with materials it can’t see through, or simply hide when humans are around so they can’t tell it about new types of messes.
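The vision-disabling failure above can be made concrete in a few lines. This is a hypothetical toy, not code from the paper: the reward depends on what the robot observes rather than on the true state of the world, so switching the camera off maximizes the proxy while the real messes remain.

```python
# Illustrative reward-hacking toy: the reward is computed from the robot's
# own observations, so blinding the sensor "achieves" a mess-free office.

def observed_messes(real_messes: int, camera_on: bool) -> int:
    return real_messes if camera_on else 0

def proxy_reward(real_messes: int, camera_on: bool) -> int:
    # Reward 1 only when zero messes are *observed* (not when zero exist).
    return 1 if observed_messes(real_messes, camera_on) == 0 else 0

# With 5 real messes, disabling the camera earns full reward:
assert proxy_reward(5, camera_on=True) == 0
assert proxy_reward(5, camera_on=False) == 1
```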

3. Scalable Oversight

How can we efficiently ensure that the cleaning robot respects aspects of the objective that are too expensive to be frequently evaluated during training? For instance, it should throw out things that are unlikely to belong to anyone, but put aside things that might belong to someone (it should handle stray candy wrappers differently from stray cellphones). Asking the humans involved whether they lost anything can serve as a check on this, but this check might have to be relatively infrequent – can the robot find a way to do the right thing despite limited information?
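One way to read the candy-wrapper/cellphone example is as a budgeted-oversight policy: use a cheap heuristic where the robot is confident, escalate to an expensive human query only occasionally, and default to the safe action otherwise. The sketch below is our own illustration of that pattern; the item names and query rate are made up.

```python
import random

# Hypothetical budgeted-oversight sketch: cheap heuristic first, an
# expensive human check only at a small query rate, safe default otherwise.

CHEAP_JUNK = {"candy wrapper", "used napkin"}  # illustrative

def decide(item: str, ask_human, query_rate: float = 0.1) -> str:
    if item in CHEAP_JUNK:
        return "trash"            # confident: the cheap heuristic suffices
    if random.random() < query_rate:
        return ask_human(item)    # expensive, infrequent check
    return "set aside"            # when unsure and unchecked, act safely
```

The safe default ("set aside") matters: limited oversight is tolerable only if the uncheck-ed cases fail toward caution rather than toward the trash can.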

4. Safe Exploration

How do we ensure that the cleaning robot doesn’t make exploratory moves with very bad repercussions? For example, the robot should experiment with mopping strategies, but putting a wet mop in an electrical outlet is a very bad idea.
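A simple version of safe exploration is to restrict random exploration to a vetted set of actions, rather than sampling from everything the robot could physically do. This is a minimal sketch of that idea; the action names are invented for illustration.

```python
import random

# Illustrative safe-exploration sketch: random exploration samples only
# from actions known to be safe, never from the unrestricted action space.

SAFE_ACTIONS = ["mop_figure_eight", "mop_straight_lines", "mop_spiral"]
ALL_ACTIONS = SAFE_ACTIONS + ["mop_electrical_outlet"]  # one is a very bad idea

def explore() -> str:
    # Exploration is confined to the vetted safe set.
    return random.choice(SAFE_ACTIONS)

assert all(explore() in SAFE_ACTIONS for _ in range(100))
```

The hard research question, of course, is where the safe set comes from when it can't be hand-written in advance.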

5. Robustness to Distributional Shift

How do we ensure that the cleaning robot recognizes, and behaves robustly in, an environment different from its training environment? For example, heuristics it learned for cleaning factory floors may be outright dangerous in an office.
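One standard response to distributional shift is to detect inputs unlike anything seen in training and fall back to a cautious default instead of applying a learned heuristic with false confidence. The sketch below is a hypothetical illustration; the floor types and strategy names are made up.

```python
# Hypothetical out-of-distribution check: if the input wasn't represented
# in training, defer to a human rather than apply a factory heuristic.

TRAINING_FLOOR_TYPES = {"concrete", "steel plate"}  # factory training data

def choose_strategy(floor_type: str) -> str:
    if floor_type not in TRAINING_FLOOR_TYPES:
        return "ask for help"   # novel environment: act cautiously
    return "power scrub"        # learned factory heuristic

assert choose_strategy("concrete") == "power scrub"
assert choose_strategy("carpet") == "ask for help"  # office floor is novel
```

Real systems can't enumerate training conditions in a set, so the research question becomes how a model can estimate its own uncertainty about unfamiliar inputs.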

About the Author

Steve Crowe · Steve Crowe is managing editor of Robotics Trends. Steve has been writing about technology since 2008. He lives in Belchertown, MA with his wife and daughter.


