Google: ‘Stop Freaking Out’ About AI
Eric Schmidt writes that artificial intelligence "has the potential not only to free us from the negative, but to enhance what’s most positive about us as human beings."
Google has said it before, and it’ll say it again: do not fear artificial intelligence (AI).
In an op-ed for Fortune titled “Let’s Stop Freaking Out About Artificial Intelligence,” Eric Schmidt, executive chairman of Google’s new parent company Alphabet, wrote that “AI has the potential not only to free us from the negative, but to enhance what’s most positive about us as human beings.” Schmidt wrote the piece alongside Sebastian Thrun, president and chairman of Udacity.
Schmidt referenced the public debate that has spurred fears of AI creating a “hypothetical dystopia,” but he said Google is taking a much more optimistic approach.
“The history of technology shows that there’s often initial skepticism and fear-mongering before it ultimately improves human life. The original Kodak camera was seen as destroying art,” Schmidt wrote. “Electricity was believed to be too dangerous when it was first introduced. But once these technologies got into the hands of millions of people, and they were developed openly and collaboratively, those fears subsided. Just as the agricultural revolution has freed us from spending our waking hours picking crops by hand in the fields, the AI revolution could free us from menial, repetitive, and mindless work. AI will do those things we don’t want to - like driving in bumper-to-bumper traffic.”
Elon Musk has routinely warned about the dangers of AI, likening it to “summoning the demon.” The Tesla and Space X founder previously warned that the technology could someday be more harmful than nuclear weapons.
Schmidt admitted the “doomsday scenarios” are “worth thoughtful consideration,” but he also wrote that no AI researcher wants to be part of a Hollywood science-fiction dystopia.
“For us, ultimately the hypothetical, long-term concerns are far outweighed by our excitement for the endless possibilities,” he wrote. “Even today AI is already doing a lot of good for all of us. We can’t wait to see AI free us of mindless, menial work and empower us to unfold our true creative powers.”
Google’s Top AI Concerns
Google did, however, recently outline its top concerns with poorly-designed AI systems in a paper called Concrete Problems in AI Safety. The concerns revolved around a fictional robot whose job is to clean up messes in an office using common cleaning tools, explaining how the robot could behave undesirably. And, again, the concerns seem more practical than any dooms-day scenario you might have heard:
1. Avoiding Negative Side Effects
How can we ensure that our cleaning robot will not disturb the environment in negative ways while pursuing its goals, e.g. by knocking over a vase because it can clean faster by doing so? Can we do this without manually specifying everything the robot should not disturb?
2. Avoiding Reward Hacking
How can we ensure that the cleaning robot won’t game its reward function? For example, if we reward the robot for achieving an environment free of messes, it might disable its vision so that it won’t find any messes, or cover over messes with materials it can’t see through, or simply hide when humans are around so they can’t tell it about new types of messes.
3. Scalable Oversight
How can we efficiently ensure that the cleaning robot respects aspects of the objective that are too expensive to be frequently evaluated during training? For instance, it should throw out things that are unlikely to belong to anyone, but put aside things that might belong to someone (it should handle stray candy wrappers differently from stray cellphones). Asking the humans involved whether they lost anything can serve as a check on this, but this check might have to be relatively infrequent – can the robot find a way to do the right thing despite limited information?
4. Safe Exploration
How do we ensure that the cleaning robot doesn’t make exploratory moves with very bad repercussions? For example, the robot should experiment with mopping strategies, but putting a wet mop in an electrical outlet is a very bad idea.
5. Robustness to Distributional Shift
How do we ensure that the cleaning robot recognizes,and behaves robustly, when in an environment different from its training environment? For example, heuristics it learned for cleaning factory workfloors may be outright dangerous in an office.
Google Has a lot to Gain from AI
This is not the first time the Google chairman has tried to diminish fears surrounding AI. And understandably so, as the company has been doing a lot in the space. Google DeepMind is a research division dedicated to AI. In 2016, DeepMind’s AlphaGo learning algorithm defeated one of the world’s best Go players, Lee Sedol.
Google self-driving cars scored a major win recently when the National Highway Transportation and Safety Administration (NHTSA) told Google that its AI can be considered a legal driver under federal law. In this setup, it’s possible that none of the human passengers would require a driving license.
Schmidt also touched upon self-driving cars in the Fortune op-ed:
“AI behind self-driving cars, most experts were convinced they would never be safe enough for public roads. But the Google Self-Driving Car team had a crucial insight that differentiates AI from the way people learn. When driving, people mostly learn from their own mistakes. But they rarely learn from the mistakes of others. People collectively make the same mistakes over and over again. As a result, hundreds of thousands of people die worldwide every year in traffic collisions.
“AI evolves differently. When one of the self-driving cars makes an error, all of the self-driving cars learn from it. In fact, new self-driving cars are “born” with the complete skill set of their ancestors. So collectively, these cars can learn faster than people. With this insight, in a short time self-driving cars safely blended onto our roads alongside human drivers, as they kept learning from each other’s mistakes.”