Harvard Creating AI That Works Like the Human Brain

The pattern-recognition and learning abilities of machines still pale in comparison to even the simplest mammalian brains.


The US government’s Intelligence Advanced Research Projects Activity (IARPA) has awarded three departments at Harvard University $28 million to develop artificial intelligence (AI) that can interpret, analyze, and learn information as successfully as humans.

The scientists at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS), Center for Brain Science (CBS), and the Department of Molecular and Cellular Biology will study the human brain to determine patterns and develop algorithms that can help improve existing AI systems.

The scientists say that while AI systems can handle more information than humans, AI can’t match a human brain’s ability to recognize patterns and learn information. Harvard researchers will record activity in the brain’s visual cortex, map its connections at a scale never before attempted, and reverse engineer the data to inspire better computer algorithms for learning.

“This is a moonshot challenge, akin to the Human Genome Project in scope,” project leader David Cox, assistant professor of molecular and cellular biology and computer science, tells SEAS News. “The scientific value of recording the activity of so many neurons and mapping their connections alone is enormous, but that is only the first half of the project. As we figure out the fundamental principles governing how the brain learns, it’s not hard to imagine that we’ll eventually be able to design computer systems that can match, or even outperform, humans.”

Cox said these systems could be designed to do everything from detecting network invasions, to reading MRI images, to driving cars.

The multi-stage effort begins in Cox’s lab, where rats will be trained to recognize various visual objects on a computer screen. As the animals are learning, Cox’s team will record the activity of visual neurons using next-generation laser microscopes built for this project with collaborators at Rockefeller University, to see how brain activity changes as the animals learn. Then, a substantial portion of the rat’s brain – one cubic millimeter in size – will be sent down the hall to the lab of Jeff Lichtman, where it will be diced into ultra-thin slices and imaged under the world’s first multi-beam scanning electron microscope, housed in the Center for Brain Science.

“This is an amazing opportunity to see all the intricate details of a full piece of cerebral cortex,” says Lichtman. “We are very excited to get started but have no illusions that this will be easy.”

Here’s more about the project from SEAS News:


“This difficult process will generate over a petabyte of data, equivalent to about 1.6 million CDs’ worth of information. This vast trove of data will then be sent to Hanspeter Pfister, whose algorithms will reconstruct cell boundaries, synapses, and connections, and visualize them in three dimensions.
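The petabyte-to-CD comparison is easy to sanity-check. A minimal sketch, assuming 700 MB per CD-ROM (the article does not state which CD capacity or petabyte definition it used):

```python
# Sanity check of the "about 1.6 million CDs" figure.
# Assumptions (not stated in the article): 700 MB per CD-ROM,
# and both the decimal and binary definitions of a petabyte.
CD_BYTES = 700 * 10**6   # 700 MB CD-ROM capacity

decimal_pb = 10**15      # SI petabyte
binary_pb = 2**50        # binary petabyte (pebibyte)

print(round(decimal_pb / CD_BYTES))  # ~1.43 million CDs
print(round(binary_pb / CD_BYTES))   # ~1.61 million CDs
```

The binary-petabyte figure comes out close to the article’s 1.6 million, so that is likely the definition behind the quoted number.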

“This project is not only pushing the boundaries of brain science, it is also pushing the boundaries of what is possible in computer science,” said Pfister. “We will reconstruct neural circuits at an unprecedented scale from petabytes of structural and functional data. This requires us to make new advances in data management, high-performance computing, computer vision, and network analysis.”




About the Author

Steve Crowe is managing editor of Robotics Trends. He has been writing about technology since 2008. He lives in Belchertown, MA with his wife and daughter.



