Deep Learning

An article in the Chronicle of Higher Education tells the story of Geoffrey Hinton, the computer scientist behind the neural nets whose applications we increasingly enjoy. From the article:

As a teenager, Hinton became fascinated with computers and brains. He could build electrical relays out of razor blades, six-inch nails, and copper wire in 10 minutes; give him an hour, and he’d give you an oscillator.

His view then was the same one he holds today: "If you want to understand how the mind works, ignoring the brain is probably a bad idea." Using a computer to build simple models to see if they worked—that seemed the obvious method. "And that’s what I’ve been doing ever since."

This was not an obvious view. He was the only person pursuing neural nets in his department at Edinburgh. It was hard going. "You seem to be intelligent," people told him. "Why are you doing this stuff?"

Hinton had to work in secret. His thesis couldn’t focus on learning in neural nets; it had to be on whether a computer could infer parts, like a human leg, in a picture. His first paper on neural nets wouldn’t pass peer review if it mentioned "neural nets"; it had to talk about "optimal networks." After he graduated, he couldn’t find full-time academic work. But slowly, starting with a 1979 conference he organized, he found his people.

"We both had this belief," says Terrence J. Sejnowski, a computational neurobiologist at the Salk Institute for Biological Studies and longtime Hinton collaborator. "It was a blind belief. We couldn’t prove anything, mathematical or otherwise." But as they saw rules-based AI struggle with things like vision, they knew they had an ace up their sleeve, Sejnowski adds. "The only working system that could solve these problems was the brain."
