Ever since I was an undergrad at the University of Michigan in the late 1970s, I have wondered how we think. Modern digital computers are marvelous machines, but they don’t hold a candle to what a very young child can do. For instance, it is absolutely trivial for you to recognize a bicycle when you see one in the distance, even if only a small part of the bike is visible. Similarly, many people can recognize a song, and instantly access the melody and the lyrics, after hearing only a couple of notes. Artificial intelligence programs on digital computers can be trained to do one such task well, but no single program exists that can do both.
Jeff Hawkins, founder of Palm and Handspring, has an article on Hierarchical Temporal Memory (HTM) in IEEE Spectrum: Learn Like a Human: Why Can’t a Computer Be More Like a Brain? It sounds like they have made solid inroads into creating a machine that actually thinks like a biological brain by modeling the neocortex. You don’t program it; instead, you expose the HTM to sensory input and it learns.
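To make that training style concrete, here is a toy sketch of my own, not Hawkins’ algorithm: nothing is programmed in, the model is simply exposed to a stream of input and learns its temporal structure. Real HTM uses sparse distributed representations and a hierarchy of cortical-column-like regions; this first-order sequence memory only captures the flavor. The class name `ToySequenceMemory` and the melody string are invented for illustration.

```python
from collections import defaultdict, Counter

class ToySequenceMemory:
    """A drastically simplified stand-in for HTM's learning loop:
    observe a stream of sensory input, learn which symbol tends to
    follow which, and predict what comes next. No rules are coded in."""

    def __init__(self):
        # transition counts: symbol -> Counter of symbols that followed it
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, symbol):
        """Expose the memory to one input; it learns, unprogrammed."""
        if self.prev is not None:
            self.transitions[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self, symbol):
        """Best guess at the next symbol, based only on past exposure."""
        following = self.transitions[symbol]
        return following.most_common(1)[0][0] if following else None

# "Hearing only a couple of notes": after exposure to a melody,
# the memory can continue it from any note it has seen.
melody = "EDCDEEE DDD EGG EDCDEEEEDDEDC"   # hypothetical toy input
memory = ToySequenceMemory()
for note in melody:
    memory.observe(note)

print(memory.predict("C"))  # likely 'D' -- learned, not programmed
```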
Back at UofM I envisioned a machine that could “see” well enough to drive a car. It had to rely on the same visual cues a human driver uses. This is not too tough in good weather when the car is moving slowly: just look for the lines on the side of the road and stay between them. But the problem becomes ridiculously difficult when it gets dark and starts to rain, lights are glaring everywhere, there is traffic on the road, and the car is careening down the highway at 70 MPH… You get the idea.
Hawkins’ work holds huge potential. Imagine having a mechanical co-driver in your car, always watching the road for hazards, one you could actually trust to make decisions you would agree with.