Last week I was at the AAAI Symposium on Knowledge Representation and Reasoning (KRR). Check out the schedule at that link, and the accepted papers presented as posters. Lots of good stuff there if, of course, you are into that kind of thing.
I think getting to hear Geoff Hinton and Doug Lenat recapitulate the battle of the neats vs. the scruffies was probably the highlight of the conference for many. That was, frankly, kind of fun. What I really took away from the symposium, though, was the diversity of methods that are available and that need to be reconciled. There is little doubt that the machinery under our veneer of conscious thought is much closer to neural network models than to the rules and assertions of symbolic logic. So what? Airplanes don’t flap their wings, as has been pointed out for most of the life of AI. There are times when being able to set forth a few facts and rules is a great and pragmatic efficiency hack. Josh Tenenbaum’s talk about the things babies learn before they can speak or even understand much of what is said – things like gravity, density, and the stability of towers of blocks – suggested that part of our thinking is akin to running simulations on a game engine.
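The game-engine idea is easy to caricature in code. Here is a toy, entirely hypothetical sketch of "intuitive physics as noisy simulation": judge whether a tower of blocks will fall by running many perturbed mental simulations of a crude stability check. The physics is deliberately simplistic (a static center-of-mass test, not a real engine), and every number is invented for illustration.

```python
# Toy "intuitive physics as simulation" sketch -- not any real system.
import random

def tower_stands(centers, width=1.0):
    """Static check: for each block, the center of mass of everything
    from that block up must sit over the block directly below it."""
    for i in range(1, len(centers)):
        above = centers[i:]
        com = sum(above) / len(above)
        if abs(com - centers[i - 1]) > width / 2:
            return False
    return True

def prob_stands(centers, noise=0.1, trials=1000, seed=0):
    """Noisy simulation: jitter each block's position and report the
    fraction of simulated worlds in which the tower stays up."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        noisy = [c + rng.gauss(0, noise) for c in centers]
        hits += tower_stands(noisy)
    return hits / trials

print(prob_stands([0.0, 0.05, 0.1]))  # nearly aligned: usually stands
print(prob_stands([0.0, 0.6, 1.2]))   # badly offset: usually falls
```

The point of the Monte Carlo wrapper is that graded judgments ("that looks wobbly") fall out of repeated noisy runs of a cheap deterministic model, which is roughly the flavor of the simulation story.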
People who are trying to figure out consciousness may not be happy with efficiency hacks, but people trying to engineer useful tools probably will be. Integrating those three models of cognition – neural networks, symbolic rules, and simulation – plus a few others that will come along is going to keep a lot of grad students locked in their labs and out of trouble at night.
The same theme of integrating multiple methods repeats itself at a smaller scale. Also present at the meeting were some people pushing forward with Probabilistic Soft Logic, and others with matrix factorization and universal schemas. How do we tie those in with networks trained on multi-modal content? I’m seriously interested to see how that works out over the next couple of years.
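For readers who haven't met the universal-schema idea, here is a minimal, hypothetical numpy sketch of the core trick: entity pairs are rows, relations (from both a curated KB and raw text patterns) are columns, and a low-rank factorization of the observed matrix predicts the unobserved cells. Every fact, dimension, and hyperparameter below is invented for illustration; none of it comes from the systems presented at the symposium.

```python
# Hypothetical universal-schema-style matrix factorization sketch.
import numpy as np

rng = np.random.default_rng(0)

pairs = ["(Paris, France)", "(Tokyo, Japan)", "(Berlin, Germany)"]
relations = ["capital_of", "located_in", "'X, capital of Y'"]

# 1.0 = fact observed (in a KB or as a text pattern).
X = np.array([
    [1.0, 1.0, 1.0],
    [1.0, 1.0, 0.0],   # text pattern unobserved for (Tokyo, Japan)
    [0.0, 1.0, 1.0],   # capital_of missing from the KB for Berlin
])
W = np.array([          # weight 0 marks "unobserved", not "false"
    [1.0, 1.0, 1.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0],
])

k, lr, reg = 2, 0.05, 0.01
P = rng.normal(scale=0.1, size=(len(pairs), k))      # pair embeddings
R = rng.normal(scale=0.1, size=(len(relations), k))  # relation embeddings

for _ in range(2000):  # gradient descent on the weighted squared loss
    E = W * (P @ R.T - X)
    P, R = P - lr * (E @ R + reg * P), R - lr * (E.T @ P + reg * R)

scores = P @ R.T  # a high score at an unobserved cell = predicted fact
print(np.round(scores, 2))
```

The design choice worth noticing is the weight matrix: unobserved cells are masked out of the loss rather than treated as negatives, so the low-rank structure learned from the observed cells is what fills them in. Real universal-schema systems use ranking losses and far richer parameterizations, but the fill-in-the-matrix shape of the problem is the same.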