It seems like I only make time to write on this blog when I’m at a conference. The reason is that I use these posts as my trip reports. 🙂
I’m at the NIPS conference in Montreal. Things are crazy here with 3700 attendees. What is really unusual about NIPS is that every accepted paper is presented as a poster. Some papers are then also selected for oral presentations of varying lengths: lots of 3-minute spotlight announcements and a handful of 30-minute talks. The other thing that is unusual for a conference of this size is that it is single-track. That sounded like a pretty cool arrangement when NIPS was a small conference, but I’m not so sure it works well at the current size. Good luck talking with the author of a popular paper during the poster session.
But I digress. I wanted to tell you about happy things, not moan about the world’s major machine learning conference.
So the good news here comes from Yoshua Bengio during the Deep Learning tutorial on Monday. Anyone doing optimization has worried about getting caught in local minima when trying to optimize some cost function. This is, in fact, a big deal when the cost function has a modest number of dimensions. But Bengio claims that as the number of dimensions increases, saddle points proliferate: with 100 or more dimensions, a true local minimum is very rare. Most of the time, what looks like a minimum will have at least one dimension along which it is actually a saddle, which lets the optimization continue. Furthermore, he claims that when you do hit a true local minimum, its value is usually not radically different from that of other local minima, or from the true global minimum. Thus, the title of this post.
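A quick way to build intuition for this (my own toy sketch, not anything from the tutorial) is to model the Hessian at a random critical point as a random symmetric matrix. A critical point is a local minimum only if every eigenvalue of the Hessian is positive, and for a random symmetric matrix that becomes vanishingly unlikely as the dimension grows:

```python
import numpy as np

def frac_all_positive(dim, trials=2000, seed=0):
    """Fraction of random symmetric matrices whose eigenvalues are all
    positive, i.e. that look like the Hessian of a local minimum rather
    than a saddle point. A crude stand-in for a 'random critical point'."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        h = (a + a.T) / 2.0  # symmetrize to get a Hessian-like matrix
        if np.all(np.linalg.eigvalsh(h) > 0):
            count += 1
    return count / trials

for d in (2, 5, 10, 20):
    print(d, frac_all_positive(d))
```

On this toy model the all-positive fraction is already essentially zero by 10 dimensions, which matches the spirit of Bengio’s claim, though of course real loss surfaces are not random matrices.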
All is not sweetness and light, of course. The gradient at a saddle point is effectively zero, so gradient-based methods can find it hard to escape. But Bengio’s observation provides hope that an escape is usually possible. Hmmm – do I hear the sound of simulated annealing coming back over the hill?
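To see both the stall and the escape concretely, here is a minimal sketch (my own example, not from the talk) on the classic saddle f(x, y) = x² − y². Plain gradient descent started along the saddle’s stable direction converges to the saddle and stops, while a little injected noise, the flavor of trick simulated annealing relies on, kicks the iterate onto the descending direction:

```python
import numpy as np

def f(p):
    x, y = p
    return x * x - y * y  # saddle point at the origin

def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

def descend(noise=0.0, steps=100, lr=0.1, seed=1):
    """Gradient descent on f, optionally with injected Gaussian noise
    (a crude stand-in for the randomness in simulated annealing)."""
    rng = np.random.default_rng(seed)
    p = np.array([1.0, 0.0])  # start exactly on the stable direction
    for _ in range(steps):
        p = p - lr * grad(p) + noise * rng.standard_normal(2)
    return f(p)

print(descend(noise=0.0))   # stalls: f stays at essentially 0
print(descend(noise=0.01))  # noise finds the descending direction: f drops far below 0
```

The noiseless run is the worst case Bengio is pointing at: the gradient signal along the escape direction is exactly zero, so only a perturbation reveals it.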