AKBC 2016

Earlier I blogged about the NAACL conference. The conference proper ran Monday through Wednesday, followed by workshops on Thursday and Friday. I’ll describe Thursday’s workshop on Question Answering in a future post. This one is about Friday’s workshop on Automated Knowledge Base Construction (AKBC). It was the highlight of the week for me, and I left re-energized to dig into some tasks I’ve been wanting to do for a while, along with some cool ideas for new ones.

The organizers put a lot of effort into the workshop, including humorous introductions for most of the invited speakers. For better or worse, there were more invited talks than contributed ones. I think this was for the better, as we got to hear about the efforts underway at several leading programs, and the speakers were good communicators. The disadvantage is that the slides of those talks are not available for download (at least not yet).

Oren Etzioni spoke about work underway at the Allen Institute for Artificial Intelligence, and he put in a plug for the poster on IKE (Interactive Knowledge Explorer), which looks like a very nice tool for helping with KB construction. Oren’s work on Semantic Scholar has me itching to do a lot more analysis on the text of sentences that cite other works in the literature.

Andrew McCallum spoke about recent progress on Universal Schemas, and put in a plug for a couple of posters from his group when those topics came up in his presentation: “Incorporating Selectional Preferences in Multi-hop Relation Extraction” by Rajarshi Das et al., and “Row-less Universal Schema” by Patrick Verga et al. Patrick’s work is very interesting because it lets us train a Universal Schema and then apply it even to entity pairs that were not seen at training time. There is complementary work on column-less Universal Schemas, and Patrick has identified merging the two as an area of future work.
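To make the row-less idea a bit more concrete, here is a minimal sketch of the scoring step, assuming a mean-pool aggregation over the relations observed for an entity pair (the paper also explores max pooling and attention; the dimensions and relation strings below are made up for illustration, and this is not the authors’ code):

```python
# Illustrative sketch of row-less Universal Schema scoring.
import numpy as np

rng = np.random.default_rng(0)
d = 50                                  # embedding size (assumed)
relations = ["born_in", "X was born in Y", "X, mayor of Y"]
rel_emb = {r: rng.normal(size=d) for r in relations}   # learned during training

def pair_embedding(observed_relations):
    """Represent an entity pair purely from the relations observed with it
    (mean pooling here), so no per-pair 'row' vector has to be learned."""
    return np.mean([rel_emb[r] for r in observed_relations], axis=0)

def score(observed_relations, query_relation):
    """Score how plausible the query relation is for the pair."""
    return float(pair_embedding(observed_relations) @ rel_emb[query_relation])

# An entity pair never seen in training, but observed with two textual patterns:
print(score(["X was born in Y", "X, mayor of Y"], "born_in"))
```

Because the pair representation is built on the fly from whatever relations have been observed, nothing about the pair needs to exist at training time.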

William Cohen spoke about statistical relational learning systems like ProPPR and the quality of results they produce. He also noted that they are hard to integrate into an end-to-end system. Neural networks are easy to integrate that way because their individual components are differentiable, so the whole system can be trained with backpropagation. He went on to talk about making logical systems that are differentiable, such as their most recent work on TensorLog.
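As an aside on why differentiability matters for integration, here is a toy sketch (my own, not TensorLog or ProPPR) of two composed modules trained end-to-end, with the gradient of a single loss flowing back through both via the chain rule:

```python
# Two "modules" composed into one pipeline; gradients flow through both,
# so one loss at the end trains the whole thing.
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))   # two modules
x, y = rng.normal(size=3), np.array([1.0])                  # toy example
lr = 0.1

for _ in range(100):
    h = np.tanh(W1 @ x)            # module 1
    pred = W2 @ h                  # module 2
    err = pred - y                 # gradient of squared-error loss
    grad_W2 = np.outer(err, h)
    grad_h = W2.T @ err            # chain rule: gradient flows back into module 1
    grad_W1 = np.outer(grad_h * (1 - h ** 2), x)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
```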

Chris Manning spoke about a couple of things. One was contrasting some neural networks against traditional methods, then reworking those networks for improved performance. He also spoke about Natural Logic inference: breaking complex sentences into clauses, then mutating the text of those clauses, to arrive at a set of simpler statements that are still valid inferences from the original text (for example, dropping a modifier in an upward-monotone context, so that “some rabbits run quickly” still licenses “some rabbits run”). That work has recently been extended to produce a broader range of inferences based on textual similarity, at the cost of shifting from guaranteed inferences to probabilities of being a correct inference. Some of us see that as a benefit.

Richard Socher spoke about Dynamic Memory Networks. The Episodic Memory Module in these networks is an exceedingly cool attention mechanism, and I recommend you read the paper “Dynamic Memory Networks for Visual and Textual Question Answering”. The composition of several modules into an overall system is probably a foretaste of things to come.
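For flavor, here is a much-simplified sketch of what an episodic memory pass looks like. The real module conditions a gated GRU on a richer set of attention features, so treat the softmax-and-sum below as a caricature of the mechanism rather than the paper’s architecture:

```python
# Simplified sketch: repeated attention passes over encoded facts,
# conditioned on the question and the memory from the previous pass.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def episodic_memory(facts, question, passes=3):
    """facts: (n_facts, d) encoded sentences; question: (d,) encoded question."""
    memory = question.copy()
    for _ in range(passes):
        # attention features: similarity of each fact to the question and to the memory
        scores = facts @ question + facts @ memory
        attn = softmax(scores)
        episode = attn @ facts          # attended summary of the facts
        memory = memory + episode       # simplified update (the paper uses a GRU)
    return memory

rng = np.random.default_rng(2)
facts = rng.normal(size=(5, 16))
question = rng.normal(size=16)
print(episodic_memory(facts, question).shape)   # (16,)
```

The multiple passes are what let the module chain together facts that only become relevant once an earlier fact has been attended to.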

The talks from Richard, Andrew, and Chris made me want to go out, get their code, and start playing around. Quite a bit of that code is available. Based on a quick peek before the flight home, CoreNLP now has an Open IE annotator that seems to use Natural Logic inference, although I don’t think it has the lexical similarity work in it yet. Patrick Verga, from Andrew’s group, has code on GitHub for working with a few different Universal Schema models, including the new row-less one. There are several reimplementations of Richard’s work on GitHub. I think I will have fun with these things over the summer!
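For anyone who wants to poke at the Open IE annotator, a quick way in is the CoreNLP server’s JSON output. This assumes a CoreNLP server already running locally on port 9000, and the JSON field names below are from memory, so check them against your CoreNLP version:

```python
# Quick sketch: request Open IE triples from a locally running CoreNLP server.
import json
import requests

text = "Obama was born in Hawaii."
props = {"annotators": "tokenize,ssplit,pos,lemma,depparse,natlog,openie",
         "outputFormat": "json"}
resp = requests.post("http://localhost:9000/",
                     params={"properties": json.dumps(props)},
                     data=text.encode("utf-8"))
doc = resp.json()

for sentence in doc["sentences"]:
    for triple in sentence.get("openie", []):
        print(triple["subject"], "|", triple["relation"], "|", triple["object"])
```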

I should note that my colleague Paul Groth submitted a paper to the AKBC workshop which was accepted for poster presentation. He couldn’t make the trip due to schedule issues, so I presented it for him. Unlike Verga’s paper, ours is not about pushing the state of the art in Universal Schemas. What it does show is how easy it was to use Universal Schemas and some other unsupervised techniques to build a system that helps maintain something we might call a medical knowledge base. Since the panel of experts at the end of the day predicted that medicine would be the big domain for NLP in the near-to-medium term, and also said they didn’t see many people working on integrating a flow of new knowledge into a knowledge base, you might want to check it out: “Applying Universal Schemas for Domain Specific Ontology Expansion”, Paul Groth et al.

One final comment: the organizers had a set of questions for the audience and panel. One of them was whether the next big challenge for NLP/AKBC is precision, recall, scalability, or something else. Recall was the dominant answer. Personally, I’d go for something else. Make no mistake, I think recall is both vital and difficult. But it seems to me that the question assumes too much. How are we going to evaluate recall, or even precision, on a large knowledge base that is automatically constructed and maintained? I think we need a fundamental advance in how we perform such evaluations before we can tell what kind of progress we are making on recall.
