
Seminar: Michael Strube


Issues I Don't Understand about Coreference and Coherence

What: Seminar
When: Feb 28, 2014, from 11:00 AM to 12:00 PM
Where: IF 4.31/4.33
Contact Phone: 0131 650 4446

Abstract:

Case 1: A few years ago we developed a hypergraph-based model for coreference resolution where hyperedges represent features and vertices represent mentions (Cai & Strube, Coling 2010). Since a hyperedge can connect more than two vertices, the model captured the set property of coreference relations nicely. The system performed well at the CoNLL'11 shared task on unrestricted coreference resolution (Cai et al., CoNLL-ST 2011). However, when we reduced the hypergraph to a normal graph and replaced the hypergraph clustering algorithm with a simple greedy clustering technique, the performance went up. The simplified system ranked 2nd in the CoNLL'12 shared task on English (Martschat et al., EMNLP-CoNLL-ST 2012). Furthermore, the performance did not even suffer when we turned the approach into an unsupervised one by leaving out the edge weights (Martschat, ACL Student Session 2013).
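The "simple greedy clustering technique" mentioned above can be illustrated with a minimal best-first sketch (this is an illustration only, not the authors' actual system): each mention links to its highest-scoring preceding candidate antecedent, and clusters emerge from these links. The `score` function and the toy mentions below are hypothetical stand-ins for the learned (or unweighted) edge scores.

```python
from collections import defaultdict

def greedy_coreference_clusters(mentions, score):
    """Best-first greedy clustering: each mention joins the cluster of its
    highest-scoring preceding mention, or starts a new cluster if no
    candidate scores above zero."""
    cluster_of = {}                 # mention -> cluster id
    clusters = defaultdict(set)     # cluster id -> set of mentions
    next_id = 0
    for i, m in enumerate(mentions):
        best, best_score = None, 0.0
        for antecedent in mentions[:i]:      # preceding mentions only
            s = score(antecedent, m)
            if s > best_score:
                best, best_score = antecedent, s
        if best is None:                     # no antecedent: new cluster
            cluster_of[m] = next_id
            next_id += 1
        else:                                # attach to antecedent's cluster
            cluster_of[m] = cluster_of[best]
        clusters[cluster_of[m]].add(m)
    return list(clusters.values())

# Toy example with a hypothetical pairwise score table.
pair_scores = {("Obama", "he"): 1.0, ("Michelle", "she"): 1.0}
clusters = greedy_coreference_clusters(
    ["Obama", "the president", "he", "Michelle", "she"],
    lambda a, m: pair_scores.get((a, m), 0.0),
)
```

Because each mention commits to a single antecedent, the procedure is linear in the number of candidate pairs and needs no global hypergraph partitioning, which is what makes its competitive performance surprising.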

Case 2: Barzilay & Lapata (ACL 2005, CL 2008) introduced the entity grid model to capture the entity-based local coherence of documents. They create a matrix where columns represent discourse entities and rows represent sentences; each cell indicates the syntactic function the entity occupies in that sentence (or its absence). Barzilay & Lapata compute probabilities of entity transitions, turn these into feature vectors, and then apply machine learning methods to distinguish between coherent and incoherent documents. We took the basic idea from Barzilay & Lapata, but interpreted the matrix as a bipartite graph. When we computed coherence using simple graph-based measures directly on this graph, the results were basically the same (Guinaudeau & Strube, ACL 2013).
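One of the simple graph-based measures can be sketched as follows (a minimal illustration, not the exact formulation of Guinaudeau & Strube): project the bipartite sentence-entity graph onto the sentences, connecting a sentence to each later sentence it shares an entity with, and score the document by the average out-degree. The grid representation below (one dict of entity presences per sentence) is an assumed simplification using binary occurrence rather than syntactic roles.

```python
def coherence(grid):
    """grid: list of dicts, one per sentence, mapping entity -> truthy value
    if the entity occurs in that sentence.  Projects the bipartite
    sentence-entity graph onto sentences (unweighted) and returns the
    average out-degree as a coherence score."""
    n = len(grid)
    out_degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            # An edge i -> j exists if the two sentences share any entity.
            if any(grid[i].get(e) and grid[j].get(e) for e in grid[i]):
                out_degree[i] += 1
    return sum(out_degree) / n

# A document whose sentences keep mentioning the same entity scores higher
# than one whose sentences share nothing.
connected = [{"obama": 1}, {"obama": 1}, {"obama": 1}]
disjoint = [{"obama": 1}, {"paris": 1}, {"budget": 1}]
```

A more coherent document yields a denser projection and thus a higher average out-degree; no training data or learned weights are involved, which is the point of the comparison in the abstract.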

Overall I am puzzled by the lackluster performance of machine learning techniques on these tasks and the competitiveness of our simple graph-based approaches:

- Are our graph-based approaches really that good? Or is the state-of-the-art just too weak?

- Are the tasks of coreference resolution and local coherence modeling ill-defined?

- Do we apply machine learning correctly to these tasks? Are there better ways to exploit annotated training data?

- Which features might help to finally improve the performance?

- Are current machine learning methods appropriate for such tasks?


Biography:
Michael Strube leads the Natural Language Processing group at the privately funded Heidelberg Institute for Theoretical Studies in Heidelberg, Germany. He is also Honorarprofessor in the Computational Linguistics Department at the University of Heidelberg. Michael Strube received an M.A. in German Language and Literature from the University of Freiburg in 1992 and a Ph.D. in Computational Linguistics from the same university in 1996. Before joining HITS he was awarded a postdoctoral fellowship at the Institute for Research in Cognitive Science at the University of Pennsylvania. Together with his former Ph.D. student Simone Paolo Ponzetto he received the Honorable Mention for the 2010 IJCAI-JAIR Best Paper Prize for their work on knowledge extraction from Wikipedia.
