Pre-viva talk: Michael Auli
Title: Integrated Supertagging and Parsing
May 09, 2012, 02:00 PM to 03:00 PM
Contact Name: Level 3 Admin
Contact Phone: 0131 650 4446
Parsing is the task of assigning syntactic or semantic structure to a natural language sentence. This thesis focuses on syntactic parsing with Combinatory Categorial Grammar (CCG; Steedman 2000). CCG allows incremental processing, which is essential for speech recognition and some machine translation models, and it can build semantic structure in tandem with syntactic parsing. Supertagging solves a subset of the parsing task by assigning lexical types to words in a sentence using a sequence model. It has emerged as a way to improve the efficiency of full CCG parsing (Clark and Curran, 2007) by reducing the parser's search space. This has been very successful and it is the central theme of this thesis.
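To make the search-space reduction concrete, here is a minimal sketch of supertagger-style pruning: each word receives a distribution over lexical categories, and only categories whose probability is within a factor beta of the best one are passed to the parser. The function name, the example scores, and the beta value are illustrative assumptions, not the actual Clark and Curran model.

```python
# Toy sketch of supertagger pruning (hypothetical scores, not a real model):
# for each word, keep only lexical categories whose probability is within
# a factor beta of that word's best category.

def prune_supertags(tag_probs, beta=0.075):
    """tag_probs: one {category: probability} dict per word."""
    pruned = []
    for probs in tag_probs:
        best = max(probs.values())
        kept = {cat: p for cat, p in probs.items() if p >= beta * best}
        pruned.append(kept)
    return pruned

# Illustrative distributions for "Mary saw John".
sentence_probs = [
    {"NP": 0.90, "N": 0.08, "S/S": 0.02},
    {"(S\\NP)/NP": 0.70, "(S\\NP)/S": 0.25, "S\\NP": 0.05},
    {"NP": 0.95, "N": 0.05},
]
print(prune_supertags(sentence_probs, beta=0.1))
```

A smaller beta keeps more categories per word, raising the parser's chance of finding the correct derivation at the cost of a larger search space; this is exactly the efficiency/accuracy trade-off the analysis below examines.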
We begin with an analysis of how efficiency is traded for accuracy in supertagging. Pruning the search space with a supertagger is inherently approximate; as a contrast, our analysis also includes A*, a classic exact search technique. Interestingly, we find that combining the two methods improves efficiency, but we also demonstrate that excessive pruning by a supertagger significantly lowers the upper bound on the accuracy of a CCG parser.
Inspired by this analysis, we design a single integrated model with both supertagging and parsing features, rather than separating them into distinct models chained together in a pipeline. To overcome the resulting complexity, we experiment with both loopy belief propagation and dual decomposition approaches to inference, to our knowledge the first empirical comparison of these algorithms on a structured natural language processing problem.
Finally, we address training the integrated model. We adopt the idea of optimising directly for a task-specific metric, as is common in other areas such as statistical machine translation. We demonstrate how a novel dynamic programming algorithm enables us to optimise for F-measure, our task-specific evaluation metric, and we experiment with approximations, which prove to be excellent substitutes.
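For reference, the F-measure in question is the standard harmonic mean of precision and recall over recovered dependencies, F = 2PR/(P+R). The tuple representation of dependencies below is purely illustrative.

```python
# Standard labelled dependency F-measure: harmonic mean of precision
# and recall over sets of (head, dependent, label) tuples.

def f_measure(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    correct = len(gold & predicted)
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

gold = {("saw", "Mary", "subj"), ("saw", "John", "obj")}
pred = {("saw", "Mary", "subj"), ("saw", "film", "obj")}
print(round(f_measure(gold, pred), 2))  # 0.5: precision = recall = 1/2
```

Because F-measure does not decompose over individual dependencies, maximising it during training is non-trivial, which is what motivates the dynamic programming algorithm described above.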
Each of the presented methods improves over the state-of-the-art in CCG parsing. Moreover, the improvements are additive, achieving a labelled/unlabelled dependency F-measure on CCGbank of 89.3%/94.0% with gold part-of-speech tags, and 87.2%/92.8% with automatic part-of-speech tags, the best reported results for this task to date. Our techniques are general and we expect them to apply to other parsing problems, including lexicalised tree adjoining grammar and context-free grammar parsing.