Contributions to Semantic Dependency Parsing : Search, Learning, and Application

Abstract: Semantic dependency parsing is the task of mapping natural language sentences into representations of their meaning in the form of directed graphs on words. These bilexical graphs are designed to capture sentence-internal predicate-argument relationships – they tell us “who did what to whom” in the given sentence. From an application point of view, semantic dependency parsing is interesting due to the ability of graphs to express language phenomena that cannot be modelled with the more restrictive tree structures used in syntactic parsing.

This thesis contributes to several aspects of semantic dependency parsing: the development of a new algorithm, an investigation of the connection between enforced graph-structural properties and learning behaviour, an analysis of the relevance of syntactic information in semantic dependency parsing, and the application of semantic dependency parsing techniques to the downstream task of negation resolution.

Semantic dependency parsing can be formalized as the search for directed acyclic graphs of maximal weight. As this search problem is intractable in the general case, we consider a restriction to the class of 1-endpoint-crossing graphs. We characterize this class as a subclass of graphs with pagenumber at most 3 and, based on our characterization, develop a polynomial-time inference algorithm for semantic dependency parsing.

Current parsers combine such inference algorithms with deep neural networks that are trained to induce graph weights from labelled data. We identify a problematic interaction between the standard training objective of these networks and structurally restricted search algorithms, and propose a modified objective to address this problem.

To improve the quality of the semantic dependency analyses provided by a state-of-the-art model, we extend the model with related syntactic dependency analyses.
However, only additional gold-standard syntactic information, not automatically learned representations, leads to consistent improvements. Our error analysis suggests a significant overlap between the two representations: language phenomena that are difficult to analyze syntactically also tend to be difficult to analyze semantically.

Finally, we demonstrate the strength of the techniques developed for parsing semantic dependencies in an application that benefits from both syntactic and semantic analysis. We reformulate the detection of negations and their scopes as a graph parsing problem, which makes the previously discussed techniques applicable and additionally allows arbitrary bilexical graph structures to be incorporated as features. Despite the conceptual simplicity of this novel end-to-end architecture, we achieve state-of-the-art results on a standard benchmark.
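The bilexical graphs at the heart of the abstract can be illustrated concretely. The sketch below (not taken from the thesis; the sentence, edge labels, and helper function are hypothetical illustrations) encodes a semantic dependency analysis as labelled directed edges over word positions, and checks the acyclicity requirement with a standard topological sort. Note the reentrancy: one word receives two incoming edges, which a tree-structured syntactic analysis could not express.

```python
# A minimal sketch of a bilexical semantic dependency graph: nodes are
# word positions, labelled directed edges encode predicate-argument
# relations ("who did what to whom"). All names here are illustrative.
from collections import defaultdict

sentence = ["Mary", "wants", "to", "buy", "a", "book"]

# Hypothetical analysis: (head, dependent, label) triples over indices.
edges = [
    (1, 0, "ARG1"),  # wants -> Mary
    (1, 3, "ARG2"),  # wants -> buy
    (3, 0, "ARG1"),  # buy -> Mary  (reentrancy: two incoming edges)
    (3, 5, "ARG2"),  # buy -> book
]

def is_dag(n, edges):
    """Kahn's algorithm: True iff the directed graph on n nodes is acyclic."""
    indeg = [0] * n
    succ = defaultdict(list)
    for head, dep, _ in edges:
        succ[head].append(dep)
        indeg[dep] += 1
    queue = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == n  # all nodes processed <=> no cycle

print(is_dag(len(sentence), edges))  # prints True
```

Parsing, in the formalization above, then amounts to searching over all such edge sets for the acyclic graph of maximal weight; the thesis restricts this search to 1-endpoint-crossing graphs to make it tractable.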
