Note: this is no longer a wiki, only a static archive of the original!

Background

Dependency Parsing

The fundamental idea in dependency-based parsing is that parsing crucially involves establishing binary relations between words in a sentence. This simple idea can be implemented in a variety of ways.
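As a concrete illustration (a minimal sketch, not taken from any of the systems cited below), a dependency analysis can be represented simply as a set of labelled head-dependent pairs over the words of a sentence; the example sentence, the relation labels, and the indexing scheme used here are illustrative assumptions.

    # A minimal sketch of a dependency analysis as binary head -> dependent
    # relations. The sentence, relation labels, and indexing are illustrative
    # assumptions, not the output of any particular parser.
    sentence = ["Economic", "news", "had", "little", "effect", "on", "markets"]

    # Each relation is (head, dependent, label), using 1-based word indices
    # and 0 for an artificial root token.
    dependencies = [
        (2, 1, "NMOD"),   # Economic <- news
        (3, 2, "SBJ"),    # news     <- had
        (0, 3, "ROOT"),   # had      <- root
        (5, 4, "NMOD"),   # little   <- effect
        (3, 5, "OBJ"),    # effect   <- had
        (5, 6, "NMOD"),   # on       <- effect
        (6, 7, "PMOD"),   # markets  <- on
    ]

    # Every word has exactly one head, so the relations form a tree over
    # the sentence (plus the artificial root).
    heads = {dep: head for head, dep, _ in dependencies}
    assert len(heads) == len(sentence)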

On the one hand, we have approaches that can be characterized as dependency parsing in a broad sense, where the actual form of the syntactic analysis may vary, but where the construction of the analysis nevertheless depends on finding syntactic relations between lexical heads. In this category, we find the widely used link grammar parser for English (Sleator and Temperley 1991), as well as the influential probabilistic parsers of Collins (1997) and Charniak (2000), but also a variety of other lexicalized parsing models that can be subsumed under the general notion of bilexical grammars (Eisner 2000).

On the other hand, we have what we may call dependency parsing in a narrow sense, which is the focus of the CoNLL shared tasks in 2006 and 2007, where the goal of the parsing process is to build a dependency structure consisting only of words connected by binary dependency relations. This tradition is represented by Maruyama (1990), Eisner (1996), Tapanainen and Järvinen (1997), Menzel and Schröder (1998), Duchier and Debusmann (2001), Yamada and Matsumoto (2003), Nivre et al. (2004), and McDonald et al. (2005), among others.

Until quite recently, most data-driven parsers (whether dependency-based or not) had been applied only to one or a few languages, but this situation has now changed dramatically, especially after the CoNLL shared task 2006 (Buchholz and Marsi 2006), where 19 different systems were used to parse 13 different languages. The CoNLL shared task 2007 continues this tradition through its multilingual track, adding several new languages.

For more information about specific approaches to dependency parsing, used in the CoNLL shared task 2006 and elsewhere, consult the parsing frameworks page. For a more thorough introduction to dependency parsing, try Joakim Nivre and Sandra Kübler's ACL tutorial or chapter 3 of Nivre (2006).

Domain Adaptation

One well-known characteristic of supervised parsing models is that they typically perform much worse on data that does not come from the training domain (Gildea 2001). Because annotating text with deep syntactic parse trees carries a large overhead, adapting parsers from domains with plentiful resources (e.g., news) to domains with few resources is an important problem. Work on domain adaptation falls roughly into two categories:

  1. Limited annotated resources in the new domain. Many studies have shown that large gains can be made when a small amount of annotated data exists in the new domain. This includes the work of Roark and Bacchiani (2003), Florian et al. (2004), Chelba and Acero (2004), Daumé and Marcu (2006), and Titov and Henderson (2006). Of these, Roark and Bacchiani (2003) and Titov and Henderson (2006) deal specifically with syntactic parsing.

  2. No annotated resources in the new domain. This is a more common problem and is considerably more difficult. Recent work by McClosky et al. (2006) and Blitzer et al. (2006) has shown that a large unlabeled corpus in the new domain can be leveraged in adaptation.

Another approach might be to use transductive learning, where the model is learned from both the training set and the unlabeled test set. Self-training (McClosky et al. 2006) can be considered a variant of transductive learning.
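As a rough sketch of the self-training idea (under assumed interfaces, not the actual setup of McClosky et al. 2006, who additionally use a reranker and make specific choices about how the automatically parsed data is weighted), the loop looks roughly like this:

    # Rough sketch of self-training for parser adaptation. The Parser
    # interface (train/parse) and the retraining schedule are assumptions
    # made for illustration; they are not taken from the cited papers.
    def self_train(parser, labeled_source, unlabeled_target, rounds=1):
        """Adapt a parser to a new domain using only unlabeled target text."""
        training_data = list(labeled_source)
        for _ in range(rounds):
            parser.train(training_data)
            # Parse the unlabeled target-domain sentences with the current
            # model and treat the (noisy) output as extra training material.
            self_labeled = [parser.parse(s) for s in unlabeled_target]
            training_data = list(labeled_source) + self_labeled
        parser.train(training_data)
        return parser

In the transductive variant, unlabeled_target would be the unlabeled test set itself, so the model is specialized to exactly the sentences it will be evaluated on.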
