Presentation Abstracts


Saturday, October 10



Minimalist Parsing: what kind of thing is it?
Prof. Ed Stabler, Nuance/UCLA


Parsing, the recognition of linguistic structure, is only one of many things that go on in human language understanding. From the perspective developed in this paper, one distinction of minimalist theories of parsing is the recognition that even when the focus is on linguistic structure, studying all the influences on structure at once is not feasible, so there are really many parsing problems, depending on how the problem is factored. Another distinction is the hypothesis that the core mechanisms are very simple, and that they are the very same mechanisms that much of theoretical linguistics aims to identify. Studying how those mechanisms operate in language understanding and acquisition, at various levels of abstraction, tells us something about what they are and how they must interface with other things in contexts of language use. This talk explores consequences of recent results (Kobele 2011, Graf 2011), with particular attention to the interface with phonology and to the role parsing plays in language acquisition.


Towards a Minimalist Machine: Combining linguistic theories
Prof. Jason Ginsburg, Osaka Kyoiku University, Japan
Prof. Sandiway Fong, University of Arizona

We have developed a Minimalist Machine that automatically constructs detailed derivations of over 100 core examples from recent important work in the Minimalist Program via a single unified theory. This theory incorporates Chomsky’s (2001) bottom-up phase-based system, Chomsky’s (2001) proposal that a single probe undergoes multiple-agreement relations in expletive constructions, Sobin’s (2014) theory that a light verb with split-agreement properties is involved in thematic-extraction and expletive constructions, Pesetsky & Torrego’s (2001) theory that uT checking and economy account for the English subject vs. object wh-movement asymmetry and that-trace effects, Gallego’s (2006) theory that uT checking and economy account for relative clause constructions, and Kayne’s (2002) theory that doubling constituents are involved in coreference relations. We also extend this unified theory to account for tough-constructions (Chomsky 1977, Munn 1994). We discuss the particular modifications that are necessary to combine the target linguistic theories, and we demonstrate how a computer model is ideal for incorporating core elements of a variety of theories into a single model.


Towards a Minimalist Machine: A stack based architecture
Prof. Sandiway Fong, University of Arizona
Prof. Jason Ginsburg, Osaka Kyoiku University

We have coded a stack-based system that automatically constructs detailed derivations in the Minimalist framework. In the first talk we describe the linguistic theories that we have implemented; in this talk, we describe the underlying architectural features and discuss the design choices that we have made. The system is a vehicle built not only to verify the implemented theories but also to explore how efficiently we can compute derivations without compromising linguistic design principles. We will discuss the compromises involved in balancing the elimination of conceptually unnecessary components against the goal of maximizing computational efficiency by eliminating search. We will also describe how economy and multiple derivations are implemented in this framework.

Left-corner Parsing of Minimalist Grammars
Prof. Tim Hunter, University of Minnesota

Much recent research in the experimental psycholinguistics literature revolves around the resolution of long-distance dependencies, and the manner in which the human sentence processor "retrieves" elements from earlier in a sentence that must be related in some way to the material currently being processed. A canonical instance is the resolution of a wh-dependency, where a filler wh-phrase must be linked with an associated gap site; in this case it is now well-established, for example, that humans actively predict gap sites in advance of definitive confirming bottom-up evidence. At present, however, there is no obvious way for discussion of these findings to be framed in terms of an MG parser. Stabler's 2013 top-down MG parser is incremental, but does not involve any corresponding notion of "retrieval": it requires that a phrase's position in the derivation tree be completely identified before the phrase can be scanned, which has the consequence that a filler cannot be scanned without committing to a particular location for its corresponding gap. In this talk I will attempt to develop a parsing algorithm that is inspired by Stabler 2013, but which allows a sentence-initial filler (such as a wh-word) to be scanned immediately while delaying the choice of the corresponding gap position.
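To make the contrast concrete, here is a toy sketch (my illustration, not Stabler's 2013 parser and not the algorithm of this talk) of the filler-storage idea: a wh-filler is scanned and stored immediately, and a gap site is only posited later, at the first position where one is licensed.

```python
def resolve_filler_gap(words, transitive_verbs=("see", "saw", "like", "liked")):
    """Link wh-fillers to gap positions without committing to a gap site
    at the moment the filler is scanned. Returns (filler, gap_index) pairs,
    positing each gap actively at the first licensed position (here,
    simplistically, the object slot after a transitive verb)."""
    fillers = []  # store: fillers scanned but not yet linked to a gap
    links = []
    for i, w in enumerate(words):
        if w in ("who", "what", "which"):
            fillers.append(w)  # scan the filler immediately; gap still open
        elif w in transitive_verbs and fillers:
            links.append((fillers.pop(), i + 1))  # gap in object position
    return links

links = resolve_filler_gap("who did you see".split())
# → [("who", 4)]: "who" is linked to the object gap after "see"
```

The point of the sketch is only that scanning the filler and choosing its gap are separate events, mediated by a store; the verb names and gap-licensing rule are placeholder assumptions.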

Formal Processing Theory, or Parsing Without Parsers
Prof. Thomas Graf, Stony Brook University

I argue that parsing research so far has operated at the wrong level(s) of granularity and that we should take a hint from formal language theory instead. Rather than compare the predictions of specific parsing models for specific phenomena, we should identify abstract properties that carve out classes of parsers and investigate what kind of processing patterns these classes can replicate. This view is more concrete than information-theoretic accounts, as it maintains a close connection to the structural inference mechanisms of parsing. At the same time, it deliberately avoids the pitfalls of combinatorial indeterminacy that arise in the comparison of specific parsing models. This "formal processing theory" thus opens up a way of studying parsing without parsers.


Sunday, October 11

Parsing Copy and Delete as Move
Meaghan Fowlie, UCLA

In the minimalist grammars proposed by Stabler, adjuncts are selected: they are not treated differently from arguments. Given that the properties of adjuncts differ from those of arguments, in being optional, transparent to selection, and ordered, among other things, I argue for a new model that treats adjuncts as distinct from arguments. These models, Minimalist Grammars with Adjunction (MGAs), are similar enough to traditional MGs to require only a minor extension of MG CKY-like parsers. MGAs add to Merge and Move a new function, Adjoin, which adjoins one phrase to another when the first is listed in the grammar as an adjunct of the second. The resulting phrase keeps its original head and category.

To account for ordering restrictions of the sort observed by Cinque (1999), hierarchies are added. The hierarchy level is tracked as a subpart of the category name; only adjuncts higher in the hierarchy than previously adjoined adjuncts may be adjoined. The adjoin rules are added to the parser such that when a phrase headed by a category feature is encountered, in addition to looking for a selector for it, the algorithm also seeks an adjunct of it, and something for it to adjoin to. Any that are found are added to the chart. The result is a parseable language that behaves more like natural language than traditional MG languages.
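As an illustrative sketch only (with assumed toy categories and hierarchy levels, not Fowlie's actual rule format), the chart extension can be pictured as one extra combination rule: an adjunct span combines with an adjacent host span, the result keeps the host's category, and the adjunct's hierarchy level must exceed that of any adjunct already attached.

```python
from collections import namedtuple

# Hypothetical simplified chart items: a span with a category and the
# hierarchy level of the most recently adjoined adjunct (None if none yet).
Item = namedtuple("Item", "cat start end last_level")

# Assumed toy adjunction table: which categories may adjoin to which,
# and their position in a Cinque-style hierarchy (higher = adjoined later,
# i.e. further from the head).
ADJUNCTS = {
    ("AdvP", "VP"): 2,   # e.g. a manner adverb adjoining to VP at level 2
    ("TempP", "VP"): 3,  # e.g. a temporal adverb adjoining to VP at level 3
}

def adjoin(adjunct, host):
    """Try to adjoin `adjunct` to a left-adjacent `host` span.

    The result keeps the host's category (and hence head); adjunction is
    licensed only if this adjunct sits higher in the hierarchy than any
    adjunct already attached to the host."""
    level = ADJUNCTS.get((adjunct.cat, host.cat))
    if level is None or adjunct.end != host.start:
        return None
    if host.last_level is not None and level <= host.last_level:
        return None  # ordering restriction: only higher adjuncts may follow
    return Item(host.cat, adjunct.start, host.end, level)

# Usage: a temporal adverb may be adjoined after a manner adverb, not before.
vp = Item("VP", 2, 4, None)
vp2 = adjoin(Item("AdvP", 1, 2, None), vp)    # VP span 1..4, last_level=2
vp3 = adjoin(Item("TempP", 0, 1, None), vp2)  # VP span 0..4, last_level=3
bad = adjoin(Item("AdvP", 0, 1, None), vp2)   # None: level 2 not > 2
```

In a full CKY-style parser this rule would simply be tried alongside the Merge and Move rules whenever a category-headed item enters the chart; the table above stands in for the grammar's listing of adjunct-host pairs.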

Parsing Ellipsis
Prof. Greg Kobele, University of Chicago

Minimalist syntax is awash with ideas about ellipsis; these, however, do not lend themselves immediately to efficient implementation. Here I demonstrate that two of the more influential theories, LF-copying and PF-deletion, can be thought of as different perspectives on one and the same theory. This theory embodies, in the minimalist grammar framework, the very same idea found in recent work on dynamic syntax and on type-logical grammar. I identify sources of intractability (in parsing from a string to a meaning) and discuss principled ways of resolving them.


Tone in Parsing
Prof. Kristine Yu, UMass Amherst

Although there has long been interest in using prosodic information from the speech signal in parsing (Cooper and Sorenson 1977, Morgan and Demuth 1998, Shriberg et al. 2000, i.a.), state-of-the-art syntactic parsing algorithms do not directly use prosodic information. This is not surprising, given the prevalence of mismatches between syntactic and prosodic constituency and the enormous individual variability in the production and comprehension of prosodic tunes (e.g. Jun and Bishop 2015, Speer and Foltz 2015).

However, in this talk, I show how accessing prosodic information from the speech signal is not just helpful but necessary for the syntactic parser, and I sketch how the minimalist parser might access and use prosodic information in syntactic analysis. My entry point here is the well-attested phenomenon of tonal morphemes, whose position is determined by an interaction of syntactic and phonological mechanisms. I present a case study involving parsing with an absolutive case high tone morpheme in Samoan.


Analyzing fMRI time courses with Minimalist Grammars
Prof. John Hale, Cornell University

Recently, we have begun to use Minimalist Grammars (MG) in the sense of Stabler (1997) to analyze the time course of blood oxygen level dependent (BOLD) signals from the brain. From these and other grammars, one can derive a time series of predictions by adding assumptions about how parsing might work (see e.g. Hale 2014). The question is, which of several alternative conceptions of grammar and parsing fits the neuroimaging data best? The results obtained so far support a view of the anterior temporal lobe as a kind of "combinatorial hub" for language. For instance, X-bar trees from minimalist grammars derive a positive predictor of BOLD signal in that region. This obtains even in regression models that include co-predictors based on Markov Models and Phrase Structure Grammars. We seek to extend this line of work using a variety of probabilistic linking hypotheses, such as surprisal and entropy reduction. Doing so brings up a variety of interesting issues in computational linguistics and cognitive science more generally.
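For readers unfamiliar with such linking hypotheses (my gloss, not the authors' code): surprisal is the negative log probability of a word given the sentence so far, and a word-by-word surprisal series can be regressed against the BOLD signal. A minimal computation over an assumed toy conditional probability table:

```python
import math

# Assumed toy table of P(word | context); in the actual work such
# probabilities would come from the prefix probabilities assigned by a
# grammar (e.g. a Minimalist Grammar), not from a hand-built table.
P = {
    ("the", "dog"): 0.4,
    ("dog", "barked"): 0.6,
}

def surprisal(context, word):
    """Surprisal in bits: -log2 P(word | context). Under this linking
    hypothesis, higher-surprisal words predict larger BOLD responses."""
    return -math.log2(P[(context, word)])

s = surprisal("dog", "barked")  # -log2(0.6) ≈ 0.74 bits
```

Entropy reduction, the other hypothesis mentioned, would instead track the drop in uncertainty over possible continuations from one word to the next.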

(Joint work with Jonathan Brennan (Michigan), Edward Stabler (Nuance), and Wen-Ming Luh (Cornell).)
