
Call for Papers:

Unshared Task at LENLS 13

Theory and System analysis with FraCaS, MultiFraCaS and JSeM Test Suites


This one-day task, focused on theory and system analysis with FraCaS and FraCaS-inspired test suites, will be held as part of Logic and Engineering of Natural Language Semantics 13 (LENLS 13), which takes place on November 13-15, 2016; see the LENLS 13 pages for full information. LENLS is an annual international workshop on formal syntax, semantics, and pragmatics.

The FraCaS test suite was created by the FraCaS Consortium as a benchmark for measuring and comparing the competence of semantic theories and semantic processing systems. It contains inference problems that collectively demonstrate basic linguistic phenomena that a semantic theory has to account for, including quantification, plurality, anaphora, ellipsis, tense, comparatives, and propositional attitudes. Each problem has the same form: given a natural language input T and a natural language claim H, the task is to determine whether H follows from T. Problems are designed to include exactly one target phenomenon, to exclude other phenomena, and to be independent of background knowledge.
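The T/H format described above can be captured in a small data structure. The sketch below is a minimal Python representation, assuming a simple three-way gold label ("yes", "no", "unknown") as used in FraCaS-style suites; the sample problem itself is invented for illustration and is not an actual suite item.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InferenceProblem:
    """A FraCaS-style inference problem: premises T, claim H, gold answer."""
    premises: List[str]   # the natural language input T (one or more sentences)
    hypothesis: str       # the natural language claim H
    answer: str           # gold label: "yes" | "no" | "unknown"

# Illustrative (invented) problem targeting quantification:
problem = InferenceProblem(
    premises=["Some Italian men became world-famous tenors."],
    hypothesis="Some men became world-famous tenors.",
    answer="yes",
)

print(problem.answer)
```

A system participating in such an evaluation would map each problem's premises and hypothesis to its own representation and output one of the three labels, to be compared against the gold answer.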

Following FraCaS, overlapping test suites are now available for a number of languages (notably, in addition to the original English: Farsi, German, Greek, Japanese, and Mandarin), which together cover both universal and language-specific semantic phenomena. With the problem sets categorised according to the semantic phenomena they involve, it is possible to focus on obtaining results for specific phenomena (within a language or cross-linguistically), as well as to strive for wide coverage.


Shared tasks typically provide "gold" analysed data with clear evaluation criteria for competing systems and have become popular in NLP. The so-called "unshared task" is an alternative to this format. In an unshared task, there are neither quantitative performance measures nor set problems that have to be solved. Instead, participants are given a common ground (e.g., data) and an open-ended prompt.

With the availability of the FraCaS, MultiFraCaS and JSeM test suites, the aim of this unshared task is for participants to put these resources to work as the basis for inspiring analysis, e.g., for showcasing a semantic theory, a semantic processing system, or a syntactic annotation model for the data.

We would also be interested to hear about the creation of complementary data for languages not yet represented by the existing test suites, about work concerning properties of the existing test suites, about cross-linguistic comparisons using the test suites, etc.

Since this is an unshared task, the use made of the datasets is up to the authors. Any of the datasets might serve as a benchmark for testing the approach taken (or even a computational model, for participants who go that far) and for reporting success levels on the problems (if applicable).


Papers should be submitted via the LENLS 13 EasyChair site as "Unshared Task" papers, and should mention "Unshared Task" in their title to distinguish them from other workshop submissions. Papers must be anonymous, up to 4 pages including figures and references, A4 or letter size, with 12-point font, and submitted electronically in PDF format at:

Submissions will be reviewed by the LENLS 13 program committee.

We plan for all accepted unshared task papers to be presented orally as 30-minute talks at the workshop, each followed by an open discussion.

Acceptance of submissions will be based on the reviewers' assessment, with a focus on ambition, thoroughness, and overall quality.

Once a submission is accepted, the authors are expected to submit a full paper (10-14 pages) before the workshop for inclusion in the LENLS 13 workshop proceedings. The online proceedings of the LENLS 13 workshop will be available at the conference site.

Where applicable, we encourage submitting data analysis (e.g., syntactic/semantic annotations) along with the paper as supplementary material, so that the community has a chance to explore in depth how the datasets have been used, rather than seeing only the few samples typically shown in a paper.


We invite papers that apply theoretical or computational analyses, or other ideas, to any of the following datasets (or subsets thereof) and describe their findings:


Same as LENLS 13 workshop paper submissions, see


Daisuke Bekki (Ochanomizu University/JST CREST/AIST AIRC/NII)
Alastair Butler (National Institute for Japanese Language and Linguistics (NINJAL))
Ai Kubota (National Institute for Japanese Language and Linguistics (NINJAL))
Yusuke Kubota (University of Tsukuba)
Koji Mineshima (Ochanomizu University/JST CREST)

Program Committee


Kawazoe, A., Tanaka, R., Mineshima, K., and Bekki, D. 2015. "An Inference Problem Set for Evaluating Semantic Theories and Semantic Processing Systems for Japanese." Proceedings of the Twelfth International Workshop on Logic and Engineering of Natural Language Semantics (LENLS 12).

Cooper, R., Crouch, D., van Eijck, J., Fox, C., van Genabith, J., Jaspars, J., Kamp, H., Milward, D., Pinkal, M., Poesio, M., Pulman, S., Briscoe, T., Maier, H., and Konrad, K. 1996. "Using the framework." Technical report, FraCaS: A Framework for Computational Semantics. FraCaS deliverable D16.

MacCartney, B., and Manning, C. D. 2007. "Natural logic for textual inference." Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, 193-200.

Last updated: Jun 29, 2016