
MARMARA TURKISH COREFERENCE RESOLUTION CORPUS

This repository contains the "Marmara Turkish Coreference Resolution Corpus".

The corpus is a layer on top of the "METU-Sabanci Turkish Treebank". For licensing reasons, only the coreference layer is published in this repository.

The main responsible person is Peter Schüller (http://www.peterschueller.com/, http://www.knowlp.com/).

TOOLS

  • tools-baseline/extract-documents.py

Transforms the 1960 XML files in tb_corrected.zip from the original METU-Sabanci Turkish Treebank distribution into 34 well-formed, UTF-8-encoded XML files containing uniquely addressable sentences/words. (The original XML files in the Treebank distribution are partially non-well-formed XML and encoded in windows-1254.)

The script takes as input the original files in the tb_corrected/ directory of the zip file tb_corrected.zip from the METU-Sabanci Turkish Treebank distribution.

See comments in the script for further instructions.
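The gist of the conversion can be illustrated with the following minimal Python sketch. It is illustrative only: the file names are placeholders and the ampersand-escaping shown here is a hypothetical repair step; the actual script handles more cases (see its comments).

    # Simplified sketch of the re-encoding/well-formedness repair done by
    # extract-documents.py (illustrative only; the real script does more).
    import re

    def repair_treebank_xml(in_path, out_path):
        # The original Treebank files are windows-1254 encoded.
        with open(in_path, 'r', encoding='windows-1254') as f:
            text = f.read()
        # Escape bare ampersands that would make the XML non-well-formed
        # (hypothetical repair step, shown for illustration).
        text = re.sub(r'&(?![a-zA-Z]+;|#\d+;)', '&amp;', text)
        # Write the result as UTF-8 with an explicit XML declaration.
        with open(out_path, 'w', encoding='utf-8') as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write(text)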

  • tools-baseline/xml-to-conll.py

Converts a document XML file, a coreference XML file, and a document name into a CoNLL file that can be used by the reference scorer.
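The coreference column of such a CoNLL file uses the bracket notation of the CoNLL shared task format expected by the reference scorer. The following sketch shows how mention spans are turned into that column; it is an illustration of the notation, not the actual code of xml-to-conll.py, and the other columns of the file are omitted.

    # Illustrative sketch: building the CoNLL coreference column from spans.
    def coref_column(n_tokens, mentions):
        """mentions: list of (chain_id, start, end) token indices, end inclusive."""
        cells = [[] for _ in range(n_tokens)]
        for chain, start, end in mentions:
            if start == end:
                cells[start].append('({0})'.format(chain))   # one-token mention
            else:
                cells[start].append('({0}'.format(chain))    # mention opens here
                cells[end].append('{0})'.format(chain))      # mention closes here
        return ['|'.join(c) if c else '-' for c in cells]

    # Example: tokens 0-1 and token 4 are mentions of chain 3.
    print(coref_column(5, [(3, 0, 1), (3, 4, 4)]))
    # -> ['(3', '3)', '-', '-', '(3)']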

  • tools-baseline/conll-to-xml.py

Converts a CoNLL file in the format understood by the reference scorer back into a coreference XML file.
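For the reverse direction, the coreference column can be parsed back into mention spans with a per-chain stack, as in the sketch below (again only an illustration of the notation, not the code of conll-to-xml.py).

    # Illustrative sketch: recovering mention spans from a coreference column.
    def parse_coref_column(cells):
        """cells: one string per token, e.g. ['(3', '3)', '-', '-', '(3)']."""
        open_mentions = {}   # chain id -> stack of start positions
        spans = []           # (chain_id, start, end), end inclusive
        for i, cell in enumerate(cells):
            if cell == '-':
                continue
            for part in cell.split('|'):
                chain = int(part.strip('()'))
                if part.startswith('(') and part.endswith(')'):
                    spans.append((chain, i, i))
                elif part.startswith('('):
                    open_mentions.setdefault(chain, []).append(i)
                else:  # closes the most recently opened mention of this chain
                    spans.append((chain, open_mentions[chain].pop(), i))
        return spans

    print(parse_coref_column(['(3', '3)', '-', '-', '(3)']))
    # -> [(3, 0, 1), (3, 4, 4)]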

BASELINE

For the baseline scorers to work, you will need to initialize and update the submodule of this repository: run git submodule init followed by git submodule update.

  • tools-baseline/predictmentions.py

Mention detection baseline: reads an XML document from the Turkish Treebank and produces an XML document with mentions. Can create dummy chains so that the scorer will provide a mention detection score (see the sketch below).
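The idea behind the dummy chains is that the scorer only evaluates coreference chains, so predicted mentions must be wrapped into artificial chains to obtain a mention detection score. A minimal sketch of that idea follows; the actual chain layout used by predictmentions.py may differ.

    # Sketch of the "dummy chain" idea (illustrative only).
    def mentions_to_dummy_chains(mentions, singleton=True):
        """mentions: list of (start, end) spans; returns a list of chains."""
        if singleton:
            return [[m] for m in mentions]   # one chain per mention
        return [list(mentions)]              # all mentions in one big chain

    print(mentions_to_dummy_chains([(0, 1), (4, 4)]))
    # -> [[(0, 1)], [(4, 4)]]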

  • tools-baseline/testpredictmentions.sh

Runs mention detection with predictmentions.py on all documents and runs the scorer.

  • tools-baseline/crossvalidate_coref.py

Takes a list of K pairs of XML documents and gold mention/coreference chain XML files, a directory name for storing output, and a Python string specifying the machine learning method.

Performs K-fold cross-validation over all K given documents, scores each fold, and stores a model for each fold.
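A minimal sketch of that cross-validation setup is shown below, assuming mention-pair features and labels have already been extracted per document. It is illustrative only: the helper structure is hypothetical, and feature extraction, scoring, and file handling in crossvalidate_coref.py are more involved. SVC is used here because it is one of the methods mentioned for this corpus; the actual method is configurable.

    # Minimal sketch of K-fold cross-validation over documents (illustrative).
    import pickle
    from sklearn.model_selection import KFold
    from sklearn.svm import SVC

    def crossvalidate(documents, outdir, k):
        """documents: list of (features, labels) pairs, one per document."""
        kf = KFold(n_splits=k)
        for fold, (train_idx, test_idx) in enumerate(kf.split(documents)):
            X = [x for i in train_idx for x in documents[i][0]]
            y = [y_ for i in train_idx for y_ in documents[i][1]]
            model = SVC()                        # the ML method is configurable
            model.fit(X, y)
            with open('{0}/model_fold{1}.pkl'.format(outdir, fold), 'wb') as f:
                pickle.dump(model, f)            # one stored model per fold
            # ... score the held-out documents in test_idx here ...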

  • tools-baseline/predictcoreference.py

Takes a document XML file, a mention/coreference XML file, and a model as generated by crossvalidate_coref.py. Predicts coreference for the given mentions and stores the result in an output XML file.
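Conceptually, such a prediction step can be sketched as loading the stored model, classifying mention pairs, and linking positively classified pairs into chains. The sketch below is only one possible strategy, with a hypothetical pair_features helper; the features and linking strategy used by predictcoreference.py may differ.

    # Sketch of prediction from given mentions and a stored model (illustrative).
    import pickle

    def predict_chains(mentions, pair_features, model_path):
        """mentions: list of spans; pair_features(m1, m2) -> feature vector (hypothetical)."""
        with open(model_path, 'rb') as f:
            model = pickle.load(f)
        chain_of = {m: {m} for m in mentions}            # start with singletons
        for i, m1 in enumerate(mentions):
            for m2 in mentions[i + 1:]:
                if model.predict([pair_features(m1, m2)])[0] == 1:
                    merged = chain_of[m1] | chain_of[m2]  # merge the two chains
                    for m in merged:
                        chain_of[m] = merged
        # deduplicate merged sets into a list of chains
        return [set(c) for c in {frozenset(c) for c in chain_of.values()}]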

  • tools-baseline/testcrossval.sh

Runs crossvalidate_coref.py with appropriate arguments. Also runs predictcoreference.py to demonstrate usage of that tool.

By default this script runs on the two smallest documents and tests coreference prediction on one of them. This is not meaningful in terms of scores, but it is fast and demonstrates the usage of the script. Running with all documents can take several hours and more than 40 GB of RAM, depending on the configuration. (The SVC method on gold mentions requires less than 10 GB.)


MAINTAINER

  • Peter Schüller (http://www.peterschueller.com/, http://www.knowlp.com/)

CONTRIBUTORS

  • Kübra Cıngıllı (2016)
  • Ferit Tunçer (2015,2016)
  • Hacer Ezgi Karakaş (2016)
  • Barış Gün Sürmeli (2015)
  • Ayşegül Pekel (2015)

ACKNOWLEDGEMENTS

This project has been supported by The Scientific and Technological Research Council of Turkey (TUBITAK) under grant agreement 114E430.