Overview

This repository contains code for the papers:

LSTM Shift-Reduce CCG Parsing
Wenduan Xu
In Proc. EMNLP 2016

Expected F-Measure Training for Shift-Reduce Parsing with Recurrent Neural Networks
Wenduan Xu, Michael Auli, and Stephen Clark
In Proc. NAACL 2016

EMNLP 2016 Models

Required files and dependencies

  • Download the config files; untar as rnn_parser_config_files
  • Download the models; untar as emnlp_models
  • Check out the branch: mark16-test-cathash-fix-flags-punct-xf1-trainer-rnn-May2015-new-rnn-super-new-loss-LSTM-cnn
  • Note: only tested on Ubuntu; the other external dependencies are Boost and Armadillo (a setup sketch follows this list)
  • DyNet (then called CNN) is bundled with the source
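
On a stock Ubuntu machine, the external dependencies are available from the standard package repositories. A minimal setup sketch; the apt package names and the archive file names are assumptions and may differ on your system:

sudo apt-get install build-essential libboost-all-dev libarmadillo-dev
tar -xzf rnn_parser_config_files.tar.gz   # assumed archive name for the config files
tar -xzf emnlp_models.tar.gz              # assumed archive name for the models
git checkout mark16-test-cathash-fix-flags-punct-xf1-trainer-rnn-May2015-new-rnn-super-new-loss-LSTM-cnn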

To build

make clean
make -j4

Sanity check

This check uses the XF1 model reported in the paper. First, POS tag WSJ00 using the C&C POS tagger:

ccsr2015/bin/pos --model rnn_parser_config_files/pos --input rnn_parser_config_files/wsj00.raw --output rnn_parser_config_files/wsj00.apos
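
The tagger writes one sentence per line as pipe-delimited word|POS pairs, matching the parser's --ifmt '%w|%p \n' below. For illustration only, a line of wsj00.apos would look like this (the tokens and tags here are hypothetical):

Pierre|NNP Vinken|NNP ,|, 61|CD years|NNS old|JJ ...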

Then, supertag and parse WSJ00 with the XF1 model and a beam size of 1 (--srbeam 1), printing the dependency output to parser_output:

ccsr2015/bin/lstm_parser --model rnn_parser_config_files/parser/ --super ./rnn_parser_config_files/super/ --mode 0 --use_rnn_super false --lstm_super_model tagger_1_100_256-pid22327.params --beta 0.06 --use_super true --use_act true --use_pos true --use_biqueue true --model_dir emnlp_models --model_epoch 12 --srbeam 1  --ifmt '%w|%p \n'  --printer deps --input rnn_parser_config_files/wsj00.apos --output parser_output

Evaluate the output:

ccsr2015/src/scripts/ccg/evaluate rnn_parser_config_files/wsj00.stagged rnn_parser_config_files/wsj00.ccgbank_deps parser_output
note: all these statistics are over just those sentences
      for which the parser returned an analysis

cover: 100.00% (1913 of 1913 sentences parsed)

cats:  94.41% (42885 of 45422 tokens correct)
csent: 48.09% (920 of 1913 sentences correct)

lp:    89.68% (34460 of 38426 labelled deps precision)
lr:    85.29% (34460 of 40405 labelled deps recall)
lf:    87.43% (labelled deps f-score)
lsent: 35.96% (688 of 1913 labelled deps sentences correct)

up:    94.87% (36455 of 38426 unlabelled deps precision)
ur:    90.22% (36455 of 40405 unlabelled deps recall)
uf:    92.49% (unlabelled deps f-score)
usent: 37.27% (713 of 1913 unlabelled deps sentences correct)

skip:   8.19% (3430 of 41856 ignored deps (to ensure compatibility with CCGbank))
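
As a quick consistency check on these numbers, each F-score is the harmonic mean of the corresponding precision and recall; for example, for the labelled scores (python3 is assumed to be installed):

python3 -c "p, r = 0.8968, 0.8529; print(2 * p * r / (p + r))"   # prints ~0.8743, i.e. lf = 87.43%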

Training

  • Train the cross-entropy model (the weights will be used as the starting point for training the XF1 model):
mkdir greedy.parser.model.lstm.greedy
mkdir greedy.parser.model.lstm.greedy/dev_out
cp emnlp_models/tagger_1_100_256-pid22327.params greedy.parser.model.lstm.greedy/
ccsr2015/bin/lstm_parser --model rnn_parser_config_files/parser/ --super ./rnn_parser_config_files/super/ --mode 2 --use_rnn_super false --lstm_super_model tagger_1_100_256-pid22327.params --use_super true --use_act true --use_pos true --use_biqueue true --model_dir greedy.parser.model.lstm.greedy --srbeam 1 --input ./rnn_parser_config_files/std_train/new_emb_dict_and_pos_feat/sfef/wsj02-21.stagged.cv0001.sfef --input_dev ./rnn_parser_config_files/wsj00.apos --printer deps --output greedy.parser.model.lstm.greedy/dev_out/out  
  • Train the XF1 model (requires the cross-entropy model trained above; a sketch for evaluating the trained models follows this list):
mkdir xf1.parser.beam8.model.3
mkdir xf1.parser.beam8.model.3/dev_out
ccsr2015/bin/lstm_parser --model rnn_parser_config_files/parser/ --super ./rnn_parser_config_files/super/ --mode 3 --xf1_dropout true --use_rnn_super false --lstm_super_model tagger_1_100_256-pid22327.params --beta 0.06 --cv_supercat_file_lstm wsj02-21.cv.0.06.all.cv_pos --use_super true --use_act true --use_pos true --use_biqueue true --model_dir greedy.parser.model.lstm.greedy --model_epoch 26 --model_dir_xf1 xf1.parser.beam8.model.3 --srbeam 8 --input ./rnn_parser_config_files/std_train/new_emb_dict_and_pos_feat/sfef/wsj02-21.stagged.cv0001.full --input_dev ./rnn_parser_config_files/wsj00.apos --printer deps --output xf1.parser.beam8.model.3/dev_out/out
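
Both trainers write dev-set parses under their dev_out directories, so training progress can be tracked with the same evaluate script used in the sanity check, and a trained XF1 model can be used for decoding by pointing --model_dir at its directory. A sketch, assuming the trained weights follow the same file layout as the released models (the epoch number 20 is a placeholder; pick the best dev epoch):

ccsr2015/src/scripts/ccg/evaluate rnn_parser_config_files/wsj00.stagged rnn_parser_config_files/wsj00.ccgbank_deps xf1.parser.beam8.model.3/dev_out/out

ccsr2015/bin/lstm_parser --model rnn_parser_config_files/parser/ --super ./rnn_parser_config_files/super/ --mode 0 --use_rnn_super false --lstm_super_model tagger_1_100_256-pid22327.params --beta 0.06 --use_super true --use_act true --use_pos true --use_biqueue true --model_dir xf1.parser.beam8.model.3 --model_epoch 20 --srbeam 8 --ifmt '%w|%p \n' --printer deps --input rnn_parser_config_files/wsj00.apos --output parser_output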