Commits

jro...@gmail.com committed 06675fd

escaped newlines in shell commands for copy-and-paste goodness

  • Parent commits 4a3ef0d

Files changed (1)

 Run the following command from UPDOWN_DIR to preprocess the Obama-McCain Debate dataset of Shamma et al. (2009):
 
 {{{
-$ updown preproc-shamma data/shamma/orig/debate08_sentiment_tweets.tsv
+$ updown preproc-shamma data/shamma/orig/debate08_sentiment_tweets.tsv \
     src/main/resources/eng/dictionary/stoplist.txt > data/shamma/shamma-features.txt
 }}}
 
 Run the following command to preprocess the training portion of our Healthcare Reform (HCR) dataset (used only to train a supervised model for comparison with the semi-supervised models of interest):
 
 {{{
-$ updown preproc-hcr data/hcr/train/orig/hcr-train.csv src/main/resources/eng/dictionary/stoplist.txt >
+$ updown preproc-hcr data/hcr/train/orig/hcr-train.csv src/main/resources/eng/dictionary/stoplist.txt > \
     data/hcr/train/hcr-train-features.txt
 }}}
 
 Run the following command to preprocess the development portion of our Healthcare Reform dataset:
 
 {{{
-$ updown preproc-hcr data/hcr/dev/orig/hcr-dev.csv src/main/resources/eng/dictionary/stoplist.txt >
+$ updown preproc-hcr data/hcr/dev/orig/hcr-dev.csv src/main/resources/eng/dictionary/stoplist.txt > \
     data/hcr/dev/hcr-dev-features.txt
 }}}
 
 Run the following command to preprocess the test portion of our Healthcare Reform dataset:
 
 {{{
-$ updown preproc-hcr data/hcr/test/orig/hcr-test.csv src/main/resources/eng/dictionary/stoplist.txt >
+$ updown preproc-hcr data/hcr/test/orig/hcr-test.csv src/main/resources/eng/dictionary/stoplist.txt > \
     data/hcr/test/hcr-test-features.txt
 }}}
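 
 Since all three HCR splits follow the same directory layout, the three commands above can also be run as a single shell loop. This is just a convenience sketch, assuming a POSIX shell and the paths shown above:
 
 {{{
 $ for split in train dev test; do
     # each split writes data/hcr/$split/hcr-$split-features.txt
     updown preproc-hcr data/hcr/$split/orig/hcr-$split.csv \
       src/main/resources/eng/dictionary/stoplist.txt > data/hcr/$split/hcr-$split-features.txt
   done
 }}}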
 
 To run LexRatio on the Stanford Sentiment dataset, use the following command from the UPDOWN_DIR directory:
 
 {{{
-$ updown lex-ratio -g data/stanford/stanford-features.txt -p
+$ updown lex-ratio -g data/stanford/stanford-features.txt -p \
     src/main/resources/eng/lexicon/subjclueslen1polar.tff 
 }}}
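 
 The same two flags should apply to any feature file produced by the preprocessing commands above; for example, this sketch runs LexRatio on the HCR dev features (assuming data/hcr/dev/hcr-dev-features.txt was generated as described earlier):
 
 {{{
 $ updown lex-ratio -g data/hcr/dev/hcr-dev-features.txt -p \
     src/main/resources/eng/lexicon/subjclueslen1polar.tff
 }}}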
 
 To run label propagation using [[http://code.google.com/p/junto/|Junto]]'s implementation of Modified Adsorption on the Stanford Sentiment dataset, use the following command:
 
 {{{
-$ updown 8 junto -g data/stanford/stanford-features.txt -m models/maxent-eng.mxm -p
-    src/main/resources/eng/lexicon/subjclueslen1polar.tff -f data/stanford/username-username-edges.txt -r
+$ updown 8 junto -g data/stanford/stanford-features.txt -m models/maxent-eng.mxm -p \
+    src/main/resources/eng/lexicon/subjclueslen1polar.tff -f data/stanford/username-username-edges.txt -r \
     src/main/resources/eng/model/ngramProbs.ser.gz 
 }}}
 
 By default, all five of these are included, i.e., adding "-e nfmoe" to the above command line would not change the output. To run on just the follower graph and EmoMaxent's predictions, for example, you would add "-e fm" to the command line, like so:
 
 {{{
-updown 8 junto -g data/stanford/stanford-features.txt -m models/maxent-eng.mxm -p
-    src/main/resources/eng/lexicon/subjclueslen1polar.tff -f data/stanford/username-username-edges.txt -r
+$ updown 8 junto -g data/stanford/stanford-features.txt -m models/maxent-eng.mxm -p \
+    src/main/resources/eng/lexicon/subjclueslen1polar.tff -f data/stanford/username-username-edges.txt -r \
     src/main/resources/eng/model/ngramProbs.ser.gz -e fm
 }}}
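 
 By the same convention, any subset of the five letters should be accepted; for instance, restricting propagation to the follower graph alone would presumably be (a sketch, not shown in the original commands):
 
 {{{
 $ updown 8 junto -g data/stanford/stanford-features.txt -m models/maxent-eng.mxm -p \
     src/main/resources/eng/lexicon/subjclueslen1polar.tff -f data/stanford/username-username-edges.txt -r \
     src/main/resources/eng/model/ngramProbs.ser.gz -e f
 }}}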
 
 Tweets in the HCR datasets are annotated for target as well as sentiment. To extract the list of targets for one of the HCR datasets (necessary for per-target evaluation), add a third argument, the target output filename, to the HCR preprocessing command before the '>'. For example, this will extract the targets from HCR-dev:
 
 {{{
-$ updown preproc-hcr data/hcr/dev/orig/hcr-dev.csv src/main/resources/eng/dictionary/stoplist.txt
+$ updown preproc-hcr data/hcr/dev/orig/hcr-dev.csv src/main/resources/eng/dictionary/stoplist.txt \
     data/hcr/dev/hcr-dev-targets.txt > data/hcr/dev/hcr-dev-features.txt
 }}}
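 
 To extract targets for all three HCR splits in one pass, the loop pattern from above carries over (a sketch; the train and test target filenames simply follow the dev naming pattern):
 
 {{{
 $ for split in train dev test; do
     # writes both the targets file and the features file for each split
     updown preproc-hcr data/hcr/$split/orig/hcr-$split.csv src/main/resources/eng/dictionary/stoplist.txt \
       data/hcr/$split/hcr-$split-targets.txt > data/hcr/$split/hcr-$split-features.txt
   done
 }}}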
 
 Whenever running the above experiments on an HCR dataset for which targets have been extracted, you can point to the appropriate target file with the -t flag and see a breakdown of results per target. For example, this command will run per-target evaluation on HCR-dev after performing the default label propagation:
 
 {{{
-$ updown 8 junto -g data/hcr/dev/hcr-dev-features.txt -m models/maxent-eng.mxm -p
-    src/main/resources/eng/lexicon/subjclueslen1polar.tff -f data/hcr/username-username-edges.txt -r
+$ updown 8 junto -g data/hcr/dev/hcr-dev-features.txt -m models/maxent-eng.mxm -p \
+    src/main/resources/eng/lexicon/subjclueslen1polar.tff -f data/hcr/username-username-edges.txt -r \
     src/main/resources/eng/model/ngramProbs.ser.gz -t data/hcr/dev/hcr-dev-targets.txt 
 }}}
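 
 The same per-target evaluation can presumably be run on the test split, assuming its targets have also been extracted (e.g. via the loop sketched above, which would produce data/hcr/test/hcr-test-targets.txt):
 
 {{{
 $ updown 8 junto -g data/hcr/test/hcr-test-features.txt -m models/maxent-eng.mxm -p \
     src/main/resources/eng/lexicon/subjclueslen1polar.tff -f data/hcr/username-username-edges.txt -r \
     src/main/resources/eng/model/ngramProbs.ser.gz -t data/hcr/test/hcr-test-targets.txt
 }}}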