Anonymous committed 1f1d54e

 
 This may take a few minutes. Ignore any warnings.
 
+== Unzipping the EmoMaxent Model ==
+
+Run the following command from {{{UPDOWN_DIR}}}:
+
+{{{
+$ gzip -d models/maxent-eng.mxm.gz
+}}}
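Note that {{{gzip -d}}} decompresses in place: it removes the {{{.gz}}} file and leaves {{{models/maxent-eng.mxm}}} in its stead. A self-contained dry run of the same behavior on a throwaway file (the {{{/tmp}}} path and dummy contents are purely illustrative):

```shell
# Illustrative dry run; the real model lives at models/maxent-eng.mxm.gz.
mkdir -p /tmp/updown-demo/models
printf 'dummy model bytes' | gzip > /tmp/updown-demo/models/maxent-eng.mxm.gz

# gzip -d removes the .gz file and leaves the decompressed file in its place.
gzip -d /tmp/updown-demo/models/maxent-eng.mxm.gz

# Confirm the decompressed model exists and is non-empty.
test -s /tmp/updown-demo/models/maxent-eng.mxm && echo "model ready"
```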
+
 == Preprocessing the Datasets ==
 
 === Stanford Sentiment (STS) ===
 
 === EmoMaxent ===
 
+To load the maximum entropy model, which was trained on roughly 2 million tweets containing positive and negative emoticons, and evaluate its per-tweet performance on the Stanford Sentiment dataset, run the following command:
+
+{{{
+$ updown per-tweet-eval -g data/stanford/stanford-features.txt -m models/maxent-eng.mxm
+}}}
+
+You should see the following output:
+{{{
+***** PER TWEET EVAL *****
+Accuracy: 0.8306011 (152.0/183)
+}}}
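The accuracy reported here is simply correct predictions divided by total tweets, i.e. the two numbers shown in parentheses. A quick check of the arithmetic:

```shell
# 152 correct out of 183 tweets, printed to the same precision as the eval output.
awk 'BEGIN { printf "%.7f\n", 152/183 }'
```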
+
+To run per-user evaluation rather than per-tweet evaluation, use the following command:
+
+{{{
+$ updown per-user-eval -g data/stanford/stanford-features.txt -m models/maxent-eng.mxm
+}}}
+
+You should see the following output:
+{{{
+***** PER USER EVAL *****
+Number of users evaluated: 0 (min of 3 tweets per user)
+Mean squared error: NaN
+}}}
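The {{{NaN}}} is expected on this dataset: no Stanford Sentiment user meets the minimum of 3 tweets, so the mean squared error is an average taken over zero users, i.e. a 0/0. A minimal illustration of the same effect (this awk one-liner is a stand-in for the averaging logic, not updown's actual code):

```shell
# Averaging squared errors over an empty stream: sum = 0 and n = 0,
# so the mean is undefined, just like the per-user MSE with zero users.
printf '' | awk '{ se += ($1 - $2)^2; n++ } END { if (n == 0) print "NaN"; else printf "%.4f\n", se / n }'
```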
+
+Point the {{{-g}}} flag to other preprocessed feature files to run EmoMaxent on other datasets. (Per-user evaluation makes the most sense on the HCR datasets.)