Commits

conn...@dirk.w3.org  committed f3f045e

factored blog article material out; moved documentation of trx structure to module docstring

  • Parent commits c994dbf

Files changed (2)

File blog-trxtsv.html

+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+  <title>Getting my data back from Quicken</title>
+</head>
+<body>
+<p>The Quicken Interchange Format (<a href="http://en.wikipedia.org/wiki/Quicken_Interchange_Format">QIF</a>) is notoriously inadequate for
+clean import/export. The <a href="https://quicken.custhelp.com/cgi-bin/quicken.cfg/php/enduser/std_adp.php?p_faqid=774&amp;p_created=1129160880&amp;p_sid=Lr_SmM1i&amp;p_lva=&amp;p_sp=cF9zcmNoPTEmcF9zb3J0X2J5PSZwX2dyaWRzb3J0PSZwX3Jvd19jbnQ9OSZwX3Byb2RzPTk1LDExNCZwX2NhdHM9JnBfcHY9Mi4xMTQmcF9jdj0mcF9wYWdlPTEmcF9zZWFyY2hfdGV4dD1jb252ZXJ0IHdpbmRvd3M*&amp;p_li=&amp;p_topview=1#Import">instructions</a> for migrating <a href="http://quicken.intuit.com/">Quicken</a> data
+across platforms say:</p>
+
+<ol>
+  <li>From the old platform, dump it out as QIF</li>
+  <li>On the new platform, read in the QIF data</li>
+  <li>After importing the file, verify that account balances in your
+     new Quicken for Mac 2004 data file are the same as those in
+     Quicken for Windows. If they don't match, look for duplicate or
+     missing transactions.</li>
+</ol>
+<p>I have not migrated my data from Windows 98 to OS X because of this
+mess.  I use Win4Lin on my Debian Linux box as life-support for
+Quicken 2001.</p>
+
+<p>Meanwhile, Quicken supports printing any report to a tab-separated
+file, and I found that an exhaustive transaction report represents
+transfers unambiguously. Since October 2000, when my testing showed
+that I could re-create various balances and reports from these
+tab-separated reports, I have been maintaining a CVS history of
+my exported Quicken data, splitting it every few years:</p>
+
+<pre>
+   $ wc *qtrx.txt
+    4785   38141  276520 1990-1996qtrx.txt
+    6193   61973  432107 1997-1999qtrx.txt
+    4307   46419  335592 2000qtrx.txt
+    5063   54562  396610 2002qtrx.txt
+    5748   59941  437710 2004qtrx.txt
+   26096  261036 1878539 total
+</pre>
+
+<p>I switched from CVS to <a
+href="http://www.selenic.com/mercurial/">mercurial</a> a few months
+ago, carrying the history over. I seem to have 189 commits/changesets,
+of which 154 are on the qtrx files (others are on the makefile and
+related scripts). So that's about one commit every two weeks.</p>
+</body>
+</html>

File trxtsv.py

 trxtsv -- read quicken transaction reports
 ==========================================
 
-The Quicken Interchange Format (QIF_) is notoriously inadequate for
-clean import/export. The instructions_ for migrating Quicken_ data
-across platforms say:
-
-  1. From the old platform, dump it out as QIF
-  2. On the new platform, read in the QIF data
-  3. After importing the file, verify that account balances in your
-     new Quicken for Mac 2004 data file are the same as those in
-     Quicken for Windows. If they don't match, look for duplicate or
-     missing transactions.
-
-I have not migrated my data from Windows98 to OS X because of this
-mess.  I use win4lin on my debian linux box as life-support for
-Quicken 2001.
-
-Meanwhile, Quicken supports printing any report to a tab-separated
-file, and I found that an exhaustive transaction report represents
-transfers unambiguously. Since October 2000, when my testing showed
-that I could re-create various balances and reports from these
-tab-separated reports, I have been maintaining a CVS history of
-my exported Quicken data, splitting it every few years::
-
-   $ wc *qtrx.txt
-    4785   38141  276520 1990-1996qtrx.txt
-    6193   61973  432107 1997-1999qtrx.txt
-    4307   46419  335592 2000qtrx.txt
-    5063   54562  396610 2002qtrx.txt
-    5748   59941  437710 2004qtrx.txt
-   26096  261036 1878539 total
-
-I switched from CVS to mercurial_ a few months ago, carrying the
-history over. I seem to have 189 commits/changesets, of which
-154 are on the qtrx files (others are on the makefile and
-related scripts). So that's about one commit every two weeks.
-
-
-.. _QIF: http://en.wikipedia.org/wiki/Quicken_Interchange_Format
-.. _instructions: https://quicken.custhelp.com/cgi-bin/quicken.cfg/php/enduser/std_adp.php?p_faqid=774&p_created=1129160880&p_sid=Lr_SmM1i&p_lva=&p_sp=cF9zcmNoPTEmcF9zb3J0X2J5PSZwX2dyaWRzb3J0PSZwX3Jvd19jbnQ9OSZwX3Byb2RzPTk1LDExNCZwX2NhdHM9JnBfcHY9Mi4xMTQmcF9jdj0mcF9wYWdlPTEmcF9zZWFyY2hfdGV4dD1jb252ZXJ0IHdpbmRvd3M*&p_li=&p_topview=1#Import
-.. _mercurial: http://www.selenic.com/mercurial/
-
 Usage
 -----
 
 The main methods are eachFile() and eachTrx().
 
+A transaction is a JSON_-like dict:
+
+  - trx
+
+    - date, payee, num, memo
+
+  - splits array
+
+    - cat, clr, subtot, memo
+        
+.. _JSON: http://www.json.org/
+
+See isoDate(), num(), and amt() for some of the field formats.
+
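+For example, assuming this module is importable as ``trxtsv`` (the
+helper name and the comma handling below are illustrative, not part
+of the module), a sketch of summing all split amounts in one report::
+
+  from trxtsv import readHeader, eachTrx
+
+  def totalSubtot(filename):
+      """Add up the subtot field of every split in one report file."""
+      lines = iter(open(filename))
+      readHeader(lines)              # skip the report header
+      total = 0.0
+      for t in eachTrx(lines, []):   # 2nd arg is an empty list, as in the doctests
+          for split in t['splits']:
+              # subtot is a string such as '-17.70'; see amt() for the format
+              total += float(split['subtot'].replace(',', ''))
+      return total
+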
 
 Future Work
 -----------
 
-  - support some SPARQL: date range, text matching (in progress)
+  - support database loading, a la normalizeQData.py
 
-    - the sink.transaction method is like a SPARQL describe hit
+  - investigate diff/patch, sync with DB; flag ambiguous transactions
 
-    - how about a python datastructure to mirror turtle?
+  - how about a Python data structure to mirror Turtle?
 
     - and how does JSON relate to python pickles?
 
-  - investigate diff/patch, sync with mysql DB; flag ambiguous transactions
 
 Colophon
 --------
     The transaction method gets called a la: sink.transaction(trxdata, splits).
     See eachTrx() for the structure of trxdata and splits.
 
+    The sink.transaction method is like a SPARQL describe hit.
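+
+    For example, a counting sink might look like this sketch (the
+    class name is illustrative, and the header() signature is a
+    placeholder, since header() is not yet documented)::
+
+        class CountingSink(object):
+            """Count the transactions handed to us."""
+            def __init__(self):
+                self.qty = 0
+
+            def startDoc(self):
+                self.qty = 0
+
+            def header(self, *args, **kwds):
+                pass  # signature not yet documented; see the TODO below
+
+            def transaction(self, trxdata, splits):
+                # called once per transaction, like a SPARQL describe hit
+                self.qty += 1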
+
     @@TODO: document header() method.
     """
     sink.startDoc()
     >>> d=iter(_TestLines); dummy=readHeader(d); t=eachTrx(d, []); len(list(t))
     11
 
-    A transaction is a JSON_-like dict:
-
-    - trx
-      - date, payee, num, memo
-    - splits array
-      - cat, clr, subtot, memo
-        
-    .. _JSON: http://www.json.org/
-
-    See isoDate(), num(), and amt() for some of the string formats.
-
     >>> d=iter(_TestLines); dummy=readHeader(d); t=eachTrx(d, []); \
     _sr(t.next())
     [('splits', [[('L', 'Home'), ('cat', 'Home'), ('clr', 'R'), ('subtot', '-17.70')]]), ('trx', [('acct', 'Texans Checks'), ('date', '1/7/94'), ('num', '1237'), ('payee', 'Albertsons')])]