Commits

dan mackinlay committed fdabea6

tidy up, tag and package for distribution

  • Parent commits 2292393
  • Tags 0.1.1dev

Files changed (6)

MANIFEST.in

 include README.rst
 include NEWS.txt
+include TODO.rst

NEWS.txt

 News
 ====
 
-0.2a1
------
-
-*Release date: UNRELEASED*
-
-* Example news entry for the in-development version
-
-
-0.1
+0.1dev
----
+------
 
-*Release date: 15-Mar-2010*
+*Release date: 16-Mar-2011*
 
-* Example news entry for a released version
+* Initial release of the code previously hosted at https://bitbucket.org/howthebodyworks/possumpalace_kit/src/23e924ba53e8/_lib/citeulike/
 
 
README.rst

  * `do JSON searches <http://www.citeulike.org/groupforum/2253>`_
   * `edit records by HTML form submission <http://www.citeulike.org/groupforum/700>`_
-  * `Edit URLs and JSON fields <http://www.citeulike.org/groupforum/2312>`_
+  * `Edit URLs and JSON fields <http://www.citeulike.org/groupforum/2312>`_
+
+Usage
+======
+
+Here is an example of how I use this from my pavement.py file to write out a
+BibTeX file of all records and download all PDFs::
+
+    import codecs
+    import os
+
+    from paver.easy import path
+
+    from citeulike.citeulike_api import CiteULike
+
+    def write_bibliography(options=None):
+        if options is None:
+            options = globals()['options']
+
+        outjsonpath = path(options.docroot)/options.bibtex_json_file
+        # the original snippet wrote to outbibtexpath without defining it;
+        # presumably it is configured much like the JSON path above
+        outbibtexpath = path(options.docroot)/options.bibtex_file
+        if os.path.isabs(options.attachment_path):
+            outpdfpath = path(options.attachment_path)
+        else:
+            outpdfpath = path(options.docroot)/options.attachment_path
+
+        cul = CiteULike(username=options.cul_username,
+                        password=options.cul_pass,
+                        json_cache=outjsonpath,
+                        attachment_path=outpdfpath)
+        cul.cache_records()
+        bibtex_string = cul.render('bibtex')
+        with codecs.open(outbibtexpath, 'w', 'utf-8') as bf:
+            bf.write(bibtex_string)
+
+TODO
+=====
+
+.. include:: TODO.rst

TODO.rst

+======
+TODOs
+======
+
+  * make the OpenMeta dependency optional (currently it restricts us to OS X
+    unnecessarily)
+  * test (ha!)
+  * document (ha!)
+  * parse success of edit and login operations
+  * CUL doesn't like more than one request every 5 seconds. Currently I use a
+    wait_for_api_limit() method to throttle connections, but this is error
+    prone (see the throttling sketch after this list)
+
+     * it might be cleaner to subclass mechanize and enforce it there, plus
+       include a backoff (see next)
+    
+  * The server is patchily available, at least from my ISP, so we should also
+    override the fetch methods to do retry with automatic backoff (also
+    sketched after this list), e.g.
+    
+    * https://gist.github.com/728327
+    * http://www.saltycrane.com/blog/2009/11/trying-out-retry-decorator-python/
+    
+  * downloaded PDFs should link back to their CUL page
+  * cache cookies to save a page load
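
For reference, the throttling idea in the list above might look something like
this minimal sketch (the 5-second interval comes from the TODO text; the class
and attribute names are illustrative, not the package's actual API)::

    import time

    class ThrottledClient(object):
        """Sketch: enforce a minimum interval between CUL requests."""
        min_interval = 5.0  # CUL tolerates about one request per 5 seconds
        _last_request = 0.0

        def wait_for_api_limit(self):
            # Sleep only for whatever remains of the minimum interval,
            # rather than a fixed 5 seconds on every call.
            elapsed = time.time() - self._last_request
            if elapsed < self.min_interval:
                time.sleep(self.min_interval - elapsed)
            self._last_request = time.time()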
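Similarly, the retry-with-backoff item (both links above are variations on the
same recipe) could be sketched as a decorator; the parameter defaults here are
assumptions, not measured values::

    import functools
    import time

    def retry(tries=4, delay=5, backoff=2, exceptions=(IOError,)):
        """Retry a flaky call, multiplying the wait after each failure."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                wait = delay
                for _ in range(tries - 1):
                    try:
                        return fn(*args, **kwargs)
                    except exceptions:
                        time.sleep(wait)
                        wait *= backoff
                return fn(*args, **kwargs)  # final attempt propagates errors
            return wrapper
        return decorator

Wrapping the fetch methods with ``retry()`` and calling
``wait_for_api_limit()`` before each request would combine both behaviours.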

setup.py

 NEWS = open(os.path.join(here, 'NEWS.txt')).read()
 
 
-version = '0.1'
+version = '0.1dev'
 
 install_requires = [
     "mechanize",
    description="a Python API for http://citeulike.org/",
     long_description=README + '\n\n' + NEWS,
     classifiers=[
-      # Get strings from http://pypi.python.org/pypi?%3Aaction=list_classifiers
+      'Development Status :: 3 - Alpha',
+      'Intended Audience :: Science/Research',
+      'License :: OSI Approved :: BSD License',
+      'Programming Language :: Python :: 2.6',
+      'Programming Language :: Python :: 2.7',
+      'Topic :: Internet :: WWW/HTTP :: Browsers',
     ],
     keywords='',
     author='Dan MacKinlay',
    package_dir={'': 'src'},
    include_package_data=True,
     zip_safe=False,
     install_requires=install_requires,
-    entry_points={
-        'console_scripts':
-            ['citeulike_api=citeulike_api:main']
-    }
 )

src/citeulike_api/TODO.rst

-======
-TODOs
-======
-
-  * test (ha!)
-  * document (ha!)
-  * package just the citeulike code into an easy_installable thing
-  * parse success of edit and login operations
-  * CUL doesn't like more than one request every 5 seconds. Currently i use a
-    wait_for_api_limit() method to throttle connections, but this is error prone
-     - it would be better to subclass mechanize and enforce it there, plus
-    include a backoff (see next)
-  * The server is patchily available, at least from my ISP, so we should also
-    override the fetch methods to do retry with automatic backoff, e.g.
-    * https://gist.github.com/728327
-    * http://www.saltycrane.com/blog/2009/11/trying-out-retry-decorator-python/
-  * downloaded PDFs should link back to their CUL page
-  * delete unused attachments. properly. as opposed to in code comments.
-  * cache cookies to save a page load