Commits

Yoshifumi YAMAGUCHI committed 5a63792

started reading pyhs

  • Participants
  • Parent commits 1802f73
  • Branches develop

Comments (0)

Files changed (33)

File notes/pyhs/.hgignore

+syntax: glob
+
+*.egg-info
+*.pyc
+*.swp
+*.swo
+*.egg
+*.orig
+*~
+
+dist
+build
+_build
+.idea

File notes/pyhs/.hgtags

+e1c583b3b35d07bbafd70e356fba48ec5469fcc5 0.1.0
+8e9edc85224fef87c7422185aa400aa7b7e2993f 0.2.0
+d63e73f450661dd18a5f3fe562ab572e4f9030e2 0.2.1
+609e77ab1577d65623453529abcb3251c0b3a5f1 0.2.2
+059855fe517736cd09e46acbbf06440c07f4c59f 0.2.3
+c27582847cf4cd7a66ced07304968bd6e79a3901 0.2.4

File notes/pyhs/AUTHORS

+Artem Gluvchynsky <excieve@gmail.com>
+Dmitry Chaplinsky

File notes/pyhs/LICENSE

+The MIT License
+
+Copyright (c) 2010 Artem Gluvchynsky
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.

File notes/pyhs/MANIFEST.in

+include README.rst
+include LICENSE
+include AUTHORS
+recursive-include docs *
+recursive-exclude docs/_build *

File notes/pyhs/README.rst

+====
+pyhs
+====
+
+Overview
+--------
+
+pyhs (python-handler-socket) is a Python client library for the
+`HandlerSocket <https://github.com/ahiguti/HandlerSocket-Plugin-for-MySQL/>`_
+MySQL plugin.
+
+Installation
+------------
+
+First, install MySQL and HandlerSocket. Some of the client's functionality
+depends on the latest revisions of the plugin, so keep it up to date.
+
+After that, get the distribution::
+    
+    pip install python-handler-socket
+
+Or get the package from the latest source::
+
+    pip install hg+http://bitbucket.org/excieve/pyhs#egg=python-handler-socket
+
+Or clone the main repository and install manually::
+
+    hg clone http://bitbucket.org/excieve/pyhs
+    cd pyhs
+    python setup.py install
+
+Check your installation like this::
+
+    python
+    >>> from pyhs import __version__
+    >>> print __version__
+
+Usage
+-----
+
+Use cases, details and the API reference are available
+in the ``docs`` directory inside the package or
+`online <http://python-handler-socket.readthedocs.org/>`_ on RTD.
+
+Changelog
+---------
+
+0.2.4
+~~~~~
+- Fixed an infinite loop caused by a remotely closed connection.
+- Fixed incorrect Unicode character escaping/unescaping in the C speedups.
+- Fixed an issue where indexes and caches might not be cleaned up on connection errors.
+- Somewhat refactored the error recovery code.
+
+0.2.3
+~~~~~
+- Fixed single-result, single-column responses. Fixes issue #1 for real now, I hope.
+
+0.2.2
+~~~~~
+- Fixed incorrect behavior with single-column responses.
+- Changed return value of ``find_modify`` calls with ``return_original=True`` to a list of rows of (field, value) tuples instead of a flat list of values.
+
+0.2.1
+~~~~~
+- Implemented optimised C versions of ``encode`` and ``decode``.
+- Modified installation script to include optional building of C speedups module.
+
+0.2.0
+~~~~~
+- Added "incr" and "decr" operations support to the ``find_modify`` call.
+- Added increment and decrement methods to the ``Manager`` class.
+- Added original value result for all ``find_modify`` operations.
+- Optimised query string encoding function.
+
+0.1.0
+~~~~~
+- Initial release.
+
+License
+-------
+
+| pyhs is released under MIT license.
+| Copyright (c) 2010 Artem Gluvchynsky <excieve@gmail.com>
+
+See ``LICENSE`` file inside the package for full licensing information.

File notes/pyhs/docs/Makefile

+# Makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS    =
+SPHINXBUILD   = sphinx-build
+PAPER         =
+BUILDDIR      = _build
+
+# Internal variables.
+PAPEROPT_a4     = -D latex_paper_size=a4
+PAPEROPT_letter = -D latex_paper_size=letter
+ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+
+.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest
+
+help:
+	@echo "Please use \`make <target>' where <target> is one of"
+	@echo "  html       to make standalone HTML files"
+	@echo "  dirhtml    to make HTML files named index.html in directories"
+	@echo "  singlehtml to make a single large HTML file"
+	@echo "  pickle     to make pickle files"
+	@echo "  json       to make JSON files"
+	@echo "  htmlhelp   to make HTML files and a HTML help project"
+	@echo "  qthelp     to make HTML files and a qthelp project"
+	@echo "  devhelp    to make HTML files and a Devhelp project"
+	@echo "  epub       to make an epub"
+	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
+	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
+	@echo "  text       to make text files"
+	@echo "  man        to make manual pages"
+	@echo "  changes    to make an overview of all changed/added/deprecated items"
+	@echo "  linkcheck  to check all external links for integrity"
+	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
+
+clean:
+	-rm -rf $(BUILDDIR)/*
+
+html:
+	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
+	@echo
+	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
+
+dirhtml:
+	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
+	@echo
+	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
+
+singlehtml:
+	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
+	@echo
+	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
+
+pickle:
+	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
+	@echo
+	@echo "Build finished; now you can process the pickle files."
+
+json:
+	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
+	@echo
+	@echo "Build finished; now you can process the JSON files."
+
+htmlhelp:
+	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
+	@echo
+	@echo "Build finished; now you can run HTML Help Workshop with the" \
+	      ".hhp project file in $(BUILDDIR)/htmlhelp."
+
+qthelp:
+	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
+	@echo
+	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
+	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
+	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/pyhs.qhcp"
+	@echo "To view the help file:"
+	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/pyhs.qhc"
+
+devhelp:
+	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
+	@echo
+	@echo "Build finished."
+	@echo "To view the help file:"
+	@echo "# mkdir -p $$HOME/.local/share/devhelp/pyhs"
+	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/pyhs"
+	@echo "# devhelp"
+
+epub:
+	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
+	@echo
+	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
+
+latex:
+	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+	@echo
+	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
+	@echo "Run \`make' in that directory to run these through (pdf)latex" \
+	      "(use \`make latexpdf' here to do that automatically)."
+
+latexpdf:
+	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+	@echo "Running LaTeX files through pdflatex..."
+	make -C $(BUILDDIR)/latex all-pdf
+	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
+
+text:
+	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
+	@echo
+	@echo "Build finished. The text files are in $(BUILDDIR)/text."
+
+man:
+	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
+	@echo
+	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
+
+changes:
+	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
+	@echo
+	@echo "The overview file is in $(BUILDDIR)/changes."
+
+linkcheck:
+	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
+	@echo
+	@echo "Link check complete; look for any errors in the above output " \
+	      "or in $(BUILDDIR)/linkcheck/output.txt."
+
+doctest:
+	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
+	@echo "Testing of doctests in the sources finished, look at the " \
+	      "results in $(BUILDDIR)/doctest/output.txt."

File notes/pyhs/docs/api/exceptions.rst

+:mod:`exceptions`
+=================
+.. automodule:: pyhs.exceptions
+    :members:

File notes/pyhs/docs/api/index.rst

+API
+===
+
+This is the pyhs reference documentation, autogenerated from the source
+code.
+
+.. toctree::
+
+    sockets
+    manager
+    exceptions

File notes/pyhs/docs/api/manager.rst

+:mod:`manager`
+==============
+.. automodule:: pyhs.manager
+
+    .. autoclass:: Manager
+        :members: get, purge
+
+        .. automethod:: find(db, table, operation, fields, values, index_name=None, limit=0, offset=0)
+        .. automethod:: insert(db, table, fields, index_name=None)
+        .. automethod:: update(db, table, operation, fields, values, update_values, index_name=None, limit=0, offset=0, return_original=False)
+        .. automethod:: incr(db, table, operation, fields, values, step=['1'], index_name=None, limit=0, offset=0, return_original=False)
+        .. automethod:: decr(db, table, operation, fields, values, step=['1'], index_name=None, limit=0, offset=0, return_original=False)
+        .. automethod:: delete(db, table, operation, fields, values, index_name=None, limit=0, offset=0, return_original=False)

File notes/pyhs/docs/api/sockets.rst

+:mod:`sockets`
+==============
+.. automodule:: pyhs.sockets
+    :members:

File notes/pyhs/docs/conf.py

+# -*- coding: utf-8 -*-
+#
+# pyhs documentation build configuration file, created by
+# sphinx-quickstart on Sun Nov 28 01:17:44 2010.
+#
+# This file is execfile()d with the current directory set to its containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys, os
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+sys.path.insert(0, os.path.abspath('..'))
+
+# -- General configuration -----------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be extensions
+# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
+extensions = ['sphinx.ext.autodoc']
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix of source filenames.
+source_suffix = '.rst'
+
+# The encoding of source files.
+#source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'pyhs'
+copyright = u'2010, Artem Gluvchynsky'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The full version, including alpha/beta/rc tags.
+release = __import__('pyhs').__version__
+# The short X.Y version.
+version = release[:release.rindex('.')]
+
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+exclude_patterns = ['_build']
+
+# The reST default role (used for this markup: `text`) to use for all documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# A list of ignored prefixes for module index sorting.
+#modindex_common_prefix = []
+
+
+# -- Options for HTML output ---------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages.  See the documentation for
+# a list of builtin themes.
+html_theme = 'nature'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further.  For a list of options available for each theme, see the
+# documentation.
+#html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+#html_theme_path = []
+
+# The name for this set of Sphinx documents.  If None, it defaults to
+# "<project> v<release> documentation".
+#html_title = None
+
+# A shorter title for the navigation bar.  Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+#html_logo = None
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+#html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+html_sidebars = {
+    '**': ['localtoc.html', 'relations.html', 'sourcelink.html']
+}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+html_domain_indices = False
+
+# If false, no index is generated.
+html_use_index = False
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+#html_show_sourcelink = True
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it.  The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = None
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'pyhsdoc'
+
+
+# -- Options for LaTeX output --------------------------------------------------
+
+# The paper size ('letter' or 'a4').
+#latex_paper_size = 'letter'
+
+# The font size ('10pt', '11pt' or '12pt').
+#latex_font_size = '10pt'
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title, author, documentclass [howto/manual]).
+latex_documents = [
+  ('index', 'pyhs.tex', u'pyhs Documentation',
+   u'Artem Gluvchynsky', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# If true, show page references after internal links.
+#latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#latex_show_urls = False
+
+# Additional stuff for the LaTeX preamble.
+#latex_preamble = ''
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_domain_indices = True
+
+
+# -- Options for manual page output --------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+    ('index', 'pyhs', u'pyhs Documentation',
+     [u'Artem Gluvchynsky'], 1)
+]
+
+primary_domain = 'py'
+autoclass_content = 'both'

File notes/pyhs/docs/index.rst

+.. pyhs documentation master file, created by
+   sphinx-quickstart on Sun Nov 28 01:17:44 2010.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+Welcome to pyhs
+===============
+
+pyhs is a pure Python client (with optional C speedups) for the `HandlerSocket <https://github.com/ahiguti/HandlerSocket-Plugin-for-MySQL>`_
+plugin to the MySQL database. In short, it provides access to the data while
+bypassing the SQL engine, through a NoSQL-like interface. This allows all simple
+operations (get, insert, update, delete) over indexed data to perform
+considerably faster than through the usual means.
+
+See `this <http://yoshinorimatsunobu.blogspot.com/2010/10/using-mysql-as-nosql-story-for.html>`_
+article for more details about HandlerSocket.
+
+This client supports both read and write operations but no batching at the moment.
+
+Go to :doc:`installation` and :doc:`usage` sections for quick start. There's also a
+:doc:`reference <api/index>` for all public interfaces.
+
+The project is open source and always available on Bitbucket:
+http://bitbucket.org/excieve/pyhs/
+
+
+Contents:
+
+.. toctree::
+   :maxdepth: 2
+
+   installation
+   usage
+   api/index

File notes/pyhs/docs/installation.rst

+Installation
+============
+
+HandlerSocket plugin
+--------------------
+
+First, you'll have to get this working. At the time of writing, the only way
+to do this is to get the source code, compile it and load it into the
+MySQL instance. Keep HandlerSocket up to date, as the client gets updated
+from time to time when new features or changes appear in the plugin.
+
+.. seealso::
+
+    `Installation guide <https://github.com/ahiguti/HandlerSocket-Plugin-for-MySQL/blob/master/docs-en/installation.en.txt>`_
+        HandlerSocket installation guide at the official repository.
+
+The Client
+----------
+
+At the moment you can install pyhs by using `pip <http://pip.openplans.org/>`_
+or easy_install, downloading it from PyPI, or getting the source directly from Bitbucket.
+
+Pip way
+~~~~~~~
+This is very simple, just run::
+
+    pip install python-handler-socket
+
+Or this to get the latest (not yet released on PyPI)::
+
+    pip install hg+http://bitbucket.org/excieve/pyhs#egg=python-handler-socket
+
+This command will install the package into your site-packages or dist-packages.
+
+Source
+~~~~~~
+Clone the source from the repository and install it::
+
+    hg clone http://bitbucket.org/excieve/pyhs
+    cd pyhs
+    python setup.py install
+
+By default, the additional C speedups are also built and installed (if possible).
+If they are not needed, use the ``--without-speedups`` option.
+
+Testing installation
+~~~~~~~~~~~~~~~~~~~~
+
+Check your installation by running this in Python interpreter::
+
+    from pyhs import __version__
+    print __version__
+
+This should show the currently installed version of pyhs.
+You're all set now.

File notes/pyhs/docs/make.bat

+@ECHO OFF
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+	set SPHINXBUILD=sphinx-build
+)
+set BUILDDIR=_build
+set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
+if NOT "%PAPER%" == "" (
+	set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
+)
+
+if "%1" == "" goto help
+
+if "%1" == "help" (
+	:help
+	echo.Please use `make ^<target^>` where ^<target^> is one of
+	echo.  html       to make standalone HTML files
+	echo.  dirhtml    to make HTML files named index.html in directories
+	echo.  singlehtml to make a single large HTML file
+	echo.  pickle     to make pickle files
+	echo.  json       to make JSON files
+	echo.  htmlhelp   to make HTML files and a HTML help project
+	echo.  qthelp     to make HTML files and a qthelp project
+	echo.  devhelp    to make HTML files and a Devhelp project
+	echo.  epub       to make an epub
+	echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
+	echo.  text       to make text files
+	echo.  man        to make manual pages
+	echo.  changes    to make an overview over all changed/added/deprecated items
+	echo.  linkcheck  to check all external links for integrity
+	echo.  doctest    to run all doctests embedded in the documentation if enabled
+	goto end
+)
+
+if "%1" == "clean" (
+	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
+	del /q /s %BUILDDIR%\*
+	goto end
+)
+
+if "%1" == "html" (
+	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
+	goto end
+)
+
+if "%1" == "dirhtml" (
+	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
+	goto end
+)
+
+if "%1" == "singlehtml" (
+	%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
+	goto end
+)
+
+if "%1" == "pickle" (
+	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished; now you can process the pickle files.
+	goto end
+)
+
+if "%1" == "json" (
+	%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished; now you can process the JSON files.
+	goto end
+)
+
+if "%1" == "htmlhelp" (
+	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished; now you can run HTML Help Workshop with the ^
+.hhp project file in %BUILDDIR%/htmlhelp.
+	goto end
+)
+
+if "%1" == "qthelp" (
+	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished; now you can run "qcollectiongenerator" with the ^
+.qhcp project file in %BUILDDIR%/qthelp, like this:
+	echo.^> qcollectiongenerator %BUILDDIR%\qthelp\pyhs.qhcp
+	echo.To view the help file:
+	echo.^> assistant -collectionFile %BUILDDIR%\qthelp\pyhs.qhc
+	goto end
+)
+
+if "%1" == "devhelp" (
+	%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished.
+	goto end
+)
+
+if "%1" == "epub" (
+	%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished. The epub file is in %BUILDDIR%/epub.
+	goto end
+)
+
+if "%1" == "latex" (
+	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
+	goto end
+)
+
+if "%1" == "text" (
+	%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished. The text files are in %BUILDDIR%/text.
+	goto end
+)
+
+if "%1" == "man" (
+	%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished. The manual pages are in %BUILDDIR%/man.
+	goto end
+)
+
+if "%1" == "changes" (
+	%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.The overview file is in %BUILDDIR%/changes.
+	goto end
+)
+
+if "%1" == "linkcheck" (
+	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Link check complete; look for any errors in the above output ^
+or in %BUILDDIR%/linkcheck/output.txt.
+	goto end
+)
+
+if "%1" == "doctest" (
+	%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Testing of doctests in the sources finished, look at the ^
+results in %BUILDDIR%/doctest/output.txt.
+	goto end
+)
+
+:end

File notes/pyhs/docs/usage.rst

+Usage
+=====
+
+Overview
+--------
+
+Once the package is correctly installed and the HandlerSocket plugin is loaded in
+your MySQL instance, you're ready to write some code.
+
+The client consists of two parts: *high level* and *low level*.
+
+In most cases you'll only need the high level part, which is handled by the
+:class:`.manager.Manager` class. It saves the developer from index id allocation
+and from managing the reader and writer server pools, providing a simple
+interface for all supported operations.
+
+You might want to use the low level interface when more control over these
+details is needed. This part is handled by :class:`.sockets.ReadSocket` and
+:class:`.sockets.WriteSocket` for the read and write server pools/operations respectively.
+They both subclass :class:`.sockets.HandlerSocket`, which defines the pool and
+common operations such as opening an index. There's also :class:`.sockets.Connection`,
+which handles low-level socket operations and is managed by the pool.
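
The layering described above can be sketched as a class hierarchy. The class names come from the docs; the bodies are placeholders for illustration, not pyhs's actual implementation:

```python
# Illustrative skeleton of the layering described above; method bodies
# are placeholders, not pyhs's real code.
class Connection:
    """Low-level socket wrapper, managed by a pool."""

class HandlerSocket:
    """Server pool plus operations common to reads and writes."""
    def get_index_id(self, db, table, columns):
        return 0  # placeholder: the real method opens an index on the server

class ReadSocket(HandlerSocket):
    """Read pool: find and friends."""

class WriteSocket(HandlerSocket):
    """Write pool: insert, update, delete and the modify operations."""

class Manager:
    """High level facade over one read pool and one write pool."""
    def __init__(self):
        self.reader, self.writer = ReadSocket(), WriteSocket()
```
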
+
+Usage examples
+--------------
+
+A few simple snippets of both low and high level usage to get started.
+
+High level
+~~~~~~~~~~
+
+This one initialises a HandlerSocket connection and inserts a row into a table::
+
+    from pyhs import Manager
+    from pyhs.exceptions import ConnectionError, OperationalError
+
+    # This will initialise both reader and writer connections to the default hosts
+    hs = Manager()
+
+    try:
+        # Insert a row into 'cars.trucks' table using default (primary) index
+        hs.insert('cars', 'trucks', [('id', '1'), ('company', 'Scania'), ('model', 'G400')])
+    except OperationalError, e:
+        print 'Could not insert because of "%s" error' % str(e)
+    except ConnectionError, e:
+        print 'Unable to perform operation due to a connection error. Original error: "%s"' % str(e)
+
+.. note::
+    Note how the data is passed: it is a list of field-value pairs. Make sure that
+    all values are strings.
+
+Now let's get that data back::
+
+    from pyhs import Manager
+    from pyhs.exceptions import ConnectionError, OperationalError
+
+    hs = Manager()
+
+    try:
+        data = hs.get('cars', 'trucks', ['id', 'company', 'model'], '1')
+        print dict(data)
+    except OperationalError, e:
+        print 'Could not get because of "%s" error' % str(e)
+    except ConnectionError, e:
+        print 'Unable to perform operation due to a connection error. Original error: "%s"' % str(e)
+
+.. note::
+    :meth:`~.manager.Manager.get` is a wrapper over :meth:`~.manager.Manager.find`.
+    It only fetches one row, looked up by a single comparison value, and uses only
+    the primary index for this. For more complex operations please use ``find``.
+    Make sure that the first field in the fields list is the one that is searched
+    by, and that the list is ordered the same way the fields appear in the index.
+
+    ``find`` and ``get`` return a list of field-value pairs as the result.
+
+A more complex ``find`` request with composite index and custom servers::
+
+    from pyhs import Manager
+    from pyhs.exceptions import ConnectionError, OperationalError
+
+    # When several hosts are available, client code will try to use both of them
+    # to balance the load and will retry requests in case of failure on one of them.
+    read_servers = [('inet', '1.1.1.1', 9998), ('inet', '2.2.2.2', 9998)]
+    write_servers = [('inet', '1.1.1.1', 9999), ('inet', '2.2.2.2', 9999)]
+    hs = Manager(read_servers, write_servers)
+
+    try:
+        # This will fetch a maximum of 10 rows with 'id' >= 1 and company >= 'Scania'.
+        # Unfortunately, HandlerSocket doesn't support multiple condition operations
+        # on a single request.
+        data = hs.find('cars', 'trucks', '>=', ['id', 'company', 'model'], ['1', 'Scania'], 'custom_index_name', 10)
+        # Return value is a list of rows, each of them is a list of (field, value) tuples.
+        print [dict(row) for row in data]
+    except OperationalError, e:
+        print 'Could not find because of "%s" error' % str(e)
+    except ConnectionError, e:
+        print 'Unable to perform operation due to a connection error. Original error: "%s"' % str(e)
+
+.. note::
+    Fields and condition values must be ordered the same way as they appear in
+    the index (in case it's composite). All fields that aren't in the index
+    may be listed in any order.
+
+    Another important thing is the ``limit`` parameter. When multiple results
+    are expected from the database, this must be set explicitly, as
+    HandlerSocket will **not** return all of them by default.
+
+A sample increment operation with the original value returned as the result (a similar one exists for decrement)::
+
+    from pyhs import Manager
+    from pyhs.exceptions import ConnectionError, OperationalError
+
+    hs = Manager()
+
+    try:
+        # "incr" increments a numeric value by defined step parameter. By default it is '1'.
+        original = hs.incr('cars', 'trucks', '=', ['id'], ['1'], return_original=True)
+        print original
+        # This will return ['1'] but the new value would be ['2']
+    except OperationalError, e:
+        print 'Could not find because of "%s" error' % str(e)
+    except ConnectionError, e:
+        print 'Unable to perform operation due to a connection error. Original error: "%s"' % str(e)
+
+Low level
+~~~~~~~~~
+
+A small overview of how to operate HandlerSocket directly.
+An opened index is required to perform any operation. To open one, use
+:meth:`.sockets.HandlerSocket.get_index_id`, which will open the index and
+return its ``id``.
+
+.. note::
+    Ids are cached internally by the client, which will return an existing id
+    (without opening a new index) when the same ``db``, ``table`` and list of
+    ``columns`` are passed.
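
The caching the note describes can be pictured with a small self-contained sketch (hypothetical code, not pyhs's actual implementation): a dictionary keyed by the ``(db, table, columns)`` combination hands out one id per distinct combination.

```python
# Hypothetical sketch of an index-id cache like the one described above;
# pyhs's real implementation lives in pyhs.sockets.HandlerSocket.
class IndexIdCache:
    def __init__(self):
        self._cache = {}
        self._next_id = 0

    def get_index_id(self, db, table, columns):
        # Key on the exact (db, table, columns) combination; lists aren't
        # hashable, so columns becomes a tuple.
        key = (db, table, tuple(columns))
        if key not in self._cache:
            # A real client would send an 'open_index' request here.
            self._cache[key] = self._next_id
            self._next_id += 1
        return self._cache[key]

cache = IndexIdCache()
first = cache.get_index_id('cars', 'trucks', ['id', 'company', 'model'])
second = cache.get_index_id('cars', 'trucks', ['id', 'company', 'model'])  # cache hit
third = cache.get_index_id('cars', 'trucks', ['id'])  # new combination
```
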
+
+This ``id`` must be used in all further operations that work with the same
+index and columns.
+There are two classes that perform the actual operations:
+:class:`.sockets.ReadSocket` for reads and :class:`.sockets.WriteSocket` for writes.
+
+An example::
+
+    from pyhs.sockets import ReadSocket
+    from pyhs.exceptions import ConnectionError, OperationalError
+
+    hs = ReadSocket([('inet', '127.0.0.1', 9998)])
+
+    try:
+        index_id = hs.get_index_id('cars', 'trucks', ['id', 'company', 'model'])
+        data = hs.find(index_id, '=', ['1'])
+        # Data will contain a list of results. Each result is a list of the row's values.
+        print data
+    except OperationalError, e:
+        print 'Could not find because of "%s" error' % str(e)
+    except ConnectionError, e:
+        print 'Unable to perform operation due to a connection error. Original error: "%s"' % str(e)
+
+Exception handling
+~~~~~~~~~~~~~~~~~~
+
+There are three exceptions that the client may raise:
+
+    :exc:`.exceptions.ConnectionError`
+        Something went wrong with the HandlerSocket connection and data could
+        not be sent or received. The actual reason is available in the exception
+        instance's first argument. Note that the client may retry the operation
+        when several hosts are defined.
+    :exc:`.exceptions.OperationalError`
+        Raised when HandlerSocket returns an error. The error code is present
+        in the exception instance.
+    :exc:`.exceptions.RecoverableConnectionError`
+        A ``ConnectionError`` that happened while performing an operation on an
+        already opened index and can be retried immediately. The high-level
+        client uses this to retry the whole operation when the failure is
+        correctable. Developers may want to handle it when using the low-level
+        client directly.
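Since ``RecoverableConnectionError`` subclasses ``ConnectionError`` (see ``pyhs/exceptions.py`` in this commit), the order of ``except`` clauses matters: the more specific class must be caught first. A minimal self-contained sketch of that pattern, with the classes re-declared here rather than imported from the library:

```python
# Stand-ins mirroring the hierarchy in pyhs.exceptions; re-declared so this
# sketch runs without the library installed.
class ConnectionError(Exception):
    """Socket connection problem."""

class OperationalError(Exception):
    """HandlerSocket returned an error code."""

class RecoverableConnectionError(ConnectionError):
    """Connection error that may be retried immediately."""

def classify(exc):
    """Show how a caller of the low-level client might branch on failures."""
    try:
        raise exc
    except RecoverableConnectionError:
        # Must precede the ConnectionError clause, its parent class.
        return 'retry'
    except ConnectionError:
        return 'give up'
    except OperationalError:
        return 'bad query'
```

Swapping the first two clauses would silently route recoverable failures into the generic ``ConnectionError`` branch, since a subclass instance matches its parent's ``except``.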
+
+
+.. seealso::
+
+    :doc:`API reference <api/index>`
+        Description of all public interfaces provided by both parts of the client

File notes/pyhs/pyhs/#sockets.py#

+import socket
+import threading
+import time
+import random
+from itertools import imap, chain
+
+try:
+    from _speedups import encode, decode
+except ImportError:
+    from utils import encode, decode
+from utils import check_columns
+from exceptions import *
+
+
+
+class Connection(object):
+    """Single HandlerSocket connection.
+
+    Maintains a streamed socket connection and defines methods to send and
+    read data from it.
+    In case of failure :attr:`~.retry_time` will be set to the exact time after
+    which the connection may be retried to deal with temporary connection issues.
+    """
+
+    UNIX_PROTO = 'unix'
+    INET_PROTO = 'inet'
+    DEFAULT_TIMEOUT = 3
+    RETRY_INTERVAL = 30
+
+    def __init__(self, protocol, host, port=None, timeout=None):
+        """
+        :param string protocol: socket protocol (*'unix'* and *'inet'* are supported).
+        :param string host: server host for *'inet'* protocol or socket file path for *'unix'*.
+        :param port: server port for *'inet'* protocol connection.
+        :type port: integer or None
+        :param timeout: timeout value for socket, default is defined in
+            :const:`.DEFAULT_TIMEOUT`.
+        :type timeout: integer or None
+        """
+        self.timeout = timeout or self.DEFAULT_TIMEOUT
+
+        self.host = host
+        if protocol == self.UNIX_PROTO:
+            self.protocol = socket.AF_UNIX
+            self.address = self.host
+        elif protocol == self.INET_PROTO:
+            self.protocol = socket.AF_INET
+            if not port:
+                raise ValueError('Port is not specified for TCP connection')
+            self.address = (self.host, port)
+        else:
+            raise ValueError('Unsupported protocol')
+
+        self.socket = None
+        self.retry_time = 0
+        self.debug = False
+
+    def set_debug_mode(self, mode):
+        """Changes debugging mode of the connection.
+        If enabled, some debugging info will be printed to stdout.
+
+        :param bool mode: mode value
+        """
+        self.debug = mode
+
+    def connect(self):
+        """Establishes connection with a new socket. If some socket is
+        associated with the instance - no new socket will be created.
+        """
+        if self.socket:
+            return
+
+        try:
+            sock = socket.socket(self.protocol, socket.SOCK_STREAM)
+            if self.protocol == socket.AF_INET:
+                # Disable the Nagle algorithm to improve latency (TCP only,
+                # the option is meaningless for unix sockets):
+                # http://developers.slashdot.org/comments.pl?sid=174457&threshold=1&commentsort=0&mode=thread&cid=14515105
+                sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+            sock.settimeout(self.timeout)
+            sock.connect(self.address)
+        except socket.error, e:
+            self._die(e, 'Connection error')
+
+        self.socket = sock
+
+    def _die(self, e, msg='Socket error'):
+        """Disconnects from the host and assigns failure retry time. Throws a
+        :exc:`~.exceptions.ConnectionError` exception with failure details.
+        This is a private method and is meant to be used for any connection
+        failures.
+
+        :param e: original exception that caused connection failure.
+        :type e: :exc:`socket.error`
+        :param msg: optional exception message to identify the operation that
+            was in progress (e.g. 'Read error').
+        :type msg: string or None
+        """
+        self.retry_time = time.time() + self.RETRY_INTERVAL
+        self.disconnect()
+
+        exmsg = len(e.args) == 1 and e.args[0] or e.args[1]
+        raise ConnectionError("%s: %s" % (msg, exmsg))
+
+    def is_ready(self):
+        """Checks if connection instance is ready to be used.
+
+        :rtype: bool
+        """
+        if self.retry_time and self.retry_time > time.time():
+            return False
+        self.retry_time = 0
+        return True
+
+    def disconnect(self):
+        """Closes a socket and disassociates it from the connection instance.
+
+        .. note:: It ignores any socket exceptions that might happen in process.
+        """
+        if self.socket:
+            try:
+                self.socket.close()
+            except socket.error:
+                pass
+            self.socket = None
+
+    def readline(self):
+        """Reads one line from the socket stream and returns it.
+        Lines are expected to be delimited with LF.
+        Throws :exc:`~.exceptions.ConnectionError` in case of failure.
+
+        :rtype: string
+
+        .. note:: Currently the Connection class supports only one line per
+           request/response. All data in the stream after the first LF is ignored.
+        """
+        buffer = ''
+        index = -1
+        while True:
+            index = buffer.find('\n')
+            if index >= 0:
+                break
+
+            try:
+                data = self.socket.recv(4096)
+                if self.debug:
+                    print "DEBUG: read data bucket: %s" % data
+                if not data:
+                    raise RecoverableConnectionError('Connection closed on the remote end.')
+            except socket.error, e:
+                self._die(e, 'Read error')
+
+            buffer += data
+
+        return buffer[:index]
+
+    def send(self, data):
+        """Sends all given data into the socket stream.
+        Throws :exc:`~.exceptions.ConnectionError` in case of failure.
+
+        :param string data: data to send
+        """
+        try:
+            self.socket.sendall(data)
+            if self.debug:
+                print "DEBUG: sent data: %s" % data
+        except socket.error, e:
+            self._die(e, 'Send error')
+
+
+class HandlerSocket(threading.local):
+    """Pool of HandlerSocket connections.
+
+    Manages connections and defines common HandlerSocket operations.
+    Uses internal index id cache.
+    Subclasses :class:`threading.local` to put connection pool and indexes data
+    in thread-local storage as they're not safe to share between threads.
+
+    .. warning::
+       Shouldn't be used directly in most cases.
+       Use :class:`~.ReadSocket` for read operations and :class:`~.WriteSocket` for
+       writes.
+    """
+
+    RETRY_LIMIT = 5
+    FIND_OPERATIONS = ('=', '>', '>=', '<', '<=')
+
+    def __init__(self, servers, debug=False):
+        """Pool constructor initializes connections for all given HandlerSocket servers.
+
+        :param iterable servers: a list of lists that define server data,
+            *format*: ``(protocol, host, port, timeout)``.
+            See :class:`~.Connection` for details.
+        :param bool debug: enable or disable debug mode, default is ``False``.
+        """
+        self.connections = []
+        for server in servers:
+            conn = Connection(*server)
+            conn.set_debug_mode(debug)
+            self.connections.append(conn)
+
+        self._clear_caches()
+
+    def _clear_caches(self):
+        """Clears index cache, connection map, index id counter and last cached
+        exception.
+        Private method.
+        """
+        self.index_map = {}
+        self.current_index_id = 0
+        self.index_cache = {}
+        self.last_connection_exception = None
+
+    def _get_connection(self, index_id=None, force_index=False):
+        """Returns active connection from the pool.
+
+        It will retry available connections in case of connection failure. Max
+        retry limit is defined in :const:`~.RETRY_LIMIT`.
+
+        In case of connection failure on all available servers, raises
+        :exc:`~.exceptions.ConnectionError`. If ``force_index`` is set, only the
+        connection that was used to open the given ``index_id`` is tried. If that
+        fails, raises :exc:`~.exceptions.RecoverableConnectionError`.
+
+        :param index_id: index id to look up connection for, if ``None`` (default)
+            or not found a new connection will be returned.
+        :type index_id: integer or None
+        :param bool force_index: if ``True`` will ensure that only a connection
+            that was used to open ``index id`` would be returned, will raise
+            :exc:`~.exceptions.OperationalError` otherwise.
+        :rtype: :class:`~.Connection` instance
+        """
+        connections = self.connections[:]
+        random.shuffle(connections)
+        # Look up index_id in index_map first - the same connection must be
+        # used for an opened index and for all operations on it
+        if index_id is not None and index_id in self.index_map:
+            conn = self.index_map[index_id]
+        else:
+            if force_index:
+                raise OperationalError('There is no connection with given index id "%d"' % index_id)
+            conn = connections.pop()
+
+        exception = lambda exc: ConnectionError('Could not connect to any of given servers: %s'\
+                                  % exc.args[0])
+        # Retry until either limit is reached or all connections tried
+        for i in range(max(self.RETRY_LIMIT, len(connections))):
+            try:
+                if conn.is_ready():
+                    conn.connect()
+                    break
+            except ConnectionError, e:
+                self.last_connection_exception = e
+                # In case indexed connection is forced remove it from the caches
+                # and raise exception so higher level code could retry whole operation
+                if force_index:
+                    self.purge_index(index_id)
+                    if connections:
+                        raise RecoverableConnectionError('Could not use connection with given index id "%d"' % index_id)
+                    else:
+                        # No point retrying if no more connections are available
+                        raise exception(self.last_connection_exception)
+            if connections:
+                conn = connections.pop()
+        else:
+            raise exception(self.last_connection_exception)
+
+        # If we have an index id, save a relation between it and a connection
+        if index_id is not None:
+            self.index_map[index_id] = conn
+        return conn
+
+    def _parse_response(self, raw_data):
+        """Parses HandlerSocket response data.
+        Returns a list of result rows which are lists of result columns.
+        Raises :exc:`~.exceptions.OperationalError` in case data contains
+        a HS error code.
+        Private method.
+
+        :param string raw_data: data string returned by HS server.
+        :rtype: list
+        """
+        tokens = raw_data.split('\t')
+        if not len(tokens) or int(tokens[0]) != 0:
+            error = 'Unknown remote error'
+            if len(tokens) > 2:
+                error = tokens[2]
+            raise OperationalError('HandlerSocket returned an error code: %s' % error)
+
+        columns = int(tokens[1])
+        decoded_tokens = imap(decode, tokens[2:])
+        # Divide response tokens list by number of columns
+        data = zip(*[decoded_tokens]*columns)
+
+        return data
+
+    def _open_index(self, index_id, db, table, fields, index_name):
+        """Calls open index query on HandlerSocket.
+        This is a required first operation for any read or write usage.
+        Private method.
+
+        :param integer index_id: id number that will be associated with opened index.
+        :param string db: database name.
+        :param string table: table name.
+        :param string fields: comma-separated list of the table's fields that
+            will be used in further operations. Fields that are part of the opened
+            index must be listed in the same order they are declared in the index.
+        :param string index_name: name of the index.
+        :rtype: list
+        """
+        encoded = imap(encode, (db, table, index_name, fields))
+        query = chain(('P', str(index_id)), encoded)
+
+        response = self._call(index_id, query)
+
+        return response
+
+    def get_index_id(self, db, table, fields, index_name=None):
+        """Returns index id for given index data. This id must be used in all
+        operations that use given data.
+
+        Uses internal index cache that keys index ids on a combination of:
+        ``db:table:index_name:fields``.
+        In case no index was found in the cache, a new index will be opened.
+
+        .. note:: ``fields`` is position-dependent, so changing the field order
+           will open a new index with another index id.
+
+        :param string db: database name.
+        :param string table: table name.
+        :param iterable fields: list of table's fields that would be used in further
+            operations. See :meth:`._open_index` for more info on fields order.
+        :param index_name: name of the index, default is ``PRIMARY``.
+        :type index_name: string or None
+        :rtype: integer or None
+        """
+        index_name = index_name or 'PRIMARY'
+        fields = ','.join(fields)
+        cache_key = ':'.join((db, table, index_name, fields))
+        index_id = self.index_cache.get(cache_key)
+        if index_id is not None:
+            return index_id
+
+        response = self._open_index(self.current_index_id, db, table, fields, index_name)
+        if response is not None:
+            index_id = self.current_index_id
+            self.index_cache[cache_key] = index_id
+            self.current_index_id += 1
+            return index_id
+
+        return None
+
+    def purge_indexes(self):
+        """Closes all indexed connections, cleans caches, zeroes index id counter.
+        """
+        for conn in self.index_map.values():
+            conn.disconnect()
+
+        self._clear_caches()
+
+    def purge(self):
+        """Closes all connections, cleans caches, zeroes index id counter."""
+        for conn in self.connections:
+            conn.disconnect()
+
+        self._clear_caches()
+
+    def purge_index(self, index_id):
+        """Clear single index connection and cache.
+
+        :param integer index_id: id of the index to purge.
+        """
+        del self.index_map[index_id]
+        for key, value in self.index_cache.items():
+            if value == index_id:
+                del self.index_cache[key]
+
+    def _call(self, index_id, query, force_index=False):
+        """Helper that performs actual data exchange with HandlerSocket server.
+        Returns parsed response data.
+
+        :param integer index_id: id of the index to operate on.
+        :param iterable query: list/iterable of tokens ready for sending.
+        :param bool force_index: pass ``True`` when operation requires connection
+            with given ``index_id`` to work. This is usually everything except
+            index opening. See :meth:`~._get_connection`.
+        :rtype: list
+        """
+        conn = self._get_connection(index_id, force_index)
+        try:
+            conn.send('\t'.join(query)+'\n')
+            response = self._parse_response(conn.readline())
+        except ConnectionError, e:
+            self.purge_index(index_id)
+            raise e
+
+        return response
+
+
+class ReadSocket(HandlerSocket):
+    """HandlerSocket client for read operations."""
+
+    def find(self, index_id, operation, columns, limit=0, offset=0):
+        """Finds row(s) via opened index.
+
+        Raises ``ValueError`` if given data doesn't validate.
+
+        :param integer index_id: id of opened index.
+        :param string operation: logical comparison operation to use over ``columns``.
+            Currently allowed operations are defined in :const:`~.FIND_OPERATIONS`.
+            Only one operation is allowed per call.
+        :param iterable columns: list of column values for comparison operation.
+            List must be ordered in the same way as columns are defined
+            in opened index.
+        :param integer limit: optional limit of results to return. Default is
+            one row. In case multiple results are expected, ``limit`` must be
+            set explicitly; HS won't return all found rows by default.
+        :param integer offset: optional offset of rows to search for.
+        :rtype: list
+        """
+        if operation not in self.FIND_OPERATIONS:
+            raise ValueError('Operation is not supported.')
+
+        if not check_columns(columns):
+            raise ValueError('Columns must be a non-empty iterable.')
+
+        query = chain(
+            (str(index_id), operation, str(len(columns))),
+            imap(encode, columns),
+            (str(limit), str(offset))
+        )
+
+        response = self._call(index_id, query, force_index=True)
+
+        return response
+
+
+class WriteSocket(HandlerSocket):
+    """HandlerSocket client for write operations."""
+
+    MODIFY_OPERATIONS = ('U', 'D', '+', '-', 'U?', 'D?', '+?', '-?')
+
+    def find_modify(self, index_id, operation, columns, modify_operation,
+                    modify_columns=[], limit=0, offset=0):
+        """Updates/deletes row(s) using opened index.
+
+        Returns the number of modified rows, or a list of original values when
+        ``modify_operation`` ends with ``?``.
+
+        Raises ``ValueError`` if given data doesn't validate.
+
+        :param integer index_id: id of opened index.
+        :param string operation: logical comparison operation to use over ``columns``.
+            Currently allowed operations are defined in :const:`~.FIND_OPERATIONS`.
+            Only one operation is allowed per call.
+        :param iterable columns: list of column values for comparison operation.
+            List must be ordered in the same way as columns are defined in
+            opened index.
+        :param string modify_operation: modification operation (update or delete).
+            Currently allowed operations are defined in :const:`~.MODIFY_OPERATIONS`.
+        :param iterable modify_columns: list of column values for update operation.
+            List must be ordered in the same way as columns are defined in
+            opened index. Only usable for *update* operations.
+        :param integer limit: optional limit of results to change. Default is
+            one row. In case multiple rows are expected to be changed, ``limit``
+            must be set explicitly; HS won't change all found rows by default.
+        :param integer offset: optional offset of rows to search for.
+        :rtype: list
+
+        """
+        if operation not in self.FIND_OPERATIONS \
+                or modify_operation not in self.MODIFY_OPERATIONS:
+            raise ValueError('Operation is not supported.')
+
+        if not check_columns(columns):
+            raise ValueError('Columns must be a non-empty iterable.')
+
+        if modify_operation in ('U', '+', '-', 'U?', '+?', '-?') \
+            and not check_columns(modify_columns):
+            raise ValueError('modify_columns must be a non-empty iterable for update operations.')
+
+        query = chain(
+            (str(index_id), operation, str(len(columns))),
+            imap(encode, columns),
+            (str(limit), str(offset), modify_operation),
+            imap(encode, modify_columns)
+        )
+
+        response = self._call(index_id, query, force_index=True)
+
+        return response
+
+    def insert(self, index_id, columns):
+        """Inserts single row using opened index.
+
+        Raises ``ValueError`` if given data doesn't validate.
+
+        :param integer index_id: id of opened index.
+        :param list columns: list of column values for insertion. List must be
+            ordered in the same way as columns are defined in opened index.
+        :rtype: bool
+        """
+        if not check_columns(columns):
+            raise ValueError('Columns must be a non-empty iterable.')
+
+        query = chain(
+            (str(index_id), '+', str(len(columns))),
+            imap(encode, columns)
+        )
+
+        self._call(index_id, query, force_index=True)
+
+        return True

File notes/pyhs/pyhs/.#sockets.py

+yoshifumi@yoshifumi-macbookpro.local.81265

File notes/pyhs/pyhs/__init__.py

+from manager import Manager
+
+__version__ = '0.2.4'

File notes/pyhs/pyhs/_speedups.c

+#include <Python.h>
+
+
+#define END_ENCODABLE_CHAR 0x0f
+#define END_ENCODED_CHAR 0x4f
+#define ENCODING_SHIFT 0x40
+#define ENCODING_PREFIX 0x01
+
+struct str_t {
+    unsigned int char_length;
+    const char *raw_end;
+};
+
+int init_string_data(PyObject *raw_data, struct str_t *string_data) {
+    if (PyUnicode_Check(raw_data)) {
+        string_data->char_length = sizeof(Py_UNICODE);
+        string_data->raw_end = (const char*)(PyUnicode_AS_UNICODE(raw_data) + PyUnicode_GET_SIZE(raw_data));
+    } else if (PyString_Check(raw_data)) {
+        string_data->char_length = 1;
+        string_data->raw_end = PyString_AS_STRING(raw_data) + PyString_GET_SIZE(raw_data);
+    } else {
+        return 0;
+    }
+
+    return 1;
+}
+
+char* get_string(PyObject *raw_data, struct str_t *string_data) {
+    if (string_data->char_length > 1) {
+        return (char*)PyUnicode_AS_UNICODE(raw_data);
+    } else {
+        return PyString_AS_STRING(raw_data);
+    }
+}
+
+unsigned int get_num_encoded_chars(PyObject *raw_data, struct str_t *string_data, short encode) {
+    unsigned int num_chars = 0;
+    char *str_raw = get_string(raw_data, string_data);
+
+    while (*str_raw || str_raw < string_data->raw_end) {
+        if ((encode && *str_raw >= 0 && *str_raw <= END_ENCODABLE_CHAR) ||
+                (!encode && *str_raw == ENCODING_PREFIX 
+                 && *(str_raw+string_data->char_length) >= ENCODING_SHIFT
+                 && *(str_raw+string_data->char_length) <= END_ENCODED_CHAR)) {
+            num_chars++;
+        }
+        str_raw += string_data->char_length;
+    }
+
+    return num_chars;
+}
+
+void copy_ending(PyObject *raw_data, char *target, char *source, struct str_t *string_data) {
+    if (source < string_data->raw_end) {
+        Py_ssize_t diff;
+        if (string_data->char_length > 1) {
+            diff = PyUnicode_GET_DATA_SIZE(raw_data) - (source - (char*)PyUnicode_AS_UNICODE(raw_data));
+        } else {
+            diff = PyString_GET_SIZE(raw_data) - (source - PyString_AS_STRING(raw_data));
+        }
+        Py_MEMCPY(target, source, diff);
+    }
+}
+
+static PyObject* encode(PyObject *self, PyObject *raw_data) {
+    PyObject *encoded;
+    unsigned int num_chars = 0;
+    struct str_t string_data;
+    char *str_raw;
+    char *str_enc;
+
+    if (!init_string_data(raw_data, &string_data)) {
+        return NULL;
+    }
+
+    num_chars = get_num_encoded_chars(raw_data, &string_data, 1);
+
+    if (!num_chars) {
+        Py_INCREF(raw_data);
+        return raw_data;
+    }
+
+    if (string_data.char_length > 1) {
+        encoded = PyUnicode_FromUnicode(NULL, PyUnicode_GET_SIZE(raw_data) + num_chars);
+    } else {
+        encoded = PyString_FromStringAndSize(NULL, PyString_GET_SIZE(raw_data) + num_chars);
+    }
+    if (!encoded) {
+        return NULL;
+    }
+    str_raw = get_string(raw_data, &string_data);
+    str_enc = get_string(encoded, &string_data);
+
+    while (num_chars--) {
+        char *next = str_raw;
+        while (next < string_data.raw_end) {
+            if (*next >= 0 && *next <= END_ENCODABLE_CHAR) {
+                break;
+            }
+            next += string_data.char_length;
+        }
+
+        if (next > str_raw) {
+            Py_MEMCPY(str_enc, str_raw, next - str_raw);
+            str_enc += next - str_raw;
+        }
+
+        if (string_data.char_length > 1) {
+            int i;
+            for (i = 0; i < 2*string_data.char_length; i++) {
+                str_enc[i] = 0;
+            }
+        }
+        str_enc[0] = ENCODING_PREFIX;
+        str_enc[string_data.char_length] = (*next) | ENCODING_SHIFT;
+        str_enc += 2*string_data.char_length;
+
+        str_raw = next + string_data.char_length;
+    }
+
+    copy_ending(raw_data, str_enc, str_raw, &string_data);
+
+    return encoded;
+}
+
+static PyObject* decode(PyObject *self, PyObject *raw_data) {
+    PyObject *decoded;
+    unsigned int num_chars = 0;
+    struct str_t string_data;
+    char *str_raw;
+    char *str_dec;
+
+    if (!init_string_data(raw_data, &string_data)) {
+        return NULL;
+    }
+
+    num_chars = get_num_encoded_chars(raw_data, &string_data, 0);
+
+    if (!num_chars) {
+        Py_INCREF(raw_data);
+        return raw_data;
+    }
+
+    if (string_data.char_length > 1) {
+        decoded = PyUnicode_FromUnicode(NULL, PyUnicode_GET_SIZE(raw_data) - num_chars);
+    } else {
+        decoded = PyString_FromStringAndSize(NULL, PyString_GET_SIZE(raw_data) - num_chars);
+    }
+    if (!decoded) {
+        return NULL;
+    }
+    str_raw = get_string(raw_data, &string_data);
+    str_dec = get_string(decoded, &string_data);
+
+    while (num_chars--) {
+        char *next = str_raw;
+        while (next < string_data.raw_end) {
+            if (*next == ENCODING_PREFIX && 
+                *(next+string_data.char_length) >= ENCODING_SHIFT &&
+                *(next+string_data.char_length) <= END_ENCODED_CHAR) {
+                break;
+            }
+            next += string_data.char_length;
+        }
+
+        if (next > str_raw) {
+            Py_MEMCPY(str_dec, str_raw, next - str_raw);
+            str_dec += next - str_raw;
+        }
+
+        if (string_data.char_length > 1) {
+            int i;
+            for (i = 0; i < string_data.char_length; i++) {
+                str_dec[i] = 0;
+            }
+        }
+        str_dec[0] = (*(next+string_data.char_length)) ^ ENCODING_SHIFT;
+        str_dec += string_data.char_length;
+
+        str_raw = next + 2*string_data.char_length;
+    }
+
+    copy_ending(raw_data, str_dec, str_raw, &string_data);
+
+    return decoded;
+}
+
+
+static PyMethodDef module_methods[] = {
+    {"encode", encode, METH_O, "Encodes the string according to the HS protocol"},
+    {"decode", decode, METH_O, "Decodes the string according to the HS protocol"},
+    {NULL, NULL, 0, NULL}
+};
+
+PyMODINIT_FUNC init_speedups(void) {
+    (void)Py_InitModule("_speedups", module_methods);
+}
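The extension above implements the HandlerSocket wire escaping that the pure-Python ``encode``/``decode`` fallbacks in ``utils`` also provide: bytes ``0x00`` to ``0x0f`` are written as the prefix byte ``0x01`` followed by the original byte OR-ed with ``0x40``, and decoding reverses the shift with XOR. A rough pure-Python sketch of the same scheme (illustrative function names, not the actual ``utils`` API):

```python
def hs_encode(s):
    """Escape control bytes 0x00-0x0f as 0x01 followed by (byte | 0x40)."""
    out = []
    for ch in s:
        if ord(ch) <= 0x0f:
            out.append('\x01')
            out.append(chr(ord(ch) | 0x40))
        else:
            out.append(ch)
    return ''.join(out)

def hs_decode(s):
    """Reverse hs_encode: 0x01 followed by 0x40-0x4f decodes via XOR 0x40."""
    out = []
    i = 0
    while i < len(s):
        if s[i] == '\x01' and i + 1 < len(s) and 0x40 <= ord(s[i + 1]) <= 0x4f:
            out.append(chr(ord(s[i + 1]) ^ 0x40))
            i += 2
        else:
            out.append(s[i])
            i += 1
    return ''.join(out)
```

For example, a tab (``0x09``) inside a value is sent as ``'\x01I'`` since ``0x09 | 0x40 == 0x49``, which keeps the tab that separates protocol tokens unambiguous.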

File notes/pyhs/pyhs/exceptions.py

+"""Exceptions used with HandlerSocket client."""
+
+class ConnectionError(Exception):
+    """Raised on socket connection problems."""
+    pass
+
+class OperationalError(Exception):
+    """Raised on client operation errors."""
+    pass
+
+class RecoverableConnectionError(ConnectionError):
+    """Raised on socket connection errors that can be attempted to recover instantly."""
+    pass

File notes/pyhs/pyhs/manager.py

+import sockets
+from utils import retry_on_failure
+
+
+class Manager(object):
+    """High-level client for HandlerSocket.
+
+    This should be used in most cases, except those where you need fine-grained
+    control over index management, low-level operations, etc.
+    For such cases :class:`~.sockets.ReadSocket` and :class:`~.sockets.WriteSocket`
+    can be used.
+    """
+
+    def __init__(self, read_servers=None, write_servers=None, debug=False):
+        """Constructor initializes both read and write sockets.
+
+        :param read_servers: list of tuples that define HandlerSocket read
+            instances. See format in :class:`~.HandlerSocket` constructor.
+        :type read_servers: list of tuples or None
+        :param write_servers: list of tuples that define HandlerSocket write
+            instances. Format is the same as in ``read_servers``.
+        :type write_servers: list of tuples or None
+        :param bool debug: enable debug mode by passing ``True``.
+        """
+        read_servers = read_servers or [('inet', 'localhost', 9998)]
+        write_servers = write_servers or [('inet', 'localhost', 9999)]
+        self.read_socket = sockets.ReadSocket(read_servers, debug)
+        self.write_socket = sockets.WriteSocket(write_servers, debug)
+
+    def get(self, db, table, fields, value):
+        """A wrapper over :meth:`~.find` that gets a single row with
+        a single field look up.
+
+        Returns a list of pairs. First item in pair is field name, second is
+        its value.
+
+        If multiple result rows, different comparison operation or
+        composite indexes are needed please use :meth:`~.find` instead.
+
+        :param string db: database name.
+        :param string table: table name.
+        :param list fields: list of table's fields to get, ordered by inclusion
+            into the index. The first item must always be the lookup field.
+        :param string value: a look up value.
+        :rtype: list of tuples
+        """
+        data = self.find(db, table, '=', fields, [str(value)])
+        if data:
+            data = data[0]
+
+        return data
+
+    @retry_on_failure
+    def find(self, db, table, operation, fields, values, index_name=None, limit=0, offset=0):
+        """Finds rows that meet ``values`` with comparison ``operation``
+        in given ``db`` and ``table``.
+
+        Returns a list of lists of pairs. The first item of each pair is the
+        field name, the second is its value.
+        For example, if two rows with two columns each are returned::
+        
+          [[('field', 'first_row_value'), ('otherfield', 'first_row_othervalue')],
+           [('field', 'second_row_value'), ('otherfield', 'second_row_othervalue')]]
+
+        :param string db: database name
+        :param string table: table name
+        :param string operation: logical comparison operation to use over ``columns``.
+            Currently allowed operations are defined in
+            :const:`~.sockets.HandlerSocket.FIND_OPERATIONS`. Only one operation
+            is allowed per call.
+        :param list fields: list of table's fields to get, ordered by inclusion
+            into the index.
+        :param list values: values to compare to, ordered the same way as items
+            in ``fields``.
+        :param index_name: name of the index to open, default is ``PRIMARY``.
+        :type index_name: string or None
+        :param integer limit: optional limit of results. Default is one row.
+            In case multiple rows are expected to be returned, ``limit`` must be
+            set explicitly; HS won't get all found rows by default.
+        :param integer offset: optional offset of rows to search for.
+        :rtype: list of lists of tuples
+        """
+        index_id = self.read_socket.get_index_id(db, table, fields, index_name)
+        data = self.read_socket.find(index_id, operation, values, limit, offset)
+
+        if data:
+            data = [zip(fields, row) for row in data]
+
+        return data
+
+    @retry_on_failure
+    def insert(self, db, table, fields, index_name=None):
+        """Inserts a single row into given ``table``.
+
+        :param string db: database name.
+        :param string table: table name.
+        :param fields: list of (column, value) pairs to insert into the ``table``.
+        :type fields: list of lists
+        :param index_name: name of the index to open, default is ``PRIMARY``.
+        :type index_name: string or None
+        :rtype: bool
+        """
+        keys, values = zip(*fields)
+        index_id = self.write_socket.get_index_id(db, table, keys, index_name)
+        data = self.write_socket.insert(index_id, values)
+
+        return data
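The first line of ``insert`` splits the ``(column, value)`` pairs into parallel key and value sequences with ``zip(*fields)``. A standalone sketch with hypothetical column names:

```python
# Hypothetical (column, value) pairs, as passed to insert().
fields = [('id', '1'), ('name', 'kate')]

# zip(*fields) transposes the pairs into two parallel tuples:
# one of column names, one of values.
keys, values = zip(*fields)
```

``keys`` then feeds ``get_index_id`` and ``values`` feeds the socket's ``insert`` call.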
+
+    @retry_on_failure
+    def update(self, db, table, operation, fields, values, update_values,
+               index_name=None, limit=0, offset=0, return_original=False):
+        """Updates row(s) that meet the conditions defined by ``operation``,
+        ``fields`` and ``values`` in a given ``table``.
+
+        :param string db: database name
+        :param string table: table name
+        :param string operation: logical comparison operation to use over ``fields``.
+            Currently allowed operations are defined in
+            :const:`~.sockets.HandlerSocket.FIND_OPERATIONS`. Only one operation
+            is allowed per call.
+        :param list fields: list of table's fields to use, ordered by inclusion
+            into the index.
+        :param list values: values to compare to, ordered the same way as items
+            in ``fields``.
+        :param list update_values: values to update, ordered the same way as items
+            in ``fields``.
+        :param index_name: name of the index to open, default is ``PRIMARY``.
+        :type index_name: string or None
+        :param integer limit: optional limit of rows. Default is one row.
+            If multiple rows are expected to be updated, ``limit`` must be
+            set explicitly; HandlerSocket won't update all found rows by default.
+        :param integer offset: optional offset of rows to search for.
+        :param bool return_original: if set to ``True``, the method returns a
+            list of the original values in the affected rows. Otherwise it
+            returns the number of affected rows (the default behaviour).
+        :rtype: int or list
+        """
+        index_id = self.write_socket.get_index_id(db, table, fields, index_name)
+        op = 'U' + (return_original and '?' or '')
+        data = self.write_socket.find_modify(index_id, operation, values, op,
+                                             update_values, limit, offset)
+
+        if data:
+            data = return_original and [zip(fields, row) for row in data] \
+                or int(data[0][0])
+        return data
+
+    @retry_on_failure
+    def incr(self, db, table, operation, fields, values, step=['1'], index_name=None,
+               limit=0, offset=0, return_original=False):
+        """Increments row(s) that meet the conditions defined by ``operation``,
+        ``fields`` and ``values`` in a given ``table``.
+
+        :param string db: database name
+        :param string table: table name
+        :param string operation: logical comparison operation to use over ``fields``.
+            Currently allowed operations are defined in
+            :const:`~.sockets.HandlerSocket.FIND_OPERATIONS`. Only one operation
+            is allowed per call.
+        :param list fields: list of table's fields to use, ordered by inclusion
+            into the index.
+        :param list values: values to compare to, ordered the same way as items
+            in ``fields``.
+        :param list step: list of increment steps, ordered the same way as items
+            in ``fields``.
+        :param index_name: name of the index to open, default is ``PRIMARY``.
+        :type index_name: string or None
+        :param integer limit: optional limit of rows. Default is one row.
+            If multiple rows are expected to be updated, ``limit`` must be
+            set explicitly; HandlerSocket won't update all found rows by default.
+        :param integer offset: optional offset of rows to search for.
+        :param bool return_original: if set to ``True``, the method returns a
+            list of the original values in the affected rows. Otherwise it
+            returns the number of affected rows (the default behaviour).
+        :rtype: int or list
+        """
+        index_id = self.write_socket.get_index_id(db, table, fields, index_name)
+        op = '+' + (return_original and '?' or '')
+        data = self.write_socket.find_modify(index_id, operation, values, op,
+                                             step, limit, offset)
+
+        if data:
+            data = return_original and [zip(fields, row) for row in data] \
+                or int(data[0][0])
+        return data
+
+    @retry_on_failure
+    def decr(self, db, table, operation, fields, values, step=['1'], index_name=None,
+               limit=0, offset=0, return_original=False):
+        """Decrements row(s) that meet the conditions defined by ``operation``,
+        ``fields`` and ``values`` in a given ``table``.
+
+        :param string db: database name
+        :param string table: table name
+        :param string operation: logical comparison operation to use over ``fields``.
+            Currently allowed operations are defined in
+            :const:`~.sockets.HandlerSocket.FIND_OPERATIONS`. Only one operation
+            is allowed per call.
+        :param list fields: list of table's fields to use, ordered by inclusion
+            into the index.
+        :param list values: values to compare to, ordered the same way as items
+            in ``fields``.
+        :param list step: list of decrement steps, ordered the same way as items
+            in ``fields``.
+        :param index_name: name of the index to open, default is ``PRIMARY``.
+        :type index_name: string or None
+        :param integer limit: optional limit of rows. Default is one row.
+            If multiple rows are expected to be updated, ``limit`` must be
+            set explicitly; HandlerSocket won't update all found rows by default.
+        :param integer offset: optional offset of rows to search for.
+        :param bool return_original: if set to ``True``, the method returns a
+            list of the original values in the affected rows. Otherwise it
+            returns the number of affected rows (the default behaviour).
+        :rtype: int or list
+        """
+        index_id = self.write_socket.get_index_id(db, table, fields, index_name)
+        op = '-' + (return_original and '?' or '')
+        data = self.write_socket.find_modify(index_id, operation, values, op,
+                                             step, limit, offset)
+
+        if data:
+            data = return_original and [zip(fields, row) for row in data] \
+                or int(data[0][0])
+        return data
+
+    @retry_on_failure
+    def delete(self, db, table, operation, fields, values, index_name=None,
+               limit=0, offset=0, return_original=False):
+        """Deletes row(s) that meet the conditions defined by ``operation``,
+        ``fields`` and ``values`` in a given ``table``.
+
+        :param string db: database name
+        :param string table: table name
+        :param string operation: logical comparison operation to use over ``fields``.
+            Currently allowed operations are defined in
+            :const:`~.sockets.HandlerSocket.FIND_OPERATIONS`. Only one operation
+            is allowed per call.
+        :param list fields: list of table's fields to use, ordered by inclusion
+            into the index.
+        :param list values: values to compare to, ordered the same way as items
+            in ``fields``.
+        :param index_name: name of the index to open, default is ``PRIMARY``.
+        :type index_name: string or None
+        :param integer limit: optional limit of rows. Default is one row.
+            If multiple rows are expected to be deleted, ``limit`` must be
+            set explicitly; HandlerSocket won't delete all found rows by default.
+        :param integer offset: optional offset of rows to search for.
+        :param bool return_original: if set to ``True``, the method returns a
+            list of the original values in the affected rows. Otherwise it
+            returns the number of affected rows (the default behaviour).
+        :rtype: int or list
+        """
+        index_id = self.write_socket.get_index_id(db, table, fields, index_name)
+        op = 'D' + (return_original and '?' or '')
+        data = self.write_socket.find_modify(index_id, operation, values, op,
+                                             limit=limit, offset=offset)
+
+        if data:
+            data = return_original and [zip(fields, row) for row in data] \
+                or int(data[0][0])
+        return data
+
+    def purge(self):
+        """Purges all read and write connections.
+        Subsequent requests will open new connections, and index caches will
+        be cleared as well.
+        """
+        self.read_socket.purge()
+        self.write_socket.purge()
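The four modify methods above (``update``, ``incr``, ``decr``, ``delete``) all delegate to ``find_modify`` and differ only in the opcode they build: ``'U'``, ``'+'``, ``'-'`` or ``'D'``, with a trailing ``'?'`` appended when ``return_original`` is set. A minimal standalone sketch of that opcode construction (the helper name ``modify_op`` is invented for illustration; the library builds the string inline):

```python
def modify_op(base, return_original=False):
    # HandlerSocket modify opcodes used above: 'U' update,
    # '+' increment, '-' decrement, 'D' delete. A trailing '?'
    # asks the server to return the original row values instead
    # of the affected-row count.
    return base + ('?' if return_original else '')
```

For example, ``modify_op('U')`` yields the plain update opcode, while ``modify_op('+', True)`` yields the increment opcode that also returns the original values.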

File notes/pyhs/pyhs/sockets.py

+import socket
+import threading
+import time
+import random
+from itertools import imap, chain
+
+try:
+    from _speedups import encode, decode
+except ImportError:
+    from utils import encode, decode
+from utils import check_columns
+from exceptions import *
+
+
+
+class Connection(object):
+    """Single HandlerSocket connection.