Commits

Anonymous committed 0431580

import beta bullet.

Files changed (48)

+Bullet was originally created in 2010 by Felinx <felinx.lee@gmail.com>
+
+The PRIMARY AUTHORS are (and/or have been):
+
+    * Felinx <felinx.lee@gmail.com>
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+include README
+include AUTHORS
+include INSTALL
+include LICENSE
+include MANIFEST.in
+recursive-include docs *
+recursive-include demos *
+
+Bullet is an asynchronous web server that works well with WSGI frameworks.
+The code is still in beta; asynchronous push services across processes are
+not ready yet.
+
+I will make them ready soon, but I am a little busy now; I need to finish
+http://poweredsites.org first, so please be patient. Thanks!
+
+Bullet was originally created in July 2010 by Felinx <felinx.lee@gmail.com>,
+and open-sourced to the community under the Apache License.

bullet/__init__.py

+#-*- coding:utf-8 -*-
+#
+# Copyright(c) 2010 bulletweb.org
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+__version__ = "0.1"
+#-*- coding:utf-8 -*-
+#
+# Copyright(c) 2010 bulletweb.org
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Asynchronous Operations"""
+
+from bullet.core import Scheduler, IService, IAsyncOperation, \
+        AsyncDo, AsyncDoNowait, Timeout, IEvent
+
+
+# Aliases that make the operation classes read like plain functions
+def do(func, *args, **kw):
+    # return the value produced via func's callback
+    return AsyncDo(func, *args, **kw)()
+
+def do_nowait(func, *args, **kw):
+    AsyncDoNowait(func, *args, **kw)()
+
+def sleep(duration, *args, **kw):
+    _AsyncSleep(duration, *args, **kw)()
+
+def periodic_callback(interval, callback, *args, **kw):
+    _AsyncPeriodicCallback(interval, callback, *args, **kw)()
+
+def fire(event):
+    Scheduler.instance().fire(event)
+
+def wait(event):
+    return Scheduler.instance().wait(event)
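The aliases above wrap operation classes so that callers see a plain function. A minimal, self-contained sketch of that pattern (the `_AsyncDo` stand-in here is illustrative, not bullet's real scheduler-backed `AsyncDo`):

```python
# Sketch of the alias pattern: an operation class whose instances execute
# when called, wrapped so callers see an ordinary function.
# _AsyncDo is an illustrative stand-in, not bullet's real AsyncDo.
class _AsyncDo(object):
    def __init__(self, func, *args, **kw):
        self._func = func
        self._args = args
        self._kw = kw

    def __call__(self):
        # run the wrapped callable and hand back its result
        return self._func(*self._args, **self._kw)

def do(func, *args, **kw):
    # looks like a function, but builds and invokes an operation object
    return _AsyncDo(func, *args, **kw)()
```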
+
+
+class _AsyncSleep(IAsyncOperation):
+    """Sleep for `duration` seconds
+
+    An asynchronous operation indicating that the coroutine wants to
+    sleep for `duration` seconds, then resume running.
+
+    If no argument is passed, the coroutine will be called
+    again during the next iteration of the main loop.
+
+    """
+    def __init__(self, duration=0, *args, **kw):
+        super(_AsyncSleep, self).__init__(*args, **kw)
+        self._duration = duration
+        self._callback = None
+
+    def __call__(self):
+        self._scheduler.add_service(_TimerService(self._coroutine,
+                    self._duration, self._callback, *self._args, **self._kw))
+
+
+class _AsyncPeriodicCallback(IAsyncOperation):
+    """Periodic callback
+
+    Run a callback periodically and asynchronously.
+
+    """
+    def __init__(self, interval, callback, *args, **kw):
+        super(_AsyncPeriodicCallback, self).__init__(*args, **kw)
+        self._interval = interval
+        self._callback = callback
+
+    def __call__(self):
+        self._scheduler.add_service(_PeriodicCallbackService(self._coroutine,
+                    self._interval, self._callback, *self._args, **self._kw))
+
+
+class _TimerService(IService):
+    """Timer service: run a callback when the timeout fires"""
+    def __init__(self, coroutine, interval, callback=None, *args, **kw):
+        super(_TimerService, self).__init__()
+        kw["periodic"] = kw.get("periodic", False)
+        self.timeout = Timeout(coroutine, interval, callback, *args, **kw)
+
+    def start(self):
+        self._scheduler.add_timer(self.timeout)
+        self._scheduler.switch()
+
+    def stop(self):
+        self._scheduler.remove_timeout(self.timeout)
+
+
+class _PeriodicCallbackService(_TimerService):
+    """Periodic callback service"""
+    def __init__(self, coroutine, interval, callback, *args, **kw):
+        kw["periodic"] = True
+        super(_PeriodicCallbackService, self).__init__(coroutine, interval,
+                                                       callback, *args, **kw)
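The `periodic` handling above is a small pattern worth isolating: the base class defaults the flag via `kw.get`, and the subclass pins it to `True` before delegating. A stand-alone sketch (class names here are illustrative, not bullet's real timer classes):

```python
# Sketch of the kwarg-pinning pattern used by _PeriodicCallbackService:
# the base class defaults "periodic" to False, the subclass forces it True
# before delegating to the base constructor.
class TimerBase(object):
    def __init__(self, interval, **kw):
        self.periodic = kw.get("periodic", False)
        self.interval = interval

class PeriodicTimer(TimerBase):
    def __init__(self, interval, **kw):
        kw["periodic"] = True  # a periodic timer is always periodic
        super(PeriodicTimer, self).__init__(interval, **kw)
```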

bullet/conf/__init__.py

+#-*- coding:utf-8 -*-
+#
+# Copyright(c) 2010 bulletweb.org
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Initialize bullet web server settings"""
+
+import os
+
+from bullet.conf import defaults
+from bullet.utils.conf import load_settings
+
+__all__ = ("settings", "all", "reload")
+
+
+all = None
+settings = None
+
+
+def reload():
+    global all, settings
+
+    all = load_settings("BULLET_YAML", defaults, os.path.dirname(__file__))
+    settings = all.default
+
+reload()
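The `reload()` idiom above rebinds module-level globals so importers that read `conf.settings` always see freshly loaded values. A self-contained sketch with a hypothetical stand-in for `load_settings` (the real loader reads a YAML file named by `BULLET_YAML`):

```python
# Sketch of the module-level reload pattern; _load_settings is a
# hypothetical stand-in for load_settings("BULLET_YAML", defaults, ...).
from types import SimpleNamespace

all = None
settings = None

def _load_settings():
    # stand-in loader returning an object with a .default section
    return SimpleNamespace(
        default=SimpleNamespace(debug=True, host="localhost:8888"))

def reload():
    # rebind the module-level globals so later attribute lookups
    # on the imported module pick up the freshly loaded values
    global all, settings
    all = _load_settings()
    settings = all.default

reload()
```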

bullet/conf/bullet.yaml

+#-*- coding:utf-8 -*-
+#
+# Copyright(c) 2010 bulletweb.org
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+# Sample settings for the bullet web server
+#
+default:
+    debug: true
+    host: "localhost:8888"
+    error_email: felinx.lee@gmail.com
+    
+    push_server:
+        host: "localhost:8889"
+        auth_secret: "15c48c74-da9c-4306-9837-ea6b46a6d288"
+    
+    push_client:        
+        host: "localhost:8889"
+        auth_secret: "15c48c74-da9c-4306-9837-ea6b46a6d288"
+        
+    sites_enabled:
+        - bulletweborg
+    
+bulletweborg:
+    domain: .*\.bulletweb.org
+    urls:
+        - url: static\.bulletweb\.org/
+          static_dir: "/home/felinx/workspace/Bullet/bulletweb/public/static/"
+        
+        - url: /robots\.txt
+          static_file: "/home/felinx/workspace/Bullet/bulletweb/public/robots.txt"              
+        
+        - url: django.bulletweb.org/.*
+          application: bulletweborg.django.application
+          
+        - url: pylons.bulletweb.org/.*
+          application: bulletweborg.pylons.application
+             
+        - url: /.*
+          application: bulletweborg.application
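The `domain` and `url` values above look like regular expressions. Assuming they are matched with Python's `re` module against the request host and path (an assumption — the matching code is not part of this commit), a dispatch check might look like:

```python
import re

# Hypothetical dispatch: assumes the YAML "domain"/"url" fields are
# Python regexes matched against the request host and path.
SITE_DOMAIN = re.compile(r".*\.bulletweb\.org")
URL_RULES = [
    (re.compile(r"/robots\.txt"), "static_file"),
    (re.compile(r"/.*"), "application"),
]

def dispatch(host, path):
    # reject hosts outside the site's domain pattern
    if not SITE_DOMAIN.match(host):
        return None
    # first url rule whose pattern matches the path wins
    for pattern, target in URL_RULES:
        if pattern.match(path):
            return target
    return None
```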

bullet/conf/defaults.py

+#-*- coding:utf-8 -*-
+#
+# Copyright(c) 2010 bulletweb.org
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Bullet web server default settings"""
+
+import os
+
+try:
+    worker_processes = os.sysconf("SC_NPROCESSORS_CONF")
+except ValueError:
+    worker_processes = 2
+
+debug = False
+daemon = False
+
+host = "0.0.0.0:80"
+worker_connections = 1024
+max_http_size = 10 # megabytes
+default_type = "application/octet-stream"
+
+log_level = "warning" # debug, info, warning, error, critical
+
+keepalive_timeout = 65
+
+################################################################################
+# monitor settings
+monitor_interval = 30
+monitored_files = []
+unmonitored_filetypes = [".html.py", ".jpeg", ".jpg", ".gif", ".bmp", \
+                    ".png", ".css", ".js", ".txt", ".zip", ".rar", ".tar.gz"]
+
+autoreload_delay = 60
+################################################################################
+
+error_email = "webmaster@example.com"
+sites_enabled = []
+
+push_server = {}
+push_client = {}
+
+# gzip options
+gzip = True
+gzip_min_length = 2048
+gzip_comp_level = 2
+gzip_types = ["text/plain", "text/html", "text/css",
+              "application/x-javascript", "text/xml", "application/xml",
+              "application/xml+rss", "text/javascript"]
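The `worker_processes` probe near the top of this file relies on `os.sysconf`, which exists only on Unix and raises `ValueError` for unknown names. A slightly more defensive sketch of the same probe (the extra exception types are an addition, not in the original):

```python
import os

# Probe the configured processor count, falling back to 2 workers when
# os.sysconf is missing (non-Unix) or the name is unknown.
try:
    worker_processes = os.sysconf("SC_NPROCESSORS_CONF")
except (ValueError, AttributeError, OSError):
    worker_processes = 2
```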

bullet/contrib/LICENSE

+All modules in bullet.contrib are third-party modules from the open source
+community; the original authors remain the code owners. These modules were
+not developed by bulletweb, but we may make some modifications so they work
+better for the bullet project (e.g. changing a module's namespace).
+
+For more detailed license information, please refer to each module's header
+comments.

bullet/contrib/__init__.py

Empty file added.

bullet/contrib/importlib.py

+# License for code in this file that was taken from Python 2.7.
+
+# PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
+# --------------------------------------------
+#
+# 1. This LICENSE AGREEMENT is between the Python Software Foundation
+# ("PSF"), and the Individual or Organization ("Licensee") accessing and
+# otherwise using this software ("Python") in source or binary form and
+# its associated documentation.
+#
+# 2. Subject to the terms and conditions of this License Agreement, PSF
+# hereby grants Licensee a nonexclusive, royalty-free, world-wide
+# license to reproduce, analyze, test, perform and/or display publicly,
+# prepare derivative works, distribute, and otherwise use Python
+# alone or in any derivative version, provided, however, that PSF's
+# License Agreement and PSF's notice of copyright, i.e., "Copyright (c)
+# 2001, 2002, 2003, 2004, 2005, 2006, 2007 Python Software Foundation;
+# All Rights Reserved" are retained in Python alone or in any derivative
+# version prepared by Licensee.
+#
+# 3. In the event Licensee prepares a derivative work that is based on
+# or incorporates Python or any part thereof, and wants to make
+# the derivative work available to others as provided herein, then
+# Licensee hereby agrees to include in any such work a brief summary of
+# the changes made to Python.
+#
+# 4. PSF is making Python available to Licensee on an "AS IS"
+# basis.  PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
+# IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
+# DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
+# FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
+# INFRINGE ANY THIRD PARTY RIGHTS.
+#
+# 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
+# FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
+# A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
+# OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
+#
+# 6. This License Agreement will automatically terminate upon a material
+# breach of its terms and conditions.
+#
+# 7. Nothing in this License Agreement shall be deemed to create any
+# relationship of agency, partnership, or joint venture between PSF and
+# Licensee.  This License Agreement does not grant permission to use PSF
+# trademarks or trade name in a trademark sense to endorse or promote
+# products or services of Licensee, or any third party.
+#
+# 8. By copying, installing or otherwise using Python, Licensee
+# agrees to be bound by the terms and conditions of this License
+# Agreement.
+import sys
+
+def _resolve_name(name, package, level):
+    """Return the absolute name of the module to be imported."""
+    if not hasattr(package, 'rindex'):
+        raise ValueError("'package' not set to a string")
+    dot = len(package)
+    for x in xrange(level, 1, -1):
+        try:
+            dot = package.rindex('.', 0, dot)
+        except ValueError:
+            raise ValueError("attempted relative import beyond top-level "
+                              "package")
+    return "%s.%s" % (package[:dot], name)
+
+
+def import_module(name, package=None):
+    """Import a module.
+
+    The 'package' argument is required when performing a relative import. It
+    specifies the package to use as the anchor point from which to resolve the
+    relative import to an absolute import.
+
+    """
+    if name.startswith('.'):
+        if not package:
+            raise TypeError("relative imports require the 'package' argument")
+        level = 0
+        for character in name:
+            if character != '.':
+                break
+            level += 1
+        name = _resolve_name(name[level:], package, level)
+    __import__(name)
+    return sys.modules[name]
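The level arithmetic in `_resolve_name` can be seen in isolation. This stand-alone sketch (Python 3 syntax, without the original's error handling) shows how level 1 keeps the anchor package intact and each extra leading dot strips one trailing component:

```python
# Stand-alone sketch of _resolve_name's level arithmetic: level 1 keeps
# the anchor package as-is; each additional level walks one package
# component to the left before joining the relative name.
def resolve_name(name, package, level):
    dot = len(package)
    for _ in range(level, 1, -1):
        # drop one trailing component per extra leading dot
        dot = package.rindex('.', 0, dot)
    return "%s.%s" % (package[:dot], name)
```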

bullet/contrib/intpack.py

+# MySQL Connector/Python - MySQL driver written in Python.
+# Copyright (c) 2009,2010, Oracle and/or its affiliates. All rights reserved.
+# Use is subject to license terms. (See COPYING)
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation.
+# 
+# There are special exceptions to the terms and conditions of the GNU
+# General Public License as it is applied to this software. View the
+# full text of the exception in file EXCEPTIONS-CLIENT in the directory
+# of this software distribution or see the FOSS License Exception at
+# www.mysql.com.
+# 
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+# 
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""Utilities to pack and unpack integers
+
+Borrowed from mysql.connector.utils (myconnpy)
+"""
+
+
+import struct
+
+def intread(b):
+    """Unpacks the given buffer to an integer"""
+    if isinstance(b, int):
+        return b
+    l = len(b)
+    if l == 1:
+        return int(ord(b))
+    if l <= 4:
+        tmp = b + '\x00' * (4 - l)
+        return struct.unpack('<I', tmp)[0]
+    else:
+        tmp = b + '\x00' * (8 - l)
+        return struct.unpack('<Q', tmp)[0]
+
+def int1store(i):
+    """
+    Takes an unsigned byte (1 byte) and packs it as string.
+    
+    Returns string.
+    """
+    if i < 0 or i > 255:
+        raise ValueError('int1store requires 0 <= i <= 255')
+    else:
+        return struct.pack('<B', i)
+
+def int2store(i):
+    """
+    Takes an unsigned short (2 bytes) and packs it as string.
+    
+    Returns string.
+    """
+    if i < 0 or i > 65535:
+        raise ValueError('int2store requires 0 <= i <= 65535')
+    else:
+        return struct.pack('<H', i)
+
+def int3store(i):
+    """
+    Takes an unsigned integer (3 bytes) and packs it as string.
+    
+    Returns string.
+    """
+    if i < 0 or i > 16777215:
+        raise ValueError('int3store requires 0 <= i <= 16777215')
+    else:
+        return struct.pack('<I', i)[0:3]
+
+def int4store(i):
+    """
+    Takes an unsigned integer (4 bytes) and packs it as string.
+    
+    Returns string.
+    """
+    if i < 0 or i > 4294967295L:
+        raise ValueError('int4store requires 0 <= i <= 4294967295')
+    else:
+        return struct.pack('<I', i)
+
+def intstore(i):
+    """
+    Takes an unsigned integer and packs it as a string.
+
+    This function uses int1store, int2store, int3store and
+    int4store depending on the integer value.
+
+    Returns string.
+    """
+    if i < 0 or i > 4294967295L:
+        raise ValueError('intstore requires 0 <= i <= 4294967295')
+
+    if i <= 255:
+        fs = int1store
+    elif i <= 65535:
+        fs = int2store
+    elif i <= 16777215:
+        fs = int3store
+    else:
+        fs = int4store
+
+    return fs(i)
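The store/read pair round-trips through little-endian `struct` packing: the intN store functions truncate a wider `'<I'` pack, and `intread` zero-pads back up before unpacking. A Python 3 sketch of that round trip (using bytes instead of Python 2 str):

```python
import struct

# Python 3 sketch of intpack's little-endian round trip: int3store
# truncates a 4-byte pack to 3 bytes; intread zero-pads before unpacking.
def int3store(i):
    if not 0 <= i <= 16777215:
        raise ValueError('int3store requires 0 <= i <= 16777215')
    return struct.pack('<I', i)[0:3]

def intread(b):
    n = len(b)
    if n <= 4:
        return struct.unpack('<I', b + b'\x00' * (4 - n))[0]
    return struct.unpack('<Q', b + b'\x00' * (8 - n))[0]
```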

bullet/contrib/pysendfile/__init__.py

Empty file added.

bullet/contrib/pysendfile/c/sendfilemodule.c

+/* py-sendfile 1.0
+   A Python module interface to sendfile(2)
+   Copyright (C) 2005 Ben Woolley <user ben at host tautology.org>
+
+   This is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   This is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, write to the Free
+   Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
+   02111-1307 USA.  */
+
+#include <Python.h>
+#if defined(__FreeBSD__) || defined(__DragonFly__)
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <sys/uio.h>
+
+static PyObject *
+method_sendfile(PyObject *self, PyObject *args)
+{
+    int fd, s, sts;
+    off_t offset, sbytes;
+    size_t nbytes;
+    PyObject *headers;
+    struct iovec *header_iovs;
+    struct iovec *footer_iovs;
+    struct sf_hdtr hdtr;
+
+    headers = NULL;
+
+    if (!PyArg_ParseTuple(args, "iiLi|O:sendfile", &fd, &s, &offset, &nbytes, &headers))
+        return NULL;
+
+    if (headers != NULL) {
+        int i, listlen;
+        PyObject *string;
+        char *buf;
+        int headerlist_len;
+        int headerlist_string = 0;
+        int footerlist_len;
+        int footerlist_string = 0;
+        PyObject *headerlist;
+        PyObject *footerlist;
+
+        if (PyTuple_Check(headers) && PyTuple_Size(headers) > 1) {
+            //printf("arg is tuple length %d\n", PyTuple_Size(headers));
+            headerlist = PyTuple_GetItem(headers, 0);
+            if (PyList_Check(headerlist)) {
+                headerlist_len = PyList_Size(headerlist);
+            } else if (PyString_Check(headerlist)) {
+                headerlist_string = 1;
+                headerlist_len = 1;
+            } else {
+                headerlist_len = 0;
+            }
+
+            footerlist = PyTuple_GetItem(headers, 1);
+            if (PyList_Check(footerlist)) {
+                //printf("footer is list\n");
+                footerlist_len = PyList_Size(footerlist);
+            } else if (PyString_Check(footerlist)) {
+                //printf("footer is string\n");
+                footerlist_string = 1;
+                footerlist_len = 1;
+            } else {
+                //printf("footer is invalid\n");
+                footerlist_len = 0;
+            }
+        } else {
+            if (PyTuple_Check(headers)) {
+                headerlist = PyTuple_GetItem(headers, 0);
+            } else {
+                headerlist = headers;
+            }
+            if (PyList_Check(headerlist)) {
+                headerlist_len = PyList_Size(headerlist);
+            } else if (PyString_Check(headerlist)) {
+                headerlist_string = 1;
+                headerlist_len = 1;
+            } else {
+                headerlist_len = 0;
+            }
+
+            footerlist_len = 0;
+            footer_iovs = 0;
+        }
+
+        header_iovs = (struct iovec*) malloc( sizeof(struct iovec) * headerlist_len );
+        for (i=0; i < headerlist_len; i++) {
+            if (headerlist_string) {
+                string = headerlist;
+            } else {
+                string = PyList_GET_ITEM(headerlist, i);
+            }
+            buf = (char *) PyString_AsString(string);
+            header_iovs[i].iov_base = buf;
+            header_iovs[i].iov_len = PyString_GET_SIZE(string);
+        }
+
+        footer_iovs = (struct iovec*) malloc( sizeof(struct iovec) * footerlist_len );
+        for (i=0; i < footerlist_len; i++) {
+            if (footerlist_string) {
+                string = footerlist;
+            } else {
+                string = PyList_GET_ITEM(footerlist, i);
+            }
+            buf = (char *) PyString_AsString(string);
+            footer_iovs[i].iov_base = buf;
+            footer_iovs[i].iov_len = PyString_GET_SIZE(string);
+        }
+
+        hdtr.headers = header_iovs;
+        hdtr.hdr_cnt = headerlist_len;
+        hdtr.trailers = footer_iovs;
+        hdtr.trl_cnt = footerlist_len;
+
+        Py_BEGIN_ALLOW_THREADS;
+        sts = sendfile(s, fd, (off_t) offset, (size_t) nbytes, &hdtr, (off_t *) &sbytes, 0);
+        Py_END_ALLOW_THREADS;
+        free(header_iovs);
+        free(footer_iovs);
+    } else {
+        Py_BEGIN_ALLOW_THREADS;
+        sts = sendfile(s, fd, (off_t) offset, (size_t) nbytes, NULL, (off_t *) &sbytes, 0);
+        Py_END_ALLOW_THREADS;
+    }
+    if (sts == -1) {
+        if (errno == EAGAIN || errno == EINTR) {
+            return Py_BuildValue("LL", offset + nbytes, sbytes);
+        } else {
+            PyErr_SetFromErrno(PyExc_OSError);
+            return NULL;
+        }
+    } else {
+        return Py_BuildValue("LL", offset + nbytes, sbytes);
+    }
+}
+
+#else
+#include <sys/sendfile.h>
+
+static PyObject *
+method_sendfile(PyObject *self, PyObject *args)
+{
+    int out_fd, in_fd;
+    PY_LONG_LONG py_offset;
+    off_t offset;
+    size_t count;
+    ssize_t sts;
+
+    if (!PyArg_ParseTuple(args, "iiLk", &out_fd, &in_fd, &py_offset, &count))
+        return NULL;
+
+    /* The "L" format unit stores a long long; parse into one explicitly
+       and copy to an off_t, so the parse cannot overrun a 32-bit off_t. */
+    offset = (off_t) py_offset;
+
+    Py_BEGIN_ALLOW_THREADS;
+    sts = sendfile(out_fd, in_fd, &offset, count);
+    Py_END_ALLOW_THREADS;
+    if (sts == -1) {
+        PyErr_SetFromErrno(PyExc_OSError);
+        return NULL;
+    } else {
+        return Py_BuildValue("LL", (PY_LONG_LONG) offset, (PY_LONG_LONG) sts);
+    }
+}
+
+#endif
+
+static PyMethodDef SendfileMethods[] = {
+    {"sendfile",  method_sendfile, METH_VARARGS,
+"sendfile(out_fd, in_fd, offset, count) = [position, sent]\n"
+"\n"
+"FreeBSD only:\n"
+"sendfile(out_fd, in_fd, offset, count, headers_and_or_trailers) = [position, sent]\n"
+"\n"
+"Direct interface to FreeBSD and Linux sendfile(2), using the Linux argument order. Errors are turned into Python exceptions, and instead of returning only the amount of data sent, the function returns a pair containing the new file pointer and the amount of data that was sent.\n"
+"\n"
+"For example:\n"
+"\n"
+"from sendfile import *\n"
+"sendfile(out_socket.fileno(), in_file.fileno(), int_start, int_len)\n"
+"\n"
+"On Linux, item 0 of the return value is always a reliable file pointer. The value specified in the offset argument is handed to the syscall, which then updates it according to how much data has been sent. The length of data sent is returned in item 1 of the return value.\n"
+"\n"
+"FreeBSD sf_hdtr is also supported as an additional argument, which may be a string, list, or tuple. A string produces a struct iovec of length 1 containing that string, sent as the header. A list produces a struct iovec of the list's length containing its strings, which the syscall concatenates to form the total header. A tuple must hold two items, each a string or list of strings: the first is the header and the second the trailer, each processed as above. To send only a trailer, pass an empty string or list (or a list of empty strings) as the header.\n"
+"\n"
+"FreeBSD example with string header:\n"
+"\n"
+"from sendfile import *\n"
+"sendfile(out_socket.fileno(), in_file.fileno(), 0, 0, \"HTTP/1.1 200 OK\\r\\nContent-Type: text/html\\r\\nConnection: close\\r\\n\\r\\n\")\n"
+"\n"
+"FreeBSD example with header and trailer strings passed as a tuple:\n"
+"\n"
+"from sendfile import *\n"
+"sendfile(out_socket.fileno(), in_file.fileno(), int_start, int_len, ('BEGIN', 'END'))\n"
+"\n"
+"FreeBSD example with mixed types:\n"
+"\n"
+"from sendfile import *\n"
+"sendfile(out_socket.fileno(), in_file.fileno(), int_start, int_len, ([magic, metadata_len, metadata, data_len], md5))\n"
+"\n"
+"Although FreeBSD's sendfile(2) takes the socket file descriptor as its second argument, this function ALWAYS takes the socket as the first argument, like Linux and Solaris. Also, if an sf_hdtr is specified, the function returns the total data sent, including all headers and trailers. Note that item 0 of the return value, the file pointer position, is computed on FreeBSD simply as offset + count, so if not all of the data was sent, this value will be wrong. Use item 1 instead, which reports how much data was actually sent; be aware that header and trailer bytes are included in that value, so you may need to account for the headers and/or trailers yourself to determine exactly which file data went out. If you send no headers or trailers, you can simply add item 1 to your starting offset to find where to resume. In practice this is rarely a problem: if you are sending header and trailer data, the protocol will usually not allow you to simply resume from where the failure occurred anyway.\n"
+"\n"
+"The variable has_sf_hdtr is provided for determining whether sf_hdtr is supported."},
+    {NULL, NULL, 0, NULL}        /* Sentinel */
+};
+
+static void
+insint (PyObject *d, char *name, int value)
+{
+    PyObject *v = PyInt_FromLong((long) value);
+    if (!v || PyDict_SetItemString(d, name, v))
+        PyErr_Clear();
+
+    Py_XDECREF(v);
+}
+
+PyMODINIT_FUNC
+initsendfile(void)
+{
+    PyObject *m = Py_InitModule("sendfile", SendfileMethods);
+
+    PyObject *d = PyModule_GetDict (m);
+
+#if defined(__FreeBSD__) || defined(__DragonFly__)
+    insint (d, "has_sf_hdtr", 1);
+#else
+    insint (d, "has_sf_hdtr", 0);
+#endif
+    PyModule_AddStringConstant(m, "__doc__", "Direct interface to FreeBSD and Linux sendfile(2), for sending file data to a socket directly via the kernel.");
+    PyModule_AddStringConstant(m, "__version__", "1.2.2");
+}
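The resume semantics described in the docstring above can be made concrete with a small, hypothetical helper that loops until `count` bytes have gone out, written against the (position, sent) return convention of this module's Linux path. `send` stands in for `sendfile.sendfile`, which is assumed to be built and importable; the helper itself is illustrative, not part of the module:

```python
def send_all(send, out_fd, in_fd, offset, count):
    """Send `count` bytes starting at `offset`, resuming after short sends.

    `send` is any callable with this module's Linux signature:
    send(out_fd, in_fd, offset, count) -> (new_offset, sent).
    """
    remaining = count
    pos = offset
    while remaining > 0:
        # Item 0 of the result is the updated file pointer, item 1 the
        # number of bytes actually sent on this call.
        pos, sent = send(out_fd, in_fd, pos, remaining)
        if sent == 0:
            break  # EOF on the input file before `count` bytes went out
        remaining -= sent
    return pos
```

On EAGAIN/EINTR the module returns normally rather than raising, so a loop like this keeps retrying from the reported position instead of re-sending bytes the kernel already accepted.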

bullet/contrib/tornado/__init__.py

Empty file added.

bullet/contrib/tornado/escape.py

+#!/usr/bin/env python
+#
+# Copyright 2009 Facebook
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Escaping/unescaping methods for HTML, JSON, URLs, and others."""
+
+import htmlentitydefs
+import re
+import xml.sax.saxutils
+import urllib
+
+try:
+    import json
+    assert hasattr(json, "loads") and hasattr(json, "dumps")
+    _json_decode = lambda s: json.loads(s)
+    _json_encode = lambda v: json.dumps(v)
+except (ImportError, AssertionError):
+    try:
+        import simplejson
+        _json_decode = lambda s: simplejson.loads(_unicode(s))
+        _json_encode = lambda v: simplejson.dumps(v)
+    except ImportError:
+        try:
+            # For Google AppEngine
+            from django.utils import simplejson
+            _json_decode = lambda s: simplejson.loads(_unicode(s))
+            _json_encode = lambda v: simplejson.dumps(v)
+        except ImportError:
+            raise Exception("A JSON parser is required, e.g., simplejson at "
+                            "http://pypi.python.org/pypi/simplejson/")
+
+
+def xhtml_escape(value):
+    """Escapes a string so it is valid within XML or XHTML."""
+    return utf8(xml.sax.saxutils.escape(value, {'"': "&quot;"}))
+
+
+def xhtml_unescape(value):
+    """Un-escapes an XML-escaped string."""
+    return re.sub(r"&(#?)(\w+?);", _convert_entity, _unicode(value))
+
+
+def json_encode(value):
+    """JSON-encodes the given Python object."""
+    # JSON permits but does not require forward slashes to be escaped.
+    # This is useful when json data is emitted in a <script> tag
+    # in HTML, as it prevents </script> tags from prematurely terminating
+    # the javascript.  Some json libraries do this escaping by default,
+    # although python's standard library does not, so we do it here.
+    # http://stackoverflow.com/questions/1580647/json-why-are-forward-slashes-escaped
+    return _json_encode(value).replace("</", "<\\/")
+
+
+def json_decode(value):
+    """Returns Python objects for the given JSON string."""
+    return _json_decode(value)
+
+
+def squeeze(value):
+    """Replace all sequences of whitespace chars with a single space."""
+    return re.sub(r"[\x00-\x20]+", " ", value).strip()
+
+
+def url_escape(value):
+    """Returns a valid URL-encoded version of the given value."""
+    return urllib.quote_plus(utf8(value))
+
+
+def url_unescape(value):
+    """Decodes the given value from a URL."""
+    return _unicode(urllib.unquote_plus(value))
+
+
+def utf8(value):
+    if isinstance(value, unicode):
+        return value.encode("utf-8")
+    assert isinstance(value, str)
+    return value
+
+
+def _unicode(value):
+    if isinstance(value, str):
+        return value.decode("utf-8")
+    assert isinstance(value, unicode)
+    return value
+
+
+def _convert_entity(m):
+    if m.group(1) == "#":
+        try:
+            return unichr(int(m.group(2)))
+        except ValueError:
+            return "&#%s;" % m.group(2)
+    try:
+        return _HTML_UNICODE_MAP[m.group(2)]
+    except KeyError:
+        return "&%s;" % m.group(2)
+
+
+def _build_unicode_map():
+    unicode_map = {}
+    for name, value in htmlentitydefs.name2codepoint.iteritems():
+        unicode_map[name] = unichr(value)
+    return unicode_map
+
+_HTML_UNICODE_MAP = _build_unicode_map()
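The forward-slash escaping used by `json_encode` above can be reproduced with the standard-library `json` module alone; this standalone sketch (`safe_json_encode` is an illustration name, not part of this module) shows the same replace trick:

```python
import json

def safe_json_encode(value):
    # Escape "</" so the encoded value cannot close a surrounding
    # <script> tag; JSON treats "\/" as an ordinary forward slash.
    return json.dumps(value).replace("</", "<\\/")
```

Decoding is unaffected: `json.loads` reads the escaped and unescaped forms identically.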

bullet/contrib/tornado/httpclient.py

+#!/usr/bin/env python
+#
+# Copyright 2009 Facebook
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Blocking and non-blocking HTTP client implementations using pycurl."""
+
+import calendar
+import collections
+import cStringIO
+import email.utils
+import errno
+import escape
+import functools
+import httplib
+#import ioloop
+import logging
+import pycurl
+import sys
+import time
+import weakref
+
+from bullet import core as ioloop
+
+class HTTPClient(object):
+    """A blocking HTTP client backed with pycurl.
+
+    Typical usage looks like this:
+
+        http_client = httpclient.HTTPClient()
+        try:
+            response = http_client.fetch("http://www.google.com/")
+            print response.body
+        except httpclient.HTTPError, e:
+            print "Error:", e
+
+    fetch() can take a string URL or an HTTPRequest instance, which offers
+    more options, like executing POST/PUT/DELETE requests.
+    """
+    def __init__(self, max_simultaneous_connections=None):
+        self._curl = _curl_create(max_simultaneous_connections)
+
+    def __del__(self):
+        self._curl.close()
+
+    def fetch(self, request, **kwargs):
+        """Executes an HTTPRequest, returning an HTTPResponse.
+
+        If an error occurs during the fetch, we raise an HTTPError.
+        """
+        if not isinstance(request, HTTPRequest):
+            request = HTTPRequest(url=request, **kwargs)
+        buffer = cStringIO.StringIO()
+        headers = {}
+        try:
+            _curl_setup_request(self._curl, request, buffer, headers)
+            self._curl.perform()
+            code = self._curl.getinfo(pycurl.HTTP_CODE)
+            effective_url = self._curl.getinfo(pycurl.EFFECTIVE_URL)
+            buffer.seek(0)
+            response = HTTPResponse(
+                request=request, code=code, headers=headers,
+                buffer=buffer, effective_url=effective_url)
+            if code < 200 or code >= 300:
+                raise HTTPError(code, response=response)
+            return response
+        except pycurl.error, e:
+            buffer.close()
+            raise CurlError(*e)
+
+
+class AsyncHTTPClient(object):
+    """A non-blocking HTTP client backed with pycurl.
+
+    Example usage:
+
+        import ioloop
+
+        def handle_request(response):
+            if response.error:
+                print "Error:", response.error
+            else:
+                print response.body
+            ioloop.IOLoop.instance().stop()
+
+        http_client = httpclient.AsyncHTTPClient()
+        http_client.fetch("http://www.google.com/", handle_request)
+        ioloop.IOLoop.instance().start()
+
+    fetch() can take a string URL or an HTTPRequest instance, which offers
+    more options, like executing POST/PUT/DELETE requests.
+
+    The keyword argument max_clients to the AsyncHTTPClient constructor
+    determines the maximum number of simultaneous fetch() operations that
+    can execute in parallel on each IOLoop.
+    """
+    _ASYNC_CLIENTS = weakref.WeakKeyDictionary()
+
+    def __new__(cls, io_loop=None, max_clients=10,
+                max_simultaneous_connections=None):
+        # There is one client per IOLoop since they share curl instances
+        io_loop = io_loop or ioloop.IOLoop.instance()
+        if io_loop in cls._ASYNC_CLIENTS:
+            return cls._ASYNC_CLIENTS[io_loop]
+        else:
+            instance = super(AsyncHTTPClient, cls).__new__(cls)
+            instance.io_loop = io_loop
+            instance._multi = pycurl.CurlMulti()
+            instance._curls = [_curl_create(max_simultaneous_connections)
+                               for i in xrange(max_clients)]
+            instance._free_list = instance._curls[:]
+            instance._requests = collections.deque()
+            instance._fds = {}
+            instance._events = {}
+            instance._added_perform_callback = False
+            instance._timeout = None
+            instance._closed = False
+            cls._ASYNC_CLIENTS[io_loop] = instance
+            return instance
+
+    def close(self):
+        """Destroys this http client, freeing any file descriptors used.
+        Not needed in normal use, but may be helpful in unittests that
+        create and destroy http clients.  No other methods may be called
+        on the AsyncHTTPClient after close().
+        """
+        del AsyncHTTPClient._ASYNC_CLIENTS[self.io_loop]
+        for curl in self._curls:
+            curl.close()
+        self._multi.close()
+        self._closed = True
+
+    def fetch(self, request, callback, **kwargs):
+        """Executes an HTTPRequest, calling callback with an HTTPResponse.
+
+        If an error occurs during the fetch, the HTTPResponse given to the
+        callback has a non-None error attribute that contains the exception
+        encountered during the request. You can call response.reraise() to
+        throw the exception (if any) in the callback.
+        """
+        if not isinstance(request, HTTPRequest):
+            request = HTTPRequest(url=request, **kwargs)
+        self._requests.append((request, callback))
+        self._add_perform_callback()
+
+    def _add_perform_callback(self):
+        if not self._added_perform_callback:
+            self.io_loop.add_callback(self._perform)
+            self._added_perform_callback = True
+
+    def _handle_events(self, fd, events):
+        self._events[fd] = events
+        self._add_perform_callback()
+
+    def _handle_timeout(self):
+        self._timeout = None
+        self._perform()
+
+    def _perform(self):
+        self._added_perform_callback = False
+
+        if self._closed:
+            return
+
+        while True:
+            while True:
+                ret, num_handles = self._multi.perform()
+                if ret != pycurl.E_CALL_MULTI_PERFORM:
+                    break
+
+            # Update the set of active file descriptors.  It is important
+            # that this happen immediately after perform() because
+            # fds that have been removed from fdset are free to be reused
+            # in user callbacks.
+            fds = {}
+            (readable, writable, exceptable) = self._multi.fdset()
+            for fd in readable:
+                fds[fd] = fds.get(fd, 0) | 0x1 | 0x2
+            for fd in writable:
+                fds[fd] = fds.get(fd, 0) | 0x4
+            for fd in exceptable:
+                fds[fd] = fds.get(fd, 0) | 0x8 | 0x10
+
+            if fds and max(fds.iterkeys()) > 900:
+                # Libcurl has a bug in which it behaves unpredictably with
+                # file descriptors greater than 1024.  (This is because
+                # even though it uses poll() instead of select(), it still
+                # uses FD_SET internally) Since curl opens its own file
+                # descriptors we can't catch this problem when it happens,
+                # and the best we can do is detect that it's about to
+                # happen.  Exiting is a lousy way to handle this error,
+                # but there's not much we can do at this point.  Exiting
+                # (and getting restarted by whatever monitoring process
+                # is handling crashed tornado processes) will at least
+                # get things working again and hopefully bring the issue
+                # to someone's attention.
+                # If you run into this issue, you either have a file descriptor
+                # leak or need to run more tornado processes (so that none
+                # of them are handling more than 1000 simultaneous connections)
+                print >> sys.stderr, "ERROR: File descriptor too high for libcurl. Exiting."
+                logging.error("File descriptor too high for libcurl. Exiting.")
+                sys.exit(1)
+
+            for fd in self._fds:
+                if fd not in fds:
+                    try:
+                        self.io_loop.remove_handler(fd)
+                    except (OSError, IOError), e:
+                        if e[0] != errno.ENOENT:
+                            raise
+
+            for fd, events in fds.iteritems():
+                old_events = self._fds.get(fd, None)
+                if old_events is None:
+                    self.io_loop.add_handler(fd, self._handle_events, events)
+                elif old_events != events:
+                    try:
+                        self.io_loop.update_handler(fd, events)
+                    except (OSError, IOError), e:
+                        if e[0] == errno.ENOENT:
+                            self.io_loop.add_handler(fd, self._handle_events,
+                                                     events)
+                        else:
+                            raise
+            self._fds = fds
+
+
+            # Handle completed fetches
+            completed = 0
+            while True:
+                num_q, ok_list, err_list = self._multi.info_read()
+                for curl in ok_list:
+                    self._finish(curl)
+                    completed += 1
+                for curl, errnum, errmsg in err_list:
+                    self._finish(curl, errnum, errmsg)
+                    completed += 1
+                if num_q == 0:
+                    break
+
+            # Start fetching new URLs
+            started = 0
+            while self._free_list and self._requests:
+                started += 1
+                curl = self._free_list.pop()
+                (request, callback) = self._requests.popleft()
+                curl.info = {
+                    "headers": {},
+                    "buffer": cStringIO.StringIO(),
+                    "request": request,
+                    "callback": callback,
+                    "start_time": time.time(),
+                }
+                _curl_setup_request(curl, request, curl.info["buffer"],
+                                    curl.info["headers"])
+                self._multi.add_handle(curl)
+
+            if not started and not completed:
+                break
+
+        if self._timeout is not None:
+            self.io_loop.remove_timeout(self._timeout)
+            self._timeout = None
+
+        if num_handles:
+            self._timeout = self.io_loop.add_timeout(
+                time.time() + 0.2, self._handle_timeout)
+
+
+    def _finish(self, curl, curl_error=None, curl_message=None):
+        info = curl.info
+        curl.info = None
+        self._multi.remove_handle(curl)
+        self._free_list.append(curl)
+        buffer = info["buffer"]
+        if curl_error:
+            error = CurlError(curl_error, curl_message)
+            code = error.code
+            body = None
+            effective_url = None
+            buffer.close()
+            buffer = None
+        else:
+            error = None
+            code = curl.getinfo(pycurl.HTTP_CODE)
+            effective_url = curl.getinfo(pycurl.EFFECTIVE_URL)
+            buffer.seek(0)
+        try:
+            info["callback"](HTTPResponse(
+                request=info["request"], code=code, headers=info["headers"],
+                buffer=buffer, effective_url=effective_url, error=error,
+                request_time=time.time() - info["start_time"]))
+        except (KeyboardInterrupt, SystemExit):
+            raise
+        except:
+            logging.error("Exception in callback %r", info["callback"],
+                          exc_info=True)
+
+
+class AsyncHTTPClient2(object):
+    """Alternate implementation of AsyncHTTPClient.
+
+    This class has the same interface as AsyncHTTPClient (so see that class
+    for usage documentation) but is implemented with a different set of
+    libcurl APIs (curl_multi_socket_action instead of fdset/perform).
+    This implementation will likely become the default in the future, but
+    for now should be considered somewhat experimental.
+
+    The main advantage of this class over the original implementation is
+    that it is immune to the fd > 1024 bug, so applications with a large
+    number of simultaneous requests (e.g. long-polling) may prefer this
+    version.
+
+    Known bugs:
+    * Timeouts connecting to localhost
+    In some situations, this implementation will return a connection
+    timeout when the old implementation would be able to connect.  This
+    has only been observed when connecting to localhost when using
+    the kqueue-based IOLoop (mac/bsd), but it may also occur on epoll (linux)
+    and, in principle, for non-localhost sites.
+    While the bug is unrelated to IPv6, disabling IPv6 will avoid the
+    most common manifestations of the bug (use a prepare_curl_callback that
+    calls curl.setopt(pycurl.IPRESOLVE, pycurl.IPRESOLVE_V4)).
+    The underlying cause is a libcurl bug that has been confirmed to be
+    present in versions 7.20.0 and 7.21.0:
+    http://sourceforge.net/tracker/?func=detail&aid=3017819&group_id=976&atid=100976
+    """
+    _ASYNC_CLIENTS = weakref.WeakKeyDictionary()
+
+    def __new__(cls, io_loop=None, max_clients=10,
+                max_simultaneous_connections=None):
+        # There is one client per IOLoop since they share curl instances
+        io_loop = io_loop or ioloop.IOLoop.instance()
+        if io_loop in cls._ASYNC_CLIENTS:
+            return cls._ASYNC_CLIENTS[io_loop]
+        else:
+            instance = super(AsyncHTTPClient2, cls).__new__(cls)
+            instance.io_loop = io_loop
+            instance._multi = pycurl.CurlMulti()
+            instance._multi.setopt(pycurl.M_TIMERFUNCTION,
+                                   instance._handle_timer)
+            instance._multi.setopt(pycurl.M_SOCKETFUNCTION,
+                                   instance._handle_socket)
+            instance._curls = [_curl_create(max_simultaneous_connections)
+                               for i in xrange(max_clients)]
+            instance._free_list = instance._curls[:]
+            instance._requests = collections.deque()
+            instance._fds = {}
+            cls._ASYNC_CLIENTS[io_loop] = instance
+            return instance
+
+    def close(self):
+        """Destroys this http client, freeing any file descriptors used.
+        Not needed in normal use, but may be helpful in unittests that
+        create and destroy http clients.  No other methods may be called
+        on the AsyncHTTPClient after close().
+        """
+        del AsyncHTTPClient2._ASYNC_CLIENTS[self.io_loop]
+        for curl in self._curls:
+            curl.close()
+        self._multi.close()
+        self._closed = True
+
+    def fetch(self, request, callback, **kwargs):
+        """Executes an HTTPRequest, calling callback with an HTTPResponse.
+
+        If an error occurs during the fetch, the HTTPResponse given to the
+        callback has a non-None error attribute that contains the exception
+        encountered during the request. You can call response.reraise() to
+        throw the exception (if any) in the callback.
+        """
+        if not isinstance(request, HTTPRequest):
+            request = HTTPRequest(url=request, **kwargs)
+        self._requests.append((request, callback))
+        self._process_queue()
+        self.io_loop.add_callback(self._handle_timeout)
+
+    def _handle_socket(self, event, fd, multi, data):
+        """Called by libcurl when it wants to change the file descriptors
+        it cares about.
+        """
+        event_map = {
+            pycurl.POLL_NONE: ioloop.IOLoop.NONE,
+            pycurl.POLL_IN: ioloop.IOLoop.READ,
+            pycurl.POLL_OUT: ioloop.IOLoop.WRITE,
+            pycurl.POLL_INOUT: ioloop.IOLoop.READ | ioloop.IOLoop.WRITE
+        }
+        if event == pycurl.POLL_REMOVE:
+            self.io_loop.remove_handler(fd)
+            del self._fds[fd]
+        else:
+            ioloop_event = event_map[event]
+            if fd not in self._fds:
+                self._fds[fd] = ioloop_event
+                self.io_loop.add_handler(fd, self._handle_events,
+                                         ioloop_event)
+            else:
+                self._fds[fd] = ioloop_event
+                self.io_loop.update_handler(fd, ioloop_event)
+
+    def _handle_timer(self, msecs):
+        """Called by libcurl to schedule a timeout."""
+        self.io_loop.add_timeout(
+            time.time() + msecs / 1000.0, self._handle_timeout)
+
+    def _handle_events(self, fd, events):
+        """Called by IOLoop when there is activity on one of our
+        file descriptors.
+        """
+        action = 0
+        if events & ioloop.IOLoop.READ: action |= pycurl.CSELECT_IN
+        if events & ioloop.IOLoop.WRITE: action |= pycurl.CSELECT_OUT
+        while True:
+            try:
+                ret, num_handles = self._multi.socket_action(fd, action)
+            except Exception, e:
+                ret = e[0]
+            if ret != pycurl.E_CALL_MULTI_PERFORM:
+                break
+        self._finish_pending_requests()
+
+    def _handle_timeout(self):
+        """Called by IOLoop when the requested timeout has passed."""
+        while True:
+            try:
+                ret, num_handles = self._multi.socket_action(
+                                        pycurl.SOCKET_TIMEOUT, 0)
+            except Exception, e:
+                ret = e[0]
+            if ret != pycurl.E_CALL_MULTI_PERFORM:
+                break
+        self._finish_pending_requests()
+
+    def _finish_pending_requests(self):
+        """Process any requests that were completed by the last
+        call to multi.socket_action.
+        """
+        while True:
+            num_q, ok_list, err_list = self._multi.info_read()
+            for curl in ok_list:
+                self._finish(curl)
+            for curl, errnum, errmsg in err_list:
+                self._finish(curl, errnum, errmsg)
+            if num_q == 0:
+                break
+        self._process_queue()
+
+    def _process_queue(self):
+        while True:
+            started = 0
+            while self._free_list and self._requests:
+                started += 1
+                curl = self._free_list.pop()
+                (request, callback) = self._requests.popleft()
+                curl.info = {
+                    "headers": {},
+                    "buffer": cStringIO.StringIO(),
+                    "request": request,
+                    "callback": callback,
+                    "start_time": time.time(),
+                }
+                _curl_setup_request(curl, request, curl.info["buffer"],
+                                    curl.info["headers"])
+                self._multi.add_handle(curl)
+
+            if not started:
+                break
+
+    def _finish(self, curl, curl_error=None, curl_message=None):
+        info = curl.info
+        curl.info = None
+        self._multi.remove_handle(curl)
+        self._free_list.append(curl)
+        buffer = info["buffer"]
+        if curl_error:
+            error = CurlError(curl_error, curl_message)
+            code = error.code
+            effective_url = None
+            buffer.close()
+            buffer = None
+        else:
+            error = None
+            code = curl.getinfo(pycurl.HTTP_CODE)
+            effective_url = curl.getinfo(pycurl.EFFECTIVE_URL)
+            buffer.seek(0)
+        try:
+            info["callback"](HTTPResponse(
+                request=info["request"], code=code, headers=info["headers"],
+                buffer=buffer, effective_url=effective_url, error=error,
+                request_time=time.time() - info["start_time"]))
+        except (KeyboardInterrupt, SystemExit):
+            raise
+        except:
+            logging.error("Exception in callback %r", info["callback"],
+                          exc_info=True)
+
+
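Both client classes above rely on the same per-IOLoop singleton trick: `__new__` consults a `WeakKeyDictionary` keyed by the loop, so all callers on one loop share curl instances while a discarded loop frees its client. A minimal, hypothetical distillation of that pattern (`PerLoopClient` and the test's `FakeLoop` are illustration names, not part of this module):

```python
import weakref

class PerLoopClient(object):
    # One client instance per loop object, held weakly so that a
    # garbage-collected loop releases its client as well.
    _CLIENTS = weakref.WeakKeyDictionary()

    def __new__(cls, loop):
        if loop in cls._CLIENTS:
            return cls._CLIENTS[loop]
        instance = super(PerLoopClient, cls).__new__(cls)
        instance.loop = loop
        cls._CLIENTS[loop] = instance
        return instance
```

Note that because the cache lives in `__new__`, any `__init__` would re-run on every construction; the real classes avoid defining `__init__` and do all setup inside `__new__` for exactly that reason.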
+class HTTPRequest(object):
+    def __init__(self, url, method="GET", headers=None, body=None,
+                 auth_username=None, auth_password=None,
+                 connect_timeout=20.0, request_timeout=20.0,
+                 if_modified_since=None, follow_redirects=True,
+                 max_redirects=5, user_agent=None, use_gzip=True,
+                 network_interface=None, streaming_callback=None,
+                 header_callback=None, prepare_curl_callback=None,
+                 allow_nonstandard_methods=False):
+        # Use a fresh dict per request; a mutable {} default would be
+        # shared across all HTTPRequest instances and is modified below.
+        if headers is None:
+            headers = {}
+        if if_modified_since:
+            timestamp = calendar.timegm(if_modified_since.utctimetuple())
+            headers["If-Modified-Since"] = email.utils.formatdate(
+                timestamp, localtime=False, usegmt=True)
+        if "Pragma" not in headers:
+            headers["Pragma"] = ""
+        self.url = _utf8(url)
+        self.method = method
+        self.headers = headers
+        self.body = body
+        self.auth_username = _utf8(auth_username)
+        self.auth_password = _utf8(auth_password)
+        self.connect_timeout = connect_timeout
+        self.request_timeout = request_timeout
+        self.follow_redirects = follow_redirects
+        self.max_redirects = max_redirects
+        self.user_agent = user_agent
+        self.use_gzip = use_gzip
+        self.network_interface = network_interface
+        self.streaming_callback = streaming_callback
+        self.header_callback = header_callback
+        self.prepare_curl_callback = prepare_curl_callback
+        self.allow_nonstandard_methods = allow_nonstandard_methods
+
+
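The `if_modified_since` handling in `HTTPRequest.__init__` turns a datetime into the RFC 1123 string the `If-Modified-Since` header requires. A minimal standalone sketch of that conversion (the function name and sample datetime are mine, used only for illustration):

```python
import calendar
import datetime
import email.utils

def format_if_modified_since(dt):
    # Same two-step conversion as HTTPRequest.__init__: naive UTC
    # datetime -> POSIX timestamp -> RFC 1123 date string ending in GMT.
    timestamp = calendar.timegm(dt.utctimetuple())
    return email.utils.formatdate(timestamp, localtime=False, usegmt=True)

print(format_if_modified_since(datetime.datetime(2010, 1, 2, 3, 4, 5)))
# → Sat, 02 Jan 2010 03:04:05 GMT
```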
+class HTTPResponse(object):
+    def __init__(self, request, code, headers=None, buffer=None,
+                 effective_url=None, error=None, request_time=None):
+        self.request = request
+        self.code = code
+        # Avoid a shared mutable default argument for headers.
+        self.headers = headers if headers is not None else {}
+        self.buffer = buffer
+        self._body = None
+        if effective_url is None:
+            self.effective_url = request.url
+        else:
+            self.effective_url = effective_url
+        if error is None:
+            if self.code < 200 or self.code >= 300:
+                self.error = HTTPError(self.code, response=self)
+            else:
+                self.error = None
+        else:
+            self.error = error
+        self.request_time = request_time
+
+    def _get_body(self):
+        if self.buffer is None:
+            return None
+        elif self._body is None:
+            self._body = self.buffer.getvalue()
+
+        return self._body
+
+    body = property(_get_body)
+
+    def rethrow(self):
+        if self.error:
+            raise self.error
+
+    def __repr__(self):
+        args = ",".join("%s=%r" % i for i in self.__dict__.iteritems())
+        return "%s(%s)" % (self.__class__.__name__, args)
+
+    def __del__(self):
+        if self.buffer is not None:
+            self.buffer.close()
+
+
+class HTTPError(Exception):
+    """Exception thrown for an unsuccessful HTTP request.
+
+    Attributes:
+    code - integer HTTP error code, e.g. 404.  Error code 599 is
+           used when no HTTP response was received, e.g. for a timeout.
+    response - HTTPResponse object, if any.
+
+    Note that if follow_redirects is False, redirects become HTTPErrors,
+    and you can look at error.response.headers['Location'] to see the
+    destination of the redirect.
+    """
+    def __init__(self, code, message=None, response=None):
+        self.code = code
+        message = message or httplib.responses.get(code, "Unknown")
+        self.response = response
+        Exception.__init__(self, "HTTP %d: %s" % (self.code, message))
+
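`HTTPError` derives its default message from the standard status-code table. A self-contained sketch of the same rule (shown with Python 3's `http.client`; the module itself uses the Python 2 `httplib`):

```python
import http.client

class HTTPError(Exception):
    # Same message rule as above: fall back to the standard reason
    # phrase for the code, or "Unknown" for nonstandard codes like 599.
    def __init__(self, code, message=None, response=None):
        self.code = code
        message = message or http.client.responses.get(code, "Unknown")
        self.response = response
        Exception.__init__(self, "HTTP %d: %s" % (self.code, message))

print(HTTPError(404))  # → HTTP 404: Not Found
print(HTTPError(599))  # → HTTP 599: Unknown
```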
+
+class CurlError(HTTPError):
+    def __init__(self, errno, message):
+        HTTPError.__init__(self, 599, message)
+        self.errno = errno
+
+
+def _curl_create(max_simultaneous_connections=None):
+    curl = pycurl.Curl()
+    if logging.getLogger().isEnabledFor(logging.DEBUG):
+        curl.setopt(pycurl.VERBOSE, 1)
+        curl.setopt(pycurl.DEBUGFUNCTION, _curl_debug)
+    curl.setopt(pycurl.MAXCONNECTS, max_simultaneous_connections or 5)
+    return curl
+
+
+def _curl_setup_request(curl, request, buffer, headers):
+    curl.setopt(pycurl.URL, request.url)
+    curl.setopt(pycurl.HTTPHEADER,
+                [_utf8("%s: %s" % i) for i in request.headers.iteritems()])
+    if request.header_callback:
+        curl.setopt(pycurl.HEADERFUNCTION, request.header_callback)
+    else:
+        curl.setopt(pycurl.HEADERFUNCTION,
+                    lambda line: _curl_header_callback(headers, line))
+    if request.streaming_callback:
+        curl.setopt(pycurl.WRITEFUNCTION, request.streaming_callback)
+    else:
+        curl.setopt(pycurl.WRITEFUNCTION, buffer.write)
+    curl.setopt(pycurl.FOLLOWLOCATION, request.follow_redirects)
+    curl.setopt(pycurl.MAXREDIRS, request.max_redirects)
+    curl.setopt(pycurl.CONNECTTIMEOUT, int(request.connect_timeout))
+    curl.setopt(pycurl.TIMEOUT, int(request.request_timeout))
+    if request.user_agent:
+        curl.setopt(pycurl.USERAGENT, _utf8(request.user_agent))
+    else:
+        curl.setopt(pycurl.USERAGENT, "Mozilla/5.0 (compatible; pycurl)")
+    if request.network_interface:
+        curl.setopt(pycurl.INTERFACE, request.network_interface)
+    if request.use_gzip:
+        curl.setopt(pycurl.ENCODING, "gzip,deflate")
+    else:
+        curl.setopt(pycurl.ENCODING, "none")
+
+    # Set the request method through curl's awkward interface, which uses
+    # a separate option for almost every method
+    curl_options = {
+        "GET": pycurl.HTTPGET,
+        "POST": pycurl.POST,
+        "PUT": pycurl.UPLOAD,
+        "HEAD": pycurl.NOBODY,
+    }
+    custom_methods = set(["DELETE"])
+    for o in curl_options.values():
+        curl.setopt(o, False)
+    if request.method in curl_options:
+        curl.unsetopt(pycurl.CUSTOMREQUEST)
+        curl.setopt(curl_options[request.method], True)
+    elif request.allow_nonstandard_methods or request.method in custom_methods:
+        curl.setopt(pycurl.CUSTOMREQUEST, request.method)
+    else:
+        raise KeyError('unknown method ' + request.method)
+
+    # Handle curl's cryptic options for every individual HTTP method
+    if request.method in ("POST", "PUT"):
+        request_buffer = cStringIO.StringIO(escape.utf8(request.body))
+        curl.setopt(pycurl.READFUNCTION, request_buffer.read)
+        if request.method == "POST":
+            def ioctl(cmd):
+                if cmd == curl.IOCMD_RESTARTREAD:
+                    request_buffer.seek(0)
+            curl.setopt(pycurl.IOCTLFUNCTION, ioctl)
+            curl.setopt(pycurl.POSTFIELDSIZE, len(request.body))
+        else:
+            curl.setopt(pycurl.INFILESIZE, len(request.body))
+
+    if request.auth_username and request.auth_password:
+        userpwd = "%s:%s" % (request.auth_username, request.auth_password)
+        curl.setopt(pycurl.HTTPAUTH, pycurl.HTTPAUTH_BASIC)
+        curl.setopt(pycurl.USERPWD, userpwd)
+        logging.info("%s %s (username: %r)", request.method, request.url,
+                     request.auth_username)
+    else:
+        curl.unsetopt(pycurl.USERPWD)
+        logging.info("%s %s", request.method, request.url)
+    if request.prepare_curl_callback is not None:
+        request.prepare_curl_callback(curl)
+
+
+def _curl_header_callback(headers, header_line):
+    if header_line.startswith("HTTP/"):
+        headers.clear()
+        return
+    if header_line == "\r\n":
+        return
+    parts = header_line.split(":", 1)
+    if len(parts) != 2:
+        logging.warning("Invalid HTTP response header line %r", header_line)
+        return
+    name = parts[0].strip()
+    value = parts[1].strip()
+    if name in headers:
+        headers[name] = headers[name] + ',' + value
+    else:
+        headers[name] = value
+
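`_curl_header_callback` above folds repeated header names into a single comma-separated value and resets the dict on each new status line. A standalone sketch of that parsing rule (the function name is mine, not part of the module):

```python
def parse_header_lines(lines):
    """Mirror of _curl_header_callback's rules, applied to a list of lines."""
    headers = {}
    for line in lines:
        if line.startswith("HTTP/"):
            headers.clear()   # a new status line starts a new header set
            continue
        if line == "\r\n":
            continue          # blank line ends the header block
        parts = line.split(":", 1)
        if len(parts) != 2:
            continue          # malformed line (the module logs a warning)
        name, value = parts[0].strip(), parts[1].strip()
        # repeated names are folded into one comma-separated value
        headers[name] = headers[name] + "," + value if name in headers else value
    return headers

print(parse_header_lines(["HTTP/1.1 200 OK\r\n",
                          "Set-Cookie: a=1\r\n",
                          "Set-Cookie: b=2\r\n",
                          "\r\n"]))
# → {'Set-Cookie': 'a=1,b=2'}
```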
+
+def _curl_debug(debug_type, debug_msg):
+    debug_types = ('I', '<', '>', '<', '>')
+    if debug_type == 0:
+        logging.debug('%s', debug_msg.strip())
+    elif debug_type in (1, 2):
+        for line in debug_msg.splitlines():
+            logging.debug('%s %s', debug_types[debug_type], line)
+    elif debug_type == 4:
+        logging.debug('%s %r', debug_types[debug_type], debug_msg)
+
+
+def _utf8(value):
+    if value is None:
+        return value
+    if isinstance(value, unicode):
+        return value.encode("utf-8")
+    assert isinstance(value, str)
+    return value
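`_utf8` normalizes text to UTF-8 bytes and passes `None` through unchanged. The module targets Python 2, where the types are `unicode` and `str`; the Python 3 rendering of the same behavior looks like this:

```python
def utf8(value):
    # Python 3 equivalent of _utf8: encode str to UTF-8 bytes,
    # pass bytes and None through unchanged.
    if value is None:
        return None
    if isinstance(value, str):
        return value.encode("utf-8")
    assert isinstance(value, bytes)
    return value

print(utf8(u"café"))  # → b'caf\xc3\xa9'
```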
+# -*- coding: utf-8 -*-
+#
+# Copyright(c) 2010 bulletweb.org
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Bullet kernel module.
+
+Provides the task scheduler and various basic interfaces.
+
+"""
+
+import os
+import socket
+import select
+import bisect
+import time
+import logging
+from collections import deque
+from greenlet import greenlet, GreenletExit
+
+from bullet.utils.const import Priority, EpollMask
+from bullet.exceptions import BulletError
+
+class Scheduler(object):
+    """Bullet task scheduler.
+
+    Schedules tasks and manages communication between them.
+    The `start` method runs bullet's main loop.
+
+    """
+    _instance = None
+    _poll_deadline = 0.2
+
+    def __init__(self):
+        assert self._instance is None, \
+            "Scheduler can't be initialized twice within one process!"
+
+        self.reset()
+
+    def reset(self):
+        """Reset the Scheduler instance.
+
+        The Scheduler may inherit an instance from the parent process, but a
+        child process sometimes needs a clean instance, so bullet calls
+        `reset` when the child process starts.
+
+        """
+        self._epoll = select.epoll()
+        self._sockets = {}
+        self._socket_fds = []
+        self._handlers = {}
+        self._services = []
+        self._timeout = []
+        self._queue = deque()
+        self._coroutine = greenlet.getcurrent()
+        self._poll_timeout = self._poll_deadline
+        self._push_client = None
+
+        self._running = False
+
+    @classmethod
+    def instance(cls):
+        """Returns the global instance.
+
+        Returns the global Scheduler instance for this process. Use this
+        method instead of passing Scheduler instances around your code.
+        """
+        if cls._instance is None:
+            cls._instance = cls()
+        return cls._instance
+
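The `instance` classmethod above is a lazy per-process singleton: the first call constructs the object, every later call returns the same one. A minimal standalone sketch of the pattern (the class name is illustrative):

```python
class Singleton(object):
    # Lazy per-process singleton, as in Scheduler.instance().
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

a = Singleton.instance()
b = Singleton.instance()
print(a is b)  # → True
```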
+    @property
+    def running(self):
+        """Returns true if this Scheduler is currently running."""
+        return self._running
+
+    @property
+    def sockets(self):
+        """Returns server sockets."""
+        return self._sockets
+
+    def start(self):
+        """Start the bullet scheduler's main loop."""
+        logging.info("Start bullet scheduler.")
+
+        # schedule the ioloop as the main task and put it first, so tasks
+        # that depend on the ioloop can be added before it starts
+        self.schedule(self._ioloop, prio=Priority.FIRST)
+        # mark it as running
+        self._running = True
+
+        try:
+            while self._running:
+                try:
+                    resume, task, value = self._queue.popleft()
+                    # resume a task
+                    resume(task, value)
+                except IndexError:
+                    logging.info("All tasks finished.")
+                    break
+                except BulletError:
+                    logging.warning("Bullet internal exception!", exc_info=True)
+                    raise
+                except Exception:
+                    logging.critical("Unknown exception!", exc_info=True)
+                    # swallow the exception so the main loop keeps running
+        finally:
+            self._clearup()
+            logging.info("Bullet scheduler stopped.")
+
+    def stop(self):
+        self._running = False
+
+    def wake(self):
+        # do not let poll block the ioloop, so the scheduler can run other
+        # coroutines
+        self._poll_timeout = 0
+
+    def fire(self, event):
+        assert isinstance(event, IEvent), "event must be an IEvent object!"
+        self._push_client.fire(event)
+
+    def wait(self, event):
+        assert isinstance(event, IEvent), "event must be an IEvent object!"
+        self.schedule(self._push_client.wait, event)
+
+    def schedule(self, coroutine, value=None, prio=Priority.DEFAULT):
+        """Schedule a new coroutine (task) for the next round."""
+        def resume(coroutine, value=None):
+            try:
+                if coroutine is not None:
+                    if not isinstance(coroutine, greenlet):
+                        coroutine = greenlet(coroutine)
+
+                    if value is None:
+                        coroutine.switch()
+                    else:
+                        coroutine.switch(value)
+            except GreenletExit:
+                pass
+            except Exception:
+                # propagate to the parent (scheduler main loop)
+                raise
+
+        if prio == Priority.FIRST:
+            self._queue.appendleft((resume, coroutine, value))
+        else:
+            self._queue.append((resume, coroutine, value))
+
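`schedule` implements its two priority levels with a deque: `Priority.FIRST` tasks jump to the front via `appendleft`, everything else is appended in FIFO order. A standalone sketch of that ordering (the constants and names are stand-ins for the real `Priority` values):

```python
from collections import deque

FIRST, DEFAULT = 0, 1  # stand-ins for Priority.FIRST / Priority.DEFAULT

queue = deque()

def schedule(task, prio=DEFAULT):
    # FIRST tasks jump the queue; DEFAULT tasks run in FIFO order,
    # mirroring Scheduler.schedule's appendleft/append split.
    if prio == FIRST:
        queue.appendleft(task)
    else:
        queue.append(task)

schedule("a")
schedule("b")
schedule("ioloop", prio=FIRST)
print(list(queue))  # → ['ioloop', 'a', 'b']
```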
+    def switch(self):
+        # switch to the main coroutine(greenlet)
+        self._coroutine.switch()
+
+    def add_server(self, serversocket, handler):