Commits

Evaggelos Balaskas committed 5af37da

Added ssh port parameter for synchronization over ssh


Files changed (8)

+@ Tue Apr 12 21:31:31 EEST 2011
+
+Please note that this is the last version in Python 2
+The next version of pirsyncd will be in Python 3 only
+
+Pyinotify.py updated to 0.9.1 (20110405)
+
+Added ssh port parameter for synchronization over ssh
+(Feature request by Fred Warren)
+
+Removed trailing slash (/) requirement for source & destination path
+
+Added Pyinotify.py version 3 for pirsyncd.Py3k
+
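The trailing-slash change above means pirsyncd now normalises the source and destination paths itself instead of requiring the user to supply them with a trailing "/". A minimal standalone sketch of that normalisation (the function name and sample paths are illustrative, not part of the commit):

def ensure_trailing_slash(path):
    # rsync only copies the *contents* of a directory when the source ends with "/"
    if not path.endswith("/"):
        path += "/"
    return path

assert ensure_trailing_slash("/tmp/data") == "/tmp/data/"
assert ensure_trailing_slash("/tmp/data/") == "/tmp/data/"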
 @Wed Sep  8 20:45:52 EEST 2010
 
 OptionParser epilog changed to description
 
 Recommended:
 
-Python 2.6
+Python v2.7.1
 Rsync  3
 
 Python Issues:
 pirsyncd stands for: Python Inotify Rsync Daemon
-Copyright Evaggelos Balaskas, ebalaskas AT ebalaskas DOT gr (2009, 2010)
+Copyright Evaggelos Balaskas, ebalaskas AT ebalaskas DOT gr (2009, 2010, 2011)
 
 http://ebalaskas.gr/blog/?page=pirsyncd
 
 (logging python module)
 
 08. When --host is in use, checking the destination directory isn't necessary
-(Suggested by Jeff Templon <templon AT nikhef DOT nl>)
+(Suggested by Jeff Templon < templon _AT_ nikhef _DOT_ nl >)
 
 fixed
 
 09. Append Functionality
-(Suggested by Jeff Templon <templon AT nikhef DOT nl>)
+(Suggested by Jeff Templon < templon _AT_ nikhef _DOT_ nl >)
 
 fixed
 
 10: rsync version 2 does not support --log-file 
-(Suggested by Jeff Templon <templon AT nikhef DOT nl>)
+(Suggested by Jeff Templon < templon _AT_ nikhef _DOT_ nl >)
 
 fixed
 
 11. PIrsyncD against rsync daemon
 Based on Jeff Templon's patch
-(Suggested by Jeff Templon <templon AT nikhef DOT nl>)
+(Suggested by Jeff Templon < templon _AT_ nikhef _DOT_ nl >)
 
 fixed
 
 12. multiple instances of PIrsyncD
 Based on Jeff Templon & Jan Just Keijser's patch
-(Jeff Templon <templon AT nikhef DOT nl> and Jan Just Keijser <janjust AT nikhef DOT nl>)
+(Jeff Templon < templon _AT_ nikhef _DOT_ nl > and Jan Just Keijser < janjust _AT_ nikhef _DOT_ nl >)
 
 fixed
 
 16. Delay for rsync command - aka counter for inode events
 
 Fixed
-Based on Bryan's question - Bryan Seitz < seitz AT bsd-unix DOT net >
+Based on Bryan's question - Bryan Seitz < seitz _AT_ bsd-unix _DOT_ net >
+
+17. Add ssh port parameter
+
+Fixed
+Based on Fred's question - Fred Warren < fred _DOT_ warren _AT_ gmail _DOT_ com >
 
 Not Fixed:
 ----------
 04: The optparse module is deprecated since version 2.7
 
 05: Split Configuration from main code file
+
+06: Test pirsyncd.Py3k functionality
+
+07. Check for deprecated functions and modules
+
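Item 04 above notes that optparse is deprecated from Python 2.7 onwards. A rough sketch of how the same options could be declared with argparse (option and variable names mirror the optparse-based parser used later in this commit; this sketch is illustrative only):

import argparse

parser = argparse.ArgumentParser(prog="pirsyncd")
parser.add_argument("--host", dest="DEST_SERVER", default=False,
                    help="Enable rsync over ssh to this destination server.")
parser.add_argument("--port", dest="DEST_PORT", default="22",
                    help="Define the destination ssh port, only with --host")
options = parser.parse_args(["--host", "ssh.example.com", "--port", "2222"])
print(options.DEST_SERVER, options.DEST_PORT)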
 directories (local or remote). This is a poor man's mirroring or an alternative
 (not so) real data replication mechanism and it is based on Pyinotify.
 
-Copyright (c) Evaggelos Balaskas (2009, 2010)
+Copyright (c) Evaggelos Balaskas (2009, 2010, 2011)
 """
 
 import os
     sys.exit(1)
 
 # Version of pirsyncd
-VERSION = "20100908"
+VERSION = "20110412"
 
 # Default Directories (Source & Destination Folder Paths)
 SOURCE_PATH = "/tmp/data/"
 
 # destination server (rsync over ssh)
 DEST_SERVER = False
+DEST_PORT   = "22"
 
 # destination server runs: rsync --daemon
 RSYNC_DAEMON    = False
 # Usage message for help
 USAGE_MSG = "pirsyncd \
 [ -s <source> ] [ -d <destination> ] [ -p <pidfile> ] \n\t\t\
-[ -r <rsync> ] [ -l <log>] [ --nolog] [ --debug=debug_log ] \n\t\t\
-[ --nodebug ] [ --rsync_append ] [ --rsync_v2 ] \n\t\t\
-[ --host=dest_server ] [ --max_size ] [ --min_size ] \n\t\t\
+[ -r <rsync> ] [ -l <log>] [ --nolog] \n\t\t\
+[ --debug=debug_log ] [ --nodebug ] \n\t\t\
+[ --rsync_append ] [ --rsync_v2 ] \n\t\t\
+[ --host=dest_server ] [ --port=dest_port ] \n\t\t\
+[ --max_size ] [ --min_size ] \n\t\t\
 [ --rsync_daemon=rsync_daemon --rsync_pass_file=rsync_pass_file ]\n\t\t\
-[ --exclude_pattern=pattern] [ -h <help> ] [ -f <foreground> ]\n\t\t\
-[ -v <version> ] [ -k stop/status ]"
+[ --exclude_pattern=pattern] [ -f <foreground> ]\n\t\t\
+[ -h <help> ] [ -v <version> ] [ -k stop/status ]"
 
 # Example messages for help
 EXAMPLE_MSG = """
 
 ./pirsyncd -s /tmp/data/ -d /tmp/data2/ --host ssh.example.com
 
+./pirsyncd --host ssh.example.com --port 2222
+
 ./pirsyncd --rsync_daemon=rsync.example.com::data --rsync_pass_file=/etc/rsyncd.secrets
 
 ./pirsyncd --max_size 100m
 
 ./pirsyncd -p /tmp/.pirsyncd.pid2 -k stop
 
-Read FAQ: http://ebalaskas.gr/wiki/pirsyncd
-Remember: Directories must have "/" at the end (eg. /tmp/data/)"""
+Read FAQ: http://ebalaskas.gr/wiki/pirsyncd"""
 
 def check_config(check = True):
     """ Validate the configuration of pirsyncd"""
         default=DEST_SERVER,
         help="Enable rsync over ssh. You must define a destination server \
 (IP Address) with passwordless connection.")
+    parser.add_option("--port",
+        action="store",
+        type="string",
+        dest="DEST_PORT",
+        default=DEST_PORT,
+        help="Define the destination ssh port, only with --host")
     parser.add_option("--rsync_daemon",
         action="store",
         type="string",
 def version():
     """ Print version info """
     print "You are running pirsyncd: " + VERSION
-    print "Copyright (c) Evaggelos Balaskas (2009, 2010)"
+    print "Copyright (c) Evaggelos Balaskas (2009, 2010, 2011)"
     print "Licenced under GNU General Public License, version 2\n"
 
 def my_version(option, opt, value, parser):
     """ Configuration of RSYNC_COMMAND upon user arguments & 
         watching for Kernel Inotify Events on source folder """
 
-    global COUNTER, RSYNC_ARGS, RSYNC_COMMAND
+    global COUNTER, RSYNC_ARGS, RSYNC_COMMAND, SOURCE_PATH, DEST_PATH, DEST_PORT
     # Configuration settings are in order
     if check_config():
+        # Check for trailing slash (/) on source & destination path
+        if SOURCE_PATH[-1] != "/" :
+            SOURCE_PATH += "/"
+        if DEST_PATH[-1] != "/" :
+            DEST_PATH += "/"
         if ( EXCLUDE_PAT != '' ):
             RSYNC_ARGS = RSYNC_ARGS + " --exclude=" + EXCLUDE_PAT
         if ( RSYNC_APPEND != 'False' ):
         else :
             rsync_logfile = "--log-file=" + RSYNC_LOG
         if ( DEST_SERVER != False ):
-            # define the rsync command
+            # Define the ssh port; it has to be numeric and between 1 and 65535
+            port = 22
+            DEST_PORT = re.sub ( r"\D", "", DEST_PORT )
+            if len ( DEST_PORT ) > 0 : 
+                if int ( DEST_PORT ) > 0  and int ( DEST_PORT ) < 65536 :
+                    port = int ( DEST_PORT )
+            # Define the rsync command
             RSYNC_COMMAND = RSYNC_PATH + " " + RSYNC_ARGS + " " + rsync_logfile\
-+ " " + SOURCE_PATH + " " + "-e ssh " + DEST_SERVER + ":" + DEST_PATH
++ " " + SOURCE_PATH + " " + "-e 'ssh -p " +str ( port ) + "' " + DEST_SERVER\
++ ":" + DEST_PATH
         elif RSYNC_DAEMON != False and RSYNC_PASS_FILE != False :
             # define rsync command against rsync server
             RSYNC_COMMAND = RSYNC_PATH + " " + RSYNC_ARGS + " " + rsync_logfile\
 def main( options = getarguments() ):
     """ Declaration variables with user arguments """
     
-    global FOREGROUND, RSYNC_APPEND, RSYNC_LOG, DEST_SERVER, DEBUG_LOG, \
-SOURCE_PATH, RSYNC_PASS_FILE, RSYNC_V2, RSYNC_PATH, MIN_SIZE, RSYNC_DAEMON, \
-PIDFILE, PIDFILE, MAX_SIZE, DEST_PATH, EXCLUDE_PAT
+    global FOREGROUND, RSYNC_APPEND, RSYNC_LOG, DEST_SERVER, DEST_PORT, \
+DEBUG_LOG, SOURCE_PATH, RSYNC_PASS_FILE, RSYNC_V2, RSYNC_PATH, MIN_SIZE, \
+RSYNC_DAEMON, PIDFILE, PIDFILE, MAX_SIZE, DEST_PATH, EXCLUDE_PAT
 
     # Re-Define variables
     SOURCE_PATH     = options.SOURCE_PATH
     DEST_PATH       = options.DEST_PATH
     DEST_SERVER     = options.DEST_SERVER
+    DEST_PORT       = options.DEST_PORT
     RSYNC_DAEMON    = options.RSYNC_DAEMON
     RSYNC_PASS_FILE = options.RSYNC_PASS_FILE
     RSYNC_PATH      = options.RSYNC_PATH
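For reference, a minimal standalone sketch of the ssh port sanitisation performed in the hunks above; the function name and the sample values are illustrative, not part of the commit:

import re

def sanitize_ssh_port(value, default=22):
    # Strip every non-digit character; fall back to the default when nothing
    # usable remains or the number is outside the valid 1-65535 range.
    digits = re.sub(r"\D", "", str(value))
    if digits and 0 < int(digits) < 65536:
        return int(digits)
    return default

assert sanitize_ssh_port("2222") == 2222
assert sanitize_ssh_port("abc") == 22      # no digits: keep the default
assert sanitize_ssh_port("70000") == 22    # out of range: keep the default

With --host ssh.example.com --port 2222 the ssh transport part of RSYNC_COMMAND then becomes: -e 'ssh -p 2222' ssh.example.com:<destination path>.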
 directories (local or remote). This is a poor man's mirroring or an alternative
 (not so) real data replication mechanism and it is based on Pyinotify.
 
-Copyright (c) Evaggelos Balaskas (2009, 2010)
+Copyright (c) Evaggelos Balaskas (2009, 2010, 2011)
 """
 
 import os
 import re
 
 try:
-    import pyinotify
-except ImportError, strerror:
-    print "\nThere was an error: %s\n" % (strerror)
-    print "You have to install pyinotify, http://trac.dbzteam.org/pyinotify\n"
+    import pyinotify3
+except ImportError as strerror:
+    print("\nThere was an error: %s\n" % (strerror))
+    print("You have to install pyinotify, http://trac.dbzteam.org/pyinotify\n")
     sys.exit(1)
 
 # Version of pirsyncd
-VERSION = "20100908"
+VERSION = "20110412"
 
 # Default Directories (Source & Destination Folder Paths)
 SOURCE_PATH = "/tmp/data/"
 
 # destination server (rsync over ssh)
 DEST_SERVER = False
+DEST_PORT   = "22"
 
 # destination server runs: rsync --daemon
 RSYNC_DAEMON    = False
 # Usage message for help
 USAGE_MSG = "pirsyncd \
 [ -s <source> ] [ -d <destination> ] [ -p <pidfile> ] \n\t\t\
-[ -r <rsync> ] [ -l <log>] [ --nolog] [ --debug=debug_log ] \n\t\t\
-[ --nodebug ] [ --rsync_append ] [ --rsync_v2 ] \n\t\t\
-[ --host=dest_server ] [ --max_size ] [ --min_size ] \n\t\t\
+[ -r <rsync> ] [ -l <log>] [ --nolog] \n\t\t\
+[ --debug=debug_log ] [ --nodebug ] \n\t\t\
+[ --rsync_append ] [ --rsync_v2 ] \n\t\t\
+[ --host=dest_server ] [ --port=dest_port ] \n\t\t\
+[ --max_size ] [ --min_size ] \n\t\t\
 [ --rsync_daemon=rsync_daemon --rsync_pass_file=rsync_pass_file ]\n\t\t\
-[ --exclude_pattern=pattern] [ -h <help> ] [ -f <foreground> ]\n\t\t\
-[ -v <version> ] [ -k stop/status ]"
+[ --exclude_pattern=pattern] [ -f <foreground> ]\n\t\t\
+[ -h <help> ] [ -v <version> ] [ -k stop/status ]"
 
 # Example messages for help
 EXAMPLE_MSG = """
 
 ./pirsyncd -s /tmp/data/ -d /tmp/data2/ --host ssh.example.com
 
+./pirsyncd --host ssh.example.com --port 2222
+
 ./pirsyncd --rsync_daemon=rsync.example.com::data --rsync_pass_file=/etc/rsyncd.secrets
 
 ./pirsyncd --max_size 100m
 
 ./pirsyncd -p /tmp/.pirsyncd.pid2 -k stop
 
-Read FAQ: http://ebalaskas.gr/wiki/pirsyncd
-Remember: Directories must have "/" at the end (eg. /tmp/data/)"""
+Read FAQ: http://ebalaskas.gr/wiki/pirsyncd"""
 
 def check_config(check = True):
     """ Validate the configuration of pirsyncd"""
     if os.path.exists(PIDFILE):
-        print "There is already a pid file " + PIDFILE
-        print "Perhaps there is a running pirsyncd instance already!"
+        print("There is already a pid file " + PIDFILE)
+        print("Perhaps there is a running pirsyncd instance already!")
         check = False
     if not os.path.exists(SOURCE_PATH):
-        print "There isn't any (source directory): " + SOURCE_PATH
+        print("There isn't any (source directory): " + SOURCE_PATH)
         check = False
     if not os.path.exists(RSYNC_PATH):
-        print "There isn't any : " + RSYNC_PATH
+        print("There isn't any : " + RSYNC_PATH)
         check = False
     # Both RSYNC_DAEMON & RSYNC_PASS_FILE arguments must have values.
     if RSYNC_DAEMON != False :
         if RSYNC_PASS_FILE is False :
-            print "--rsync_daemon is enabled only with --rsync_pass_file. \
-fallback to sync on local destination"
+            print("--rsync_daemon is enabled only with --rsync_pass_file. \
+fallback to sync on local destination")
             check = False
     if RSYNC_PASS_FILE != False :
         if RSYNC_DAEMON is False :
-            print "--rsync_pass_file is enabled only with --rsync_daemon. \
-fallback to sync on local destination"
+            print("--rsync_pass_file is enabled only with --rsync_daemon. \
+fallback to sync on local destination")
             check = False
     if DEST_SERVER != False :
         try:
             socket.inet_aton(socket.gethostbyname(DEST_SERVER))
         except socket.error:
-            print "This isnt a valid Host Name : " + DEST_SERVER
+            print("This isnt a valid Host Name : " + DEST_SERVER)
             check = False
     elif RSYNC_DAEMON != False and RSYNC_PASS_FILE != False :
         # Valid is something like this: rsync.example.com::data
             try:
                 socket.inet_aton(socket.gethostbyname(rsd.group(1)))
             except socket.error:
-                print "This isnt a valid Host Name : " + rsd.group(1)
+                print("This isnt a valid Host Name : " + rsd.group(1))
                 check = False
         else :
-            print "rsync_daemon: " +  RSYNC_DAEMON + ", isnt a valid argument. \
-You must type something like this: rsync.example.com::data"
+            print("rsync_daemon: " +  RSYNC_DAEMON + ", isnt a valid argument. \
+You must type something like this: rsync.example.com::data")
             check = False
         if not os.path.exists(RSYNC_PASS_FILE):
-            print "There isn't any : " + RSYNC_PASS_FILE
+            print("There isn't any : " + RSYNC_PASS_FILE)
             check = False
     else :
         if not os.path.exists(DEST_PATH):
-            print "There isn't any (destination directory): " + DEST_PATH
+            print("There isn't any (destination directory): " + DEST_PATH)
             check = False
     return check
 
-class PTmp(pyinotify.ProcessEvent):
-    """ Handles the pyinotify events """
+class PTmp(pyinotify3.ProcessEvent):
+    """ Handles the pyinotify3 events """
 
     def process_default(self, event):
         """ Default procudure is to debug and sychronize """
         default=DEST_SERVER,
         help="Enable rsync over ssh. You must define a destination server \
 (IP Address) with passwordless connection.")
+    parser.add_option("--port",
+        action="store",
+        type="string",
+        dest="DEST_PORT",
+        default=DEST_PORT,
+        help="Define the destination ssh port, only with --host")
     parser.add_option("--rsync_daemon",
         action="store",
         type="string",
         type="string",
         dest="EXCLUDE_PAT",
         default=EXCLUDE_PAT,
-        help="Exclude pattern from rsync and pyinotify watch.")
+        help="Exclude pattern from rsync and pyinotify3 watch.")
     parser.add_option("--examples",
         action="callback",
         callback=my_examples,
 
 def version():
     """ Print version info """
-    print "You are running pirsyncd: " + VERSION
-    print "Copyright (c) Evaggelos Balaskas (2009, 2010)"
-    print "Licenced under GNU General Public License, version 2\n"
+    print("You are running pirsyncd: " + VERSION)
+    print("Copyright (c) Evaggelos Balaskas (2009, 2010, 2011)")
+    print("Licenced under GNU General Public License, version 2\n")
 
 def my_version(option, opt, value, parser):
     """ Print version number & exit """
 
 def my_examples(option, opt, value, parser):
     """ print PIrsynD Usage Examples & exit """
-    print EXAMPLE_MSG
+    print(EXAMPLE_MSG)
     sys.exit()
 
 def my_action(option, opt, value, parser):
                 pid = line.strip()
             try:
                 os.kill(int(pid), signal.SIGKILL)
-                print "pirsyncd with PID: " + pid + " killed successfully"
-            except OSError, ose:
-                print "\nThere was an error: " + ose + "\n"
+                print("pirsyncd with PID: " + pid + " killed successfully")
+            except OSError as ose:
+                print("\nThere was an error: " + ose + "\n")
             try:
                 os.remove(PIDFILE)
-            except OSError, ose:
-                print "\nThere was an error: " + ose + "\n"
+            except OSError as ose:
+                print("\nThere was an error: " + ose + "\n")
         else:
-            print "There is no pidfile. Seems there is no running pirsyncd \
-instance."
+            print("There is no pidfile. Seems there is no running pirsyncd \
+instance.")
     elif value == "status":
         if os.path.exists(PIDFILE):
             for line in fileinput.input(PIDFILE):
                 pid = line.strip()
             try:
                 os.kill(int(pid), 0)
-                print "There is a running instance of pirsyncd, with PID: " \
-+ pid
-            except OSError, ose:
-                print "\nSomething is wrong! There is a pidfile: " + PIDFILE + \
-" \nbut there isnt any process with PID: " + pid + "\nError: " + ose + "\n"
+                print("There is a running instance of pirsyncd, with PID: " \
++ pid)
+            except OSError as ose:
+                print("\nSomething is wrong! There is a pidfile: " + PIDFILE + \
+" \nbut there isnt any process with PID: " + pid + "\nError: " + ose + "\n")
         else:
-            print "There is no pidfile. Seems there is no running pirsyncd \
-instance."
+            print("There is no pidfile. Seems there is no running pirsyncd \
+instance.")
     else:
-        print "There isnt any action for: " + value + "\nTry ./pirsyncd -h\n"
+        print("There isnt any action for: " + value + "\nTry ./pirsyncd -h\n")
     sys.exit()
 
 def mirror():
         try:
             retcode = subprocess.call(RSYNC_COMMAND, shell=True)
             if retcode < 0:
-                print "Child was terminated by signal", -retcode
-        except OSError, ose:
-            print "Execution failed: ", ose
+                print("Child was terminated by signal", -retcode)
+        except OSError as ose:
+            print("Execution failed: ", ose)
         logging.debug('pirsyncd: ' + RSYNC_COMMAND)
         COUNTER = INODES - 1 
         logging.debug("COUNTER: " + str(COUNTER))
     """ Configuration of RSYNC_COMMAND upon user arguments & 
         watching for Kernel Inotify Events on source folder """
 
-    global COUNTER, RSYNC_ARGS, RSYNC_COMMAND
+    global COUNTER, RSYNC_ARGS, RSYNC_COMMAND, SOURCE_PATH, DEST_PATH, DEST_PORT
     # Configuration settings are in order
     if check_config():
+        # Check for trailing slash (/) on source & destination path
+        if SOURCE_PATH[-1] != "/" :
+            SOURCE_PATH += "/"
+        if DEST_PATH[-1] != "/" :
+            DEST_PATH += "/"
         if ( EXCLUDE_PAT != '' ):
             RSYNC_ARGS = RSYNC_ARGS + " --exclude=" + EXCLUDE_PAT
         if ( RSYNC_APPEND != 'False' ):
         else :
             rsync_logfile = "--log-file=" + RSYNC_LOG
         if ( DEST_SERVER != False ):
-            # define the rsync command
+            # Define the ssh port; it has to be numeric and between 1 and 65535
+            port = 22
+            DEST_PORT = re.sub ( r"\D", "", DEST_PORT )
+            if len ( DEST_PORT ) > 0 : 
+                if int ( DEST_PORT ) > 0  and int ( DEST_PORT ) < 65536 :
+                    port = int ( DEST_PORT )
+            # Define the rsync command
             RSYNC_COMMAND = RSYNC_PATH + " " + RSYNC_ARGS + " " + rsync_logfile\
-+ " " + SOURCE_PATH + " " + "-e ssh " + DEST_SERVER + ":" + DEST_PATH
++ " " + SOURCE_PATH + " " + "-e 'ssh -p " +str ( port ) + "' " + DEST_SERVER\
++ ":" + DEST_PATH
         elif RSYNC_DAEMON != False and RSYNC_PASS_FILE != False :
             # define rsync command against rsync server
             RSYNC_COMMAND = RSYNC_PATH + " " + RSYNC_ARGS + " " + rsync_logfile\
             RSYNC_COMMAND = RSYNC_PATH + " " + RSYNC_ARGS + " " + rsync_logfile\
 + " " + SOURCE_PATH + " " + DEST_PATH
 
-        print "pirsyncd is starting, waiting for daemonization..."
+        print("pirsyncd is starting, waiting for daemonization...")
         # Try to mirror for the first time, 
         # perhaps inotify events never occurs to watched directory
         COUNTER += 1
         mirror()
         # monitor for events
-        wmg = pyinotify.WatchManager()
-        mask =    pyinotify.IN_ATTRIB | pyinotify.IN_CLOSE_WRITE | \
-                  pyinotify.IN_CREATE | pyinotify.IN_DELETE | \
-                  pyinotify.IN_MODIFY | pyinotify.IN_MOVED_TO | \
-                  pyinotify.IN_MOVED_FROM | pyinotify.IN_DELETE_SELF
+        wmg = pyinotify3.WatchManager()
+        mask =    pyinotify3.IN_ATTRIB | pyinotify3.IN_CLOSE_WRITE | \
+                  pyinotify3.IN_CREATE | pyinotify3.IN_DELETE | \
+                  pyinotify3.IN_MODIFY | pyinotify3.IN_MOVED_TO | \
+                  pyinotify3.IN_MOVED_FROM | pyinotify3.IN_DELETE_SELF
         ptm = PTmp()
-        notifier = pyinotify.Notifier(wmg, ptm, read_freq=FREQ)
+        notifier = pyinotify3.Notifier(wmg, ptm, read_freq=FREQ)
         try:
             if ( EXCLUDE_PAT != '' ):
                 wmg.add_watch( SOURCE_PATH, mask, rec=True, auto_add=True, \
-exclude_filter=pyinotify.ExcludeFilter(["^" + SOURCE_PATH + EXCLUDE_PAT + "*"]))
+exclude_filter=pyinotify3.ExcludeFilter(["^" + SOURCE_PATH + EXCLUDE_PAT + "*"]))
             else:
                 wmg.add_watch( SOURCE_PATH, mask, rec=True, auto_add=True )
-        except pyinotify.WatchManagerError, err:
-            print err, err.wmd
+        except pyinotify3.WatchManagerError as err:
+            print(err, err.wmd)
 
         # Daemonize pirsyncd
         if ( FOREGROUND != 'False' ):
-            print "Daemon is ready! pirsyncd runs in foreground (ctrl + c) \
-to stop the daemon)\n\n"
+            print("Daemon is ready! pirsyncd runs in foreground (ctrl + c) \
+to stop the daemon)\n\n")
             # Daemonize in foreground
             notifier.loop ( pid_file = PIDFILE )
         else:
-            print "Daemon is ready! PIryncD runs in background\ntry \
-./pirsyncd -k status to see the running PID.\n"
+            print("Daemon is ready! PIryncD runs in background\ntry \
+./pirsyncd -k status to see the running PID.\n")
             # Daemonize in background
             notifier.loop(daemonize=True, pid_file=PIDFILE)
 
     # Configuration Settings arent in order, print this information message
     else:
-        print "Please check your configuration settings. Try this: \
-./pirsyncd -h\n"
+        print("Please check your configuration settings. Try this: \
+./pirsyncd -h\n")
         sys.exit(1)
 
 def main( options = getarguments() ):
     """ Declaration variables with user arguments """
     
-    global FOREGROUND, RSYNC_APPEND, RSYNC_LOG, DEST_SERVER, DEBUG_LOG, \
-SOURCE_PATH, RSYNC_PASS_FILE, RSYNC_V2, RSYNC_PATH, MIN_SIZE, RSYNC_DAEMON, \
-PIDFILE, PIDFILE, MAX_SIZE, DEST_PATH, EXCLUDE_PAT
+    global FOREGROUND, RSYNC_APPEND, RSYNC_LOG, DEST_SERVER, DEST_PORT, \
+DEBUG_LOG, SOURCE_PATH, RSYNC_PASS_FILE, RSYNC_V2, RSYNC_PATH, MIN_SIZE, \
+RSYNC_DAEMON, PIDFILE, PIDFILE, MAX_SIZE, DEST_PATH, EXCLUDE_PAT
 
     # Re-Define variables
     SOURCE_PATH     = options.SOURCE_PATH
     DEST_PATH       = options.DEST_PATH
     DEST_SERVER     = options.DEST_SERVER
+    DEST_PORT       = options.DEST_PORT
     RSYNC_DAEMON    = options.RSYNC_DAEMON
     RSYNC_PASS_FILE = options.RSYNC_PASS_FILE
     RSYNC_PATH      = options.RSYNC_PATH
 #!/usr/bin/env python
 
 # pyinotify.py - python interface to inotify
-# Copyright (c) 2010 Sebastien Martini <seb@dbzteam.org>
+# Copyright (c) 2005-2011 Sebastien Martini <seb@dbzteam.org>
 #
 # Permission is hereby granted, free of charge, to any person obtaining a copy
 # of this software and associated documentation files (the "Software"), to deal
         @param version: Current Python version
         @type version: string
         """
-        PyinotifyError.__init__(self,
-                                ('Python %s is unsupported, requires '
-                                 'at least Python 2.4') % version)
-
-
-class UnsupportedLibcVersionError(PyinotifyError):
-    """
-    Raised when libc couldn't be loaded or when inotify functions werent
-    provided.
-    """
-    def __init__(self):
-        err = 'libc does not provide required inotify support'
-        PyinotifyError.__init__(self, err)
+        err = 'Python %s is unsupported, requires at least Python 2.4'
+        PyinotifyError.__init__(self, err % version)
 
 
 # Check Python version
 import sys
-if sys.version < '2.4':
+if sys.version_info < (2, 4):
     raise UnsupportedPythonVersionError(sys.version)
 
 
 from collections import deque
 from datetime import datetime, timedelta
 import time
-import fnmatch
 import re
-import ctypes
-import ctypes.util
 import asyncore
 import glob
 
 except ImportError:
     pass  # Will fail on Python 2.4 which has reduce() builtin anyway.
 
+try:
+    import ctypes
+    import ctypes.util
+except ImportError:
+    ctypes = None
+
+try:
+    import inotify_syscalls
+except ImportError:
+    inotify_syscalls = None
+
+
 __author__ = "seb@dbzteam.org (Sebastien Martini)"
 
-__version__ = "0.9.0"
+__version__ = "0.9.1"
 
 __metaclass__ = type  # Use new-style classes by default
 
 COMPATIBILITY_MODE = False
 
 
-# Load libc
-LIBC = None
-strerrno = None
+class InotifyBindingNotFoundError(PyinotifyError):
+    """
+    Raised when no inotify support could be found.
+    """
+    def __init__(self):
+        err = "Couldn't find any inotify binding"
+        PyinotifyError.__init__(self, err)
 
-def load_libc():
-    global strerrno
-    global LIBC
 
-    libc = None
-    try:
-        libc = ctypes.util.find_library('c')
-    except (OSError, IOError):
-        pass  # Will attemp to load it with None anyway.
+class INotifyWrapper:
+    """
+    Abstract class wrapping access to inotify's functions. This is an
+    internal class.
+    """
+    @staticmethod
+    def create():
+        # First, try to use ctypes.
+        if ctypes:
+            inotify = _CtypesLibcINotifyWrapper()
+            if inotify.init():
+                return inotify
+        # Second, see if C extension is compiled.
+        if inotify_syscalls:
+            inotify = _INotifySyscallsWrapper()
+            if inotify.init():
+                return inotify
 
-    if sys.version_info[0] >= 2 and sys.version_info[1] >= 6:
-        LIBC = ctypes.CDLL(libc, use_errno=True)
-        def _strerrno():
-            code = ctypes.get_errno()
-            return ' Errno=%s (%s)' % (os.strerror(code), errno.errorcode[code])
-        strerrno = _strerrno
-    else:
-        LIBC = ctypes.CDLL(libc)
-        strerrno = lambda : ''
+    def get_errno(self):
+        """
+        Return None if no errno code is available.
+        """
+        return self._get_errno()
 
-    # Check that libc has needed functions inside.
-    if (not hasattr(LIBC, 'inotify_init') or
-        not hasattr(LIBC, 'inotify_add_watch') or
-        not hasattr(LIBC, 'inotify_rm_watch')):
-        raise UnsupportedLibcVersionError()
+    def str_errno(self):
+        code = self.get_errno()
+        if code is None:
+            return 'Errno: no errno support'
+        return 'Errno=%s (%s)' % (os.strerror(code), errno.errorcode[code])
 
-load_libc()
+    def inotify_init(self):
+        return self._inotify_init()
 
+    def inotify_add_watch(self, fd, pathname, mask):
+        # Unicode strings must be encoded to string prior to calling this
+        # method.
+        assert isinstance(pathname, str)
+        return self._inotify_add_watch(fd, pathname, mask)
 
-class PyinotifyLogger(logging.Logger):
+    def inotify_rm_watch(self, fd, wd):
+        return self._inotify_rm_watch(fd, wd)
+
+
+class _INotifySyscallsWrapper(INotifyWrapper):
+    def __init__(self):
+        # Stores the last errno value.
+        self._last_errno = None
+
+    def init(self):
+        assert inotify_syscalls
+        return True
+
+    def _get_errno(self):
+        return self._last_errno
+
+    def _inotify_init(self):
+        try:
+            fd = inotify_syscalls.inotify_init()
+        except IOError, err:
+            self._last_errno = err.errno
+            return -1
+        return fd
+
+    def _inotify_add_watch(self, fd, pathname, mask):
+        try:
+            wd = inotify_syscalls.inotify_add_watch(fd, pathname, mask)
+        except IOError, err:
+            self._last_errno = err.errno
+            return -1
+        return wd
+
+    def _inotify_rm_watch(self, fd, wd):
+        try:
+            ret = inotify_syscalls.inotify_rm_watch(fd, wd)
+        except IOError, err:
+            self._last_errno = err.errno
+            return -1
+        return ret
+
+
+class _CtypesLibcINotifyWrapper(INotifyWrapper):
+    def __init__(self):
+        self._libc = None
+        self._get_errno_func = None
+
+    def init(self):
+        assert ctypes
+        libc_name = None
+        try:
+            libc_name = ctypes.util.find_library('c')
+        except (OSError, IOError):
+            pass  # Will attempt to load it with None anyway.
+
+        if sys.version_info >= (2, 6):
+            self._libc = ctypes.CDLL(libc_name, use_errno=True)
+            self._get_errno_func = ctypes.get_errno
+        else:
+            self._libc = ctypes.CDLL(libc_name)
+            try:
+                location = self._libc.__errno_location
+                location.restype = ctypes.POINTER(ctypes.c_int)
+                self._get_errno_func = lambda: location().contents.value
+            except AttributeError:
+                pass
+
+        # Eventually check that libc has needed inotify bindings.
+        if (not hasattr(self._libc, 'inotify_init') or
+            not hasattr(self._libc, 'inotify_add_watch') or
+            not hasattr(self._libc, 'inotify_rm_watch')):
+            return False
+        return True
+
+    def _get_errno(self):
+        if self._get_errno_func is not None:
+            return self._get_errno_func()
+        return None
+
+    def _inotify_init(self):
+        assert self._libc is not None
+        return self._libc.inotify_init()
+
+    def _inotify_add_watch(self, fd, pathname, mask):
+        assert self._libc is not None
+        pathname = ctypes.create_string_buffer(pathname)
+        return self._libc.inotify_add_watch(fd, pathname, mask)
+
+    def _inotify_rm_watch(self, fd, wd):
+        assert self._libc is not None
+        return self._libc.inotify_rm_watch(fd, wd)
+
+    def _sysctl(self, *args):
+        assert self._libc is not None
+        return self._libc.sysctl(*args)
+
+
+class _PyinotifyLogger(logging.Logger):
     """
     Pyinotify logger used for logging unicode strings.
     """
     def makeRecord(self, name, level, fn, lno, msg, args, exc_info, func=None,
                    extra=None):
-        rv = UnicodeLogRecord(name, level, fn, lno, msg, args, exc_info, func)
+        rv = _UnicodeLogRecord(name, level, fn, lno, msg, args, exc_info, func)
         if extra is not None:
             for key in extra:
                 if (key in ["message", "asctime"]) or (key in rv.__dict__):
         return rv
 
 
-class UnicodeLogRecord(logging.LogRecord):
+class _UnicodeLogRecord(logging.LogRecord):
     def __init__(self, name, level, pathname, lineno,
                  msg, args, exc_info, func=None):
         py_version = sys.version_info
 # Logging
 def logger_init():
     """Initialize logger instance."""
-    logging.setLoggerClass(PyinotifyLogger)
+    logging.setLoggerClass(_PyinotifyLogger)
     log = logging.getLogger("pyinotify")
     console_handler = logging.StreamHandler()
     console_handler.setFormatter(
                      'max_user_watches': 2,
                      'max_queued_events': 3}
 
-    def __init__(self, attrname):
+    def __init__(self, attrname, inotify_wrapper):
+        # FIXME: right now only supporting ctypes
+        assert ctypes
+        self._attrname = attrname
+        self._inotify_wrapper = inotify_wrapper
         sino = ctypes.c_int * 3
-        self._attrname = attrname
         self._attr = sino(5, 20, SysCtlINotify.inotify_attrs[attrname])
 
+    @staticmethod
+    def create(attrname):
+        """
+        Factory method instantiating and returning the right wrapper.
+        """
+        # FIXME: right now only supporting ctypes
+        if ctypes is None:
+            return None
+        inotify_wrapper = _CtypesLibcINotifyWrapper()
+        if not inotify_wrapper.init():
+            return None
+        return SysCtlINotify(attrname, inotify_wrapper)
+
     def get_val(self):
         """
         Gets attribute's value.
         """
         oldv = ctypes.c_int(0)
         size = ctypes.c_int(ctypes.sizeof(oldv))
-        LIBC.sysctl(self._attr, 3,
-                    ctypes.c_voidp(ctypes.addressof(oldv)),
-                    ctypes.addressof(size),
-                    None, 0)
+        self._inotify_wrapper._sysctl(self._attr, 3,
+                                      ctypes.c_voidp(ctypes.addressof(oldv)),
+                                      ctypes.addressof(size),
+                                      None, 0)
         return oldv.value
 
     def set_val(self, nval):
         sizeo = ctypes.c_int(ctypes.sizeof(oldv))
         newv = ctypes.c_int(nval)
         sizen = ctypes.c_int(ctypes.sizeof(newv))
-        LIBC.sysctl(self._attr, 3,
-                    ctypes.c_voidp(ctypes.addressof(oldv)),
-                    ctypes.addressof(sizeo),
-                    ctypes.c_voidp(ctypes.addressof(newv)),
-                    ctypes.addressof(sizen))
+        self._inotify_wrapper._sysctl(self._attr, 3,
+                                      ctypes.c_voidp(ctypes.addressof(oldv)),
+                                      ctypes.addressof(sizeo),
+                                      ctypes.c_voidp(ctypes.addressof(newv)),
+                                      ctypes.addressof(sizen))
 
     value = property(get_val, set_val)
 
         return '<%s=%d>' % (self._attrname, self.get_val())
 
 
-# Singleton instances
+# Inotify's variables
+#
+# FIXME: currently these variables are only accessible when ctypes is used,
+#        otherwise they are set to None.
 #
 # read: myvar = max_queued_events.value
 # update: max_queued_events.value = 42
 #
 for attrname in ('max_queued_events', 'max_user_instances', 'max_user_watches'):
-    globals()[attrname] = SysCtlINotify(attrname)
+    globals()[attrname] = SysCtlINotify.create(attrname)
 
 
 class EventsCodes:
                                 rec=False, auto_add=watch_.auto_add,
                                 exclude_filter=watch_.exclude_filter)
 
-                # Trick to handle mkdir -p /t1/t2/t3 where t1 is watched and
-                # t2 and t3 are created.
-                # Since the directory is new, then everything inside it
-                # must also be new.
+                # Trick to handle mkdir -p /d1/d2/t3 where d1 is watched and
+                # d2 and t3 (directory or file) are created.
+                # Since the directory d2 is new, then everything inside it must
+                # also be new.
                 created_dir_wd = addw_ret.get(created_dir)
-                if (created_dir_wd is not None) and created_dir_wd > 0:
+                if (created_dir_wd is not None) and (created_dir_wd > 0):
                     for name in os.listdir(created_dir):
                         inner = os.path.join(created_dir, name)
-                        if (os.path.isdir(inner) and
-                            self._watch_manager.get_wd(inner) is None):
-                            # Generate (simulate) creation event for sub
-                            # directories.
-                            rawevent = _RawEvent(created_dir_wd,
-                                                 IN_CREATE | IN_ISDIR,
-                                                 0, name)
-                            self._notifier.append_event(rawevent)
+                        if self._watch_manager.get_wd(inner) is not None:
+                            continue
+                        # Generate (simulate) creation events for sub-
+                        # directories and files.
+                        if os.path.isfile(inner):
+                            # symlinks are handled as files.
+                            flags = IN_CREATE
+                        elif os.path.isdir(inner):
+                            flags = IN_CREATE | IN_ISDIR
+                        else:
+                            # This path should not be taken.
+                            continue
+                        rawevent = _RawEvent(created_dir_wd, flags, 0, name)
+                        self._notifier.append_event(rawevent)
         return self.process_default(raw_event)
 
     def process_IN_MOVED_FROM(self, raw_event):
 
     def dump(self, filename):
         """
-        Dumps statistics to file |filename|.
+        Dumps statistics.
 
-        @param filename: pathname.
+        @param filename: filename where stats will be dumped, filename is
+                         created and must not exist prior to this call.
         @type filename: string
         """
-        file_obj = file(filename, 'wb')
-        try:
-            file_obj.write(str(self))
-        finally:
-            file_obj.close()
+        flags = os.O_WRONLY|os.O_CREAT|os.O_NOFOLLOW|os.O_EXCL
+        fd = os.open(filename, flags, 0600)
+        os.write(fd, str(self))
+        os.close(fd)
 
     def __str__(self, scale=45):
         stats = self._stats_copy()
         if self._coalesce:
             self._eventset.clear()
 
-    def __daemonize(self, pid_file=None, force_kill=False, stdin=os.devnull,
-                    stdout=os.devnull, stderr=os.devnull):
+    def __daemonize(self, pid_file=None, stdin=os.devnull, stdout=os.devnull,
+                    stderr=os.devnull):
         """
-        pid_file: file to which the pid will be written.
-        force_kill: if True kill the process associated to pid_file.
-        stdin, stdout, stderr: files associated to common streams.
+        @param pid_file: file where the pid will be written. If pid_file=None
+                         the pid is written to
+                         /var/run/<sys.argv[0]|pyinotify>.pid, if pid_file=False
+                         no pid_file is written.
+        @param stdin:
+        @param stdout:
+        @param stderr: files associated to common streams.
         """
         if pid_file is None:
             dirname = '/var/run/'
             basename = os.path.basename(sys.argv[0]) or 'pyinotify'
             pid_file = os.path.join(dirname, basename + '.pid')
 
-        if os.path.exists(pid_file):
-            fo = file(pid_file, 'rb')
-            try:
-                try:
-                    pid = int(fo.read())
-                except ValueError:
-                    pid = None
-                if pid is not None:
-                    try:
-                        os.kill(pid, 0)
-                    except OSError, err:
-                        if err.errno == errno.ESRCH:
-                            log.debug(err)
-                        else:
-                            log.error(err)
-                    else:
-                        if not force_kill:
-                            s = 'There is already a pid file %s with pid %d'
-                            raise NotifierError(s % (pid_file, pid))
-                        else:
-                            os.kill(pid, 9)
-            finally:
-                fo.close()
-
+        if pid_file != False and os.path.lexists(pid_file):
+            err = 'Cannot daemonize: pid file %s already exists.' % pid_file
+            raise NotifierError(err)
 
         def fork_daemon():
             # Adapted from Chad J. Schroeder's recipe
                 if (pid == 0):
                     # child
                     os.chdir('/')
-                    os.umask(0)
+                    os.umask(022)
                 else:
                     # parent 2
                     os._exit(0)
 
             fd_inp = os.open(stdin, os.O_RDONLY)
             os.dup2(fd_inp, 0)
-            fd_out = os.open(stdout, os.O_WRONLY|os.O_CREAT)
+            fd_out = os.open(stdout, os.O_WRONLY|os.O_CREAT, 0600)
             os.dup2(fd_out, 1)
-            fd_err = os.open(stderr, os.O_WRONLY|os.O_CREAT)
+            fd_err = os.open(stderr, os.O_WRONLY|os.O_CREAT, 0600)
             os.dup2(fd_err, 2)
 
         # Detach task
         fork_daemon()
 
         # Write pid
-        file_obj = file(pid_file, 'wb')
-        try:
-            file_obj.write(str(os.getpid()) + '\n')
-        finally:
-            file_obj.close()
-
-        atexit.register(lambda : os.unlink(pid_file))
+        if pid_file != False:
+            flags = os.O_WRONLY|os.O_CREAT|os.O_NOFOLLOW|os.O_EXCL
+            fd_pid = os.open(pid_file, flags, 0600)
+            os.write(fd_pid, str(os.getpid()) + '\n')
+            os.close(fd_pid)
+            # Register unlink function
+            atexit.register(lambda : os.unlink(pid_file))
 
 
     def _sleep(self, ref_time):
         @type daemonize: boolean
         @param args: Optional and relevant only if daemonize is True. Remaining
                      keyworded arguments are directly passed to daemonize see
-                     __daemonize() method.
+                     __daemonize() method. If pid_file=None or is set to a
+                     pathname the caller must ensure the file does not exist
+                     before this method is called otherwise an exception
+                     pyinotify.NotifierError will be raised. If pid_file=False
+                     it is still daemonized but the pid is not written in any
+                     file.
         @type args: various
         """
         if daemonize:
     Represent a watch, i.e. a file or directory being watched.
 
     """
+    __slots__ = ('wd', 'path', 'mask', 'proc_fun', 'auto_add',
+                 'exclude_filter', 'dir')
+
     def __init__(self, wd, path, mask, proc_fun, auto_add, exclude_filter):
         """
         Initializations.
                                   output_format.punctuation('='),
                                   output_format.field_value(getattr(self,
                                                                     attr))) \
-                      for attr in self.__dict__ if not attr.startswith('_')])
+                      for attr in self.__slots__ if not attr.startswith('_')])
 
         s = '%s%s %s %s' % (output_format.punctuation('<'),
                             output_format.class_name(self.__class__.__name__),
     def __init__(self, exclude_filter=lambda path: False):
         """
         Initialization: init inotify, init watch manager dictionary.
-        Raise OSError if initialization fails.
+        Raise OSError if initialization fails, raise InotifyBindingNotFoundError
+        if no inotify binding was found (through ctypes or from direct access to
+        syscalls).
 
         @param exclude_filter: boolean function, returns True if current
                                path must be excluded from being watched.
         """
         self._exclude_filter = exclude_filter
         self._wmd = {}  # watch dict key: watch descriptor, value: watch
-        self._fd = LIBC.inotify_init() # inotify's init, file descriptor
+
+        self._inotify_wrapper = INotifyWrapper.create()
+        if self._inotify_wrapper is None:
+            raise InotifyBindingNotFoundError()
+
+        self._fd = self._inotify_wrapper.inotify_init() # file descriptor
         if self._fd < 0:
-            err = 'Cannot initialize new instance of inotify%s' % strerrno()
-            raise OSError(err)
+            err = 'Cannot initialize new instance of inotify, %s'
+            raise OSError(err % self._inotify_wrapper.str_errno())
+
+    def close(self):
+        """
+        Close inotify's file descriptor, this action will also automatically
+        remove (i.e. stop watching) all its associated watch descriptors.
+        After a call to this method the WatchManager's instance becomes useless
+        and cannot be reused, a new instance must then be instantiated. It
+        makes sense to call this method in a few situations, for instance if
+        several independent WatchManagers must be instantiated or if all watches
+        must be removed and no other watches need to be added.
+        """
+        os.close(self._fd)
 
     def get_fd(self):
         """
         """
         Format path to its internal (stored in watch manager) representation.
         """
-        # Unicode strings are converted to byte strings, it seems to be
-        # required because LIBC.inotify_add_watch does not work well when
+        # Unicode strings are converted back to strings, because it seems
+        # that inotify_add_watch from ctypes does not work well when
         # it receives a ctypes.create_unicode_buffer instance as argument.
         # Therefore even wd are indexed with bytes string and not with
         # unicode paths.
         Add a watch on path, build a Watch object and insert it in the
         watch manager dictionary. Return the wd value.
         """
-        byte_path = self.__format_path(path)
-        wd_ = LIBC.inotify_add_watch(self._fd,
-                                     ctypes.create_string_buffer(byte_path),
-                                     mask)
-        if wd_ < 0:
-            return wd_
-        watch_ = Watch(wd=wd_, path=byte_path, mask=mask, proc_fun=proc_fun,
-                       auto_add=auto_add, exclude_filter=exclude_filter)
-        self._wmd[wd_] = watch_
-        log.debug('New %s', watch_)
-        return wd_
+        path = self.__format_path(path)
+        wd = self._inotify_wrapper.inotify_add_watch(self._fd, path, mask)
+        if wd < 0:
+            return wd
+        watch = Watch(wd=wd, path=path, mask=mask, proc_fun=proc_fun,
+                      auto_add=auto_add, exclude_filter=exclude_filter)
+        self._wmd[wd] = watch
+        log.debug('New %s', watch)
+        return wd
 
     def __glob(self, path, do_glob):
         if do_glob:
                                                             auto_add,
                                                             exclude_filter)
                         if wd < 0:
-                            err = 'add_watch: cannot watch %s WD=%d%s'
-                            err = err % (rpath, wd, strerrno())
+                            err = ('add_watch: cannot watch %s WD=%d, %s' % \
+                                       (rpath, wd,
+                                        self._inotify_wrapper.str_errno()))
                             if quiet:
                                 log.error(err)
                             else:
                 raise WatchManagerError(err, ret_)
 
             if mask:
-                addw = LIBC.inotify_add_watch
-                wd_ = addw(self._fd, ctypes.create_string_buffer(apath), mask)
+                wd_ = self._inotify_wrapper.inotify_add_watch(self._fd, apath,
+                                                              mask)
                 if wd_ < 0:
                     ret_[awd] = False
-                    err = 'update_watch: cannot update %s WD=%d%s'
-                    err = err % (apath, wd_, strerrno())
+                    err = ('update_watch: cannot update %s WD=%d, %s' % \
+                               (apath, wd_, self._inotify_wrapper.str_errno()))
                     if quiet:
                         log.error(err)
                         continue
         ret_ = {}  # return {wd: bool, ...}
         for awd in lwd:
             # remove watch
-            wd_ = LIBC.inotify_rm_watch(self._fd, awd)
+            wd_ = self._inotify_wrapper.inotify_rm_watch(self._fd, awd)
             if wd_ < 0:
                 ret_[awd] = False
-                err = 'rm_watch: cannot remove WD=%d%s' % (awd, strerrno())
+                err = ('rm_watch: cannot remove WD=%d, %s' % \
+                           (awd, self._inotify_wrapper.str_errno()))
                 if quiet:
                     log.error(err)
                     continue
                 raise WatchManagerError(err, ret_)
 
+            # Remove watch from our dictionary
+            if awd in self._wmd:
+                del self._wmd[awd]
             ret_[awd] = True
             log.debug('Watch WD=%d (%s) removed', awd, self.get_path(awd))
         return ret_
+#!/usr/bin/env python
+
+# pyinotify.py - python interface to inotify
+# Copyright (c) 2005-2011 Sebastien Martini <seb@dbzteam.org>
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+"""
+pyinotify
+
+@author: Sebastien Martini
+@license: MIT License
+@contact: seb@dbzteam.org
+"""
+
+class PyinotifyError(Exception):
+    """Indicates exceptions raised by a Pyinotify class."""
+    pass
+
+
+class UnsupportedPythonVersionError(PyinotifyError):
+    """
+    Raised on unsupported Python versions.
+    """
+    def __init__(self, version):
+        """
+        @param version: Current Python version
+        @type version: string
+        """
+        PyinotifyError.__init__(self,
+                                ('Python %s is unsupported, requires '
+                                 'at least Python 3.0') % version)
+
+
+# Check Python version
+import sys
+if sys.version_info < (3, 0):
+    raise UnsupportedPythonVersionError(sys.version)
+
+
+# Import directives
+import threading
+import os
+import select
+import struct
+import fcntl
+import errno
+import termios
+import array
+import logging
+import atexit
+from collections import deque
+from datetime import datetime, timedelta
+import time
+import re
+import asyncore
+import glob
+import locale
+
+try:
+    from functools import reduce
+except ImportError:
+    pass  # Will fail on Python 2.4 which has reduce() builtin anyway.
+
+try:
+    import ctypes
+    import ctypes.util
+except ImportError:
+    ctypes = None
+
+try:
+    import inotify_syscalls
+except ImportError:
+    inotify_syscalls = None
+
+
+__author__ = "seb@dbzteam.org (Sebastien Martini)"
+
+__version__ = "0.9.1"
+
+
+# Compatibity mode: set to True to improve compatibility with
+# Pyinotify 0.7.1. Do not set this variable yourself, call the
+# function compatibility_mode() instead.
+COMPATIBILITY_MODE = False
+
+
+class InotifyBindingNotFoundError(PyinotifyError):
+    """
+    Raised when no inotify support could be found.
+    """
+    def __init__(self):
+        err = "Couldn't find any inotify binding"
+        PyinotifyError.__init__(self, err)
+
+
+class INotifyWrapper:
+    """
+    Abstract class wrapping access to inotify's functions. This is an
+    internal class.
+    """
+    @staticmethod
+    def create():
+        """
+        Factory method instantiating and returning the right wrapper.
+        """
+        # First, try to use ctypes.
+        if ctypes:
+            inotify = _CtypesLibcINotifyWrapper()
+            if inotify.init():
+                return inotify
+        # Second, see if C extension is compiled.
+        if inotify_syscalls:
+            inotify = _INotifySyscallsWrapper()
+            if inotify.init():
+                return inotify
+
+    def get_errno(self):
+        """
+        Return None if no errno code is available.
+        """
+        return self._get_errno()
+
+    def str_errno(self):
+        code = self.get_errno()
+        if code is None:
+            return 'Errno: no errno support'
+        return 'Errno=%s (%s)' % (os.strerror(code), errno.errorcode[code])
+
+    def inotify_init(self):
+        return self._inotify_init()
+
+    def inotify_add_watch(self, fd, pathname, mask):
+        # Unicode strings must be encoded to string prior to calling this
+        # method.
+        assert isinstance(pathname, str)
+        return self._inotify_add_watch(fd, pathname, mask)
+
+    def inotify_rm_watch(self, fd, wd):
+        return self._inotify_rm_watch(fd, wd)
+
+
+class _INotifySyscallsWrapper(INotifyWrapper):
+    def __init__(self):
+        # Stores the last errno value.
+        self._last_errno = None
+
+    def init(self):
+        assert inotify_syscalls
+        return True
+
+    def _get_errno(self):
+        return self._last_errno
+
+    def _inotify_init(self):
+        try:
+            fd = inotify_syscalls.inotify_init()
+        except IOError as err:
+            self._last_errno = err.errno
+            return -1
+        return fd
+
+    def _inotify_add_watch(self, fd, pathname, mask):
+        try:
+            wd = inotify_syscalls.inotify_add_watch(fd, pathname, mask)
+        except IOError as err:
+            self._last_errno = err.errno
+            return -1
+        return wd
+
+    def _inotify_rm_watch(self, fd, wd):
+        try:
+            ret = inotify_syscalls.inotify_rm_watch(fd, wd)
+        except IOError as err:
+            self._last_errno = err.errno
+            return -1
+        return ret
+
+
+class _CtypesLibcINotifyWrapper(INotifyWrapper):
+    def __init__(self):
+        self._libc = None
+        self._get_errno_func = None
+
+    def init(self):
+        assert ctypes
+        libc_name = None
+        try:
+            libc_name = ctypes.util.find_library('c')
+        except (OSError, IOError):
+            pass  # Will attempt to load it with None anyway.
+
+        self._libc = ctypes.CDLL(libc_name, use_errno=True)
+        self._get_errno_func = ctypes.get_errno
+
+        # Eventually check that libc has needed inotify bindings.
+        if (not hasattr(self._libc, 'inotify_init') or
+            not hasattr(self._libc, 'inotify_add_watch') or
+            not hasattr(self._libc, 'inotify_rm_watch')):
+            return False
+        return True
+
+    def _get_errno(self):
+        assert self._get_errno_func
+        return self._get_errno_func()
+
+    def _inotify_init(self):
+        assert self._libc is not None
+        return self._libc.inotify_init()
+
+    def _inotify_add_watch(self, fd, pathname, mask):
+        assert self._libc is not None
+        # Encodes path to a bytes string. This conversion seems required because
+        # ctypes.create_string_buffer seems to manipulate bytes internally.
+        # Moreover it seems that inotify_add_watch does not work very well when
+        # it receives a ctypes.create_unicode_buffer instance as argument.
+        pathname = pathname.encode(sys.getfilesystemencoding())
+        pathname = ctypes.create_string_buffer(pathname)
+        return self._libc.inotify_add_watch(fd, pathname, mask)
+
+    def _inotify_rm_watch(self, fd, wd):
+        assert self._libc is not None
+        return self._libc.inotify_rm_watch(fd, wd)
+
+    def _sysctl(self, *args):
+        assert self._libc is not None
+        return self._libc.sysctl(*args)
+
+
+# Logging
+def logger_init():
+    """Initialize logger instance."""
+    log = logging.getLogger("pyinotify")
+    console_handler = logging.StreamHandler()
+    console_handler.setFormatter(
+        logging.Formatter("[%(asctime)s %(name)s %(levelname)s] %(message)s"))
+    log.addHandler(console_handler)
+    log.setLevel(20)
+    return log
+
+log = logger_init()
+
+
+# inotify's variables
+class SysCtlINotify:
+    """
+    Access (read, write) inotify's variables through sysctl. Usually it
+    requires administrator rights to update them.
+
+    Examples:
+      - Read max_queued_events attribute: myvar = max_queued_events.value
+      - Update max_queued_events attribute: max_queued_events.value = 42
+    """
+
+    inotify_attrs = {'max_user_instances': 1,
+                     'max_user_watches': 2,
+                     'max_queued_events': 3}
+
+    def __init__(self, attrname, inotify_wrapper):
+        # FIXME: right now only supporting ctypes
+        assert ctypes
+        self._attrname = attrname
+        self._inotify_wrapper = inotify_wrapper
+        sino = ctypes.c_int * 3
+        self._attr = sino(5, 20, SysCtlINotify.inotify_attrs[attrname])
+
+    @staticmethod
+    def create(attrname):
+        # FIXME: right now only supporting ctypes
+        if ctypes is None:
+            return None
+        inotify_wrapper = _CtypesLibcINotifyWrapper()
+        if not inotify_wrapper.init():
+            return None
+        return SysCtlINotify(attrname, inotify_wrapper)
+
+    def get_val(self):
+        """
+        Gets attribute's value.
+
+        @return: stored value.
+        @rtype: int
+        """
+        oldv = ctypes.c_int(0)
+        size = ctypes.c_int(ctypes.sizeof(oldv))
+        self._inotify_wrapper._sysctl(self._attr, 3,
+                                      ctypes.c_voidp(ctypes.addressof(oldv)),
+                                      ctypes.addressof(size),
+                                      None, 0)
+        return oldv.value
+
+    def set_val(self, nval):
+        """
+        Sets new attribute's value.
+
+        @param nval: replaces current value by nval.
+        @type nval: int
+        """
+        oldv = ctypes.c_int(0)
+        sizeo = ctypes.c_int(ctypes.sizeof(oldv))
+        newv = ctypes.c_int(nval)
+        sizen = ctypes.c_int(ctypes.sizeof(newv))
+        self._inotify_wrapper._sysctl(self._attr, 3,
+                                      ctypes.c_voidp(ctypes.addressof(oldv)),
+                                      ctypes.addressof(sizeo),
+                                      ctypes.c_voidp(ctypes.addressof(newv)),
+                                      ctypes.addressof(sizen))
+
+    value = property(get_val, set_val)
+
+    def __repr__(self):
+        return '<%s=%d>' % (self._attrname, self.get_val())
+
+
+# Inotify's variables
+#
+# FIXME: currently these variables are only accessible when ctypes is used,
+#        otherwise they are set to None.
+#
+# read: myvar = max_queued_events.value
+# update: max_queued_events.value = 42
+#
+for attrname in ('max_queued_events', 'max_user_instances', 'max_user_watches'):
+    globals()[attrname] = SysCtlINotify.create(attrname)
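As the comment above notes, these module-level objects expose the kernel's inotify limits; a minimal usage sketch (illustrative only, not part of this patch; the objects are None without ctypes, and writing usually requires root):

    import pyinotify

    if pyinotify.max_user_watches is not None:
        print(pyinotify.max_user_watches.value)    # read the current limit
        pyinotify.max_user_watches.value = 524288  # raise it (needs root)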
+
+
+class EventsCodes:
+    """
+    Set of codes corresponding to each kind of event.
+    Some of these flags are used to communicate with inotify, whereas
+    the others are sent to userspace by inotify notifying some events.
+
+    @cvar IN_ACCESS: File was accessed.
+    @type IN_ACCESS: int
+    @cvar IN_MODIFY: File was modified.
+    @type IN_MODIFY: int
+    @cvar IN_ATTRIB: Metadata changed.
+    @type IN_ATTRIB: int
+    @cvar IN_CLOSE_WRITE: Writable file was closed.
+    @type IN_CLOSE_WRITE: int
+    @cvar IN_CLOSE_NOWRITE: Unwritable file closed.
+    @type IN_CLOSE_NOWRITE: int
+    @cvar IN_OPEN: File was opened.
+    @type IN_OPEN: int
+    @cvar IN_MOVED_FROM: File was moved from X.
+    @type IN_MOVED_FROM: int
+    @cvar IN_MOVED_TO: File was moved to Y.
+    @type IN_MOVED_TO: int
+    @cvar IN_CREATE: Subfile was created.
+    @type IN_CREATE: int
+    @cvar IN_DELETE: Subfile was deleted.
+    @type IN_DELETE: int
+    @cvar IN_DELETE_SELF: Self (watched item itself) was deleted.
+    @type IN_DELETE_SELF: int
+    @cvar IN_MOVE_SELF: Self (watched item itself) was moved.
+    @type IN_MOVE_SELF: int
+    @cvar IN_UNMOUNT: Backing fs was unmounted.
+    @type IN_UNMOUNT: int
+    @cvar IN_Q_OVERFLOW: Event queue overflowed.
+    @type IN_Q_OVERFLOW: int
+    @cvar IN_IGNORED: File was ignored.
+    @type IN_IGNORED: int
+    @cvar IN_ONLYDIR: only watch the path if it is a directory (new
+                      in kernel 2.6.15).
+    @type IN_ONLYDIR: int
+    @cvar IN_DONT_FOLLOW: don't follow a symlink (new in kernel 2.6.15).
+                          Combined with IN_ONLYDIR we can make sure that
+                          we don't watch the target of symlinks.
+    @type IN_DONT_FOLLOW: int
+    @cvar IN_MASK_ADD: add to the mask of an already existing watch (new
+                       in kernel 2.6.14).
+    @type IN_MASK_ADD: int
+    @cvar IN_ISDIR: Event occurred against dir.
+    @type IN_ISDIR: int
+    @cvar IN_ONESHOT: Only send event once.
+    @type IN_ONESHOT: int
+    @cvar ALL_EVENTS: Alias for considering all of the events.
+    @type ALL_EVENTS: int
+    """
+
+    # The idea here is 'configuration-as-code' - this way, we get our nice class
+    # constants, but we also get nice human-friendly text mappings to do lookups
+    # against as well, for free:
+    FLAG_COLLECTIONS = {'OP_FLAGS': {
+        'IN_ACCESS'        : 0x00000001,  # File was accessed
+        'IN_MODIFY'        : 0x00000002,  # File was modified
+        'IN_ATTRIB'        : 0x00000004,  # Metadata changed
+        'IN_CLOSE_WRITE'   : 0x00000008,  # Writable file was closed
+        'IN_CLOSE_NOWRITE' : 0x00000010,  # Unwritable file closed
+        'IN_OPEN'          : 0x00000020,  # File was opened
+        'IN_MOVED_FROM'    : 0x00000040,  # File was moved from X
+        'IN_MOVED_TO'      : 0x00000080,  # File was moved to Y
+        'IN_CREATE'        : 0x00000100,  # Subfile was created
+        'IN_DELETE'        : 0x00000200,  # Subfile was deleted
+        'IN_DELETE_SELF'   : 0x00000400,  # Self (watched item itself)
+                                          # was deleted
+        'IN_MOVE_SELF'     : 0x00000800,  # Self (watched item itself) was moved
+        },
+                        'EVENT_FLAGS': {
+        'IN_UNMOUNT'       : 0x00002000,  # Backing fs was unmounted
+        'IN_Q_OVERFLOW'    : 0x00004000,  # Event queue overflowed
+        'IN_IGNORED'       : 0x00008000,  # File was ignored
+        },
+                        'SPECIAL_FLAGS': {
+        'IN_ONLYDIR'       : 0x01000000,  # only watch the path if it is a
+                                          # directory
+        'IN_DONT_FOLLOW'   : 0x02000000,  # don't follow a symlink
+        'IN_MASK_ADD'      : 0x20000000,  # add to the mask of an already
+                                          # existing watch
+        'IN_ISDIR'         : 0x40000000,  # event occurred against dir
+        'IN_ONESHOT'       : 0x80000000,  # only send event once
+        },
+                        }
+
+    def maskname(mask):
+        """
+        Returns the event name associated with the mask. IN_ISDIR is appended to
+        the result when appropriate. Note: only one event is returned, because
+        only one event can be raised at a given time.
+
+        @param mask: mask.
+        @type mask: int
+        @return: event name.
+        @rtype: str
+        """
+        ms = mask
+        name = '%s'
+        if mask & IN_ISDIR:
+            ms = mask - IN_ISDIR
+            name = '%s|IN_ISDIR'
+        return name % EventsCodes.ALL_VALUES[ms]
+
+    maskname = staticmethod(maskname)
+
+
+# So let's now turn the configuration into code
+EventsCodes.ALL_FLAGS = {}
+EventsCodes.ALL_VALUES = {}
+for flagc, valc in EventsCodes.FLAG_COLLECTIONS.items():
+    # Make the collections' members directly accessible through the
+    # class dictionary
+    setattr(EventsCodes, flagc, valc)
+
+    # Collect all the flags under a common umbrella
+    EventsCodes.ALL_FLAGS.update(valc)
+
+    # Make the individual masks accessible as 'constants' at globals() scope
+    # and masknames accessible by values.
+    for name, val in valc.items():
+        globals()[name] = val
+        EventsCodes.ALL_VALUES[val] = name
+
+
+# all 'normal' events
+ALL_EVENTS = reduce(lambda x, y: x | y, EventsCodes.OP_FLAGS.values())
+EventsCodes.ALL_FLAGS['ALL_EVENTS'] = ALL_EVENTS
+EventsCodes.ALL_VALUES[ALL_EVENTS] = 'ALL_EVENTS'
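The flag constants exported above are plain integers, so a watch mask is built by OR-ing them together, and EventsCodes.maskname() maps a single event value back to a readable name; an illustrative sketch (not part of this patch):

    import pyinotify

    # Watch creations, deletions and incoming moves.
    mask = pyinotify.IN_CREATE | pyinotify.IN_DELETE | pyinotify.IN_MOVED_TO

    # maskname() takes one event value, optionally combined with IN_ISDIR.
    name = pyinotify.EventsCodes.maskname(pyinotify.IN_CREATE | pyinotify.IN_ISDIR)
    print(name)  # IN_CREATE|IN_ISDIR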
+
+
+class _Event:
+    """
+    Event structure, representing events raised by the system. This
+    is the base class and should be subclassed.
+
+    """
+    def __init__(self, dict_):
+        """
+        Attach attributes (contained in dict_) to self.
+
+        @param dict_: Set of attributes.
+        @type dict_: dictionary
+        """
+        for tpl in dict_.items():
+            setattr(self, *tpl)
+
+    def __repr__(self):
+        """
+        @return: Generic event string representation.
+        @rtype: str
+        """
+        s = ''
+        for attr, value in sorted(self.__dict__.items(), key=lambda x: x[0]):
+            if attr.startswith('_'):
+                continue
+            if attr == 'mask':
+                value = hex(getattr(self, attr))
+            elif isinstance(value, str) and not value:
+                value = "''"
+            s += ' %s%s%s' % (output_format.field_name(attr),
+                              output_format.punctuation('='),
+                              output_format.field_value(value))
+
+        s = '%s%s%s %s' % (output_format.punctuation('<'),
+                           output_format.class_name(self.__class__.__name__),
+                           s,
+                           output_format.punctuation('>'))
+        return s
+
+    def __str__(self):
+        return repr(self)
+
+
+class _RawEvent(_Event):
+    """
+    Raw event; it contains only the information provided by the system.
+    It doesn't infer anything.
+    """
+    def __init__(self, wd, mask, cookie, name):
+        """
+        @param wd: Watch Descriptor.
+        @type wd: int
+        @param mask: Bitmask of events.
+        @type mask: int
+        @param cookie: Cookie.
+        @type cookie: int
+        @param name: Basename of the file or directory against which the
+                     event was raised when the watched directory is the
+                     parent directory. None if the event was raised
+                     on the watched item itself.
+        @type name: string or None
+        """
+        # Use this variable to cache the result of str(self), this object
+        # is immutable.
+        self._str = None
+        # name: remove trailing '\0'
+        d = {'wd': wd,
+             'mask': mask,
+             'cookie': cookie,
+             'name': name.rstrip('\0')}
+        _Event.__init__(self, d)
+        log.debug(str(self))
+
+    def __str__(self):
+        if self._str is None:
+            self._str = _Event.__str__(self)
+        return self._str
+
+
+class Event(_Event):
+    """
+    This class contains all the useful information about the observed
+    event. However, the presence of each field is not guaranteed and
+    depends on the type of event. In effect, some fields are irrelevant
+    for some kinds of events (for example 'cookie' is meaningless for
+    IN_CREATE whereas it is mandatory for IN_MOVED_TO).
+
+    The possible fields are:
+      - wd (int): Watch Descriptor.
+      - mask (int): Mask.
+      - maskname (str): Readable event name.
+      - path (str): path of the file or directory being watched.
+      - name (str): Basename of the file or directory against which the
+              event was raised when the watched directory is the
+              parent directory. None if the event was raised
+              on the watched item itself. This field is always provided
+              even if the string is ''.
+      - pathname (str): Concatenation of 'path' and 'name'.
+      - src_pathname (str): Only present for IN_MOVED_TO events and only in
+              the case where IN_MOVED_FROM events are watched too. Holds the
+              source pathname from which pathname was moved.
+      - cookie (int): Cookie.
+      - dir (bool): True if the event was raised against a directory.
+
+    """
+    def __init__(self, raw):
+        """
+        Concretely, this is the raw event plus inferred information.
+        """
+        _Event.__init__(self, raw)
+        self.maskname = EventsCodes.maskname(self.mask)
+        if COMPATIBILITY_MODE:
+            self.event_name = self.maskname
+        try:
+            if self.name:
+                self.pathname = os.path.abspath(os.path.join(self.path,
+                                                             self.name))
+            else:
+                self.pathname = os.path.abspath(self.path)
+        except AttributeError as err:
+            # Usually this is not an error: some events are perfectly valid
+            # despite lacking these attributes.
+            log.debug(err)
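As the field list above notes, src_pathname is only present on IN_MOVED_TO events when IN_MOVED_FROM is watched too, so consumers of Event objects should probe for it; a short sketch (hypothetical helper, not part of this patch):

    def describe_move(event):
        # src_pathname is absent when the move originated outside the watch.
        src = getattr(event, 'src_pathname', None)
        print('moved to %s from %s' % (event.pathname, src))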
+
+
+class ProcessEventError(PyinotifyError):
+    """
+    ProcessEventError Exception. Raised on ProcessEvent error.
+    """
+    def __init__(self, err):
+        """
+        @param err: Exception error description.
+        @type err: string
+        """
+        PyinotifyError.__init__(self, err)
+
+
+class _ProcessEvent:
+    """
+    Abstract processing event class.
+    """
+    def __call__(self, event):
+        """
+        To behave like a functor the object must be callable.
+        This method is a dispatch method. Its lookup order is:
+          1. process_MASKNAME method
+          2. process_FAMILY_NAME method
+          3. otherwise calls process_default
+
+        @param event: Event to be processed.
+        @type event: Event object
+        @return: By convention when used from the ProcessEvent class:
+                 - Returning False or None (default value) means keep on
+                 executing next chained functors (see chain.py example).
+                 - Returning True instead means do not execute next
+                   processing functions.
+        @rtype: bool
+        @raise ProcessEventError: Event object undispatchable,
+                                  unknown event.
+        """
+        stripped_mask = event.mask - (event.mask & IN_ISDIR)
+        maskname = EventsCodes.ALL_VALUES.get(stripped_mask)
+        if maskname is None:
+            raise ProcessEventError("Unknown mask 0x%08x" % stripped_mask)
+
+        # 1- look for process_MASKNAME
+        meth = getattr(self, 'process_' + maskname, None)
+        if meth is not None:
+            return meth(event)
+        # 2- look for process_FAMILY_NAME
+        meth = getattr(self, 'process_IN_' + maskname.split('_')[1], None)
+        if meth is not None:
+            return meth(event)
+        # 3- default call method process_default
+        return self.process_default(event)
+
+    def __repr__(self):
+        return '<%s>' % self.__class__.__name__
+
+
+class _SysProcessEvent(_ProcessEvent):
+    """
+    There are three kinds of processing according to each event:
+
+      1. special handling (deletion from internal container, bug, ...).
+      2. default treatment: which is applied to the majority of events.
+      3. IN_ISDIR is never sent alone; it is piggybacked with a standard
+         event, so it is not processed like the other events: its value
+         is captured and appropriately aggregated into the dst event.
+    """
+    def __init__(self, wm, notifier):
+        """
+
+        @param wm: Watch Manager.
+        @type wm: WatchManager instance
+        @param notifier: Notifier.
+        @type notifier: Notifier instance
+        """
+        self._watch_manager = wm  # watch manager
+        self._notifier = notifier  # notifier
+        self._mv_cookie = {}  # {cookie(int): (src_path(str), date), ...}
+        self._mv = {}  # {src_path(str): (dst_path(str), date), ...}
+
+    def cleanup(self):
+        """
+        Cleanup (delete) records older than one minute contained in
+        self._mv_cookie and self._mv.
+        """
+        date_cur_ = datetime.now()
+        for seq in (self._mv_cookie, self._mv):
+            for k in list(seq.keys()):
+                if (date_cur_ - seq[k][1]) > timedelta(minutes=1):
+                    log.debug('Cleanup: deleting entry %s', seq[k][0])
+                    del seq[k]
+
+    def process_IN_CREATE(self, raw_event):
+        """
+        If the event affects a directory and the auto_add flag of the
+        targeted watch is set to True, a new watch is added on this
+        new directory, with the same attribute values as those of
+        this watch.
+        """
+        if raw_event.mask & IN_ISDIR:
+            watch_ = self._watch_manager.get_watch(raw_event.wd)
+            created_dir = os.path.join(watch_.path, raw_event.name)
+            if watch_.auto_add and not watch_.exclude_filter(created_dir):
+                addw = self._watch_manager.add_watch
+                # The newly monitored directory inherits attributes from its
+                # parent directory.
+                addw_ret = addw(created_dir, watch_.mask,
+                                proc_fun=watch_.proc_fun,
+                                rec=False, auto_add=watch_.auto_add,
+                                exclude_filter=watch_.exclude_filter)
+
+                # Trick to handle mkdir -p /d1/d2/t3 where d1 is watched and
+                # d2 and t3 (directory or file) are created.
+                # Since the directory d2 is new, everything inside it must
+                # also be new.
+                created_dir_wd = addw_ret.get(created_dir)
+                if (created_dir_wd is not None) and (created_dir_wd > 0):
+                    for name in os.listdir(created_dir):
+                        inner = os.path.join(created_dir, name)
+                        if self._watch_manager.get_wd(inner) is not None:
+                            continue
+                        # Generate (simulate) creation events for sub-
+                        # directories and files.
+                        if os.path.isfile(inner):
+                            # symlinks are handled as files.
+                            flags = IN_CREATE
+                        elif os.path.isdir(inner):
+                            flags = IN_CREATE | IN_ISDIR
+                        else:
+                            # This path should not be taken.
+                            continue
+                        rawevent = _RawEvent(created_dir_wd, flags, 0, name)
+                        self._notifier.append_event(rawevent)
+        return self.process_default(raw_event)
+
+    def process_IN_MOVED_FROM(self, raw_event):
+        """
+        Map the cookie with the source path (+ date for cleaning).
+        """
+        watch_ = self._watch_manager.get_watch(raw_event.wd)
+        path_ = watch_.path
+        src_path = os.path.normpath(os.path.join(path_, raw_event.name))
+        self._mv_cookie[raw_event.cookie] = (src_path, datetime.now())
+        return self.process_default(raw_event, {'cookie': raw_event.cookie})
+
+    def process_IN_MOVED_TO(self, raw_event):
+        """
+        Map the source path with the destination path (+ date for
+        cleaning).
+        """
+        watch_ = self._watch_manager.get_watch(raw_event.wd)
+        path_ = watch_.path
+        dst_path = os.path.normpath(os.path.join(path_, raw_event.name))
+        mv_ = self._mv_cookie.get(raw_event.cookie)
+        to_append = {'cookie': raw_event.cookie}
+        if mv_ is not None:
+            self._mv[mv_[0]] = (dst_path, datetime.now())
+            # Let's assume that the IN_MOVED_FROM event is always queued
+            # before its associated IN_MOVED_TO event (they share a common
+            # cookie). In that scenario it is possible to provide the
+            # IN_MOVED_TO event with the original pathname of the moved
+            # file/directory as additional information.
+            to_append['src_pathname'] = mv_[0]
+        elif (raw_event.mask & IN_ISDIR and watch_.auto_add and
+              not watch_.exclude_filter(dst_path)):
+            # We got a directory that's "moved in" from an unknown source and
+            # auto_add is enabled. Manually add watches to the inner subtrees.
+            # The newly monitored directory inherits attributes from its
+            # parent directory.
+            self._watch_manager.add_watch(dst_path, watch_.mask,
+                                          proc_fun=watch_.proc_fun,
+                                          rec=True, auto_add=True,
+                                          exclude_filter=watch_.exclude_filter)
+        return self.process_default(raw_event, to_append)
+
+    def process_IN_MOVE_SELF(self, raw_event):
+        """
+        STATUS: the following bug has been fixed in recent kernels (FIXME:
+        which version ?). Now it raises IN_DELETE_SELF instead.
+
+        Old kernels were buggy: this event was raised when the watched item
+        was moved, so we had to update its path, but under some circumstances
+        that was impossible: when neither its parent directory nor its
+        destination directory was watched. The kernel (see
+        include/linux/fsnotify.h) doesn't give us enough information, such as
+        the destination path of moved items.
+        """
+        watch_ = self._watch_manager.get_watch(raw_event.wd)
+        src_path = watch_.path
+        mv_ = self._mv.get(src_path)
+        if mv_:
+            dest_path = mv_[0]
+            watch_.path = dest_path
+            # add the separator to the source path to avoid overlapping
+            # path issue when testing with startswith()
+            src_path += os.path.sep
+            src_path_len = len(src_path)
+            # The next loop renames all watches with src_path as base path.
+            # It seems that IN_MOVE_SELF does not provide IN_ISDIR information
+            # therefore the next loop is iterated even if raw_event is a file.
+            for w in self._watch_manager.watches.values():
+                if w.path.startswith(src_path):
+                    # Note that dest_path is a normalized path.
+                    w.path = os.path.join(dest_path, w.path[src_path_len:])
+        else:
+            log.error("The pathname '%s' of this watch %s has probably changed "
+                      "and couldn't be updated, so it cannot be trusted "
+                      "anymore. To fix this error move directories/files only "
+                      "between watched parent directories, in this case e.g. "
+                      "put a watch on '%s'.",
+                      watch_.path, watch_,
+                      os.path.normpath(os.path.join(watch_.path,
+                                                    os.path.pardir)))
+            if not watch_.path.endswith('-unknown-path'):
+                watch_.path += '-unknown-path'
+        return self.process_default(raw_event)
+
+    def process_IN_Q_OVERFLOW(self, raw_event):
+        """
+        Only signal an overflow; most of the common flags are irrelevant
+        for this event (path, wd, name).
+        """
+        return Event({'mask': raw_event.mask})
+
+    def process_IN_IGNORED(self, raw_event):
+        """
+        The watch descriptor raised by this event is now ignored (forever);
+        it can be safely deleted from the watch manager dictionary.
+        After this event we can be sure that neither the event queue nor
+        the system will raise an event associated to this wd again.
+        """
+        event_ = self.process_default(raw_event)
+        self._watch_manager.del_watch(raw_event.wd)
+        return event_
+
+    def process_default(self, raw_event, to_append=None):
+        """
+        Common handling for the following events:
+
+        IN_ACCESS, IN_MODIFY, IN_ATTRIB, IN_CLOSE_WRITE, IN_CLOSE_NOWRITE,
+        IN_OPEN, IN_DELETE, IN_DELETE_SELF, IN_UNMOUNT.
+        """
+        watch_ = self._watch_manager.get_watch(raw_event.wd)
+        if raw_event.mask & (IN_DELETE_SELF | IN_MOVE_SELF):
+            # Unfortunately this information is not provided by the kernel
+            dir_ = watch_.dir
+        else:
+            dir_ = bool(raw_event.mask & IN_ISDIR)
+        dict_ = {'wd': raw_event.wd,
+                 'mask': raw_event.mask,
+                 'path': watch_.path,
+                 'name': raw_event.name,
+                 'dir': dir_}
+        if COMPATIBILITY_MODE:
+            dict_['is_dir'] = dir_
+        if to_append is not None:
+            dict_.update(to_append)
+        return Event(dict_)
+
+
+class ProcessEvent(_ProcessEvent):
+    """
+    Process event objects; can be specialized via subclassing, thus its
+    behavior can be overridden:
+
+    Note: you should not override __init__ in your subclass; instead, define
+    a my_init() method, which will be called automatically from the
+    constructor of this class with its optional parameters.
+
+      1. Provide specialized individual methods, e.g. process_IN_DELETE for
+         processing a precise type of event (e.g. IN_DELETE in this case).
+      2. Or/and provide methods for processing events by 'family', e.g.
+         process_IN_CLOSE method will process both IN_CLOSE_WRITE and
+         IN_CLOSE_NOWRITE events (if process_IN_CLOSE_WRITE and
+         process_IN_CLOSE_NOWRITE aren't defined though).
+      3. Or/and override process_default for catching and processing all
+         the remaining types of events.
+    """
+    pevent = None
+
+    def __init__(self, pevent=None, **kargs):
+        """
+        Enable chaining of ProcessEvent instances.
+
+        @param pevent: Optional callable object, will be called on event
+                       processing (before self).
+        @type pevent: callable
+        @param kargs: This constructor is implemented as a template method
+                      delegating its optional keyword arguments to the
+                      method my_init().
+        @type kargs: dict
+        """
+        self.pevent = pevent
+        self.my_init(**kargs)
+
+    def my_init(self, **kargs):
+        """
+        This method is called from ProcessEvent.__init__(). It is empty
+        here and must be redefined to be useful. In effect, if you need to
+        specifically initialize your subclass' instance then you just have
+        to override this method in your subclass. All the keyword arguments
+        passed to ProcessEvent.__init__() will then be transmitted as
+        parameters to this method. Beware: you MUST pass keyword arguments.
+
+        @param kargs: optional delegated arguments from __init__().
+        @type kargs: dict
+        """
+        pass
+
+    def __call__(self, event):
+        stop_chaining = False
+        if self.pevent is not None:
+            # By default methods return None, so as a guideline, methods
+            # that want to stop chaining must explicitly return a value
+            # that is neither None nor False; otherwise the default
+            # behavior is to chain the call to the corresponding
+            # local method.
+            stop_chaining = self.pevent(event)
+        if not stop_chaining:
+            return _ProcessEvent.__call__(self, event)
+
+    def nested_pevent(self):
+        return self.pevent
+
+    def process_IN_Q_OVERFLOW(self, event):
+        """
+        By default this method only reports warning messages; you can
+        override it by subclassing ProcessEvent and implementing your own
+        process_IN_Q_OVERFLOW method. The actions you can take on receiving
+        this event are either to update the variable max_queued_events in
+        order to handle more simultaneous events, or to modify your code so
+        that it filters better and thus diminishes the number of raised
+        events. Because this method is defined, IN_Q_OVERFLOW will never get
+        transmitted as an argument to process_default calls.
+
+        @param event: IN_Q_OVERFLOW event.
+        @type event: dict
+        """
+        log.warning('Event queue overflowed.')
+
+    def process_default(self, event):
+        """
+        Default processing event method. By default does nothing. Subclass
+        ProcessEvent and redefine this method in order to modify its behavior.
+
+        @param event: Event to be processed. Can be of any type of events but
+                      IN_Q_OVERFLOW events (see method process_IN_Q_OVERFLOW).
+        @type event: Event instance
+        """
+        pass
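The docstring above lists three ways of specializing ProcessEvent; a minimal handler sketch along those lines (hypothetical names, assuming the WatchManager and Notifier classes defined further down in this module):

    import pyinotify

    class MyHandler(pyinotify.ProcessEvent):
        def process_IN_CREATE(self, event):    # 1. one precise event type
            print('created:', event.pathname)

        def process_IN_CLOSE(self, event):     # 2. family: covers both IN_CLOSE_* events
            print('closed:', event.pathname)

        def process_default(self, event):      # 3. everything else
            print(event.maskname, event.pathname)

    wm = pyinotify.WatchManager()
    wm.add_watch('/tmp/data/', pyinotify.ALL_EVENTS, rec=True, auto_add=True)
    pyinotify.Notifier(wm, MyHandler()).loop()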
+
+
+class PrintAllEvents(ProcessEvent):
+    """
+    Dummy class used to print events' string representations. For instance,
+    this class is used from the command line to print all received events
+    to stdout.
+    """
+    def my_init(self, out=None):
+        """
+        @param out: Where events will be written.
+        @type out: Object providing a valid file object interface.
+        """
+        if out is None:
+            out = sys.stdout
+        self._out = out
+
+    def process_default(self, event):
+        """
+        Writes event string representation to file object provided to
+        my_init().
+
+        @param event: Event to be processed. Can be of any type of events but
+                      IN_Q_OVERFLOW events (see method process_IN_Q_OVERFLOW).
+        @type event: Event instance
+        """
+        self._out.write(str(event))
+        self._out.write('\n')
+        self._out.flush()
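For instance, the out parameter lets the same class append events to a log file instead of stdout; a small sketch (not part of this patch):

    import pyinotify

    # Any object with write() and flush() works; here, an append-mode file.
    handler = pyinotify.PrintAllEvents(out=open('/tmp/pyinotify-events.log', 'a'))
    # handler can then serve as the default processing functor of a Notifier.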
+
+
+class ChainIfTrue(ProcessEvent):
+    """
+    Makes conditional chaining depending on the result of the nested
+    processing instance.
+    """
+    def my_init(self, func):
+        """
+        Method automatically called from base class constructor.
+        """
+        self._func = func
+
+    def process_default(self, event):
+        return not self._func(event)
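ChainIfTrue plugs into the pevent chaining of ProcessEvent.__call__(): the outer handler only runs when the wrapped predicate returns a true value; a sketch reusing the hypothetical MyHandler from the earlier example:

    # Let MyHandler see only events on non-directories; for directory events
    # ChainIfTrue.process_default() returns True, which stops the chain.
    only_files = pyinotify.ChainIfTrue(func=lambda e: not getattr(e, 'dir', False))
    handler = MyHandler(pevent=only_files)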
+
+
+class Stats(ProcessEvent):
+    """
+    Compute and display trivial statistics about processed events.
+    """