Commits

Lucian Brănescu-Mihăilă committed 08a77f3

Simplify project structure, to bring it in line with common packaging practice. Switch .cvsignore to .hgignore.
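
For orientation, a rough before/after sketch of the layout change, inferred only from the file paths visible in this excerpt (only part of the 40 changed files is shown, and the top-level destinations of CHANGES, MANIFEST.in and README are an assumption):

    old path                     new path
    --------                     --------
    dbfpy/CHANGES                CHANGES
    dbfpy/MANIFEST.in            MANIFEST.in
    dbfpy/README                 README
    dbfpy/dbfpy/.cvsignore       .hgignore
    dbfpy/dbfpy/__init__.py      dbfpy/__init__.py
    dbfpy/dbfpy/dbf.py           dbfpy/dbf.py
    dbfpy/dbfpy/dbfnew.py        dbfpy/dbfnew.py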

  • Parent commits 2294171
  • Tags v2.2.5

Files changed (40)

File .hgignore

+syntax: glob
+
+*.bak
+*.pyc
+*.pyo
+*.swp
+*.tmp
+build
+dist
+MANIFEST
+.pypirc

File CHANGES

+Version 2.2.5 (16-sep-2010)
+---------------------------
+
+Fix Y2K issue with Last Update field in the header (sf bug 3065838).
+
+Version 2.2.4 (26-may-2009)
+---------------------------
+
+Ignore leading and trailing zero bytes in numeric fields.
+
+Version 2.2.3 (17-feb-2009)
+---------------------------
+
+Support for writing empty date values.
+
+Version 2.2.2 (16-sep-2008)
+---------------------------
+
+Numeric decoder returns a float when the value contains a decimal point,
+regardless of the number of decimal places declared in the header.
+
+Version 2.2.1 (16-mar-2008)
+---------------------------
+
+Fix: raise ValueError if a name passed to field constructor
+is longer than 10 characters.
+
+Version 2.2.0 (11-feb-2007)
+---------------------------
+
+Features:
+  - date fields are allowed to contain values stored with leading spaces
+    instead of leading zeroes.
+  - dbf.header returns field definition objects if accessed as a list
+    or as a dictionary.
+  - added raw data access methods: DbfRecord.rawFromStream(),
+    DbfFieldDef.rawFromRecord().
+  - added conversion error handling: if ignoreErrors=True is passed
+    with Dbf constructor arguments, then failing field value conversions
+    will not raise errors but instead will return special object
+    INVALID_VALUE which is equal to None, empty string and zero.
+    .ignoreErrors property of the Dbf instances may be toggled also
+    after instance initialization.
+
+Version 2.1.0 (01-dec-2006)
+---------------------------
+
+Features:
+  - support field types 'F' (float), 'I' (integer) and 'Y' (currency)
+
+Fix processing of empty Timestamp values
+
+Version 2.0.3 (30-oct-2006)
+---------------------------
+
+Fixes:
+  - compatibility fix for Python versions prior to 2.4 (sf bug 1574526)
+  - wrong record length when reading from file object (sf bug 1586619)
+
+Version 2.0.2 (08-jul-2006)
+---------------------------
+
+Fix dbfnew (legacy API for DBF creation)
+
+Version 2.0.1 (10-mar-2006)
+---------------------------
+
+Fixes in Numeric field processing:
+  - decoding inserted decimal point where it shouldn't;
+  - encoded value never exceeds field length.
+    If encoded value is too large for the field, decimal digits get
+    removed (rounding the value down).  When integer value is larger
+    than the field length, ValueError is raised.
+
+Fix: Date and Logical fields were not able to decode empty values.
+
+Errors raised from field processing include field name.
+
+This file is included in the source distribution.
+
+Version 2.0.0 (20-dec-2005)
+---------------------------
+
+Changed style of doc-strings; added more docs.
+
+``mx.DateTime`` isn't required anymore - internal representation uses
+``datetime.date`` and ``datetime.datetime`` classes from the batteries.
+
+'D' and 'T' became "smarter":
+  - date fields could accept:
+    - date/time tuples [or tuples with at least
+      first three components (year, month, day)];
+    - strings in format 'yyyymmdd' or 'yymmdd';
+    - ``datetime.date`` and ``datetime.datetime`` instances;
+    - ``None`` instance - use current date value;
+    - numbers (``int``, ``long``, ``float``)
+      which are treated as seconds from epoch;
+    - objects having 'ticks' method (``mx.DateTime`` for example);
+  - datetime fields can accept everything date fields do
+    except string objects (which is still a TODO).
+
+Changed identifiers and args of some methods. For example old code
+
+         dbf = Dbf()
+         dbf.openFile("county.dbf", readOnly=1)
+
+now should be written like
+
+         dbf = Dbf("county.dbf", True)
+
+All classes now are new-style classes (inherited from the ``object``).
+
+Got rid of ``binnum`` and ``strutil`` modules,
+using ``struct`` and string methods instead.
+
+Group classes in different modules instead of having them all in
+one, as it was earlier (see dbf.py from the previous release(s)).
+
+Use fields' registry to simplify registering of the new data types.
+
+
+Previous changes:
+-----------------
+
+took over development 2000-10-06 Hans Fiby
+
+changes: 2001-02-07 Hans Fiby
+ * read long integers as long
+ * add this CHANGES File
+
+changed: 2000-10-06 Hans Fiby
+ * added dbfnew.py
+ * fixed decimal points

File MANIFEST.in

+include CHANGES

File README

+Python modules for accessing .dbf (dbase) files
+readme_dbfpy.txt
+jjk  11/15/99
+2000-10-06 Hans Fiby
+
+I have used this code, in various forms, to read .dbf files.
+It includes some experimental code to write to .dbf files.
+This code may provide a starting point for others.
+
+Files:
+    dbf.py      reads (and possibly writes) .dbf file data directly from disk
+    dbfload.py  reads an entire .dbf file into memory, provides access to data
+    binnum.py   a module to decode/encode binary numbers
+    strutil.py  a module of string utilities
+    dbfnew.py   a module to create new .dbf files
+    county.dbf  a sample .dbf file
+    readme.txt  this file
+    dbfpy.tgz   the distribution tarball
+
+dbf.py and dbfload.py are independent ways to access .dbf files.
+
+*** !!  USE AT YOUR OWN RISK    !! ***
+*** !! NO WARRANTIES WHATSOEVER !! ***
+
+Jeff Kunce <kuncej@mail.conservation.state.mo.us>
+http://starship.python.net/crew/jjkunce/
+Hans Fiby <hans@fiby.at>
+http://www.fiby.at

File dbfpy/CHANGES

-Version 2.2.5 (16-sep-2010)
----------------------------
-
-Fix Y2K issue with Last Update field in the header (sf bug 3065838).
-
-Version 2.2.4 (26-may-2009)
----------------------------
-
-Ignore leading and trailing zero bytes in numeric fields.
-
-Version 2.2.3 (17-feb-2009)
----------------------------
-
-Support for writing empty date values.
-
-Version 2.2.2 (16-sep-2008)
----------------------------
-
-Numeric decoder returns a float when the value contains a decimal point,
-regardless of the number of decimal places declared in the header.
-
-Version 2.2.1 (16-mar-2008)
----------------------------
-
-Fix: raise ValueError if a name passed to field constructor
-is longer than 10 characters.
-
-Version 2.2.0 (11-feb-2007)
----------------------------
-
-Features:
-  - date fields are allowed to contain values stored with leading spaces
-    instead of leading zeroes.
-  - dbf.header returns field definition objects if accessed as a list
-    or as a dictionary.
-  - added raw data access methods: DbfRecord.rawFromStream(),
-    DbfFieldDef.rawFromRecord().
-  - added conversion error handling: if ignoreErrors=True is passed
-    with Dbf constructor arguments, then failing field value conversions
-    will not raise errors but instead will return special object
-    INVALID_VALUE which is equal to None, empty string and zero.
-    .ignoreErrors property of the Dbf instances may be toggled also
-    after instance initialization.
-
-Version 2.1.0 (01-dec-2006)
----------------------------
-
-Features:
-  - support field types 'F' (float), 'I' (integer) and 'Y' (currency)
-
-Fix processing of empty Timestamp values
-
-Version 2.0.3 (30-oct-2006)
----------------------------
-
-Fixes:
-  - compatibility fix for Python versions prior to 2.4 (sf bug 1574526)
-  - wrong record length when reading from file object (sf bug 1586619)
-
-Version 2.0.2 (08-jul-2006)
----------------------------
-
-Fix dbfnew (legacy API for DBF creation)
-
-Version 2.0.1 (10-mar-2006)
----------------------------
-
-Fixes in Numeric field processing:
-  - decoding inserted decimal point where it shouldn't;
-  - encoded value never exceeds field length.
-    If encoded value is too large for the field, decimal digits get
-    removed (rounding the value down).  When integer value is larger
-    than the field length, ValueError is raised.
-
-Fix: Date and Logical fields were not able to decode empty values.
-
-Errors raised from field processing include field name.
-
-This file is included in the source distribution.
-
-Version 2.0.0 (20-dec-2005)
----------------------------
-
-Changed style of doc-strings; added more docs.
-
-``mx.DateTime`` isn't required anymore - internal representation uses
-``datetime.date`` and ``datetime.datetime`` classes from the batteries.
-
-'D' and 'T' became "smarter":
-  - date fields could accept:
-    - date/time tuples [or tuples with at least
-      first three components (year, month, day)];
-    - strings in format 'yyyymmdd' or 'yymmdd';
-    - ``datetime.date`` and ``datetime.datetime`` instances;
-    - ``None`` instance - use current date value;
-    - numbers (``int``, ``long``, ``float``)
-      which are treated as seconds from epoch;
-    - objects having 'ticks' method (``mx.DateTime`` for example);
-  - datetime fields can accept everything date fields do
-    except string objects (which is still a TODO).
-
-Changed identifiers and args of some methods. For example old code
-
-         dbf = Dbf()
-         dbf.openFile("county.dbf", readOnly=1)
-
-now should be written like
-
-         dbf = Dbf("county.dbf", True)
-
-All classes now are new-style classes (inherited from the ``object``).
-
-Got rid of ``binnum`` and ``strutil`` modules,
-using ``struct`` and string methods instead.
-
-Group classes in different modules instead of having them all in
-one, as it was earlier (see dbf.py from the previous release(s)).
-
-Use fields' registry to simplify registering of the new data types.
-
-
-Previous changes:
------------------
-
-took over development 2000-10-06 Hans Fiby
-
-changes: 2001-02-07 Hans Fiby
- * read long integers as long
- * add this CHANGES File
-
-changed: 2000-10-06 Hans Fiby
- * added dbfnew.py
- * fixed decimal points

File dbfpy/MANIFEST.in

-include CHANGES

File dbfpy/README

-Python modules for accessing .dbf (dbase) files
-readme_dbfpy.txt
-jjk  11/15/99
-2000-10-06 Hans Fiby
-
-I have used this code, in various forms, to read .dbf files.
-It includes some experimental code to write to .dbf files.
-This code may provide a starting point for others.
-
-Files:
-    dbf.py      reads (and possibly writes) .dbf file data directly from disk
-    dbfload.py  reads an entire .dbf file into memory, provides access to data
-    binnum.py   a module to decode/encode binary numbers
-    strutil.py  a module of string utilities
-    dbfnew.py   a module to create new .dbf files
-    county.dbf  a sample .dbf file
-    readme.txt  this file
-    dbfpy.tgz   the distribution tarball
-
-dbf.py and dbfload.py are independent ways to access .dbf files.
-
-*** !!  USE AT YOUR OWN RISK    !! ***
-*** !! NO WARRANTIES WHATSOEVER !! ***
-
-Jeff Kunce <kuncej@mail.conservation.state.mo.us>
-http://starship.python.net/crew/jjkunce/
-Hans Fiby <hans@fiby.at>
-http://www.fiby.at

File dbfpy/__init__.py

Empty file added.

File dbfpy/dbf.py

+#! /usr/bin/env python
+"""DBF accessing helpers.
+
+FIXME: more documentation needed
+
+Examples:
+
+    Create new table, setup structure, add records:
+
+        dbf = Dbf(filename, new=True)
+        dbf.addField(
+            ("NAME", "C", 15),
+            ("SURNAME", "C", 25),
+            ("INITIALS", "C", 10),
+            ("BIRTHDATE", "D"),
+        )
+        for (n, s, i, b) in (
+            ("John", "Miller", "YC", (1980, 10, 11)),
+            ("Andy", "Larkin", "", (1980, 4, 11)),
+        ):
+            rec = dbf.newRecord()
+            rec["NAME"] = n
+            rec["SURNAME"] = s
+            rec["INITIALS"] = i
+            rec["BIRTHDATE"] = b
+            rec.store()
+        dbf.close()
+
+    Open an existing dbf and read some data:
+
+        dbf = Dbf(filename, True)
+        for rec in dbf:
+            for fldName in dbf.fieldNames:
+                print '%s:\t %s (%s)' % (fldName, rec[fldName],
+                    type(rec[fldName]))
+            print
+        dbf.close()
+
+"""
+"""History (most recent first):
+14-dec-2010 [als]   added Memo file support
+11-feb-2007 [als]   export INVALID_VALUE;
+                    Dbf: added .ignoreErrors, .INVALID_VALUE
+04-jul-2006 [als]   added export declaration
+20-dec-2005 [yc]    removed fromStream and newDbf methods:
+                    the argument of the __init__ call must be used instead;
+                    added class fields pointing to the header and
+                    record classes.
+17-dec-2005 [yc]    split to several modules; reimplemented
+13-dec-2005 [yc]    adapted to the changes of the `strutil` module.
+13-sep-2002 [als]   support FoxPro Timestamp datatype
+15-nov-1999 [jjk]   documentation updates, add demo
+24-aug-1998 [jjk]   add some encodeValue methods (not tested), other tweaks
+08-jun-1998 [jjk]   fix problems, add more features
+20-feb-1998 [jjk]   fix problems, add more features
+19-feb-1998 [jjk]   add create/write capabilities
+18-feb-1998 [jjk]   from dbfload.py
+"""
+
+__version__ = "$Revision$"[11:-2]
+__date__ = "$Date$"[7:-2]
+__author__ = "Jeff Kunce <kuncej@mail.conservation.state.mo.us>"
+
+__all__ = ["Dbf"]
+
+import header
+import memo
+import record
+from utils import INVALID_VALUE
+
+class Dbf(object):
+    """DBF accessor.
+
+    FIXME:
+        docs and examples needed (don't forget to tell
+        about problems adding new fields on the fly)
+
+    Implementation notes:
+        ``_new`` field is used to indicate whether this is
+        a new data table. `addField` can be used only for
+        new tables! Once at least one record has been appended
+        to the table, its structure can't be changed.
+
+    """
+
+    __slots__ = ("name", "header", "stream", "memo",
+        "_changed", "_new", "_ignore_errors")
+
+    HeaderClass = header.DbfHeader
+    RecordClass = record.DbfRecord
+    INVALID_VALUE = INVALID_VALUE
+
+    ## initialization and creation helpers
+
+    def __init__(self, f, readOnly=False, new=False, ignoreErrors=False,
+                 memoFile=None):
+        """Initialize instance.
+
+        Arguments:
+            f:
+                Filename or file-like object.
+            readOnly:
+                if the ``f`` argument is a string, the file will
+                be opened in read-only mode; in other cases
+                this argument is ignored. It is also ignored
+                if the ``new`` argument is True.
+            new:
+                True if new data table must be created. Assume
+                data table exists if this argument is False.
+            ignoreErrors:
+                if set, failing field value conversion will return
+                ``INVALID_VALUE`` instead of raising conversion error.
+            memoFile:
+                optional path to the FPT (memo fields) file.
+                Default is generated from the DBF file name.
+
+        """
+        if isinstance(f, basestring):
+            # a filename
+            self.name = f
+            if new:
+                # new table (table file must be
+                # created or opened and truncated)
+                self.stream = file(f, "w+b")
+            else:
+                # table file must exist
+                self.stream = file(f, ("r+b", "rb")[bool(readOnly)])
+        else:
+            # a stream
+            self.name = getattr(f, "name", "")
+            self.stream = f
+        if new:
+            # if this is a new table, header will be empty
+            self.header = self.HeaderClass()
+        else:
+            # or instantiated using stream
+            self.header = self.HeaderClass.fromStream(self.stream)
+        self.ignoreErrors = ignoreErrors
+        self._new = bool(new)
+        self._changed = False
+        if memoFile:
+            self.memo = memo.MemoFile(memoFile, readOnly=readOnly, new=new)
+        elif self.header.hasMemoField:
+            self.memo = memo.MemoFile(memo.MemoFile.memoFileName(self.name),
+                readOnly=readOnly, new=new)
+        else:
+            self.memo = None
+        self.header.setMemoFile(self.memo)
+
+    ## properties
+
+    closed = property(lambda self: self.stream.closed)
+    recordCount = property(lambda self: self.header.recordCount)
+    fieldNames = property(
+        lambda self: [_fld.name for _fld in self.header.fields])
+    fieldDefs = property(lambda self: self.header.fields)
+    changed = property(lambda self: self._changed or self.header.changed)
+
+    def ignoreErrors(self, value):
+        """Update `ignoreErrors` flag on the header object and self"""
+        self.header.ignoreErrors = self._ignore_errors = bool(value)
+    ignoreErrors = property(
+        lambda self: self._ignore_errors,
+        ignoreErrors,
+        doc="""Error processing mode for DBF field value conversion
+
+        if set, failing field value conversion will return
+        ``INVALID_VALUE`` instead of raising conversion error.
+
+        """)
+
+    ## protected methods
+
+    def _fixIndex(self, index):
+        """Return fixed index.
+
+        This method fails if index isn't a numeric object
+        (int or long), or if index isn't in the valid range
+        (at most the number of records in the db).
+
+        If ``index`` is a negative number, it will be
+        treated like a negative index for list objects.
+
+        Return:
+            Return value is a numeric object meaning a valid index.
+
+        """
+        if not isinstance(index, (int, long)):
+            raise TypeError("Index must be a numeric object")
+        if index < 0:
+            # index from the right side
+            # fix it to the left-side index
+            index += len(self) + 1
+        if index >= len(self):
+            raise IndexError("Record index out of range")
+        return index
+
+    ## interface methods
+
+    def close(self):
+        self.flush()
+        self.stream.close()
+
+    def flush(self):
+        """Flush data to the associated stream."""
+        if self.changed:
+            self.header.setCurrentDate()
+            self.header.write(self.stream)
+            self.stream.flush()
+            self.memo.flush()
+            self._changed = False
+
+    def indexOfFieldName(self, name):
+        """Index of field named ``name``."""
+        # FIXME: move this to header class
+        return self.header.fields.index(name)
+
+    def newRecord(self):
+        """Return new record, which belong to this table."""
+        return self.RecordClass(self)
+
+    def append(self, record):
+        """Append ``record`` to the database."""
+        record.index = self.header.recordCount
+        record._write()
+        self.header.recordCount += 1
+        self._changed = True
+        self._new = False
+
+    def addField(self, *defs):
+        """Add field definitions.
+
+        For more information see `header.DbfHeader.addField`.
+
+        """
+        if self._new:
+            self.header.addField(*defs)
+            if self.header.hasMemoField:
+                if not self.memo:
+                    self.memo = memo.MemoFile(
+                        memo.MemoFile.memoFileName(self.name), new=True)
+                self.header.setMemoFile(self.memo)
+        else:
+            raise TypeError("At least one record was added, "
+                "structure can't be changed")
+
+    ## 'magic' methods (representation and sequence interface)
+
+    def __repr__(self):
+        return "Dbf stream '%s'\n" % self.stream + repr(self.header)
+
+    def __len__(self):
+        """Return number of records."""
+        return self.recordCount
+
+    def __getitem__(self, index):
+        """Return `DbfRecord` instance."""
+        return self.RecordClass.fromStream(self, self._fixIndex(index))
+
+    def __setitem__(self, index, record):
+        """Write `DbfRecord` instance to the stream."""
+        record.index = self._fixIndex(index)
+        record._write()
+        self._changed = True
+        self._new = False
+
+    #def __del__(self):
+    #    """Flush stream upon deletion of the object."""
+    #    self.flush()
+
+
+def demoRead(filename):
+    _dbf = Dbf(filename, True)
+    for _rec in _dbf:
+        print
+        print repr(_rec)
+    _dbf.close()
+
+def demoCreate(filename):
+    _dbf = Dbf(filename, new=True)
+    _dbf.addField(
+        ("NAME", "C", 15),
+        ("SURNAME", "C", 25),
+        ("INITIALS", "C", 10),
+        ("BIRTHDATE", "D"),
+    )
+    for (_n, _s, _i, _b) in (
+        ("John", "Miller", "YC", (1981, 1, 2)),
+        ("Andy", "Larkin", "AL", (1982, 3, 4)),
+        ("Bill", "Clinth", "", (1983, 5, 6)),
+        ("Bobb", "McNail", "", (1984, 7, 8)),
+    ):
+        _rec = _dbf.newRecord()
+        _rec["NAME"] = _n
+        _rec["SURNAME"] = _s
+        _rec["INITIALS"] = _i
+        _rec["BIRTHDATE"] = _b
+        _rec.store()
+    print repr(_dbf)
+    _dbf.close()
+
+if (__name__=='__main__'):
+    import sys
+    _name = len(sys.argv) > 1 and sys.argv[1] or "county.dbf"
+    demoCreate(_name)
+    demoRead(_name)
+
+# vim: set et sw=4 sts=4 :
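
A minimal usage sketch (not part of the commit) for the ignoreErrors mode documented in the class above; it assumes a readable "county.dbf" like the sample file mentioned in the README:

    from dbfpy import dbf

    table = dbf.Dbf("county.dbf", readOnly=True, ignoreErrors=True)
    for rec in table:
        for name in table.fieldNames:
            value = rec[name]
            if value is table.INVALID_VALUE:
                # failed conversions come back as INVALID_VALUE
                # instead of raising an error
                print "%s:\t<invalid>" % name
            else:
                print "%s:\t%s" % (name, value)
    table.close()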

File dbfpy/dbfnew.py

+#!/usr/bin/python
+""".DBF creation helpers.
+
+Note: this is a legacy interface.  New code should use Dbf class
+    for table creation (see examples in dbf.py)
+
+TODO:
+  - handle Memo fields.
+  - check length of the fields according to the
+    `http://www.clicketyclick.dk/databases/xbase/format/data_types.html`
+
+"""
+"""History (most recent first)
+04-jul-2006 [als]   added export declaration;
+                    updated for dbfpy 2.0
+15-dec-2005 [yc]    define dbf_new.__slots__
+14-dec-2005 [yc]    added vim modeline; retab'd; added doc-strings;
+                    dbf_new now is a new class (inherited from object)
+??-jun-2000 [--]    added by Hans Fiby
+"""
+
+__version__ = "$Revision$"[11:-2]
+__date__ = "$Date$"[7:-2]
+
+__all__ = ["dbf_new"]
+
+from dbf import *
+from fields import *
+from header import *
+from record import *
+
+class _FieldDefinition(object):
+    """Field definition.
+
+    This is a simple structure, which contains ``name``, ``type``,
+    ``len``, ``dec`` and ``cls`` fields.
+
+    Objects also implement get/setitem magic functions, so fields
+    could be accessed via sequence interface, where 'name' has
+    index 0, 'type' index 1, 'len' index 2, 'dec' index 3 and
+    'cls' could be located at index 4.
+
+    """
+
+    __slots__ = "name", "type", "len", "dec", "cls"
+
+    # WARNING: be attentive - dictionaries are mutable!
+    FLD_TYPES = {
+        # type: (cls, len)
+        "C": (DbfCharacterFieldDef, None),
+        "N": (DbfNumericFieldDef, None),
+        "L": (DbfLogicalFieldDef, 1),
+        # FIXME: support memos
+        # "M": (DbfMemoFieldDef),
+        "D": (DbfDateFieldDef, 8),
+        # FIXME: I'm not sure length should be 14 characters!
+        # but temporary I use it, cuz date is 8 characters
+        # and time 6 (hhmmss)
+        "T": (DbfDateTimeFieldDef, 14),
+    }
+
+    def __init__(self, name, type, len=None, dec=0):
+        _cls, _len = self.FLD_TYPES[type]
+        if _len is None:
+            if len is None:
+                raise ValueError("Field length must be defined")
+            _len = len
+        self.name = name
+        self.type = type
+        self.len = _len
+        self.dec = dec
+        self.cls = _cls
+
+    def getDbfField(self):
+        "Return `DbfFieldDef` instance from the current definition."
+        return self.cls(self.name, self.len, self.dec)
+
+    def appendToHeader(self, dbfh):
+        """Create a `DbfFieldDef` instance and append it to the dbf header.
+
+        Arguments:
+            dbfh: `DbfHeader` instance.
+
+        """
+        _dbff = self.getDbfField()
+        dbfh.addField(_dbff)
+
+
+class dbf_new(object):
+    """New .DBF creation helper.
+
+    Example Usage:
+
+        dbfn = dbf_new()
+        dbfn.add_field("name",'C',80)
+        dbfn.add_field("price",'N',10,2)
+        dbfn.add_field("date",'D',8)
+        dbfn.write("tst.dbf")
+
+    Note:
+        This module cannot handle Memo-fields,
+        they are special.
+
+    """
+
+    __slots__ = ("fields",)
+
+    FieldDefinitionClass = _FieldDefinition
+
+    def __init__(self):
+        self.fields = []
+
+    def add_field(self, name, typ, len, dec=0):
+        """Add field definition.
+
+        Arguments:
+            name:
+                field name (str object). field name must not
+                contain ASCII NULs and its length shouldn't
+                exceed 10 characters.
+            typ:
+                type of the field. this must be a single character
+                from the "CNLMDT" set meaning character, numeric,
+                logical, memo, date and date/time respectively.
+            len:
+                length of the field. this argument is used only for
+                the character and numeric fields. all other fields
+                have fixed length.
+                FIXME: use None as a default for this argument?
+            dec:
+                decimal precision. used only for the numeric fields.
+
+        """
+        self.fields.append(self.FieldDefinitionClass(name, typ, len, dec))
+
+    def write(self, filename):
+        """Create empty .DBF file using current structure."""
+        _dbfh = DbfHeader()
+        _dbfh.setCurrentDate()
+        for _fldDef in self.fields:
+            _fldDef.appendToHeader(_dbfh)
+        _dbfStream = file(filename, "wb")
+        _dbfh.write(_dbfStream)
+        _dbfStream.close()
+
+
+if (__name__=='__main__'):
+    # create a new DBF-File
+    dbfn=dbf_new()
+    dbfn.add_field("name",'C',80)
+    dbfn.add_field("price",'N',10,2)
+    dbfn.add_field("date",'D',8)
+    dbfn.write("tst.dbf")
+    # test new dbf
+    print "*** created tst.dbf: ***"
+    dbft = Dbf('tst.dbf', readOnly=0)
+    print repr(dbft)
+    # add a record
+    rec=DbfRecord(dbft)
+    rec['name']='something'
+    rec['price']=10.5
+    rec['date']=(2000,1,12)
+    rec.store()
+    # add another record
+    rec=DbfRecord(dbft)
+    rec['name']='foo and bar'
+    rec['price']=12234
+    rec['date']=(1992,7,15)
+    rec.store()
+
+    # show the records
+    print "*** inserted 2 records into tst.dbf: ***"
+    print repr(dbft)
+    for i1 in range(len(dbft)):
+        rec = dbft[i1]
+        for fldName in dbft.fieldNames:
+            print '%s:\t %s'%(fldName, rec[fldName])
+        print
+    dbft.close()
+
+# vim: set et sts=4 sw=4 :
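
For comparison with the legacy helper above, a hedged sketch of the same table created through the Dbf class that dbfnew.py's docstring recommends; the (name, type, length, decimals) tuple form for the numeric field is an assumption not shown directly in this excerpt:

    from dbfpy import dbf

    table = dbf.Dbf("tst.dbf", new=True)
    table.addField(
        ("NAME", "C", 80),
        ("PRICE", "N", 10, 2),  # assumed 4-tuple form: name, type, length, decimals
        ("DATE", "D"),
    )
    table.close()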

File dbfpy/dbfpy/.cvsignore

-*.bak
-*.pyc
-*.pyo
-*.swp
-*.tmp

File dbfpy/dbfpy/__init__.py

Empty file removed.

File dbfpy/dbfpy/dbf.py

-#! /usr/bin/env python
-"""DBF accessing helpers.
-
-FIXME: more documentation needed
-
-Examples:
-
-    Create new table, setup structure, add records:
-
-        dbf = Dbf(filename, new=True)
-        dbf.addField(
-            ("NAME", "C", 15),
-            ("SURNAME", "C", 25),
-            ("INITIALS", "C", 10),
-            ("BIRTHDATE", "D"),
-        )
-        for (n, s, i, b) in (
-            ("John", "Miller", "YC", (1980, 10, 11)),
-            ("Andy", "Larkin", "", (1980, 4, 11)),
-        ):
-            rec = dbf.newRecord()
-            rec["NAME"] = n
-            rec["SURNAME"] = s
-            rec["INITIALS"] = i
-            rec["BIRTHDATE"] = b
-            rec.store()
-        dbf.close()
-
-    Open an existing dbf and read some data:
-
-        dbf = Dbf(filename, True)
-        for rec in dbf:
-            for fldName in dbf.fieldNames:
-                print '%s:\t %s (%s)' % (fldName, rec[fldName],
-                    type(rec[fldName]))
-            print
-        dbf.close()
-
-"""
-"""History (most recent first):
-14-dec-2010 [als]   added Memo file support
-11-feb-2007 [als]   export INVALID_VALUE;
-                    Dbf: added .ignoreErrors, .INVALID_VALUE
-04-jul-2006 [als]   added export declaration
-20-dec-2005 [yc]    removed fromStream and newDbf methods:
-                    the argument of the __init__ call must be used instead;
-                    added class fields pointing to the header and
-                    record classes.
-17-dec-2005 [yc]    split to several modules; reimplemented
-13-dec-2005 [yc]    adapted to the changes of the `strutil` module.
-13-sep-2002 [als]   support FoxPro Timestamp datatype
-15-nov-1999 [jjk]   documentation updates, add demo
-24-aug-1998 [jjk]   add some encodeValue methods (not tested), other tweaks
-08-jun-1998 [jjk]   fix problems, add more features
-20-feb-1998 [jjk]   fix problems, add more features
-19-feb-1998 [jjk]   add create/write capabilities
-18-feb-1998 [jjk]   from dbfload.py
-"""
-
-__version__ = "$Revision$"[11:-2]
-__date__ = "$Date$"[7:-2]
-__author__ = "Jeff Kunce <kuncej@mail.conservation.state.mo.us>"
-
-__all__ = ["Dbf"]
-
-import header
-import memo
-import record
-from utils import INVALID_VALUE
-
-class Dbf(object):
-    """DBF accessor.
-
-    FIXME:
-        docs and examples needed (don't forget to tell
-        about problems adding new fields on the fly)
-
-    Implementation notes:
-        ``_new`` field is used to indicate whether this is
-        a new data table. `addField` can be used only for
-        new tables! Once at least one record has been appended
-        to the table, its structure can't be changed.
-
-    """
-
-    __slots__ = ("name", "header", "stream", "memo",
-        "_changed", "_new", "_ignore_errors")
-
-    HeaderClass = header.DbfHeader
-    RecordClass = record.DbfRecord
-    INVALID_VALUE = INVALID_VALUE
-
-    ## initialization and creation helpers
-
-    def __init__(self, f, readOnly=False, new=False, ignoreErrors=False,
-                 memoFile=None):
-        """Initialize instance.
-
-        Arguments:
-            f:
-                Filename or file-like object.
-            readOnly:
-                if the ``f`` argument is a string, the file will
-                be opened in read-only mode; in other cases
-                this argument is ignored. It is also ignored
-                if the ``new`` argument is True.
-            new:
-                True if new data table must be created. Assume
-                data table exists if this argument is False.
-            ignoreErrors:
-                if set, failing field value conversion will return
-                ``INVALID_VALUE`` instead of raising conversion error.
-            memoFile:
-                optional path to the FPT (memo fields) file.
-                Default is generated from the DBF file name.
-
-        """
-        if isinstance(f, basestring):
-            # a filename
-            self.name = f
-            if new:
-                # new table (table file must be
-                # created or opened and truncated)
-                self.stream = file(f, "w+b")
-            else:
-                # table file must exist
-                self.stream = file(f, ("r+b", "rb")[bool(readOnly)])
-        else:
-            # a stream
-            self.name = getattr(f, "name", "")
-            self.stream = f
-        if new:
-            # if this is a new table, header will be empty
-            self.header = self.HeaderClass()
-        else:
-            # or instantiated using stream
-            self.header = self.HeaderClass.fromStream(self.stream)
-        self.ignoreErrors = ignoreErrors
-        self._new = bool(new)
-        self._changed = False
-        if memoFile:
-            self.memo = memo.MemoFile(memoFile, readOnly=readOnly, new=new)
-        elif self.header.hasMemoField:
-            self.memo = memo.MemoFile(memo.MemoFile.memoFileName(self.name),
-                readOnly=readOnly, new=new)
-        else:
-            self.memo = None
-        self.header.setMemoFile(self.memo)
-
-    ## properties
-
-    closed = property(lambda self: self.stream.closed)
-    recordCount = property(lambda self: self.header.recordCount)
-    fieldNames = property(
-        lambda self: [_fld.name for _fld in self.header.fields])
-    fieldDefs = property(lambda self: self.header.fields)
-    changed = property(lambda self: self._changed or self.header.changed)
-
-    def ignoreErrors(self, value):
-        """Update `ignoreErrors` flag on the header object and self"""
-        self.header.ignoreErrors = self._ignore_errors = bool(value)
-    ignoreErrors = property(
-        lambda self: self._ignore_errors,
-        ignoreErrors,
-        doc="""Error processing mode for DBF field value conversion
-
-        if set, failing field value conversion will return
-        ``INVALID_VALUE`` instead of raising conversion error.
-
-        """)
-
-    ## protected methods
-
-    def _fixIndex(self, index):
-        """Return fixed index.
-
-        This method fails if index isn't a numeric object
-        (int or long), or if index isn't in the valid range
-        (at most the number of records in the db).
-
-        If ``index`` is a negative number, it will be
-        treated like a negative index for list objects.
-
-        Return:
-            Return value is a numeric object meaning a valid index.
-
-        """
-        if not isinstance(index, (int, long)):
-            raise TypeError("Index must be a numeric object")
-        if index < 0:
-            # index from the right side
-            # fix it to the left-side index
-            index += len(self) + 1
-        if index >= len(self):
-            raise IndexError("Record index out of range")
-        return index
-
-    ## interface methods
-
-    def close(self):
-        self.flush()
-        self.stream.close()
-
-    def flush(self):
-        """Flush data to the associated stream."""
-        if self.changed:
-            self.header.setCurrentDate()
-            self.header.write(self.stream)
-            self.stream.flush()
-            self.memo.flush()
-            self._changed = False
-
-    def indexOfFieldName(self, name):
-        """Index of field named ``name``."""
-        # FIXME: move this to header class
-        return self.header.fields.index(name)
-
-    def newRecord(self):
-        """Return new record, which belong to this table."""
-        return self.RecordClass(self)
-
-    def append(self, record):
-        """Append ``record`` to the database."""
-        record.index = self.header.recordCount
-        record._write()
-        self.header.recordCount += 1
-        self._changed = True
-        self._new = False
-
-    def addField(self, *defs):
-        """Add field definitions.
-
-        For more information see `header.DbfHeader.addField`.
-
-        """
-        if self._new:
-            self.header.addField(*defs)
-            if self.header.hasMemoField:
-                if not self.memo:
-                    self.memo = memo.MemoFile(
-                        memo.MemoFile.memoFileName(self.name), new=True)
-                self.header.setMemoFile(self.memo)
-        else:
-            raise TypeError("At least one record was added, "
-                "structure can't be changed")
-
-    ## 'magic' methods (representation and sequence interface)
-
-    def __repr__(self):
-        return "Dbf stream '%s'\n" % self.stream + repr(self.header)
-
-    def __len__(self):
-        """Return number of records."""
-        return self.recordCount
-
-    def __getitem__(self, index):
-        """Return `DbfRecord` instance."""
-        return self.RecordClass.fromStream(self, self._fixIndex(index))
-
-    def __setitem__(self, index, record):
-        """Write `DbfRecord` instance to the stream."""
-        record.index = self._fixIndex(index)
-        record._write()
-        self._changed = True
-        self._new = False
-
-    #def __del__(self):
-    #    """Flush stream upon deletion of the object."""
-    #    self.flush()
-
-
-def demoRead(filename):
-    _dbf = Dbf(filename, True)
-    for _rec in _dbf:
-        print
-        print repr(_rec)
-    _dbf.close()
-
-def demoCreate(filename):
-    _dbf = Dbf(filename, new=True)
-    _dbf.addField(
-        ("NAME", "C", 15),
-        ("SURNAME", "C", 25),
-        ("INITIALS", "C", 10),
-        ("BIRTHDATE", "D"),
-    )
-    for (_n, _s, _i, _b) in (
-        ("John", "Miller", "YC", (1981, 1, 2)),
-        ("Andy", "Larkin", "AL", (1982, 3, 4)),
-        ("Bill", "Clinth", "", (1983, 5, 6)),
-        ("Bobb", "McNail", "", (1984, 7, 8)),
-    ):
-        _rec = _dbf.newRecord()
-        _rec["NAME"] = _n
-        _rec["SURNAME"] = _s
-        _rec["INITIALS"] = _i
-        _rec["BIRTHDATE"] = _b
-        _rec.store()
-    print repr(_dbf)
-    _dbf.close()
-
-if (__name__=='__main__'):
-    import sys
-    _name = len(sys.argv) > 1 and sys.argv[1] or "county.dbf"
-    demoCreate(_name)
-    demoRead(_name)
-
-# vim: set et sw=4 sts=4 :

File dbfpy/dbfpy/dbfnew.py

-#!/usr/bin/python
-""".DBF creation helpers.
-
-Note: this is a legacy interface.  New code should use Dbf class
-    for table creation (see examples in dbf.py)
-
-TODO:
-  - handle Memo fields.
-  - check length of the fields according to the
-    `http://www.clicketyclick.dk/databases/xbase/format/data_types.html`
-
-"""
-"""History (most recent first)
-04-jul-2006 [als]   added export declaration;
-                    updated for dbfpy 2.0
-15-dec-2005 [yc]    define dbf_new.__slots__
-14-dec-2005 [yc]    added vim modeline; retab'd; added doc-strings;
-                    dbf_new now is a new class (inherited from object)
-??-jun-2000 [--]    added by Hans Fiby
-"""
-
-__version__ = "$Revision$"[11:-2]
-__date__ = "$Date$"[7:-2]
-
-__all__ = ["dbf_new"]
-
-from dbf import *
-from fields import *
-from header import *
-from record import *
-
-class _FieldDefinition(object):
-    """Field definition.
-
-    This is a simple structure, which contains ``name``, ``type``,
-    ``len``, ``dec`` and ``cls`` fields.
-
-    Objects also implement get/setitem magic functions, so fields
-    could be accessed via sequence interface, where 'name' has
-    index 0, 'type' index 1, 'len' index 2, 'dec' index 3 and
-    'cls' could be located at index 4.
-
-    """
-
-    __slots__ = "name", "type", "len", "dec", "cls"
-
-    # WARNING: be attentive - dictionaries are mutable!
-    FLD_TYPES = {
-        # type: (cls, len)
-        "C": (DbfCharacterFieldDef, None),
-        "N": (DbfNumericFieldDef, None),
-        "L": (DbfLogicalFieldDef, 1),
-        # FIXME: support memos
-        # "M": (DbfMemoFieldDef),
-        "D": (DbfDateFieldDef, 8),
-        # FIXME: I'm not sure length should be 14 characters!
-        # but temporary I use it, cuz date is 8 characters
-        # and time 6 (hhmmss)
-        "T": (DbfDateTimeFieldDef, 14),
-    }
-
-    def __init__(self, name, type, len=None, dec=0):
-        _cls, _len = self.FLD_TYPES[type]
-        if _len is None:
-            if len is None:
-                raise ValueError("Field length must be defined")
-            _len = len
-        self.name = name
-        self.type = type
-        self.len = _len
-        self.dec = dec
-        self.cls = _cls
-
-    def getDbfField(self):
-        "Return `DbfFieldDef` instance from the current definition."
-        return self.cls(self.name, self.len, self.dec)
-
-    def appendToHeader(self, dbfh):
-        """Create a `DbfFieldDef` instance and append it to the dbf header.
-
-        Arguments:
-            dbfh: `DbfHeader` instance.
-
-        """
-        _dbff = self.getDbfField()
-        dbfh.addField(_dbff)
-
-
-class dbf_new(object):
-    """New .DBF creation helper.
-
-    Example Usage:
-
-        dbfn = dbf_new()
-        dbfn.add_field("name",'C',80)
-        dbfn.add_field("price",'N',10,2)
-        dbfn.add_field("date",'D',8)
-        dbfn.write("tst.dbf")
-
-    Note:
-        This module cannot handle Memo-fields,
-        they are special.
-
-    """
-
-    __slots__ = ("fields",)
-
-    FieldDefinitionClass = _FieldDefinition
-
-    def __init__(self):
-        self.fields = []
-
-    def add_field(self, name, typ, len, dec=0):
-        """Add field definition.
-
-        Arguments:
-            name:
-                field name (str object). field name must not
-                contain ASCII NULs and its length shouldn't
-                exceed 10 characters.
-            typ:
-                type of the field. this must be a single character
-                from the "CNLMDT" set meaning character, numeric,
-                logical, memo, date and date/time respectively.
-            len:
-                length of the field. this argument is used only for
-                the character and numeric fields. all other fields
-                have fixed length.
-                FIXME: use None as a default for this argument?
-            dec:
-                decimal precision. used only for the numeric fields.
-
-        """
-        self.fields.append(self.FieldDefinitionClass(name, typ, len, dec))
-
-    def write(self, filename):
-        """Create empty .DBF file using current structure."""
-        _dbfh = DbfHeader()
-        _dbfh.setCurrentDate()
-        for _fldDef in self.fields:
-            _fldDef.appendToHeader(_dbfh)
-        _dbfStream = file(filename, "wb")
-        _dbfh.write(_dbfStream)
-        _dbfStream.close()
-
-
-if (__name__=='__main__'):
-    # create a new DBF-File
-    dbfn=dbf_new()
-    dbfn.add_field("name",'C',80)
-    dbfn.add_field("price",'N',10,2)
-    dbfn.add_field("date",'D',8)
-    dbfn.write("tst.dbf")
-    # test new dbf
-    print "*** created tst.dbf: ***"
-    dbft = Dbf('tst.dbf', readOnly=0)
-    print repr(dbft)
-    # add a record
-    rec=DbfRecord(dbft)
-    rec['name']='something'
-    rec['price']=10.5
-    rec['date']=(2000,1,12)
-    rec.store()
-    # add another record
-    rec=DbfRecord(dbft)
-    rec['name']='foo and bar'
-    rec['price']=12234
-    rec['date']=(1992,7,15)
-    rec.store()
-
-    # show the records
-    print "*** inserted 2 records into tst.dbf: ***"
-    print repr(dbft)
-    for i1 in range(len(dbft)):
-        rec = dbft[i1]
-        for fldName in dbft.fieldNames:
-            print '%s:\t %s'%(fldName, rec[fldName])
-        print
-    dbft.close()
-
-# vim: set et sts=4 sw=4 :

File dbfpy/dbfpy/fields.py

-"""DBF fields definitions.
-
-TODO:
-  - make memos work
-"""
-"""History (most recent first):
-14-dec-2010 [als]   support reading and writing Memo fields;
-                    .toString: write field offset
-26-may-2009 [als]   DbfNumericFieldDef.decodeValue: strip zero bytes
-05-feb-2009 [als]   DbfDateFieldDef.encodeValue: empty arg produces empty date
-16-sep-2008 [als]   DbfNumericFieldDef decoding looks for decimal point
-                    in the value to select float or integer return type
-13-mar-2008 [als]   check field name length in constructor
-11-feb-2007 [als]   handle value conversion errors
-10-feb-2007 [als]   DbfFieldDef: added .rawFromRecord()
-01-dec-2006 [als]   Timestamp columns use None for empty values
-31-oct-2006 [als]   support field types 'F' (float), 'I' (integer)
-                    and 'Y' (currency);
-                    automate export and registration of field classes
-04-jul-2006 [als]   added export declaration
-10-mar-2006 [als]   decode empty values for Date and Logical fields;
-                    show field name in errors
-10-mar-2006 [als]   fix Numeric value decoding: according to spec,
-                    value always is string representation of the number;
-                    ensure that encoded Numeric value fits into the field
-20-dec-2005 [yc]    use field names in upper case
-15-dec-2005 [yc]    field definitions moved from `dbf`.
-"""
-
-__version__ = "$Revision$"[11:-2]
-__date__ = "$Date$"[7:-2]
-
-__all__ = ["lookupFor",] # field classes added at the end of the module
-
-import datetime
-import struct
-import sys
-
-from memo import MemoData
-import utils
-
-## abstract definitions
-
-class DbfFieldDef(object):
-    """Abstract field definition.
-
-    Child classes must override the ``typeCode`` class attribute to provide
-    datatype information of the field definition. For more info about types visit
-    `http://www.clicketyclick.dk/databases/xbase/format/data_types.html`
-
-    Child classes must also override the ``defaultValue`` attribute to provide
-    a default value for the field.
-
-    If a child class has fixed length, the ``length`` class attribute must be
-    overridden and set to a valid value. A None value means that the field
-    isn't of fixed length.
-
-    Note: ``name`` field must not be changed after instantiation.
-
-    """
-
-    __slots__ = ("name", "length", "decimalCount",
-        "start", "end", "ignoreErrors")
-
-    # length of the field, None in case of variable-length field,
-    # or a number if this field is a fixed-length field
-    length = None
-
-    # field type. for more information about fields types visit
-    # `http://www.clicketyclick.dk/databases/xbase/format/data_types.html`
-    # must be overridden in child classes
-    typeCode = None
-
-    # default value for the field. this field must be
-    # overridden in child classes
-    defaultValue = None
-
-    # True if field data is kept in the Memo file
-    isMemo = property(lambda self: self.typeCode in "GMP")
-
-    def __init__(self, name, length=None, decimalCount=None,
-        start=None, stop=None, ignoreErrors=False,
-    ):
-        """Initialize instance."""
-        assert self.typeCode is not None, "Type code must be overridden"
-        assert self.defaultValue is not None, "Default value must be overridden"
-        ## fix arguments
-        if len(name) >10:
-            raise ValueError("Field name \"%s\" is too long" % name)
-        name = str(name).upper()
-        if self.__class__.length is None:
-            if length is None:
-                raise ValueError("[%s] Length isn't specified" % name)
-            length = int(length)
-            if length <= 0:
-                raise ValueError("[%s] Length must be a positive integer"
-                    % name)
-        else:
-            length = self.length
-        if decimalCount is None:
-            decimalCount = 0
-        ## set fields
-        self.name = name
-        # FIXME: validate length according to the specification at
-        # http://www.clicketyclick.dk/databases/xbase/format/data_types.html
-        self.length = length
-        self.decimalCount = decimalCount
-        self.ignoreErrors = ignoreErrors
-        self.start = start
-        self.end = stop
-
-    def __cmp__(self, other):
-        return cmp(self.name, str(other).upper())
-
-    def __hash__(self):
-        return hash(self.name)
-
-    def fromString(cls, string, start, ignoreErrors=False):
-        """Decode dbf field definition from the string data.
-
-        Arguments:
-            string:
-                a string, dbf definition is decoded from. length of
-                the string must be 32 bytes.
-            start:
-                position in the database file.
-            ignoreErrors:
-                initial error processing mode for the new field (boolean)
-
-        """
-        assert len(string) == 32
-        _length = ord(string[16])
-        return cls(utils.unzfill(string)[:11], _length, ord(string[17]),
-            start, start + _length, ignoreErrors=ignoreErrors)
-    fromString = classmethod(fromString)
-
-    def toString(self):
-        """Return encoded field definition.
-
-        Return:
-            Return value is a string object containing encoded
-            definition of this field.
-
-        """
-        if sys.version_info < (2, 4):
-            # earlier versions did not support padding character
-            _name = self.name[:11] + "\0" * (11 - len(self.name))
-        else:
-            _name = self.name.ljust(11, '\0')
-        return (
-            _name +
-            self.typeCode +
-            struct.pack("<L", self.start) +
-            chr(self.length) +
-            chr(self.decimalCount) +
-            chr(0) * 14
-        )
-
-    def __repr__(self):
-        return "%-10s %1s %3d %3d" % self.fieldInfo()
-
-    def fieldInfo(self):
-        """Return field information.
-
-        Return:
-            Return value is a (name, type, length, decimals) tuple.
-
-        """
-        return (self.name, self.typeCode, self.length, self.decimalCount)
-
-    def rawFromRecord(self, record):
-        """Return a "raw" field value from the record string."""
-        return record[self.start:self.end]
-
-    def decodeFromRecord(self, record):
-        """Return decoded field value from the record string."""
-        try:
-            return self.decodeValue(self.rawFromRecord(record))
-        except:
-            if self.ignoreErrors:
-                return utils.INVALID_VALUE
-            else:
-                raise
-
-    def decodeValue(self, value):
-        """Return decoded value from string value.
-
-        This method shouldn't be used publicly. It's called from the
-        `decodeFromRecord` method.
-
-        This is an abstract method and it must be overridden in child classes.
-        """
-        raise NotImplementedError
-
-    def encodeValue(self, value):
-        """Return str object containing encoded field value.
-
-        This is an abstract method and it must be overridden in child classes.
-        """
-        raise NotImplementedError
-
-## real classes
-
-class DbfCharacterFieldDef(DbfFieldDef):
-    """Definition of the character field."""
-
-    typeCode = "C"
-    defaultValue = ""
-
-    def decodeValue(self, value):
-        """Return string object.
-
-        Return value is a ``value`` argument with stripped right spaces.
-
-        """
-        return value.rstrip(" ")
-
-    def encodeValue(self, value):
-        """Return raw data string encoded from a ``value``."""
-        return str(value)[:self.length].ljust(self.length)
-
-
-class DbfNumericFieldDef(DbfFieldDef):
-    """Definition of the numeric field."""
-
-    typeCode = "N"
-    # XXX: now I'm not sure it was a good idea to make a class field
-    # `defaultValue` instead of a generic method as it was implemented
-    # previously -- it's ok with all types except number, cuz
-    # if self.decimalCount is 0, we should return 0 and 0.0 otherwise.
-    defaultValue = 0
-
-    def decodeValue(self, value):
-        """Return a number decoded from ``value``.
-
-        If decimals is zero, value will be decoded as an integer;
-        or as a float otherwise.
-
-        Return:
-            Return value is a int (long) or float instance.
-
-        """
-        value = value.strip(" \0")
-        if "." in value:
-            # a float (has decimal separator)
-            return float(value)
-        elif value:
-            # must be an integer
-            return int(value)
-        else:
-            return 0
-
-    def encodeValue(self, value):
-        """Return string containing encoded ``value``."""
-        _rv = ("%*.*f" % (self.length, self.decimalCount, value))
-        if len(_rv) > self.length:
-            _ppos = _rv.find(".")
-            if 0 <= _ppos <= self.length:
-                _rv = _rv[:self.length]
-            else:
-                raise ValueError("[%s] Numeric overflow: %s (field width: %i)"
-                    % (self.name, _rv, self.length))
-        return _rv
-
-class DbfFloatFieldDef(DbfNumericFieldDef):
-    """Definition of the float field - same as numeric."""
-
-    typeCode = "F"
-
-class DbfIntegerFieldDef(DbfFieldDef):
-    """Definition of the integer field."""
-
-    typeCode = "I"
-    length = 4
-    defaultValue = 0
-
-    def decodeValue(self, value):
-        """Return an integer number decoded from ``value``."""
-        return struct.unpack("<i", value)[0]
-
-    def encodeValue(self, value):
-        """Return string containing encoded ``value``."""
-        return struct.pack("<i", int(value))
-
-class DbfCurrencyFieldDef(DbfFieldDef):
-    """Definition of the currency field."""
-
-    typeCode = "Y"
-    length = 8
-    defaultValue = 0.0
-
-    def decodeValue(self, value):
-        """Return float number decoded from ``value``."""
-        return struct.unpack("<q", value)[0] / 10000.
-
-    def encodeValue(self, value):
-        """Return string containing encoded ``value``."""
-        return struct.pack("<q", round(value * 10000))
-
-class DbfLogicalFieldDef(DbfFieldDef):
-    """Definition of the logical field."""
-
-    typeCode = "L"
-    defaultValue = -1
-    length = 1
-
-    def decodeValue(self, value):
-        """Return True, False or -1 decoded from ``value``."""
-        # Note: value always is 1-char string
-        if value == "?":
-            return -1
-        if value in "NnFf ":
-            return False
-        if value in "YyTt":
-            return True
-        raise ValueError("[%s] Invalid logical value %r" % (self.name, value))
-
-    def encodeValue(self, value):
-        """Return a character from the "TF?" set.
-
-        Return:
-            Return value is "T" if ``value`` is True
-            "?" if value is -1 or False otherwise.
-
-        """
-        if value is True:
-            return "T"
-        if value == -1:
-            return "?"
-        return "F"
-
-
-class DbfMemoFieldDef(DbfFieldDef):
-    """Definition of the memo field."""
-
-    typeCode = "M"
-    defaultValue = "\0" * 4
-    length = 4
-    # MemoFile instance.  Must be set before reading or writing to the field.
-    file = None
-    # MemoData type for strings written to the memo file
-    memoType = MemoData.TYPE_MEMO
-
-    def decodeValue(self, value):
-        """Return MemoData instance containing field data."""
-        _block = struct.unpack("<L", value)[0]
-        if _block:
-            return self.file.read(_block)
-        else:
-            return MemoData("", self.memoType)
-
-    def encodeValue(self, value):
-        """Return raw data string encoded from a ``value``.
-
-        Note: this is an internal method.
-
-        """
-        if value:
-            return struct.pack("<L",
-                self.file.write(MemoData(value, self.memoType)))
-        else:
-            return self.defaultValue
-
-
-class DbfGeneralFieldDef(DbfFieldDef):
-    """Definition of the general (OLE object) field."""
-
-    typeCode = "G"
-    memoType = MemoData.TYPE_OBJECT
-
-
-class DbfDateFieldDef(DbfFieldDef):
-    """Definition of the date field."""
-
-    typeCode = "D"
-    defaultValue = utils.classproperty(lambda cls: datetime.date.today())
-    # "yyyymmdd" gives us 8 characters
-    length = 8
-
-    def decodeValue(self, value):
-        """Return a ``datetime.date`` instance decoded from ``value``."""
-        if value.strip():
-            return utils.getDate(value)
-        else:
-            return None
-
-    def encodeValue(self, value):
-        """Return a string-encoded value.
-
-        ``value`` argument should be a value suitable for the
-        `utils.getDate` call.
-
-        Return:
-            Return value is a string in format "yyyymmdd".
-
-        """
-        if value:
-            return utils.getDate(value).strftime("%Y%m%d")
-        else:
-            return " " * self.length
-
-
-class DbfDateTimeFieldDef(DbfFieldDef):
-    """Definition of the timestamp field."""
-
-    # a difference between JDN (Julian Day Number)
-    # and GDN (Gregorian Day Number). note, that GDN < JDN
-    JDN_GDN_DIFF = 1721425
-    typeCode = "T"
-    defaultValue = utils.classproperty(lambda cls: datetime.datetime.now())
-    # two 32-bits integers representing JDN and amount of
-    # milliseconds respectively gives us 8 bytes.
-    # note, that values must be encoded in LE byteorder.
-    length = 8
-
-    def decodeValue(self, value):
-        """Return a `datetime.datetime` instance."""
-        assert len(value) == self.length
-        # LE byteorder
-        _jdn, _msecs = struct.unpack("<2I", value)
-        if _jdn >= 1:
-            _rv = datetime.datetime.fromordinal(_jdn - self.JDN_GDN_DIFF)
-            _rv += datetime.timedelta(0, _msecs / 1000.0)
-        else:
-            # empty date
-            _rv = None
-        return _rv
-
-    def encodeValue(self, value):
-        """Return a string-encoded ``value``."""
-        if value:
-            value = utils.getDateTime(value)
-            # LE byteorder
-            _rv = struct.pack("<2I", value.toordinal() + self.JDN_GDN_DIFF,
-                (value.hour * 3600 + value.minute * 60 + value.second) * 1000)
-        else:
-            _rv = "\0" * self.length
-        assert len(_rv) == self.length
-        return _rv
-
-
-_fieldsRegistry = {}
-
-def registerField(fieldCls):
-    """Register field definition class.
-
-    ``fieldCls`` should be subclass of the `DbfFieldDef`.
-
-    Use `lookupFor` to retrieve field definition class
-    by the type code.
-
-    """
-    assert fieldCls.typeCode is not None, "Type code isn't defined"
-    # XXX: use fieldCls.typeCode.upper()? in case of any design change,
-    # don't forget to update the same comment in ``lookupFor``
-    _fieldsRegistry[fieldCls.typeCode] = fieldCls
-
-
-def lookupFor(typeCode):
-    """Return field definition class for the given type code.
-
-    ``typeCode`` must be a single character. The type must have been
-    registered previously.
-
-    Use `registerField` to register new field class.
-
-    Return:
-        Return value is a subclass of the `DbfFieldDef`.
-
-    """
-    # XXX: use typeCode.upper()? whatever the design decision, don't
-    # forget to update the matching comment in ``registerField``
-    return _fieldsRegistry[typeCode]
-
-## register generic types
-
-for (_name, _val) in globals().items():
-    if isinstance(_val, type) and issubclass(_val, DbfFieldDef) \
-    and (_name != "DbfFieldDef"):
-        __all__.append(_name)
-        registerField(_val)
-del _name, _val
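
After this loop has run, every concrete field class is exported and registered, so a one-character type code maps back to its definition class. A usage sketch, assuming the package is importable as ``dbfpy``:

    from dbfpy import fields

    assert fields.lookupFor("D") is fields.DbfDateFieldDef
    assert fields.lookupFor("T") is fields.DbfDateTimeFieldDef
    # an unregistered type code raises KeyError
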
-
-# vim: et sts=4 sw=4 :

File dbfpy/dbfpy/header.py

-"""DBF header definition.
-
-TODO:
-  - handle encoding of the character fields
-    (encoding information stored in the DBF header)
-
-"""
-"""History (most recent first):
-14-dec-2010 [als]   added Memo file support
-16-sep-2010 [als]   fromStream: fix century of the last update field
-11-feb-2007 [als]   added .ignoreErrors
-10-feb-2007 [als]   added __getitem__: return field definitions
-                    by field name or field number (zero-based)
-04-jul-2006 [als]   added export declaration
-15-dec-2005 [yc]    created
-"""
-
-__version__ = "$Revision$"[11:-2]
-__date__ = "$Date$"[7:-2]
-
-__all__ = ["DbfHeader"]
-
-import cStringIO
-import datetime
-import struct
-import time
-
-import fields
-from utils import getDate
-
-
-class DbfHeader(object):
-    """Dbf header definition.
-
-    For more information about dbf header format visit
-    `http://www.clicketyclick.dk/databases/xbase/format/dbf.html#DBF_STRUCT`
-
-    Examples:
-        Create an empty dbf header and add some field definitions:
-            dbfh = DbfHeader()
-            dbfh.addField(("name", "C", 10))
-            dbfh.addField(("date", "D"))
-            dbfh.addField(DbfNumericFieldDef("price", 5, 2))
-        Create a dbf header with field definitions:
-            dbfh = DbfHeader([
-                ("name", "C", 10),
-                ("date", "D"),
-                DbfNumericFieldDef("price", 5, 2),
-            ])
-
-    """
-
-    __slots__ = ("signature", "fields", "lastUpdate", "recordLength",
-        "recordCount", "headerLength", "changed", "_ignore_errors")
-
-    ## instance construction and initialization methods
-
-    def __init__(self, fields=None, headerLength=0, recordLength=0,
-        recordCount=0, signature=0x03, lastUpdate=None, ignoreErrors=False,
-    ):
-        """Initialize instance.
-
-        Arguments:
-            fields:
-                a list of field definitions;
-            headerLength:
-                size of the header, in bytes;
-            recordLength:
-                size of each record, in bytes;
-            recordCount:
-                number of records stored in DBF;
-            signature:
-                version number (aka signature). 0x03 is used as the default,
-                meaning "file without DBT". For more information about this field visit
-                ``http://www.clicketyclick.dk/databases/xbase/format/dbf.html#DBF_NOTE_1_TARGET``
-            lastUpdate:
-                date of the DBF's last update. This can be a string ('yymmdd' or
-                'yyyymmdd'), a timestamp (int or float), a datetime/date value,
-                a sequence (assumed to be (yyyy, mm, dd, ...)) or an object with
-                a callable ``ticks`` attribute.
-            ignoreErrors:
-                error processing mode for DBF fields (boolean)
-
-        """
-        self.signature = signature
-        if fields is None:
-            self.fields = []
-        else:
-            self.fields = list(fields)
-        self.lastUpdate = getDate(lastUpdate)
-        self.recordLength = recordLength
-        self.headerLength = headerLength
-        self.recordCount = recordCount
-        self.ignoreErrors = ignoreErrors
-        # XXX: I'm not sure it is safe to
-        # initialize `self.changed` in this way
-        self.changed = bool(self.fields)
-
-    # @classmethod
-    def fromString(cls, string):
-        """Return header instance from the string object."""
-        return cls.fromStream(cStringIO.StringIO(str(string)))
-    fromString = classmethod(fromString)
-
-    # @classmethod
-    def fromStream(cls, stream):
-        """Return header object from the stream."""
-        stream.seek(0)
-        _data = stream.read(32)
-        (_cnt, _hdrLen, _recLen) = struct.unpack("<I2H", _data[4:12])
-        #reserved = _data[12:32]
-        _year = ord(_data[1])
-        if _year < 80:
-            # dBase II appeared in 1980, so it is quite unlikely
-            # that the actual last-update date is before that year.
-            _year += 2000
-        else:
-            _year += 1900
-        ## create header object
-        _obj = cls(None, _hdrLen, _recLen, _cnt, ord(_data[0]),
-            (_year, ord(_data[2]), ord(_data[3])))
-        ## append field definitions
-        # position 0 is for the deletion flag
-        _pos = 1
-        _data = stream.read(1)
-        while _data[0] != "\x0D":
-            _data += stream.read(31)
-            _fld = fields.lookupFor(_data[11]).fromString(_data, _pos)
-            _obj._addField(_fld)
-            _pos = _fld.end
-            _data = stream.read(1)
-        return _obj
-    fromStream = classmethod(fromStream)
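
A standalone sketch of how the fixed 32-byte prefix is unpacked above, including the century rule applied to the last-update year (the bytes here are made up for illustration):

    import struct

    prefix = "\x03\x6e\x09\x10" + struct.pack("<I2H", 1, 65, 11) + "\0" * 20
    count, hdr_len, rec_len = struct.unpack("<I2H", prefix[4:12])
    year = ord(prefix[1])
    year += 2000 if year < 80 else 1900      # 0x6e = 110 -> 2010
    assert (count, hdr_len, rec_len, year) == (1, 65, 11, 2010)
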
-
-    ## properties
-
-    year = property(lambda self: self.lastUpdate.year)
-    month = property(lambda self: self.lastUpdate.month)
-    day = property(lambda self: self.lastUpdate.day)
-
-    @property
-    def hasMemoField(self):
-        """True if at least one field is a Memo field"""
-        for _field in self.fields:
-            if _field.isMemo:
-                return True
-        return False
-
-    def ignoreErrors(self, value):
-        """Update `ignoreErrors` flag on self and all fields"""
-        self._ignore_errors = value = bool(value)
-        for _field in self.fields:
-            _field.ignoreErrors = value
-    ignoreErrors = property(
-        lambda self: self._ignore_errors,
-        ignoreErrors,
-        doc="""Error processing mode for DBF field value conversion
-
-        if set, failing field value conversion will return
-        ``INVALID_VALUE`` instead of raising conversion error.
-
-        """)
-
-    ## object representation
-
-    def __repr__(self):
-        _rv = """\
-Version (signature): 0x%02x
-        Last update: %s
-      Header length: %d
-      Record length: %d
-       Record count: %d
- FieldName Type Len Dec
-""" % (self.signature, self.lastUpdate, self.headerLength,
-    self.recordLength, self.recordCount)
-        _rv += "\n".join(
-            ["%10s %4s %3s %3s" % _fld.fieldInfo() for _fld in self.fields]
-        )
-        return _rv
-
-    ## internal methods
-
-    def _addField(self, *defs):
-        """Internal variant of the `addField` method.
-
-        This method doesn't set `self.changed` field to True.
-
-        Return value is the total length of the appended field definitions.
-        Note: this method doesn't modify the ``recordLength`` and
-        ``headerLength`` fields. Use `addField` instead of this
-        method unless you know exactly what you're doing.
-
-        """
-        # ensure we have dbf.DbfFieldDef instances first (instantiation
-        # from a tuple could raise an error, in which case none of the
-        # definitions should be added -- all will be ignored)
-        _defs = []
-        _recordLength = self.recordLength
-        for _def in defs:
-            if isinstance(_def, fields.DbfFieldDef):
-                _obj = _def
-            else:
-                (_name, _type, _len, _dec) = (tuple(_def) + (None,) * 4)[:4]
-                _cls = fields.lookupFor(_type)
-                _obj = _cls(_name, _len, _dec, _recordLength,
-                    ignoreErrors=self._ignore_errors)
-            _recordLength += _obj.length
-            _defs.append(_obj)
-        # and now extend field definitions and
-        # update record length
-        self.fields += _defs
-        return (_recordLength - self.recordLength)
-
-    def _calcHeaderLength(self):
-        """Update self.headerLength attribute after change to header contents
-        """
-        # recalculate headerLength
-        self.headerLength = 32 + (32 * len(self.fields)) + 1
-        if self.signature == 0x30:
-            # Visual FoxPro files have 263-byte zero-filled field for backlink
-            self.headerLength += 263
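
For example, a non-FoxPro header with three field descriptors occupies 32 + (32 * 3) + 1 = 129 bytes: the fixed 32-byte prefix, 32 bytes per field descriptor and the terminating 0x0D byte; a Visual FoxPro file (signature 0x30) adds the 263-byte backlink field on top of that.
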
-
-    ## interface methods
-
-    def setMemoFile(self, memo):
-        """Attach MemoFile instance to all memo fields; check header signature
-        """
-        _has_memo = False
-        for _field in self.fields:
-            if _field.isMemo:
-                _field.file = memo
-                _has_memo = True
-        # for signatures list, see
-        # http://www.clicketyclick.dk/databases/xbase/format/dbf.html#DBF_NOTE_1_TARGET
-        # http://www.dbf2002.com/dbf-file-format.html
-        # If a memo file is attached, use 0x30 for Visual FoxPro files
-        # and 0x83 for dBASE III+.
-        if _has_memo \
-        and (self.signature not in (0x30, 0x83, 0x8B, 0xCB, 0xE5, 0xF5)):
-            if memo.is_fpt:
-                self.signature = 0x30
-            else:
-                self.signature = 0x83
-        self._calcHeaderLength()
-
-    def addField(self, *defs):
-        """Add field definition to the header.
-
-        Examples:
-            dbfh.addField(
-                ("name", "C", 20),
-                dbf.DbfCharacterFieldDef("surname", 20),
-                dbf.DbfDateFieldDef("birthdate"),
-                ("member", "L"),
-            )
-            dbfh.addField(("price", "N", 5, 2))
-            dbfh.addField(dbf.DbfNumericFieldDef("origprice", 5, 2))
-
-        """
-        if not self.recordLength:
-            self.recordLength = 1
-        self.recordLength += self._addField(*defs)
-        self._calcHeaderLength()
-        self.changed = True
-
-    def write(self, stream):
-        """Encode and write header to the stream."""
-        stream.seek(0)
-        stream.write(self.toString())
-        stream.write("".join([_fld.toString() for _fld in self.fields]))
-        stream.write(chr(0x0D))   # CR terminator at the end of all header data
-        _pos = stream.tell()
-        if _pos < self.headerLength:
-            stream.write("\0" * (self.headerLength - _pos))
-        self.changed = False
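
A usage sketch of writing a fresh header to an in-memory stream, assuming the package imports as ``dbfpy`` and that each field descriptor serializes to 32 bytes (the same size ``fromStream`` reads them back in):

    import cStringIO
    from dbfpy import header

    dbfh = header.DbfHeader()
    dbfh.addField(("name", "C", 10))
    stream = cStringIO.StringIO()
    dbfh.write(stream)
    # 32-byte prefix + one 32-byte descriptor + 0x0D terminator
    assert len(stream.getvalue()) == dbfh.headerLength == 65
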
-
-    def toString(self):
-        """Returned 32 chars length string with encoded header."""
-        # FIXME: should keep flag and code page marks read from file
-        if self.hasMemoField:
-            _flag = "\x02"
-        else:
-            _flag = "\0"
-        _codepage = "\0"
-        return struct.pack("<4BI2H",
-            self.signature,
-            self.year - 1900,
-            self.month,
-            self.day,
-            self.recordCount,
-            self.headerLength,
-            self.recordLength) + "\0" * 16 + _flag + _codepage + "\0\0"
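
A quick size check on the layout above: the packed prefix plus the reserved bytes, the flag, the code page byte and the two trailing NULs add up to the 32 bytes the docstring promises:

    import struct

    assert struct.calcsize("<4BI2H") == 12     # signature, y, m, d, count, hdrLen, recLen
    assert 12 + 16 + 1 + 1 + 2 == 32           # + reserved, flag, code page, trailing NULs
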
-
-    def setCurrentDate(self):
-        """Update ``self.lastUpdate`` field with current date value."""
-        self.lastUpdate = datetime.date.today()
-