Issue #5 new

Updating character or memo fields with bytes always attempts to decode using the ascii codec

Lucas Taylor
repo owner created an issue

The field-update functions for character and memo fields always attempt to decode byte strings to unicode using the ascii codec, regardless of the specified codepage. If you update a field with a unicode value, the decoding step is skipped and the update function (correctly) uses the defined codepage for encoding.

In this example the codepage is latin1, but decoding is attempted with ascii:

>>> table = dbf.Table('test_latin')
dbf.Table('test_latin.dbf', status='read-write')
>>> table.codepage
CodePage('latin1', 'latin1', '\x00')
>>> table.append({'memo': chr(148)})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "", line 4670, in append
    gather(newrecord, dictdata, drop=drop)
  File "", line 7394, in gather
  File "", line 2235, in _commit_flux
    raise DbfError("unable to write updates to disk, original data restored: %r" % (exc,))
dbf.DbfError: unable to write updates to disk, original data restored: UnicodeDecodeError('ascii', '\x94', 0, 1, 'ordinal not in range(128)')
>>> table.append({'memo': unichr(148)})
>>> print table[-1]
  0 - memo      : u'\x94'
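
As a workaround consistent with the behaviour above, decoding the byte string with the table's codepage before appending (so the ascii decoding step is never reached) works; a minimal Python 2 sketch using the same table as above:

>>> table.append({'memo': '\x94'.decode('latin1')})
>>> print table[-1]
  0 - memo      : u'\x94'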

The issue appears to be a bug in the choice of default decoders in Record._update_field_value and RecordTemplate._update_field_value.

It looks like those functions should use self._meta.decoder instead of self._meta.input_decoder.
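
For illustration only (this is not the library's code), a standalone Python 2 session showing the difference between an ascii decoder such as self._meta.input_decoder and a codepage-aware decoder, assuming both behave like standard codecs decoders:

>>> import codecs
>>> ascii_decoder = codecs.getdecoder('ascii')      # analogue of self._meta.input_decoder
>>> latin1_decoder = codecs.getdecoder('latin1')    # analogue of a latin1 table's decoder
>>> latin1_decoder('\x94')
(u'\x94', 1)
>>> ascii_decoder('\x94')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0x94 in position 0: ordinal not in range(128)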

Comments (1)

  1. stoneleaf

    Ideally, all strings going into dbf are already in unicode format. When a string is passed in that is not unicode, and the field type is not binary (not possible for db3), the input_decoding setting is used -- which defaults to ascii and is what sets self._meta.input_decoder.
