The current Python avro package is packed with features but dog slow.
On a test case of about 10K records, it takes about 14 seconds to iterate over all of them. In comparison, the Java Avro SDK does it in about 1.9 seconds.

fastavro is less feature-complete than avro, but it's much faster. It iterates over the same 10K records in 2.9 seconds, and if you use it with PyPy it'll do it in 1.5 seconds (to be fair, the Java benchmark is doing some extra JSON encoding/decoding).

If the optional C extension (generated by Cython) is available, fastavro is even faster. For the same 10K records it'll run in about 1.7 seconds.
```python
import fastavro as avro

with open('weather.avro', 'rb') as fo:
    reader = avro.reader(fo)
    schema = reader.schema
    for record in reader:
        process_record(record)
```
You can also use the fastavro script from the command line to dump Avro files. Each record is printed to standard output as one line of JSON.
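A minimal sketch of the invocation, assuming the fastavro script is on your PATH and reusing the `weather.avro` file from the example above:

```shell
# Dump every record in weather.avro as one JSON object per line
fastavro weather.avro
```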
You can also dump the Avro schema:

```shell
fastavro --schema weather.avro
```
- Supports only iteration — no writing for you!
- Supports only the null and deflate codecs (avro also supports snappy)
- No reader schema
As recommended by Cython, the generated C files are distributed with the package. This has the advantage that end users do not need Cython installed. However, it means that every time you change fastavro/pyfastavro.py you need to run `make`.

For `make` to succeed you need both Python 2 and Python 3 installed, with Cython available for both. For `./test-install.sh` you'll need virtualenv.
We're currently using Travis CI for continuous integration.
See the ChangeLog.