The current Python avro package is packed with features but dog slow.
On a test case of about 10K records, it takes about 14sec to iterate over all of them. In comparison, the Java avro SDK does it in about 1.9sec.
fastavro is less feature-complete than avro; however, it's much faster. It iterates over the same 10K records in 2.9sec, and if you run it under PyPy it'll do it in 1.5sec (to be fair, the Java benchmark is doing some extra JSON encoding/decoding).
If the optional C extension (generated by Cython) is available, fastavro will be even faster, handling the same 10K records in about 1.7sec.
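Timings like the ones above can be reproduced with a simple wall-clock measurement. The sketch below uses a plain in-memory list as a stand-in for an avro reader (the record shape is hypothetical, not read from a real file):

```python
import time

# Placeholder iterable standing in for fastavro's reader over an open
# avro file; real benchmarking code would iterate the reader instead.
records = [{"station": "011990-99999", "temp": 22}] * 10000

start = time.perf_counter()
count = sum(1 for _ in records)  # consume all records
elapsed = time.perf_counter() - start

print(count, "records in", round(elapsed, 4), "sec")
```

Measuring only the iteration loop keeps file-open and schema-parse overhead out of the number being compared.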
```python
import fastavro as avro

with open('weather.avro', 'rb') as fo:
    reader = avro.reader(fo)
    schema = reader.schema
    for record in reader:
        process_record(record)
```
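`process_record` is left undefined in the snippet above; it is whatever per-record work your application needs. A toy version might look like this (the field names are hypothetical, chosen to resemble the classic weather.avro example):

```python
def process_record(record):
    # fastavro yields each record as a plain Python dict,
    # so normal dict access works.
    return record.get("temp")

print(process_record({"station": "011990-99999", "time": 0, "temp": 22}))  # -> 22
```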
You can also use the fastavro script from the command line to dump avro files.
By default fastavro prints one JSON object per line; you can use the --pretty flag to change this.
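The two output styles correspond to compact versus indented JSON encoding. A sketch of the difference, using sample dicts standing in for decoded avro records (not read from a real file):

```python
import json

# Sample records standing in for decoded avro data.
records = [
    {"station": "011990-99999", "temp": 0},
    {"station": "012650-99999", "temp": -11},
]

# Default style: one JSON object per line.
for record in records:
    print(json.dumps(record))

# --pretty style: indented, multi-line output.
for record in records:
    print(json.dumps(record, indent=4))
```

One-object-per-line output is convenient for piping into line-oriented tools; the indented form is easier to read by eye.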
You can also dump the avro schema:
```
fastavro --schema weather.avro
```
Here's the full command-line help:
```
usage: fastavro [-h] [--schema] [--codecs] [--version] [-p] [file [file ...]]

iter over avro file, emit records as JSON

positional arguments:
  file          file(s) to parse

optional arguments:
  -h, --help    show this help message and exit
  --schema      dump schema instead of records
  --codecs      print supported codecs
  --version     show program's version number and exit
  -p, --pretty  pretty print json
```
- Supports iteration only
- No writing for you!
- No reader schema
As recommended by Cython, the generated C files are distributed. This has the advantage that end users do not need Cython installed. However, it means that every time you change fastavro/pyfastavro.py you need to run make.
For make to succeed you need both python and python3 installed, with Cython available for each. For ./test-install.sh you'll need virtualenv.
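Concretely, the edit-rebuild loop might look like this (a sketch; the exact behavior depends on the project's Makefile and test-install.sh):

```shell
# Regenerate the C files after editing fastavro/pyfastavro.py
# (requires Cython for both python and python3)
make

# Verify a clean install into fresh virtualenvs
./test-install.sh
```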
We're currently using Travis CI for continuous integration.
See the ChangeLog.