Project aim and summary

The NoSQL-biosets project includes scripts for indexing and querying selected free bioinformatics datasets. In addition to the datasets, the project aims to support common bioinformatics data types and formats, such as GFF. Elasticsearch and MongoDB are the two primary databases supported for most datasets in the project. Neo4j and PostgreSQL support was implemented as a third database option for a few datasets, namely IntEnz, PubTator, and HGNC.

Datasets supported

Datasets that have had more attention and have more stable support:

Datasets that have had less attention after their initial support was added to the project:

The project aims to connect the above datasets by implementing query APIs for common query patterns over individual and multiple indexes. It also includes initial work on returning query results for the IntEnz, DrugBank, HMDB, ModelSEEDdb, and MetaNetX datasets as graphs.

A sister project, HSPsDB, aims to develop index scripts for sequence-similarity search results, either in NCBI BLAST JSON format or in BLAST tabular format, which is also used by other search programs such as LAMBDA and DIAMOND. HSPsDB aims to link the indexed search results to the datasets indexed with this project, nosqlbiosets.


Download the nosqlbiosets project source code and install the required libraries:

git clone
cd nosql-biosets
pip install -r requirements.txt --user

Since this project is still in its early stages, you may need to check and modify the source code of the scripts from time to time. For this reason, install the nosqlbiosets project to your local Python library/package folders using the develop and --user options, which should allow you to run the index scripts from the project source folders:

python setup.py develop --user

Default values for the hostnames and port numbers of the Elasticsearch and MongoDB servers are read from the ./conf/dbservers.json file. Save your settings in this file to avoid entering --host and --port parameters on the command line.
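As a rough sketch, the dbservers.json file could look like the following; the exact key names here are assumptions, so check the file shipped in the repository's ./conf/ folder for the actual structure:

```json
{
    "elasticsearch": {"host": "localhost", "port": 9200},
    "mongodb": {"host": "localhost", "port": 27017}
}
```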


Example command lines for downloading the UniProt Knowledgebase Swiss-Prot data set (~690M) and for indexing it:

$ wget\

Make sure your Elasticsearch server is running on localhost. If you are new to Elasticsearch and you are using Linux, the easiest way is to download Elasticsearch with the TAR option (~32M). After extracting the tar file, cd to your Elasticsearch folder and run the ./bin/elasticsearch command.

The downloaded UniProt XML file can now be indexed by running the following command from the nosqlbiosets project root folder. Indexing typically requires 2 to 8 hours with Elasticsearch and between 1 and 5 hours with MongoDB:

./nosqlbiosets/uniprot/ ./uniprot_sprot.xml.gz\
   --host localhost --db Elasticsearch --index uniprot
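To give a sense of what the index script has to do, here is a minimal sketch of streaming entries out of a large UniProt-style XML file using the standard-library xml.etree module. This is an illustration, not the project's actual code: the project itself uses the xmltodict library, and the element names below are simplified stand-ins for the real UniProt schema.

```python
import gzip
import io
import xml.etree.ElementTree as ET

# A tiny stand-in for uniprot_sprot.xml.gz; the real file is streamed
# the same way, entry by entry, so it never has to fit in memory.
sample = b"""<uniprot>
  <entry><name>P53_HUMAN</name></entry>
  <entry><name>INS_HUMAN</name></entry>
</uniprot>"""

def iter_entries(fileobj):
    """Yield the name of each <entry> element as it is parsed."""
    for _, elem in ET.iterparse(fileobj, events=("end",)):
        if elem.tag == "entry":
            yield elem.findtext("name")
            elem.clear()  # release the parsed subtree to keep memory flat

gz = io.BytesIO(gzip.compress(sample))
names = list(iter_entries(gzip.open(gz)))
print(names)  # ['P53_HUMAN', 'INS_HUMAN']
```

In the real script, each parsed entry would be converted to a document and sent to Elasticsearch or MongoDB in bulk batches rather than printed.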

Example query: list most mentioned gene names

curl -XGET "http://localhost:9200/uniprot/_search?pretty=true"\
 -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "genes": {
      "terms": {
        "field": "",
        "size": 5
      },
      "aggs": {
        "tids": {
          "terms": {
            "field": "",
            "size": 5
          }
        }
      }
    }
  }
}'

Check ./tests/ and ./nosqlbiosets/uniprot/ for example queries with Elasticsearch and MongoDB.
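The idea behind the "most mentioned gene names" query above can also be sketched in plain Python over a list of documents; this is only a stand-in for what the Elasticsearch terms aggregation computes server-side, and the documents and field name here are hypothetical:

```python
from collections import Counter

# Hypothetical minimal documents; real UniProt entries are far richer.
docs = [
    {"gene": "TP53"}, {"gene": "INS"}, {"gene": "TP53"},
    {"gene": "BRCA1"}, {"gene": "TP53"}, {"gene": "INS"},
]

# Equivalent of a "terms" aggregation with size 2:
# the top gene names ranked by how often they occur.
top = Counter(d["gene"] for d in docs).most_common(2)
print(top)  # [('TP53', 3), ('INS', 2)]
```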

Similar Work

  • "GFF and GTF files are loaded into SQLite3 databases, allowing much more complex manipulation of hierarchical features (e.g., genes, transcripts, and exons) than is possible with plain-text methods alone"

    We were inspired by the gffutils project. Needless to say, the nosql-biosets project does not yet have a level of maturity comparable to the gffutils library.

  • (SQLite, MySQL, PostgreSQL)

The NoSQL-biosets project has been developed at King Abdullah University of Science and Technology.

The NoSQL-biosets project is licensed under the MIT license. If you would like to support the project by selecting a different license, please let us know by creating an issue on the GitHub project page. We will help you with contacting the relevant bodies of KAUST.


  • Computers and file systems used in developing this work have been maintained by John Hanks
  • At early stages of the project we tried a few XML libraries and then settled on the xmltodict library, which helped us parse XML files without worry