PhenoDC is a multi-modal deep neural network architecture for leaf counting. It counts leaves in top-view images of rosette-shaped plants using multi-modal data; in our experiments we used up to three modalities (RGB, near-infrared, and fluorescence).
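The multi-modal idea can be illustrated with a minimal NumPy sketch (Python 3): each modality goes through its own feature extractor, the per-modality features are fused, and a regression head predicts the leaf count. This is an illustration only, not the actual PhenoDC architecture; the shapes, the linear extractors, the late-fusion strategy, and the random weights are all assumptions.

```python
import numpy as np

def extract_features(image, w):
    # Per-modality feature extractor. Here it is a single linear map
    # followed by ReLU; the real network uses convolutional branches.
    return np.maximum(image.flatten() @ w, 0)

rng = np.random.default_rng(0)
# Three modalities: RGB (3 channels), near-infrared and fluorescence (1 channel each)
rgb  = rng.random((8, 8, 3))
nir  = rng.random((8, 8, 1))
fluo = rng.random((8, 8, 1))

# Independent (random, untrained) weights for each modality branch
w_rgb, w_nir, w_fluo = (rng.random((m.size, 16)) for m in (rgb, nir, fluo))

# Fuse: concatenate the per-modality features, then regress the leaf count
fused = np.concatenate([extract_features(rgb, w_rgb),
                        extract_features(nir, w_nir),
                        extract_features(fluo, w_fluo)])
w_head = rng.random(fused.size)
leaf_count = float(fused @ w_head)
print(fused.shape, leaf_count)
```

Missing modalities can be handled at the fusion step, which is one reason to keep a separate branch per modality.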
- The package should work on Python 2.7.x. We recommend installing a Miniconda environment.
- Install the following dependencies, following the directions for your platform and requirements:
git clone https://firstname.lastname@example.org/tuttoweb/pheno-deep-counter.git
## Downloading Pre-trained Networks
We release pre-trained models for the following datasets:
- CVPPP
- Multi-modal imagery
They can be downloaded at: http://www.valeriogiuffrida.academy/sites/default/files/phenodc_trained_models.tar.gz
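Once the archive has been downloaded (e.g. with `wget`), it can be unpacked with `tar -xzf`, or programmatically with Python's standard `tarfile` module. A minimal sketch (the function name is ours, not part of the package):

```python
import tarfile

def extract_models(archive_path, dest="."):
    # Unpack the pre-trained model archive (e.g. phenodc_trained_models.tar.gz)
    # into `dest` and return the list of extracted member names.
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(path=dest)
        return tar.getnames()
```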
You can execute the file `./train_mm.sh` (Mac/Linux only).
## Under the hood
Here you can find an example of how to build and train a multi-modal deep neural network with three modalities.
In this file, we show how to train the network with only one modality (RGB) on the CVPPP 2017 dataset (we do not redistribute the dataset here!).
## How to use the models
Each script accepts command-line parameters; run it with its help flag for the complete list.
To fine-tune on a specific dataset, pass the `--model` parameter followed by the path to the pre-trained weights.
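A command-line interface of this shape can be sketched with `argparse`. Only `--model` is documented above; every other flag, default, and the weights path below are hypothetical examples:

```python
import argparse

def build_parser():
    # Minimal sketch of a training script's CLI. Only --model is taken
    # from the documentation; --epochs is a hypothetical example flag.
    p = argparse.ArgumentParser(description="PhenoDC training (sketch)")
    p.add_argument("--model", type=str, default=None,
                   help="path to pre-trained weights for fine-tuning")
    p.add_argument("--epochs", type=int, default=100,
                   help="hypothetical example flag")
    return p

# Example: fine-tuning from pre-trained weights (placeholder path)
args = build_parser().parse_args(["--model", "path/to/weights.npz"])
print(args.model)
```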