Contact me:

Some required Python packages:
pathos (for multiprocessing)

Important notes:
1- This library is called 'SEE'. It is meant to be a framework for implementing vision tasks. For example, one should be able to create a receptor layer that receives its input from an image; further layers can then pool over units in previous layers by applying filter kernels such as Gabors and DoGs (Difference-of-Gaussians).
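
As an illustration of the kind of filter kernel such a pooling layer might apply, here is a minimal sketch of a Difference-of-Gaussians kernel. This is a standalone example, not part of SEE's API; the function name and parameters are chosen for this sketch.

```python
import numpy as np

def dog_kernel(size, sigma_center, sigma_surround):
    """Normalized center-surround (Difference-of-Gaussians) kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_center ** 2)) / (2 * np.pi * sigma_center ** 2)
    surround = np.exp(-r2 / (2 * sigma_surround ** 2)) / (2 * np.pi * sigma_surround ** 2)
    kernel = center - surround
    return kernel - kernel.mean()  # zero mean: no response to uniform input

kernel = dog_kernel(15, sigma_center=1.0, sigma_surround=2.0)
```

Convolving an image with such a kernel yields a center-surround response map of the kind retinal ganglion cells produce.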

2- This model always associates a visual angle with a given input image. It then projects the input image onto the receptor layer, which contains units with a retinal-like distribution. This emulates the retinal sampling step.
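
The two ideas in this note can be sketched as follows: converting pixels to degrees of visual angle from the image diagonal, and placing receptor rings whose spacing grows with eccentricity. These helper names and the linear spacing model are assumptions for illustration, not SEE's actual implementation.

```python
import numpy as np

def deg_per_pixel(width, height, diag_angle_deg):
    """Degrees of visual angle per pixel, given the diagonal's visual angle."""
    return diag_angle_deg / np.hypot(width, height)

def receptor_eccentricities(max_ecc_deg, foveal_spacing_deg=0.05):
    """Ring eccentricities whose spacing grows linearly with eccentricity,
    giving dense sampling at the fovea and sparse sampling in the periphery."""
    eccs = [0.0]
    while eccs[-1] < max_ecc_deg:
        eccs.append(eccs[-1] + foveal_spacing_deg * (1.0 + eccs[-1]))
    return np.array(eccs)

rings = receptor_eccentricities(5.0)
```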

3- Further layers contain units, each of which is associated with a receptive field whose size depends on eccentricity. This emulates the cortical magnification effect observed in the ventral stream.
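
A common way to model this dependence is a receptive-field diameter that grows linearly with eccentricity; the sketch below uses that model with made-up default parameters, purely to illustrate the idea (SEE's actual parameterization may differ).

```python
def rf_diameter_deg(ecc_deg, rf0_deg=0.1, slope=0.3):
    """Receptive-field diameter (degrees) at a given eccentricity:
    a small foveal size rf0 plus a linear eccentricity term."""
    return rf0_deg + slope * ecc_deg

# foveal, parafoveal, and peripheral receptive-field sizes
sizes = [rf_diameter_deg(e) for e in (0.0, 2.0, 8.0)]
```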

4- This model is not limited to implementing attention tasks. However, in order to use this model for producing saliency maps, we provide the cseeSaliency class and the accompanying script.

Generating the saliency maps:
1- Put all the images you want to process in a folder

2- If it is the first time you run the SEE model, you have to build a seeSystem and store it. This is done by uncommenting the line

in the file. The system is then stored in a file called seeSystem in a folder called 'data'.

3- If you have already built the system previously, and you do not need to build a new system with different parameters, just leave the
above line commented.

4- Run the script using the command-line options indicated at the end of this tutorial.

5- After the code has run, you should see three new folders in the destination folder you indicated when running the script:
one folder contains the original images with small circles superimposed, indicating the extracted fixation locations.
The second folder contains black images with fixation locations as white pixels.
The third folder contains the continuous gray-scale saliency maps.

Using the script:

                               [-f FIXCOUNT] [-i INHIBDIAM]
                               [-o FOCUSCENTER [FOCUSCENTER ...]]
                               [-p PROCOUNT] [-g IMGANG]

optional arguments:
  -h, --help            show this help message and exit
  -s SOURCE, --source SOURCE
                        the path to the benchmark's image folder
  -d DEST               the path to the destination folder where results are
                        to be stored
                        the number of images to process in the source folder
  -f FIXCOUNT, --fixCount FIXCOUNT
                        the number of fixations to extract
  -i INHIBDIAM          the diameter (in visual angles) of the IOR area
  -o FOCUSCENTER [FOCUSCENTER ...]
                        the coordinates of the center of the gaze point. Example 0
  -p PROCOUNT, --procount PROCOUNT
                        the number of processors to be used for calculation;
                        type 0 to use all available processors
  -g IMGANG, --imgAng IMGANG
                        the visual angle to be occupied by the diagonal of the
                        image

The command used to produce the results on the CAT2000 test dataset provided by the MIT saliency benchmark:

/usr/bin/python -s <source-folder-of-category> -d <destination-folder> -f 250 -g 10 -i 0.1 -p 3