Visualizing and Understanding Deep Texture Representations

Created by Tsung-Yu Lin and Subhransu Maji at UMass Amherst.

Introduction

This repository contains the code for reproducing the results in our CVPR 2016 paper:

    @inproceedings{lin2015visualizing,
      title={Visualizing and Understanding Deep Texture Representations},
      author={Lin, Tsung-Yu and Maji, Subhransu},
      booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
      year={2016}
    }

You can use the code for:

1. Inverting category labels
2. Texture synthesis
3. Manipulating images with texture attributes

This code was tested on Ubuntu 14.04 with an NVIDIA Titan X GPU and MATLAB 2014b.

Installation

This code requires the following dependencies:

  • VLFEAT, MatConvNet and B-CNN

    These dependencies are handled as git submodules and can be downloaded with:

    >> git submodule init
    >> git submodule update
    

    Follow the instructions on the VLFEAT and MatConvNet pages to install them. Our code was tested with MatConvNet version 1.0-beta18. You can retrieve a particular version of MatConvNet with git:

    >> git fetch --tags
    >> git checkout tags/v1.0-beta18
    
  • minFunc

    We use this package for L-BFGS optimization. To install it, follow the instructions on the project webpage.

  • imagequilt (optional)

    A MATLAB implementation of Efros & Freeman 2001. This package is needed to initialize the texture using image quilting with the opts.textureInit=quilt option. Alternatively, set opts.textureInit=rand to initialize the texture randomly without this package.

After installing these dependencies, modify setup.m to point to the installed locations and set up the path by running the script.
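
As a rough illustration, the edited setup.m might look like the sketch below; the relative paths are assumptions based on the submodule layout, so adjust them to your actual install locations.

    % Hypothetical setup.m contents; adjust the paths to your checkouts.
    run('vlfeat/toolbox/vl_setup.m');       % add VLFEAT to the MATLAB path
    run('matconvnet/matlab/vl_setupnn.m');  % add MatConvNet to the path
    addpath(genpath('minFunc'));            % L-BFGS optimizer
    addpath('imagequilt');                  % optional: image quilting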

Download ImageNet pre-trained CNNs

For the experiments in the paper we use imagenet-vgg-verydeep-16 to extract CNN features. Download the model and put it in the data/models directory.
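
For example, the model can be fetched directly from MATLAB; the URL below assumes the standard MatConvNet model zoo location.

    % Download the pre-trained VGG-16 model into data/models.
    % The URL is an assumption (standard MatConvNet model zoo).
    mkdir('data/models');
    urlwrite(['http://www.vlfeat.org/matconvnet/models/', ...
        'imagenet-vgg-verydeep-16.mat'], ...
        'data/models/imagenet-vgg-verydeep-16.mat');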

Pre-trained classifier models

We train linear classifiers for the given attributes using B-CNN features on several datasets. To invert categories or manipulate an image with a texture attribute, the optimization framework minimizes the negative log-likelihood of the given category. This requires pre-trained classifiers that score the probability that a category is present. We provide pre-trained classifiers using various layers of B-CNN features on the DTD, FMD, and MIT Indoor datasets. The code reads these models from the default locations:

    data/models/dtd/model_name
    data/models/fmd/model_name
    data/models/mit_indoor/model_name
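
For reference, the pre-image optimization has roughly the following form. This is a sketch: the regularizer Γ(x) and its weight λ are assumptions in the style of common pre-image methods, not the exact expression used in the code.

    % Pre-image of a target category c: minimize the negative
    % log-likelihood under the pre-trained classifier, plus an assumed
    % image regularizer Gamma(x) weighted by lambda.
    \[
        x^{*} = \arg\min_{x} \; -\log p(c \mid x) + \lambda \, \Gamma(x)
    \]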

Using GPU

You can speed up the computation using a GPU if one is available on your machine. To enable the GPU, set the option 'useGPU' to true in texture_syn(). See the example for details.
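
A hypothetical call might look like the following; only the 'useGPU' option is documented above, so the image argument and options layout are assumptions.

    % Hypothetical invocation of texture_syn() with the GPU enabled;
    % the image path and remaining options are assumptions.
    opts.useGPU = true;
    texture_syn('data/example_texture.jpg', opts);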

Inverting categories

Run invert_attribute() to see an example of inverting categories. The code starts from a random image and minimizes the negative log-likelihood to visualize the pre-image of a given category. It should produce the output below.

Output of inversion
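
A minimal sketch of this kind of inversion loop with minFunc is shown below. The objective neg_log_lik is hypothetical and stands in for the classifier loss computed from B-CNN features; the actual implementation is in invert_attribute().

    % Sketch only: neg_log_lik is a hypothetical function returning
    % [loss, gradient] for the target category.
    x0 = 10 * randn(224*224*3, 1);         % random initial image, vectorized
    opts = struct('Method', 'lbfgs', 'MaxIter', 200, 'Display', 'iter');
    x = minFunc(@(x) neg_log_lik(x, target_category), x0, opts);
    img = reshape(x, [224 224 3]);         % back to image dimensions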

Texture synthesis

Run texture_syn_demo.m to see an example of texture synthesis. The code implements Gatys et al., NIPS 2015 using Oxford's vgg-verydeep-16 network and should produce the output below. It takes about one minute to reconstruct the image on a GPU.

Output of the demo
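
The core quantity in Gatys et al. is the Gram matrix of CNN feature maps from several layers; the synthesized image is optimized so its Gram matrices match those of the target texture. A minimal sketch follows; the function name and normalization are illustrative, not the exact code in texture_syn_demo.m.

    % Gram matrix of an H x W x C feature map F from one CNN layer.
    % Texture synthesis matches these matrices between target and output.
    function G = gram_matrix(F)
        [h, w, c] = size(F);
        F = reshape(F, h*w, c);    % flatten spatial locations
        G = (F' * F) / (h * w);    % C x C feature correlation matrix
    end
    % The synthesis loss sums, over the selected layers l, the squared
    % Frobenius distance between Gram matrices of target and output:
    %   loss = sum_l w_l * norm(G_l - G_l_target, 'fro')^2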

Manipulating images with texture attributes

Run modify_attribute_content() to see an example. The code takes an image (left) and a given attribute (interlaced) as input, and produces an output (right) that preserves the content while adjusting the image toward the attribute.

Content Output