Why do I get an even higher accuracy than reported in the paper on CUB-200-2011 with the pre-trained B-CNN and SVM models?
I wanted to test the accuracy on the CUB-200-2011 test set using the pre-trained B-CNN and SVM models, so I wrote a script based on bird_demo.m. My script is as follows:
function get_cub_accuracy()
setup;
% Default options
opts.model = 'models2/bcnn-cub-dm.mat';
opts.cubDir = '/home/grit/Documents/DuAngAng/Datasets/CUB-200-2011/CUB_200_2011/CUB_200_2011/';
opts.useGpu = true;
opts.svmPath = 'models2/svm_cub_vdm';
opts.imgPath = 'test_image.jpg';
opts.regionBorder = 0.05 ;
opts.normalization = 'sqrt_L2';
opts.topK = 1; % Number of labels to display
%% Datasets
% If the image paths are relative to some image directory, specify it here; otherwise leave it empty
opts.imagedir = '/home/grit/Documents/DuAngAng/Datasets/CUB-200-2011/CUB_200_2011/CUB_200_2011/images/';
% Path to your dataset, containing the image list, the labels, and the train/test split
opts.basedir = '/home/grit/Documents/DuAngAng/Datasets/CUB-200-2011/CUB_200_2011/CUB_200_2011';
% List of images
opts.imagelist_file = [opts.basedir '/imagelist_absolute.txt'];
% The assignment to train and test; mandatory if fine-tuning is used!
% 1 - train, 0 - test
opts.tr_ID_file = [opts.basedir '/tr_ID.txt'];
% List of class labels starting from 1
opts.labels_file = [opts.basedir '/labels.txt'];
imagelist = textread(opts.imagelist_file, '%s');
tr_ID = textread(opts.tr_ID_file, '%d'); % 1 for train; 0 for test
labels = textread(opts.labels_file, '%d');
% Get test set image indices
test_ID = find(~tr_ID);
% Load classifier
tic;
classifier = load(opts.svmPath);
% Load the bilinear models and move to GPU if necessary
load(opts.model);
if opts.useGpu
    net = net_move_to_device(net, 'gpu');
else
    net = net_move_to_device(net, 'cpu');
end
fprintf('%.2fs to load models into memory.\n', toc);
N = size(test_ID, 1); % number of test set images
count_correct_pred = 0;
for i = 1 : N
    % Index into the image list via the test split,
    % not the raw loop counter, so only test images are evaluated
    idx = test_ID(i);
    opts.imgPath = [opts.imagedir imagelist{idx}];
    % Read image
    origIm = imread(opts.imgPath);
    im = single(origIm);
    % Predict
    tic;
    % Compute the B-CNN feature for this image
    code = get_bcnn_features(net, im, ...
        'regionBorder', opts.regionBorder, ...
        'normalization', opts.normalization);
    % Make predictions
    scores = classifier.w' * code{1} + classifier.b';
    [~, pred_label] = max(scores, [], 1);
    pred_class = classifier.classes(pred_label);
    fprintf('Top %d prediction for %s:\n', opts.topK, opts.imgPath);
    fprintf('%s\n', pred_class{:});
    fprintf('%.2fs to make predictions [GPU=%d]\n', toc, opts.useGpu);
    if pred_label == labels(idx)
        count_correct_pred = count_correct_pred + 1;
    end
end
accuracy = count_correct_pred / N;
fprintf('Accuracy = %7.4f%% (%d/%d)\n', accuracy*100, count_correct_pred, N);
end
After running this script, I got a much higher accuracy of 92.0262%, as the attached picture shows, which does not match the 84.1% reported in the paper.
I was confused, so I tried to check whether you used the same training and test sets as the original CUB-200-2011 dataset, which are defined in train_test_split.txt. I ran the code again on the training set and also got a high accuracy of 91.9419%.
Did you also get such high accuracy on CUB-200-2011 with your pre-trained models? Could you explain the possible reasons for this? Thank you!
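For reference, a quick way to rule out a split mismatch is to compare tr_ID.txt directly against the official train_test_split.txt that ships with CUB-200-2011. The sketch below assumes the standard dataset layout, where each line of train_test_split.txt is `<image_id> <is_training_image>`; the path is a placeholder.

```matlab
% Sketch: check whether the custom split matches the official one.
% Assumes train_test_split.txt lines are "<image_id> <is_training_image>".
basedir  = '/path/to/CUB_200_2011';              % placeholder path
official = dlmread(fullfile(basedir, 'train_test_split.txt'));
tr_ID    = textread(fullfile(basedir, 'tr_ID.txt'), '%d');
% Column 2 of the official file: 1 = train, 0 = test
n_mismatch = sum(official(:, 2) ~= tr_ID);
fprintf('%d of %d images assigned differently\n', n_mismatch, numel(tr_ID));
```

If n_mismatch is nonzero, the two splits disagree and the reported accuracy is not comparable to the paper's.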
Comments (6)
repo owner - The accuracy should not be that high. In addition, the error on the training set should be almost 0. It seems there is some mismatch in the dataset split. You can look at the cub_get_database.m function to see the split.
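Following this pointer, one way to inspect the repo's split is to load it via cub_get_database.m and compare it with the custom tr_ID.txt. This is only a sketch: it assumes cub_get_database returns an imdb struct with an images.set field (commonly 1 = train, 3 = test in MatConvNet-style code); check the actual function for the exact field names and set codes before relying on it.

```matlab
% Sketch: compare the repo's split against the custom tr_ID.txt.
% Assumes an imdb struct with images.set (1 = train, 3 = test),
% as is common in MatConvNet-style code; verify against
% cub_get_database.m before relying on this.
imdb  = cub_get_database('/path/to/CUB_200_2011');   % placeholder path
tr_ID = textread('/path/to/CUB_200_2011/tr_ID.txt', '%d');
nTestRepo   = sum(imdb.images.set == 3);
nTestCustom = sum(tr_ID == 0);
fprintf('repo test set: %d images, custom test set: %d images\n', ...
        nTestRepo, nTestCustom);
```

If the two test-set sizes already differ, the splits cannot match; if they agree, a per-image comparison of the file names would be the next step.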
reporter - changed status to resolved