klassify Stanford ML course - test cases

Created by Miro Adamy

Octave has built-in support for unit tests. You can add them at the end of your function file.

More explanation can be found here: http://wiki.octave.org/Tests

You can run your tests automatically by typing, for example, 'test sigmoid' in Octave after you've added this at the end of your sigmoid file:

%% Define unit tests, see also https://www.gnu.org/software/octave/doc/interpreter/Test-Functions.html
%% Can be run by typing: 'test sigmoid' in Octave

%!assert (sigmoid(1200000), 1)
%!assert (sigmoid(-25000), 0)
%!assert (sigmoid(0), 0.5)

%!shared tol
%! tol = 5e-05
%!assert (sigmoid([4 5 6]), [0.9820 0.9933 0.9975], tol)
%!assert (sigmoid(magic(3)), [0.9997 0.7311 0.9975; 0.9526 0.9933 0.9991; 0.9820 0.9999 0.8808], tol)
%!assert (sigmoid(eye(2)), [0.7311 0.5000; 0.5000 0.7311], tol)

Note that tol is a variable here that specifies the tolerance used in the comparison that assert makes. If you leave it out, your tests will fail because of rounding errors. Try it! And do the same for costFunction and predict. Good luck!
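
For example, a hypothetical test block for predict could reuse the predict() test case listed further down (this assumes your predict returns a column vector of 0/1 values):

%!test
%! X = [1 1 ; 1 2.5 ; 1 3 ; 1 4];
%! theta = [-3.5 ; 1.3];
%! assert (predict(theta, X), [0; 0; 1; 1])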


I was not aware of that Octave facility. Thanks for pointing this out! Here's the official Octave documentation as well:

https://www.gnu.org/software/octave/doc/interpreter/Test-Functions.html#Test-Functions

I did some searching in the Matlab documentation and it appears that Matlab has several different mechanisms for writing unit tests, but they are not the same as the Octave mechanism. Of course the Octave test code is in the form of comments, so the presence of those lines should not interfere with the execution of the code in Matlab. Here is the Matlab documentation:

http://www.mathworks.com/help/matlab/matlab_prog/write-script-based-unit-tests.html
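
For reference, a minimal script-based test in Matlab might look like the sketch below (the file name test_sigmoid.m and the section titles are just placeholders); it is run with runtests('test_sigmoid') rather than Octave's test command:

% test_sigmoid.m -- hypothetical Matlab script-based test
tol = 5e-05;              % shared variables go before the first test section

%% scalar input
assert(abs(sigmoid(0) - 0.5) < tol)

%% vector input
assert(all(abs(sigmoid([4 5 6]) - [0.9820 0.9933 0.9975]) < tol))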

% Test Cases for sigmoid() and predict()

>> sigmoid(-5)
ans =  0.0066929

>> sigmoid(0)
ans =  0.50000

>> sigmoid(5)
ans =  0.99331

>> sigmoid([4 5 6])
ans =

   0.98201   0.99331   0.99753

>> sigmoid([-1;0;1])
ans =

   0.26894
   0.50000
   0.73106

>> V = reshape(-1:.1:.9, 4, 5);
>> sigmoid(V)
ans =

   0.26894   0.35434   0.45017   0.54983   0.64566
   0.28905   0.37754   0.47502   0.57444   0.66819
   0.31003   0.40131   0.50000   0.59869   0.68997
   0.33181   0.42556   0.52498   0.62246   0.71095
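
A minimal vectorized sigmoid that reproduces the values above (a sketch, not the official solution):

function g = sigmoid(z)
  % element-wise logistic function; works for scalars, vectors and matrices
  g = 1 ./ (1 + exp(-z));
end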

>> X = [1 1 ; 1 2.5 ; 1 3 ; 1 4];
>> theta = [-3.5 ; 1.3];

% test case for predict()
>> predict(theta, X)
ans =

   0
   0
   1
   1
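
One possible predict implementation consistent with this test case (a sketch; it assumes the vectorized sigmoid above):

function p = predict(theta, X)
  % label 1 whenever the hypothesis is at least 0.5, otherwise 0
  p = double(sigmoid(X * theta) >= 0.5);
end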
  
% ================== ex2 test cases for costFunction() and costFunctionReg()

X = [ones(3,1) magic(3)];
y = [1 0 1]';
theta = [-2 -1 1 2]';

% un-regularized
[j g] = costFunction(theta, X, y)
% or...
[j g] = costFunctionReg(theta, X, y, 0)

% results
j = 4.6832

g =
  0.31722
  0.87232
  1.64812
  2.23787

% regularized
[j g] = costFunctionReg(theta, X, y, 4)
% note: also works for ex3 lrCostFunction(theta, X, y, 4)

% results
j =  8.6832
g =

   0.31722
  -0.46102
   2.98146
   4.90454
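
The expected values above can be reproduced with a vectorized cost and gradient along these lines (a sketch, not the instructor's solution); with lambda = 0 it also matches the un-regularized costFunction results:

function [J, grad] = costFunctionReg(theta, X, y, lambda)
  m = length(y);
  h = sigmoid(X * theta);
  % cross-entropy cost plus regularization (theta(1) is not regularized)
  J = (1/m) * (-y' * log(h) - (1 - y)' * log(1 - h)) ...
      + (lambda / (2*m)) * sum(theta(2:end) .^ 2);
  grad = (1/m) * (X' * (h - y));
  grad(2:end) = grad(2:end) + (lambda / m) * theta(2:end);
end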
  
Here are the ex1 test cases by Paul T. Mielke and Tom Mosher:
https://www.coursera.org/learn/machine-learning/discussions/all/threads/5wftpZnyEeWKNwpBrKr_Fw

===========

computeCost:

>>computeCost( [1 2; 1 3; 1 4; 1 5], [7;6;5;4], [0.1;0.2] )

ans = 11.9450

-----

>>computeCost( [1 2 3; 1 3 4; 1 4 5; 1 5 6], [7;6;5;4], [0.1;0.2;0.3])

ans = 7.0175
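
Both results follow from the standard vectorized squared-error cost (a sketch):

function J = computeCost(X, y, theta)
  m = length(y);
  J = sum((X * theta - y) .^ 2) / (2 * m);   % (1/2m) * sum of squared errors
end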

% gradient descent 1

>>[theta J_hist] = gradientDescent([1 5; 1 2; 1 4; 1 5],[1 6 4 2]',[0 0]',0.01,1000);

% then type these variable names to display the final results
>>theta
theta =
    5.2148
   -0.5733
>>J_hist(1)
ans  =  5.9794
>>J_hist(1000)
ans = 0.85426


% for debugging
% first iteration 
theta = 
   0.032500
   0.107500
% second iteration
theta = 
   0.060375
   0.194887
% third iteration
theta = 
   0.084476
   0.265867
% fourth iteration
theta = 
   0.10550
   0.32346
   
% test case 2

>> [theta J_hist] = gradientDescent([1 5; 1 2],[1 6]',[.5 .5]',0.1,10);
>> theta
theta =
   1.70986
   0.19229

>> J_hist
J_hist =
   5.8853
   5.7139
   5.5475
   5.3861
   5.2294
   5.0773
   4.9295
   4.7861
   4.6469
   4.5117
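
Both test cases above (including the per-iteration theta values used for debugging) match a batch gradient descent of this shape; note that J_hist(iter) is recorded after that iteration's update (a sketch, assuming the vectorized computeCost above):

function [theta, J_hist] = gradientDescent(X, y, theta, alpha, num_iters)
  m = length(y);
  J_hist = zeros(num_iters, 1);
  for iter = 1:num_iters
    theta = theta - (alpha / m) * (X' * (X * theta - y));  % simultaneous update
    J_hist(iter) = computeCost(X, y, theta);               % cost after the update
  end
end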
   
   
% ---------------
[Xn mu sigma] = featureNormalize([1 ; 2 ; 3])

% result

Xn =
  -1
   0
   1

mu =  2
sigma =  1

%---------------- featureNormalize
[Xn mu sigma] = featureNormalize(magic(3))

% result

Xn =
   1.13389  -1.00000   0.37796
  -0.75593   0.00000   0.75593
  -0.37796   1.00000  -1.13389

mu =
   5   5   5
sigma =
   2.6458   4.0000   2.6458

%--------------
[Xn mu sigma] = featureNormalize([-ones(1,3); magic(3)])

% results

Xn =
  -1.21725  -1.01472  -1.21725
   1.21725  -0.56373   0.67625
  -0.13525   0.33824   0.94675
   0.13525   1.24022  -0.40575

mu =
   3.5000   3.5000   3.5000

sigma =
   3.6968   4.4347   3.6968
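
These results correspond to normalizing each column by its mean and sample standard deviation (a sketch; the element-wise subtraction relies on Octave's automatic broadcasting):

function [X_norm, mu, sigma] = featureNormalize(X)
  mu = mean(X);                 % column-wise mean
  sigma = std(X);               % column-wise sample standard deviation
  X_norm = (X - mu) ./ sigma;   % in older Matlab use bsxfun instead of broadcasting
end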

% ===================

X = [ 2 1 3; 7 1 9; 1 8 1; 3 7 4 ];
y = [2 ; 5 ; 5 ; 6];
theta_test = [0.4 ; 0.6 ; 0.8];
computeCostMulti( X, y, theta_test )

% result
ans =  5.2950
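
Since the computeCost sketch above is already fully vectorized, the same body works for any number of features (a sketch):

function J = computeCostMulti(X, y, theta)
  J = sum((X * theta - y) .^ 2) / (2 * length(y));
end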

% ========== gradientDescentMulti() w/ zeros for initial_theta

X = [ 2 1 3; 7 1 9; 1 8 1; 3 7 4 ];
y = [2 ; 5 ; 5 ; 6];
[theta J_hist] = gradientDescentMulti(X, y, zeros(3,1), 0.01, 10);

% results

>> theta
theta =

   0.25175
   0.53779
   0.32282

>> J_hist
J_hist =

   2.829855
   0.825963
   0.309163
   0.150847
   0.087853
   0.055720
   0.036678
   0.024617
   0.016782
   0.011646


% gradientDescentMulti() with non-zero initial_theta

X = [ 2 1 3; 7 1 9; 1 8 1; 3 7 4 ];
y = [2 ; 5 ; 5 ; 6];
[theta J_hist] = gradientDescentMulti(X, y, [0.1 ; -0.2 ; 0.3], 0.01, 10);

% results
>> theta
theta =

   0.18556
   0.50436
   0.40137

>> J_hist
J_hist =

   3.632547
   1.766095
   1.021517
   0.641008
   0.415306
   0.272296
   0.179384
   0.118479
   0.078429
   0.052065
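
Both gradientDescentMulti results above follow from the same vectorized update used in the gradientDescent sketch earlier, with the cost again recorded after each update (a sketch, assuming the computeCostMulti sketch above):

function [theta, J_hist] = gradientDescentMulti(X, y, theta, alpha, num_iters)
  m = length(y);
  J_hist = zeros(num_iters, 1);
  for iter = 1:num_iters
    theta = theta - (alpha / m) * (X' * (X * theta - y));
    J_hist(iter) = computeCostMulti(X, y, theta);
  end
end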
   
% ============= normalEqn()   
X = [ 2 1 3; 7 1 9; 1 8 1; 3 7 4 ];
y = [2 ; 5 ; 5 ; 6];
theta = normalEqn(X,y)

% results
theta =

   0.0083857
   0.5681342
   0.4863732
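
This result matches the closed-form normal equation (a sketch; pinv is used rather than inv for numerical robustness):

function theta = normalEqn(X, y)
  theta = pinv(X' * X) * X' * y;   % closed-form least-squares solution
end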
