Octave has built-in support for unit tests. You can add them at the end of your function file.
More explanation can be found here: http://wiki.octave.org/Tests
You can run your tests automatically by typing, for example, 'test sigmoid' in Octave, after you've added this at the end of your sigmoid.m file:
%% Define unit tests, see also https://www.gnu.org/software/octave/doc/interpreter/Test-Functions.html
%% Can be run by typing: 'test sigmoid' in Octave
%!assert (sigmoid(1200000), 1)
%!assert (sigmoid(-25000), 0)
%!assert (sigmoid(0), 0.5)
%!shared tol
%! tol = 5e-05
%!assert (sigmoid([4 5 6]), [0.9820 0.9933 0.9975], tol)
%!assert (sigmoid(magic(3)), [0.9997 0.7311 0.9975; 0.9526 0.9933 0.9991; 0.9820 0.9999 0.8808], tol)
%!assert (sigmoid(eye(2)), [0.7311 0.5000; 0.5000 0.7311], tol)
Note that tol is a shared variable that specifies the tolerance assert uses when comparing floating-point results. If you leave it out, those tests will fail because of rounding errors. Try it! And do the same for costFunction and predict. Good luck!
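If you want to sanity-check the expected values outside Octave, here is a quick pure-Python stand-in (the function name and the split into two branches are just mine; the stable form avoids overflow for the -25000 case, which plain exp(-z) would hit):

```python
import math

def sigmoid(z):
    """Numerically stable logistic function; an illustrative stand-in for sigmoid.m."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)          # for very negative z, exp(z) underflows to 0 instead of overflowing
    return e / (1.0 + e)

print(sigmoid(0))          # 0.5
print(sigmoid(1200000))    # 1.0 (exp(-z) underflows to 0)
print(sigmoid(-25000))     # 0.0
```

The values agree with the assertions in the Octave test block above to the 5e-05 tolerance.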
I was not aware of that Octave facility. Thanks for pointing this out! Here's the official Octave documentation as well:
https://www.gnu.org/software/octave/doc/interpreter/Test-Functions.html#Test-Functions
I did some searching in the Matlab documentation, and it appears Matlab has several different mechanisms for writing unit tests, but none of them is the same as the Octave mechanism. Since the Octave test code is in the form of comments, those lines should not interfere with executing the code in Matlab. Here is the Matlab documentation:
http://www.mathworks.com/help/matlab/matlab_prog/write-script-based-unit-tests.html
Here are the ex1 test cases by Paul T. Mielke and Tom Mosher:
https://www.coursera.org/learn/machine-learning/discussions/all/threads/5wftpZnyEeWKNwpBrKr_Fw
===========
computeCost:
>>computeCost( [1 2; 1 3; 1 4; 1 5], [7;6;5;4], [0.1;0.2] )
ans = 11.9450
-----
>>computeCost( [1 2 3; 1 3 4; 1 4 5; 1 5 6], [7;6;5;4], [0.1;0.2;0.3])
ans = 7.0175
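For anyone who wants to double-check these numbers outside Octave: the cost being exercised is the usual squared-error cost J(theta) = (1/(2m)) * sum((h_theta(x) - y)^2). A plain-Python sketch (compute_cost is an illustrative name, not the course file):

```python
def compute_cost(X, y, theta):
    """Squared-error cost J = (1/(2m)) * sum((X*theta - y)^2), written without NumPy."""
    m = len(y)
    total = 0.0
    for xi, yi in zip(X, y):
        h = sum(xj * tj for xj, tj in zip(xi, theta))  # hypothesis h_theta(x)
        total += (h - yi) ** 2
    return total / (2 * m)

X = [[1, 2], [1, 3], [1, 4], [1, 5]]
y = [7, 6, 5, 4]
print(round(compute_cost(X, y, [0.1, 0.2]), 4))  # 11.945
```

The same function reproduces the second case (7.0175), since nothing in it is specific to two features.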
% gradient descent 1
>>[theta J_hist] = gradientDescent([1 5; 1 2; 1 4; 1 5],[1 6 4 2]',[0 0]',0.01,1000);
% then type these variable names to display the final results
>>theta
theta =
5.2148
-0.5733
>>J_hist(1)
ans = 5.9794
>>J_hist(1000)
ans = 0.85426
% for debugging
% first iteration
theta =
0.032500
0.107500
% second iteration
theta =
0.060375
0.194887
% third iteration
theta =
0.084476
0.265867
% fourth iteration
theta =
0.10550
0.32346
% test case 2
>> [theta J_hist] = gradientDescent([1 5; 1 2],[1 6]',[.5 .5]',0.1,10);
>> theta
theta =
1.70986
0.19229
>> J_hist
J_hist =
5.8853
5.7139
5.5475
5.3861
5.2294
5.0773
4.9295
4.7861
4.6469
4.5117
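A note on J_hist: the values above line up only if the cost is recorded *after* each theta update. Here is a plain-Python sketch of batch gradient descent that reproduces both test cases (the function names are illustrative, not the course files):

```python
def cost(X, y, theta):
    """Squared-error cost J = (1/(2m)) * sum((X*theta - y)^2)."""
    m = len(y)
    return sum((sum(xj * tj for xj, tj in zip(xi, theta)) - yi) ** 2
               for xi, yi in zip(X, y)) / (2 * m)

def gradient_descent(X, y, theta, alpha, num_iters):
    """Batch gradient descent; J_hist[k] holds the cost AFTER update k+1."""
    m, n = len(y), len(theta)
    theta = list(theta)
    J_hist = []
    for _ in range(num_iters):
        errs = [sum(xj * tj for xj, tj in zip(xi, theta)) - yi
                for xi, yi in zip(X, y)]
        grad = [sum(e * xi[j] for e, xi in zip(errs, X)) / m for j in range(n)]
        theta = [t - alpha * g for t, g in zip(theta, grad)]  # simultaneous update
        J_hist.append(cost(X, y, theta))
    return theta, J_hist

theta, J_hist = gradient_descent([[1, 5], [1, 2], [1, 4], [1, 5]],
                                 [1, 6, 4, 2], [0, 0], 0.01, 1000)
print(round(J_hist[0], 4))             # 5.9794
print([round(t, 4) for t in theta])
```

The same code handles any number of features, so it also covers the gradientDescentMulti cases further down.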
% ---------------
[Xn mu sigma] = featureNormalize([1 ; 2 ; 3])
% result
Xn =
-1
0
1
mu = 2
sigma = 1
%---------------- featureNormalize
[Xn mu sigma] = featureNormalize(magic(3))
% result
Xn =
1.13389 -1.00000 0.37796
-0.75593 0.00000 0.75593
-0.37796 1.00000 -1.13389
mu =
5 5 5
sigma =
2.6458 4.0000 2.6458
%--------------
[Xn mu sigma] = featureNormalize([-ones(1,3); magic(3)])
% results
Xn =
-1.21725 -1.01472 -1.21725
1.21725 -0.56373 0.67625
-0.13525 0.33824 0.94675
0.13525 1.24022 -0.40575
mu =
3.5000 3.5000 3.5000
sigma =
3.6968 4.4347 3.6968
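These results use the *sample* standard deviation (n - 1 in the denominator, which is Octave's std default), not the population one. A pure-Python cross-check of the first case (feature_normalize is an illustrative name, not the course file):

```python
import math

def feature_normalize(X):
    """Column-wise z-score: subtract the mean, divide by the sample std (n - 1)."""
    m, n = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / m for j in range(n)]
    sigma = [math.sqrt(sum((row[j] - mu[j]) ** 2 for row in X) / (m - 1))
             for j in range(n)]
    Xn = [[(row[j] - mu[j]) / sigma[j] for j in range(n)] for row in X]
    return Xn, mu, sigma

Xn, mu, sigma = feature_normalize([[1], [2], [3]])
print(Xn, mu, sigma)  # [[-1.0], [0.0], [1.0]] [2.0] [1.0]
```

With the population std (n in the denominator) the [1; 2; 3] case would give sigma = 0.8165 instead of 1, which is the usual way people's numbers diverge from these expected values.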
% ===================
X = [ 2 1 3; 7 1 9; 1 8 1; 3 7 4 ];
y = [2 ; 5 ; 5 ; 6];
theta_test = [0.4 ; 0.6 ; 0.8];
computeCostMulti( X, y, theta_test )
% result
ans = 5.2950
% ========== gradientDescentMulti() w/ zeros for initial_theta
X = [ 2 1 3; 7 1 9; 1 8 1; 3 7 4 ];
y = [2 ; 5 ; 5 ; 6];
[theta J_hist] = gradientDescentMulti(X, y, zeros(3,1), 0.01, 10);
% results
>> theta
theta =
0.25175
0.53779
0.32282
>> J_hist
J_hist =
2.829855
0.825963
0.309163
0.150847
0.087853
0.055720
0.036678
0.024617
0.016782
0.011646
% gradientDescentMulti() with non-zero initial_theta
X = [ 2 1 3; 7 1 9; 1 8 1; 3 7 4 ];
y = [2 ; 5 ; 5 ; 6];
[theta J_hist] = gradientDescentMulti(X, y, [0.1 ; -0.2 ; 0.3], 0.01, 10);
% results
>> theta
theta =
0.18556
0.50436
0.40137
>> J_hist
J_hist =
3.632547
1.766095
1.021517
0.641008
0.415306
0.272296
0.179384
0.118479
0.078429
0.052065
% ============= normalEqn()
X = [ 2 1 3; 7 1 9; 1 8 1; 3 7 4 ];
y = [2 ; 5 ; 5 ; 6];
theta = normalEqn(X,y)
% results
theta =
0.0083857
0.5681342
0.4863732
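For reference, normalEqn solves theta = (X'X)^(-1) X'y in closed form (the course version typically uses pinv). A self-contained pure-Python cross-check via Gaussian elimination, just for verifying the numbers above (normal_eqn is an illustrative name, not the course file):

```python
def normal_eqn(X, y):
    """Solve the normal equations (X'X) * theta = X'y by Gaussian elimination."""
    n = len(X[0])
    # Build the n x n system A*theta = b, with A = X'X and b = X'y
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
    # Forward elimination with partial pivoting on the augmented matrix
    M = [Ai + [bi] for Ai, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution
    theta = [0.0] * n
    for i in range(n - 1, -1, -1):
        theta[i] = (M[i][n] - sum(M[i][j] * theta[j]
                                  for j in range(i + 1, n))) / M[i][i]
    return theta

X = [[2, 1, 3], [7, 1, 9], [1, 8, 1], [3, 7, 4]]
y = [2, 5, 5, 6]
print(normal_eqn(X, y))
```

This agrees with the posted theta to within rounding of the displayed digits.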