# Matrix exponential algorithm bug

Issue #380 resolved
pkerichang created an issue

Example code:

```cpp
#include <iostream>

#include <blaze/math/DynamicMatrix.h>

int main(int argc, char** argv) {
    using mat_t = blaze::DynamicMatrix<double, blaze::columnMajor>;

    mat_t A = {{-5.6053e5, 3.4334e4, -5.1459e-11, 1.3372e4, -2.0239e4},
               {8.9958e4, -6.8667e4, 3.4334e4, -1.1233e-13, 6.8667e3},
               {-2.0012e-11, 3.4334e4, -3.4334e4, 8.7111e-13, -4.2301e-13},
               {0, 0, 0, 0, 0},
               {0, 0, 0, 0, 0}};

    mat_t B = blaze::matexp(A);

    std::cout << "A =\n" << A << std::endl;
    std::cout << "exp(A) =\n" << B << std::endl;

    return 0;
}
```

On my computer, the output is:

```
A =
(      -560530        34334  -5.1459e-11        13372       -20239 )
(        89958       -68667        34334  -1.1233e-13       6866.7 )
(  -2.0012e-11        34334       -34334   8.7111e-13  -4.2301e-13 )
(            0            0            0            0            0 )
(            0            0            0            0            0 )

exp(A) =
(         -nan         -nan         -nan         -nan         -nan )
(         -nan         -nan         -nan         -nan         -nan )
(         -nan         -nan         -nan         -nan         -nan )
(         -nan         -nan         -nan         -nan         -nan )
(         -nan         -nan         -nan         -nan         -nan )
```

whereas the equivalent Python script:

```python
import numpy as np
from scipy.linalg import expm

test = np.array([[-5.6053e5, 3.4334e4, -5.1459e-11, 1.3372e4, -2.0239e4],
                 [8.9958e4, -6.8667e4, 3.4334e4, -1.1233e-13, 6.8667e3],
                 [-2.0012e-11, 3.4334e4, -3.4334e4, 8.7111e-13, -4.2301e-13],
                 [0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 0]])

print("A = ")
print(test)
print("exp(A) = ")
print(expm(test))
```

outputs:

```
A =
[[-5.6053e+05  3.4334e+04 -5.1459e-11  1.3372e+04 -2.0239e+04]
[ 8.9958e+04 -6.8667e+04  3.4334e+04 -1.1233e-13  6.8667e+03]
[-2.0012e-11  3.4334e+04 -3.4334e+04  8.7111e-13 -4.2301e-13]
[ 0.0000e+00  0.0000e+00  0.0000e+00  0.0000e+00  0.0000e+00]
[ 0.0000e+00  0.0000e+00  0.0000e+00  0.0000e+00  0.0000e+00]]
exp(A) =
[[ 0.          0.          0.          0.02841664 -0.02841685]
[ 0.          0.          0.          0.07445618  0.12554618]
[ 0.          0.          0.          0.07445618  0.12554618]
[ 0.          0.          0.          1.          0.        ]
[ 0.          0.          0.          0.          1.        ]]
```

It seems that the algorithm used by SciPy (https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.expm.html#scipy.linalg.expm) is much more robust when handling matrices with large entries.
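For context, SciPy's `expm` follows the Al-Mohy/Higham scaling-and-squaring algorithm: the matrix is divided by a power of two until its norm is small, a Padé approximant is evaluated there, and the result is repeatedly squared to undo the scaling. The sketch below illustrates only the scaling-and-squaring idea in pure Python, using a plain Taylor series in place of the Padé approximant; the function names are illustrative, not SciPy's (or Blaze's) API:

```python
import math

def mat_mul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm_scaling_squaring(A, terms=16):
    """exp(A) via exp(A / 2**s) ** (2**s), with a Taylor series for the
    scaled matrix. SciPy uses a Pade approximant instead of the series,
    but the scaling step that tames large entries is the same idea."""
    n = len(A)
    # Pick s so that the scaled matrix has max-norm below 0.5,
    # where the series converges quickly.
    norm = max(abs(x) for row in A for x in row)
    s = max(0, math.ceil(math.log2(norm)) + 1) if norm > 0 else 0
    factor = 2.0 ** s
    As = [[x / factor for x in row] for row in A]
    # Taylor series: exp(As) = I + As + As^2/2! + ...
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = mat_mul(term, As)                    # now As^k / (k-1)!
        term = [[x / k for x in row] for row in term]  # now As^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    # Undo the scaling by squaring s times.
    for _ in range(s):
        result = mat_mul(result, result)
    return result
```

With this approach, even a large diagonal entry such as -100 is handled gracefully: the series is only ever evaluated at a well-conditioned argument, and the repeated squaring recovers the tiny value `exp(-100)` accurately.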

Hi!

Thanks a lot for pointing us to this defect and for providing an example matrix that demonstrates it. With the given matrix we could easily reproduce the issue. As it turns out, it is not a problem of the underlying algorithm; rather, we are a little too optimistic in how we try to speed up the computation. We'll fix the issue as quickly as possible. Thanks again,

Best regards,

Klaus!

• changed status to open
1. Commit 3f32e4f improves the robustness of the matrix exponential algorithm with respect to large numbers. The fix is immediately available via cloning the Blaze repository and will be officially released in Blaze 3.9.

Hi @pkerichang !

In order to resolve this issue as quickly as possible we have improved the robustness of the current implementation of the matrix exponential. For the given matrix, the algorithm now produces the correct result. Still, the algorithm doesn’t generally work well with large numbers. Therefore we are still looking into pull request #49. Unfortunately you might have to rebase the pull request now, which shouldn’t be a big issue, though. Thanks for pointing out the problem and for providing the pull request,

Best regards,

Klaus!