- changed status to open
Weird synth_cplx behavior in Python
Hi, I was excited to come across your library, but when I use the Python API I see some weird behavior in synth_cplx. For example, check the code below:
import shtns
import numpy as np

N = 100
sh = shtns.sht(N - 1)
sh.set_grid(N, 2 * N)

# random spectral coefficients; the m = 0 modes of a real field must be real
y = np.random.randn(N // 2 * (N + 1)) + np.random.randn(N // 2 * (N + 1)) * 1j
for l in range(N):
    y[sh.idx(l, 0)] = np.real(y[sh.idx(l, 0)])

# build the equivalent coefficients for the complex transform,
# filling m < 0 from the conjugates of the m > 0 coefficients
y1 = np.zeros(N * N, dtype=np.complex128)
for l in range(N):
    for m in range(-l, l + 1):
        if m < 0:
            y1[sh.zidx(l, m)] = np.conj(y[sh.idx(l, -m)]) * (-1 if m % 2 == 1 else 1)
        else:
            y1[sh.zidx(l, m)] = y[sh.idx(l, m)]

x = sh.synth(y)
for i in range(20):
    x1 = sh.synth_cplx(y1)
    print(np.abs(x - x1).max())
This code compares the output of synth_cplx with that of synth; the difference should stay within a small tolerance. However, the actual behavior is that the error grows exponentially with every call. Below is a sample output on my laptop:
8.119531996600342e-12
1.4155416585174949e-11
1.7963408535310025e-10
9.980324174861112e-10
2.7691414832595017e-09
1.2769671945748624e-08
5.570217881739186e-08
2.8801474909439787e-07
9.280819620991512e-07
3.526773696904164e-06
1.4855976622950533e-05
5.6843605320204034e-05
0.00031710572009338236
0.001118229435879152
0.005015502662808095
0.018752983821505687
0.05690694744962309
0.28079629140733436
1.3102422041232116
3.8581097568769076
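To quantify "exponentially": the errors above increase by a roughly constant factor on every call. A quick numpy check over the values copied from the sample output:

```python
import numpy as np

# errors printed by the loop above, one per synth_cplx call
errs = np.array([
    8.119531996600342e-12, 1.4155416585174949e-11, 1.7963408535310025e-10,
    9.980324174861112e-10, 2.7691414832595017e-09, 1.2769671945748624e-08,
    5.570217881739186e-08, 2.8801474909439787e-07, 9.280819620991512e-07,
    3.526773696904164e-06, 1.4855976622950533e-05, 5.6843605320204034e-05,
    0.00031710572009338236, 0.001118229435879152, 0.005015502662808095,
    0.018752983821505687, 0.05690694744962309, 0.28079629140733436,
    1.3102422041232116, 3.8581097568769076,
])
# the errors are strictly increasing, and the geometric-mean
# growth factor per call is roughly constant (about 4x)
factor = (errs[-1] / errs[0]) ** (1.0 / (len(errs) - 1))
print(f"average growth factor per call: {factor:.2f}")
```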
So it seems that some memory is not properly initialized on each call. From what I can tell, this error only appears after a call to analys/synth: if you replace the line "x = sh.synth(y)" with "x = sh.synth_cplx(y1)", subsequent synth_cplx calls work as expected.
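For readers wondering about the loop that builds y1: it encodes the symmetry of the complex coefficients of a real field, where the m < 0 coefficient is the complex conjugate of the m > 0 one, up to a sign that depends on the normalization convention. The same kind of Hermitian symmetry is easy to check with a plain FFT of a real signal (a standalone numpy illustration, not shtns code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # a real-valued signal
X = np.fft.fft(x)

# For real input, the negative-frequency coefficients are the
# complex conjugates of the positive-frequency ones.
for k in range(1, len(x) // 2):
    assert np.allclose(X[-k], np.conj(X[k]))
print("X[-k] == conj(X[k]) holds for every k")
```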
Comments (6)

repo owner:
I can reproduce the issue. Thanks for reporting, I will investigate and possibly fix as soon as possible.

-

I am having this problem as well.
Applying synth_cplx after analys_cplx to a solid cube gives the following result:
[images omitted]
This problem is not present when using synth and analys.
In the code producing these pictures, only synth_cplx and analys_cplx are called.
I also noticed that this behavior gets worse the higher the maximal considered order is.
If needed, I could try to come up with a minimal example code as well.

-

repo owner:
Before I fix the issue, here are three workarounds; choose the one that fits you best:
- add polar_opt=0 when calling the set_grid() method. In the example code above this would be sh.set_grid(N, 2 * N, polar_opt=0)
- configure shtns with the --disable-simd option. Everything will be much slower, though.
- use version 3.3.1: https://bitbucket.org/nschaeff/shtns/downloads/shtns-3.3.1-r694.tar.gz

Tim, could you confirm that your issue is also solved by these workarounds?

-

Hi Nathanaël,
thanks a bunch for the very fast response!
Setting polar_opt=0 did indeed solve the problem. I did not try the other fixes yet.
If I understand correctly, polar_opt corresponds to the eps parameter in the docs, which lets you choose how many polar values of the Legendre coefficients are neglected, right?

-

repo owner:
Yes. The zeroing associated with these polar values was incorrect, overwriting some other data.

-

repo owner - changed status to resolved
fixed in v3.4.6