The binary neutron star gallery example gives different results with the release candidate.
The late part of the waveform plot looks very different in the release candidate than on the gallery page. For the l=2, m=2 mode of psi_4 extracted at R=300 (which seems to be the correct data file, as otherwise the scales on the y-axis don't match), the merger seems to happen slightly later and the waveform after the merger is completely different.
The gallery page plot:
The new plot:
Keyword:
Comments (34)
-
reporter -
I will try to add "GRHydro::sources_spatial_order = 4" and see if this fixes this problem as well.
-
This may well be the same issue as with the single NS in #2118, namely having to set GRHydro::sources_spatial_order = 4, since http://einsteintoolkit.org/gallery/bns/nsnstohmns.par does not set it and the default is 2. Admittedly, a too low order should normally be more dissipative and should lead to an earlier merger, but still, an order of 2 makes little sense. One would have to go back and check whether, at the time the example was created (by me), GRHydro used ADMBase::spatial_order or GRHydro::sources_spatial_order.
-
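For reference, a minimal parfile fragment with the setting discussed above (only the parameter name is taken from this thread; everything else about the gallery parfile is assumed unchanged):

```
# Use 4th order finite differencing for the hydro source terms; the
# gallery parfile nsnstohmns.par leaves this at its default of 2.
GRHydro::sources_spatial_order = 4
```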
reporter -
With GRHydro::sources_spatial_order = 4 the plot of psi_4 looks more similar to the gallery page than before. However, there are still differences after the merger, as can be seen from this plot: [[Image(mp_Psi4_l2_m2_r300.00_newer.png)]]
-
Quite likely the new run uses the C++ code instead of the Fortran one (which was the default at that time), and small differences can be present. A better comparison (for the post-merger phase) would be to consider (also) the modulus of psi_4 instead of just the real part. In the post-merger phase more than one mode is excited (with different constructive-destructive interference), and a small phase difference between the two or three excited modes may show up. Running the gallery example on Marconi.
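A sketch of the modulus comparison suggested here (the three-column layout t, Re(psi4), Im(psi4) of the Multipole `.asc` file is an assumption; check the file header):

```python
import numpy as np

def psi4_modulus(fname):
    """Return (t, |psi4|) from a Multipole .asc file.

    Assumes columns t, Re(psi4), Im(psi4); check the file header."""
    t, re, im = np.loadtxt(fname, unpack=True)
    # The modulus is insensitive to phase interference between the
    # excited modes, unlike the real part alone, so it is a fairer
    # post-merger comparison.
    return t, np.hypot(re, im)
```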
-
- changed status to open
- assigned issue to
-
Peter, what is the status of this ticket?
-
- changed watchers to knarrff
Putting in my 2 cents, but no solution. I am currently also investigating a difference in our NSNS production runs when we move from the 2015_05 release to the current one. The details are of course different, but the most notable one is also a difference in the post-BH-formation GW form, so CCing me here.
-
Peter, any progress?
-
reporter -
A part of the difference comes from a change to Meudon_Bin_NS to use EOS_Omni in commit 1b9d6a3eff34224a357a61e3b74d5468f8efa82b. After this change some variables have slightly different profiles on the initial data slice. This affects: hydrobase::press, hydrobase::eps, grhydro::tau and grhydro::scon. The following plot shows the maximum density as a function of time for three different runs. The red curve is for the original code at the time of the creation of the gallery example. The green curve is for the current development version of the toolkit. The blue curve is for the current development version with Meudon_Bin_NS reverted to the commit before the EOS_Omni commit.
[[Image(dens_compare.png)]]
As can be seen, the blue and red curves agree much better up to the merger, but some differences still appear after that.
The effect on the Psi_4 waveform can be seen here, where I have zoomed in on the merger and post-merger phase:
[[Image(psi4_compare.png)]]
Also in this case, the agreement is better between the red and blue curves, but significant differences still appear late in the waveform.
I don't know whether Meudon_Bin_NS is more correct before or after the commit mentioned above, and I don't know what causes the additional differences.
-
The ticket discussing the change Peter identified is #2039. Changes in Meudon_Bin seem quite likely to be capable of changing the initial density; that these changes have an effect that only shows up very late (at t=1600) is certainly unexpected and somewhat disturbing.
-
- edited description
-
- changed milestone to ET_2019_10
-
- edited description
-
I dug up the original hdf5 output file and with it the Formaline build id (build-sim-zwicky-rhaas-2014.04.16-20.59.35-19137) of the executable, and finally the source code for this build. If desired I can send you the tarball. The original run ran on Caltech’s Zwicky cluster, which used the Intel compiler (I think; one would have to dig into the simfactory repo to find out).
-
That way you could run on a current cluster once with the old code and the original parfile, and once with the new code and Peter’s best guess for a correct parfile, and see if (a) the run on the new cluster with the old code agrees with the old result and (b) the two new runs agree with each other.
-
Roland, this is quite helpful. This should make it possible for us to resolve this problem. Maria agreed to try and run the old code, do you want to contact her? Do we know how?
-
Maria is @Maria on Bitbucket (a “developer” group member of the ET organization https://bitbucket.org/account/user/einsteintoolkit/groups/developers/) so Bitbucket will send emails when her user name is mentioned (as done in this post).
I am running the original code with the original parfile on my workstation (gcc compiler, so there could be differences just because of that) right now. It takes about 24 hrs to run on all 12 cores. A real cluster node is likely faster, and one should be able to run this on one node easily (~21 GB of RAM required, it would seem).
-
The comparison run finished and the results differ a bit from the original Zwicky results, but only in the post-merger, where I would expect issues such as different levels of IEEE compliance (between gcc and icc) to matter. There are differences even compared to Peter’s “New code, old Meudon” run, though.
@Peter Diener do you have the data files for your “New code, old Meudon” run? I have attached the original Zwicky file and the one produced on my workstation (gcc 8).
-
- attached Zwicky_mp_Psi4_l2_m2_r300.00.asc
- attached ekohaes8_mp_Psi4_l2_m2_r300.00.asc
Data files for runs on Roland's workstation (ekohaes8) and the original run on Caltech's Zwicky cluster.
-
The comparison run finished and the results differ a bit from the original Zwicky results, but only in the post-merger, where I would expect issues such as different levels of IEEE compliance (between gcc and icc) to matter.
In that case I’d suggest perturbing the initial data at the least significant digit to confirm or refute this. The wave amplitude seems too high for roundoff to be responsible for the discrepancy. You could also try the same run with Lean?
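The perturbation test suggested here can be sketched with a hypothetical helper: `np.nextafter` moves each value by one ULP, i.e. a change in the least significant digit:

```python
import numpy as np

def perturb_ulp(data, seed=0):
    """Nudge every value by one ULP in a random direction.

    If re-running with initial data perturbed like this changes the
    post-merger waveform as much as switching compilers does, roundoff
    growth alone explains the discrepancy; if not, something else is
    at play."""
    rng = np.random.default_rng(seed)
    # Pick, per element, whether to step toward -inf or +inf.
    direction = np.where(rng.random(np.shape(data)) < 0.5, -np.inf, np.inf)
    return np.nextafter(data, direction)
```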
-
That may be a quick(ish, since it takes a day to get to merger) check.
Changing the ID, however, is not quite the same thing as running with reduced IEEE compliance: I would still run the same code as before, which differs from the code that ran on Zwicky in that the Intel compiler defaults to imprecise math; see e.g. the ever-popular https://www.nccs.nasa.gov/images/FloatingPoint_consistency.pdf and in particular the prec-sqrt section (the default is imprecise): https://software.intel.com/en-us/fortran-compiler-developer-guide-and-reference-prec-sqrt-qprec-sqrt.
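The underlying reason relaxed-math modes change results at all is that floating-point arithmetic is not associative, so the reassociation these modes license alters the rounding of intermediate results; a two-line illustration:

```python
# Reassociating a sum changes the rounding of the intermediate result,
# which is exactly the freedom -ffast-math (and Intel's imprecise
# defaults) grant the optimizer.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
assert left != right
```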
-
reporter @Roland Haas Unfortunately I don’t have the data available anymore. I ran on mike at LSU and didn’t think to copy the data files somewhere else before they were purged from the /work filesystem.
-
I reran the same code, same ID, on the same workstation using gcc’s `-Ofast` rather than `-O3`, which should give an indication of how much any part of the simulation is affected by relaxed IEEE conformance (in the C code) or term reordering (in the Fortran code, where the rules are not as strict as in C anyway). The results are rather spectacular in the post-merger waveform and density evolution:
where the waves shut off after t~2400 when `-Ofast` is used. Similarly for the density (similar to what Peter had already found), the result is quite noticeable:
in the post-merger phase, and already somewhat visible during merger when the density peaks.
This simulation had a rather coarse resolution for the stars (0.28 Msun, where one would normally want at least 0.25 Msun).
This would mean that the residual differences in the post-merger evolution that Peter saw could certainly be caused just by running the code on different clusters with different compilers and different optimization settings compared to the cluster originally used to produce the gallery data.
I will re-run once more, this time using `-O3` and the new Meudon-BNS thorn, i.e. 1b9d6a3eff34224a357a61e3b74d5468f8efa82b. -
Running with the new Meudon code and `-O3` gives waveforms more similar to running with just `-O3`, but rho values closer to `-Ofast` (though max(rho) is a pointwise value).
It looks quite similar to what Peter found using all new code (i.e. a smaller maximum and a deeper dip in rho just after the maximum).
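A sketch of how such cross-run waveform comparisons can be quantified (a hypothetical helper; the Multipole column layout t, Re, Im is an assumption, as above):

```python
import numpy as np

def waveform_reldiff(fname_a, fname_b):
    """Relative L2 difference of Re(psi4) between two runs.

    The second run is interpolated onto the first run's time axis, so
    the outputs need not share identical time sampling."""
    t_a, re_a, _ = np.loadtxt(fname_a, unpack=True)
    t_b, re_b, _ = np.loadtxt(fname_b, unpack=True)
    re_b_on_a = np.interp(t_a, t_b, re_b)  # align time samples
    return np.linalg.norm(re_a - re_b_on_a) / np.linalg.norm(re_a)
```

Restricting the time window to pre- or post-merger before taking the norm makes the phase-dependence of the disagreement visible.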
-
So is this resolved? Should we standardize on running this with -O2 or something to prevent variation?
-
Not quite resolved, I would say. Anyone using `-Ofast` or `-ffast-math` with the GNU compiler needs to be aware that this can subtly change values, which can cascade out of control. The same goes for the Intel compiler's default behaviour (which is closer to `-ffast-math` than regular math unless `-fp-model precise` is used).
I would like to verify that when using `ET_Proca` for most of the code but the “old” Meudon code I get the same results as using all old code (or all new code vs. old code with the new Meudon, which is about the same check) on my workstation with gcc and `-O2`. Peter still saw some differences that we could not explain. Running with `-Ofast` was mostly to get a handle on the differences to expect, e.g. between gcc and Intel compilers (or maybe between different clusters, since Fortran code always behaves more like `-ffast-math` than regular math). -
Eventually I would advocate replacing the data on the gallery page with data produced by a system compiled with the gcc compiler and `-O2`, so that there is a better chance of it agreeing between clusters. -
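A hypothetical Cactus option-list fragment along those lines (the variable names follow the usual simfactory option-list conventions; exact names should be checked against an existing option list):

```
# Pin optimization to plain -O2 and keep gcc's default strict math,
# so results stand a better chance of matching across clusters.
OPTIMISE           = yes
C_OPTIMISE_FLAGS   = -O2
CXX_OPTIMISE_FLAGS = -O2
F90_OPTIMISE_FLAGS = -O2
```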
I ran the parfile (making sure to ask for 4th order finite differencing for the hydro source terms) using ET_Proca + Meudon_Bin_NS version eec2b89a "Exact: fix integer dvision for Goedel spacetime" of einsteininitialdata. This produces results (pre-merger) that are indistinguishable from results produced using the code originally run on Zwicky. There is a difference post-merger, but given the huge difference running with or without `-Ofast` made, I am not concerned about that. Both runs used the same compilation options (using `-O2`). I will run one more run using the master version of Meudon_Bin_NS to check what differences become visible. Commit 1b9d6a3 "Meudon_Bin_NS: using EOS_Omni" of einsteininitialdata is known to change the ID on the grid a bit (see https://bitbucket.org/einsteintoolkit/tickets/issues/2119/the-binary-neutron-star-gallery-example#comment-50396585), but mostly in the post-merger region.
-
This looks really good. I’m very happy to see this progress.
-
Running the all-new (Proca) code shows differences only in the post-merger, so this does indeed seem to be ok. I will try to spend a couple more runs tracking down the first commit that introduces visible changes, though based on @Peter Diener 's previous work it will likely be the one that recomputes eps from rho after reading in the ID.
Zoom in to post-merger only:
-
Gravitational wave produced by the clean Mayer setup, compared to the original Zwicky code and the Zwicky code with the new Meudon_Bin_NS thorn:
Density evolution compared to the code that ran on Zwicky and the Zwicky code with the new Meudon_Bin_NS thorn:
So there is no real difference between Mayer and Zwicky with the new Meudon code.
-
Pushed new data to website in git hash c38f3b5 "bns: make displayed file name match name of URL" of www. Updated data on https://bitbucket.org/einsteintoolkit/www/downloads/.
-
- changed status to resolved