*_evolution_method = "LeanBSSNMoL" in Lean does nothing

Issue #2487 resolved
Gabriele Bozzola created an issue

I don’t know if this is a bug or a feature. Consider for example the parameter lapse_evolution_method. Lean extends the available keywords by adding LeanBSSNMoL. However, this parameter is never checked in the code and, as long as evolution_method = “LeanBSSNMoL”, the code evolves the lapse (as you can see in the files that set the rhs to zero and compute the rhss). Quickly grepping in ML_BSSN seems to indicate that Lean is not alone in doing this.

If this is the intended behavior, I find it very counterintuitive: why would Lean extend lapse_evolution_method if it is never used? Since the keyword exists, my expectation is that the evolution is activated only when LeanBSSNMoL is selected, so if I set lapse_evolution_method to static I would like to have a fixed lapse. The same is true for shift_evolution_method.

Comments (31)

  1. Gabriele Bozzola reporter

    Another thorn that does something similar is FishboneMoncriefID. If you activate the thorn, the values of initial_data, initial_lapse, … do not matter: FishboneMoncriefID will always set its values.

  2. Miguel Zilhão

    I don’t remember the exact details, but if I recall correctly there was an error or warning if this parameter was not extended? I could be wrong, though. In any case, I believe we just copied the behaviour from ML_BSSN and extended it trivially. Indeed, we never really intended that any other choice for the lapse evolution was used. If having such a choice is important, we can try to implement something.

  3. Gabriele Bozzola reporter

The way I personally interpret the parameter lapse_evolution_method (and shift_evolution_method) is “the lapse will be evolved according to the method specified”. Therefore, my expectation is that if it is not set to LeanBSSNMoL (or ML_BSSN), Lean (or ML_BSSN) should not evolve the lapse. In this way I can set it to static (evolve the metric in a fixed gauge, which is why I needed this), or develop my own thorn to evolve with my personal choice. However, I don’t know if this is the intended behavior or not. Maybe evolution_method implies that metric, lapse, and shift have to be evolved in the same way.

    In any case, it is trivial to implement the behavior I describe: the simplest way is to put a conditional in front of the code that writes rhs_lapse and rhs_shift checking if lapse_evolution_method is LeanBSSNMoL. A better solution should also avoid computing/syncing stuff that is not needed.
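    As a schematic sketch of the simple fix described above (in the Fortran style of the thorn; routine structure and variable names are illustrative assumptions, not copied from the actual LeanBSSNMoL sources):

    ```fortran
    ! Hypothetical sketch: only write the gauge right-hand sides when
    ! this thorn is selected as the gauge evolution method. Names other
    ! than the parameters discussed in this issue are illustrative.
    if (CCTK_EQUALS(lapse_evolution_method, "LeanBSSNMoL")) then
      ! ... compute rhs_lapse as the code currently does unconditionally ...
    end if

    if (CCTK_EQUALS(shift_evolution_method, "LeanBSSNMoL")) then
      ! ... compute rhs_shift as the code currently does unconditionally ...
    end if
    ```

    As noted above, this is only the simplest variant; a better solution would also skip the storage, computation, and synchronization that the disabled branches make unnecessary.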

  4. Miguel Zilhão

    In any case, it is trivial to implement the behavior I describe: the simplest way is to put a conditional in front of the code that writes rhs_lapse and rhs_shift checking if lapse_evolution_method is LeanBSSNMoL

    This should be very simple to implement, yes. Gabriele, do you want to propose a patch, or a PR?

  5. Gabriele Bozzola reporter

    I will submit a PR fixing this in the most efficient way (i.e., avoiding unnecessary computations), but this will not come very soon. Also, this will be a breaking change.

  6. Miguel Zilhão

    What do you mean exactly with “breaking change”? Do you mean that previously working parameter files will no longer work? If so, the PR will probably not be accepted, as we wouldn’t want dozens of parameter files to suddenly break…

  7. Gabriele Bozzola reporter

    What I mean is that currently if you set evolution_method = LeanBSSNMoL you get the evolution of the metric, lapse, and shift. With the behavior I describe, one must set evolution_method = LeanBSSNMoL, shift_evolution_method = LeanBSSNMoL, lapse_evolution_method = LeanBSSNMoL to achieve the same evolution. If the other two are not explicitly set, their default value will be zero if I remember correctly. This is a breaking change because one doesn’t need to specify the other two evolution methods currently, so if one didn’t specify them, with the new version the evolution will be different. If you specified all of them (as all your examples do), nothing will change.
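    To make the breaking change concrete, under the proposed behavior a parameter file would need all three parameters set explicitly, along the lines of (a sketch; the ADMBase:: prefix is an assumption based on where these parameters are mentioned later in this thread):

    ```
    # Hypothetical parameter-file fragment: with the proposed change,
    # omitting the gauge lines would leave lapse and shift unevolved.
    ADMBase::evolution_method       = "LeanBSSNMoL"
    ADMBase::lapse_evolution_method = "LeanBSSNMoL"
    ADMBase::shift_evolution_method = "LeanBSSNMoL"
    ```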

  8. Miguel Zilhão

    Ah, I see. I think in fact we usually do set all those parameters in most parameter files, so maybe we can proceed as you propose. But there is indeed a danger of breaking some parameter files, yes. Given, as you mentioned, that this issue also occurs in ML_BSSN, maybe we should consider what would be the best generic way of handling this… @Roland Haas do you have any suggestions?

  9. Giuseppe Ficarra

    Hello, I think this happens also because of how things are set up in the Lean schedule. Of course conditional statements need to be added to the source code for the rhs of both lapse and shift as Gabriele suggested, but this should be supplemented by further conditionals in LeanBSSNMoL/schedule.ccl such that when the keywords are set to “static”, for example, instead of solving the equations, lapse and shift should assume the values from ADMBase. Does this make sense? I don’t have a clear idea of how to do this in practice, but I thought it could help the discussion.

  10. Gabriele Bozzola reporter

    @Miguel: my personal take on this is that Lean, ML_BSSN, FishboneMoncriefID (and likely several other thorns) are implementing a behavior that is incorrect and misses the point of having distinct parameters for evolving/setting metric and gauge (and hydro). Parameter files that rely on this fact and specify only evolution_method are currently working only because of what I consider to be a bug. But since this problem is widespread, I agree that we should find a general way to address this issue.

    @Giuseppe: Yes, I agree. That’s what I was trying to say when I wrote that “A better solution should also avoid computing/syncing stuff that is not needed.” The tricky part here is to see if there are computations that are common to the three rhss. If there are none, we can split the evolution into three scheduled functions (this will give us more control over storage and syncing). On the other hand, if there are computations that are common (e.g., some terms in the evolution for the metric are also required for the shift), then we have to be more careful, because we cannot simply split the function into three scheduled functions; otherwise we would repeat computations that have already been done, leading to a possible inefficiency.
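    For reference, the scheduling side Giuseppe mentions could be sketched in schedule.ccl roughly as follows (group and routine names are illustrative assumptions, not the actual names in the Lean sources):

    ```
    # Hypothetical schedule.ccl fragment: only schedule the lapse RHS
    # routine when this thorn actually evolves the lapse.
    if (CCTK_Equals(lapse_evolution_method, "LeanBSSNMoL"))
    {
      SCHEDULE lean_lapse_rhs IN MoL_CalcRHS
      {
        LANG: Fortran
      } "Compute the lapse right-hand side"
    }
    ```

    The caveat discussed above still applies: a split like this is only straightforward if the three right-hand sides share no common intermediate computations.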

  11. Roland Haas

    ml_bssn has the same issue (see #590) and a similar solution has been proposed but was delayed due to releases and desire to benchmark its impact. I have no objection to your solution.

  12. Gabriele Bozzola reporter

    I implemented this and other improvements to Lean here (and here for Proca). These are the changes:

    41be06c * origin/master LeanBSSNMoL:: split the three evolution methods
    61698a8 * LeanBSSNMoL:: Optionally compute RHS after initial data
    e9c355b * LeanBSSNMoL: Rename CCTK_BOOL to BOOLEAN
    084825a * LeanBSSNMoL, NPScalars: Add ParamCheck
    9713bb1 * LeanBSSNMoL: Calculate constraints every N iterations
    bb3f2d1 * NPScalars: Compute NP scalars every N iterations
    70f508b * LeanBSSNMoL: Compute matter terms only if needed
    c59c4b7 * LeanBSSNMoL: Add OpenMP parallelization to adm_vs_bssn
    be0f263 * NPScalars: Add OpenMP parallelism
    

    It wasn’t wise to completely decouple the three evolutions, so the rhss are still computed in the same for loop. My expectation is that this leads to a performance penalty compared to the current version (due to the additional branching in hot loops), but I’d bet that the effect is overall negligible.

    I’ve run some medium size simulations with this version, and everything seems fine. Of course, this doesn’t mean that I haven’t introduced new problems.

  13. Roland Haas

    Looking at your code, in particular the bits that have string comparisons:

    if (CCTK_EQUALS(lapse_evolution_method, "LeanBSSNMoL")) then
    

    in 3D loops: yes, these will be quite slow, since the compiler will not cache the result of the comparison and will really do the comparison on each and every loop iteration.

    Instead I would have a logical variable that is initialized outside of the loop with the result of the comparison, ie:

    logical :: want_lapse
    
    want_lapse = CCTK_EQUALS(lapse_evolution_method, "LeanBSSNMoL")
    
    do i = 1, cctk_lsh(1)
      if (want_lapse) then
        do_something
      endif
    end do
    

  14. Miguel Zilhão

    Thanks for this. I had a quick look and it seems fine. I’ll check things more carefully once I have some more time. Gabriele, can you submit a PR?

  15. Gabriele Bozzola reporter

    I submitted a PR. There’s some minor rebasing that needs to be done. Do you want me to do it, or are you going to do it when you review the changes?

  16. Miguel Zilhão

    If you can do the rebasing it would be great – thanks. I’ll review everything afterwards.

  17. Gabriele Bozzola reporter

    I think I’ve done it, but I’ve never used BitBucket for these things, so I hope everything will be okay.

  18. Miguel Zilhão

    I was just thinking that since we’re getting close to a release date, it’s probably better not to incorporate these changes in the master branch directly. Gabriele, could you amend your PR so that the target is the development branch instead? After the release date I would then merge that with master.

  19. Gabriele Bozzola reporter

    These changes were merged into the development branch. Will they be moved to master for the upcoming release?

  20. Roland Haas

    You will have to consider what will happen to existing parfiles that may not have set the admbase parameters. These would fail (or at least behave differently), and ideally we want to reduce the cases where the same parfile produces different physics when run with different ET versions. Not checking the gauge evolution method was a bug, so in principle all “correct” parfiles that did set the gauge parameter to “lean” should continue to work fine. You may want to check what, e.g., your example parfile does.

  21. Miguel Zilhão

    I’ve merged this to master. I have not come across any parameter file that was not setting the admbase parameters, so I hope there is no problem there. I haven’t had any issue while using the development branch. Unless there is anything else, I think this issue can be closed.
