AMRKDTree rewrite, Volume Rendering Upgrades

#383 Merged at 31279a5
  1. Sam Skillman

This PR includes a complete rewrite of the AMRKDTree to allow for more arbitrary data sources, as well as upgrades to allow for MPI-enabled, opaque renderings. This comes with a change from 3 channels to 4, to allow for more arbitrary ray integration.

Other changes:

• A MosaicCamera, which utilizes multilevel parallelism
• Annotated volume renders (uses matplotlib to overlay the TF, etc.)

This PR needs a lot of review, and as a first pass I would like folks to try it out and see if it breaks a rendering script as-is. Please don't go through and comment line by line right now, since I already know of a lot of things that need fixing. I'm PR'ing now because this needs a catalyst to get going. For example, I think a lot of the non-standard cameras might not work.

Saving annotated images works like ::

  im = cam.snapshot()

  if cam.comm.rank == 0:
      # White background
      cam.save_annotated('test_ab.png', im[:, :, :4])
      # Black background
      cam.save_annotated('test_bb.png', im[:, :, :3])

Update #1: Fixing first round of comments.

Update #2: More comments addressed. TF construction links alpha/rgb channels once again, which should address the blurriness seen by @xarthisius.

Update #3: Restored find_neighbor functionality

Update #4: Added a nosetest for amr_kdtree, generating a fake AMR sim using refine_amr and a cored sphere.

Update #5: A few updates/cleanups for transfer function construction and display in the annotated image.

Comments (16)

  1. MattT

    Sam, I'm still looking it over, but I am +1 on integrating your new kD-tree. Can you also include a note on why we now have a number of routines that seem to me like they could be methods, but are instead separate functions? I am not saying they need to be methods, but your input is welcome on this.

    1. Sam Skillman author

      Hi Matt, thanks for all the comments. I'll start crossing them off when I get a chance. As for the items in question, I decided to put them outside the AMRKDTree class so that they can later be profiled, optimized, or converted to Cython. They deal almost entirely with numpy arrays, scalars, and booleans; even the communicator pieces like size and rank are now separated out. This is one of the primary differences from the old way, where everything hung off the class itself, which made it difficult to determine dependencies on "self". It seemed like a good intermediate move to split it all out first, then figure out where the slow parts are. A lot of the old code relied on having the actual Grid objects or pieces of the pf in order to get child masks, etc.
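      As an illustration of the design described above (the function and array names here are hypothetical, not the actual yt routines), a module-level helper that touches only plain arrays and scalars can be profiled in isolation and moved to Cython later without untangling any dependence on "self":

```python
import numpy as np

# Hypothetical sketch of the split-out style: a free function over plain
# NumPy arrays, with no reference to a class instance. Here a kD-tree's
# child links are stored as integer arrays, with -1 meaning "no child".
def count_leaf_nodes(left_child, right_child):
    """Count nodes with no children, assuming -1 marks 'no child'."""
    left_child = np.asarray(left_child)
    right_child = np.asarray(right_child)
    return int(np.sum((left_child == -1) & (right_child == -1)))

# A tiny three-node tree: root 0 with leaf children 1 and 2.
left = np.array([1, -1, -1])
right = np.array([2, -1, -1])
print(count_leaf_nodes(left, right))  # prints 2
```

      Because the function's inputs are just arrays, a profiler (or a later Cython port) sees exactly what data it depends on.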

      1. MattT

        Those are pretty good reasons. This is a very nice change -- thank you for doing it. If we end up needing the change in performance, moving to Cython is a good idea, but that may not be necessary for now. It's nice to have that option!

  2. MattT

    As another note: now that we are moving to a new philosophy of kD-trees and the new VR is fully integrated, I think we should reconsider how we feed arguments to the sampler and get rid of the "get args" stuff.

  3. MattT

    Hey Sam -- I've finished with comments on style and so on. Today I'll test these changes. I had a couple additional thoughts.

    • It's not clear to me we can supply a data object just yet. For instance, is it possible to use a data source for masking of data, in addition to the child masking? If I wanted to, say, only VR a sphere? Or if I had an earthquake simulation and wanted to use boolean objects to cut out the outer core, could I do that?
    • I'd like to propose eliminating the homogenized volume, as these changes address the remaining performance and functionality differences. This should go to the list, though.

    I'm a big +1 on having this go in after edits + testing.

    ...speaking of which, before we can accept, I think we need to start testing basic functionality. In the tests right now we largely use unigrid, but there is functionality to create pfs whole-cloth that are AMR, using load_amr_grids; right now we don't do that. We can use @scopatz's AMR hierarchy array generation, and from that build fake random AMR pfs, and then use those for testing. I think we really need to at this point, and this PR is a great reason to do so. What do you think?

    1. Sam Skillman author

      Hi Matt,

      You cannot supply a data object yet. All it would need is an interface to iterate over the level sets of grids held by that data object. I am also not sure yet how to handle masking of data for things like spheres.

      I haven't removed the HV yet; maybe PR that later? In general I've tried to keep backwards compatibility in this change, holding off on breaking things until later (which we should definitely do!).

      I have added a test that uses load_uniform_grid and refine_amr to build a fake AMR dataset. I didn't quite see how to make random pfs, but that is something we could add later.
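      As a rough sketch of the kind of "cored sphere" field such a test might build before handing it to load_uniform_grid and refine_amr (the array construction below is illustrative, with made-up values; the actual yt calls are omitted):

```python
import numpy as np

def cored_sphere_density(n=32, radius=0.3, core=0.1):
    """Build an n^3 density cube: an r^-2 profile inside a sphere of the
    given radius, flattened to a constant 'core' inside the core radius,
    over a low ambient background. Values are illustrative only."""
    x, y, z = np.mgrid[0:1:n * 1j, 0:1:n * 1j, 0:1:n * 1j]
    r = np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2)
    density = np.full((n, n, n), 1e-3)          # ambient background
    inside = r < radius
    density[inside] = 1.0 / np.maximum(r[inside], core) ** 2
    return density

# This dict would then be passed to load_uniform_grid, and the result
# refined with refine_amr, to build the fake AMR dataset for the test.
data = {"Density": cored_sphere_density()}
```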

  4. Britton Smith

    Sam, do you want line-by-line comments at this stage? Your PR said that you did not.

  5. MattT

    Sam, after Britton's comment, I realized I misread your PR to say the opposite of what it does. Sorry for the line-by-lines early on in the process!

  6. Kacper Kowalik

    I've tested VR on my old script. I've noticed two things:

    • the image is flipped
    • the old rendering seems to be sharper

    Example is here and here's the script

  7. Devin Silvia

    I just thought I'd chime in on this pull request. I just ran a test using a script that I know was working with the current tip of yt (rev: 0514b03), which produced this:

    Running that exact same script with the version associated with this PR, I get:

    It looks like the way the scale keyword works has changed, and the method for generating an opaque render is now different. I may have missed an explanation somewhere of how exactly opaque rendering has changed, but I thought it was worth documenting what I've encountered.

    1. Sam Skillman author

      Thanks for testing @devinsilvia, I think this is due to how the opacity is calculated in the old version vs the new. I've re-created this behavior locally. I think the new method is more explicit and easily controlled, so if you were to bump up the scale factor from what it was to that value**2, I think you would see a similar behavior to before. The nice thing with the new way is that if you have the opacity set to 10.0, then if you pass through 0.1 of the view depth at that value, it will be "opaque", so it has a better correspondence to optical depth than before. Let me know if squaring the scale factor in that image returns it to something similar to before.
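      A quick numeric illustration of the correspondence described above (the exponential absorption form here is an assumption for illustration, not necessarily the renderer's exact formula): treating the opacity value as an absorption coefficient, the transmitted fraction after crossing some fraction of the view depth is exp(-opacity * fraction).

```python
import math

def transmission(opacity, depth_fraction):
    """Transmitted light fraction under a simple absorption model
    (illustrative; not necessarily the renderer's exact formula)."""
    return math.exp(-opacity * depth_fraction)

# With opacity 10.0, crossing 0.1 of the view depth gives optical
# depth 1 (one e-folding), the scale at which the medium starts to
# look "opaque" in the sense described above.
print(round(transmission(10.0, 0.1), 3))  # prints 0.368
```

      This is why squaring the old scale factor is suggested as the conversion: the new value plays the role of an optical depth per unit view depth, rather than a raw multiplier.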

      1. Devin Silvia

        Thanks for digging into this @samskillman. When I square my scale factor, I get:

        which is much closer to the original image, though still a bit less opaque. I also see your point about the scale factor being a bit more intuitive, and now that I know what the appropriate sorts of values are, this all looks good to me.

        1. MattT

          So I guess now that we have this cleared up, we'll need to make sure people understand the difference. And overall, this seems like an improvement. I think what we need is a reminder email to yt-dev, combined with a big notice that the images might be different and scripts need minor adjusting, and then have at it?

          1. Nathan Goldbaum

            Perhaps we should also add an entry to the FAQ that we can link to if people e-mail the users list or drop in on IRC?