#1598 Merged at a0f4cce
Source repository: MatthewTurk (branch: yt)
Destination repository: yt_analysis (branch: yt)
Author: MattT
Reviewers:
Description

This is a new implementation of AMR-aware OpenGL volume rendering.

The work was done by @chuckroz, and I'm going to take on stewarding the pull request into the main repository. There are a few enhancements and changes that may need to make their way in before it is fully functional, and I will take on documenting it, but for now I think a coarse examination of the code would be very helpful.

@chuckroz sends these comments:

The following commits are the start of a very, very basic OpenGL volume
renderer for yt. The added files, interactive_vr.py and interactive_loop.py,
are a simple implementation of volume rendering of yt datasets using cyglfw3.
The method is outlined below, along with a couple of comments about the faults
and weak points of this PR, since it is very much a WIP.

Example Usage:

```
Copy main.py from personal repos here
```
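
(For orientation, here is a minimal sketch of what such a script might look like, assuming the `RenderingContext`/`BlockCollection` interface that appears later in this discussion; the import paths and the final call that starts the event loop are assumptions, not the committed API.)

```python
import yt
# Assumed import paths, based on the files added in this PR.
from yt.visualization.volume_rendering.interactive_vr import BlockCollection
from yt.visualization.volume_rendering.interactive_loop import RenderingContext

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

# Open a GLFW window to render into.
rc = RenderingContext(1280, 960)

# Upload every PartitionedGrid of the selected data object as a 3D texture.
collection = BlockCollection()
collection.add_data(ds.all_data(), "density")

# Hypothetical call that hands the collection to the interactive main loop.
rc.start_loop(collection)
```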

Method:
Several bounding boxes are created using scale and translation matrices. These
bounding boxes are fitted onto each dataset chunk; that is, every
PartitionedGrid object gets a bounding box. Each box has geometry associated
with it that is passed down the OpenGL pipeline. For each box, the following
is done:

1. Load a 3D texture for the PartitionedGrid object's data.
2. Draw the bounding box.

The constant texture switching decreases speed by a large factor and is one of
the main issues with this method. A fragment shader (specified in the
shaders/ folder) then passes rays through each geometry fragment using a
direction obtained from the camera position. The ray passes through the
volume, simply recording the maximum intensity value, and uses that for the
color of the pixel. Note that the 3D texture is loaded into the `red` channel
only. There are likely some GPU-based tricks that could be done here to
improve performance, such as packing more data into the other color channels
of the pixel.

After all the blocks are drawn, blending is used to select the maximum
value for each screen pixel.
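
(As a rough illustration of the texture upload and max-intensity blending described above, a PyOpenGL sketch follows; the array shape, data ordering, and texture parameters are illustrative assumptions, and a current OpenGL context is assumed to already exist.)

```python
import numpy as np
from OpenGL import GL

# Stand-in for one PartitionedGrid's block of data (illustrative only).
nx, ny, nz = 32, 32, 32
grid_data = np.random.random((nx, ny, nz)).astype("float32")

# Upload the block as a single-channel (red) 3D texture.
# Assumes a current OpenGL context (e.g. a GLFW window) already exists.
texture_id = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_3D, texture_id)
GL.glTexParameteri(GL.GL_TEXTURE_3D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glTexParameteri(GL.GL_TEXTURE_3D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
GL.glTexImage3D(GL.GL_TEXTURE_3D, 0, GL.GL_R32F, nx, ny, nz, 0,
                GL.GL_RED, GL.GL_FLOAT, grid_data)

# Keep the running maximum across all drawn blocks: max-intensity blending.
GL.glEnable(GL.GL_BLEND)
GL.glBlendEquation(GL.GL_MAX)
GL.glBlendFunc(GL.GL_ONE, GL.GL_ONE)
```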

The interactive_loop.py file also includes a main_loop that allows some mild
(and buggy) interactivity.

Further Work and Improvements:

Speed is an issue. This method chokes on larger datasets due to the constant
texture switching. It's still much faster than what we have currently in some
ways, but the data loading needs some improvement.

The interface isn't great, but I tried to leave room so that it can be expanded
to arbitrary shaders and many different collections of data.

UPDATE: This is now ready for testing. Documentation is in progress, but the overall code is ready to be looked at. I anticipate that as soon as I take WIP off this, Fido will let me know about style errors, which I'll correct.

Tasks remaining: screencast, narrative documentation (short), adding the recipes to the cookbook, and ensuring docstrings exist for all user-facing operations.

  • All tasks resolved

Comments (49)

  1. Nathan Goldbaum

    Append ?w=1 to the end of the URL for this page to turn on the option to ignore whitespace diffs; that makes the real changes in this PR much easier to see.

  2. Andrew Myers

    I'd like to try this out. I'm guessing I need to install the PyOpenGL and cyglfw3 packages. Is there anything else?

  3. Andrew Myers

    Also, is there an example script somewhere that shows how to use this with one of the sample datasets?

  4. chummels

    Since the OpenGL VR is distinct from the standard VR, perhaps they should be placed in different directories in the source tree? Or at least, I think we should clearly demarcate in the source files which files pertain to which VR module so people don't get confused when trying to track things down.

    1. MattT author

      That is a good point, although I think we'd like to see them converge as much as reasonably possible. This may not be possible, but it's certainly desirable. Where do you think it should go if not in that directory?

      1. chummels

        I too would like to see them converge more, but given that they'll likely be using different source files with similar names/functionality, I think we should either (1) put them in a visualization/openGL_VR directory, or (2) put some note in the header of each source file saying explicitly whether it is for software-driven or hardware-driven VR, just so people know what is going on. I don't foresee any of the hardware-driven VR relying on any of the code from the software-driven VR, but I haven't dug deep here. Perhaps there are points of overlap?

        1. MattT author

          Until the software VR is finalized and released, we haven't explored making the modifications to it that would be necessary to get the two to interoperate. For instance, we have the BlockCollection object in the OpenGL VR, which is distinct from the VolumeSource in the software VR. This is because, right now, the changes to the software side to have it function similarly would be invasive, and it's not ready for that yet. The cameras might be able to be shared, although not in too meaningful a way until we have a more formalized projection matrix that could be shared. And from a simple usability standpoint, putting it into a subdirectory would mean that shaders would then be in yt/visualization/volume_rendering/opengl_vr/shaders/, which is quite the mouthful, and I would like to keep the layout as flat as we reasonably can.

          I'd really like to avoid another level of hierarchy unless we absolutely have to do so. I think demarcation is the best way forward.

          1. chummels

            I didn't realize that the software VR wasn't finalized. I thought that the bulk of development was completed on it as of 4 months ago.

            Demarcation is fine. I'm just wary of the situation where someone goes into the source to try to figure out how something works, and gets confused by two distinct sets of functionality and their corresponding files.

            1. MattT author

              We still have the sprint, one of the goals of which is to increase test and documentation coverage. That's what I meant by finalized. Until that's done, I'm not comfortable undertaking a major refactoring and pushing off release-worthy status of the software VR.

              1. chummels

                I don't think I ever suggested undertaking a major refactoring of the software VR or the hardware VR. I simply wanted to ensure that users don't get confused about which source code refers to which functionality, and that these two similar but distinct parts of yt's functionality have similar APIs to avoid confusion.

                1. MattT author

                  No, you didn't -- I suggested it. I believe that we should share as much code and API as possible between the two, and I think that the best way to do that is to unify the objects as much as possible. So I was saying, I empathize with what you are saying, and I agree with it, and I would like to make it happen. But the roadblocks to doing that are a, b, c. I really don't want to be misunderstood here: your comments and concerns are valid, I agree with them, but there are specific technical reasons that I would like to hold off on making the changes necessary to do the "right thing" in terms of code changes and refactorings.

                  Those technical reasons are not insurmountable, but I think they require a bit of work, which is being undertaken independently of this pull request. So any suggestions you have -- particularly about the existing user-facing API (as seen in the cookbook recipes) -- to ease that transition would be appreciated, welcomed, and valued.

                  1. chummels

                    OK, that's fair.

                    I'm not sure what timeline is envisioned for this functionality to go into the mainline code, though. Is the idea to get both software VR and hardware VR (this PR) into stable for yt 3.3? And then down the road an attempt will be made to integrate the two potentially disparate infrastructures/APIs? Or to do all of this before 3.3's release? Or to just have software VR go into 3.3?

                    It seems like rather than releasing all of the code (software and hardware VR) into the world, and then refactoring things to all work (potentially breaking people's scripts and/or bending over backwards to retain backwards compatibility), it might make more sense to pre-factor them to work together before going into stable? I understand that software VR should definitely go into 3.3, but is hardware VR ready for it at this point?

                    I say this not to hold things up, but to avoid problems in the future with refactoring: the old Pandora's box problem that once software gets released, it is harder to make major revisions to it without upsetting users.

                    1. Nathan Goldbaum

                      I think it's pretty cool and useful. We could just mark it experimental, with APIs subject to change, and print a warning about that when you import it.

                      1. MattT author

                        That was my hope, as well. I think this is a pretty cool feature that people will be willing to put up with some breakage in order to use.

                        Additionally, I think if we view the software API as stable, that will be enough -- the number of API calls and the level to which scripts need to be written for hardware VR is really, really small. I think the danger of breaking scripts around hardware VR is negligible.

                      2. chummels

                        It definitely looks useful; that wasn't the concern at all. I like the idea of marking the API as subject to change in the near future, so we don't have to bend over backwards to hold to it as things move forward.

                        • Add warning on import about experimental API
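
                        (A minimal sketch of the kind of import-time warning being suggested here; the module placement and wording are illustrative, not what was committed.)

                        ```python
                        # At the top of the interactive_vr module (illustrative only).
                        import warnings

                        warnings.warn(
                            "The interactive (OpenGL) volume rendering interface is "
                            "experimental and its API may change in future releases."
                        )
                        ```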
  5. Nathan Goldbaum

    It would be cool if there was a way to print out help text summarizing the keybindings. Either to the terminal or annotated on the volume rendering itself.

  6. Nathan Goldbaum

    While showing this off today I noticed that the tip commit of the PR is obsolete.

  7. chummels

    In general, it looks pretty good. I'd like to test it out before approving though. Can you guys let me know when the dependencies are set up in the conda channel so we can install it?

    A couple of things:

    • it seems like this is variously referred to as "opengl VR", "interactive data visualization", "interactive rendering", "hardware-accelerated VR", etc. I think it would be good to have a single name that we use consistently throughout the code and the docs when referring to this functionality so users don't get confused. Perhaps something that indicates this is similar to the VR stuff? Interactive VR? I dunno.

    • I think it would also be useful to have a section of the narrative docs explaining how this differs from the software VR (since that is also going out in this stable release and serves a similar purpose).

    • Lastly, looking over the code, it isn't immediately obvious to me how this all fits together and works. This might not be necessary for most users, but people who want to hack on it need to have some idea of how it all works. This was generally provided in the "method" docs for the software VR, for instance, and I think it is beneficial for everyone.

    • The opengl_vr cookbook recipes are not linked in the cookbook anywhere so they don't actually show up.

    1. Kacper Kowalik

      About your last point: there's a :ref:`cookbook-opengl-vr` reference in the docs piece I added. Is there any other place it should be referenced?

  8. chummels

    I tried running the code right now as described in the docs. I was able to conda install the necessary dependencies without problems, but when I ran the supplied script for use with IsolatedGalaxy, I got a segfault with the stack traceback below. This is on a new MacBook Pro using an Intel Iris Graphics 6100 1536 MB card. Ideas?

    http://pastebin.com/B97MruVb

    1. Kacper Kowalik

      Unfortunately, no ideas. I've tested this on an old MacBook Air using an Intel HD 5000 1536 MB card. It was slow, but it worked without any issues.

        1. Nathan Goldbaum

          I reproduced your seg faults using Kacper's cyglfw3 builds.

          I bet it will work if you build them yourself. I was able to `pip install cyglfw3` after doing `brew install glfw3` on my laptop.

          1. chummels

            I uninstalled and reinstalled from your conda source, and it fails again with the following segfault:

            http://pastebin.com/Dc86QbA0

            However, @Nathan Goldbaum 's suggestion of uninstalling with conda then installing with brew and pip did work. It's quite slow on this machine, taking about 60 seconds to load an image, and it gets framerates of about 1-2 fps with considerable latency, but I guess that's what I get for not having an NVIDIA graphics card. I find that dragging the mouse across the image gives unexpected rotations of the data object, but that may be partially due to the framerate issues. I'll try to test it on a system with a good GPU.

            Overall, it looks really great. The hotkeys are sort of all over the place, but I like that you've grouped the transfer function keys next to each other and the zoom keys next to each other. Is there a pan hotkey? It might be good to put pan at `a` and `d` to match the standard videogame keyboard layout to which everyone is accustomed. I'm eager to try this out with a production-level dataset!

            1. chummels

              As a follow-up to this, as @Nathan Goldbaum and @Kacper Kowalik pointed out in the Slack channel, I needed to actually remove the files that conda had installed for the "bad" version of glfw3 and cyglfw3 before reinstalling. When I did this with the updated conda packages, it worked.

  9. Nathan Goldbaum

    Overall I think the code looks very clean (except for a few minor nits, mostly related to basic documentation for developer-facing classes). I'm also happy with the level of documentation.

    Great work on this!

  10. chummels

    This is by no means a blocker, but is it possible to do this VR on a downgraded dataset, like being able to specify maximum_level or something like that? I'd think that with very large datasets, people just want to get an idea of what is happening in their data at relatively high framerates rather than worrying about the detailed structure. It would be super cool to have a hotkey for that, so a user could potentially do the old "zoom and enhance" algorithm popularized in television and film: http://boingboing.net/2009/12/17/-darren-sez-a-terrif.html

    1. Nathan Goldbaum

      I bet it'll "just work" if you do something like:

      import yt
      # Import paths assume the modules added in this PR.
      from yt.visualization.volume_rendering.interactive_vr import BlockCollection
      from yt.visualization.volume_rendering.interactive_loop import RenderingContext
      
      ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
      
      # Create a GLFW window to render into
      rc = RenderingContext(1280, 960)
      
      # Create 3D textures from all_data(), capped at the chosen refinement level
      collection = BlockCollection()
      dd = ds.all_data()
      dd.max_level = 5
      collection.add_data(dd, "density")
      

      Where max_level is whatever level you want to sample data from.

  11. chummels

    There are a couple of remaining comments I submitted that weren't addressed; however, I won't hold things up at the risk of being labeled a goalpost mover. I'll just issue a PR immediately addressing these issues.

    Good work, peeps. Looks great.