Need FluidKernelFFT type

Issue #28 new
Jacob Hinkle created an issue

Currently our code is full of constructs like this:

if mType == ca.MEM_HOST:
    self.diffOp = ca.FluidKernelFFTCPU()
else:
    self.diffOp = ca.FluidKernelFFTGPU()

This is needlessly verbose, and it can actually break if you compile without CUDA support, since ca.FluidKernelFFTGPU does not exist in that case.

Instead we need a higher-level object, as we have with Image3D and Field3D, so that we can instantiate it the same way as in those cases:

self.diffOp = ca.FluidKernelFFT(grid, mType)

Comments (7)

  1. Jacob Hinkle reporter

    It might also be possible to make this use the memory manager. The only quirk is that we need contiguous memory, and twice as much of it as for a normal image. Since the memory manager works in image-sized chunks, that might be a little tricky.

  2. Sam Preston

    (Reply via js...@sci.utah.edu):

    This should probably be done in the C++ code: just make a wrapper object that holds a pointer to a FluidKernelFFTInterface and allocates either a FluidKernelFFT<EXEC_CPU> or a FluidKernelFFT<EXEC_GPU>, depending on whether a MEM_HOST or MEM_DEVICE object is requested. The same will also need to be done for all the external classes in alg; they all require this if/else syntax in Python right now.

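    A rough sketch of what I mean (type and method names are illustrative: FluidKernelFFTInterface is assumed to declare the virtuals used below, and the existing class template is written as FluidKernelFFTT so its name doesn't collide with the wrapper's):

    // Sketch only -- GridInfo, Field3D, and the MEM_HOST/MEM_DEVICE
    // MemoryType enum stand for the existing PyCA types.
    class FluidKernelFFT {
    public:
        FluidKernelFFT(const GridInfo &grid, MemoryType mType)
        {
            // allocate the concrete kernel based on the requested memory type
            if (mType == MEM_HOST)
                mImpl = new FluidKernelFFTT<EXEC_CPU>();
            else
                mImpl = new FluidKernelFFTT<EXEC_GPU>();
            mImpl->setGrid(grid);
        }

        ~FluidKernelFFT() { delete mImpl; }

        // everything else just forwards through the interface pointer
        void applyInverseOperator(Field3D &out, const Field3D &in)
            { mImpl->applyInverseOperator(out, in); }

    private:
        FluidKernelFFTInterface *mImpl;
    };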

  3. Jacob Hinkle reporter

    Cool. While we are at it, we might add grid, alpha, beta, gamma, and divergenceFree to the constructor.

  4. Jacob Hinkle reporter

    After talking with Sam this afternoon, there's something we need to keep in mind: the LUT gets generated only when the grid is set or changed. In particular, it is NOT updated when running setAlpha, etc. That's why it used to take the diffOpParam type, and updating meant passing in the grid, etc. all at once. I'm not sure exposing setAlpha and friends individually is very useful, and it is probably unsafe for casual users. I propose having a constructor that takes all parameters, and an updateParameters method that also takes all parameters, with the grid required and no defaults for lpow, divergenceFree, alpha, beta, or gamma. This lets us still instantiate a diffOp on one line, and it should make us think twice when we reset parameters, reminding us that doing so is going to recompute the LUT.
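
    Concretely, I'm picturing something along these lines (just a sketch; the exact signatures are illustrative):

    // Proposed interface: every parameter is required, and the only way to
    // change anything after construction is updateParameters, which keeps
    // the LUT recompute explicit at the call site.
    class FluidKernelFFT {
    public:
        FluidKernelFFT(const GridInfo &grid, MemoryType mType,
                       float alpha, float beta, float gamma,
                       int lpow, bool divergenceFree);

        // no setAlpha()/setBeta()/...: resetting anything means passing
        // everything in again, triggering exactly one LUT rebuild
        void updateParameters(const GridInfo &grid,
                              float alpha, float beta, float gamma,
                              int lpow, bool divergenceFree);
    };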

  5. Jacob Hinkle reporter

    Since we are planning to deprecate the setAlpha method (and friends), I am wondering if it might be useful to have a macro that helps us keep track of these deprecated functions. I'm imagining inserting something like the following:

    void setAlpha(float a) {
        // adjacent string literals concatenate automatically in C++, so no
        // '+' is needed (and note that 03 would be an octal literal, hence 3)
        DEPRECATED("FluidKernelFFT::setAlpha", 0, 3,
                   "Setting diffOp parameters should now only be done using "
                   "the constructor or the setParams() function");
        _mAlpha = a;
    }
    

    What does the DEPRECATED macro do, you ask?

    • If our version is less than 0.03, it simply prints the following warning:
    WARNING: FluidKernelFFT::setAlpha is DEPRECATED and will be removed after
    version 0.03.  Setting diffOp parameters should now only be done using the
    constructor or the setParams() function.
    
    • If the version is equal to or greater than 0.03, it complains at compile time, and possibly refuses to compile, in order to remind us to actually remove it.

    I think the macro itself would be pretty painless to write, since we now have facilities for getting version numbers. I am just thinking it might be good for functions not to disappear immediately, but to hang around for, say, one or two minor versions.

    As an extra bonus, this would let us put a link to the issue tracker or wiki with some more explanation, so people aren't just bewildered when their code starts breaking.
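
    For concreteness, here's a rough sketch of how the macro could work, assuming hypothetical PYCA_VERSION_MAJOR / PYCA_VERSION_MINOR integer macros from the version facilities, and C++11 static_assert for the compile-time complaint:

    #include <iomanip>
    #include <iostream>

    // Sketch only; func and msg must be string literals.
    #define DEPRECATED(func, major, minor, msg)                               \
        do {                                                                  \
            /* at or past the removal version: refuse to compile */          \
            static_assert(PYCA_VERSION_MAJOR < (major) ||                    \
                          (PYCA_VERSION_MAJOR == (major) &&                  \
                           PYCA_VERSION_MINOR < (minor)),                    \
                          func " should have been removed by now");          \
            /* before that: just warn at runtime */                          \
            std::cerr << "WARNING: " << func                                 \
                      << " is DEPRECATED and will be removed after version " \
                      << (major) << "."                                      \
                      << std::setw(2) << std::setfill('0') << (minor)        \
                      << ".  " << msg << std::endl;                          \
        } while (0)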

  6. Jacob Hinkle reporter

    @jspreston does FluidKernelFFT actually need contiguous memory? If it just used 3 MemPools, then it wouldn't be a big deal to pad every MemPool in the memory manager by that slight amount. Then the user could create a temporary managed diffOp object like so:

    import PyCA.Core as ca

    # temporary managed diffOp: its memory comes from the memory manager
    diffOp = ca.ManagedFluidKernelFFT(grid, mType, alpha, beta, gamma)
    diffOp.applyInverseOperator(velocity, momentum)
    del diffOp  # release the managed memory
    

    In fact, if we structured it so that all Field3D objects used padded MemPools, we could just have FluidKernelFFT be a lightweight object that tests the capacity of the MemPools in a field and then does the operation in place. Or, better yet, just make applyInverseOperator a static method. The downside is the slight memory increase from growing all our fields; the upside is not having to carry a diffOp around.

  7. Jacob Hinkle reporter

    On second thought, I just realized we need the LUT to be stored. OK, this gets nasty: when we delete the managed operator, it will delete its LUT, and then it becomes the memory manager's problem to hold on to kernel parameters, which could be messy. A possible approach would be to have a global diffOp, like how ThreadMemoryManager works. That would let us get around passing diffOps everywhere, and if it used ManagedField3D for temporary memory, it would automatically be space efficient.
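
    The global could mirror how ThreadMemoryManager is set up once and then used implicitly; a purely hypothetical sketch:

    // Hypothetical global diffOp: initialize it once with all parameters
    // (building the LUT once), then fetch it anywhere instead of passing
    // a diffOp through every call chain.
    class GlobalFluidKernelFFT {
    public:
        static void Init(const GridInfo &grid, MemoryType mType,
                         float alpha, float beta, float gamma,
                         int lpow, bool divergenceFree)
        {
            delete sInstance;
            sInstance = new FluidKernelFFT(grid, mType, alpha, beta, gamma,
                                           lpow, divergenceFree);
        }

        static FluidKernelFFT& Get() { return *sInstance; }

    private:
        static FluidKernelFFT *sInstance;
    };

    FluidKernelFFT* GlobalFluidKernelFFT::sInstance = NULL;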

    That might be a decent approach, since we generally only use a single type of metric anyway, and since we are already initializing it when we pass it our parameters. Just something to think about; maybe this thread should move to the mailing list, since the original issue is pretty trivial to address.
