High CPU Usage When Using Expression Handlers

Issue #70 resolved
Alec Vallintine
created an issue

First off, great work on the addon! I'm quite simply thrilled that something like this is available for Blender. I hope you're able to continue the work and I'm looking forward to future updates.

So, I plan to use a large number of expressions, face units, visemes, driven facial poses, etc., and I'm probably going to hit the driver expression length limit of 256 characters. From what I understand, using handlers avoids this problem. However, when I use handlers, Blender uses a lot of CPU even when just sitting idle. I'm assuming this is because the bones get updated constantly whenever the various handlers are called. In any case, I'm concerned that I won't have much CPU left over for actual animation work.

Instead of using handlers, have you tried or considered using a custom driver function? You could pass the function whatever context it needs to return the value of the property. I'm thinking something like:

evalProperty(rig, bone, transform, axis)

If you check the "Use Self" box in the driver, you might not even have to pass the entire context to the function, although I haven't tested this.

This would also avoid the expression length limit, and might be faster than using handlers.
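
To illustrate, here is a rough sketch of how such a function could be registered and called from a short driver expression. I'm passing names rather than the objects themselves, and the property layout (and the names below) are just guesses on my part:

```python
import bpy

def evalProperty(rig_name, bone_name, transform, axis):
    """Sum the contributions of all driving morphs for one channel of a bone.
    The property layout used here is hypothetical."""
    rig = bpy.data.objects.get(rig_name)
    if rig is None:
        return 0.0
    # Assumed storage: per-bone factors keyed by morph name, with the
    # current morph values living as properties on the rig.
    factors = rig.pose.bones[bone_name].get("Daz%sProps" % transform)
    if not factors:
        return 0.0
    total = 0.0
    for morph, channels in factors.items():
        total += rig.get(morph, 0.0) * channels[axis]
    return total

# Registering the function makes it callable from a short driver expression,
# e.g.  evalProperty("MyRig", "lCheekLower", "Rot", 0)
bpy.app.driver_namespace["evalProperty"] = evalProperty
```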

Thoughts?

Comments (8)

  1. Thomas Larsson repo owner

    Custom driver functions have existed since a few commits ago, and they seem to work very well. Set the driver type to Function in the Morphs section; it is already the default. I will probably move this option to a less visible place or remove it completely in the future, because I don't think there is any reason not to use it.

  2. Alec Vallintine reporter

    Great, glad to see this change. I've been playing around with it today and it seems to be working fine. I agree that it seems like a better approach than using handlers or those long expressions.

    In related news, I was experimenting with further increasing performance in this area, since I plan to have a ton of properties driving bones. In evalMorphs() we iterate through properties on the bone (DazLocProps/DazRotProps) and also access properties on the rig in order to calculate the final value. I did some tests and found that accessing Blender object properties in this manner is significantly slower than using, for example, a standard Python dictionary. That got me wondering what would happen if we "cached" our data in a Python dictionary somewhere and accessed that instead of the properties directly.

    The challenge, of course, is keeping the cache up to date. This is relatively straightforward for the bone data because it doesn't change that often: we would just need to populate the cache when the scene loads and update it when new morphs, etc. are loaded. I tested this and got a significant improvement in speed (~35 fps to ~45 fps).
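
    To give an idea of what I tried, here is a stripped-down version of the bone cache. The property layout is simplified for illustration; the real DazLocProps/DazRotProps data would need to be copied in the same way:

    ```python
    import bpy
    from bpy.app.handlers import persistent

    _BONE_CACHE = {}

    @persistent
    def buildBoneCache(*_args):
        """Copy the per-bone driving factors into plain Python dicts once,
        so evalMorphs() never reads bpy ID properties in the per-frame loop."""
        _BONE_CACHE.clear()
        for rig in bpy.data.objects:
            if rig.type != 'ARMATURE':
                continue
            bones = {}
            for pb in rig.pose.bones:
                props = pb.get("DazRotProps")
                if props is not None:
                    # Pretend the factors live in a plain ID property mapping
                    # morph name -> per-axis factors (the real layout differs).
                    bones[pb.name] = props.to_dict()
            if bones:
                _BONE_CACHE[rig.name] = bones

    # Rebuild the cache whenever a file is loaded; loading new morphs
    # would also need to call buildBoneCache().
    bpy.app.handlers.load_post.append(buildBoneCache)
    ```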

    I also tried caching the rig properties and got another boost in performance (~45 to ~55 fps). However, keeping the rig-property cache up to date is a challenge, since these values will likely change every frame if the properties are being animated. We could use a handler to refresh the cache every frame, but that might undo any performance boost we get from the cache in the first place. I also tried converting the rig properties to custom properties with an update callback that refreshes the cache whenever a property changes, but the callbacks don't fire reliably. For instance, they don't fire during animation playback or when hitting undo, which means the cache becomes stale in those situations.
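
    For reference, this is the kind of update callback I was testing on the rig side (simplified; the real properties would be the morph values the addon creates):

    ```python
    import bpy

    _RIG_CACHE = {}

    def _updateRigCache(self, context):
        # Mirror the new value into the cache whenever the property is
        # edited in the UI. This is NOT called during animation playback
        # or on undo, which is why the cache goes stale.
        _RIG_CACHE.setdefault(self.name, {})["TestMorph"] = self.TestMorph

    # A stand-in for one of the rig's morph properties.
    bpy.types.Object.TestMorph = bpy.props.FloatProperty(
        name="Test Morph",
        default=0.0,
        min=0.0, max=1.0,
        update=_updateRigCache,
    )
    ```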

    Anyways, any thoughts on this?

  3. Thomas Larsson repo owner

    My spontaneous reaction is that caching would be very tricky to get right. It has to work with multiple characters, with file linking, and when the user renames the character.

  4. Alec Vallintine reporter

    Agreed, it's probably not a trivial thing to get right. Maintaining references to characters could be done by assigning a unique ID (a UUID or similar) to each character and keying the in-memory cache off that: if no cache entry exists for a character's ID, build it, and so on, roughly as in the sketch below. I'm not familiar enough with file linking to comment on that, but it's probably doable.
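
    Roughly what I have in mind (the ID property name and the cache builder are hypothetical):

    ```python
    import uuid
    import bpy

    _CACHE = {}

    def getCharacterCache(rig):
        """Fetch or build the cache entry for one character, keyed by a
        stable ID instead of the object name."""
        # The ID is stored on the object itself, so it survives renames.
        if "DazCacheId" not in rig:
            rig["DazCacheId"] = str(uuid.uuid4())
        key = rig["DazCacheId"]
        if key not in _CACHE:
            _CACHE[key] = buildCacheEntry(rig)  # hypothetical builder, e.g. the bone cache above
        return _CACHE[key]
    ```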

    At any rate, I'm going to resolve this issue since we now have driver functions.

  5. Thomas Larsson repo owner

    I replaced the Python loop in evalMorphs with a C loop. Provided that the loop takes up a significant fraction of the time, this should be considerably faster, and it has no unpredictable side effects.
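
    Schematically, the difference is something like the following (not the actual addon code, just the general idea of letting a single call that is implemented in C do the accumulation):

    ```python
    import numpy as np

    def evalChannelPython(factors, values):
        # Explicit Python loop: one interpreter round-trip per morph.
        total = 0.0
        for f, v in zip(factors, values):
            total += f * v
        return total

    def evalChannelC(factors, values):
        # Same arithmetic, but the loop runs inside numpy's C code.
        return float(np.dot(np.asarray(factors), np.asarray(values)))
    ```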
