//Please note I am not referring to a deblurring algorithm.//
I was trying out various Android camera apps recently and noticed that a few have a nifty feature that minimizes motion blur due to camera movement -- they use the smartphone's gyroscopic sensors to detect movement and automatically take the shot once the device is still enough. Our cameras don't have a smartphone's fancy gyroscopic sensors, but they do have the leveling sensor, so something very similar might be achievable by combining leveling-sensor monitoring with real-time image analysis. Obviously this might not make much sense with lenses that already have image stabilization.
This feature would detect camera movement and, the instant the user holds the camera still enough, automatically take the picture, thereby reducing or eliminating motion blur from shaky hands. It's essentially another way of accomplishing image stabilization, and smartphone camera apps already demonstrate that the approach works.
Discussion about this idea: https://groups.google.com/forum/?fromgroups#!topic/ml-devel/6U3cpJMMrLU