Julio Jerez wrote:I am kind of abandoning the idea of motion synthesis because no matter how good the method is you can never get away from the robot-looking results.
If I understand you right, I disagree. I don't want to argue, but maybe I can inspire you with an example.
Looking at your articulated joints, AFAIK you provide an interface to target a constant linear velocity, for instance to make a slider joint move or a motor rotate at constant speed.
Of course this generates robotic-looking motion, and such an interface is not useful for human motion.
Human motion rarely uses constant velocity; it is more like constant acceleration. (Constant acceleration is used in robotics as well because it's more efficient, mostly for pneumatic systems, I guess.)
Now the example is this:
Imagine you have a ball resting on the floor, and your goal is to move it up to 1 m height, where it should come to rest at zero velocity.
You also have a switch that negates gravity, so the ball accelerates upwards instead of downwards.
So all we need is a controller that flips gravity to make the ball move upwards, and at half the distance flips it again, so the ball decelerates and comes to rest at our target height.
(At that point we would want to turn gravity off, or keep switching at high frequency, to keep it at rest.)
Now it's easy to write a controller that solves this problem for any initial velocity, target velocity and target height, just by working with the quadratic equation that comes from constant gravity. But the important thing here is: the resulting motion does not look robotic or artificial - it looks nice, smooth and efficient, exactly like human motion.
I believe we can synthesize perfect human motion with simple models like this.
E.g. the controller can calculate the time remaining until the gravity switch, and we can use this time to plan ahead: in 0.8 seconds the swing foot must be at the predicted zero moment point to stop walking, or a bit behind it to keep walking, etc.
The counterpart of constant gravity in my example is the maximum force a character can get from putting its center of pressure at the edge of its stance foot. This force is not exactly constant because it depends on the angle to the COM, which it also affects, but luckily this results in an approximately linear relationship from angle to force. We get equations of motion with position-dependent acceleration, resulting in complex but still analytically solvable equations.
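For what it's worth, a linear force-angle relationship is exactly the linear inverted pendulum model, and its analytic solution can be written down directly (a sketch; `z` is an assumed constant COM height, `cop` the fixed centre of pressure):

```python
import math

def lipm_predict(x0, v0, t, cop=0.0, z=1.0, g=9.81):
    """Closed-form state of the linear inverted pendulum
    x'' = (g/z) * (x - cop): horizontal COM position and
    velocity after time t, with the COP held fixed at cop."""
    w = math.sqrt(g / z)    # pendulum frequency
    d = x0 - cop
    x = cop + d * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)
    v = d * w * math.sinh(w * t) + v0 * math.cosh(w * t)
    return x, v
```

Because the solution is closed-form, the controller can evaluate it at any future time instead of integrating step by step, which is what makes predictions like "in 0.8 s the foot must be there" cheap.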
So if the controller tells us: keep the COP at the back of the foot and in 0.8 s move it to the front of the foot,
and we use a simple IK solver to follow the COM movement predicted by the inverted pendulum, then the resulting motion should look good, because it is that simple and it's the same thing we humans do while balancing. It looks complicated because our bodies are complicated, but we do it efficiently, and algorithms can do it as well.
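One concrete answer to "where must the COP go so the COM comes to rest" is the instantaneous capture point of the linear inverted pendulum (a sketch under the same assumptions as before: constant COM height `z`):

```python
import math

def capture_point(x, v, z=1.0, g=9.81):
    """Point where the COP must be placed so the COM of a linear
    inverted pendulum (x'' = (g/z) * (x - cop)) converges to rest:
    with cop there, x(t) - cop = -(v/w) * exp(-w*t) decays to zero."""
    w = math.sqrt(g / z)
    return x + v / w
```

If that point lies outside the stance foot, the COP cannot reach it and a step is needed - which is exactly the "swing foot must be at the predicted point in 0.8 s" decision above.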
Just let the arms swing loosely, add a bit of posing for body language etc., and it should look good, I hope.
However, there is nothing wrong with using mocap, and I guess it's easier to get more things done that way (plus all the expensive equipment would still make sense, so I guess the AAA industry would prefer it anyway, while indies might tend toward an approach independent of any data).