Global Position Prediction for Interactive Motion Capture

Abstract

We present a method for reconstructing the global position of motion capture data where position sensing is poor or unavailable. Capture systems, such as IMU suits, can provide excellent pose and orientation data of a capture subject, but require post-processing to estimate global position. We propose a solution that trains a neural network to predict, in real time, the height and body displacement given a short window of pose and orientation data. Our training dataset contains pre-recorded data with global positions from many different capture subjects performing a wide variety of activities, in order to train the network to generalize to both similar and unseen activities. We compare two network architectures, a U-Net and a traditional convolutional neural network (CNN), and observe better error properties for the U-Net in our results. We also evaluate our method on different classes of motion. We observe high-quality results for motions that are well represented in specialized datasets, while a more broadly sampled dataset performs better in general when input motions are far from the training examples.
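To make the setup concrete, the sketch below shows one way such a predictor could be wired up: a small temporal CNN that takes a short window of per-frame pose and orientation features and outputs root height and horizontal displacement for each frame. This is an illustrative assumption only; the use of PyTorch, the feature dimension, the window length, and the `RootPredictor` class are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a 1D temporal CNN that maps
# a short window of per-frame pose/orientation features to per-frame root
# height and horizontal displacement. All sizes here are assumptions.
import torch
import torch.nn as nn

class RootPredictor(nn.Module):
    def __init__(self, in_features=69, hidden=128):
        super().__init__()
        # Temporal convolutions over the pose window (channels = features).
        self.net = nn.Sequential(
            nn.Conv1d(in_features, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            # 3 outputs per frame: root height and (x, z) displacement.
            nn.Conv1d(hidden, 3, kernel_size=1),
        )

    def forward(self, pose_window):
        # pose_window: (batch, frames, in_features) -> (batch, frames, 3)
        x = pose_window.transpose(1, 2)
        return self.net(x).transpose(1, 2)

model = RootPredictor()
window = torch.randn(1, 60, 69)          # e.g. a 60-frame window of pose features
height_and_displacement = model(window)  # torch.Size([1, 60, 3])
```

A U-Net variant, as compared in the paper, would additionally downsample and upsample along the time axis with skip connections, but the input/output interface would be the same.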

Publication
In Proceedings of the ACM on Computer Graphics and Interactive Techniques
Paul Schreiner
Industrial PhD student @Rokoko

I am an industrial PhD student at Rokoko, working on the development of novel character animation tools based on machine learning and biomechanics.
