Abstract
We present a method for reconstructing the global position of a motion capture subject when position sensing is poor or unavailable. Capture systems such as IMU suits provide excellent pose and orientation data, but otherwise require post-processing to estimate global position. We propose a solution that trains a neural network to predict, in real time, the height and body displacement of the subject from a short window of pose and orientation data. Our training dataset contains pre-recorded data with global positions from many different capture subjects performing a wide variety of activities, so that the network learns to generalize to both similar and unseen motions. We compare two network architectures, a U-net and a traditional convolutional neural network (CNN), and observe better error properties for the U-net. We also evaluate our method on different classes of motion: results are of high quality for motions that are well represented in specialized datasets, while a more broadly sampled dataset generalizes better when input motions are far from the training examples.
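The abstract describes a network that maps a short window of per-frame pose and orientation features to root height and body displacement. The following is a minimal sketch of that input/output mapping, assuming a PyTorch-style 1D convolutional model; the feature count, window length, layer sizes, and output layout are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch only: predicts root height and horizontal displacement
# from a window of pose/orientation features. All dimensions are assumptions.
import torch
import torch.nn as nn

class DisplacementCNN(nn.Module):
    """Maps a window of per-frame pose/orientation features to root height
    and (x, z) displacement for the window's final frame."""
    def __init__(self, num_features: int = 69, window: int = 60):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(num_features, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # 3 outputs: height plus (x, z) root displacement.
        self.head = nn.Linear(128 * window, 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, num_features) -> (batch, num_features, window)
        h = self.conv(x.transpose(1, 2))
        return self.head(h.flatten(start_dim=1))

model = DisplacementCNN()
window = torch.randn(1, 60, 69)          # one window of pose/orientation data
height_and_displacement = model(window)  # shape (1, 3)
```

A U-net variant, as compared in the paper, would replace the plain convolutional stack with an encoder-decoder that downsamples and then upsamples the temporal axis with skip connections; the details of that design are not given in this record.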
Original language | English
---|---
Article number | 3479985 |
Journal | Proceedings of the ACM on Computer Graphics and Interactive Techniques |
Volume | 4 |
Issue number | 3 |
Pages (from-to) | 1-16 |
DOIs | |
Publication status | Published - 2021 |
Keywords
- IMU
- motion capture
- neural networks