Noise-in, Bias-out: Balanced and Real-time MoCap Solving
- URL: http://arxiv.org/abs/2309.14330v1
- Date: Mon, 25 Sep 2023 17:55:24 GMT
- Title: Noise-in, Bias-out: Balanced and Real-time MoCap Solving
- Authors: Georgios Albanis and Nikolaos Zioulis and Spyridon Thermos and
Anargyros Chatzitofis and Kostas Kolomvatsos
- Abstract summary: We apply machine learning to solve noisy unstructured marker estimates in real-time.
We deliver robust marker-based Motion Capture (MoCap) even when using sparse affordable sensors.
- Score: 13.897997236684283
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Real-time optical Motion Capture (MoCap) systems have not benefited from the
advances in modern data-driven modeling. In this work we apply machine learning
to solve noisy unstructured marker estimates in real-time and deliver robust
marker-based MoCap even when using sparse affordable sensors. To achieve this
we focus on a number of challenges related to model training, namely the
sourcing of training data and their long-tailed distribution. Leveraging
representation learning we design a technique for imbalanced regression that
requires no additional data or labels and improves the performance of our model
in rare and challenging poses. By relying on a unified representation, we show
that training such a model is not bound to high-end MoCap training data
acquisition, and exploit the advances in marker-less MoCap to acquire the
necessary data. Finally, we take a step towards richer and affordable MoCap by
adapting a body model-based inverse kinematics solution to account for
measurement and inference uncertainty, further improving performance and
robustness. Project page: https://moverseai.github.io/noise-tail
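The abstract does not spell out the imbalanced-regression technique, but the idea of emphasizing rare poses can be illustrated with density-based loss reweighting in a learned embedding space. A minimal sketch under that assumption; the helper names are hypothetical and this is not the paper's exact method:

```python
# Illustrative sketch: rare samples (low-density embeddings) get larger
# loss weights. An approximation of the idea, NOT the paper's technique.
import torch
import torch.nn.functional as F

def rarity_weights(embeddings: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Inverse kernel-density weights over a batch of embeddings (B, D)."""
    sq_dists = torch.cdist(embeddings, embeddings).pow(2)              # (B, B)
    density = torch.exp(-sq_dists / (2 * bandwidth ** 2)).mean(dim=1)  # (B,)
    weights = 1.0 / (density + 1e-6)
    return weights / weights.mean()                                    # normalize to mean 1

def weighted_regression_loss(pred, target, embeddings):
    # pred/target: (B, D_out); embeddings: (B, D) from the representation model
    per_sample = F.mse_loss(pred, target, reduction="none").mean(dim=-1)  # (B,)
    return (rarity_weights(embeddings.detach()) * per_sample).mean()
```

Detaching the embeddings keeps the weights from feeding gradients back into the representation, so they act purely as a reweighting signal.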
Related papers
- Combating Missing Modalities in Egocentric Videos at Test Time [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues.
We propose a novel approach to address this issue at test time without requiring retraining.
MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
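MiDl's actual objective combines mutual information and self-distillation; as a rough illustration of adapting at test time without retraining, the sketch below uses the simpler, widely-used entropy-minimization variant on whatever modality remains available (function names are hypothetical):

```python
# Generic test-time adaptation sketch for a missing modality.
# This shows entropy minimization, a stand-in for MiDl's MI + distillation objective.
import torch
import torch.nn.functional as F

def adapt_on_missing_modality(model, available_input, steps=1, lr=1e-4):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        logits = model(available_input)        # forward with the remaining modality
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        opt.zero_grad()
        entropy.backward()                     # minimize prediction entropy online
        opt.step()
    return model(available_input)
```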
arXiv Detail & Related papers (2024-04-23T16:01:33Z)
- Data-efficient Large Vision Models through Sequential Autoregression [58.26179273091461]

We develop an efficient, autoregression-based vision model on a limited dataset.
We demonstrate how this model achieves proficiency in a spectrum of visual tasks spanning both high-level and low-level semantic understanding.
Our empirical evaluations underscore the model's agility in adapting to various tasks, heralding a significant reduction in the parameter footprint.
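A condensed sketch of the core training step such a model implies: visual tokens are predicted autoregressively with next-token cross-entropy, as in language modeling. The tokenizer and model are assumptions, not specified by the abstract:

```python
# Next-token prediction over discrete visual tokens (e.g., from a VQ tokenizer).
import torch
import torch.nn.functional as F

def autoregressive_step(model, tokens):
    """tokens: (B, T) discrete visual token ids; model returns (B, T-1, vocab) logits."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```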
arXiv Detail & Related papers (2024-02-07T13:41:53Z)
- MOSEL: Inference Serving Using Dynamic Modality Selection [4.849058875921672]
We introduce a form of dynamism, modality selection, where we adaptively choose modalities from inference inputs while maintaining model quality.
We introduce MOSEL, an automated inference serving system for multi-modal ML models that carefully picks input modalities per request based on user-defined performance and accuracy requirements.
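A hedged sketch of the per-request selection MOSEL describes: from a profiled table of (accuracy, latency) per modality subset, pick the cheapest subset that satisfies the request. The profile values and function names are illustrative assumptions, not MOSEL's actual API:

```python
# Pick the lowest-latency modality subset meeting the user's accuracy
# and latency requirements. Profile numbers are made up for illustration.
PROFILE = {
    ("audio",): (0.71, 12.0),          # (accuracy, latency_ms)
    ("video",): (0.78, 35.0),
    ("audio", "video"): (0.83, 41.0),
}

def select_modalities(min_accuracy: float, latency_budget_ms: float):
    feasible = [(lat, mods) for mods, (acc, lat) in PROFILE.items()
                if acc >= min_accuracy and lat <= latency_budget_ms]
    if not feasible:
        raise ValueError("no modality subset satisfies the request")
    return min(feasible)[1]            # cheapest feasible subset

print(select_modalities(min_accuracy=0.75, latency_budget_ms=50.0))  # ('video',)
```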
arXiv Detail & Related papers (2023-10-27T20:50:56Z)
- MoMo: Momentum Models for Adaptive Learning Rates [14.392926033512069]
We develop new Polyak-type adaptive learning rates that can be used on top of any momentum method.
We first develop MoMo, a Momentum Model based adaptive learning rate for SGD-M.
We show how MoMo can be used in combination with any momentum-based method, and showcase this by developing MoMo-Adam.
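A condensed sketch of a Polyak-type stepsize computed from momentum averages; the full MoMo rule additionally tracks inner products with the iterates, so this simplified version conveys the idea rather than the exact algorithm:

```python
# Simplified Polyak-type stepsize on top of SGD with momentum.
# Caller initializes f_bar (e.g., to the first loss) and d_buf (zeros like grads).
import torch

def momo_like_step(params, grads, d_buf, f_bar, loss,
                   beta=0.9, lr_cap=1.0, f_star=0.0):
    f_bar = beta * f_bar + (1 - beta) * float(loss)       # momentum avg of loss
    sq_norm = 0.0
    for g, d in zip(grads, d_buf):
        d.mul_(beta).add_(g, alpha=1 - beta)              # momentum avg of gradients
        sq_norm += float(d.pow(2).sum())
    # Polyak stepsize from the averaged quantities, capped by a max learning rate
    step = min(lr_cap, max(0.0, f_bar - f_star) / (sq_norm + 1e-12))
    with torch.no_grad():
        for p, d in zip(params, d_buf):
            p.sub_(step * d)
    return f_bar                                          # carry state to next step
```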
arXiv Detail & Related papers (2023-05-12T16:25:57Z)
- AI Model Disgorgement: Methods and Choices [127.54319351058167]
We introduce a taxonomy of possible disgorgement methods that are applicable to modern machine learning systems.
We investigate the meaning of "removing the effects" of data in the trained model in a way that does not require retraining from scratch.
arXiv Detail & Related papers (2023-04-07T08:50:18Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data may stem from biases in data acquisition rather than the task itself.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
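One way to read the hybrid discriminative-generative training is sketched below: an encoder feeds a classifier, while a separate nuisance code is kept so a decoder can still reconstruct the input. Module names and weighting are assumptions, not the paper's exact design:

```python
# Hybrid loss: discriminative (classification) + generative (reconstruction).
import torch
import torch.nn.functional as F

def hybrid_loss(encoder, nuisance_enc, decoder, classifier, x, y, recon_weight=1.0):
    z = encoder(x)                                  # task-relevant representation
    z_n = nuisance_enc(x)                           # nuisance (task-irrelevant) code
    cls_loss = F.cross_entropy(classifier(z), y)    # discriminative term
    recon = decoder(torch.cat([z, z_n], dim=-1))
    recon_loss = F.mse_loss(recon, x)               # input must stay recoverable
    return cls_loss + recon_weight * recon_loss
```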
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships [8.679073301435265]
We construct a new benchmark for evaluating and improving model robustness by applying perturbations to existing data.
We use these labels to perturb the data by deleting non-causal agents from the scene.
Under non-causal perturbations, we observe a 25-38% relative change in minADE as compared to the original.
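The reported metric is straightforward to reproduce; a minimal sketch of minADE and the relative change under an agent-deletion perturbation:

```python
# minADE over K candidate trajectories, plus the relative change between
# the original and perturbed (non-causal agents deleted) evaluations.
import numpy as np

def min_ade(preds: np.ndarray, gt: np.ndarray) -> float:
    """preds: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth."""
    ade_per_mode = np.linalg.norm(preds - gt[None], axis=-1).mean(axis=-1)  # (K,)
    return float(ade_per_mode.min())

def relative_change(ade_original: float, ade_perturbed: float) -> float:
    return abs(ade_perturbed - ade_original) / ade_original
```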
arXiv Detail & Related papers (2022-07-07T21:28:23Z)
- Self-Damaging Contrastive Learning [92.34124578823977]
Real-world unlabeled data is commonly imbalanced and follows a long-tailed distribution.
This paper proposes a principled framework called Self-Damaging Contrastive Learning to automatically balance the representation learning without knowing the classes.
Our experiments show that SDCLR significantly improves not only overall accuracies but also balancedness.
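A condensed sketch of the SDCLR idea: contrast a network's embeddings against those of a magnitude-pruned ("self-damaged") copy of itself, which hurts under-represented samples most and thus emphasizes them. The pruning and loss details here are illustrative:

```python
# Contrast full-model embeddings with a magnitude-pruned copy (InfoNCE-style).
import copy
import torch
import torch.nn.functional as F

def pruned_copy(model, sparsity=0.3):
    damaged = copy.deepcopy(model)
    for p in damaged.parameters():
        k = int(p.numel() * sparsity)
        if k > 0:
            thresh = p.abs().flatten().kthvalue(k).values
            p.data[p.abs() <= thresh] = 0.0      # zero out the smallest weights
    return damaged

def sdclr_loss(model, x1, x2, temperature=0.2):
    damaged = pruned_copy(model)                 # "self-damaged" branch
    z1 = F.normalize(model(x1), dim=-1)          # (B, D) embeddings
    z2 = F.normalize(damaged(x2), dim=-1)
    logits = z1 @ z2.t() / temperature           # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))            # positives on the diagonal
    return F.cross_entropy(logits, labels)
```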
arXiv Detail & Related papers (2021-06-06T00:04:49Z)
- Model-Augmented Q-learning [112.86795579978802]
We propose a MFRL framework that is augmented with the components of model-based RL.
Specifically, we propose to estimate not only the $Q$-values but also both the transition and the reward with a shared network.
We show that the proposed scheme, called Model-augmented $Q$-learning (MQL), obtains a policy-invariant solution identical to the one obtained by learning with the true reward.
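A minimal sketch of the shared-network design: one torso with $Q$-value, reward, and transition heads, trained with a combined loss. Sizes and loss weights are illustrative assumptions:

```python
# Shared torso predicting Q-values, per-action reward, and next state.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MQLNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=256):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, n_actions)             # Q(s, .)
        self.r_head = nn.Linear(hidden, n_actions)             # r(s, .)
        self.t_head = nn.Linear(hidden, n_actions * obs_dim)   # s'(s, .)

    def forward(self, obs):
        h = self.torso(obs)
        next_obs = self.t_head(h).view(-1, self.q_head.out_features, obs.size(-1))
        return self.q_head(h), self.r_head(h), next_obs

def mql_loss(net, obs, act, rew, next_obs, q_target):
    q, r_pred, s_pred = net(obs)
    idx = act.unsqueeze(-1)                                    # (B, 1), long
    td = F.mse_loss(q.gather(1, idx).squeeze(-1), q_target)
    r_loss = F.mse_loss(r_pred.gather(1, idx).squeeze(-1), rew)
    s_idx = idx.unsqueeze(-1).expand(-1, 1, obs.size(-1))      # (B, 1, obs_dim)
    t_loss = F.mse_loss(s_pred.gather(1, s_idx).squeeze(1), next_obs)
    return td + r_loss + t_loss                                # equal weights, assumed
```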
arXiv Detail & Related papers (2021-02-07T17:56:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.