State estimation with limited sensors -- A deep learning based approach
- URL: http://arxiv.org/abs/2101.11513v1
- Date: Wed, 27 Jan 2021 16:14:59 GMT
- Title: State estimation with limited sensors -- A deep learning based approach
- Authors: Yash Kumar, Pranav Bahl, Souvik Chakraborty
- Abstract summary: We propose a novel deep learning based state estimation framework that learns from sequential data.
We illustrate that utilizing sequential data allows for state recovery from only one or two sensors.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The importance of state estimation in fluid mechanics is well-established; it
is required for accomplishing several tasks including design/optimization,
active control, and future state prediction. A common tactic in this regard is
to rely on reduced order models. Such approaches, in general, use measurement
data from a single time instance. However, the data available from sensors is
often sequential, and ignoring it results in information loss. In this paper, we
propose a novel deep learning based state estimation framework that learns from
sequential data. The proposed model structure consists of a recurrent cell that
passes information across time steps, enabling this information to be used to
recover the full state. We illustrate that utilizing sequential data allows for
state recovery from only one or two sensors. For efficient recovery of the
state, the proposed approach is coupled with an autoencoder based reduced order
model. We illustrate the performance of the proposed approach on two examples
and find that it outperforms existing alternatives in the literature.
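The pipeline the abstract describes can be sketched in a few lines: a recurrent cell consumes the sequential sensor readings and emits a latent code, which the decoder of an autoencoder-based reduced order model maps back to the full state. The sketch below is a minimal NumPy illustration under stated assumptions: all dimensions, weight matrices, and the linear/tanh cells are hypothetical stand-ins for the paper's trained networks, not its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (hypothetical, for illustration only)
n_full = 64      # full-state dimension (e.g. a flow-field snapshot)
n_latent = 8     # latent dimension of the autoencoder ROM
n_sensors = 2    # the paper recovers the state from only one or two sensors
n_hidden = 16    # hidden size of the recurrent cell
T = 10           # length of the sensor time history

# --- Autoencoder-based reduced order model (linear sketch) ---
# In the paper this is learned; here the decoder is a random linear map.
W_dec = rng.standard_normal((n_full, n_latent)) / np.sqrt(n_latent)

def decode(z):
    """Map a latent code back to the full state."""
    return W_dec @ z

# --- Recurrent state estimator (minimal Elman-style cell) ---
# Processes the sequential sensor readings and emits a latent code.
W_xh = rng.standard_normal((n_hidden, n_sensors)) / np.sqrt(n_sensors)
W_hh = rng.standard_normal((n_hidden, n_hidden)) / np.sqrt(n_hidden)
W_hz = rng.standard_normal((n_latent, n_hidden)) / np.sqrt(n_hidden)

def estimate_latent(sensor_seq):
    """Run the recurrent cell over a (T, n_sensors) sensor history."""
    h = np.zeros(n_hidden)
    for x_t in sensor_seq:
        # the hidden state carries information across time steps
        h = np.tanh(W_xh @ x_t + W_hh @ h)
    return W_hz @ h

# Recover the full state from a short history of two sensors
sensor_history = rng.standard_normal((T, n_sensors))
z_hat = estimate_latent(sensor_history)
state_hat = decode(z_hat)
print(state_hat.shape)  # (64,)
```

The key design point the abstract makes is visible here: the estimator sees the whole time history, not a single snapshot, so information from earlier measurements accumulates in the hidden state before the latent code is decoded.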
Related papers
- Multistep Inverse Is Not All You Need [87.62730694973696]
In real-world control settings, the observation space is often unnecessarily high-dimensional and subject to time-correlated noise.
It is therefore desirable to learn an encoder to map the observation space to a simpler space of control-relevant variables.
We propose a new algorithm, ACDF, which combines multistep-inverse prediction with a latent forward model.
arXiv Detail & Related papers (2024-03-18T16:36:01Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF)
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z)
- OpenPI-C: A Better Benchmark and Stronger Baseline for Open-Vocabulary State Tracking [55.62705574507595]
OpenPI is the only dataset annotated for open-vocabulary state tracking.
We categorize 3 types of problems at the procedure level, step level, and state change level, respectively.
For the evaluation metric, we propose a cluster-based metric to fix the original metric's preference for repetition.
arXiv Detail & Related papers (2023-06-01T16:48:20Z)
- Stream-based active learning with linear models [0.7734726150561089]
In production, instead of performing random inspections to obtain product information, labels are collected by evaluating the information content of the unlabeled data.
We propose a new strategy for the stream-based scenario, where instances are sequentially offered to the learner.
The iterative aspect of the decision-making process is tackled by setting a threshold on the informativeness of the unlabeled data points.
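The decision rule this summary describes, querying a label only when an unlabeled point's informativeness exceeds a threshold, can be sketched for a linear model. Everything below is an assumption for illustration: the leverage-style informativeness score, the threshold value, and the ridge-style accumulation are common linear-model choices, not necessarily the paper's exact strategy.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 3                  # feature dimension (hypothetical)
threshold = 0.5        # informativeness threshold (hypothetical)

A = np.eye(d)          # accumulated design matrix X^T X (ridge prior for stability)
b = np.zeros(d)        # accumulated X^T y

def informativeness(x, A):
    # Leverage-style score x^T A^{-1} x: the prediction variance at x
    # under the current design. High score = the point is informative.
    return float(x @ np.linalg.solve(A, x))

true_w = np.array([1.0, -2.0, 0.5])    # unknown target (simulated)
queried = 0
for _ in range(200):                   # instances arrive sequentially
    x = rng.standard_normal(d)
    if informativeness(x, A) > threshold:   # decide BEFORE seeing the label
        y = true_w @ x + 0.01 * rng.standard_normal()  # query (inspect) the label
        A += np.outer(x, x)
        b += y * x
        queried += 1

w_hat = np.linalg.solve(A, b)          # ridge estimate from the queried subset
print(queried, w_hat)
```

As the design matrix grows, the score of a typical new point shrinks, so the learner stops querying once the model is well determined, which is the stream-based economy of labels the summary refers to.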
arXiv Detail & Related papers (2022-07-20T13:15:23Z)
- Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning [105.70602423944148]
We propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making.
Instead of aligning this imagined state with a real state returned by the environment, VCR applies a $Q$-value head on both states and obtains two distributions of action values.
It has been demonstrated that our methods achieve new state-of-the-art performance for search-free RL algorithms.
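The core idea in this summary, comparing the action-value distributions that a shared Q-head produces on an imagined state and on the real state, rather than comparing the states directly, can be sketched as follows. The linear Q-head, the softmax, and the KL consistency loss are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

rng = np.random.default_rng(2)

n_state, n_actions = 6, 4
W_q = rng.standard_normal((n_actions, n_state))  # shared Q-value head

def action_distribution(state):
    q = W_q @ state               # action values for this state
    e = np.exp(q - q.max())       # numerically stable softmax over actions
    return e / e.sum()

imagined = rng.standard_normal(n_state)               # state predicted by a learned model
real = imagined + 0.1 * rng.standard_normal(n_state)  # state returned by the environment

p = action_distribution(imagined)
q = action_distribution(real)
kl = float(np.sum(p * np.log(p / q)))  # value-consistency loss to minimize
print(kl)
```

Matching in value space rather than state space means the representation only has to agree on what matters for decision-making, not on every detail of the observation.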
arXiv Detail & Related papers (2022-06-25T03:02:25Z)
- A Closer Look at Debiased Temporal Sentence Grounding in Videos: Dataset, Metric, and Approach [53.727460222955266]
Temporal Sentence Grounding in Videos (TSGV) aims to ground a natural language sentence in an untrimmed video.
Recent studies have found that current benchmark datasets may have obvious moment annotation biases.
We introduce a new evaluation metric "dR@n,IoU@m" that discounts the basic recall scores to alleviate the inflating evaluation caused by biased datasets.
arXiv Detail & Related papers (2022-03-10T08:58:18Z)
- Deep Measurement Updates for Bayes Filters [5.059735037931382]
We propose the novel approach Deep Measurement Update (DMU) as a general update rule for a wide range of systems.
DMU has a conditional encoder-decoder neural network structure to process depth images as raw inputs.
We demonstrate how the DMU models can be trained efficiently to be sensitive to condition variables without having to rely on an information bottleneck.
arXiv Detail & Related papers (2021-12-01T10:00:37Z)
- Assessment of machine learning methods for state-to-state approaches [0.0]
We investigate the possibilities offered by the use of machine learning methods for state-to-state approaches.
Deep neural networks appear to be a viable technology for these tasks as well.
arXiv Detail & Related papers (2021-04-02T13:27:23Z)
- Model adaptation and unsupervised learning with non-stationary batch data under smooth concept drift [8.068725688880772]
Most predictive models assume that training and test data are generated from a stationary process.
We consider the scenario of a gradual concept drift due to the underlying non-stationarity of the data source.
We propose a novel, iterative algorithm for unsupervised adaptation of predictive models.
arXiv Detail & Related papers (2020-02-10T21:29:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.