MEMS Gyroscope Multi-Feature Calibration Using Machine Learning Technique
- URL: http://arxiv.org/abs/2410.07519v1
- Date: Thu, 10 Oct 2024 01:21:48 GMT
- Title: MEMS Gyroscope Multi-Feature Calibration Using Machine Learning Technique
- Authors: Yaoyao Long, Zhenming Liu, Cong Hao, Farrokh Ayazi
- Abstract summary: This study leverages machine learning (ML) and uses multiple signals of the MEMS gyroscope resonator to improve its calibration.
XGBoost, known for its high predictive accuracy and ability to handle complex, non-linear relationships, and a multi-layer perceptron (MLP) are employed to enhance the calibration process.
Our findings show that both the XGBoost and MLP models significantly reduce noise and enhance accuracy and stability, outperforming traditional calibration techniques.
- Score: 6.972912567929995
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gyroscopes are crucial for accurate angular velocity measurements in navigation, stabilization, and control systems. MEMS gyroscopes offer advantages like compact size and low cost but suffer from errors and inaccuracies that are complex and time-varying. This study leverages machine learning (ML) and uses multiple signals of the MEMS resonator gyroscope to improve its calibration. XGBoost, known for its high predictive accuracy and ability to handle complex, non-linear relationships, and a multi-layer perceptron (MLP), recognized for its capability to model intricate patterns through multiple layers and hidden dimensions, are employed to enhance the calibration process. Our findings show that both the XGBoost and MLP models significantly reduce noise and enhance accuracy and stability, outperforming traditional calibration techniques. Despite higher computational costs, deep learning (DL) models are ideal for high-stakes applications, while ML models are efficient for consumer electronics and environmental monitoring. Both ML and DL models demonstrate the potential of advanced calibration techniques in enhancing MEMS gyroscope performance and calibration efficiency.
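As a concrete illustration of the kind of pipeline the abstract describes, the sketch below fits an XGBoost regressor and an MLP on several simulated resonator signals and compares them with a plain linear (scale-factor and bias) fit. The synthetic features, hyperparameters, and error metric are illustrative assumptions, not the authors' data or configuration.

```python
# Minimal sketch of multi-feature ML calibration for a MEMS gyroscope.
# The feature set, data, and hyperparameters are illustrative assumptions,
# not the configuration used in the paper.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 5000
# Hypothetical resonator signals: raw rate output plus auxiliary channels
# (e.g., drive amplitude, resonance frequency shift, temperature).
X = rng.normal(size=(n, 4))
true_rate = 2.0 * X[:, 0] + 0.3 * X[:, 1] ** 2 - 0.5 * X[:, 2] * X[:, 3]
y = true_rate + rng.normal(scale=0.1, size=n)  # noisy reference angular rate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "linear (traditional)": LinearRegression(),
    "XGBoost": XGBRegressor(n_estimators=300, max_depth=5, learning_rate=0.05),
    "MLP": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: test RMSE = {rmse:.4f}")
```

In practice the reference rate would come from a precision rate table rather than a synthetic function, and stability metrics such as Allan deviation would complement the RMSE comparison.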
Related papers
- Astro: Activation-guided Structured Regularization for Outlier-Robust LLM Post-Training Quantization [56.5199302532159]
We propose an Activation-guided Structured Regularization framework to suppress the negative effects of outliers.
Astro actively reconstructs intrinsically robust weights, aggressively suppressing weight outliers corresponding to high-magnitude activations.
Astro achieves highly competitive performance; notably, on LLaMA-2-7B, it achieves better performance than complex learning-based rotation methods with almost 1/3 of the quantization time.
arXiv Detail & Related papers (2026-02-07T15:50:18Z)
- Hybrid Modeling, Sim-to-Real Reinforcement Learning, and Large Language Model Driven Control for Digital Twins [4.34315145996134]
This work investigates the use of digital twins for dynamical system modeling and control.
It integrates physics-based, data-driven, and hybrid approaches with both traditional and AI-driven controllers.
arXiv Detail & Related papers (2025-10-27T21:43:42Z)
- Influences on LLM Calibration: A Study of Response Agreement, Loss Functions, and Prompt Styles [4.477423478591491]
Calib-n is a novel framework that trains an auxiliary model for confidence estimation.
We find that few-shot prompts are the most effective for auxiliary model-based methods.
arXiv Detail & Related papers (2025-01-07T18:48:42Z)
- Real-time Calibration Model for Low-cost Sensor in Fine-grained Time series [6.648146664198283]
We develop a model called TESLA, Transformer for effective sensor calibration utilizing logarithmic-binned attention.
TESLA uses a high-performance deep learning model, Transformers, to calibrate and capture non-linear components.
Experiments show that TESLA outperforms existing novel deep learning and newly crafted linear models in accuracy, calibration speed, and energy efficiency.
arXiv Detail & Related papers (2024-12-28T14:58:46Z)
- Atomic Calibration of LLMs in Long-Form Generations [46.01229352035088]
Large language models (LLMs) often suffer from hallucinations, posing significant challenges for real-world applications.
We introduce atomic calibration, a novel approach that evaluates factuality calibration at a fine-grained level by breaking down long responses into atomic claims.
Our experiments show that atomic calibration is well-suited for long-form generation and can also improve macro calibration results.
arXiv Detail & Related papers (2024-10-17T06:09:26Z)
- SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models [67.67135738642547]
Post-training quantization (PTQ) is a powerful compression technique investigated in large language models (LLMs).
Existing PTQ methods are not ideal in terms of accuracy and efficiency, especially at bit-widths below 4.
This paper presents a Salience-Driven Mixed-Precision Quantization scheme for LLMs, namely SliM-LLM.
arXiv Detail & Related papers (2024-05-23T16:21:48Z)
- LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit [55.73370804397226]
Quantization, a key compression technique, can effectively mitigate these demands by compressing and accelerating large language models.
We present LLMC, a plug-and-play compression toolkit, to fairly and systematically explore the impact of quantization.
Powered by this versatile toolkit, our benchmark covers three key aspects: calibration data, algorithms (three strategies), and data formats.
arXiv Detail & Related papers (2024-05-09T11:49:05Z)
- Calibrating Large Language Models with Sample Consistency [76.23956851098598]
We explore the potential of deriving confidence from the distribution of multiple randomly sampled model generations, via three measures of consistency.
Results show that consistency-based calibration methods outperform existing post-hoc approaches.
We offer practical guidance on choosing suitable consistency metrics for calibration, tailored to the characteristics of various LMs.
arXiv Detail & Related papers (2024-02-21T16:15:20Z)
- Thermometer: Towards Universal Calibration for Large Language Models [22.03852781949075]
We propose THERMOMETER, a calibration approach tailored to large language models (LLMs).
THERMOMETER learns an auxiliary model, given data from multiple tasks, for calibrating an LLM.
It is computationally efficient, preserves the accuracy of the LLM, and produces better-calibrated responses for new tasks.
arXiv Detail & Related papers (2024-02-20T04:13:48Z)
- CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration [59.48235003469116]
We show that data augmentation consistently enhances OOD performance.
We also show that counterfactual (CF) augmented models that are easier to calibrate also exhibit much lower entropy when assigning importance.
arXiv Detail & Related papers (2023-09-14T16:16:40Z)
- Bayesian Calibration of MEMS Accelerometers [0.0]
The parameters of error-correcting functions are determined during a calibration process.
Due to various sources of noise, these parameters cannot be determined with precision, making it desirable to incorporate uncertainty in the calibration models.
This study introduces Bayesian methods for the calibration of MEMS accelerometer data in a straightforward manner using recent advances in probabilistic programming.
arXiv Detail & Related papers (2023-06-09T09:10:28Z)
- Towards Unbiased Calibration using Meta-Regularization [6.440598446802981]
We propose to learn better-calibrated models via meta-regularization, which has two components.
We evaluate the effectiveness of the proposed approach in regularizing neural networks towards improved and unbiased calibration on three computer vision datasets.
arXiv Detail & Related papers (2023-03-27T10:00:50Z)
- Support Vector Machine for Determining Euler Angles in an Inertial Navigation System [55.41644538483948]
The paper discusses improving the accuracy of an inertial navigation system built on MEMS sensors using machine learning (ML) methods.
The proposed ML-based algorithm has demonstrated its ability to classify correctly in the presence of noise typical of MEMS sensors.
arXiv Detail & Related papers (2022-12-07T10:01:11Z)
- On the Importance of Calibration in Semi-supervised Learning [13.859032326378188]
State-of-the-art (SOTA) semi-supervised learning (SSL) methods have been highly successful in leveraging a mix of labeled and unlabeled data.
We introduce a family of new SSL models that optimize for calibration and demonstrate their effectiveness across standard vision benchmarks.
arXiv Detail & Related papers (2022-10-10T15:41:44Z)
- Physics Guided Machine Learning for Variational Multiscale Reduced Order Modeling [58.720142291102135]
We propose a new physics guided machine learning (PGML) paradigm to increase the accuracy of reduced order models (ROMs) at a modest computational cost.
The hierarchical structure of the ROM basis and the variational multiscale (VMS) framework enable a natural separation of the resolved and unresolved ROM spatial scales.
Modern PGML algorithms are used to construct novel models for the interaction among the resolved and unresolved ROM scales.
arXiv Detail & Related papers (2022-05-25T00:07:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the listed information and is not responsible for any consequences of its use.