Leveraging LSTM for Predictive Modeling of Satellite Clock Bias
- URL: http://arxiv.org/abs/2411.07015v1
- Date: Mon, 11 Nov 2024 14:18:32 GMT
- Title: Leveraging LSTM for Predictive Modeling of Satellite Clock Bias
- Authors: Ahan Bhatt, Ishaan Mehta, Pravin Patidar
- Abstract summary: We propose an approach utilizing Long Short-Term Memory (LSTM) networks to predict satellite clock bias.
Our LSTM model exhibits exceptional accuracy, with a Root Mean Square Error (RMSE) of 2.11 $\times$ 10$^{-11}$.
This study holds significant potential in enhancing the accuracy and efficiency of low-power receivers used in various devices.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Satellite clock bias prediction plays a crucial role in enhancing the accuracy of satellite navigation systems. In this paper, we propose an approach utilizing Long Short-Term Memory (LSTM) networks to predict satellite clock bias. We gather data from the PRN 8 satellite of the Galileo constellation and preprocess it to obtain a single-difference sequence, crucial for normalizing the data. Normalization allows resampling of the data, ensuring that the predictions are equidistant and complete. Our methodology involves training the LSTM model on varying lengths of datasets, ranging from 7 days to 31 days. We employ a training set consisting of two days' worth of data in each case. Our LSTM model exhibits exceptional accuracy, with a Root Mean Square Error (RMSE) of 2.11 $\times$ 10$^{-11}$. Notably, our approach outperforms traditional methods used for similar time-series forecasting projects, being 170 times more accurate than RNN, 2.3 $\times$ 10$^7$ times more accurate than MLP, and 1.9 $\times$ 10$^4$ times more accurate than ARIMA. This study holds significant potential in enhancing the accuracy and efficiency of low-power receivers used in various devices, particularly those requiring power conservation. By providing more accurate predictions of satellite clock bias, the findings of this research can be integrated into the algorithms of such devices, enabling them to function with heightened precision while conserving power. Improved accuracy in clock bias predictions ensures that low-power receivers can maintain optimal performance levels, thereby enhancing the overall reliability and effectiveness of satellite navigation systems. Consequently, this advancement holds promise for a wide range of applications, including remote areas, IoT devices, wearable technology, and other devices where power efficiency and navigation accuracy are paramount.
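The preprocessing pipeline described in the abstract (single-difference sequence, resampling onto an equidistant grid, normalization, then windowing for the LSTM) can be sketched as below. This is a minimal illustration, not the authors' exact implementation: the 300 s grid step, the min-max scaling choice, the lookback length, and the function names are all assumptions for the sake of a runnable example.

```python
import numpy as np

def preprocess_clock_bias(t, bias, step=300.0):
    """Sketch of the abstract's preprocessing:
    (1) form the single-difference sequence of the clock bias,
    (2) resample it onto an equidistant time grid so the series is
        equidistant and complete, and
    (3) normalize it for stable LSTM training.
    The 300 s step and min-max scaling are illustrative assumptions.
    """
    # (1) single differences: d[i] = bias[i+1] - bias[i]
    diff = np.diff(bias)
    t_diff = t[:-1]  # epoch associated with each difference

    # (2) linear interpolation onto an equidistant grid
    grid = np.arange(t_diff[0], t_diff[-1] + step, step)
    resampled = np.interp(grid, t_diff, diff)

    # (3) min-max normalization to [0, 1] (one common choice; the
    #     abstract does not pin down the exact scaling used)
    lo, hi = resampled.min(), resampled.max()
    normed = (resampled - lo) / (hi - lo)
    return grid, normed

def make_windows(seq, lookback=12):
    """Turn a 1-D sequence into (X, y) supervised pairs shaped for an
    LSTM layer: X is (samples, timesteps, 1 feature)."""
    X = np.stack([seq[i:i + lookback] for i in range(len(seq) - lookback)])
    y = seq[lookback:]
    return X[..., None], y
```

The resulting `X` tensor has the `(samples, timesteps, features)` shape that recurrent layers such as `keras.layers.LSTM` expect; the model definition and training loop are omitted here since the paper's architecture details are not given in the abstract.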
Related papers
- Probing Deep into Temporal Profile Makes the Infrared Small Target Detector Much Better [63.567886330598945]
Infrared small target (IRST) detection is challenging in simultaneously achieving precise, universal, robust and efficient performance. Current learning-based methods attempt to leverage "more" information from both the spatial and the short-term temporal domains. We propose an efficient deep temporal probe network (DeepPro) that only performs calculations in the time dimension for IRST detection.
arXiv Detail & Related papers (2025-06-15T08:19:32Z) - GPS-Aided Deep Learning for Beam Prediction and Tracking in UAV mmWave Communication [6.21540494241516]
This research presents a GPS-aided deep learning (DL) model that simultaneously predicts current and future optimal beams for UAV mmWave communications. The model reduces overhead by approximately 93% (requiring the training of 2 to 3 beams instead of 32 beams) with 95% beam prediction accuracy guarantees, and ensures 94% to 96% of predictions exhibit mean power loss not exceeding 1 dB.
arXiv Detail & Related papers (2025-05-23T06:38:00Z) - Resource-Efficient Beam Prediction in mmWave Communications with Multimodal Realistic Simulation Framework [57.994965436344195]
Beamforming is a key technology in millimeter-wave (mmWave) communications that improves signal transmission by optimizing directionality and intensity.
Multimodal sensing-aided beam prediction has gained significant attention, using various sensing data to predict user locations or network conditions.
Despite its promising potential, the adoption of multimodal sensing-aided beam prediction is hindered by high computational complexity, high costs, and limited datasets.
arXiv Detail & Related papers (2025-04-07T15:38:25Z) - HEROS-GAN: Honed-Energy Regularized and Optimal Supervised GAN for Enhancing Accuracy and Range of Low-Cost Accelerometers [9.98317903374184]
Low-cost accelerometers play a crucial role in modern society due to their advantages of small size, ease of integration, wearability, and mass production.
However, this widely used sensor suffers from severe accuracy and range limitations.
We propose a honed-energy regularized and optimal supervised GAN (HEROS-GAN), which transforms low-cost sensor signals into high-cost equivalents.
arXiv Detail & Related papers (2025-02-25T10:31:01Z) - Estimating Voltage Drop: Models, Features and Data Representation Towards a Neural Surrogate [1.7010199949406575]
We investigate how Machine Learning (ML) techniques can aid in reducing the computational effort and implicitly the time required to estimate the voltage drop in Integrated Circuits (ICs).
Our approach leverages ASICs' electrical, timing, and physical features to train ML models, ensuring adaptability across diverse designs with minimal adjustments.
This study illustrates the effectiveness of ML algorithms in precisely estimating IR drop and optimizing ASIC sign-off.
arXiv Detail & Related papers (2025-02-07T21:31:13Z) - VECTOR: Velocity-Enhanced GRU Neural Network for Real-Time 3D UAV Trajectory Prediction [2.1825723033513165]
We propose a new trajectory prediction method using Gated Recurrent Units (GRUs) within sequence-based neural networks.
We employ both synthetic and real-world 3D UAV trajectory data, capturing a wide range of flight patterns, speeds, and agility.
The GRU-based models significantly outperform state-of-the-art RNN approaches, with a mean square error (MSE) as low as 2 $\times$ 10$^{-8}$.
arXiv Detail & Related papers (2024-10-24T07:16:42Z) - Real-time gravitational-wave inference for binary neutron stars using machine learning [71.29593576787549]
We present a machine learning framework that performs complete BNS inference in just one second without making any approximations.
Our approach enhances multi-messenger observations by providing (i) accurate localization even before the merger; (ii) improved localization precision by $\sim$30% compared to approximate low-latency methods; and (iii) detailed information on luminosity distance, inclination, and masses.
arXiv Detail & Related papers (2024-07-12T18:00:02Z) - Energy-Efficient On-Board Radio Resource Management for Satellite Communications via Neuromorphic Computing [59.40731173370976]
We investigate the application of energy-efficient brain-inspired machine learning models for on-board radio resource management.
For relevant workloads, spiking neural networks (SNNs) implemented on Loihi 2 yield higher accuracy, while reducing power consumption by more than 100$\times$ as compared to the CNN-based reference platform.
arXiv Detail & Related papers (2023-08-22T03:13:57Z) - Synchronizing clocks via satellites using entangled photons: Effect of relative velocity on precision [0.0]
We develop tools to study the effect of the relative velocity between the satellite and ground stations on the success of the QCS protocol.
We simulate the synchronization outcomes for cities across the continental U.S. using a single satellite in low Earth orbit (LEO), low-cost entanglement sources, portable atomic clocks, and avalanche detectors.
arXiv Detail & Related papers (2023-06-13T21:31:21Z) - RNN-Based GNSS Positioning using Satellite Measurement Features and Pseudorange Residuals [0.0]
This work leverages the potential of machine learning in predicting link-wise measurement quality factors.
We use a customized matrix composed of conditional pseudorange residuals and per-link satellite metrics.
Our experimental results on real data, obtained from extensive field measurements, demonstrate the high potential of our proposed solution.
arXiv Detail & Related papers (2023-06-08T16:11:57Z) - Collaborative Learning with a Drone Orchestrator [79.75113006257872]
A swarm of intelligent wireless devices train a shared neural network model with the help of a drone.
The proposed framework achieves a significant speedup in training, leading to an average 24% and 87% saving in the drone hovering time.
arXiv Detail & Related papers (2023-03-03T23:46:25Z) - Time-to-Green predictions for fully-actuated signal control systems with supervised learning [56.66331540599836]
This paper proposes a time series prediction framework using aggregated traffic signal and loop detector data.
We utilize state-of-the-art machine learning models to predict future signal phases' duration.
Results based on an empirical data set from a fully-actuated signal control system in Zurich, Switzerland, show that machine learning models outperform conventional prediction methods.
arXiv Detail & Related papers (2022-08-24T07:50:43Z) - Taking ROCKET on an Efficiency Mission: Multivariate Time Series Classification with LightWaveS [3.5786621294068373]
We present LightWaveS, a framework for accurate multivariate time series classification.
It employs just 2.5% of the ROCKET features, while achieving accuracy comparable to recent deep learning models.
We show that we achieve speedup ranging from 9x to 65x compared to ROCKET during inference on an edge device.
arXiv Detail & Related papers (2022-04-04T10:52:20Z) - Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
arXiv Detail & Related papers (2021-10-22T20:49:02Z) - Uncertainty-Aware Learning for Improvements in Image Quality of the Canada-France-Hawaii Telescope [9.963669010212012]
We leverage state-of-the-art machine learning methods to predict observatory image quality (IQ) from environmental conditions and observatory operating parameters.
We develop accurate and interpretable models of the complex dependence between data features and observed IQ for CFHT's wide field camera, MegaCam.
arXiv Detail & Related papers (2021-06-30T18:10:20Z) - Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.