Deep reinforcement learning for smart calibration of radio telescopes
- URL: http://arxiv.org/abs/2102.03200v1
- Date: Fri, 5 Feb 2021 14:35:28 GMT
- Title: Deep reinforcement learning for smart calibration of radio telescopes
- Authors: Sarod Yatawatta and Ian M. Avruch
- Abstract summary: We introduce the use of reinforcement learning to train an autonomous agent to perform fine tuning of data calibration pipelines.
We consider the pipeline to be a black-box system where only an interpreted state of the pipeline is used by the agent.
The autonomous agent trained in this manner is able to determine optimal settings for diverse observations and is therefore able to perform 'smart' calibration, minimizing the need for human intervention.
- Score: 3.655021726150368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern radio telescopes produce unprecedented amounts of data, which are
passed through many processing pipelines before the delivery of scientific
results. Hyperparameters of these pipelines need to be tuned by hand to produce
optimal results. Because many thousands of observations are taken during the
lifetime of a telescope and because each observation has its own unique
settings, the fine tuning of pipelines is a tedious task. In order to automate
this process of hyperparameter selection in data calibration pipelines, we
introduce the use of reinforcement learning. We use a reinforcement learning
technique called twin delayed deep deterministic policy gradient (TD3) to train
an autonomous agent to perform this fine tuning. For the sake of
generalization, we consider the pipeline to be a black-box system where only an
interpreted state of the pipeline is used by the agent. The autonomous agent
trained in this manner is able to determine optimal settings for diverse
observations and is therefore able to perform 'smart' calibration, minimizing
the need for human intervention.
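The abstract describes training a TD3 agent that treats the calibration pipeline as a black box, observing only an interpreted state and choosing hyperparameter settings. The following is a minimal, hypothetical sketch of that idea on a toy one-step task: `pipeline_reward` stands in for a real calibration pipeline (its optimum at 0.3 is invented for illustration), and the agent keeps TD3's twin critics and delayed actor updates. Target networks and target-policy smoothing, also part of TD3, are omitted because the toy task has one-step episodes; none of this reflects the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pipeline_reward(a):
    # Stand-in for a black-box calibration pipeline: quality (negative
    # residual) peaks at hyperparameter value a = 0.3 (an invented optimum).
    return -(a - 0.3) ** 2

def q_value(w, a):
    # Critic: quadratic-in-action value estimate with weights w.
    return w[0] + w[1] * a + w[2] * a * a

def q_grad_w(a):
    # Gradient of q_value with respect to the weights.
    return np.array([1.0, a, a * a])

# Twin critics (TD3's clipped double-Q trick) and a scalar deterministic actor.
w1 = np.zeros(3)
w2 = np.zeros(3)
theta = -0.5          # actor parameter: the hyperparameter the agent proposes
buffer = []

critic_lr, actor_lr = 0.1, 0.05
policy_delay = 2      # actor updated less often than the critics, as in TD3

for step in range(500):
    # Explore with Gaussian noise around the current deterministic policy.
    a = theta + 0.2 * rng.standard_normal()
    buffer.append((a, pipeline_reward(a)))

    # Critic update on a sampled minibatch; one-step episodes, so target = r.
    batch = rng.choice(len(buffer), size=min(32, len(buffer)))
    for idx in batch:
        a_b, r_b = buffer[idx]
        for w in (w1, w2):
            td_err = r_b - q_value(w, a_b)
            w += critic_lr * td_err * q_grad_w(a_b) / len(batch)

    # Delayed actor update: gradient ascent on the *smaller* of the two
    # critic estimates, which curbs value overestimation.
    if step % policy_delay == 0:
        w_min = w1 if q_value(w1, theta) <= q_value(w2, theta) else w2
        dq_da = w_min[1] + 2.0 * w_min[2] * theta
        theta += actor_lr * dq_da

print(round(theta, 2))  # learned hyperparameter, near the optimum 0.3
```

In the paper's setting the action would be a vector of pipeline hyperparameters and the state an interpreted summary of pipeline diagnostics, but the agent-side structure (twin critics, delayed deterministic actor, replay buffer) is the same.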
Related papers
- SPOT: Scalable 3D Pre-training via Occupancy Prediction for Learning Transferable 3D Representations [76.45009891152178]
Pretraining-finetuning approach can alleviate the labeling burden by fine-tuning a pre-trained backbone across various downstream datasets as well as tasks.
We show, for the first time, that general representations learning can be achieved through the task of occupancy prediction.
Our findings will facilitate the understanding of LiDAR points and pave the way for future advancements in LiDAR pre-training.
arXiv Detail & Related papers (2023-09-19T11:13:01Z) - Pre-training on Synthetic Driving Data for Trajectory Prediction [61.520225216107306]
We propose a pipeline-level solution to mitigate the issue of data scarcity in trajectory forecasting.
We adopt HD map augmentation and trajectory synthesis for generating driving data, and then we learn representations by pre-training on them.
We conduct extensive experiments to demonstrate the effectiveness of our data expansion and pre-training strategies.
arXiv Detail & Related papers (2023-09-18T19:49:22Z) - Deep Pipeline Embeddings for AutoML [11.168121941015015]
AutoML is a promising direction for democratizing AI by automatically deploying Machine Learning systems with minimal human expertise.
Existing Pipeline Optimization techniques fail to explore deep interactions between pipeline stages/components.
This paper proposes a novel neural architecture that captures the deep interaction between the components of a Machine Learning pipeline.
arXiv Detail & Related papers (2023-05-23T12:40:38Z) - A Deep-Learning-Aided Pipeline for Efficient Post-Silicon Tuning [5.904240881373805]
In post-silicon validation, tuning is to find the values for the tuning knobs, potentially as a function of process parameters and/or known operating conditions.
We leverage neural networks to efficiently select the most relevant variables and present a corresponding deep-learning-aided pipeline for efficient tuning.
arXiv Detail & Related papers (2022-07-01T11:04:53Z) - Self-supervised Transformer for Deepfake Detection [112.81127845409002]
Deepfake techniques in real-world scenarios require stronger generalization abilities of face forgery detectors.
Inspired by transfer learning, neural networks pre-trained on other large-scale face-related tasks may provide useful features for deepfake detection.
In this paper, we propose a self-supervised transformer based audio-visual contrastive learning method.
arXiv Detail & Related papers (2022-03-02T17:44:40Z) - Where Is My Training Bottleneck? Hidden Trade-Offs in Deep Learning Preprocessing Pipelines [77.45213180689952]
Preprocessing pipelines in deep learning aim to provide sufficient data throughput to keep the training processes busy.
We introduce a new perspective on efficiently preparing datasets for end-to-end deep learning pipelines.
We obtain an increased throughput of 3x to 13x compared to an untuned system.
arXiv Detail & Related papers (2022-02-17T14:31:58Z) - Predicting pigging operations in oil pipelines [0.0]
This paper presents an innovative machine learning methodology to perform automated predictions of the needed pigging operations in crude oil trunklines.
Historical pressure signals have been collected by Eni for two years along an oil pipeline (100 km length, 16 inch diameter pipes) located in Northern Italy.
A tool has been implemented to automatically highlight the historical pig operations performed on the line.
arXiv Detail & Related papers (2021-09-24T08:49:33Z) - Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-10-28T22:24:07Z) - Learning Dexterous Manipulation from Suboptimal Experts [69.8017067648129]
Relative Entropy Q-Learning (REQ) is a simple policy algorithm that combines ideas from successful offline and conventional RL algorithms.
We show how REQ is also effective for general off-policy RL, offline RL, and RL from demonstrations.
arXiv Detail & Related papers (2020-10-16T18:48:49Z) - AVATAR -- Machine Learning Pipeline Evaluation Using Surrogate Model [10.83607599315401]
We propose a novel method to evaluate the validity of ML pipelines using a surrogate model (AVATAR)
Our experiments show that the AVATAR is more efficient in evaluating complex pipelines in comparison with the traditional evaluation approaches requiring their execution.
arXiv Detail & Related papers (2020-01-30T02:53:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences arising from its use.