Automating Experimental Optics with Sample Efficient Machine Learning Methods
- URL: http://arxiv.org/abs/2503.14260v1
- Date: Tue, 18 Mar 2025 13:50:44 GMT
- Title: Automating Experimental Optics with Sample Efficient Machine Learning Methods
- Authors: Arindam Saha, Baramee Charoensombutamon, Thibault Michel, V. Vijendran, Lachlan Walker, Akira Furusawa, Syed M. Assad, Ben C. Buchler, Ping Koy Lam, Aaron D. Tranter
- Abstract summary: We demonstrate how machine learning can be used to achieve autonomous mode-matching of a free-space optical resonator with minimal supervision.
- Score: 0.47936618873102926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As free-space optical systems grow in scale and complexity, troubleshooting becomes increasingly time-consuming and, in the case of remote installations, perhaps impractical. An example of a task that is often laborious is the alignment of a high-finesse optical resonator, which is highly sensitive to the mode of the input beam. In this work, we demonstrate how machine learning can be used to achieve autonomous mode-matching of a free-space optical resonator with minimal supervision. Our approach leverages sample-efficient algorithms to reduce data requirements while maintaining a simple architecture for easy deployment. The reinforcement learning scheme that we have developed shows that automation is feasible even in systems prone to drift in experimental parameters, as may well be the case in real-world applications.
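The abstract frames mode-matching as automated optimization under drifting experimental parameters. As an illustrative, self-contained sketch (not the paper's actual algorithm, reward signal, or hardware interface), the loop below tracks the slowly drifting optimum of a toy mode-matching efficiency using a simple (1+1) evolutionary search over two hypothetical actuator settings:

```python
import math
import random

random.seed(1)

def mode_match(v, target):
    # Toy stand-in for cavity mode-matching efficiency: Gaussian overlap
    # between the input beam (set by two actuator settings) and the target.
    return math.exp(-sum((a - b) ** 2 for a, b in zip(v, target)) / 0.02)

target = [0.5, -0.4]   # unknown optimal actuator settings
v = [0.0, 0.0]         # current actuator settings
step = 0.2

for _ in range(400):
    target[0] += random.gauss(0, 0.002)   # slow experimental drift
    cand = [x + random.gauss(0, step) for x in v]
    if mode_match(cand, target) >= mode_match(v, target):
        v = cand          # keep improvements ((1+1) greedy search)
        step *= 1.1       # expand step size on success
    else:
        step *= 0.97      # shrink step size on failure
    step = max(step, 0.01)

print(round(mode_match(v, target), 2))
```

Because only one efficiency measurement is taken per candidate, schemes of this greedy flavor remain sample-efficient, and the step-size adaptation lets the search keep tracking as the optimum drifts.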
Related papers
- Active Alignments of Lens Systems with Reinforcement Learning [0.0]
We propose a reinforcement learning (RL) approach that learns exclusively in the pixel space of the sensor output. We conduct an extensive benchmark study and show that our approach surpasses other methods in speed, precision, and robustness.
arXiv Detail & Related papers (2025-03-03T21:57:08Z)
- Inverse Surrogate Model of a Soft X-Ray Spectrometer using Domain Adaptation [0.0]
In this study, we present a method to create a robust inverse surrogate model for a soft X-ray spectrometer. Due to limited experimental data, such models are often trained with simulated data. We demonstrate the application of data augmentation and adversarial domain adaptation techniques, with which we can predict absolute coordinates for the automated alignment of our spectrometer.
arXiv Detail & Related papers (2025-02-21T19:42:50Z)
- Model-free reinforcement learning with noisy actions for automated experimental control in optics [2.3003734964536524]
We show that reinforcement learning can overcome challenges when coupling laser light into an optical fiber. By utilizing the sample-efficient algorithms Soft Actor-Critic (SAC) or Truncated Quantile Critics (TQC), our agent learns to couple with 90% efficiency, comparable to a human expert.
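SAC and TQC themselves are beyond a short standard-library snippet, but the underlying principle of model-free learning with noisy continuous actions can be sketched with a baseline-free REINFORCE update on a toy coupling-efficiency landscape (the function, its optimum, and all hyperparameters here are invented for illustration):

```python
import math
import random

random.seed(0)

def coupling_efficiency(x, y):
    # Toy stand-in for fiber-coupling efficiency: a Gaussian mode overlap
    # peaked at the (unknown to the agent) optimal tilts (0.3, -0.2).
    return math.exp(-((x - 0.3) ** 2 + (y + 0.2) ** 2) / 0.05)

mu = [0.0, 0.0]   # learned mean action (e.g. two mirror tilts)
sigma = 0.1       # fixed Gaussian exploration noise on the actions
lr = 0.5

for episode in range(500):
    a = [random.gauss(m, sigma) for m in mu]   # noisy action
    r = coupling_efficiency(*a)                # measured reward
    # Baseline-free REINFORCE: nudge the policy mean toward actions
    # in proportion to the reward they earned.
    for i in range(2):
        mu[i] += lr * r * (a[i] - mu[i])

print(round(coupling_efficiency(*mu), 2))
```

The action noise doubles as exploration; because all rewards are positive, the mean drifts toward high-reward regions without ever needing a model of the optics.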
arXiv Detail & Related papers (2024-05-24T10:36:23Z)
- MLXP: A Framework for Conducting Replicable Experiments in Python [63.37350735954699]
We propose MLXP, an open-source, simple, and lightweight experiment management tool based on Python.
It streamlines the experimental process with minimal practitioner overhead while ensuring a high level of reproducibility.
arXiv Detail & Related papers (2024-02-21T14:22:20Z)
- Controlling dynamical systems to complex target states using machine learning: next-generation vs. classical reservoir computing [68.8204255655161]
Controlling nonlinear dynamical systems using machine learning makes it possible to drive systems into simple behavior like periodicity, but also into more complex, arbitrary dynamics.
We show first that classical reservoir computing excels at this task.
In a next step, we compare those results based on different amounts of training data to an alternative setup, where next-generation reservoir computing is used instead.
It turns out that while delivering comparable performance for usual amounts of training data, next-generation RC significantly outperforms the classical setup in situations where only very limited data is available.
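The "next-generation" idea replaces the random recurrent reservoir with explicit polynomial features of (delayed) states plus a linear readout, which is why so little training data suffices. A minimal illustration on a toy 1-D system (the logistic map, not one of the paper's benchmark systems):

```python
# Next-generation reservoir computing in miniature: build explicit
# polynomial features of the state and fit only a linear readout.
# Toy task: learn the logistic map x' = r*x*(1-x) from an orbit.
r = 3.9
xs = [0.2]
for _ in range(200):
    xs.append(r * xs[-1] * (1 - xs[-1]))

# Feature vectors [x, x^2]; the map is exactly linear in these features.
X = [(x, x * x) for x in xs[:-1]]
y = xs[1:]

# Linear least-squares readout via the 2x2 normal equations (Cramer's rule).
s11 = sum(a * a for a, _ in X)
s12 = sum(a * b for a, b in X)
s22 = sum(b * b for _, b in X)
t1 = sum(a * yi for (a, _), yi in zip(X, y))
t2 = sum(b * yi for (_, b), yi in zip(X, y))
det = s11 * s22 - s12 * s12
w1 = (t1 * s22 - t2 * s12) / det
w2 = (s11 * t2 - s12 * t1) / det

print(round(w1, 3), round(w2, 3))   # should recover w1 ≈ r, w2 ≈ -r
```

Since fitting is a single linear solve rather than gradient training, the readout is recovered from a short orbit, mirroring the low-data advantage reported for next-generation RC.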
arXiv Detail & Related papers (2023-07-14T07:05:17Z)
- Hindsight States: Blending Sim and Real Task Elements for Efficient Reinforcement Learning [61.3506230781327]
In robotics, one approach to generate training data builds on simulations based on dynamics models derived from first principles.
Here, we leverage the imbalance in complexity of the dynamics to learn more sample-efficiently.
We validate our method on several challenging simulated tasks and demonstrate that it improves learning both alone and when combined with an existing hindsight algorithm.
arXiv Detail & Related papers (2023-03-03T21:55:04Z)
- SAM-RL: Sensing-Aware Model-Based Reinforcement Learning via Differentiable Physics-Based Simulation and Rendering [49.78647219715034]
We propose a sensing-aware model-based reinforcement learning system called SAM-RL.
With the sensing-aware learning pipeline, SAM-RL allows a robot to select an informative viewpoint to monitor the task process.
We apply our framework to real world experiments for accomplishing three manipulation tasks: robotic assembly, tool manipulation, and deformable object manipulation.
arXiv Detail & Related papers (2022-10-27T05:30:43Z)
- Toward Fast, Flexible, and Robust Low-Light Image Enhancement [87.27326390675155]
We develop a new Self-Calibrated Illumination (SCI) learning framework for fast, flexible, and robust brightening of images in real-world low-light scenarios.
Considering the computational burden of the cascaded pattern, we construct the self-calibrated module which realizes the convergence between results of each stage.
We make comprehensive explorations to SCI's inherent properties including operation-insensitive adaptability and model-irrelevant generality.
arXiv Detail & Related papers (2022-04-21T14:40:32Z)
- Controlling nonlinear dynamical systems into arbitrary states using machine learning [77.34726150561087]
We propose a novel and fully data-driven control scheme which relies on machine learning (ML).
Exploiting recently developed ML-based prediction capabilities of complex systems, we demonstrate that nonlinear systems can be forced to stay in arbitrary dynamical target states coming from any initial state.
Given this highly flexible control scheme, which makes few demands on the amount of required data, we briefly discuss possible applications ranging from engineering to medicine.
arXiv Detail & Related papers (2021-02-23T16:58:26Z)
- Ensemble learning and iterative training (ELIT) machine learning: applications towards uncertainty quantification and automated experiment in atom-resolved microscopy [0.0]
Deep learning has emerged as a technique of choice for rapid feature extraction across imaging disciplines.
Here we explore the application of deep learning for feature extraction in atom-resolved electron microscopy.
This approach both brings uncertainty quantification into deep learning analysis and enables automated experiments, with human-operator or programmatic selection of networks from the ensemble substituting for retraining to compensate for out-of-distribution drift due to changing imaging conditions.
arXiv Detail & Related papers (2021-01-21T05:29:26Z)
- Indoor Point-to-Point Navigation with Deep Reinforcement Learning and Ultra-wideband [1.6799377888527687]
Moving obstacles and non-line-of-sight occurrences can generate noisy and unreliable signals.
We show how a power-efficient point-to-point local planner, learnt with deep reinforcement learning (RL), can constitute a complete short-range guidance solution that is robust and resilient to noise.
Our results show that the computationally efficient end-to-end policy, learnt purely in simulation, can provide a robust, scalable, and low-cost navigation solution at the edge.
arXiv Detail & Related papers (2020-11-18T12:30:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.