Reinforcement Learning with Automated Auxiliary Loss Search
- URL: http://arxiv.org/abs/2210.06041v1
- Date: Wed, 12 Oct 2022 09:24:53 GMT
- Title: Reinforcement Learning with Automated Auxiliary Loss Search
- Authors: Tairan He, Yuge Zhang, Kan Ren, Minghuan Liu, Che Wang, Weinan Zhang,
Yuqing Yang, Dongsheng Li
- Abstract summary: We propose a principled and universal method for learning better representations with auxiliary loss functions.
Specifically, we define a general auxiliary loss space of size $7.5 \times 10^{20}$ and explore the space with an efficient evolutionary search strategy.
Empirical results show that the discovered auxiliary loss significantly improves the performance on both high-dimensional (image) and low-dimensional (vector) unseen tasks.
- Score: 34.83123677004838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A good state representation is crucial to solving complicated reinforcement
learning (RL) challenges. Many recent works focus on designing auxiliary losses
for learning informative representations. Unfortunately, these handcrafted
objectives rely heavily on expert knowledge and may be sub-optimal. In this
paper, we propose a principled and universal method for learning better
representations with auxiliary loss functions, named Automated Auxiliary Loss
Search (A2LS), which automatically searches for top-performing auxiliary loss
functions for RL. Specifically, based on the collected trajectory data, we
define a general auxiliary loss space of size $7.5 \times 10^{20}$ and explore
the space with an efficient evolutionary search strategy. Empirical results
show that the discovered auxiliary loss (namely, A2-winner) significantly
improves the performance on both high-dimensional (image) and low-dimensional
(vector) unseen tasks with much higher efficiency, showing promising
generalization ability to different settings and even different benchmark
domains. We conduct a statistical analysis to reveal the relations between
patterns of auxiliary losses and RL performance.
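To make the search loop concrete, here is a minimal, hypothetical Python sketch of the kind of evolutionary search the abstract describes: candidate auxiliary losses pair source and target elements from collected trajectories, and the population evolves by keeping top performers and mutating them. The candidate encoding, mutation scheme, and the `evaluate_rl_performance` stub are illustrative assumptions, not the authors' released implementation.

```python
import random

# Illustrative building blocks; the real A2LS space combines masked
# selections of states, actions, and rewards over multi-step horizons,
# which is how it reaches roughly 7.5e20 candidates.
ELEMENTS = ["s", "a", "r", "s_next"]
OPERATORS = ["mse", "inner_product", "cosine"]

def random_candidate():
    """Sample one auxiliary-loss candidate: predict targets from sources."""
    return {
        "sources": random.sample(ELEMENTS, k=random.randint(1, len(ELEMENTS))),
        "targets": random.sample(ELEMENTS, k=random.randint(1, len(ELEMENTS))),
        "horizon": random.randint(1, 10),
        "operator": random.choice(OPERATORS),
    }

def mutate(cand):
    """Return a copy of a candidate with one field perturbed."""
    child = dict(cand)
    field = random.choice(list(child))
    if field == "horizon":
        child["horizon"] = max(1, child["horizon"] + random.choice([-1, 1]))
    elif field == "operator":
        child["operator"] = random.choice(OPERATORS)
    else:
        child[field] = random.sample(ELEMENTS, k=random.randint(1, len(ELEMENTS)))
    return child

def evaluate_rl_performance(cand):
    """Placeholder: train an RL agent with this auxiliary loss and return
    its final score. In practice this is the expensive inner loop."""
    return random.random()

def evolutionary_search(pop_size=16, generations=5, elite_frac=0.25):
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate_rl_performance, reverse=True)
        elites = scored[: max(1, int(elite_frac * pop_size))]
        # Refill the population with mutated copies of the elites.
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - len(elites))]
    return max(population, key=evaluate_rl_performance)

print("best candidate:", evolutionary_search())
```

Because each fitness evaluation amounts to a full RL training run, the practical question is how few evaluations the search can get away with, hence the abstract's emphasis on an efficient search strategy.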
Related papers
- Towards Robust Out-of-Distribution Generalization: Data Augmentation and Neural Architecture Search Approaches [4.577842191730992]
We study ways toward robust OoD generalization for deep learning.
We first propose a novel and effective approach to disentangle spurious correlations between features that are not essential for recognition.
We then study the problem of strengthening neural architecture search in OoD scenarios.
arXiv Detail & Related papers (2024-10-25T20:50:32Z)
- Reinforcement Learning with Intrinsically Motivated Feedback Graph for Lost-sales Inventory Control [12.832009040635462]
Reinforcement learning (RL) has proven to perform well and be general-purpose in the inventory control (IC) domain.
However, online experience is expensive to acquire in real-world applications, and it may not reflect the true demand because of the lost-sales phenomenon typical in IC.
arXiv Detail & Related papers (2024-06-26T13:52:47Z)
- MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models [10.10825306582544]
We propose MEta Loss TRansformer (MELTR), a plug-in module that automatically and non-linearly combines various loss functions to aid learning the target task via auxiliary learning.
For evaluation, we apply our framework to various video foundation models (UniVL, Violet, and All-in-one) and show significant performance gains on all four downstream tasks.
arXiv Detail & Related papers (2023-03-23T03:06:44Z)
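As a rough illustration of learning a non-linear combination of loss functions, the following hypothetical PyTorch sketch maps a vector of per-objective loss values to a single training loss through a small learned network. MELTR itself uses a Transformer and a bi-level optimization scheme; the MLP combiner and the dummy loss values here are simplifying assumptions.

```python
import torch
import torch.nn as nn

class LossCombiner(nn.Module):
    """Maps several auxiliary loss values to one scalar training loss.
    MELTR uses a Transformer for this mapping; a tiny MLP stands in here."""
    def __init__(self, num_losses, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_losses, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, loss_values):
        # loss_values: shape (num_losses,), one entry per objective
        return self.net(loss_values).squeeze()

combiner = LossCombiner(num_losses=3)
aux_losses = torch.tensor([0.7, 1.2, 0.3])   # dummy per-objective loss values
total_loss = combiner(aux_losses)             # optimized jointly with the model
print(total_loss.item())
```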
- Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data [100.33096338195723]
We focus on Few-shot Learning with Auxiliary Data (FLAD).
FLAD assumes access to auxiliary data during few-shot learning in hopes of improving generalization.
We propose two algorithms -- EXP3-FLAD and UCB1-FLAD -- and compare them with prior FLAD methods that either explore or exploit.
arXiv Detail & Related papers (2023-02-01T18:59:36Z)
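A minimal sketch of the UCB1 flavor of this idea follows, under the assumption that each auxiliary dataset is a bandit arm whose reward reflects how much a batch drawn from it helps the target task; the reward signal below is a stub, whereas the paper derives it from training feedback.

```python
import math
import random

def ucb1_flad(num_datasets, rounds, reward_fn):
    """UCB1 over auxiliary datasets: play each arm once, then pick the arm
    maximizing mean reward plus an exploration bonus."""
    counts = [0] * num_datasets
    means = [0.0] * num_datasets
    for t in range(1, rounds + 1):
        if t <= num_datasets:
            arm = t - 1                      # initialization: try every dataset
        else:
            arm = max(range(num_datasets),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = reward_fn(arm)                   # benefit of sampling from dataset `arm`
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]   # incremental mean update
    return counts

# Stubbed reward: pretend higher-indexed datasets are slightly more useful.
picks = ucb1_flad(num_datasets=4, rounds=200,
                  reward_fn=lambda i: random.random() + 0.1 * i)
print("selection counts per auxiliary dataset:", picks)
```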
- A survey and taxonomy of loss functions in machine learning [60.41650195728953]
Most state-of-the-art machine learning techniques revolve around the optimisation of loss functions.
This survey aims to provide a reference of the most essential loss functions for both beginner and advanced machine learning practitioners.
arXiv Detail & Related papers (2023-01-13T14:38:24Z)
- Return-Based Contrastive Representation Learning for Reinforcement Learning [126.7440353288838]
We propose a novel auxiliary task that forces the learnt representations to discriminate state-action pairs with different returns.
Our algorithm outperforms strong baselines on complex tasks in Atari games and the DeepMind Control Suite.
arXiv Detail & Related papers (2021-02-22T13:04:18Z)
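To illustrate what discriminating state-action pairs by return can look like as a loss, here is a hypothetical contrastive objective in PyTorch: embeddings of transitions with similar returns are pulled together, while dissimilar-return pairs are pushed beyond a margin. The pairing rule, threshold, and margin are assumptions for illustration, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def return_contrastive_loss(embeddings, returns, threshold=1.0, margin=2.0):
    """embeddings: (N, d) encoder outputs for state-action pairs;
    returns: (N,) the observed return of each pair."""
    dist = torch.cdist(embeddings, embeddings)                 # pairwise L2 distances
    similar = (returns[:, None] - returns[None, :]).abs() < threshold
    pos = dist[similar].mean()                                 # similar returns: pull close
    neg_mask = ~similar
    if neg_mask.any():                                         # different returns: push apart
        neg = F.relu(margin - dist[neg_mask]).mean()
    else:
        neg = dist.new_zeros(())
    return pos + neg

emb = torch.randn(8, 16)          # dummy embeddings
rets = torch.randn(8) * 3.0       # dummy returns
print(return_contrastive_loss(emb, rets).item())
```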
- Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search [101.73248560009124]
We propose an effective convergence-simulation driven evolutionary search algorithm, CSE-Autoloss, for speeding up the search process.
We conduct extensive evaluations of loss function search on popular detectors and validate the good generalization capability of searched losses.
Our experiments show that the best-discovered loss function combinations outperform default combinations by 1.1% and 0.8% mAP for two-stage and one-stage detectors, respectively.
arXiv Detail & Related papers (2021-02-09T08:34:52Z)
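The convergence-simulation idea can be illustrated with a toy filter: before any expensive detector training, take a handful of cheap optimization steps with each candidate loss on a trivial problem and discard candidates that diverge. The 1-D toy problem, finite-difference gradient, and thresholds below are illustrative assumptions rather than the CSE-Autoloss procedure itself.

```python
import math

def converges_in_simulation(loss_fn, steps=50, lr=0.5):
    """Cheap pre-check: run a few gradient steps of the candidate loss on a
    1-D toy problem; reject candidates that blow up or go non-finite."""
    w, eps = 1.0, 1e-4
    for _ in range(steps):
        grad = (loss_fn(w + eps) - loss_fn(w - eps)) / (2 * eps)  # finite difference
        w -= lr * grad
        if not math.isfinite(w) or abs(w) > 1e6:
            return False
    return True

# Toy candidates: a well-behaved loss, a divergent one, and a kinked one.
candidates = [lambda w: w ** 2, lambda w: -(w ** 2), lambda w: abs(w)]
survivors = [f for f in candidates if converges_in_simulation(f)]
print(f"{len(survivors)} of {len(candidates)} candidates pass the cheap filter")
```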
- Loss Function Search for Face Recognition [75.79325080027908]
We develop a reward-guided search method to automatically obtain the best candidate loss function.
Experimental results on a variety of face recognition benchmarks have demonstrated the effectiveness of our method.
arXiv Detail & Related papers (2020-07-10T03:40:10Z)