RandomNet: Towards Fully Automatic Neural Architecture Design for
Multimodal Learning
- URL: http://arxiv.org/abs/2003.01181v1
- Date: Mon, 2 Mar 2020 20:41:57 GMT
- Title: RandomNet: Towards Fully Automatic Neural Architecture Design for
Multimodal Learning
- Authors: Stefano Alletto, Shenyang Huang, Vincent Francois-Lavet, Yohei Nakata
and Guillaume Rabusseau
- Abstract summary: We study the effectiveness of a random search strategy for fully automated multimodal neural architecture search.
Compared to traditional methods that rely on manually crafted feature extractors, our method selects each modality from a large search space with minimal human supervision.
- Score: 7.5352209570833555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Almost all neural architecture search methods are evaluated in terms of
the performance (i.e. test accuracy) of the model structures they find. Should
this be the only metric for a good AutoML approach? To examine aspects beyond
performance, we propose a set of criteria aimed at evaluating the core of the
AutoML problem: the amount of human intervention required to deploy these
methods in real-world scenarios. Based on our proposed evaluation checklist,
we study the effectiveness of a random search strategy for fully automated
multimodal neural architecture search. Compared to traditional methods that
rely on manually crafted feature extractors, our method selects each modality
from a large search space with minimal human supervision. We show that our
proposed random search strategy performs close to the state of the art on the
AV-MNIST dataset while meeting the desirable characteristics for a fully
automated design process.
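As a sketch of the kind of procedure the abstract describes, pure random search over a multimodal architecture space fits in a few lines. The search space below (one extractor choice per modality, a fusion operator, a hidden width) and the `dummy_evaluate` stand-in for training and validating on AV-MNIST are illustrative assumptions, not the paper's actual space or pipeline.

```python
import random

# Hypothetical multimodal search space: one sub-network choice per modality
# plus a fusion operator. The concrete options are illustrative only.
SEARCH_SPACE = {
    "image_extractor": ["conv3x3_small", "conv3x3_deep", "resnet_block"],
    "audio_extractor": ["mlp_2layer", "conv1d", "lstm"],
    "fusion": ["concat", "sum", "attention"],
    "hidden_dim": [64, 128, 256],
}

def sample_architecture(rng):
    """Draw one architecture uniformly at random from the space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def random_search(evaluate, n_trials=10, seed=0):
    """Plain random search: sample, evaluate, keep the best."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

# Placeholder for training a model and measuring validation accuracy.
def dummy_evaluate(arch):
    return len(str(arch)) % 7 / 7.0  # deterministic stand-in score

best, score = random_search(dummy_evaluate, n_trials=20)
```

Because sampling is uniform and the evaluator is the only task-specific piece, the amount of human intervention reduces to defining the space itself, which is the property the evaluation checklist targets.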
Related papers
- Dynamic Design of Machine Learning Pipelines via Metalearning [1.1356542363919058]
This paper introduces a metalearning method for dynamically designing search spaces for AutoML systems.
The proposed method uses historical metaknowledge to select promising regions of the search space, accelerating the optimization process.
According to experiments conducted for this study, the proposed method can reduce runtime in Random Search by 89%.
arXiv Detail & Related papers (2025-08-19T01:33:33Z)
- MMSearch-R1: Incentivizing LMMs to Search [49.889749277236376]
We present MMSearch-R1, the first end-to-end reinforcement learning framework that enables on-demand, multi-turn search in real-world Internet environments.
Our framework integrates both image and text search tools, allowing the model to reason about when and how to invoke them, guided by an outcome-based reward with a search penalty.
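The outcome-based reward with a search penalty mentioned in the summary can be illustrated with a minimal sketch; the penalty value and the linear shape are assumptions for illustration, not the paper's actual formulation.

```python
# Minimal sketch of an outcome-based reward with a search penalty: the agent
# earns 1.0 for a correct final answer and pays a small fixed cost per search
# call, so unnecessary searches lower the return. The penalty value (0.1) is
# an illustrative assumption.
def reward(answer_correct: bool, n_search_calls: int, search_penalty: float = 0.1) -> float:
    outcome = 1.0 if answer_correct else 0.0
    return outcome - search_penalty * n_search_calls
```

Under this shaping, a correct answer reached with two searches returns 0.8 while one reached with none returns the full 1.0, which pushes the model to search only when needed.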
arXiv Detail & Related papers (2025-06-25T17:59:42Z)
- ZeroLM: Data-Free Transformer Architecture Search for Language Models [54.83882149157548]
Current automated proxy discovery approaches suffer from extended search times, susceptibility to data overfitting, and structural complexity.
This paper introduces a novel zero-cost proxy methodology that quantifies model capacity through efficient weight statistics.
Our evaluation demonstrates the superiority of this approach, achieving a Spearman's rho of 0.76 and Kendall's tau of 0.53 on the FlexiBERT benchmark.
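A zero-cost proxy that scores a network from weight statistics alone, with no training data, can be sketched as follows. The specific statistic (log of mean absolute weight, summed over layers) is a hypothetical stand-in, not ZeroLM's actual formula.

```python
import math
import random

def zero_cost_proxy(layers):
    """Score a network from weight statistics alone (no training data).

    layers: list of weight matrices, each a list of rows of floats.
    The statistic below is an illustrative stand-in, not ZeroLM's formula.
    """
    score = 0.0
    for w in layers:
        flat = [abs(x) for row in w for x in row]
        mean_abs = sum(flat) / len(flat)
        score += math.log(mean_abs + 1e-12)  # log keeps layer scales comparable
    return score

def random_layer(rows, cols, rng, scale=1.0):
    """Gaussian-initialized weight matrix with the given standard deviation."""
    return [[rng.gauss(0.0, scale) for _ in range(cols)] for _ in range(rows)]

rng = random.Random(0)
small_scale = [random_layer(4, 4, rng, scale=0.1)]
large_scale = [random_layer(4, 4, rng, scale=1.0)]
# Larger weight magnitudes yield a larger proxy score under this statistic.
```

A real proxy is then validated by rank-correlating such scores with trained accuracies across a benchmark, which is what the Spearman and Kendall numbers above measure.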
arXiv Detail & Related papers (2025-03-24T13:11:22Z)
- Fully Automated Correlated Time Series Forecasting in Minutes [31.198713853170375]
We propose a fully automated and highly efficient correlated time series forecasting framework.
It includes a data-driven, iterative strategy to automatically prune a large search space to obtain a high-quality search space for a new forecasting task.
Experiments on seven benchmark datasets offer evidence that the framework is capable of state-of-the-art accuracy and is much more efficient than existing methods.
arXiv Detail & Related papers (2024-11-06T09:02:13Z)
- A Comprehensive Comparative Study of Individual ML Models and Ensemble Strategies for Network Intrusion Detection Systems [1.1587112467663427]
We introduce an ensemble learning framework tailored for assessing individual models and ensemble methods in network intrusion detection tasks.
Our framework encompasses the loading of input datasets, training of individual models and ensemble methods, and the generation of evaluation metrics.
arXiv Detail & Related papers (2024-10-21T02:44:58Z)
- AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML [56.565200973244146]
Automated machine learning (AutoML) accelerates AI development by automating tasks in the development pipeline.
Recent works have started exploiting large language models (LLM) to lessen such burden.
This paper proposes AutoML-Agent, a novel multi-agent framework tailored for full-pipeline AutoML.
arXiv Detail & Related papers (2024-10-03T20:01:09Z)
- A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
NAS suffers from a key bottleneck: numerous architectures need to be evaluated during the search process.
We propose SMEM-NAS, a pairwise comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism.
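Ranking candidates by predicted pairwise relations, rather than fully evaluating each one, can be illustrated with a toy tournament. The comparator below simply compares a precomputed proxy value and is purely hypothetical; SMEM-NAS's actual surrogate is learned.

```python
# Illustrative tournament ranking from pairwise comparisons: a cheap
# comparator predicts which of two candidates is better, and candidates
# are ranked by their number of pairwise wins. The proxy-based comparator
# is a hypothetical stand-in for a learned surrogate.
def tournament_rank(population, better_than):
    """Return candidate indices ordered best-first by pairwise wins."""
    wins = {i: 0 for i in range(len(population))}
    for i in range(len(population)):
        for j in range(len(population)):
            if i != j and better_than(population[i], population[j]):
                wins[i] += 1
    return sorted(range(len(population)), key=lambda i: -wins[i])

# Hypothetical candidates; "proxy" stands in for the surrogate's estimate.
pop = [{"name": "a", "proxy": 0.3}, {"name": "b", "proxy": 0.9},
       {"name": "c", "proxy": 0.6}]
ranking = tournament_rank(pop, lambda x, y: x["proxy"] > y["proxy"])
# ranking is best-first: b, c, a
```

Only the candidates ranked near the top would then receive a full (expensive) training-based evaluation, which is how a pairwise surrogate eases the evaluation bottleneck.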
arXiv Detail & Related papers (2024-07-22T12:46:22Z)
- AutoXPCR: Automated Multi-Objective Model Selection for Time Series Forecasting [1.0515439489916734]
We propose AutoXPCR - a novel method for automated and explainable multi-objective model selection.
Our approach leverages meta-learning to estimate any model's performance along PCR criteria, which encompass (P)redictive error, (C)omplexity, and (R)esource demand.
Our method clearly outperforms other model selection approaches - on average, it only requires 20% of computation costs for recommending models with 90% of the best-possible quality.
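Aggregating the PCR criteria into a single model-selection score can be sketched as a weighted sum. The weights and the linear combination are assumptions for illustration; AutoXPCR's actual meta-learned estimator is more involved.

```python
# Illustrative aggregation of the PCR criteria: (P)redictive error,
# (C)omplexity, and (R)esource demand, all treated as normalized costs.
# The weights below are assumed for illustration only.
def pcr_score(pred_error, complexity, resource, weights=(0.6, 0.2, 0.2)):
    """Lower is better: all three criteria are costs in [0, 1]."""
    wp, wc, wr = weights
    return wp * pred_error + wc * complexity + wr * resource

# Hypothetical candidate models with normalized criterion values.
candidates = {
    "arima": pcr_score(0.30, 0.1, 0.1),
    "deep_net": pcr_score(0.15, 0.9, 0.8),
}
best = min(candidates, key=candidates.get)
```

With these weights the cheaper model wins despite its higher predictive error, which is exactly the trade-off a multi-objective selector is meant to surface.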
arXiv Detail & Related papers (2023-12-20T14:04:57Z)
- Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates estimation and planning components while automatically balancing exploration and exploitation.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z)
- AutoTransfer: AutoML with Knowledge Transfer -- An Application to Graph Neural Networks [75.11008617118908]
AutoML techniques consider each task independently from scratch, leading to high computational cost.
Here we propose AutoTransfer, an AutoML solution that improves search efficiency by transferring the prior architectural design knowledge to the novel task of interest.
arXiv Detail & Related papers (2023-03-14T07:23:16Z)
- AutoOD: Automated Outlier Detection via Curiosity-guided Search and Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z)
- Deep-n-Cheap: An Automated Search Framework for Low Complexity Deep Learning [3.479254848034425]
We present Deep-n-Cheap -- an open-source AutoML framework to search for deep learning models.
Our framework is targeted for deployment on both benchmark and custom datasets.
Deep-n-Cheap includes a user-customizable complexity penalty which trades off performance with training time or number of parameters.
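The user-customizable complexity penalty described above amounts to a search objective that trades performance against a complexity measure. The weighting scheme below (a linear penalty on parameter count) is an illustrative assumption, not the framework's exact formula.

```python
# Sketch of a complexity-penalized search objective: validation performance
# minus a user-tunable cost per parameter. A real framework could equally
# penalize training time; the penalty coefficient here is an assumption.
def penalized_objective(val_accuracy, n_params, penalty=1e-7):
    """Higher is better; `penalty` sets how much each parameter costs."""
    return val_accuracy - penalty * n_params

small_model = penalized_objective(0.90, n_params=1_000_000)
big_model = penalized_objective(0.92, n_params=50_000_000)
# With penalty=1e-7 the small model wins despite lower raw accuracy:
# 0.90 - 0.1 = 0.80  vs  0.92 - 5.0 = -4.08
```

Raising the penalty steers the search toward smaller, cheaper models; setting it to zero recovers a pure accuracy-driven search.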
arXiv Detail & Related papers (2020-03-27T13:00:21Z)
- CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus [62.86856923633923]
We present a robust estimator for fitting multiple parametric models of the same form to noisy measurements.
In contrast to previous works, which resorted to hand-crafted search strategies for multiple model detection, we learn the search strategy from data.
For self-supervised learning of the search, we evaluate the proposed algorithm on multi-homography estimation and demonstrate an accuracy that is superior to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-08T17:37:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.