IDENAS: Internal Dependency Exploration for Neural Architecture Search
- URL: http://arxiv.org/abs/2310.17250v1
- Date: Thu, 26 Oct 2023 08:58:29 GMT
- Title: IDENAS: Internal Dependency Exploration for Neural Architecture Search
- Authors: Anh T. Hoang, Zsolt J. Viharos
- Abstract summary: Neural Architecture Search (NAS) and Feature Selection have emerged as promising solutions in such scenarios.
This research proposes IDENAS, an Internal Dependency-based Exploration for Neural Architecture Search, integrating NAS with feature selection.
The methodology explores internal dependencies in the complete parameter space for classification tasks involving both 1D sensor and 2D image data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning is a powerful tool for extracting valuable information and
making various predictions from diverse datasets. Traditional algorithms rely
on well-defined input and output variables; however, there are scenarios where
the distinction between the input and output variables, and between the underlying,
associated (input and output) layers of the model, is unknown. Neural
Architecture Search (NAS) and Feature Selection have emerged as promising
solutions in such scenarios. This research proposes IDENAS, an Internal
Dependency-based Exploration for Neural Architecture Search, integrating NAS
with feature selection. The methodology explores internal dependencies in the
complete parameter space for classification involving both 1D sensor and 2D
image data. IDENAS employs a modified encoder-decoder model and the
Sequential Forward Search (SFS) algorithm, combining input-output configuration
search with embedded feature selection. Experimental results demonstrate
IDENAS's superior performance in comparison to other algorithms, showcasing its
effectiveness in model development pipelines and automated machine learning. On
average, IDENAS achieved significant modelling improvements, underscoring its
contribution to advancing the state-of-the-art in neural
architecture search and feature selection integration.
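The core loop described above, Sequential Forward Search scored by a small neural model, can be sketched as follows. This is a minimal illustration only: the dataset, model size, scoring signal, and stopping rule are assumptions for the demo, not the authors' exact encoder-decoder configuration.

```python
# Minimal sketch of Sequential Forward Search (SFS) with a neural scorer.
# Assumptions: sklearn digits data, a tiny MLP, 3-fold CV accuracy as the
# search signal, and a cap of 8 selected features to keep the demo fast.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

def score_subset(features):
    """Cross-validated accuracy of a small network on one input configuration."""
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    return cross_val_score(model, X[:, features], y, cv=3).mean()

selected, remaining, best = [], list(range(X.shape[1])), -np.inf
while remaining and len(selected) < 8:
    # Greedy step: evaluate every single-feature extension of the current set.
    scores = {f: score_subset(selected + [f]) for f in remaining}
    f_best = max(scores, key=scores.get)
    if scores[f_best] <= best:        # stop once no extension improves the score
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best = scores[f_best]

print(f"selected inputs: {selected}, CV accuracy: {best:.3f}")
```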
Related papers
- Combatting Dimensional Collapse in LLM Pre-Training Data via Diversified File Selection [65.96556073745197]
DiverSified File selection algorithm (DiSF) is proposed to select the most decorrelated text files in the feature space.
DiSF saves 98.5% of 590M training files in SlimPajama, outperforming the full-data pre-training within a 50B training budget.
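A minimal sketch of the diversified-selection idea just summarized: greedily keep the file whose embedding is least correlated with everything already kept. The random embeddings and the max-min cosine rule are stand-in assumptions; DiSF's actual criterion in feature space may differ.

```python
# Greedy decorrelated selection over stand-in file embeddings (assumed data).
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 64))                   # one embedding per text file
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit norm -> dot = cosine

budget, selected = 50, [0]                         # seed with an arbitrary file
while len(selected) < budget:
    sims = np.abs(emb @ emb[selected].T)           # |cosine| to each kept file
    closeness = sims.max(axis=1)                   # similarity to nearest pick
    closeness[selected] = np.inf                   # never re-pick a kept file
    selected.append(int(np.argmin(closeness)))     # keep most decorrelated file

print(f"kept {len(selected)} of {len(emb)} files")
```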
arXiv Detail & Related papers (2025-04-29T11:13:18Z)
- A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
NAS suffers from a key bottleneck, i.e., numerous architectures need to be evaluated during the search process.
We propose SMEM-NAS, a pairwise comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism.
arXiv Detail & Related papers (2024-07-22T12:46:22Z)
- Efficient Network Traffic Feature Sets for IoT Intrusion Detection [0.0]
This work evaluates the feature sets provided by a combination of different feature selection methods, namely Information Gain, Chi-Squared Test, Recursive Feature Elimination, Mean Absolute Deviation, and Dispersion Ratio, in multiple IoT network datasets.
The influence of the smaller feature sets on both the classification performance and the training time of ML models is compared, with the aim of increasing the computational efficiency of IoT intrusion detection.
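The comparison described above can be reproduced in outline with scikit-learn, which provides equivalents for most of the listed filters (mutual_info_classif for Information Gain, chi2 for the Chi-Squared test, RFE for Recursive Feature Elimination); Mean Absolute Deviation and Dispersion Ratio are computed directly. The synthetic data is a stand-in for the IoT traffic datasets.

```python
# Sketch: rank features with several of the filter methods named above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, chi2, mutual_info_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           random_state=0)
Xp = X - X.min(axis=0) + 1e-9                # chi2 needs non-negative inputs

info_gain = mutual_info_classif(X, y, random_state=0)
chi2_stat, _ = chi2(Xp, y)
mad = np.mean(np.abs(X - X.mean(axis=0)), axis=0)         # Mean Absolute Deviation
disp = Xp.mean(axis=0) / np.exp(np.log(Xp).mean(axis=0))  # arithmetic/geometric mean
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10).fit(X, y)

for name, s in [("InfoGain", info_gain), ("Chi2", chi2_stat),
                ("MAD", mad), ("DispersionRatio", disp)]:
    print(f"{name}: top-10 features {np.argsort(s)[::-1][:10]}")
print("RFE keeps:", np.where(rfe.support_)[0])
```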
arXiv Detail & Related papers (2024-06-12T09:51:29Z)
- Multiple-Input Auto-Encoder Guided Feature Selection for IoT Intrusion Detection Systems [30.16714420093091]
This paper first introduces a novel neural network architecture called Multiple-Input Auto-Encoder (MIAE).
MIAE consists of multiple sub-encoders that can process inputs from different sources with different characteristics.
To distil and retain more relevant features but remove less important/redundant ones during the training process, we further design and embed a feature selection layer.
This layer learns the importance of features in the representation vector, facilitating the selection of informative features from the representation vector.
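A hedged PyTorch sketch of the architecture just described: one sub-encoder per input source, a shared representation, and a learnable importance vector acting as the embedded feature-selection layer. Layer sizes, the sigmoid gating, and the reconstruction loss are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of a Multiple-Input Auto-Encoder with an embedded selection layer.
import torch
import torch.nn as nn

class MIAESketch(nn.Module):
    def __init__(self, source_dims, latent_dim=16):
        super().__init__()
        # One sub-encoder per data source (sources may differ in dimension).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, latent_dim))
            for d in source_dims)
        rep_dim = latent_dim * len(source_dims)
        # Feature-selection layer: a learned importance weight per dimension.
        self.importance = nn.Parameter(torch.zeros(rep_dim))
        self.decoder = nn.Linear(rep_dim, sum(source_dims))

    def forward(self, inputs):
        z = torch.cat([enc(x) for enc, x in zip(self.encoders, inputs)], dim=-1)
        z = z * torch.sigmoid(self.importance)   # soft gating of representation
        return self.decoder(z), z

model = MIAESketch(source_dims=[20, 12])
batch = [torch.randn(8, 20), torch.randn(8, 12)]
recon, rep = model(batch)
loss = nn.functional.mse_loss(recon, torch.cat(batch, dim=-1))
loss.backward()   # importance weights train jointly with encoders and decoder
```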
arXiv Detail & Related papers (2024-03-22T03:54:04Z)
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- Multi-objective Differentiable Neural Architecture Search [58.67218773054753]
We propose a novel NAS algorithm that encodes user preferences for the trade-off between performance and hardware metrics.
Our method outperforms existing MOO NAS methods across a broad range of qualitatively different search spaces and datasets.
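One simple way to encode such a preference, shown below, is linear scalarization: a user weight trades accuracy against a normalized hardware metric so that each architecture receives a single utility value. The paper's actual preference encoding is more sophisticated; the numbers here are invented for illustration.

```python
# Sketch: preference-weighted scalarization of accuracy vs. latency.
def utility(acc, latency_ms, pref, latency_scale=100.0):
    """pref in [0, 1]: 1.0 cares only about accuracy, 0.0 only about latency."""
    return pref * acc - (1.0 - pref) * latency_ms / latency_scale

# Stand-in (accuracy, latency) results for three evaluated architectures.
candidates = [(0.76, 20.0), (0.79, 45.0), (0.81, 90.0)]
for pref in (0.2, 0.5, 0.9):
    acc, lat = max(candidates, key=lambda c: utility(c[0], c[1], pref))
    print(f"pref={pref}: choose acc={acc}, latency={lat} ms")
```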
arXiv Detail & Related papers (2024-02-28T10:09:04Z)
- Evolution and Efficiency in Neural Architecture Search: Bridging the Gap Between Expert Design and Automated Optimization [1.7385545432331702]
The paper provides a comprehensive overview of Neural Architecture Search.
It emphasizes its evolution from manual design to automated, computationally-driven approaches.
It highlights its application across various domains, including medical imaging and natural language processing.
arXiv Detail & Related papers (2024-02-11T18:27:29Z)
- AFS-BM: Enhancing Model Performance through Adaptive Feature Selection with Binary Masking [0.0]
We introduce the "Adaptive Feature Selection with Binary Masking" (AFS-BM)
We do the joint optimization and binary masking to continuously adapt the set of features and model parameters during the training process.
Our results show that AFS-BM makes significant improvement in terms of accuracy and requires significantly less computational complexity.
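A hedged sketch of joint optimization with a binary mask: per-feature gate logits are trained alongside the model, a straight-through estimator makes the hard 0/1 mask differentiable, and a sparsity penalty prunes features during training. The straight-through trick and the penalty weight are common choices assumed here, not necessarily the paper's.

```python
# Sketch: adaptive feature selection via a trained binary mask.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_features = 30
X = torch.randn(256, n_features)
y = (X[:, 0] + X[:, 1] > 0).long()               # only features 0 and 1 matter

logits = nn.Parameter(torch.zeros(n_features))   # one gate logit per feature
model = nn.Linear(n_features, 2)
opt = torch.optim.Adam([logits, *model.parameters()], lr=1e-2)

for _ in range(300):
    p = torch.sigmoid(logits)
    mask = (p > 0.5).float() + p - p.detach()    # straight-through binary mask
    loss = nn.functional.cross_entropy(model(X * mask), y) + 1e-2 * p.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

kept = torch.nonzero(torch.sigmoid(logits) > 0.5).squeeze(-1).tolist()
print("features kept after training:", kept)
```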
arXiv Detail & Related papers (2024-01-20T15:09:41Z)
- Contrastive Transformer Learning with Proximity Data Generation for Text-Based Person Search [60.626459715780605]
Given a descriptive text query, text-based person search aims to retrieve the best-matched target person from an image gallery.
Such a cross-modal retrieval task is quite challenging due to significant modality gap, fine-grained differences and insufficiency of annotated data.
In this paper, we propose a simple yet effective dual Transformer model for text-based person search.
arXiv Detail & Related papers (2023-11-15T16:26:49Z)
- POPNASv3: a Pareto-Optimal Neural Architecture Search Solution for Image and Time Series Classification [8.190723030003804]
This article presents the third version of a sequential model-based NAS algorithm targeting different hardware environments and multiple classification tasks.
Our method is able to find competitive architectures within large search spaces, while keeping a flexible structure and data processing pipeline to adapt to different tasks.
The experiments performed on images and time series classification datasets provide evidence that POPNASv3 can explore a large set of assorted operators and converge to optimal architectures suited for the type of data provided under different scenarios.
arXiv Detail & Related papers (2022-12-13T17:14:14Z)
- Transformer-based Context Condensation for Boosting Feature Pyramids in Object Detection [77.50110439560152]
Current object detectors typically have a feature pyramid (FP) module for multi-level feature fusion (MFF).
We propose a novel and efficient context modeling mechanism that can help existing FPs deliver better MFF results.
In particular, we introduce a novel insight that comprehensive contexts can be decomposed and condensed into two types of representations for higher efficiency.
arXiv Detail & Related papers (2022-07-14T01:45:03Z)
- Concurrent Neural Tree and Data Preprocessing AutoML for Image Classification [0.5735035463793008]
Current state-of-the-art (SOTA) methods do not include traditional methods for manipulating input data as part of the algorithmic search space.
We adapt the Evolutionary Multi-objective Algorithm Design Engine (EMADE), a multi-objective evolutionary search framework for traditional machine learning methods, to perform neural architecture search.
We show that including these methods in the search space has the potential to improve performance on the CIFAR-10 image classification benchmark dataset.
arXiv Detail & Related papers (2022-05-25T20:03:09Z)
- EmotionNAS: Two-stream Neural Architecture Search for Speech Emotion Recognition [48.71010404625924]
We propose a two-stream neural architecture search framework, called "EmotionNAS".
Specifically, we take two-stream features (i.e., handcrafted and deep features) as the inputs, followed by NAS to search for the optimal structure for each stream.
Experimental results demonstrate that our method outperforms existing manually-designed and NAS-based models.
arXiv Detail & Related papers (2022-03-25T12:35:44Z)
- Towards Tailored Models on Private AIoT Devices: Federated Direct Neural Architecture Search [22.69123714900226]
We propose a Federated Direct Neural Architecture Search (FDNAS) framework that allows for hardware-friendly NAS from non-IID data across devices.
Experiments on non-IID datasets have shown the state-of-the-art accuracy-efficiency trade-offs achieved by the proposed solution.
arXiv Detail & Related papers (2022-02-23T13:10:01Z)
- Compactness Score: A Fast Filter Method for Unsupervised Feature Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named Compactness Score (CSUFS), to select desired features.
Our proposed algorithm seems to be more accurate and efficient compared with existing algorithms.
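The exact CSUFS score is not reproduced here; the sketch below only illustrates the general flavor of a fast compactness-style filter: cluster the samples once, then rank each feature by its within-cluster variance relative to its total variance, so that lower ratios mark features that keep clusters compact.

```python
# Sketch of a compactness-style unsupervised filter (generic stand-in score).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

ratios = []
for j in range(X.shape[1]):
    col = X[:, j]
    within = np.mean([col[labels == k].var() for k in np.unique(labels)])
    ratios.append(within / (col.var() + 1e-12))   # low ratio = compact feature

print("features ranked most-compact first:", np.argsort(ratios))
```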
arXiv Detail & Related papers (2022-01-31T13:01:37Z)
- RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
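A minimal sketch of the proxy-based setup just described: fit a surrogate to the static (input, score) data, then run gradient ascent on the input. This naive version is precisely where adversarially optimized inputs arise, the failure mode that robust adaptation methods target; the toy objective and network are assumptions.

```python
# Sketch: offline model-based optimization with a learned proxy.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(512, 4)                               # static input queries
y = -(X - 0.3).pow(2).sum(dim=1, keepdim=True)       # hidden black-box scores

proxy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(proxy.parameters(), lr=1e-3)
for _ in range(500):                                 # fit proxy to the dataset
    opt.zero_grad()
    nn.functional.mse_loss(proxy(X), y).backward()
    opt.step()

x = X[y.argmax()].clone().requires_grad_(True)       # start from best known input
for _ in range(100):                                 # naive ascent on the proxy
    proxy(x).backward()
    with torch.no_grad():
        x += 0.01 * x.grad
        x.grad = None

print("proposed input:", x.detach())
```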
arXiv Detail & Related papers (2021-10-27T05:37:12Z)
- Differentiable NAS Framework and Application to Ads CTR Prediction [30.74403362212425]
We implement an inference and modular framework for Differentiable Neural Architecture Search (DNAS).
We apply DNAS to the problem of ads click-through rate (CTR) prediction, arguably the highest-value and most worked-on AI problem at hyperscalers today.
We develop and tailor novel search spaces to a Deep Learning Recommendation Model (DLRM) backbone for CTR prediction, and report state-of-the-art results on the Criteo Kaggle CTR prediction dataset.
arXiv Detail & Related papers (2021-10-25T05:46:27Z)
- DrNAS: Dirichlet Neural Architecture Search [88.56953713817545]
We treat the continuously relaxed architecture mixing weights as random variables modeled by a Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimizers.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
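A hedged sketch of the Dirichlet relaxation: the mixing weights over candidate operations are sampled from a Dirichlet with learnable concentrations, and PyTorch's pathwise rsample() lets gradients reach those parameters. The toy operations and regression loss stand in for a real supernet and the paper's full training scheme (including its progressive memory-saving stage).

```python
# Sketch: Dirichlet-distributed architecture mixing weights.
import torch
import torch.nn as nn

torch.manual_seed(0)
ops = nn.ModuleList([nn.Identity(), nn.Linear(8, 8), nn.Tanh()])  # candidate ops
log_conc = nn.Parameter(torch.zeros(len(ops)))    # log Dirichlet concentrations
opt = torch.optim.Adam([log_conc, *ops.parameters()], lr=1e-2)

x, target = torch.randn(16, 8), torch.randn(16, 8)
for _ in range(200):
    w = torch.distributions.Dirichlet(log_conc.exp()).rsample()  # pathwise sample
    out = sum(wi * op(x) for wi, op in zip(w, ops))   # mixed-operation output
    loss = nn.functional.mse_loss(out, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("expected mixing weights:", torch.distributions.Dirichlet(log_conc.exp()).mean)
```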
arXiv Detail & Related papers (2020-06-18T08:23:02Z)
- Progressive Automatic Design of Search Space for One-Shot Neural Architecture Search [15.017964136568061]
It has been observed that a model with higher one-shot model accuracy does not necessarily perform better when trained stand-alone.
We propose Progressive Automatic Design of search space, named PAD-NAS.
In this way, PAD-NAS can automatically design the operations for each layer and achieve a trade-off between search space quality and model diversity.
arXiv Detail & Related papers (2020-05-15T14:21:07Z)
- ASFD: Automatic and Scalable Face Detector [129.82350993748258]
We propose a novel Automatic and Scalable Face Detector (ASFD).
ASFD is based on a combination of neural architecture search techniques as well as a new loss design.
Our ASFD-D6 outperforms the prior strong competitors, and our lightweight ASFD-D0 runs at more than 120 FPS with Mobilenet for VGA-resolution images.
arXiv Detail & Related papers (2020-03-25T06:00:47Z)
- ARDA: Automatic Relational Data Augmentation for Machine Learning [23.570173866941612]
We present ARDA, an end-to-end system that takes as input a dataset and a data repository, and outputs an augmented data set.
Our system has two distinct components: (1) a framework to search and join data with the input data, based on various attributes of the input, and (2) an efficient feature selection algorithm that prunes out noisy or irrelevant features from the resulting join.
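A compact sketch of the two components just listed, with a mocked repository: pandas performs the join, and a simple wrapper-style check drops joined features that fail to improve cross-validated score. Table contents, the model, and the improvement margin are all invented for illustration.

```python
# Sketch: relational augmentation (join) followed by feature pruning.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
base = pd.DataFrame({"key": range(300), "x0": rng.normal(size=300)})
repo = pd.DataFrame({"key": range(300),
                     "useful": rng.normal(size=300),
                     "noise": rng.normal(size=300)})
y = ((base["x0"] + repo["useful"]) > 0).astype(int)

joined = base.merge(repo, on="key")               # component (1): join
clf = LogisticRegression(max_iter=1000)
kept = ["x0"]
score = cross_val_score(clf, joined[kept], y, cv=5).mean()
for col in ["useful", "noise"]:                   # component (2): prune features
    trial = cross_val_score(clf, joined[kept + [col]], y, cv=5).mean()
    if trial > score + 0.01:                      # assumed improvement margin
        kept.append(col)
        score = trial
print("kept features:", kept, "CV accuracy:", round(score, 3))
```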
arXiv Detail & Related papers (2020-03-21T21:55:22Z)
- NAS-Count: Counting-by-Density with Neural Architecture Search [74.92941571724525]
We automate the design of counting models with Neural Architecture Search (NAS).
We introduce an end-to-end searched encoder-decoder architecture, Automatic Multi-Scale Network (AMSNet).
arXiv Detail & Related papers (2020-02-29T09:18:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.