Lexicase Selection at Scale
- URL: http://arxiv.org/abs/2208.10719v1
- Date: Tue, 23 Aug 2022 03:58:47 GMT
- Title: Lexicase Selection at Scale
- Authors: Li Ding, Ryan Boldi, Thomas Helmuth, Lee Spector
- Abstract summary: Lexicase selection is a semantic-aware parent selection method, which assesses individual test cases in a randomly-shuffled data stream.
One potential drawback of lexicase selection and its variants is that the selection procedure requires evaluating training cases in a single data stream.
We propose a novel method, fast lexicase selection, which incorporates lexicase selection and weighted shuffle with partial evaluation.
- Score: 5.4968949435821735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lexicase selection is a semantic-aware parent selection method, which assesses individual test cases in a randomly-shuffled data stream. It has demonstrated success in multiple research areas including genetic programming, genetic algorithms, and more recently symbolic regression and deep learning. One potential drawback of lexicase selection and its variants is that the selection procedure requires evaluating training cases in a single data stream, making it difficult to handle tasks where the evaluation is computationally heavy or the dataset is large-scale, e.g., deep learning. In this work, we investigate how weighted shuffle methods can be employed to improve the efficiency of lexicase selection. We propose a novel method, fast lexicase selection, which incorporates lexicase selection and weighted shuffle with partial evaluation. Experiments on both classic genetic programming and deep learning tasks indicate that the proposed method can significantly reduce the number of evaluation steps needed for lexicase selection to select an individual, improving its efficiency while maintaining performance.
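To make the mechanism concrete, here is a minimal Python sketch of plain lexicase selection alongside a fast variant combining a weighted shuffle (historically discriminative cases are tried first) with partial evaluation (error values are computed lazily, only for cases actually reached). The weighting heuristic, caching scheme, and all names are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def lexicase_select(population, errors):
    """Plain lexicase selection.

    errors[i][j] is the error of individual i on training case j,
    assumed precomputed for the whole population.
    """
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    random.shuffle(cases)                      # uniform random case order
    for c in cases:
        if len(candidates) == 1:
            break
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
    return population[random.choice(candidates)]

def weighted_shuffle(items, weights):
    """Sample items without replacement, proportional to weight."""
    items, weights = list(items), list(weights)
    out = []
    while items:
        k = random.choices(range(len(items)), weights=weights)[0]
        out.append(items.pop(k))
        weights.pop(k)
    return out

def fast_lexicase_select(population, evaluate, num_cases, case_weights):
    """Fast lexicase sketch: weighted shuffle + partial (lazy) evaluation.

    evaluate(individual, case) computes one error value on demand, so
    individuals filtered out early are never evaluated on later cases.
    case_weights biases the shuffle toward discriminative cases
    (an assumed heuristic).
    """
    candidates = list(range(len(population)))
    cache = {}                                 # (individual, case) -> error
    for c in weighted_shuffle(range(num_cases), case_weights):
        if len(candidates) == 1:
            break
        for i in candidates:
            if (i, c) not in cache:
                cache[(i, c)] = evaluate(population[i], c)
        best = min(cache[(i, c)] for i in candidates)
        candidates = [i for i in candidates if cache[(i, c)] == best]
    return population[random.choice(candidates)]
```

Because most selection events terminate after only a few cases, the lazy evaluation avoids computing the full error matrix up front, which is where the savings on expensive evaluations would come from.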
Related papers
- Lexicase-based Selection Methods with Down-sampling for Symbolic Regression Problems: Overview and Benchmark [0.8602553195689513]
This paper evaluates random as well as informed down-sampling in combination with the relevant lexicase-based selection methods on a wide range of symbolic regression problems.
We find that for a given evaluation budget, epsilon-lexicase selection in combination with random or informed down-sampling outperforms all other methods.
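As a hedged illustration of how down-sampling composes with a lexicase-style selector, the sketch below runs epsilon-lexicase on a random subsample of the training cases. The median-absolute-deviation epsilon follows the common formulation of epsilon-lexicase; the sampling rate is an arbitrary assumption, not a value from the cited benchmark.

```python
import random
import statistics

def downsampled_epsilon_lexicase(population, errors, sample_rate=0.1):
    """Epsilon-lexicase on a random down-sample of training cases.

    errors[i][j]: error of individual i on case j. Each selection event
    only inspects a fraction of the cases, cutting evaluation cost.
    """
    n_cases = len(errors[0])
    k = max(1, int(sample_rate * n_cases))
    cases = random.sample(range(n_cases), k)   # random down-sampling
    random.shuffle(cases)
    candidates = list(range(len(population)))
    for c in cases:
        if len(candidates) == 1:
            break
        col = [errors[i][c] for i in candidates]
        best = min(col)
        # epsilon = median absolute deviation of this case's errors
        med = statistics.median(col)
        eps = statistics.median(abs(e - med) for e in col)
        candidates = [i for i in candidates if errors[i][c] <= best + eps]
    return population[random.choice(candidates)]
```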
arXiv Detail & Related papers (2024-07-31T14:26:22Z)
- Diversified Batch Selection for Training Acceleration [68.67164304377732]
A prevalent research line, known as online batch selection, explores selecting informative subsets during the training process.
Vanilla reference-model-free methods score and select data independently, in a sample-wise manner.
We propose Diversified Batch Selection (DivBS), which is reference-model-free and can efficiently select diverse and representative samples.
arXiv Detail & Related papers (2024-06-07T12:12:20Z)
- Feature Selection as Deep Sequential Generative Learning [50.00973409680637]
We develop a deep variational transformer model trained jointly with sequential reconstruction, variational, and performance evaluator losses.
Our model can distill feature selection knowledge and learn a continuous embedding space to map feature selection decision sequences into embedding vectors associated with utility scores.
arXiv Detail & Related papers (2024-03-06T16:31:56Z)
- DALex: Lexicase-like Selection via Diverse Aggregation [6.394522608656896]
Results on program synthesis, deep learning, symbolic regression, and learning classifier systems demonstrate that DALex (for Diversely Aggregated Lexicase) achieves significant speedups over lexicase selection and its relaxed variants.
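A speculative sketch of the diverse-aggregation idea: each selection event samples one random importance weight per training case, aggregates each individual's errors into a single weighted sum, and selects the argmin, turning selection into one matrix-vector product. The softmax-of-Gaussian weighting and the `pressure` parameter are assumptions about the cited method, not its confirmed details.

```python
import numpy as np

def dalex_select(population, errors, pressure=20.0, rng=None):
    """Lexicase-like selection via diverse aggregation (sketch).

    errors: (n_individuals, n_cases) NumPy array. Larger `pressure`
    makes the softmax weights spikier, approaching lexicase's strict
    case priorities; smaller values behave like an average of cases.
    """
    rng = rng or np.random.default_rng()
    logits = rng.normal(0.0, pressure, size=errors.shape[1])
    w = np.exp(logits - logits.max())
    w /= w.sum()                      # softmax -> per-case weights
    agg = errors @ w                  # one weighted error per individual
    return population[int(np.argmin(agg))]
```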
arXiv Detail & Related papers (2024-01-23T01:20:15Z)
- A Contrast Based Feature Selection Algorithm for High-dimensional Data set in Machine Learning [9.596923373834093]
We propose a novel filter feature selection method, ContrastFS, which selects discriminative features based on the discrepancies that features show between different classes.
We validate the effectiveness and efficiency of our approach on several widely studied benchmark datasets; results show that the new method performs favorably at negligible computational cost.
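The exact statistic in the cited paper may differ, but a contrast-based filter score can be sketched as ranking features by how far apart their class-conditional means sit relative to within-class spread; everything below is an illustrative assumption.

```python
import numpy as np

def contrast_scores(X, y, eps=1e-12):
    """Score each feature by between-class mean discrepancy normalized
    by within-class standard deviation (illustrative filter score).

    X: (n_samples, n_features) array; y: (n_samples,) integer labels.
    Returns one score per feature; larger = more discriminative.
    """
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    stds = np.stack([X[y == c].std(axis=0) for c in classes])
    spread = means.max(axis=0) - means.min(axis=0)  # between-class contrast
    return spread / (stds.mean(axis=0) + eps)

# usage: keep the k highest-scoring features
# top_k = np.argsort(contrast_scores(X, y))[::-1][:k]
```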
arXiv Detail & Related papers (2024-01-15T05:32:35Z)
- Towards Free Data Selection with General-Purpose Models [71.92151210413374]
A desirable data selection algorithm can efficiently choose the most informative samples to maximize the utility of limited annotation budgets.
Current approaches, represented by active learning methods, typically follow a cumbersome pipeline that repeatedly alternates between time-consuming model training and batch data selection.
FreeSel bypasses the heavy batch selection process, achieving a significant improvement in efficiency and being 530x faster than existing active learning methods.
arXiv Detail & Related papers (2023-09-29T15:50:14Z)
- Probabilistic Lexicase Selection [6.177959045971966]
We introduce probabilistic lexicase selection (plexicase selection), a novel parent selection algorithm that efficiently approximates the probability distribution of lexicase selection.
Our method not only demonstrates superior problem-solving capabilities as a semantic-aware selection method, but also benefits from having a probabilistic representation of the selection process.
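For intuition, the distribution that plexicase approximates can be estimated by brute force: run ordinary lexicase selection many times and count how often each individual wins. The sketch below is only that naive Monte Carlo baseline, not the cited paper's analytic approximation.

```python
import random
from collections import Counter

def lexicase_distribution(errors, trials=10_000):
    """Monte Carlo estimate of each individual's lexicase selection
    probability (the distribution plexicase approximates efficiently).

    errors[i][j]: error of individual i on case j.
    """
    counts = Counter()
    n, m = len(errors), len(errors[0])
    for _ in range(trials):
        cases = list(range(m))
        random.shuffle(cases)
        cand = list(range(n))
        for c in cases:
            if len(cand) == 1:
                break
            best = min(errors[i][c] for i in cand)
            cand = [i for i in cand if errors[i][c] == best]
        counts[random.choice(cand)] += 1
    return {i: counts[i] / trials for i in range(n)}
```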
arXiv Detail & Related papers (2023-05-19T13:57:04Z)
- Bilevel Optimization for Feature Selection in the Data-Driven Newsvendor Problem [8.281391209717105]
We study the feature-based newsvendor problem, in which a decision-maker has access to historical data.
In this setting, we investigate feature selection, aiming to derive sparse, explainable models with improved out-of-sample performance.
We present a mixed integer linear program reformulation for the bilevel program, which can be solved to optimality with standard optimization solvers.
arXiv Detail & Related papers (2022-09-12T08:52:26Z)
- Compactness Score: A Fast Filter Method for Unsupervised Feature Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named Compactness Score (CSUFS), to select desired features.
Our proposed algorithm is shown to be more accurate and efficient than existing algorithms.
arXiv Detail & Related papers (2022-01-31T13:01:37Z)
- Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z)
- Online Active Model Selection for Pre-trained Classifiers [72.84853880948894]
We design an online selective sampling approach that actively selects informative examples to label and outputs the best model with high probability at any round.
Our algorithm can be used for online prediction tasks for both adversarial and stochastic streams.
arXiv Detail & Related papers (2020-10-19T19:53:15Z)