GPU-Accelerated Rule Evaluation and Evolution
- URL: http://arxiv.org/abs/2406.01821v1
- Date: Mon, 3 Jun 2024 22:24:12 GMT
- Title: GPU-Accelerated Rule Evaluation and Evolution
- Authors: Hormoz Shahrzad, Risto Miikkulainen
- Abstract summary: This paper introduces an innovative approach to boost the efficiency and scalability of Evolutionary Rule-based machine Learning (ERL).
The method proposed in this paper, AERL (Accelerated ERL), addresses the fitness-evaluation bottleneck in two ways.
First, by adopting GPU-optimized rule sets through a tensorized representation within the PyTorch framework, AERL mitigates the bottleneck and accelerates fitness evaluation significantly.
Second, AERL takes further advantage of the GPU by fine-tuning the rule coefficients via back-propagation, thereby improving search space exploration.
- Score: 10.60691612679966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces an innovative approach to boost the efficiency and scalability of Evolutionary Rule-based machine Learning (ERL), a key technique in explainable AI. While traditional ERL systems can distribute processes across multiple CPUs, fitness evaluation of candidate rules is a bottleneck, especially with large datasets. The method proposed in this paper, AERL (Accelerated ERL), solves this problem in two ways. First, by adopting GPU-optimized rule sets through a tensorized representation within the PyTorch framework, AERL mitigates the bottleneck and accelerates fitness evaluation significantly. Second, AERL takes further advantage of the GPU by fine-tuning the rule coefficients via back-propagation, thereby improving search space exploration. Experimental evidence confirms that AERL search is faster and more effective, thus empowering explainable artificial intelligence.
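The abstract describes AERL only at a high level, so the sketch below is a hypothetical PyTorch illustration of the two ideas it names: evaluating an entire rule population as batched tensor operations on the GPU, and refining rule coefficients by back-propagation. The rule encoding (a linear condition with a threshold and a soft sigmoid firing function), the tensor shapes, and the fitness definition are assumptions made for illustration; they are not the paper's actual implementation.

```python
import torch

# Hypothetical tensorized rule population, assuming rules of the form
# "IF w . x > threshold THEN predict class c". Shapes and encoding are
# illustrative only, not AERL's actual representation.
device = "cuda" if torch.cuda.is_available() else "cpu"
n_rules, n_features, n_samples = 256, 20, 100_000

coeffs = torch.randn(n_rules, n_features, device=device, requires_grad=True)
thresholds = torch.randn(n_rules, 1, device=device, requires_grad=True)
rule_class = torch.randint(0, 2, (n_rules,), device=device)  # class predicted by each rule

X = torch.randn(n_samples, n_features, device=device)
y = torch.randint(0, 2, (n_samples,), device=device)

def evaluate(coeffs, thresholds, temperature=10.0):
    # (n_rules, n_samples): soft firing strength of every rule on every sample,
    # computed as one batched matrix multiply on the GPU.
    activation = coeffs @ X.T - thresholds
    return torch.sigmoid(temperature * activation)  # differentiable surrogate for IF > 0

def fitness(firing):
    # Reward rules that fire on samples they classify correctly and stay quiet otherwise.
    correct = (rule_class.unsqueeze(1) == y.unsqueeze(0)).float()  # (n_rules, n_samples)
    return (firing * correct + (1 - firing) * (1 - correct)).mean(dim=1)  # (n_rules,)

# 1) GPU-accelerated fitness evaluation for the whole population at once.
with torch.no_grad():
    fit = fitness(evaluate(coeffs, thresholds))

# 2) Fine-tune rule coefficients via back-propagation, as the abstract describes.
optimizer = torch.optim.Adam([coeffs, thresholds], lr=1e-2)
for _ in range(50):
    optimizer.zero_grad()
    loss = -fitness(evaluate(coeffs, thresholds)).mean()  # maximize mean fitness
    loss.backward()
    optimizer.step()
```

The point of the tensorized layout is that fitness for all rules over all samples reduces to one batched matrix multiply, which is exactly the workload a GPU accelerates; the soft firing function additionally makes the coefficients differentiable, so gradient steps can refine them between evolutionary generations.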
Related papers
- Search for Efficient Large Language Models [52.98684997131108]
Large Language Models (LLMs) have long held sway in the realms of artificial intelligence research.
Weight pruning, quantization, and distillation have been embraced to compress LLMs, targeting memory reduction and inference acceleration.
Most model compression techniques concentrate on weight optimization, overlooking the exploration of optimal architectures.
arXiv Detail & Related papers (2024-09-25T21:32:12Z)
- AcceleratedLiNGAM: Learning Causal DAGs at the speed of GPUs [57.12929098407975]
We show that by efficiently parallelizing existing causal discovery methods, we can scale them to thousands of dimensions.
Specifically, we focus on the causal ordering subprocedure in DirectLiNGAM and implement GPU kernels to accelerate it.
This allows us to apply DirectLiNGAM to causal inference on large-scale gene expression data with genetic interventions, yielding competitive results.
arXiv Detail & Related papers (2024-03-06T15:06:11Z)
- NeuroEvoBench: Benchmarking Evolutionary Optimizers for Deep Learning Applications [6.873777465945062]
We establish a new benchmark of evolutionary optimization methods (NeuroEvoBench) tailored toward Deep Learning applications.
We investigate core scientific questions including resource allocation, fitness shaping, normalization, regularization & scalability of EO.
arXiv Detail & Related papers (2023-11-04T12:42:38Z)
- Efficient GNN Explanation via Learning Removal-based Attribution [56.18049062940675]
We propose a GNN explanation framework named LeArn Removal-based Attribution (LARA) to address this problem.
The explainer in LARA learns to generate removal-based attributions, which enables it to provide explanations with high fidelity.
In particular, LARA is 3.5 times faster and achieves higher fidelity than the state-of-the-art method on the large dataset ogbn-arxiv.
arXiv Detail & Related papers (2023-06-09T08:54:20Z)
- M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation [145.7321032755538]
Learning to Optimize (L2O) has drawn increasing attention as it often remarkably accelerates the optimization procedure of complex tasks.
This paper investigates a potential solution to the open challenge of generalizing to out-of-distribution tasks by meta-training an L2O optimizer that can perform fast test-time self-adaptation to such a task.
arXiv Detail & Related papers (2023-02-28T19:23:20Z)
- Enabling surrogate-assisted evolutionary reinforcement learning via policy embedding [28.272572839321104]
This paper proposes the PE-SAERL framework to enable surrogate-assisted evolutionary reinforcement learning via policy embedding.
Empirical results on 5 Atari games show that the proposed method performs more efficiently than four state-of-the-art algorithms.
arXiv Detail & Related papers (2023-01-31T02:36:06Z)
- Deep Black-Box Reinforcement Learning with Movement Primitives [15.184283143878488]
We present a new algorithm for deep reinforcement learning (RL).
It is based on differentiable trust region layers, a successful on-policy deep RL algorithm.
We compare our ERL algorithm to state-of-the-art step-based algorithms in many complex simulated robotic control tasks.
arXiv Detail & Related papers (2022-10-18T06:34:52Z)
- Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no additional computational cost.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
arXiv Detail & Related papers (2022-03-23T06:24:31Z)
- Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
- Hardware Acceleration of Explainable Machine Learning using Tensor Processing Units [3.5027291542274357]
We propose a novel framework for accelerating explainable machine learning (ML) using Tensor Processing Units (TPUs).
The proposed framework exploits the synergy between matrix convolution and the Fourier transform (see the sketch below), and takes full advantage of the TPU's natural ability to accelerate matrix computations.
Our proposed approach is applicable across a wide variety of ML algorithms, and effective utilization of TPU-based acceleration can lead to real-time outcome interpretation.
arXiv Detail & Related papers (2021-03-22T15:11:45Z)
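As background for the "synergy between matrix convolution and Fourier transform" mentioned in the entry above, the convolution theorem lets a circular convolution be computed as an element-wise product in the frequency domain. The sketch below is a generic PyTorch illustration of that identity, not the paper's TPU implementation; the function name is hypothetical.

```python
import torch

def circular_conv_fft(x: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    # Convolution theorem: circular_conv(x, k) = IFFT(FFT(x) * FFT(k)).
    n = x.shape[-1]
    return torch.fft.ifft(torch.fft.fft(x, n=n) * torch.fft.fft(k, n=n)).real

# Example: convolve a signal with a kernel of the same length.
x = torch.randn(1024)
k = torch.randn(1024)
y = circular_conv_fft(x, k)  # shape (1024,)
```

Expressed this way, convolution reduces to transforms and element-wise products, which maps naturally onto hardware that accelerates dense linear algebra.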
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.