Lights and Shadows in Evolutionary Deep Learning: Taxonomy, Critical
Methodological Analysis, Cases of Study, Learned Lessons, Recommendations and
Challenges
- URL: http://arxiv.org/abs/2008.03620v1
- Date: Sun, 9 Aug 2020 00:25:06 GMT
- Title: Lights and Shadows in Evolutionary Deep Learning: Taxonomy, Critical
Methodological Analysis, Cases of Study, Learned Lessons, Recommendations and
Challenges
- Authors: Aritz D. Martinez, Javier Del Ser, Esther Villar-Rodriguez, Eneko
Osaba, Javier Poyatos, Siham Tabik, Daniel Molina, Francisco Herrera
- Abstract summary: Much has been said about the fusion of bio-inspired optimization algorithms and Deep Learning models.
Three axes - optimization and taxonomy, critical analysis, and challenges - outline a vision of the merger of these two technologies.
- Score: 15.954992915497874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much has been said about the fusion of bio-inspired optimization algorithms
and Deep Learning models for several purposes: from the discovery of network
topologies and hyper-parametric configurations with improved performance for a
given task, to the optimization of the model's parameters as a replacement for
gradient-based solvers. Indeed, the literature is rich in proposals showcasing
the application of assorted nature-inspired approaches for these tasks. In this
work we comprehensively review and critically examine contributions made so far
based on three axes, each addressing a fundamental question in this research
avenue: a) optimization and taxonomy (Why?), including a historical
perspective, definitions of optimization problems in Deep Learning, and a
taxonomy associated with an in-depth analysis of the literature, b) critical
methodological analysis (How?), which together with two case studies, allows us
to address learned lessons and recommendations for good practices following the
analysis of the literature, and c) challenges and new directions of research
(What can be done, and what for?). In summary, these three axes - optimization and
taxonomy, critical analysis, and challenges - outline a complete vision of the
merger of the two technologies, drawing up an exciting future for this area of
fusion research.
Related papers
- A Survey of Optimization Methods for Training DL Models: Theoretical Perspective on Convergence and Generalization [11.072619355813496]
We provide an extensive summary of the theoretical foundations of optimization methods in deep learning (DL).
This paper includes theoretical analysis of popular gradient-based first-order and second-order methods and their generalization properties.
We also discuss the analysis of generic convex losses and explicitly encourage the discovery of well-generalizing optimal points.
arXiv Detail & Related papers (2025-01-24T12:42:38Z) - Exploring the Technology Landscape through Topic Modeling, Expert Involvement, and Reinforcement Learning [0.48342038441006807]
This study proposes a method that combines topic modeling, expert knowledge inputs, and reinforcement learning (RL) to enhance the detection of technological changes.
Results demonstrate the method's effectiveness in identifying, ranking, and tracking trends that align with expert input.
arXiv Detail & Related papers (2025-01-22T22:18:50Z) - The Paradox of Success in Evolutionary and Bioinspired Optimization: Revisiting Critical Issues, Key Studies, and Methodological Pathways [15.29595828816055]
Evolutionary and bioinspired computation are crucial for efficiently addressing complex optimization problems across diverse application domains.
They excel at finding near-optimal solutions in large, complex search spaces, making them invaluable in numerous fields.
However, both areas are plagued by challenges at their core, including inadequate benchmarking, problem-specific overfitting, insufficient theoretical grounding, and superfluous proposals justified only by their biological metaphor.
arXiv Detail & Related papers (2025-01-13T17:37:37Z) - Inference Optimizations for Large Language Models: Effects, Challenges, and Practical Considerations [0.0]
Large language models are ubiquitous in natural language processing because they can adapt to new tasks without retraining.
This literature review focuses on various techniques for reducing resource requirements and compressing large language models.
arXiv Detail & Related papers (2024-08-06T12:07:32Z) - Fine-Grained Zero-Shot Learning: Advances, Challenges, and Prospects [84.36935309169567]
We present a broad review of recent advances in fine-grained analysis for zero-shot learning (ZSL).
We first provide a taxonomy of existing methods and techniques with a thorough analysis of each category.
Then, we summarize the benchmark, covering publicly available datasets, models, implementations, and some more details as a library.
arXiv Detail & Related papers (2024-01-31T11:51:24Z) - Bridging Evolutionary Algorithms and Reinforcement Learning: A Comprehensive Survey on Hybrid Algorithms [50.91348344666895]
Evolutionary Reinforcement Learning (ERL) integrates Evolutionary Algorithms (EAs) and Reinforcement Learning (RL) for optimization.
This survey offers a comprehensive overview of the diverse research branches in ERL.
arXiv Detail & Related papers (2024-01-22T14:06:37Z) - The Efficiency Spectrum of Large Language Models: An Algorithmic Survey [54.19942426544731]
The rapid growth of Large Language Models (LLMs) has been a driving force in transforming various domains.
This paper examines the multi-faceted dimensions of efficiency essential for the end-to-end algorithmic development of LLMs.
arXiv Detail & Related papers (2023-12-01T16:00:25Z) - A Survey of Contextual Optimization Methods for Decision Making under Uncertainty [47.73071218563257]
This review article identifies three main frameworks for learning policies from data and discusses their strengths and limitations.
We present the existing models and methods under a uniform notation and terminology and classify them according to the three main frameworks.
arXiv Detail & Related papers (2023-06-17T15:21:02Z) - Hierarchical Optimization-Derived Learning [58.69200830655009]
We establish a new framework, named Hierarchical ODL (HODL), to simultaneously investigate the intrinsic behaviors of optimization-derived model construction and its corresponding learning process.
This is the first theoretical guarantee for these two coupled ODL components: optimization and learning.
arXiv Detail & Related papers (2023-02-11T03:35:13Z) - Faithfulness in Natural Language Generation: A Systematic Survey of Analysis, Evaluation and Optimization Methods [48.47413103662829]
Natural Language Generation (NLG) has made great progress in recent years due to the development of deep learning techniques such as pre-trained language models.
However, the faithfulness problem that the generated text usually contains unfaithful or non-factual information has become the biggest challenge.
arXiv Detail & Related papers (2022-03-10T08:28:32Z)