Stop overkilling simple tasks with black-box models and use transparent
models instead
- URL: http://arxiv.org/abs/2302.02804v3
- Date: Mon, 18 Sep 2023 14:34:27 GMT
- Title: Stop overkilling simple tasks with black-box models and use transparent
models instead
- Authors: Matteo Rizzo, Matteo Marcuzzo, Alessandro Zangari, Andrea Gasparetto,
Andrea Albarelli
- Abstract summary: Deep learning approaches are able to extract features autonomously from raw data.
This allows for bypassing the feature engineering process.
Deep learning strategies often outperform traditional models in terms of accuracy.
- Score: 57.42190785269343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the employment of deep learning methods has led to several
significant breakthroughs in artificial intelligence. Different from
traditional machine learning models, deep learning-based approaches are able to
extract features autonomously from raw data. This allows for bypassing the
feature engineering process, which is generally considered to be both
error-prone and tedious. Moreover, deep learning strategies often outperform
traditional models in terms of accuracy.
Related papers
- Accelerating Deep Learning with Fixed Time Budget [2.190627491782159]
This paper proposes an effective technique for training arbitrary deep learning models within fixed time constraints.
The proposed method is extensively evaluated in both classification and regression tasks in computer vision.
arXiv Detail & Related papers (2024-10-03T21:18:04Z)
- Machine Unlearning in Contrastive Learning [3.6218162133579694]
We introduce a novel gradient constraint-based approach for training the model to effectively achieve machine unlearning.
Our approach demonstrates proficient performance not only on contrastive learning models but also on supervised learning models.
arXiv Detail & Related papers (2024-05-12T16:09:01Z)
- A Survey of Deep Learning and Foundation Models for Time Series Forecasting [16.814826712022324]
Deep learning has been successfully applied to many application domains, yet its advantages have been slow to emerge for time series forecasting.
Foundation models with extensive pre-training allow models to understand patterns and acquire knowledge that can be applied to new related problems.
There is ongoing research examining how to utilize or inject such knowledge into deep learning models.
arXiv Detail & Related papers (2024-01-25T03:14:07Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
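The gradient-projection idea behind this entry can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes NumPy, and `forget_dirs` stands in for gradient directions associated with the forget data. Each update gradient is projected onto the subspace orthogonal to those directions, so the step does not interfere with what must be unlearned.

```python
import numpy as np

def project_out(grad, forget_dirs):
    """Project `grad` onto the subspace orthogonal to the span of the
    rows of `forget_dirs`, so an update step has no component along
    the forget-data gradient directions."""
    # Orthonormal basis of the forget-gradient subspace via QR
    Q, _ = np.linalg.qr(forget_dirs.T)  # columns span the forget subspace
    return grad - Q @ (Q.T @ grad)

rng = np.random.default_rng(0)
forget_dirs = rng.normal(size=(3, 10))  # hypothetical forget-gradient directions
grad = rng.normal(size=10)              # hypothetical loss gradient
g_proj = project_out(grad, forget_dirs)
```

The projected gradient `g_proj` is (numerically) orthogonal to every row of `forget_dirs`, which is the constraint PGU-style methods enforce during retraining on the retained data.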
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- From Actions to Events: A Transfer Learning Approach Using Improved Deep Belief Networks [1.0554048699217669]
This paper proposes a novel approach to map the knowledge from action recognition to event recognition using an energy-based model.
Such a model can process all frames simultaneously, carrying spatial and temporal information through the learning process.
arXiv Detail & Related papers (2022-11-30T14:47:10Z)
- Capturing and incorporating expert knowledge into machine learning models for quality prediction in manufacturing [0.0]
This study introduces a general methodology for building quality prediction models with machine learning methods on small datasets.
The proposed methodology produces prediction models that strictly comply with all the expert knowledge specified by the involved process specialists.
arXiv Detail & Related papers (2022-02-04T07:22:29Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
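The zeroth-order component of this entry can be sketched as follows. This is an illustrative central-difference estimator under stated assumptions, not BAR's actual code: `loss_fn` stands in for a black-box model queried only through its input-output responses, and the gradient is estimated purely from function evaluations along random directions.

```python
import numpy as np

def zeroth_order_grad(loss_fn, theta, q=1000, mu=1e-4, rng=None):
    """Estimate the gradient of `loss_fn` at `theta` from function
    values only, averaging central differences along q random unit
    directions -- no access to the model's internals is needed."""
    rng = rng or np.random.default_rng()
    d = theta.size
    est = np.zeros(d)
    for _ in range(q):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)  # random unit direction
        est += (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu) * u
    return (d / q) * est  # scale makes the estimator unbiased

# Sanity check on a quadratic whose true gradient is x itself
theta = np.array([1.0, -2.0, 0.5])
g = zeroth_order_grad(lambda x: 0.5 * x @ x, theta,
                      rng=np.random.default_rng(0))
```

With enough random directions the estimate approaches the true gradient, which is what lets methods in this family drive standard optimizers against a model they can only query.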
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
- Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data [104.69689574851724]
We propose a paradigm shift from perturbation-based adversarial robustness toward model-based robust deep learning.
Our objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data.
arXiv Detail & Related papers (2020-05-20T13:46:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.