A conceptual model for leaving the data-centric approach in machine
learning
- URL: http://arxiv.org/abs/2302.03361v1
- Date: Tue, 7 Feb 2023 10:06:48 GMT
- Title: A conceptual model for leaving the data-centric approach in machine
learning
- Authors: Sebastian Scher, Bernhard Geiger, Simone Kopeinik, Andreas Trügler,
Dominik Kowald
- Abstract summary: Methods have been proposed to include external constraints in machine learning models.
We present and discuss a conceptual high-level model that unifies these approaches in a common language.
- Score: 1.24245398967236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For a long time, machine learning (ML) has been seen as the abstract problem
of learning relationships from data independent of the surrounding settings.
This has recently been challenged, and methods have been proposed to include
external constraints in the machine learning models. These methods usually come
from application-specific fields, such as de-biasing algorithms in the field of
fairness in ML or physical constraints in the fields of physics and
engineering. In this paper, we present and discuss a conceptual high-level
model that unifies these approaches in a common language. We hope that this
will enable and foster exchange between the different fields and their
different methods for including external constraints into ML models, and thus
move beyond purely data-centric approaches.
Related papers
- Unified Explanations in Machine Learning Models: A Perturbation Approach [0.0]
Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches.
We propose a systematic, perturbation-based analysis of a popular, model-agnostic XAI method, SHapley Additive exPlanations (SHAP).
We devise algorithms to generate relative feature importance under dynamic inference across a suite of popular machine learning and deep learning methods, together with metrics that quantify how well explanations generated in the static case hold.
arXiv Detail & Related papers (2024-05-30T16:04:35Z)
- Masked Modeling for Self-supervised Representation Learning on Vision and Beyond [69.64364187449773]
Masked modeling has emerged as a distinctive approach that involves predicting parts of the original data that are proportionally masked during training.
We elaborate on the details of techniques within masked modeling, including diverse masking strategies, recovering targets, network architectures, and more.
We conclude by discussing the limitations of current techniques and point out several potential avenues for advancing masked modeling research.
arXiv Detail & Related papers (2023-12-31T12:03:21Z) - Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z) - MinT: Boosting Generalization in Mathematical Reasoning via Multi-View
Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z)
- Physics-Inspired Interpretability Of Machine Learning Models [0.0]
The ability to explain decisions made by machine learning models remains one of the most significant hurdles towards widespread adoption of AI.
We propose a novel approach to identify relevant features of the input data, inspired by methods from the energy landscapes field.
arXiv Detail & Related papers (2023-04-05T11:35:17Z)
- Foundation Models for Decision Making: Problems, Methods, and Opportunities [124.79381732197649]
Foundation models pretrained on diverse data at scale have demonstrated extraordinary capabilities in a wide range of vision and language tasks.
New paradigms are emerging for training foundation models to interact with other agents and perform long-term reasoning.
Research at the intersection of foundation models and decision making holds tremendous promise for creating powerful new systems.
arXiv Detail & Related papers (2023-03-07T18:44:07Z)
- An Introduction to Machine Unlearning [0.6649973446180738]
We summarise and compare seven state-of-the-art machine unlearning algorithms.
We consolidate definitions of core concepts used in the field.
We discuss issues related to applying machine unlearning in practice.
arXiv Detail & Related papers (2022-09-02T10:24:50Z)
- Panoramic Learning with A Standardized Machine Learning Formalism [116.34627789412102]
This paper presents a standardized equation of the learning objective, that offers a unifying understanding of diverse ML algorithms.
It also provides guidance for the mechanical design of new ML solutions, and serves as a promising vehicle towards panoramic learning with all experiences.
arXiv Detail & Related papers (2021-08-17T17:44:38Z)
- A Perspective on Machine Learning Methods in Turbulence Modelling [0.0]
This work presents a review of the current state of research in data-driven turbulence closure modeling.
We stress that consistency of the training data, the model, the underlying physics and the discretization is a key issue that needs to be considered for a successful ML-augmented modeling strategy.
arXiv Detail & Related papers (2020-10-23T08:19:30Z)
- Invariant Causal Prediction for Block MDPs [106.63346115341862]
Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.
We propose a method of invariant prediction to learn model-irrelevance state abstractions (MISA) that generalize to novel observations in the multi-environment setting.
arXiv Detail & Related papers (2020-03-12T21:03:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.