Knodle: Modular Weakly Supervised Learning with PyTorch
- URL: http://arxiv.org/abs/2104.11557v1
- Date: Fri, 23 Apr 2021 12:33:25 GMT
- Title: Knodle: Modular Weakly Supervised Learning with PyTorch
- Authors: Anastasiia Sedova, Andreas Stephan, Marina Speranskaya, Benjamin Roth
- Abstract summary: Knodle is a software framework for separating weak data annotations, powerful deep learning models, and methods for improving weakly supervised training.
This modularization gives the training process access to fine-grained information such as data set characteristics, matches of rules, or elements of the deep learning model ultimately used for prediction.
- Score: 5.874587993411972
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Methods for improving the training and prediction quality of weakly
supervised machine learning models vary in how much they are tailored to a
specific task, or integrated with a specific model architecture. In this work,
we propose a software framework Knodle that provides a modularization for
separating weak data annotations, powerful deep learning models, and methods
for improving weakly supervised training. This modularization gives the
training process access to fine-grained information such as data set
characteristics, matches of heuristic rules, or elements of the deep learning
model ultimately used for prediction. Hence, our framework can encompass a wide
range of training methods for improving weak supervision, ranging from methods
that only look at the correlations of rules and output classes (independently
of the machine learning model trained with the resulting labels), to those
methods that harness the interplay of neural networks and weakly labeled data.
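As a rough illustration of this modularization, the following is a minimal sketch (invented shapes and names; not Knodle's actual API) that keeps the weak supervision signal as a rule-match matrix Z and a rule-to-label mapping T, so that a simple denoising step such as majority voting stays independent of the PyTorch model that is ultimately trained:

```python
import torch
import torch.nn as nn

# Hypothetical shapes (illustrative only):
#   X: instance features     (n_instances, n_features)
#   Z: rule matches          (n_instances, n_rules), Z[i, j] = 1 if rule j fires on instance i
#   T: rule-to-label mapping (n_rules, n_classes),   T[j, c] = 1 if rule j votes for class c
n_instances, n_features, n_rules, n_classes = 1000, 64, 20, 3
X = torch.randn(n_instances, n_features)
Z = (torch.rand(n_instances, n_rules) < 0.1).float()
T = torch.zeros(n_rules, n_classes)
T[torch.arange(n_rules), torch.randint(0, n_classes, (n_rules,))] = 1.0

# Simplest denoising strategy: a majority vote over matched rules, computed
# independently of the downstream model ("correlations of rules and output classes").
votes = Z @ T                            # (n_instances, n_classes)
has_match = votes.sum(dim=1) > 0         # keep only instances covered by at least one rule
y_weak = votes.argmax(dim=1)             # ties resolved by argmax

# The resulting weak labels can train any PyTorch model; the denoising step above
# needed only X, Z, and T, not the model itself.
model = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):                       # a few epochs, for illustration only
    optimizer.zero_grad()
    loss = loss_fn(model(X[has_match]), y_weak[has_match])
    loss.backward()
    optimizer.step()
```

More elaborate denoising methods can replace the majority vote while consuming the same inputs, which is what makes fine-grained information such as rule matches available to the training process.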
Related papers
- Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [49.043599241803825]
The Iterative Contrastive Unlearning (ICU) framework consists of three core components.
A Knowledge Unlearning Induction module removes specific knowledge through an unlearning loss.
A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal.
An Iterative Unlearning Refinement module dynamically assesses the extent of unlearning on specific data pieces and makes iterative updates.
arXiv Detail & Related papers (2024-07-25T07:09:35Z) - Machine Unlearning in Contrastive Learning [3.6218162133579694]
We introduce a novel gradient constraint-based approach for training the model to effectively achieve machine unlearning.
Our approach demonstrates proficient performance not only on contrastive learning models but also on supervised learning models.
arXiv Detail & Related papers (2024-05-12T16:09:01Z) - Personalized Federated Learning with Contextual Modulation and
Meta-Learning [2.7716102039510564]
Federated learning has emerged as a promising approach for training machine learning models on decentralized data sources.
We propose a novel framework that combines federated learning with meta-learning techniques to enhance both efficiency and generalization capabilities.
arXiv Detail & Related papers (2023-12-23T08:18:22Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU); a schematic sketch of the gradient-projection idea appears after this list.
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Reinforcement Learning for Topic Models [3.42658286826597]
We apply reinforcement learning techniques to topic modeling by replacing the variational autoencoder in ProdLDA with a continuous action space reinforcement learning policy.
We introduce several modifications: we modernize the neural network architecture, weight the ELBO loss, use contextual embeddings, and monitor the learning process by computing topic diversity and coherence.
arXiv Detail & Related papers (2023-05-08T16:41:08Z) - Rank-Minimizing and Structured Model Inference [7.067529286680843]
This work introduces a method that infers models from data with physical insights encoded in the form of structure.
The proposed method numerically solves the equations for minimal-rank solutions and so obtains models of low order.
Numerical experiments demonstrate that the combination of structure preservation and rank minimization leads to accurate models with orders of magnitude fewer degrees of freedom than models of comparable prediction quality.
arXiv Detail & Related papers (2023-02-19T09:46:35Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z) - Learning to Reweight with Deep Interactions [104.68509759134878]
We propose an improved data reweighting algorithm, in which the student model provides its internal states to the teacher model.
Experiments on image classification with clean/noisy labels and on neural machine translation empirically demonstrate that our algorithm achieves significant improvements over previous methods.
arXiv Detail & Related papers (2020-07-09T09:06:31Z) - Learning Diverse Representations for Fast Adaptation to Distribution
Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z) - Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
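Returning to the Projected-Gradient Unlearning entry above, the following is a schematic sketch, under simplifying assumptions, of the general gradient-projection idea: collect gradient directions that matter for the retained data and update the model only along directions orthogonal to that subspace, so that knowledge about the remaining dataset is disturbed as little as possible. The data, loss, and ascent step are illustrative and not the paper's exact procedure.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
loss_fn = nn.CrossEntropyLoss()

def flat_grad(loss):
    # Flatten the gradients of all model parameters into a single vector.
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

# Toy retained and forget sets (hypothetical data).
x_retain, y_retain = torch.randn(64, 16), torch.randint(0, 4, (64,))
x_forget, y_forget = torch.randn(16, 16), torch.randint(0, 4, (16,))

# 1. Collect gradient directions that matter for the retained data.
retain_grads = [
    flat_grad(loss_fn(model(x_retain[i:i + 16]), y_retain[i:i + 16]))
    for i in range(0, 64, 16)
]
G = torch.stack(retain_grads)                      # (n_batches, n_params)
_, S, Vh = torch.linalg.svd(G, full_matrices=False)
basis = Vh[S > 1e-6]                               # orthonormal rows spanning the retained-gradient space

# 2. Move against the forget-data loss, but only orthogonally to that subspace.
g_forget = flat_grad(loss_fn(model(x_forget), y_forget))
g_proj = g_forget - basis.T @ (basis @ g_forget)   # remove components inside the retained-gradient span

lr = 0.1
with torch.no_grad():
    offset = 0
    for p in model.parameters():
        n = p.numel()
        # Gradient ascent on the forget loss along the projected direction.
        p.add_(lr * g_proj[offset:offset + n].view_as(p))
        offset += n
```

Comparing the updated model against a model retrained from scratch on the retained data is then the kind of evaluation the summary above refers to.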
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.