Towards Sustainable Learning: Coresets for Data-efficient Deep Learning
- URL: http://arxiv.org/abs/2306.01244v1
- Date: Fri, 2 Jun 2023 02:51:08 GMT
- Title: Towards Sustainable Learning: Coresets for Data-efficient Deep Learning
- Authors: Yu Yang, Hao Kang, Baharan Mirzasoleiman
- Abstract summary: CREST is the first scalable coreset framework for deep networks, with rigorous theoretical guarantees and extensive experiments on vision and NLP datasets.
CREST identifies the most valuable training examples by modeling the non-convex loss as a series of quadratic functions.
- Score: 9.51481812606879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To improve the efficiency and sustainability of learning deep models, we
propose CREST, the first scalable framework with rigorous theoretical
guarantees to identify the most valuable examples for training non-convex
models, particularly deep networks. To guarantee convergence to a stationary
point of a non-convex function, CREST models the non-convex loss as a series of
quadratic functions and extracts a coreset for each quadratic sub-region. In
addition, to ensure faster convergence of stochastic gradient methods such as
(mini-batch) SGD, CREST iteratively extracts multiple mini-batch coresets from
larger random subsets of training data, to ensure nearly-unbiased gradients
with small variances. Finally, to further improve scalability and efficiency,
CREST identifies examples that have already been learned and excludes them from the coreset
selection pipeline. Our extensive experiments on several deep networks trained
on vision and NLP datasets, including CIFAR-10, CIFAR-100, TinyImageNet, and
SNLI, confirm that CREST speeds up training deep networks on very large
datasets, by 1.7x to 2.5x with minimal loss in performance. By analyzing
the learning difficulty of the subsets selected by CREST, we show that deep
models benefit the most by learning from subsets of increasing difficulty
levels.
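To make the training recipe concrete, here is a minimal NumPy sketch of a CREST-style loop on a toy least-squares model: mini-batch coresets are drawn from larger random pools so their mean gradient tracks the pool's, already-learned (low-loss) examples are dropped from selection, and the coreset is refreshed on a fixed schedule (a simplification of re-selecting when the local quadratic model of the loss goes stale). All function names, thresholds, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset and model: least-squares regression stands in for a deep net.
X = rng.normal(size=(10_000, 20))
w_true = rng.normal(size=20)
y = X @ w_true + 0.1 * rng.normal(size=10_000)

def per_example_grads(w, idx):
    """Per-example gradients of the squared error at the current weights."""
    r = X[idx] @ w - y[idx]
    return r[:, None] * X[idx]

def select_coreset(w, pool, k):
    """Greedily pick k examples whose mean gradient tracks the pool's mean
    gradient, so the coreset yields a nearly-unbiased, low-variance update
    (an illustrative stand-in for CREST's selection rule)."""
    g = per_example_grads(w, pool)
    target = g.mean(axis=0)
    chosen, acc = [], np.zeros_like(target)
    for m in range(1, k + 1):
        # distance between the would-be coreset mean and the pool mean
        errs = np.linalg.norm(acc + g - m * target, axis=1)
        j = int(np.argmin(errs))
        chosen.append(int(pool[j]))
        acc += g[j]
        g = np.delete(g, j, axis=0)
        pool = np.delete(pool, j)
    return np.array(chosen)

w = np.zeros(20)
lr, k, refresh = 0.05, 32, 10
active = np.arange(len(X))        # examples not yet considered "learned"

for step in range(300):
    if step % refresh == 0:
        # Drop already-learned (tiny-loss) examples from selection, then
        # draw a larger random pool and extract a mini-batch coreset.
        losses = 0.5 * (X[active] @ w - y[active]) ** 2
        active = active[losses > 1e-3]
        pool = rng.choice(active, size=min(512, len(active)), replace=False)
        coreset = select_coreset(w, pool, k)
    w -= lr * per_example_grads(w, coreset).mean(axis=0)

print("distance to true weights:", np.linalg.norm(w - w_true))
```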
Related papers
- Adaptive Dataset Quantization [2.0105434963031463]
We introduce a versatile framework for dataset compression, namely Adaptive Dataset Quantization (ADQ).
We propose a novel adaptive sampling strategy that evaluates each generated bin's representativeness, diversity, and importance scores.
Our method not only exhibits superior generalization capability across different architectures, but also attains state-of-the-art results, surpassing DQ by an average of 3% on various datasets.
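A rough sketch of how three such scores might be combined into a greedy adaptive sampler; the bin features, score definitions, and weights below are all assumptions for illustration, not ADQ's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "bins": feature vectors summarizing compressed data bins (assumed setup).
bins = rng.normal(size=(200, 64))

# Representativeness: cosine similarity to the dataset centroid.
centroid = bins.mean(axis=0)
rep = bins @ centroid / (
    np.linalg.norm(bins, axis=1) * np.linalg.norm(centroid)
)

# Importance: stand-in per-bin weight (e.g., a gradient norm in practice).
imp = rng.random(200)

def adaptive_sample(k, alpha=1.0, beta=1.0, gamma=1.0):
    """Greedily pick k bins; diversity is the distance to the nearest
    already-selected bin, recomputed after every pick."""
    selected = [int(np.argmax(rep))]
    for _ in range(k - 1):
        d = np.min(
            np.linalg.norm(bins[:, None, :] - bins[selected][None, :, :], axis=2),
            axis=1,
        )
        score = alpha * rep + beta * d + gamma * imp
        score[selected] = -np.inf            # never re-pick a bin
        selected.append(int(np.argmax(score)))
    return selected

print(adaptive_sample(10))
```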
arXiv Detail & Related papers (2024-12-22T07:08:29Z) - A Fresh Take on Stale Embeddings: Improving Dense Retriever Training with Corrector Networks [81.2624272756733]
In dense retrieval, deep encoders provide embeddings for both inputs and targets.
We train a small parametric corrector network that adjusts stale cached target embeddings.
Our approach matches state-of-the-art results even when no target embedding updates are made during training.
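The idea lends itself to a compact sketch: a small MLP learns to map stale cached target embeddings to fresh ones, so the full target set need not be re-encoded during training. The dimensions, architecture, and synthetic "drift" below are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 256  # hypothetical embedding dimension

# A small corrector MLP: maps stale cached target embeddings to fresh ones.
corrector = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
opt = torch.optim.Adam(corrector.parameters(), lr=1e-3)

# Stand-ins: cached embeddings plus synthetic encoder drift. In practice the
# "fresh" side comes from re-encoding a small sampled batch of targets.
stale = torch.randn(1024, dim)
fresh = stale + 0.1 * torch.randn(1024, dim)

for _ in range(100):
    idx = torch.randint(0, 1024, (64,))
    loss = nn.functional.mse_loss(corrector(stale[idx]), fresh[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

# At retriever-training time, corrector(stale) approximates re-encoding all
# targets with the current encoder, avoiding a full cache refresh.
```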
arXiv Detail & Related papers (2024-09-03T13:29:13Z) - A Closer Look at Spatial-Slice Features Learning for COVID-19 Detection [8.215897530386343]
We introduce an enhanced Spatial-Slice Feature Learning (SSFL++) framework specifically designed for CT scans.
It aims to filter out out-of-distribution (OOD) data within the whole CT scan, enabling us to select the crucial spatial slices for analysis while reducing overall redundancy by 70%.
Experiments demonstrate the promising performance of our model using a simple EfficientNet-2D (E2D) model, even with only 1% of the training data.
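As a toy illustration of slice filtering, the sketch below scores each slice and keeps only the top 30%, mirroring the ~70% redundancy reduction; the variance-based score is a stand-in, since SSFL++ relies on learned features.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy CT volume: 120 slices of 64x64 (values stand in for HU intensities).
volume = rng.normal(size=(120, 64, 64))

# Per-slice informativeness score; SSFL++ uses learned features, and
# intensity variance here is only an illustrative stand-in.
scores = volume.var(axis=(1, 2))

# Keep the top 30% of slices, i.e. discard ~70% as redundant or OOD.
k = int(0.3 * len(volume))
keep = np.sort(np.argsort(scores)[-k:])   # preserve anatomical slice order
selected = volume[keep]
print(selected.shape)                     # (36, 64, 64)
```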
arXiv Detail & Related papers (2024-04-02T05:19:27Z) - Simple 2D Convolutional Neural Network-based Approach for COVID-19 Detection [8.215897530386343]
This study explores the use of deep learning techniques for analyzing lung Computed Tomography (CT) images.
We propose an advanced Spatial-Slice Feature Learning (SSFL++) framework specifically tailored for CT scans.
It aims to filter out out-of-distribution (OOD) data within the entire CT scan, allowing us to select essential spatial-slice features for analysis by reducing data redundancy by 70%.
arXiv Detail & Related papers (2024-03-17T14:34:51Z) - Exploring Learning Complexity for Efficient Downstream Dataset Pruning [8.990878450631596]
Existing dataset pruning methods require training on the entire dataset.
We propose a straightforward, novel, and training-free hardness score named Distorting-based Learning Complexity (DLC).
Our method is motivated by the observation that easy samples learned faster can also be learned with fewer parameters.
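A minimal sketch of a distortion-based hardness score in this spirit: perturb a pretrained model's parameters several times and rank samples by their average loss under distortion, with no training involved. The linear model, noise scale, and pruning fraction are illustrative assumptions, not the paper's DLC definition.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "pretrained" linear classifier standing in for a pretrained network.
X = rng.normal(size=(500, 10))
w = rng.normal(size=10)
y = (X @ w > 0).astype(float)             # labels the clean model fits

def per_sample_loss(weights):
    p = 1.0 / (1.0 + np.exp(-(X @ weights)))
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Distort the parameters several times; a sample's learning complexity is
# its average loss under distortion (easy samples survive heavier noise).
dlc = np.mean(
    [per_sample_loss(w + 0.5 * rng.normal(size=10)) for _ in range(32)],
    axis=0,
)

# Prune by keeping, e.g., the hardest half for downstream training.
keep = np.argsort(dlc)[-len(X) // 2:]
print(keep.shape)                          # (250,)
```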
arXiv Detail & Related papers (2024-02-08T02:29:33Z) - Towards Continual Learning Desiderata via HSIC-Bottleneck Orthogonalization and Equiangular Embedding [55.107555305760954]
We propose a conceptually simple yet effective method that attributes forgetting to layer-wise parameter overwriting and the resulting decision boundary distortion.
Our method achieves competitive accuracy while using no exemplar buffer and only 1.02x the size of the base model.
arXiv Detail & Related papers (2024-01-17T09:01:29Z) - A Weighted K-Center Algorithm for Data Subset Selection [70.49696246526199]
Subset selection is a fundamental problem that can play a key role in identifying smaller portions of the training data.
We develop a novel factor 3-approximation algorithm to compute subsets based on the weighted sum of both k-center and uncertainty sampling objective functions.
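The flavor of the combined objective can be seen in a simple greedy heuristic that trades off farthest-point coverage (the k-center term) against per-point uncertainty; this is an illustration only, not the paper's 3-approximation algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

X = rng.normal(size=(1000, 16))            # embeddings of training points
uncertainty = rng.random(1000)             # e.g., predictive entropy

def weighted_kcenter(k, lam=0.5):
    """Greedy farthest-point selection with an uncertainty bonus: each pick
    maximizes (distance to nearest selected point) + lam * uncertainty."""
    selected = [int(np.argmax(uncertainty))]
    d = np.linalg.norm(X - X[selected[0]], axis=1)
    for _ in range(k - 1):
        score = d + lam * uncertainty      # weighted sum of both objectives
        score[selected] = -np.inf
        j = int(np.argmax(score))
        selected.append(j)
        d = np.minimum(d, np.linalg.norm(X - X[j], axis=1))
    return selected

print(weighted_kcenter(20)[:5])
```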
arXiv Detail & Related papers (2023-12-17T04:41:07Z) - Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
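A schematic of such a progressive-expansion schedule, assuming group labels are available to construct the warm-up set: training starts on a group-balanced subset so the spurious feature cannot dominate early, then the training pool is expanded in phases. The sizes and schedule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data with a large majority group and a small minority group, as in
# spurious-correlation benchmarks (group labels assumed known here).
n_major, n_minor = 900, 100
groups = np.array([0] * n_major + [1] * n_minor)

# Warm-up pool: group-balanced, so the easy spurious feature of the
# majority group cannot dominate early training.
minor = np.where(groups == 1)[0]
major = rng.choice(np.where(groups == 0)[0], size=len(minor), replace=False)
train_idx = np.concatenate([minor, major])

for phase in range(5):
    # ... train the model on the current pool for a few epochs here ...
    remaining = np.setdiff1d(np.arange(len(groups)), train_idx)
    new = rng.choice(remaining, size=min(150, len(remaining)), replace=False)
    train_idx = np.concatenate([train_idx, new])   # progressive expansion
    print(f"phase {phase}: {len(train_idx)} training examples")
```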
arXiv Detail & Related papers (2023-06-08T05:44:06Z) - Adaptive Second Order Coresets for Data-efficient Machine Learning [5.362258158646462]
Training machine learning models on datasets incurs substantial computational costs.
We propose AdaCore to extract subsets of the training examples for efficient machine learning.
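A compact sketch of second-order (preconditioned) gradient matching: per-example gradients are scaled by an estimated Hessian diagonal, and examples are greedily chosen so the coreset's mean preconditioned gradient approximates the full data's. The random gradients and the greedy rule are illustrative stand-ins for AdaCore's actual estimators and selection procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-ins: per-example gradients and a diagonal Hessian estimate.
G = rng.normal(size=(2000, 32))            # one gradient row per example
h = np.abs(rng.normal(size=32)) + 0.1      # estimated Hessian diagonal

P = G / h                                  # preconditioned gradients
target = P.mean(axis=0)                    # full-data update direction

def greedy_coreset(k):
    """Pick k examples whose mean preconditioned gradient best tracks the
    full-data one (a simplified stand-in for AdaCore's selection)."""
    chosen, acc = [], np.zeros(32)
    for m in range(1, k + 1):
        errs = np.linalg.norm(acc + P - m * target, axis=1)
        errs[chosen] = np.inf              # never re-pick an example
        j = int(np.argmin(errs))
        chosen.append(j)
        acc += P[j]
    return chosen

cs = greedy_coreset(64)
print(np.linalg.norm(P[cs].mean(axis=0) - target))
```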
arXiv Detail & Related papers (2022-07-28T05:43:09Z) - MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features compared with the conventional learning methods with cross-entropy or Dice loss.
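A minimal PyTorch sketch of online voxel-wise metric learning: each step samples a few voxels per class and applies a contrastive pull/push loss on their features, complementing cross-entropy or Dice. The feature shapes and the specific contrastive form are assumptions, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy voxel features from a segmentation backbone, flattened over space:
# (num_voxels, channels); labels mark prostate vs. background voxels.
feats = torch.randn(4096, 32, requires_grad=True)
labels = torch.randint(0, 2, (4096,))

def voxel_metric_loss(feats, labels, n=64, margin=1.0):
    """Sample n voxels per class online, pull same-class features together
    and push the two classes apart by a margin."""
    fg = torch.where(labels == 1)[0]
    bg = torch.where(labels == 0)[0]
    fg = fg[torch.randperm(len(fg))[:n]]
    bg = bg[torch.randperm(len(bg))[:n]]
    intra = torch.cdist(feats[fg], feats[fg]).mean()   # compactness
    inter = torch.cdist(feats[fg], feats[bg]).mean()   # separation
    return intra + F.relu(margin - inter)

loss = voxel_metric_loss(feats, labels)    # added to CE/Dice in practice
loss.backward()
print(float(loss))
```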
arXiv Detail & Related papers (2020-05-15T10:37:02Z) - One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)