Distributionally Robust Deep Learning using Hardness Weighted Sampling
- URL: http://arxiv.org/abs/2001.02658v4
- Date: Thu, 14 Jul 2022 22:03:25 GMT
- Title: Distributionally Robust Deep Learning using Hardness Weighted Sampling
- Authors: Lucas Fidon, Michael Aertsen, Thomas Deprest, Doaa Emam, Frédéric
Guffens, Nada Mufti, Esther Van Elslander, Ernst Schwartz, Michael Ebner,
Daniela Prayer, Gregor Kasprian, Anna L. David, Andrew Melbourne, Sébastien
Ourselin, Jan Deprest, Georg Langs, Tom Vercauteren
- Abstract summary: We propose a principled and efficient algorithm for DRO in machine learning that is particularly suited to deep learning.
Our experiments on fetal brain 3D MRI segmentation and brain tumor segmentation in MRI demonstrate the feasibility and the usefulness of our approach.
- Score: 5.277562268045534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Limiting failures of machine learning systems is of paramount importance for
safety-critical applications. In order to improve the robustness of machine
learning systems, Distributionally Robust Optimization (DRO) has been proposed
as a generalization of Empirical Risk Minimization (ERM). However, its use in
deep learning has been severely restricted due to the relative inefficiency of
the optimizers available for DRO in comparison to the widespread variants of
Stochastic Gradient Descent (SGD) optimizers for ERM. We propose SGD with
hardness weighted sampling, a principled and efficient optimization method for
DRO in machine learning that is particularly suited to deep learning. Similar
in practice to a hard example mining strategy, the proposed
algorithm is straightforward to implement and computationally as efficient as
SGD-based optimizers used for deep learning, requiring minimal overhead
computation. In contrast to typical ad hoc hard mining approaches, we prove the
convergence of our DRO algorithm for over-parameterized deep learning networks
with ReLU activation and a finite number of layers and parameters. Our
experiments on fetal brain 3D MRI segmentation and brain tumor segmentation in
MRI demonstrate the feasibility and the usefulness of our approach. Using our
hardness weighted sampling to train a state-of-the-art deep learning pipeline
leads to improved robustness to anatomical variability in automatic fetal
brain 3D MRI segmentation and to improved robustness to image protocol
variations in brain tumor segmentation. Our code is
available at https://github.com/LucasFidon/HardnessWeightedSampler.
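In practice, the method amounts to keeping a running estimate of the loss of every training example and drawing each minibatch with probabilities given by a softmax over those stored losses, so that harder examples are sampled more often. Below is a minimal, framework-agnostic sketch of that sampling step; the class name, the `beta` temperature parameter, and the commented usage loop are illustrative assumptions rather than the authors' implementation (see the linked repository for the official code).

```python
import numpy as np

class HardnessWeightedSampler:
    """Minimal sketch of hardness weighted sampling for DRO.

    Illustrative only: names and defaults are assumptions, not the
    authors' implementation (see the linked repository for that).
    """

    def __init__(self, num_examples, beta=100.0, init_loss=1.0):
        # Running per-example loss estimates ("hardness" scores).
        self.losses = np.full(num_examples, init_loss, dtype=np.float64)
        self.beta = beta  # temperature: larger beta focuses more on hard examples

    def probabilities(self):
        # Numerically stable softmax of the stored losses.
        z = self.beta * (self.losses - self.losses.max())
        p = np.exp(z)
        return p / p.sum()

    def sample(self, batch_size, rng=None):
        # Draw a minibatch, favouring examples with high estimated loss.
        rng = rng if rng is not None else np.random.default_rng()
        return rng.choice(len(self.losses), size=batch_size,
                          replace=False, p=self.probabilities())

    def update(self, indices, new_losses):
        # Refresh the stored losses of the examples seen in the last batch.
        self.losses[np.asarray(indices)] = np.asarray(new_losses)


# Hypothetical training loop (model, per_example_loss, optimizer are placeholders):
#   sampler = HardnessWeightedSampler(num_examples=len(dataset))
#   for step in range(num_steps):
#       idx = sampler.sample(batch_size=32)
#       losses = per_example_loss(model, dataset, idx)   # one loss per example
#       sampler.update(idx, losses.detach().cpu().numpy())
#       losses.mean().backward(); optimizer.step(); optimizer.zero_grad()
```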
Related papers
- An Imbalanced Learning-based Sampling Method for Physics-informed Neural Networks [4.609724496676172]
RSmote is an innovative local adaptive sampling technique tailored to improve the performance of Physics-Informed Neural Networks (PINNs).
Traditional residual-based adaptive sampling methods, while effective in enhancing PINN accuracy, often struggle with efficiency and high memory consumption.
arXiv Detail & Related papers (2025-01-20T02:16:03Z)
- Learning for Cross-Layer Resource Allocation in MEC-Aided Cell-Free Networks [71.30914500714262]
Cross-layer resource allocation over mobile edge computing (MEC)-aided cell-free networks can fully exploit the transmission and computing resources to improve the data rate.
Joint subcarrier allocation and beamforming optimization are investigated for the MEC-aided cell-free network from the perspective of deep learning.
arXiv Detail & Related papers (2024-12-21T10:18:55Z)
- Unveiling Incomplete Modality Brain Tumor Segmentation: Leveraging Masked Predicted Auto-Encoder and Divergence Learning [6.44069573245889]
Brain tumor segmentation remains a significant challenge, particularly in the context of multi-modal magnetic resonance imaging (MRI).
We propose a novel strategy, which is called masked predicted pre-training, enabling robust feature learning from incomplete modality data.
In the fine-tuning phase, we utilize a knowledge distillation technique to align features between complete and missing modality data, simultaneously enhancing model robustness.
arXiv Detail & Related papers (2024-06-12T20:35:16Z)
- Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy [55.2480439325792]
We introduce deep unrolled self-supervised learning, which alleviates the need for such data by training a sequence-specific, model-based autoencoder.
Our proposed method exceeds the performance of its supervised counterparts.
arXiv Detail & Related papers (2024-03-25T17:40:32Z)
- Multi-Objective Learning for Deformable Image Registration [0.0]
Deformable image registration (DIR) involves optimization of multiple conflicting objectives.
In this paper, we combine a recently proposed approach for MO training of neural networks with a well-known deep neural network for DIR.
We evaluate the proposed approach for DIR of pelvic magnetic resonance imaging (MRI) scans.
arXiv Detail & Related papers (2024-02-23T15:42:13Z)
- Learning to sample in Cartesian MRI [1.2432046687586285]
Shortening scanning times is crucial in clinical settings, as it increases patient comfort, decreases examination costs and improves throughput.
Recent advances in compressed sensing (CS) and deep learning allow accelerated MRI acquisition by reconstructing high-quality images from undersampled data.
This thesis explores two approaches to address this gap in the context of Cartesian MRI.
arXiv Detail & Related papers (2023-12-07T14:38:07Z)
- Complex-valued Federated Learning with Differential Privacy and MRI Applications [51.34714485616763]
We introduce the complex-valued Gaussian mechanism, whose behaviour we characterise in terms of $f$-DP, $(\varepsilon, \delta)$-DP and Rényi-DP.
We present novel complex-valued neural network primitives compatible with DP.
Experimentally, we showcase a proof-of-concept by training federated complex-valued neural networks with DP on a real-world task.
arXiv Detail & Related papers (2021-10-07T14:03:00Z)
- Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
- ConCrete MAP: Learning a Probabilistic Relaxation of Discrete Variables for Soft Estimation with Low Complexity [9.62543698736491]
ConCrete MAP Detection (CMD) is an iterative detection algorithm for large inverse linear problems.
We show CMD to feature a promising performance complexity trade-off compared to SotA.
Notably, we demonstrate CMD's soft outputs to be reliable for decoders.
arXiv Detail & Related papers (2021-02-25T09:54:25Z)
- Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z)
- Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.