Controlled Data Rebalancing in Multi-Task Learning for Real-World Image Super-Resolution
- URL: http://arxiv.org/abs/2506.05607v1
- Date: Thu, 05 Jun 2025 21:40:21 GMT
- Title: Controlled Data Rebalancing in Multi-Task Learning for Real-World Image Super-Resolution
- Authors: Shuchen Lin, Mingtao Feng, Weisheng Dong, Fangfang Wu, Jianqiao Luo, Yaonan Wang, Guangming Shi
- Abstract summary: Real-world image super-resolution (Real-SR) is a challenging problem due to the complex degradation patterns in low-resolution images. We propose an improved paradigm that frames Real-SR as a data-heterogeneous multi-task learning problem.
- Score: 51.79973519845773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world image super-resolution (Real-SR) is a challenging problem due to the complex degradation patterns in low-resolution images. Unlike approaches that assume a broadly encompassing degradation space, we focus specifically on achieving an optimal balance in how SR networks handle different degradation patterns within a fixed degradation space. We propose an improved paradigm that frames Real-SR as a data-heterogeneous multi-task learning problem, and our work addresses task imbalance in this paradigm through coordinated advancements in task definition, imbalance quantification, and adaptive data rebalancing. Specifically, we introduce a novel task definition framework that segments the degradation space by setting parameter-specific boundaries for degradation operators, effectively reducing the task quantity while maintaining task discrimination. We then develop a focal loss based multi-task weighting mechanism that precisely quantifies task imbalance dynamics during model training. Furthermore, to prevent sporadic outlier samples from dominating the gradient optimization of the shared multi-task SR model, we strategically convert the quantified task imbalance into controlled data rebalancing through deliberate regulation of task-specific training volumes. Extensive quantitative and qualitative experiments demonstrate that our method achieves consistent superiority across all degradation tasks.
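To make the pipeline above concrete, here is a minimal, hypothetical sketch (not the authors' code) of its two central steps: a focal-style weight that quantifies how poorly the shared SR model currently handles each degradation task, and the conversion of those weights into per-task training volumes for the next round. The function names, the normalization, and all hyper-parameters are illustrative assumptions.

```python
import numpy as np

def focal_task_weights(task_losses, gamma=2.0, eps=1e-8):
    """Focal-style imbalance weights: degradation tasks the shared SR model
    handles poorly (high relative loss) receive larger weights. This is an
    illustrative stand-in for the paper's focal-loss-based multi-task
    weighting, not the authors' exact formulation."""
    losses = np.asarray(task_losses, dtype=np.float64)
    p = 1.0 - losses / (losses.max() + eps)   # pseudo "handled well" score in [0, 1]
    w = (1.0 - p) ** gamma                    # focal modulation: well-handled tasks are down-weighted
    return w / (w.sum() + eps)

def rebalance_counts(weights, total_samples, min_frac=0.02):
    """Turn task weights into controlled per-task training volumes.
    A floor (min_frac) keeps well-handled tasks in the mix so that a few
    outlier tasks cannot dominate gradient optimization."""
    weights = np.maximum(weights, min_frac)
    weights = weights / weights.sum()
    return np.round(weights * total_samples).astype(int)

# Example: four degradation tasks with running losses from the last epoch.
losses = [0.021, 0.034, 0.019, 0.055]
counts = rebalance_counts(focal_task_weights(losses), total_samples=10000)
print(counts)  # harder degradation tasks receive proportionally more LR/HR pairs
```

The floor on per-task fractions reflects the "controlled" aspect of the rebalancing: task volumes are regulated rather than driven entirely by the imbalance signal.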
Related papers
- Bigger, Regularized, Categorical: High-Capacity Value Functions are Efficient Multi-Task Learners [60.75160178669076]
We show that the use of high-capacity value models trained via cross-entropy and conditioned on learnable task embeddings addresses the problem of task interference in online reinforcement learning. We test our approach on 7 multi-task benchmarks with over 280 unique tasks, spanning high degree-of-freedom humanoid control and discrete vision-based RL.
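As a rough illustration of the recipe in this summary, the sketch below conditions a value network on a learnable task embedding and trains it with cross-entropy over discretized returns; the two-hot discretization, layer sizes, and class name are assumptions rather than the paper's specification.

```python
import torch
import torch.nn as nn

class CategoricalTaskValueNet(nn.Module):
    """Value network with a learnable per-task embedding and a categorical
    head trained with cross-entropy over return bins (illustrative sketch)."""
    def __init__(self, obs_dim, num_tasks, num_bins=101, v_min=-100.0, v_max=100.0,
                 hidden=1024, task_dim=64):
        super().__init__()
        self.task_emb = nn.Embedding(num_tasks, task_dim)
        self.register_buffer("bin_centers", torch.linspace(v_min, v_max, num_bins))
        self.net = nn.Sequential(
            nn.Linear(obs_dim + task_dim, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, num_bins),
        )

    def forward(self, obs, task_id):
        z = torch.cat([obs, self.task_emb(task_id)], dim=-1)
        return self.net(z)  # logits over return bins

    def two_hot(self, target):
        """Project scalar return targets onto the two nearest bins."""
        target = target.clamp(float(self.bin_centers[0]), float(self.bin_centers[-1]))
        idx = torch.searchsorted(self.bin_centers, target, right=True)
        idx = idx.clamp(1, len(self.bin_centers) - 1)
        lo, hi = self.bin_centers[idx - 1], self.bin_centers[idx]
        w_hi = (target - lo) / (hi - lo + 1e-8)
        probs = torch.zeros(target.shape[0], len(self.bin_centers), device=target.device)
        probs.scatter_(1, (idx - 1).unsqueeze(1), (1 - w_hi).unsqueeze(1))
        probs.scatter_(1, idx.unsqueeze(1), w_hi.unsqueeze(1))
        return probs

# cross-entropy training signal:
# logits = net(obs, task_id)
# loss = -(net.two_hot(returns) * logits.log_softmax(-1)).sum(-1).mean()
```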
arXiv Detail & Related papers (2025-05-29T06:41:45Z)
- Task-Aware Harmony Multi-Task Decision Transformer for Offline Reinforcement Learning [70.96345405979179]
The purpose of offline multi-task reinforcement learning (MTRL) is to develop a unified policy applicable to diverse tasks without the need for online environmental interaction.
However, variations in task content and complexity pose significant challenges to policy formulation.
We introduce the Harmony Multi-Task Decision Transformer (HarmoDT), a novel solution designed to identify an optimal harmony subspace of parameters for each task.
arXiv Detail & Related papers (2024-11-02T05:49:14Z)
- Minimizing Embedding Distortion for Robust Out-of-Distribution Performance [1.0923877073891446]
We introduce a novel approach we call "similarity loss", which can be incorporated into the fine-tuning process of any task.
We evaluate our approach on two diverse tasks: image classification on satellite imagery and face recognition.
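A minimal sketch of how such a similarity loss could be added to a fine-tuning step is shown below; the cosine form, the `encode`/`classify` methods, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def similarity_loss(emb_finetuned, emb_frozen):
    """Penalize embedding distortion during fine-tuning by keeping fine-tuned
    embeddings close (in cosine distance) to those of a frozen copy of the
    pretrained encoder. Illustrative variant; the paper's form may differ."""
    return (1.0 - F.cosine_similarity(emb_finetuned, emb_frozen, dim=-1)).mean()

def finetune_step(model, frozen_model, images, labels, optimizer, lam=0.1):
    emb = model.encode(images)                 # hypothetical encode() method
    with torch.no_grad():
        emb_ref = frozen_model.encode(images)  # frozen pretrained encoder
    task_loss = F.cross_entropy(model.classify(emb), labels)
    loss = task_loss + lam * similarity_loss(emb, emb_ref)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```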
arXiv Detail & Related papers (2024-09-11T19:22:52Z)
- AdapMTL: Adaptive Pruning Framework for Multitask Learning Model [5.643658120200373]
AdapMTL is an adaptive pruning framework for multitask models.
It balances sparsity allocation and accuracy performance across multiple tasks.
It showcases superior performance compared to state-of-the-art pruning methods.
arXiv Detail & Related papers (2024-08-07T17:19:15Z)
- M-HOF-Opt: Multi-Objective Hierarchical Output Feedback Optimization via Multiplier Induced Loss Landscape Scheduling [4.369346338392536]
A probabilistic graphical model is proposed, modeling the joint model parameter and multiplier evolution. We address multi-objective model parameter optimization via a surrogate single objective penalty loss.
arXiv Detail & Related papers (2024-03-20T16:38:26Z)
- Dual-Balancing for Multi-Task Learning [42.613360970194734]
We propose a Dual-Balancing Multi-Task Learning (DB-MTL) method to alleviate the task balancing problem from both loss and gradient perspectives.
DB-MTL ensures loss-scale balancing by performing a logarithm transformation on each task loss, and guarantees gradient-magnitude balancing via normalizing all task gradients to the same magnitude as the maximum gradient norm.
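The two balancing steps described above can be sketched as follows; the handling of shared versus task-specific parameters is simplified, and this is not the authors' implementation.

```python
import torch

def db_mtl_step(model, task_losses, optimizer):
    """Sketch of (1) loss-scale balancing via a log transform of each task
    loss and (2) gradient-magnitude balancing by rescaling every task
    gradient to the largest per-task gradient norm before combining."""
    shared_params = [p for p in model.parameters() if p.requires_grad]
    per_task_grads, norms = [], []
    for loss in task_losses:
        grads = torch.autograd.grad(torch.log(loss + 1e-8), shared_params,
                                    retain_graph=True, allow_unused=True)
        flat = torch.cat([(g if g is not None else torch.zeros_like(p)).reshape(-1)
                          for g, p in zip(grads, shared_params)])
        per_task_grads.append(flat)
        norms.append(flat.norm())
    target = torch.stack(norms).max()  # largest per-task gradient norm
    combined = sum(g * (target / (n + 1e-8)) for g, n in zip(per_task_grads, norms))
    # write the combined gradient back and take an optimizer step
    offset = 0
    for p in shared_params:
        numel = p.numel()
        p.grad = combined[offset:offset + numel].view_as(p).clone()
        offset += numel
    optimizer.step()
    optimizer.zero_grad()
```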
arXiv Detail & Related papers (2023-08-23T09:41:28Z)
- Overcoming Distribution Mismatch in Quantizing Image Super-Resolution Networks [53.23803932357899]
Quantization leads to accuracy loss in image super-resolution (SR) networks.
Existing works address this distribution mismatch problem by dynamically adapting quantization ranges during test time.
We propose a new quantization-aware training scheme that effectively overcomes the distribution mismatch problem in SR networks.
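For background on the range/distribution issue this summary refers to, the sketch below shows generic fake quantization with a learnable clipping range; it illustrates why a fixed range breaks when the activation distribution shifts, and it is not the scheme proposed in the paper.

```python
import torch
import torch.nn as nn

class FakeQuant(nn.Module):
    """Generic fake-quantization with a learnable clipping range, included
    only to illustrate the quantization-range / distribution-mismatch issue;
    NOT the training scheme proposed in the paper."""
    def __init__(self, n_bits=4, init_range=3.0):
        super().__init__()
        self.n_levels = 2 ** n_bits - 1
        self.alpha = nn.Parameter(torch.tensor(init_range))  # learnable clip value

    def forward(self, x):
        # clamp activations to [-alpha, alpha]; if the activation distribution
        # shifts at test time, a fixed alpha mismatches it and accuracy drops
        a = self.alpha.abs()
        x = torch.minimum(torch.maximum(x, -a), a)
        scale = 2 * a / self.n_levels
        q = torch.round((x + a) / scale) * scale - a
        # straight-through estimator: quantize forward, identity backward
        return x + (q - x).detach()
```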
arXiv Detail & Related papers (2023-07-25T08:50:01Z)
- Exposing and Addressing Cross-Task Inconsistency in Unified Vision-Language Models [80.23791222509644]
Inconsistent AI models are considered brittle and untrustworthy by human users.
We find that state-of-the-art vision-language models suffer from a surprisingly high degree of inconsistent behavior across tasks.
We propose a rank correlation-based auxiliary training objective, computed over large automatically created cross-task contrast sets.
arXiv Detail & Related papers (2023-03-28T16:57:12Z)
- Task Aware Dreamer for Task Generalization in Reinforcement Learning [31.364276322513447]
We show that training a general world model can utilize similar structures in tasks and help train more generalizable agents. We introduce a novel method named Task Aware Dreamer (TAD), which integrates reward-informed features to identify latent characteristics across tasks. Experiments in both image-based and state-based tasks show that TAD can significantly improve the performance of handling different tasks simultaneously.
arXiv Detail & Related papers (2023-03-09T08:04:16Z)
- A Dirichlet Process Mixture of Robust Task Models for Scalable Lifelong Reinforcement Learning [11.076005074172516]
Reinforcement learning algorithms can easily encounter catastrophic forgetting or interference when faced with lifelong streaming information.
We propose a scalable lifelong RL method that dynamically expands the network capacity to accommodate new knowledge.
We show that our method successfully facilitates scalable lifelong RL and outperforms relevant existing methods.
arXiv Detail & Related papers (2022-05-22T09:48:41Z)
- Conflict-Averse Gradient Descent for Multi-task Learning [56.379937772617]
A major challenge in optimizing a multi-task model is conflicting gradients across tasks.
We introduce Conflict-Averse Gradient descent (CAGrad), which minimizes the average loss function.
CAGrad balances the objectives automatically and still provably converges to a minimum over the average loss.
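Below is a simplified sketch of computing a conflict-averse update direction from per-task gradients, following the published CAGrad formulation (a dual problem over simplex weights); it omits the official implementation's optional rescaling and other details.

```python
import numpy as np
import torch
from scipy.optimize import minimize

def cagrad_direction(grads, c=0.5):
    """grads: [dim, num_tasks] tensor of flattened task gradients, where
    grads[:, k] is the gradient of task k's loss w.r.t. the shared parameters.
    Returns a single update direction that trades off the average loss against
    the worst-case task improvement (simplified CAGrad sketch)."""
    K = grads.shape[1]
    GG = (grads.t() @ grads).cpu().numpy()  # Gram matrix of task gradients
    g0_sq = GG.mean()                       # squared norm of the average gradient
    sqrt_phi = c * np.sqrt(g0_sq + 1e-8)
    b = np.ones(K) / K                      # simplex coefficients of the average gradient

    def dual(w):
        # g_w . g_0 + sqrt(phi) * ||g_w||, expressed through the Gram matrix
        return float(w @ GG @ b + sqrt_phi * np.sqrt(w @ GG @ w + 1e-8))

    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(dual, np.ones(K) / K, bounds=[(0.0, 1.0)] * K, constraints=cons)
    w = torch.tensor(res.x, dtype=grads.dtype, device=grads.device)
    gw = grads @ w                          # worst-case weighted gradient
    g0 = grads.mean(dim=1)                  # average gradient
    return g0 + (sqrt_phi / (gw.norm() + 1e-8)) * gw
```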
arXiv Detail & Related papers (2021-10-26T22:03:51Z)
- A Simple General Approach to Balance Task Difficulty in Multi-Task Learning [4.531240717484252]
In multi-task learning, the difficulty levels of different tasks vary.
We propose a Balanced Multi-Task Learning (BMTL) framework.
The proposed BMTL framework is very simple and can be combined with most multi-task learning models.
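Since the summary does not spell out the transformation BMTL applies, the sketch below shows one simple loss-difficulty weighting of the same flavor: each task loss is weighted by an increasing transform of its own detached value. The exponential transform and temperature are assumptions, not necessarily the paper's choice.

```python
import torch

def balanced_mtl_loss(task_losses, temperature=1.0):
    """Weight each task loss by an increasing transform of its own (detached)
    value, so harder tasks contribute more to the total objective."""
    losses = torch.stack(task_losses)
    weights = torch.exp(losses.detach() / temperature)
    weights = weights / weights.sum()
    return (weights * losses).sum()

# usage (hypothetical task losses): total = balanced_mtl_loss([loss_a, loss_b, loss_c]); total.backward()
```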
arXiv Detail & Related papers (2020-02-12T04:31:34Z)