Forgetting to Remember: A Scalable Incremental Learning Framework for
Cross-Task Blind Image Quality Assessment
- URL: http://arxiv.org/abs/2209.07126v1
- Date: Thu, 15 Sep 2022 08:19:12 GMT
- Title: Forgetting to Remember: A Scalable Incremental Learning Framework for
Cross-Task Blind Image Quality Assessment
- Authors: Rui Ma, Qingbo Wu, King N. Ngan, Hongliang Li, Fanman Meng, Linfeng Xu
- Abstract summary: This paper proposes a scalable incremental learning framework (SILF) that could sequentially conduct blind image quality assessment (BIQA) across multiple evaluation tasks with limited memory capacity.
To suppress the unrestrained expansion of memory capacity in sequential learning, we develop a scalable memory unit by gradually and selectively pruning unimportant neurons from previously settled parameter subsets.
- Score: 25.67247922033185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed the great success of blind image quality
assessment (BIQA) in various task-specific scenarios, which present invariable
distortion types and evaluation criteria. However, due to the rigid structure
and learning framework, they cannot apply to the cross-task BIQA scenario,
where the distortion types and evaluation criteria keep changing in practical
applications. This paper proposes a scalable incremental learning framework
(SILF) that could sequentially conduct BIQA across multiple evaluation tasks
with limited memory capacity. More specifically, we develop a dynamic parameter
isolation strategy to sequentially update the task-specific parameter subsets,
which are non-overlapped with each other. Each parameter subset is temporarily
settled to Remember one evaluation preference toward its corresponding task,
and the previously settled parameter subsets can be adaptively reused in the
following BIQA to achieve better performance based on the task relevance. To
suppress the unrestrained expansion of memory capacity in sequential task
learning, we develop a scalable memory unit by gradually and selectively
pruning unimportant neurons from previously settled parameter subsets, which
enables us to Forget part of previous experiences and free up the limited
memory capacity for adapting to emerging new tasks. Extensive experiments on
eleven IQA datasets demonstrate that our proposed method significantly
outperforms the other state-of-the-art methods in cross-task BIQA.
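To make the two mechanisms in the abstract concrete, below is a minimal, illustrative PyTorch-style sketch of non-overlapping task-specific parameter subsets ("Remember") and importance-based neuron pruning that returns capacity to a free pool ("Forget"). This is not the paper's implementation: the class name `MaskedLinear`, the magnitude-based importance score, and the `keep_ratio` parameter are assumptions made for the example.

```python
import torch
import torch.nn as nn


class MaskedLinear(nn.Module):
    """A linear layer whose output neurons are partitioned into
    non-overlapping, task-specific subsets via binary masks."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.task_masks = {}  # task_id -> bool mask over output neurons
        self.free = torch.ones(out_features, dtype=torch.bool)  # unallocated neurons

    def allocate(self, task_id, n_neurons):
        """Remember: settle a new, non-overlapping parameter subset for a task."""
        free_idx = self.free.nonzero(as_tuple=True)[0][:n_neurons]
        mask = torch.zeros_like(self.free)
        mask[free_idx] = True
        self.task_masks[task_id] = mask
        self.free[free_idx] = False

    def forward(self, x, task_id):
        # Reuse the current task's subset plus all previously settled subsets.
        reuse = torch.zeros_like(self.free)
        for t, m in self.task_masks.items():
            if t <= task_id:
                reuse |= m
        return self.linear(x) * reuse.float()

    def prune(self, task_id, keep_ratio=0.5):
        """Forget: drop the least important neurons of an old task's subset
        (importance here is just mean weight magnitude, an assumption) and
        return them to the free pool for future tasks."""
        mask = self.task_masks[task_id]
        idx = mask.nonzero(as_tuple=True)[0]
        importance = self.linear.weight[idx].abs().mean(dim=1)
        n_keep = max(1, int(keep_ratio * len(idx)))
        keep = idx[importance.argsort(descending=True)[:n_keep]]
        new_mask = torch.zeros_like(mask)
        new_mask[keep] = True
        self.free |= mask & ~new_mask  # freed neurons become available again
        self.task_masks[task_id] = new_mask


if __name__ == "__main__":
    layer = MaskedLinear(64, 32)
    layer.allocate(task_id=0, n_neurons=16)   # Remember: settle a subset for task 0
    layer.prune(task_id=0, keep_ratio=0.5)    # Forget: release half of task 0's neurons
    layer.allocate(task_id=1, n_neurons=12)   # reuse the freed capacity for task 1
    y = layer(torch.randn(4, 64), task_id=1)  # task 1 also reuses task 0's kept neurons
    print(y.shape)                            # torch.Size([4, 32])
```

The sketch only shows the bookkeeping of task masks and pruning; the actual SILF training procedure, importance criterion, and task-relevance-based reuse are described in the paper itself.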
Related papers
- Class Incremental Learning with Task-Specific Batch Normalization and Out-of-Distribution Detection [25.224930928724326]
This study focuses on incremental learning for image classification, exploring how to reduce catastrophic forgetting of all learned knowledge when access to old data is restricted due to memory or privacy constraints.
The challenge of incremental learning lies in achieving an optimal balance between plasticity, the ability to learn new knowledge, and stability, the ability to retain old knowledge.
arXiv Detail & Related papers (2024-11-01T07:54:29Z)
- Parameter-Efficient and Memory-Efficient Tuning for Vision Transformer: A Disentangled Approach [87.8330887605381]
We show how to adapt a pre-trained Vision Transformer to downstream recognition tasks with only a few learnable parameters.
We synthesize a task-specific query with a learnable and lightweight module, which is independent of the pre-trained model.
Our method achieves state-of-the-art performance under memory constraints, showcasing its applicability in real-world situations.
arXiv Detail & Related papers (2024-07-09T15:45:04Z)
- MCF-VC: Mitigate Catastrophic Forgetting in Class-Incremental Learning for Multimodal Video Captioning [10.95493493610559]
We propose a method to Mitigate Catastrophic Forgetting in class-incremental learning for multimodal Video Captioning (MCF-VC).
To better constrain the knowledge characteristics of old and new tasks at the specific feature level, we design a Two-stage Knowledge Distillation (TsKD) approach.
Our experiments on the public dataset MSR-VTT show that the proposed method significantly resists the forgetting of previous tasks without replaying old samples, and performs well on the new task.
arXiv Detail & Related papers (2024-02-27T16:54:08Z)
- Continual Action Assessment via Task-Consistent Score-Discriminative Feature Distribution Modeling [31.696222064667243]
Action Quality Assessment (AQA) is a task that tries to answer how well an action is carried out.
Existing works on AQA assume that all the training data are visible for training at one time, but do not enable continual learning.
We propose a unified model to learn AQA tasks sequentially without forgetting.
arXiv Detail & Related papers (2023-09-29T10:06:28Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion [50.03041373044267]
We propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning.
Experiments show that CFL achieves state-of-the-art performance and has a stronger ability to overcome catastrophic forgetting compared with the classification baselines.
arXiv Detail & Related papers (2023-05-20T19:22:40Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- Improving Meta-learning for Low-resource Text Classification and Generation via Memory Imitation [87.98063273826702]
We propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation.
A theoretical analysis is provided to prove the effectiveness of our method.
arXiv Detail & Related papers (2022-03-22T12:41:55Z)
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks [133.93803565077337]
Retrieval-augmented generation (RAG) models combine pre-trained parametric and non-parametric memory for language generation.
We show that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
arXiv Detail & Related papers (2020-05-22T21:34:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.