A Framework for On the Fly Input Refinement for Deep Learning Models
- URL: http://arxiv.org/abs/2502.05456v1
- Date: Sat, 08 Feb 2025 05:41:01 GMT
- Title: A Framework for On the Fly Input Refinement for Deep Learning Models
- Authors: Ravishka Rathnasuriya,
- Abstract summary: Deep learning models still exhibit notable mispredictions in real-world applications, even when trained on up-to-date data.
This research introduces an adaptive, on-the-fly input refinement framework aimed at improving model performance through input validation and transformation.
As a scalable and resource-efficient solution, this framework holds significant promise for high-stakes applications in software engineering, natural language processing, and computer vision.
- Score: 0.0
- Abstract: Advancements in deep learning have significantly improved model performance across tasks involving code, text, and image processing. However, these models still exhibit notable mispredictions in real-world applications, even when trained on up-to-date data. Such failures often arise from slight input variations, such as minor syntax changes in code, rephrasing in text, or subtle lighting shifts in images, that reveal inherent limitations in these models' ability to generalize effectively. Traditional approaches to addressing these challenges involve retraining, a resource-intensive process that demands significant investment in data labeling, model updates, and redeployment. This research introduces an adaptive, on-the-fly input refinement framework aimed at improving model performance through input validation and transformation. The input validation component detects inputs likely to cause errors, while the input transformation component applies domain-specific adjustments to better align these inputs with the model's handling capabilities. This dual strategy reduces mispredictions across various domains, boosting model performance without necessitating retraining. As a scalable and resource-efficient solution, this framework holds significant promise for high-stakes applications in software engineering, natural language processing, and computer vision.
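To make the dual strategy concrete, below is a minimal Python sketch of such a refinement loop. Everything here is an illustrative assumption rather than the paper's implementation: the function names, the use of low maximum softmax confidence as the error-proneness signal, and the example transformations.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff for "likely misprediction"

def predict_proba(model, x):
    """Query the (frozen) model and return class probabilities."""
    logits = model(x)
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

def validate_input(model, x):
    """Flag inputs the model is likely to mispredict; here, a low
    maximum softmax probability serves as the error-proneness proxy."""
    return predict_proba(model, x).max() < CONFIDENCE_THRESHOLD

def transform_input(x, domain):
    """Apply a domain-specific adjustment before re-querying the model."""
    if domain == "image":
        # e.g., normalize away lighting shifts
        return (x - x.mean()) / (x.std() + 1e-8)
    # text/code transformations (rephrasing, syntax normalization)
    # would plug in here as additional branches
    return x

def refine_and_predict(model, x, domain):
    """On-the-fly refinement: validate, transform if risky, predict."""
    if validate_input(model, x):
        x = transform_input(x, domain)
    return int(predict_proba(model, x).argmax())
```

Note that the underlying model is only ever queried, never updated, which is what makes the approach retraining-free and resource-efficient.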
Related papers
- Boosting Alignment for Post-Unlearning Text-to-Image Generative Models [55.82190434534429]
Large-scale generative models have shown impressive image-generation capabilities, propelled by massive data.
This often inadvertently leads to the generation of harmful or inappropriate content and raises copyright concerns.
We propose a framework that seeks an optimal model update at each unlearning iteration, ensuring monotonic improvement on both objectives.
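One generic way to obtain a joint-descent direction for two objectives (here, forgetting and preserving generation quality) is to resolve gradient conflicts before stepping, as in PCGrad; the sketch below illustrates that idea under assumed flattened gradients and is not the paper's actual update rule.

```python
import torch

def joint_descent_direction(g_unlearn: torch.Tensor,
                            g_align: torch.Tensor) -> torch.Tensor:
    """g_unlearn, g_align: flattened 1-D gradients of the two losses.
    If they conflict, project each onto the other's orthogonal
    complement (PCGrad-style) so a small step along the sum tends to
    decrease both losses at once."""
    dot = torch.dot(g_unlearn, g_align)
    if dot >= 0:  # no conflict: the plain sum already improves both
        return g_unlearn + g_align
    gu = g_unlearn - dot / g_align.norm() ** 2 * g_align
    ga = g_align - dot / g_unlearn.norm() ** 2 * g_unlearn
    return gu + ga
```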
arXiv Detail & Related papers (2024-12-09T21:36:10Z)
- Fine-Grained Verifiers: Preference Modeling as Next-token Prediction in Vision-Language Alignment [57.0121616203175]
We propose FiSAO, a novel self-alignment method that utilizes the model's own visual encoder as a fine-grained verifier to improve vision-language alignment.
By leveraging token-level feedback from the vision encoder, FiSAO significantly improves vision-language alignment, even surpassing traditional preference tuning methods that require additional data.
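For intuition, here is a rough sketch of using the vision encoder as a token-level verifier: each generated token is scored by its similarity to the image's patch embeddings. The shapes and the shared embedding space are assumptions for illustration; the actual FiSAO method is more involved.

```python
import torch
import torch.nn.functional as F

def token_level_rewards(vision_feats: torch.Tensor,
                        token_embs: torch.Tensor) -> torch.Tensor:
    """Fine-grained verification signal from the vision encoder.

    vision_feats: (num_patches, d) patch embeddings from the vision encoder
    token_embs:   (seq_len, d) generated-token embeddings projected into
                  the same space (assumed available)
    Returns a per-token reward: similarity to the best-matching patch.
    """
    sim = F.cosine_similarity(token_embs.unsqueeze(1),
                              vision_feats.unsqueeze(0),
                              dim=-1)            # (seq_len, num_patches)
    return sim.max(dim=1).values                 # best patch per token
```

Tokens that match no image region receive a low reward, yielding a token-level preference signal without additional annotated data.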
arXiv Detail & Related papers (2024-10-18T03:34:32Z)
- Adjusting Pretrained Backbones for Performativity [34.390793811659556]
We propose a novel technique to adjust pretrained backbones for performativity in a modular way.
We show how it leads to smaller loss along the retraining trajectory and enables us to effectively select among candidate models to anticipate performance degradations.
arXiv Detail & Related papers (2024-10-06T14:41:13Z)
- Data-efficient Large Vision Models through Sequential Autoregression [58.26179273091461]
We develop an efficient, autoregression-based vision model on a limited dataset.
We demonstrate how this model achieves proficiency in a spectrum of visual tasks spanning both high-level and low-level semantic understanding.
Our empirical evaluations underscore the model's agility in adapting to various tasks, heralding a significant reduction in the parameter footprint.
arXiv Detail & Related papers (2024-02-07T13:41:53Z)
- Sparse Training for Federated Learning with Regularized Error Correction [9.852567834643292]
Federated Learning (FL) has attracted much interest due to the significant advantages it brings to training deep neural network (DNN) models.
FLARE presents a novel sparse training approach via accumulated pulling of the updated models with regularization on the embeddings in the FL process.
The performance of FLARE is validated through extensive experiments on diverse and complex models, achieving a remarkable sparsity level (10 times and more beyond the current state-of-the-art) along with significantly improved accuracy.
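For intuition, here is a minimal sketch of sparse updates with accumulated error correction, the general mechanism behind such approaches; the actual FLARE algorithm, including its regularization on the embeddings, is more involved.

```python
import numpy as np

def sparsify_with_error_feedback(update, residual, keep_ratio=0.1):
    """Keep only the largest-magnitude entries of (update + residual)
    and carry everything dropped forward into the next round's residual,
    so no gradient information is permanently lost."""
    corrected = update + residual
    k = max(1, int(keep_ratio * corrected.size))
    threshold = np.partition(np.abs(corrected).ravel(), -k)[-k]
    mask = np.abs(corrected) >= threshold
    sparse_update = np.where(mask, corrected, 0.0)  # sent to the server
    new_residual = corrected - sparse_update        # accumulated locally
    return sparse_update, new_residual
```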
arXiv Detail & Related papers (2023-12-21T12:36:53Z)
- Robustness, Evaluation and Adaptation of Machine Learning Models in the Wild [4.304803366354879]
We study causes of impaired robustness to domain shifts and present algorithms for training domain robust models.
A key source of model brittleness is domain overfitting, which our new training algorithms suppress, instead encouraging domain-general hypotheses.
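One common way to encode this preference for domain-general hypotheses is to penalize the variance of the risk across training domains (as in V-REx); the sketch below shows that generic regularizer, not the paper's specific algorithm.

```python
import torch

def domain_robust_loss(per_domain_losses, penalty_weight=1.0):
    """per_domain_losses: list of scalar loss tensors, one per domain.
    A model that fits one domain much better than the others pays a
    variance penalty, discouraging domain overfitting."""
    losses = torch.stack(per_domain_losses)
    return losses.mean() + penalty_weight * losses.var()
```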
arXiv Detail & Related papers (2023-03-05T21:41:16Z)
- Correlation Information Bottleneck: Towards Adapting Pretrained Multimodal Models for Robust Visual Question Answering [63.87200781247364]
Correlation Information Bottleneck (CIB) seeks a tradeoff between compression and redundancy in representations.
We derive a tight theoretical upper bound for the mutual information between multimodal inputs and representations.
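For reference, the generic information bottleneck objective underlying this compression-redundancy tradeoff is shown below; CIB's correlation-based formulation and its upper bound differ in the details.

```latex
% Compress input X into representation Z while retaining information
% about the target Y; \beta trades compression against redundancy.
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```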
arXiv Detail & Related papers (2022-09-14T22:04:10Z)
- Improving generalization with synthetic training data for deep learning based quality inspection [0.0]
Supervised deep learning requires a large number of annotated images for training.
In practice, collecting and annotating such data is costly and laborious.
We show that the use of randomly generated synthetic training images can help tackle domain instability.
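As a toy illustration of the domain-randomization idea (the paper's synthetic-image pipeline for quality inspection is more elaborate), a foreground object can be composited onto random backgrounds under random lighting:

```python
import numpy as np

def randomize(foreground, mask, rng=None):
    """foreground: (H, W, 3) float image in [0, 1]; mask: (H, W) bool
    selecting the object. Returns one randomized training image."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w, _ = foreground.shape
    background = rng.uniform(0.0, 1.0, size=(h, w, 3))  # random texture
    brightness = rng.uniform(0.6, 1.4)                  # random lighting
    img = np.where(mask[..., None], foreground, background)
    return np.clip(img * brightness, 0.0, 1.0)
```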
arXiv Detail & Related papers (2022-02-25T16:51:01Z)
- Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning [65.268245109828]
In data-rich domains such as vision, language, and speech, deep learning prevails to deliver high-performance task-specific models.
Deep learning in resource-limited domains still faces multiple challenges including (i) limited data, (ii) constrained model development cost, and (iii) lack of adequate pre-trained models for effective finetuning.
Model reprogramming enables resource-efficient cross-domain machine learning by repurposing a well-developed pre-trained model from a source domain to solve tasks in a target domain without model finetuning.
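A minimal sketch of this idea: a frozen source-domain classifier is wrapped with a trainable additive input program and a linear output-label mapping, so only those two small components are learned. Class and layer names and shapes here are illustrative.

```python
import torch
import torch.nn as nn

class Reprogrammer(nn.Module):
    def __init__(self, frozen_model, in_shape, n_source_cls, n_target_cls):
        super().__init__()
        self.model = frozen_model.eval()
        for p in self.model.parameters():
            p.requires_grad_(False)  # the source model is never finetuned
        # trainable additive "input program" applied to every input
        self.delta = nn.Parameter(torch.zeros(*in_shape))
        # map source-domain labels to target-domain labels
        self.label_map = nn.Linear(n_source_cls, n_target_cls)

    def forward(self, x):
        source_logits = self.model(x + self.delta)  # reprogrammed input
        return self.label_map(source_logits)
```

Only `delta` and `label_map` receive gradients, which is what makes reprogramming resource-efficient compared with finetuning the whole model.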
arXiv Detail & Related papers (2022-02-22T02:33:54Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.