Feedback is Needed for Retakes: An Explainable Poor Image Notification
Framework for the Visually Impaired
- URL: http://arxiv.org/abs/2211.09427v1
- Date: Thu, 17 Nov 2022 09:22:28 GMT
- Title: Feedback is Needed for Retakes: An Explainable Poor Image Notification
Framework for the Visually Impaired
- Authors: Kazuya Ohata, Shunsuke Kitada, Hitoshi Iyatomi
- Abstract summary: Our framework first determines the quality of images and then generates captions using only those images that are determined to be of high quality.
If the image quality is low, the user is notified of the detected flaws and prompted to retake the image, and this cycle is repeated until the input image is deemed to be of high quality.
- Score: 6.0158981171030685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a simple yet effective image captioning framework that can
determine the quality of an image and notify the user of the reasons for any
flaws in the image. Our framework first determines the quality of images and
then generates captions using only those images that are determined to be of
high quality. If the image quality is low, the user is notified of the detected
flaws and prompted to retake the image, and this cycle is repeated until the input image is deemed to
be of high quality. As a component of the framework, we trained and evaluated a
low-quality image detection model that simultaneously learns difficulty in
recognizing images and individual flaws, and we demonstrated that our proposal
can explain the reasons for flaws with sufficient accuracy. We also evaluated a
dataset with low-quality images removed by our framework and found improved
values for all four common metrics (BLEU-4, METEOR, ROUGE-L, and CIDEr),
confirming an improvement in general-purpose image captioning capability. Our
framework would assist the visually impaired, who have difficulty judging image
quality.
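A minimal sketch of the retake loop described in the abstract; `assess_quality` and `generate_caption` are hypothetical placeholders for the paper's low-quality image detection model and captioning model, not the authors' code:

```python
def assess_quality(image):
    """Placeholder: return (is_high_quality, list_of_detected_flaws)."""
    raise NotImplementedError

def generate_caption(image):
    """Placeholder: caption an image judged to be of high quality."""
    raise NotImplementedError

def caption_with_retakes(capture_fn):
    """Repeat capture -> quality check until the image passes, then caption it."""
    while True:
        image = capture_fn()
        ok, flaws = assess_quality(image)
        if ok:
            return generate_caption(image)
        # Notify the user of the detected flaws so they can retake the photo.
        print("Image quality is low; please retake. Flaws:", ", ".join(flaws))
```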
Related papers
- Dual-Representation Interaction Driven Image Quality Assessment with Restoration Assistance [11.983231834400698]
No-Reference Image Quality Assessment for distorted images has always been a challenging problem due to image content variance and distortion diversity.
Previous IQA models mostly encode explicit single-quality features of synthetic images to obtain quality-aware representations for quality score prediction.
We introduce the DRI method to obtain degradation vectors and quality vectors of images, which separately model the degradation and quality information of low-quality images.
arXiv Detail & Related papers (2024-11-26T12:48:47Z)
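A hypothetical sketch of the dual-representation idea in the DRI entry above: two separate encoders produce a degradation vector and a quality vector, which a small head combines into a score. The stand-in CNNs below are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class DualRepresentationIQA(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # Stand-in encoders; the paper's actual networks differ.
        self.degradation_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.quality_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.head = nn.Linear(2 * dim, 1)  # scalar quality score

    def forward(self, x):
        d = self.degradation_enc(x)  # models degradation information
        q = self.quality_enc(x)      # models quality information
        return self.head(torch.cat([d, q], dim=1))
```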
- Mitigating Perception Bias: A Training-Free Approach to Enhance LMM for Image Quality Assessment [18.622560025505233]
We propose a training-free debiasing framework for image quality assessment.
We first explore several semantic-preserving distortions that can significantly degrade image quality.
We then apply these specific distortions to the query or test images.
During quality inference, both a query image and its corresponding degraded version are fed to the LMM.
All degraded images are consistently rated as poor quality, regardless of their semantic difference.
arXiv Detail & Related papers (2024-11-19T15:00:59Z)
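A hypothetical sketch of the training-free debiasing idea above: rate the query image alongside deliberately distorted copies of it, then use those copies as anchors. `rate_with_lmm` is a placeholder for querying a large multimodal model, and the subtraction-based aggregation is illustrative, not the paper's exact scheme:

```python
from PIL import Image, ImageFilter

def rate_with_lmm(image: Image.Image) -> float:
    """Placeholder: ask an LMM to rate image quality in [0, 1]."""
    raise NotImplementedError

def semantic_preserving_distortions(image: Image.Image) -> list[Image.Image]:
    # Distortions that degrade quality while keeping content recognizable.
    return [image.filter(ImageFilter.GaussianBlur(radius=4)),
            image.resize((image.width // 4, image.height // 4)).resize(image.size)]

def debiased_score(image: Image.Image) -> float:
    raw = rate_with_lmm(image)
    anchors = [rate_with_lmm(d) for d in semantic_preserving_distortions(image)]
    # Subtract the anchor mean so content-driven bias shared by all versions cancels.
    return raw - sum(anchors) / len(anchors)
```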
- Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251]
We introduce a dual-branch network for portrait image quality assessment (PIQA).
We utilize two backbone networks (i.e., Swin Transformer-B) to extract the quality-aware features from the entire portrait image and the facial image cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture the quality-aware and scene-specific features as the auxiliary features.
arXiv Detail & Related papers (2024-05-14T12:43:43Z)
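A hypothetical sketch of the dual-branch design above, with tiny stand-in CNNs in place of the Swin Transformer-B backbones and without the LIQE auxiliary features:

```python
import torch
import torch.nn as nn

def tiny_backbone(dim=64):
    # Stand-in feature extractor: (N, 3, H, W) -> (N, dim).
    return nn.Sequential(nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())

class DualBranchPIQA(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.portrait_branch = tiny_backbone(dim)  # whole portrait image
        self.face_branch = tiny_backbone(dim)      # cropped facial region
        self.head = nn.Linear(2 * dim, 1)

    def forward(self, portrait, face_crop):
        feats = torch.cat([self.portrait_branch(portrait),
                           self.face_branch(face_crop)], dim=1)
        return self.head(feats)  # scalar portrait quality score
```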
- Interpretable Image Quality Assessment via CLIP with Multiple Antonym-Prompt Pairs [1.6317061277457001]
No-reference image quality assessment (NR-IQA) is the task of estimating the perceptual quality of an image without its corresponding original image.
We propose a new zero-shot and interpretable NR-IQA method that exploits the ability of a pre-trained vision-language model (CLIP).
Experimental results show that the proposed method outperforms existing zero-shot NR-IQA methods in terms of accuracy.
arXiv Detail & Related papers (2023-08-24T21:37:00Z)
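A hypothetical zero-shot sketch in the spirit of the antonym-prompt idea above, using the public CLIP package; the prompt wording and file name are illustrative, not the paper's exact prompts:

```python
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

antonym_pairs = [
    ("a sharp photo", "a blurry photo"),
    ("a well-lit photo", "a dark photo"),
    ("a noise-free photo", "a noisy photo"),
]

image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    image_feat = model.encode_image(image)
    image_feat /= image_feat.norm(dim=-1, keepdim=True)
    for pos, neg in antonym_pairs:
        text_feat = model.encode_text(clip.tokenize([pos, neg]).to(device))
        text_feat /= text_feat.norm(dim=-1, keepdim=True)
        # Probability mass on the positive prompt = per-attribute quality score.
        probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)
        print(f"{pos!r} vs {neg!r}: {probs[0, 0].item():.3f}")
```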
- Helping Visually Impaired People Take Better Quality Pictures [52.03016269364854]
We develop tools to help visually impaired users minimize occurrences of common technical distortions.
We also create a prototype feedback system that helps to guide users to mitigate quality issues.
arXiv Detail & Related papers (2023-05-14T04:37:53Z)
- Test your samples jointly: Pseudo-reference for image quality evaluation [3.2634122554914]
We propose to jointly model different images depicting the same content to improve the precision of quality estimation.
Our experiments show that at test-time, our method successfully combines the features from multiple images depicting the same new content, improving estimation quality.
arXiv Detail & Related papers (2023-04-07T17:59:27Z)
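A hypothetical sketch of the pseudo-reference idea above: pool features from several images of the same content into a pseudo-reference and score each image by its distance to it. Both `encode` and the distance-based scoring are illustrative placeholders, not the paper's method:

```python
import torch

def encode(images: torch.Tensor) -> torch.Tensor:
    """Toy placeholder feature extractor: (N, C, H, W) -> (N, D)."""
    return images.flatten(1).float()  # raw pixels as "features"

def pseudo_reference_scores(images: torch.Tensor) -> torch.Tensor:
    """Jointly score N images of the same content against their pooled feature."""
    feats = encode(images)                        # (N, D)
    pseudo_ref = feats.mean(dim=0, keepdim=True)  # (1, D) pseudo-reference
    # Smaller distance to the pseudo-reference -> higher estimated quality.
    return -torch.cdist(feats, pseudo_ref).squeeze(1)
```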
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
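A hypothetical sketch of a contrastive pairwise objective of the kind the CONTRIQUE entry above describes: an NT-Xent-style loss that pulls together two views of the same image. The actual auxiliary task and choice of positives differ in the paper:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1):
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)  # (2N, D)
    sim = z @ z.t() / tau                        # scaled cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))            # exclude self-pairs
    # Row i's positive is its twin view at i+N (or i-N for the second half).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```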
- Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of image restoration (IR) models, i.e., degraded images, as references.
Our results can even be close to the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z)
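A hypothetical sketch of the degraded-reference idea above: score a restored image using the degraded input of the restoration model as an imperfect reference. The backbone and feature fusion are stand-ins, not the paper's model:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(  # stand-in feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten())
score_head = nn.Linear(2 * 16 * 8 * 8, 1)

def dr_iqa_score(restored: torch.Tensor, degraded: torch.Tensor) -> torch.Tensor:
    """Score `restored` using `degraded` as the (imperfect) reference."""
    f_r, f_d = backbone(restored), backbone(degraded)
    return score_head(torch.cat([f_r, f_d], dim=1))
```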
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Inducing Predictive Uncertainty Estimation for Face Recognition [102.58180557181643]
We propose a method for generating image quality training data automatically from 'mated-pairs' of face images.
We use the generated data to train a lightweight Predictive Confidence Network, termed PCNet, for estimating the confidence score of a face image.
arXiv Detail & Related papers (2020-09-01T17:52:00Z)
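A hypothetical sketch of how mated pairs could yield quality labels, as in the entry above: embedding similarity between two images of the same person serves as a pseudo-label for training a lightweight confidence network. `face_embed` is a placeholder for a pretrained face recognition model:

```python
import torch
import torch.nn.functional as F

def face_embed(images: torch.Tensor) -> torch.Tensor:
    """Placeholder: (N, 3, H, W) -> L2-normalized embeddings (N, D)."""
    raise NotImplementedError

def mated_pair_labels(img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
    """Cosine similarity of a mated pair acts as a quality pseudo-label."""
    ea, eb = face_embed(img_a), face_embed(img_b)
    return F.cosine_similarity(ea, eb)  # higher -> both images easier to match

# A confidence network would then regress these labels from a single image.
```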