Optimization of Structural Similarity in Mathematical Imaging
- URL: http://arxiv.org/abs/2002.02657v1
- Date: Fri, 7 Feb 2020 07:46:31 GMT
- Title: Optimization of Structural Similarity in Mathematical Imaging
- Authors: D. Otero, D. La Torre, O. Michailovich, E.R. Vrscay
- Abstract summary: We introduce a general framework that encompasses a wide range of imaging applications in which the SSIM can be employed as a fidelity measure.
We show how the framework can be used to cast some standard as well as original imaging tasks into optimization problems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is now generally accepted that Euclidean-based metrics may not always
adequately represent the subjective judgement of a human observer. As a result,
many image processing methodologies have been recently extended to take
advantage of alternative visual quality measures, the most prominent of which
is the Structural Similarity Index Measure (SSIM). The superiority of the
latter over Euclidean-based metrics has been demonstrated in several studies.
However, being focused on specific applications, the findings of such studies
often lack generality which, if otherwise acknowledged, could have provided
useful guidance for further development of SSIM-based image processing
algorithms. Accordingly, instead of focusing on a particular image processing
task, in this paper, we introduce a general framework that encompasses a wide
range of imaging applications in which the SSIM can be employed as a fidelity
measure. Subsequently, we show how the framework can be used to cast some
standard as well as original imaging tasks into optimization problems, followed
by a discussion of a number of novel numerical strategies for their solution.
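As a brief illustration of SSIM as a fidelity measure (a minimal sketch, not the paper's own code): for two images x and y, SSIM combines luminance, contrast, and structure comparisons into a single score in [-1, 1]. The snippet below computes a global, single-window variant in NumPy, using the common stability constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2 for dynamic range L; the standard SSIM is instead computed over local sliding windows and averaged.

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Global (single-window) SSIM between images x and y.

    A simplified sketch: the standard SSIM computes these statistics
    over local Gaussian-weighted windows and averages the resulting
    map; here they are taken over the whole image. L is the dynamic
    range of the pixel values (1.0 for images scaled to [0, 1]).
    """
    C1 = (0.01 * L) ** 2  # stabilizes the luminance term
    C2 = (0.03 * L) ** 2  # stabilizes the contrast/structure term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x**2 + mu_y**2 + C1) * (var_x + var_y + C2)
    return num / den

# Identical images yield an SSIM of exactly 1.
x = np.linspace(0.0, 1.0, 64).reshape(8, 8)
print(ssim_global(x, x))
```

Because SSIM is differentiable in y, expressions of this form can serve directly as the fidelity term in the optimization problems the paper considers.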
Related papers
- Two Approaches to Supervised Image Segmentation (2023-07-19)
  The present work develops comparison experiments between deep learning and multiset-neuron approaches. The deep learning approach confirmed its potential for image segmentation, while the alternative multiset methodology achieved enhanced accuracy with little computational cost.
- Deep Image Deblurring: A Survey (2022-01-26)
  Deblurring is a classic problem in low-level computer vision that aims to recover a sharp image from a blurred input. Recent advances in deep learning have led to significant progress on this problem.
- How can we learn (more) from challenges? A statistical approach to driving future algorithm development (2021-06-17)
  We present a statistical framework for learning from challenges and instantiate it for the specific task of instrument instance segmentation in laparoscopic videos. Based on 51,542 metadata entries on 2,728 images, we applied our approach to the results of the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge 2019. Our method development, tailored to the specific remaining issues, yielded a deep learning model with state-of-the-art overall performance and particular strengths on images where previous methods tended to fail.
- Common Limitations of Image Processing Metrics: A Picture Story (2021-04-12)
  This document focuses on biomedical image analysis problems that can be phrased as image-level classification, semantic segmentation, instance segmentation, or object detection tasks. The current version is based on a Delphi process on metrics conducted by an international consortium of image analysis experts from more than 60 institutions worldwide.
- A Survey of Orthogonal Moments for Image Representation: Theory, Implementation, and Evaluation (2021-03-27)
  Moment-based image representation has been reported to be effective in satisfying the core conditions of semantic description. This paper presents a comprehensive survey of orthogonal moments for image representation, covering recent advances in fast/accurate calculation, robustness/invariance optimization, and definition extension. The presented theoretical analysis, software implementation, and evaluation results can support the community, particularly in developing novel techniques and promoting real-world applications.
- Revisiting Contrastive Learning for Few-Shot Classification (2021-01-26)
  Instance-discrimination-based contrastive learning has emerged as a leading approach for self-supervised learning of visual representations. We show how supervision can be incorporated into this framework to learn representations that generalize better to novel tasks, and propose a novel model-selection algorithm that, used in conjunction with a universal embedding trained using CIDS, outperforms state-of-the-art algorithms on the challenging Meta-Dataset benchmark.
- A Hitchhiker's Guide to Structural Similarity (2021-01-16)
  The Structural Similarity (SSIM) Index is a very widely used image/video quality model. We studied and compared the functions and performance of popular, widely used implementations of SSIM, and arrived at a collection of recommendations on how to use SSIM most effectively.
- Deep Keypoint-Based Camera Pose Estimation with Geometric Constraints (2020-07-29)
  Estimating relative camera poses from consecutive frames is a fundamental problem in visual odometry. We propose an end-to-end trainable framework consisting of learnable modules for detection, feature extraction, matching, and outlier rejection.
- Determining Image Similarity with Quasi-Euclidean Metric (2020-06-25)
  We evaluate the Quasi-Euclidean metric as an image similarity measure and analyze how it fares against existing standards such as SSIM and the Euclidean metric. In some cases our methodology showed remarkable performance, and our implementation proves to be a step ahead in recognizing similarity.
- DeepFactors: Real-Time Probabilistic Dense Monocular SLAM (2020-01-14)
  We present a SLAM system that unifies methods in a probabilistic framework while maintaining real-time performance. This is achieved through a learned compact depth-map representation and the reformulation of three different types of errors. We evaluate the system on trajectory estimation and depth reconstruction on real-world sequences and present various examples of estimated dense geometry.
This list is automatically generated from the titles and abstracts of the papers on this site.