ConKeD++ -- Improving descriptor learning for retinal image registration: A comprehensive study of contrastive losses
- URL: http://arxiv.org/abs/2404.16773v1
- Date: Thu, 25 Apr 2024 17:24:35 GMT
- Title: ConKeD++ -- Improving descriptor learning for retinal image registration: A comprehensive study of contrastive losses
- Authors: David Rivas-Villar, Álvaro S. Hervella, José Rouco, Jorge Novo,
- Abstract summary: We propose to test and improve a state-of-the-art framework for color fundus image registration, ConKeD.
Using the ConKeD framework we test multiple loss functions, adapting them to the framework and the application domain.
Our work demonstrates state-of-the-art performance across all datasets and metrics.
- Score: 6.618504904743609
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Self-supervised contrastive learning has emerged as one of the most successful deep learning paradigms. In this regard, it has seen extensive use in image registration and, more recently, in the particular field of medical image registration. In this work, we propose to test, extend, and improve a state-of-the-art framework for color fundus image registration, ConKeD. Using the ConKeD framework, we test multiple loss functions, adapting them to the framework and the application domain. Furthermore, we evaluate our models using the standardized benchmark dataset FIRE as well as several datasets that have never been used before for color fundus registration, for which we are releasing the pairing data as well as a standardized evaluation approach. Our work demonstrates state-of-the-art performance across all datasets and metrics, showing several advantages over current SOTA color fundus registration methods.
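The abstract does not spell out the contrastive losses tested; as a rough illustration of the multi-positive multi-negative strategy the ConKeD line of work names, an InfoNCE-style loss averaged over several positives might look like the following sketch (function names, the cosine similarity, and the averaging choice are assumptions, not the authors' implementation):

```python
import numpy as np

def _logsumexp(x):
    """Numerically stable log-sum-exp."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def multi_pos_neg_loss(anchor, positives, negatives, temperature=0.1):
    """InfoNCE-style loss with several positives and several negatives:
    the standard one-positive term is averaged over all positives, and
    every negative is shared across those terms."""
    def cos(a, b):
        # cosine similarity between two descriptor vectors
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    neg_logits = np.array([cos(anchor, n) / temperature for n in negatives])
    terms = []
    for p in positives:
        pos_logit = cos(anchor, p) / temperature
        logits = np.concatenate(([pos_logit], neg_logits))
        # negative log-probability of the positive against all negatives
        terms.append(_logsumexp(logits) - pos_logit)
    return float(np.mean(terms))
```

The loss falls when the anchor descriptor is closer to its positives than to the shared negatives, which is the property a keypoint descriptor needs for matching across images of the same retina.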
Related papers
- ConKeD: Multiview contrastive descriptor learning for keypoint-based retinal image registration [6.618504904743609]
We propose ConKeD, a novel deep learning approach to learn descriptors for retinal image registration.
In contrast to current registration methods, our approach employs a novel multi-positive multi-negative contrastive learning strategy.
Our experimental results demonstrate the benefits of the novel multi-positive multi-negative strategy.
arXiv Detail & Related papers (2024-01-11T13:22:54Z) - Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
The training of an efficacious deep learning model requires large data with diverse styles and qualities.
A novel contrastive learning is developed to equip the deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z) - Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z) - Deformable Image Registration using Neural ODEs [15.245085400790002]
We present a generic, fast, and accurate diffeomorphic image registration framework that leverages neural ordinary differential equations (NODEs)
Compared with traditional optimization-based methods, our framework reduces the running time from tens of minutes to tens of seconds.
Our experiments show that the registration results of our method outperform state-of-the-art methods under various metrics.
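The solver details are not given in this summary; as a minimal sketch of the underlying idea (integrating a velocity field over pseudo-time so the resulting map stays smooth and invertible), a fixed-step Euler integrator might look like this (the function names, step count, and plain Euler scheme are illustrative assumptions, not the paper's implementation, which would use a learned velocity network and typically an adaptive ODE solver):

```python
import numpy as np

def integrate_velocity(velocity, points, t0=0.0, t1=1.0, steps=50):
    """Fixed-step Euler integration of dx/dt = velocity(x, t).

    Maps each input point to its position under the flow of the velocity
    field -- the transformation family that neural-ODE registration
    methods parameterize with a network."""
    x = np.array(points, dtype=float)
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x + dt * velocity(x, t)  # one Euler step along the flow
        t += dt
    return x
```

Composing many small steps of a smooth velocity field is what makes the overall deformation diffeomorphic, in contrast to predicting a displacement field in one shot.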
arXiv Detail & Related papers (2021-08-07T12:54:17Z) - Multi-Label Image Classification with Contrastive Learning [57.47567461616912]
We show that a direct application of contrastive learning can hardly improve performance in multi-label cases.
We propose a novel framework for multi-label classification with contrastive learning in a fully supervised setting.
arXiv Detail & Related papers (2021-07-24T15:00:47Z) - Towards Unsupervised Sketch-based Image Retrieval [126.77787336692802]
We introduce a novel framework that simultaneously performs unsupervised representation learning and sketch-photo domain alignment.
Our framework achieves excellent performance in the new unsupervised setting, and performs comparably or better than state-of-the-art in the zero-shot setting.
arXiv Detail & Related papers (2021-05-18T02:38:22Z) - A Meta-Learning Approach for Medical Image Registration [6.518615946009265]
We propose a novel unsupervised registration model which is integrated with a gradient-based meta learning framework.
In our experiments, the proposed model obtained significantly improved performance in terms of accuracy and training time.
arXiv Detail & Related papers (2021-04-21T10:27:05Z) - INSPIRE: Intensity and Spatial Information-Based Deformable Image Registration [3.584984184069584]
INSPIRE is a top-performing general-purpose method for deformable image registration.
We show that the proposed method delivers highly accurate as well as stable and robust registration results.
We also evaluate the method on four benchmark datasets of 3D images of brains, for a total of 2088 pairwise registrations.
arXiv Detail & Related papers (2020-12-14T01:51:59Z) - Deep Group-wise Variational Diffeomorphic Image Registration [3.0022455491411653]
We propose to extend current learning-based image registration to allow simultaneous registration of multiple images.
We present a general mathematical framework that enables both registration of multiple images to their viscous geodesic average and registration in which any of the available images can be used as a fixed image.
arXiv Detail & Related papers (2020-10-01T07:37:28Z) - Learning Deformable Image Registration from Optimization: Perspective, Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z) - Gradient-Induced Co-Saliency Detection [81.54194063218216]
Co-saliency detection (Co-SOD) aims to segment the common salient foreground in a group of relevant images.
In this paper, inspired by human behavior, we propose a gradient-induced co-saliency detection method.
arXiv Detail & Related papers (2020-04-28T08:40:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.