A Novel Hybrid Endoscopic Dataset for Evaluating Machine Learning-based
Photometric Image Enhancement Models
- URL: http://arxiv.org/abs/2207.02396v1
- Date: Wed, 6 Jul 2022 01:47:17 GMT
- Title: A Novel Hybrid Endoscopic Dataset for Evaluating Machine Learning-based
Photometric Image Enhancement Models
- Authors: Axel Garcia-Vega, Ricardo Espinosa, Gilberto Ochoa-Ruiz, Thomas Bazin,
Luis Eduardo Falcon-Morales, Dominique Lamarque, Christian Daul
- Abstract summary: This work introduces a new synthetic dataset generated with generative adversarial techniques.
It also explores both shallow and deep learning-based image-enhancement methods under overexposed and underexposed lighting conditions.
- Score: 0.9236074230806579
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Endoscopy is the most widely used medical technique for cancer and polyp
detection inside hollow organs. However, images acquired by an endoscope are
frequently affected by illumination artefacts caused by the orientation of the
light source. Two major issues arise when the endoscope's light source pose
suddenly changes: overexposed and underexposed tissue areas are produced.
These two scenarios can result in misdiagnosis due to the lack of information
in the affected zones or hamper the performance of various computer vision
methods (e.g., SLAM, structure from motion, optical flow) used during the
non-invasive examination. The aim of this work is two-fold: i) to introduce a new
synthetic dataset generated with generative adversarial techniques and ii) to
explore both shallow and deep learning-based image-enhancement methods under
overexposed and underexposed lighting conditions. The best quantitative results
(i.e., metric-based results) were obtained by the deep-learning-based LMSPEC
method, which also runs at around 7.6 fps.
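To make the exposure problem concrete, below is a minimal sketch (not the authors' method) that flags overexposed and underexposed regions of an endoscopic frame with simple luminance thresholds; the thresholds and the stand-in image are assumptions for illustration only.

```python
import cv2
import numpy as np

def exposure_masks(bgr_frame, low_thr=30, high_thr=225):
    """Flag under- and overexposed pixels with simple luminance thresholds.

    Illustrative only: the thresholds are assumptions, not values from the paper.
    """
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    under = gray < low_thr   # too dark: little recoverable texture
    over = gray > high_thr   # saturated highlights near the light source
    return under, over

# Stand-in frame (random noise) in place of a real endoscopic image.
frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
under, over = exposure_masks(frame)
print(f"underexposed: {under.mean():.1%}, overexposed: {over.mean():.1%}")
```

In practice, such masks are only a diagnostic; the enhancement methods evaluated in the paper correct the affected regions rather than merely detecting them.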
Related papers
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
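As a rough illustration of CNN-based speckle filtering, here is a generic residual denoiser in PyTorch; it is a sketch, not the framework proposed in the paper, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class TinyDespeckler(nn.Module):
    """Generic residual CNN denoiser; not the architecture from the paper."""
    def __init__(self, channels=1, width=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, noisy):
        # Predict the speckle component and subtract it (residual learning).
        return noisy - self.net(noisy)

# Hypothetical forward pass on a batch of single-channel intensity patches.
x = torch.rand(8, 1, 64, 64)
clean_estimate = TinyDespeckler()(x)
```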
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- Leveraging Near-Field Lighting for Monocular Depth Estimation from Endoscopy Videos [12.497782583094281]
Monocular depth estimation in endoscopy videos can enable assistive and robotic surgery to obtain better coverage of the organ and detection of various health issues.
Despite promising progress on mainstream natural-image depth estimation, existing techniques perform poorly on endoscopy images.
In this paper, we utilize the photometric cues, i.e., the light emitted from an endoscope and reflected by the surface, to improve monocular depth estimation.
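To give intuition for this photometric cue: under a simplified near-field Lambertian model (an assumption of this sketch, not the paper's exact formulation), observed brightness falls off with the inverse square of the distance to the endoscope's light source, so relative intensity provides a coarse relative-depth proxy.

```python
import numpy as np

def relative_depth_from_intensity(intensity, albedo=1.0, gain=1.0, eps=1e-6):
    """Coarse depth proxy from a simplified near-field model:
    I ~ gain * albedo * cos(theta) / d**2, with cos(theta) assumed ~1,
    so d ~ 1 / sqrt(I). Illustrative sketch only; the paper's model is richer.
    """
    intensity = np.clip(intensity, eps, None)
    return np.sqrt(gain * albedo / intensity)

# Stand-in grayscale frame normalised to [0, 1].
frame = np.random.rand(480, 640)
depth_proxy = relative_depth_from_intensity(frame)
```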
arXiv Detail & Related papers (2024-03-26T17:52:23Z)
- LightNeuS: Neural Surface Reconstruction in Endoscopy using Illumination Decline [45.49984459497878]
We propose a new approach to 3D reconstruction from sequences of images acquired by monocular endoscopes.
It is based on two key insights. First, endoluminal cavities are watertight, a property naturally enforced by modeling them in terms of a signed distance function.
Second, the scene illumination is variable. It comes from the endoscope's light sources and decays with the inverse of the squared distance to the surface.
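A toy example of the first insight (not the paper's neural representation): the zero level set of a signed distance function is a closed, watertight surface by construction.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside the cavity, positive outside.
    The zero level set is a closed (watertight) surface by construction.
    Toy example; LightNeuS instead represents the surface with a learned SDF.
    """
    return np.linalg.norm(points - center, axis=-1) - radius

# Hypothetical query: points sampled along a camera ray through the cavity.
ray_points = np.linspace([0.0, 0.0, -2.0], [0.0, 0.0, 2.0], 50)
signed_dist = sphere_sdf(ray_points)
surface_hits = np.where(np.diff(np.sign(signed_dist)) != 0)[0]  # ray crosses the surface here
```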
arXiv Detail & Related papers (2023-09-06T06:41:40Z)
- Learning How To Robustly Estimate Camera Pose in Endoscopic Videos [5.073761189475753]
We propose a solution for stereo endoscopes that estimates depth and optical flow to minimize two geometric losses for camera pose estimation.
Most importantly, we introduce two learned adaptive per-pixel weight mappings that balance contributions according to the input image content.
We validate our approach on the publicly available SCARED dataset and introduce a new in-vivo dataset, StereoMIS.
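The idea of learned per-pixel weight maps balancing two geometric terms can be sketched as follows; this is generic PyTorch, not the authors' implementation, and the tensor names, shapes and weighting scheme are assumptions.

```python
import torch

def weighted_geometric_loss(depth_residual, flow_residual, w_depth, w_flow):
    """Combine per-pixel geometric residuals with learned per-pixel weights.

    depth_residual, flow_residual: (B, H, W) residuals of the two geometric terms.
    w_depth, w_flow: (B, H, W) weight maps predicted by a network (values in [0, 1]).
    Illustrative only; the paper defines its own residuals and weighting.
    """
    return (w_depth * depth_residual.abs() + w_flow * flow_residual.abs()).mean()

# Hypothetical tensors standing in for network outputs.
B, H, W = 2, 64, 80
loss = weighted_geometric_loss(torch.rand(B, H, W), torch.rand(B, H, W),
                               torch.sigmoid(torch.randn(B, H, W)),
                               torch.sigmoid(torch.randn(B, H, W)))
```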
arXiv Detail & Related papers (2023-04-17T07:05:01Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Multi-Scale Structural-aware Exposure Correction for Endoscopic Imaging [0.879504058268139]
This contribution presents an extension to the objective function of LMSPEC, a method originally introduced to enhance images from natural scenes.
It is used here for the exposure correction in endoscopic imaging and the preservation of structural information.
Tested on the Endo4IE dataset, the proposed implementation yields an SSIM increase of 4.40% and 4.21% for over- and underexposed images, respectively.
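For reference, SSIM gains of this kind can in principle be measured with a standard implementation such as scikit-image; the arrays below are stand-ins, not the actual Endo4IE ground-truth and enhanced pairs.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Stand-in images in place of an Endo4IE ground-truth / enhanced pair.
reference = np.random.rand(256, 256, 3)
enhanced = np.clip(reference + np.random.normal(0, 0.05, reference.shape), 0, 1)

# channel_axis selects the colour axis (skimage >= 0.19); data_range matches [0, 1] floats.
score = ssim(reference, enhanced, channel_axis=-1, data_range=1.0)
print(f"SSIM: {score:.4f}")
```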
arXiv Detail & Related papers (2022-10-26T21:04:54Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets generated with different types of experimental set-up and associated processing methods are available to facilitate advances in broader applications of OA in clinical settings.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- A Temporal Learning Approach to Inpainting Endoscopic Specularities and Its effect on Image Correspondence [13.25903945009516]
We propose using a temporal generative adversarial network (GAN) to inpaint the hidden anatomy under specularities.
This is achieved using in-vivo data of gastric endoscopy (Hyper-Kvasir) in a fully unsupervised manner.
We also assess the effect of our method in computer vision tasks that underpin 3D reconstruction and camera motion estimation.
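As context for the inpainting task, specular highlights are commonly located first with a brightness/saturation heuristic to obtain the mask of pixels to fill; the sketch below shows that heuristic (with assumed thresholds), not the temporal GAN of the paper, and uses a classical inpainting call only as a non-learned baseline.

```python
import cv2
import numpy as np

def specularity_mask(bgr_frame, sat_thr=60, val_thr=230):
    """Rough specular-highlight mask: bright, desaturated pixels.
    Thresholds are assumptions; the paper inpaints these regions with a temporal GAN.
    """
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    sat, val = hsv[..., 1], hsv[..., 2]
    mask = ((val > val_thr) & (sat < sat_thr)).astype(np.uint8) * 255
    return cv2.dilate(mask, np.ones((5, 5), np.uint8))  # cover highlight borders

# Stand-in frame; in practice this would be a gastric endoscopy image.
frame = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
mask = specularity_mask(frame)
baseline = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)  # classical baseline, for comparison only
```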
arXiv Detail & Related papers (2022-03-31T13:14:00Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern Deep Learning framework is used to correct autonomously the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
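To illustrate the kind of global degradation factors such a network must suppress, a low-quality fundus image can be simulated from a clean one with blur, uneven illumination and noise; this is a simplified simulation for intuition, not the degradation model used by cofe-Net, and all parameters are assumptions.

```python
import cv2
import numpy as np

def degrade_fundus(rgb, blur_sigma=3.0, vignette_strength=0.6, noise_std=8.0):
    """Simulate a low-quality fundus image: blur + uneven illumination + noise.
    Parameters are assumptions for illustration only.
    """
    h, w = rgb.shape[:2]
    blurred = cv2.GaussianBlur(rgb, (0, 0), blur_sigma)
    # Radial illumination falloff (vignetting-like uneven lighting).
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / np.sqrt((w / 2) ** 2 + (h / 2) ** 2)
    illum = 1.0 - vignette_strength * r[..., None]
    noisy = blurred.astype(np.float32) * illum + np.random.normal(0, noise_std, rgb.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Stand-in for a clean fundus photograph.
clean = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)
low_quality = degrade_fundus(clean)
```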
arXiv Detail & Related papers (2020-05-12T08:01:16Z)