On Learning the Invisible in Photoacoustic Tomography with Flat
Directionally Sensitive Detector
- URL: http://arxiv.org/abs/2204.10001v1
- Date: Thu, 21 Apr 2022 09:57:01 GMT
- Title: On Learning the Invisible in Photoacoustic Tomography with Flat
Directionally Sensitive Detector
- Authors: Bolin Pan, Marta M. Betcke
- Abstract summary: In this paper, we focus on the second type of limited data, caused by a varying sensitivity of the sensor to the incoming wavefront direction.
The visible ranges, in image and data domains, are related by the wavefront direction mapping.
We optimally combine fast approximate operators with tailored deep neural network architectures into efficient learned reconstruction methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In photoacoustic tomography (PAT) with flat sensor, we routinely encounter
two types of limited data. The first is due to using a finite sensor and is
especially perceptible if the region of interest is large relative to the
sensor or located farther away from the sensor. In this paper, we focus on the
second type, caused by a varying sensitivity of the sensor to the incoming
wavefront direction, which can be modelled as binary, i.e. by a cone of
sensitivity. Such visibility conditions result, in Fourier domain, in a
restriction of both the image and the data to a bowtie, akin to the one
corresponding to the range of the forward operator. The visible ranges, in
image and data domains, are related by the wavefront direction mapping. We
adapt the wedge restricted Curvelet decomposition, we previously proposed for
the representation of the full PAT data, to separate the visible and invisible
wavefronts in the image. We optimally combine fast approximate operators with
tailored deep neural network architectures into efficient learned
reconstruction methods which reconstruct the visible coefficients, while the
invisible coefficients are learned from a training set of similar data.
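The binary visibility condition described above can be illustrated with a minimal sketch: a bowtie-shaped mask in the 2-D Fourier domain that keeps wavefront directions inside a cone of sensitivity around the detector normal and discards the rest. The function names, the square image assumption, and the 45-degree half-angle are illustrative choices, not the paper's actual operators or parameters.

```python
import numpy as np

def bowtie_mask(n, half_angle_deg):
    """Binary 'bowtie' visibility mask on an n-by-n Fourier grid.

    A frequency (kx, ky) is marked visible when the angle between its
    wavevector and the detector normal (taken as the ky axis here)
    lies within the half-angle of the sensitivity cone.
    """
    k = np.fft.fftshift(np.fft.fftfreq(n))
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Angle between the wavevector and the detector normal (ky axis).
    angle = np.arctan2(np.abs(kx), np.abs(ky))
    mask = angle <= np.deg2rad(half_angle_deg)
    mask[n // 2, n // 2] = True  # always keep the DC component
    return mask

def split_visible_invisible(image, half_angle_deg=45.0):
    """Split a square image into visible/invisible parts via the mask."""
    F = np.fft.fftshift(np.fft.fft2(image))
    mask = bowtie_mask(image.shape[0], half_angle_deg)
    visible = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    invisible = image - visible
    return visible, invisible
```

By construction the two parts sum back to the original image; a plane wave whose wavefront normal lies outside the cone lands entirely in the invisible part, which is the component the learned method must supply from training data.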
Related papers
- Adaptive Domain Learning for Cross-domain Image Denoising [57.4030317607274]
We present a novel adaptive domain learning scheme for cross-domain image denoising.
We use existing data from different sensors (source domain) plus a small amount of data from the new sensor (target domain).
The ADL training scheme automatically removes the data in the source domain that are harmful to fine-tuning a model for the target domain.
Also, we introduce a modulation module that incorporates sensor-specific information (sensor type and ISO) to understand input data for image denoising.
arXiv Detail & Related papers (2024-11-03T08:08:26Z)
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- Fisheye Camera and Ultrasonic Sensor Fusion For Near-Field Obstacle Perception in Bird's-Eye-View [4.536942273206611]
We present the first end-to-end multimodal fusion model tailored for efficient obstacle perception in a bird's-eye-view (BEV) perspective.
Fisheye cameras are frequently employed for comprehensive surround-view perception, including rear-view obstacle localization.
However, the performance of such cameras can significantly deteriorate in low-light conditions, during nighttime, or when subjected to intense sun glare.
arXiv Detail & Related papers (2024-02-01T14:52:16Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and underwater images have evident feature distribution gaps.
Our method, with higher speeds and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- Augmenting Deep Learning Adaptation for Wearable Sensor Data through Combined Temporal-Frequency Image Encoding [4.458210211781739]
We present a novel modified-recurrent plot-based image representation that seamlessly integrates both temporal and frequency domain information.
We evaluate the proposed method using accelerometer-based activity recognition data and a pretrained ResNet model, and demonstrate its superior performance compared to existing approaches.
arXiv Detail & Related papers (2023-07-03T09:29:27Z)
- DopUS-Net: Quality-Aware Robotic Ultrasound Imaging based on Doppler Signal [48.97719097435527]
DopUS-Net combines the Doppler images with B-mode images to increase the segmentation accuracy and robustness of small blood vessels.
An artery re-identification module qualitatively evaluates the real-time segmentation results and automatically optimizes the probe pose for enhanced Doppler images.
arXiv Detail & Related papers (2023-05-15T18:19:29Z)
- A photosensor employing data-driven binning for ultrafast image recognition [0.0]
Pixel binning is a technique widely used in optical image acquisition and spectroscopy.
Here, we push the concept of binning to its limit by combining a large fraction of the sensor elements into a single superpixel.
For a given pattern recognition task, its optimal shape is determined from training data using a machine learning algorithm.
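The superpixel idea in this entry can be sketched in a few lines: a binary mask selects a large fraction of the sensor elements, whose responses are summed into a single scalar readout. The function names and the linear-classifier-weight thresholding rule are illustrative assumptions, not the paper's actual algorithm for learning the optimal shape.

```python
import numpy as np

def binned_readout(frame, superpixel_mask):
    """Collapse the sensor elements selected by the boolean superpixel
    mask into a single scalar readout (their summed response)."""
    return float(np.sum(frame[superpixel_mask]))

def mask_from_weights(weights, threshold=0.0):
    """Illustrative data-driven mask: include every sensor element whose
    learned weight (e.g. from a linear classifier trained on example
    frames) exceeds a threshold."""
    return weights > threshold
```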
arXiv Detail & Related papers (2021-11-20T15:38:39Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Deep Phase Correlation for End-to-End Heterogeneous Sensor Measurements Matching [12.93459392278491]
We present an end-to-end deep phase correlation network (DPCN) to match heterogeneous sensor measurements.
The primary component is a differentiable correlation-based estimator that back-propagates the pose error to learnable feature extractors.
With the interpretable modeling, the network is light-weighted and promising for better generalization.
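The differentiable correlation-based estimator in this entry builds on classical phase correlation, which can be sketched as follows for pure integer translations; this is the textbook FFT formulation, not the DPCN architecture itself, and the function name is an illustrative choice.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation between two equally sized
    images from the peak of the normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12  # keep only the phase
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, a.shape)]
    return tuple(shifts)
```

DPCN's contribution is to make this pipeline differentiable so that the pose error can be back-propagated into learnable feature extractors, allowing the correlation to be computed between heterogeneous sensor measurements rather than raw intensities.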
arXiv Detail & Related papers (2020-08-21T13:42:25Z)
- Cross-Sensor Adversarial Domain Adaptation of Landsat-8 and Proba-V images for Cloud Detection [1.5828697880068703]
The number of Earth observation satellites carrying optical sensors with similar characteristics is constantly growing.
Differences in retrieved radiances lead to significant drops in accuracy, which hampers knowledge and information sharing across sensors.
We propose a domain adaptation to reduce the statistical differences between images of two satellite sensors in order to boost the performance of transfer learning models.
arXiv Detail & Related papers (2020-06-10T16:16:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.