Fast algorithms enabling optimization and deep learning for photoacoustic tomography in a circular detection geometry
- URL: http://arxiv.org/abs/2510.24687v1
- Date: Tue, 28 Oct 2025 17:49:31 GMT
- Title: Fast algorithms enabling optimization and deep learning for photoacoustic tomography in a circular detection geometry
- Authors: Andreas Hauptmann, Leonid Kunyansky, Jenni Poimala,
- Abstract summary: The inverse source problem arising in photoacoustic tomography and in several other coupled-physics modalities is frequently solved by iterative algorithms. New algorithms for numerical evaluation of the forward and adjoint operators are presented. A Python implementation of our algorithms and computational examples is available to the general public.
- Score: 2.5387090319723717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The inverse source problem arising in photoacoustic tomography and in several other coupled-physics modalities is frequently solved by iterative algorithms. Such algorithms are based on the minimization of a certain cost functional. In addition, novel deep learning techniques are currently being investigated to further improve such optimization approaches. All such methods require multiple applications of the operator defining the forward problem, and of its adjoint. In this paper, we present new asymptotically fast algorithms for numerical evaluation of the forward and adjoint operators, applicable in the circular acquisition geometry. For an $(n \times n)$ image, our algorithms compute these operators in $\mathcal{O}(n^2 \log n)$ floating point operations. We demonstrate the performance of our algorithms in numerical simulations, where they are used as an integral part of several iterative image reconstruction techniques: classic variational methods, such as non-negative least squares and total variation regularized least squares, as well as deep learning methods, such as learned primal dual. A Python implementation of our algorithms and computational examples is available to the general public.
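The paper's circular-geometry operators are not reproduced here, but any forward/adjoint pair used inside iterative reconstruction is conventionally validated with a dot-product (adjoint) test. The sketch below, in Python (the language of the paper's public implementation), uses a simple stand-in blur operator as an assumption, not the paper's $\mathcal{O}(n^2 \log n)$ algorithms:

```python
import numpy as np

# Dot-product (adjoint) test: for a linear operator A and candidate
# adjoint A^T, check <A x, y> == <x, A^T y> on random inputs. The
# operator below is an illustrative periodic "blur", not the paper's
# circular-geometry forward operator.

rng = np.random.default_rng(0)
n = 64

def forward(x):
    # Stand-in linear forward operator: average of x and its shift.
    return 0.5 * (x + np.roll(x, 1, axis=0))

def adjoint(y):
    # Adjoint of the blur above: shift in the opposite direction.
    return 0.5 * (y + np.roll(y, -1, axis=0))

x = rng.standard_normal((n, n))
y = rng.standard_normal((n, n))

lhs = np.vdot(forward(x), y)   # <A x, y>
rhs = np.vdot(x, adjoint(y))   # <x, A^T y>
assert abs(lhs - rhs) < 1e-8
```

If the two inner products disagree beyond rounding error, either the forward or the adjoint implementation is wrong, which is why this check is run before plugging the pair into variational or learned reconstruction loops.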
Related papers
- A Binary Optimisation Algorithm for Near-Term Photonic Quantum Processors [32.80760571694025]
We propose a new algorithm for binary optimisation, designed for near-term photonic quantum processors. This variational algorithm uses samples from a quantum optical circuit, which are post-processed using trainable classical bit-flip probabilities. A gradient-based training loop finds progressively better solutions until convergence.
arXiv Detail & Related papers (2025-10-09T14:30:50Z) - A Robust Algorithm for Non-IID Machine Learning Problems with Convergence Analysis [2.4462606119036456]
We propose an improved numerical algorithm for solving minimax problems based on nonsmooth optimization, quadratic programming and an iterative process. Such an algorithm can be widely applied in various fields such as robust optimization, imbalanced learning, etc.
arXiv Detail & Related papers (2025-07-01T14:41:59Z) - Why do we regularise in every iteration for imaging inverse problems? [0.29792392019703945]
Regularisation is commonly used in iterative methods for solving imaging inverse problems.
ProxSkip randomly skips regularisation steps, reducing the computational time of an iterative algorithm without affecting its convergence.
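A minimal sketch of the skipping idea (a simplification, not the paper's exact scheme, which also uses control variates): a proximal-gradient loop in which the regularisation/prox step is applied only with probability p. Soft-thresholding stands in for a generic regulariser, and the problem below is an assumed toy L1-regularised least squares:

```python
import numpy as np

# Sketch of randomly skipped regularisation: gradient step every
# iteration, prox (soft-thresholding) only with probability p, with the
# threshold scaled by 1/p to compensate in expectation.

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = 1.0
b = A @ x_true

lam = 0.1                                  # regularisation weight
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
p = 0.3                                    # probability of a prox step
x = np.zeros(20)
for _ in range(500):
    x = x - step * A.T @ (A @ x - b)       # gradient step (every iteration)
    if rng.random() < p:                   # prox step, usually skipped
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step / p, 0.0)
```

The saving is that the prox step, which can be expensive (e.g. a TV denoising subproblem in imaging), is evaluated on only a fraction p of the iterations.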
arXiv Detail & Related papers (2024-11-01T15:50:05Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - A Compound Gaussian Least Squares Algorithm and Unrolled Network for Linear Inverse Problems [1.283555556182245]
This paper develops two new approaches to solving linear inverse problems.
The first is an iterative algorithm that minimizes a regularized least squares objective function.
The second is a deep neural network that corresponds to an "unrolling" or "unfolding" of the iterative algorithm.
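A minimal sketch of the unrolling idea, assuming a plain L1-regularised least squares in place of the paper's compound Gaussian model: a fixed number of ISTA iterations written as the layers of a feed-forward network, whose per-layer step sizes and thresholds would be the trainable parameters (here only initialised, not trained):

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding, the prox of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista(A, b, steps, thresholds):
    # Each (step, threshold) pair is one "layer" of the unrolled network;
    # training would adjust these per-layer parameters by backpropagation.
    x = np.zeros(A.shape[1])
    for s, t in zip(steps, thresholds):
        x = soft(x - s * A.T @ (A @ x - b), t)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 15))
b = A @ np.eye(15)[0]                     # data from a 1-sparse signal
K = 10                                    # number of unrolled layers
s0 = 1.0 / np.linalg.norm(A, 2) ** 2      # classical ISTA initialisation
x = unrolled_ista(A, b, [s0] * K, [0.01 * s0] * K)
```

Because the layer count K is fixed, the unrolled network trades the open-ended convergence of the iterative algorithm for a fixed, differentiable computation graph.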
arXiv Detail & Related papers (2023-05-18T17:05:09Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions is a core subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z) - Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
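A sketch of the block shrinkage (prox) step that a block iterative shrinkage thresholding algorithm applies, shrinking groups of coefficients jointly by their block norm rather than element by element; the block layout and threshold below are illustrative assumptions, and this is not the learned variant:

```python
import numpy as np

def block_soft(x, block, t):
    # Block soft-thresholding: each block of `block` consecutive entries
    # is scaled by max(1 - t / ||block||, 0), so whole blocks survive or
    # are zeroed together (block sparsity).
    y = x.reshape(-1, block)                     # rows are blocks
    norms = np.linalg.norm(y, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return (scale * y).ravel()

x = np.array([3.0, 4.0, 0.1, 0.1])               # two blocks of size 2
out = block_soft(x, 2, 1.0)
# first block (norm 5) is scaled by 0.8; second block is zeroed
```

In the learned version, the threshold t (and possibly the block structure) becomes a trainable parameter of the unrolled network rather than a hand-tuned constant.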
arXiv Detail & Related papers (2020-12-07T09:27:16Z) - Learning to solve TV regularized problems with unrolled algorithms [18.241062505073234]
Total Variation (TV) is a popular regularization strategy that promotes piece-wise constant signals.
We develop and characterize two approaches to learning such solvers, describe their benefits and limitations, and discuss the regime where they can actually improve over iterative procedures.
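For reference, a minimal sketch of the (1D, anisotropic) total variation these problems penalise; the signals are illustrative:

```python
import numpy as np

def tv(x):
    # Anisotropic 1D total variation: sum of absolute differences
    # between neighbouring samples.
    return np.sum(np.abs(np.diff(x)))

piecewise = np.array([0., 0., 0., 1., 1., 1.])
noisy = piecewise + 0.1 * np.array([1., -1., 1., -1., 1., -1.])
# piecewise-constant signals have low TV, which is why penalising TV
# promotes piece-wise constant reconstructions
assert tv(piecewise) < tv(noisy)
```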
arXiv Detail & Related papers (2020-10-19T14:19:02Z) - Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.