Latent Space Arc Therapy Optimization
- URL: http://arxiv.org/abs/2106.05846v1
- Date: Mon, 24 May 2021 19:06:00 GMT
- Title: Latent Space Arc Therapy Optimization
- Authors: Noah Bice, Mohamad Fakhreddine, Ruiqi Li, Dan Nguyen, Christopher
Kabat, Pamela Myers, Niko Papanikolaou, and Neil Kirby
- Abstract summary: Volumetric modulated arc therapy planning is a challenging problem in high-dimensional, non-convex optimization.
In this paper we address the issue of arc therapy optimization with unsupervised deep learning.
An optimization engine is built on low-dimensional arc representations, enabling faster planning times.
- Score: 1.1186291300604743
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Volumetric modulated arc therapy planning is a challenging problem in
high-dimensional, non-convex optimization. Traditionally, heuristics such as
fluence-map-optimization-informed segment initialization use locally optimal
solutions to begin the search of the full arc therapy plan space from a
reasonable starting point. These routines facilitate arc therapy optimization
such that clinically satisfactory radiation treatment plans can be created in
about 10 minutes. However, current optimization algorithms favor solutions near
their initialization point and are slower than necessary due to plan
overparameterization. In this work, arc therapy overparameterization is
addressed by reducing the effective dimension of treatment plans with
unsupervised deep learning. An optimization engine is then built based on
low-dimensional arc representations which facilitates faster planning times.
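As a minimal sketch of the latent-space idea described above: compress a high-dimensional plan parameterization to a low-dimensional latent code and run the optimizer in that latent space. The decoder here is a fixed random linear map standing in for a trained unsupervised model, and the objective, dimensions, and optimizer are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: plans parameterized by 1,000 machine parameters,
# compressed to an 8-dimensional latent code. The "decoder" is a fixed
# random linear map standing in for a trained autoencoder's decoder.
PLAN_DIM, LATENT_DIM = 1000, 8
decoder_W = rng.normal(size=(PLAN_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def decode(z):
    """Map a latent code to a full-dimensional plan."""
    return decoder_W @ z

def plan_cost(plan, target):
    """Toy stand-in for a dose-based objective (squared deviation)."""
    return np.sum((plan - target) ** 2)

# Optimize in the 8-dimensional latent space instead of the
# 1,000-dimensional plan space; simple random search stands in for
# any gradient-based or evolutionary optimizer.
target = decode(rng.normal(size=LATENT_DIM))  # a reachable target plan
best_z = np.zeros(LATENT_DIM)
best_cost = plan_cost(decode(best_z), target)
for _ in range(2000):
    cand = best_z + rng.normal(size=LATENT_DIM)
    c = plan_cost(decode(cand), target)
    if c < best_cost:
        best_z, best_cost = cand, c

print(round(float(best_cost), 3))
```

The point of the reduction is that each search step moves through 8 coordinates rather than 1,000, which is where the claimed planning-time savings come from.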
Related papers
- SPARE: Symmetrized Point-to-Plane Distance for Robust Non-Rigid Registration [76.40993825836222]
We propose SPARE, a novel formulation that utilizes a symmetrized point-to-plane distance for robust non-rigid registration.
The proposed method greatly improves the accuracy of non-rigid registration problems and maintains relatively high solution efficiency.
arXiv Detail & Related papers (2024-05-30T15:55:04Z)
- Efficient Radiation Treatment Planning based on Voxel Importance [1.9712632719704106]
We propose to reduce the optimization problem by only using a representative subset of informative voxels.
By solving a reduced version of the original optimization problem using this subset, we effectively reduce the problem's size and computational demands.
Empirical experiments on open benchmark data highlight substantially reduced optimization times, up to 50 times faster than the original ones.
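A hedged sketch of the voxel-subset reduction: score each voxel's importance, keep only the most informative ones, and solve the smaller problem. The linear dose model, sizes, and row-norm importance score below are illustrative assumptions, not the paper's actual criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear dose model d = D @ x over 5,000 voxels and
# 200 beamlet weights; sizes are illustrative.
N_VOXELS, N_BEAMLETS = 5000, 200
D = rng.random((N_VOXELS, N_BEAMLETS))
prescribed = rng.random(N_VOXELS)

# Score voxel "importance" by the magnitude of its dose-matrix row
# (a simple stand-in for the paper's criterion) and keep the top 10%.
importance = np.linalg.norm(D, axis=1)
keep = np.argsort(importance)[-N_VOXELS // 10:]

# Solve the reduced least-squares problem on the subset only.
x_reduced, *_ = np.linalg.lstsq(D[keep], prescribed[keep], rcond=None)

# The reduced solution is then evaluated on the full voxel grid.
residual = float(np.linalg.norm(D @ x_reduced - prescribed))
print(round(residual, 3))
```

The reduced system has a tenth of the rows of the original, which is the source of the reported speedups.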
arXiv Detail & Related papers (2024-05-06T21:55:19Z)
- A Particle-based Sparse Gaussian Process Optimizer [5.672919245950197]
We present a new swarm-based framework utilizing the underlying dynamical process of gradient descent.
The biggest advantage of this approach is greater exploration around the current state before deciding on a descent direction.
arXiv Detail & Related papers (2022-11-26T09:06:15Z)
- A Data-Driven Evolutionary Transfer Optimization for Expensive Problems in Dynamic Environments [9.098403098464704]
Data-driven, a.k.a. surrogate-assisted, evolutionary optimization has been recognized as an effective approach for tackling expensive black-box optimization problems.
This paper proposes a simple but effective transfer learning framework to empower data-driven evolutionary optimization to solve dynamic optimization problems.
Experiments on synthetic benchmark test problems and a real-world case study demonstrate the effectiveness of our proposed algorithm.
arXiv Detail & Related papers (2022-11-05T11:19:50Z)
- Towards the optimization of ballistics in proton therapy using genetic algorithms: implementation issues [0.0]
We investigate a new optimization framework based on a genetic algorithm approach.
The proposed optimization routine typically takes into account several thousand spots of fixed size.
The behavior of the proposed genetic algorithm is illustrated in both elementary and clinically-realistic test cases.
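A minimal sketch of a genetic algorithm over spot weights, in the spirit of the entry above. A real plan involves several thousand fixed-size spots; the population size, operators, and toy fitness function below are illustrative assumptions, not the paper's implementation.

```python
import random

random.seed(0)

# Hypothetical GA over proton spot weights in [0, 1]; 50 spots
# keeps the example fast. Fitness is a toy stand-in for a dose
# objective: negative squared deviation from a target pattern.
N_SPOTS, POP_SIZE, GENERATIONS = 50, 30, 40
target = [random.random() for _ in range(N_SPOTS)]

def fitness(weights):
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def crossover(a, b):
    cut = random.randrange(1, N_SPOTS)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(w, rate=0.1):
    # Perturb a fraction of genes with small Gaussian noise, clipped to [0, 1].
    return [min(1.0, max(0.0, x + random.gauss(0, 0.1)))
            if random.random() < rate else x
            for x in w]

pop = [[random.random() for _ in range(N_SPOTS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]  # elitist truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(round(fitness(best), 4))
```

Because selection is elitist, the best individual's fitness is non-decreasing across generations.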
arXiv Detail & Related papers (2022-05-17T12:31:14Z)
- Optimal Parameter-free Online Learning with Switching Cost [47.415099037249085]
Parameter-freeness in online learning refers to the adaptivity of an algorithm with respect to the optimal decision in hindsight.
In this paper, we design such algorithms in the presence of switching cost - the latter penalizes the optimistic updates required by parameter-freeness.
We propose a simple yet powerful algorithm for Online Linear Optimization (OLO) with switching cost, which improves the existing suboptimal regret bound [ZCP22a] to the optimal rate.
arXiv Detail & Related papers (2022-05-13T18:44:27Z)
- Efficient Projection-Free Online Convex Optimization with Membership Oracle [11.745866777357566]
We present a new reduction that turns any algorithm A defined on a Euclidean ball to an algorithm on a constrained set C contained within the ball.
Our reduction requires O(T log T) calls to a Membership Oracle on C after T rounds, and no linear optimization on C is needed.
arXiv Detail & Related papers (2021-11-10T17:22:29Z)
- On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
Two distinctive features of our approach are that projections and optimizations over the entire feasible set are avoided, and that the resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
arXiv Detail & Related papers (2021-07-17T11:45:13Z)
- Resource Planning for Hospitals Under Special Consideration of the COVID-19 Pandemic: Optimization and Sensitivity Analysis [87.31348761201716]
Crises like the COVID-19 pandemic pose a serious challenge to health-care institutions.
BaBSim.Hospital is a tool for capacity planning based on discrete event simulation.
We aim to investigate and optimize the simulation's parameters to improve BaBSim.Hospital.
arXiv Detail & Related papers (2021-05-16T12:38:35Z)
- A feasibility study of a hyperparameter tuning approach to automated inverse planning in radiotherapy [68.8204255655161]
The purpose of this study is to automate the inverse planning process to reduce active planning time while maintaining plan quality.
We investigated the impact of the choice of dose parameters, random and Bayesian search methods, and utility function form on planning time and plan quality.
Using 100 samples was found to produce satisfactory plan quality, and the average planning time was 2.3 hours.
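A hedged sketch of the sampling loop that entry describes: draw dose parameters at random, score each candidate with a utility function, and keep the best of a fixed budget. The parameter names and quadratic toy utility below are illustrative assumptions, not the study's actual planning system.

```python
import random

random.seed(0)

def plan_utility(params):
    # Toy utility: peaks when each dose parameter hits an (unknown)
    # clinically ideal value; a real system would score dose metrics.
    ideal = {"ptv_dose_weight": 0.8, "oar_dose_weight": 0.3}
    return -sum((params[k] - ideal[k]) ** 2 for k in ideal)

best_params, best_u = None, float("-inf")
for _ in range(100):  # the study found ~100 samples sufficient
    cand = {"ptv_dose_weight": random.random(),
            "oar_dose_weight": random.random()}
    u = plan_utility(cand)
    if u > best_u:
        best_params, best_u = cand, u

print(best_params)
```

A Bayesian search would replace the uniform sampling with a surrogate-guided proposal, but the evaluate-and-keep-best loop is the same.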
arXiv Detail & Related papers (2021-05-14T18:37:00Z)
- Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization [73.38702974136102]
Various types of parameter restart schemes have been proposed for accelerated algorithms to facilitate their practical convergence.
In this paper, we propose a proximal gradient algorithm with momentum and flexible parameter restart for solving nonsmooth nonconvex problems.
arXiv Detail & Related papers (2020-02-26T16:06:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.