Lagrangian based A* algorithm for automated reasoning
- URL: http://arxiv.org/abs/2306.16368v1
- Date: Wed, 28 Jun 2023 17:01:03 GMT
- Title: Lagrangian based A* algorithm for automated reasoning
- Authors: Renju Rajan
- Abstract summary: A weightage is introduced in the heuristic part of the A* algorithm to improve its efficiency.
An application of the algorithm is considered for UAV path planning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a modification of the A* algorithm is considered for the shortest
path problem. A weightage is introduced in the heuristic part of the A*
algorithm to improve its efficiency. An application of the algorithm is
considered for UAV path planning, wherein velocity is taken as the weightage to
the heuristic. At the outset, Lagrange's equation from the calculus of variations
was used to identify velocity as the decisive factor for the dynamical system.
This approach would also be useful for improving the efficiency of algorithms
in other problem areas.
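The abstract does not spell out the exact weighting scheme, but the core idea of scaling the A* heuristic by a weight can be illustrated with a minimal sketch. The grid world, the Manhattan-distance heuristic, and the `weight` parameter (standing in for the velocity-derived weightage) below are assumptions for illustration, not the paper's formulation.

```python
import heapq

def weighted_a_star(grid, start, goal, weight=1.0):
    """Minimal weighted A* on a 4-connected grid.

    f(n) = g(n) + weight * h(n), where h is the Manhattan distance.
    In the paper's UAV setting the weight is derived from velocity;
    here `weight` is simply a scalar supplied by the caller.
    """
    rows, cols = len(grid), len(grid[0])

    def h(node):
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    open_heap = [(weight * h(start), 0.0, start)]   # entries are (f, g, node)
    g_score = {start: 0.0}
    parent = {start: None}

    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if g > g_score.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1.0
                if ng < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = ng
                    parent[(nr, nc)] = node
                    heapq.heappush(open_heap, (ng + weight * h((nr, nc)), ng, (nr, nc)))
    return None  # no path found

# Example: free 5x5 grid, heuristic inflated by a (hypothetical) velocity-derived weight of 2.0
grid = [[0] * 5 for _ in range(5)]
print(weighted_a_star(grid, (0, 0), (4, 4), weight=2.0))
```

A weight greater than 1 makes the search greedier toward the goal (fewer expansions, possibly suboptimal paths); a weight of 1 recovers standard A*.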
Related papers
- Optimizing Tensor Contraction Paths: A Greedy Algorithm Approach With Improved Cost Functions [1.3812010983144802]
We introduce a novel approach based on the greedy algorithm by opt_einsum that computes efficient contraction paths in less time.
With our approach, we are even able to compute paths for large problems where modern algorithms fail.
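For reference, the greedy path optimizer this summary refers to is exposed directly by the opt_einsum library; a minimal usage sketch with made-up tensor shapes is shown below. It demonstrates the stock greedy optimizer, not the improved cost functions proposed in the paper.

```python
import numpy as np
import opt_einsum as oe  # pip install opt-einsum

# Three small tensors forming a chain contraction: ij,jk,kl->il
a = np.random.rand(8, 16)
b = np.random.rand(16, 32)
c = np.random.rand(32, 4)

# The 'greedy' optimizer picks the cheapest pairwise contraction at each step.
path, info = oe.contract_path("ij,jk,kl->il", a, b, c, optimize="greedy")
print(path)   # e.g. [(0, 1), (0, 1)] -- the chosen pairwise contraction order
print(info)   # cost summary (FLOP estimate, largest intermediate, etc.)

# Reuse the precomputed path for the actual contraction.
result = oe.contract("ij,jk,kl->il", a, b, c, optimize=path)
```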
arXiv Detail & Related papers (2024-05-08T09:25:39Z)
- Efficient Numerical Algorithm for Large-Scale Damped Natural Gradient Descent [7.368877979221163]
We propose a new algorithm for efficiently solving the damped Fisher matrix in large-scale scenarios where the number of parameters significantly exceeds the number of available samples.
Our algorithm is based on Cholesky decomposition and is generally applicable. Benchmark results show that the algorithm is significantly faster than existing methods.
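As a rough illustration of the Cholesky-based step mentioned above, the sketch below solves a damped empirical-Fisher system with SciPy's Cholesky routines on a toy problem. The matrix sizes, damping value, and use of the empirical Fisher are assumptions for illustration; the paper's large-scale algorithm for the many-parameters/few-samples regime is not reproduced.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)

# Toy setup: per-sample Jacobian (n_samples x n_params), empirical Fisher F = J^T J / n
n_samples, n_params = 32, 64
J = rng.standard_normal((n_samples, n_params))
grad = rng.standard_normal(n_params)

damping = 1e-3
F = J.T @ J / n_samples                      # empirical Fisher (illustrative)
F_damped = F + damping * np.eye(n_params)    # damped Fisher, guaranteed SPD

# Natural-gradient direction d solves (F + damping * I) d = grad
c, lower = cho_factor(F_damped)
d = cho_solve((c, lower), grad)
print(d.shape)
```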
arXiv Detail & Related papers (2023-10-26T16:46:13Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Stochastic Ratios Tracking Algorithm for Large Scale Machine Learning Problems [0.7614628596146599]
We propose a novel algorithm for adaptive step length selection in the classical SGD framework.
Under reasonable conditions, the algorithm produces step lengths in line with well-established theoretical requirements.
We show that the algorithm can generate step lengths comparable to the best step length obtained from manual tuning.
arXiv Detail & Related papers (2023-05-17T06:22:11Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithm that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
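For context, the classic baseline for entropy-regularized optimal transport is the Sinkhorn iteration; the sketch below shows that baseline on a toy 1-D problem (the regularization strength and iteration count are arbitrary choices), not the paper's extragradient method.

```python
import numpy as np

def sinkhorn(mu, nu, C, reg=0.05, n_iter=500):
    """Classic Sinkhorn iterations for entropy-regularized optimal transport.

    mu, nu : source / target histograms (each sums to 1)
    C      : cost matrix
    reg    : entropic regularization strength
    Returns the transport plan and its (unregularized) transport cost.
    """
    K = np.exp(-C / reg)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]
    return P, float((P * C).sum())

# Toy example: two uniform histograms over 5 points on a line
x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2
mu = nu = np.full(5, 0.2)
P, cost = sinkhorn(mu, nu, C)
print(cost)
```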
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- ANA: Ant Nesting Algorithm for Optimizing Real-World Problems [21.95618652596178]
A novel swarm intelligence algorithm called the ant nesting algorithm (ANA) is proposed.
The algorithm is inspired by Leptothorax ants and mimics the behavior of ants searching for positions to deposit grains while building a new nest.
ANA is considered a continuous algorithm that updates the search agent position by adding the rate of change.
arXiv Detail & Related papers (2021-12-04T08:55:06Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
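As a rough illustration of bandit-based algorithm selection, the sketch below runs a plain lower-confidence-bound rule on simulated runtimes; the censored-feedback handling and horizon-independent bookkeeping that are the paper's contribution are not reproduced, and the solver runtimes are made up.

```python
import math
import random

def select_algorithm(counts, mean_runtimes, t, c=1.0):
    """Pick the algorithm with the lowest lower-confidence-bound on runtime.

    counts[i]        : how often algorithm i has been chosen so far
    mean_runtimes[i] : its average observed runtime
    Untried algorithms are chosen first.
    """
    for i, n in enumerate(counts):
        if n == 0:
            return i
    lcb = [m - c * math.sqrt(math.log(t) / n) for m, n in zip(mean_runtimes, counts)]
    return min(range(len(lcb)), key=lcb.__getitem__)

# Toy simulation: 3 hypothetical solvers with different true mean runtimes
true_means = [1.0, 0.6, 1.4]
counts = [0, 0, 0]
means = [0.0, 0.0, 0.0]
for t in range(1, 201):
    i = select_algorithm(counts, means, t)
    runtime = random.expovariate(1.0 / true_means[i])  # observed runtime on this instance
    counts[i] += 1
    means[i] += (runtime - means[i]) / counts[i]
print(counts)  # most pulls should go to the fastest solver (index 1)
```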
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Performance Improvement of Path Planning algorithms with Deep Learning Encoder Model [1.1995939891389429]
A Convolutional Neural Network (CNN) was used to overcome this limitation.
This paper analyzes in depth the performance of this CNN model in eliminating useless paths.
arXiv Detail & Related papers (2020-08-05T17:34:31Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.