Energy Optimized Piecewise Polynomial Approximation Utilizing Modern Machine Learning Optimizers
- URL: http://arxiv.org/abs/2503.09329v2
- Date: Mon, 14 Apr 2025 10:53:19 GMT
- Title: Energy Optimized Piecewise Polynomial Approximation Utilizing Modern Machine Learning Optimizers
- Authors: Hannes Waclawek, Stefan Huber
- Abstract summary: We introduce a framework that minimizes elastic strain energy in cam profiles, leading to smoother motion. Experimental results confirm the effectiveness of this approach, demonstrating its potential to Pareto-efficiently trade approximation quality against energy consumption.
- Score: 0.9208007322096533
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This work explores an extension of machine learning-optimized piecewise polynomial approximation by incorporating energy optimization as an additional objective. Traditional closed-form solutions enable continuity and approximation targets but lack flexibility in accommodating complex optimization goals. By leveraging modern gradient descent optimizers within TensorFlow, we introduce a framework that minimizes elastic strain energy in cam profiles, leading to smoother motion. Experimental results confirm the effectiveness of this approach, demonstrating its potential to Pareto-efficiently trade approximation quality against energy consumption.
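To make the approach concrete, below is a minimal sketch of the kind of optimization the abstract describes: fitting a polynomial segment to sample points with a TensorFlow gradient-descent optimizer while penalizing a strain-energy proxy. The single segment, the polynomial degree, the weight alpha, and the use of the mean squared second derivative as the energy term are illustrative assumptions of this sketch; the paper's actual framework optimizes multiple segments with continuity constraints.

```python
# Sketch only: single-segment polynomial fit with an added energy penalty.
# alpha and the curvature-based energy proxy are assumptions, not the
# authors' exact formulation.
import numpy as np
import tensorflow as tf

x = tf.constant(np.linspace(0.0, 1.0, 50), dtype=tf.float32)
y = tf.sin(2.0 * np.pi * x)                    # toy "cam profile" samples
coeffs = tf.Variable(tf.zeros(8))              # degree-7 polynomial segment
alpha = 1e-4                                   # energy/accuracy trade-off weight

def poly(c, x):
    # f(x) = sum_k c[k] * x^k
    return tf.add_n([c[k] * x**k for k in range(c.shape[0])])

def poly_d2(c, x):
    # f''(x) = sum_{k>=2} k*(k-1) * c[k] * x^(k-2)
    return tf.add_n([float(k * (k - 1)) * c[k] * x**(k - 2)
                     for k in range(2, c.shape[0])])

opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for step in range(3000):
    with tf.GradientTape() as tape:
        mse = tf.reduce_mean((poly(coeffs, x) - y) ** 2)   # approximation target
        energy = tf.reduce_mean(poly_d2(coeffs, x) ** 2)   # strain-energy proxy
        loss = mse + alpha * energy
    grads = tape.gradient(loss, [coeffs])
    opt.apply_gradients(zip(grads, [coeffs]))
```

Sweeping alpha and recording (mse, energy) pairs traces the kind of Pareto front between approximation quality and energy that the abstract refers to.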
Related papers
- Efficient Design of Compliant Mechanisms Using Multi-Objective Optimization [50.24983453990065]
We address the synthesis of a compliant cross-hinge mechanism capable of large angular strokes.
We formulate a multi-objective optimization problem based on kinetostatic performance measures.
arXiv Detail & Related papers (2025-04-23T06:29:10Z)
- Differentiable Optimization for Deep Learning-Enhanced DC Approximation of AC Optimal Power Flow [0.0]
We propose a novel deep learning-based framework for network equivalency that enhances DC-OPF to more closely mimic the behavior of AC-OPF.
The approach utilizes recent advances in differentiable optimization, incorporating a neural network trained to predict adjusted nodal shunt conductances and branch susceptances.
Results demonstrate the framework's ability to significantly improve prediction accuracy, paving the way for more reliable and efficient power systems.
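As a rough illustration of differentiable optimization in this setting, the sketch below makes DC power-flow susceptances learnable and fits them so the DC solution matches given AC angles, with gradients flowing through the linear solve. The 3-bus network, the log-parametrization, and using plain tf.Variables instead of a trained neural network are simplifying assumptions, not the paper's architecture.

```python
# Sketch only: learnable DC power-flow parameters fitted to mimic an AC
# solution. The 3-bus data and direct tf.Variable parametrization are
# assumptions for illustration.
import tensorflow as tf

P = tf.constant([[0.8], [-0.5]])               # net injections at buses 1, 2 (p.u.)
theta_ac = tf.constant([[0.10], [-0.07]])      # "ground truth" AC angles
log_b = tf.Variable(tf.zeros(3))               # susceptances of lines (0-1), (1-2), (0-2)

def dc_angles(log_b):
    b = tf.exp(log_b)                          # keep susceptances positive
    # Reduced B matrix over non-slack buses {1, 2}; bus 0 is the slack.
    B = tf.stack([tf.stack([b[0] + b[1], -b[1]]),
                  tf.stack([-b[1], b[1] + b[2]])])
    return tf.linalg.solve(B, P)               # theta = B^{-1} P

opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for step in range(500):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((dc_angles(log_b) - theta_ac) ** 2)
    grads = tape.gradient(loss, [log_b])
    opt.apply_gradients(zip(grads, [log_b]))   # gradients flow through the solve
```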
arXiv Detail & Related papers (2025-03-22T20:53:53Z)
- Scalable Min-Max Optimization via Primal-Dual Exact Pareto Optimization [66.51747366239299]
We propose a smooth variant of the min-max problem based on the augmented Lagrangian.
The proposed algorithm scales better with the number of objectives than subgradient-based strategies.
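The entry's augmented-Lagrangian construction is not reproduced here, but the underlying idea of smoothing a min-max problem so plain gradient methods apply can be sketched with a generic log-sum-exp surrogate for the max (an assumption of this sketch, not the paper's method):

```python
# Sketch only: smooth surrogate for min_x max_i f_i(x) via log-sum-exp.
import numpy as np

def smooth_max(f_vals, tau=0.05):
    # tau -> 0 recovers the hard max; larger tau gives a smoother surrogate.
    m = np.max(f_vals)
    return m + tau * np.log(np.sum(np.exp((f_vals - m) / tau)))

def objectives(x):
    # Two toy objectives; their max is minimized at the balanced point x = 0.
    return np.array([(x - 1.0) ** 2, (x + 1.0) ** 2])

x, lr, eps = 2.0, 0.05, 1e-6
for _ in range(1000):
    # Central finite difference on the smoothed objective.
    g = (smooth_max(objectives(x + eps)) - smooth_max(objectives(x - eps))) / (2 * eps)
    x -= lr * g
print(x)                                       # near 0.0, where both objectives match
```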
arXiv Detail & Related papers (2025-03-16T11:05:51Z)
- A Fenchel-Young Loss Approach to Data-Driven Inverse Optimization [1.7068557927955381]
We build a connection between inverse optimization and the Fenchel-Young (FY) loss originally designed for structured prediction. This new approach is amenable to efficient gradient-based optimization and is hence much more efficient than existing methods.
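For intuition, a minimal sketch of a Fenchel-Young loss follows, assuming the negative-entropy regularizer, under which the FY loss reduces to softmax cross-entropy with gradient softmax(theta) - y. The connection to inverse optimization made by the paper is not reproduced here.

```python
# Sketch only: FY loss with Omega = negative entropy (a generic choice).
import numpy as np

def fy_loss_entropy(theta, y):
    # L(theta; y) = Omega*(theta) - <theta, y> + Omega(y), where
    # Omega*(theta) = logsumexp(theta) and Omega(y) = sum y log y.
    m = np.max(theta)
    lse = m + np.log(np.sum(np.exp(theta - m)))
    ent = np.sum(y[y > 0] * np.log(y[y > 0]))
    return lse - theta @ y + ent

def fy_grad(theta, y):
    p = np.exp(theta - np.max(theta))
    return p / p.sum() - y                     # softmax(theta) - y

theta = np.zeros(3)
y = np.array([0.0, 1.0, 0.0])                  # observed decision
for _ in range(200):
    theta -= 0.5 * fy_grad(theta, y)           # efficient gradient-based fitting
print(fy_loss_entropy(theta, y))               # decreases toward 0 as theta fits y
```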
arXiv Detail & Related papers (2025-02-22T07:04:32Z)
- A Novel Unified Parametric Assumption for Nonconvex Optimization [53.943470475510196]
Nonconvex optimization is central to machine learning, but the general framework of nonconvexity yields convergence guarantees that are too weak and pessimistic compared to practice; convexity, on the other hand, enables strong guarantees but is too restrictive. We introduce a novel unified parametric assumption for nonconvex optimization algorithms.
arXiv Detail & Related papers (2025-02-17T21:25:31Z)
- Dynamic Anisotropic Smoothing for Noisy Derivative-Free Optimization [0.0]
We propose a novel algorithm that extends the methods of ball smoothing and Gaussian smoothing for noisy derivative-free optimization.
The algorithm dynamically adapts the shape of the smoothing kernel to approximate the Hessian of the objective function around a local optimum.
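A minimal sketch of the baseline this builds on, plain Gaussian smoothing with a fixed isotropic kernel, is shown below; the fixed sigma is exactly what the entry's algorithm replaces with a dynamically adapted anisotropic kernel, which is not shown.

```python
# Sketch only: Gaussian-smoothing gradient estimate from noisy function values.
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x):
    return np.sum(x ** 2) + 0.01 * rng.normal()        # toy noisy objective

def gauss_smoothed_grad(f, x, sigma=0.1, samples=64):
    # grad f_sigma(x) ~ E[ (f(x + sigma*u) - f(x)) / sigma * u ], u ~ N(0, I)
    g = np.zeros_like(x)
    for _ in range(samples):
        u = rng.normal(size=x.shape)
        g += (f(x + sigma * u) - f(x)) / sigma * u     # baseline f(x) reduces variance
    return g / samples

x = np.array([2.0, -1.5])
for _ in range(300):
    x -= 0.05 * gauss_smoothed_grad(noisy_f, x)        # converges near the optimum at 0
```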
arXiv Detail & Related papers (2024-05-02T21:04:20Z)
- Machine Learning Optimized Orthogonal Basis Piecewise Polynomial Approximation [0.9208007322096533]
Piecewise Polynomials (PPs) are utilized in several engineering disciplines, like trajectory planning, to approximate position profiles given in the form of a set of points.
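A minimal single-segment sketch of this setting, fitting an orthogonal (Chebyshev) basis to position samples by least squares, is shown below; the paper itself optimizes piecewise models with gradient descent, which this sketch does not reproduce.

```python
# Sketch only: single-segment Chebyshev least-squares fit to a position profile.
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.linspace(-1.0, 1.0, 100)
y = np.sin(np.pi * x)                          # toy position profile
coeffs = C.chebfit(x, y, deg=7)                # least-squares fit in the Chebyshev basis
approx = C.chebval(x, coeffs)
print("max abs error:", np.max(np.abs(approx - y)))
```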
arXiv Detail & Related papers (2024-03-13T14:34:34Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
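For reference, an OWA objective applies its weights to the sorted outcomes, and the sorting step is what makes it nondifferentiable; a minimal sketch follows, where the decreasing weights favoring the worst outcomes encode fairness and are an illustrative assumption.

```python
# Sketch only: Ordered Weighted Averaging (OWA) of a cost vector.
import numpy as np

def owa(values, weights):
    # Weights apply to values sorted in decreasing order,
    # so weights[0] multiplies the worst (largest) cost.
    return np.sort(values)[::-1] @ weights

costs = np.array([3.0, 1.0, 7.0])
w = np.array([0.6, 0.3, 0.1])                  # emphasize the worst case
print(owa(costs, w))                           # 0.6*7 + 0.3*3 + 0.1*1 = 5.2
```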
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
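A minimal sketch of the unrolling being analyzed, a fixed number of inner gradient steps differentiated end to end with automatic differentiation, is shown below; the paper's folded backward-pass construction itself is not reproduced.

```python
# Sketch only: backpropagation through an unrolled inner solver.
import tensorflow as tf

a = tf.Variable(2.0)                           # outer parameter of the inner problem

with tf.GradientTape() as tape:
    x = tf.constant(0.0)
    for _ in range(50):                        # unrolled inner gradient descent
        inner_grad = 2.0 * (x - a)             # d/dx of the inner objective (x - a)^2
        x = x - 0.1 * inner_grad               # x converges toward argmin = a
    outer_loss = (x - 3.0) ** 2                # outer objective on the inner solution

# Autodiff traverses all 50 iterations; the paper shows this backward pass
# equals an iterative solve of a linear system.
print(tape.gradient(outer_loss, a))
```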
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Tuning-Free Maximum Likelihood Training of Latent Variable Models via Coin Betting [1.8416014644193066]
We introduce two new particle-based algorithms for learning latent variable models via marginal maximum likelihood estimation.
One algorithm is entirely tuning-free.
We validate our algorithms across several numerical experiments, including high-dimensional settings.
arXiv Detail & Related papers (2023-05-24T09:03:55Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
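A minimal sketch of ZO-signGD on a toy objective follows; the random-direction gradient estimator and step size are generic choices, and the toy quadratic stands in for the entry's Guacamol molecular objectives.

```python
# Sketch only: zeroth-order sign-based gradient descent (ZO-signGD).
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return np.sum((x - 1.0) ** 2)              # stand-in for a molecular score

def zo_grad(f, x, mu=0.01, samples=32):
    # Two-point zeroth-order gradient estimate from function queries only.
    g = np.zeros_like(x)
    for _ in range(samples):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / samples

x = np.zeros(5)
for _ in range(200):
    x -= 0.02 * np.sign(zo_grad(f, x))         # sign step: robust to gradient scale
print(x)                                       # approaches the optimum at 1.0
```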
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Multi-step Planning for Automated Hyperparameter Optimization with OptFormer [29.358188163138173]
We build on the recently proposed OptFormer model to make planning via rollouts simple and efficient.
We conduct extensive exploration of different strategies for performing multi-step planning on top of the OptFormer model to highlight its potential for use in constructing non-myopic HPO strategies.
arXiv Detail & Related papers (2022-10-10T19:07:59Z)
- Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares [68.8204255655161]
We consider potentially non-convex optimization problems.
In this paper, we propose an algorithm that achieves a priori computational guarantees close to optimal rates.
arXiv Detail & Related papers (2022-04-11T09:37:04Z)
- NOVAS: Non-convex Optimization via Adaptive Stochastic Search for End-to-End Learning and Control [22.120942106939122]
We propose the use of adaptive stochastic search as a building block for general, non-convex optimization operations.
We benchmark it against two existing alternatives on a synthetic energy-based structured task, and showcase its use in optimal control applications.
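A standalone sketch of adaptive stochastic search in the cross-entropy-method style follows; NOVAS embeds such an update as a differentiable layer inside deep architectures, which this loop does not show, and the population and elite sizes are illustrative assumptions.

```python
# Sketch only: adaptive stochastic search (cross-entropy-method style).
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    return np.sum(x ** 2)                      # toy non-convex stand-in

mean, std = np.full(3, 4.0), np.full(3, 2.0)
for _ in range(40):
    cand = mean + std * rng.normal(size=(64, 3))            # sample a population
    elite = cand[np.argsort([f(c) for c in cand])[:8]]      # keep the best 8
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3  # adapt the distribution
print(mean)                                    # approaches the minimizer at 0
```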
arXiv Detail & Related papers (2020-06-22T03:40:36Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.