Mutual-energy inner product optimization method for constructing feature coordinates and image classification in Machine Learning
- URL: http://arxiv.org/abs/2411.06100v1
- Date: Sat, 09 Nov 2024 07:26:03 GMT
- Title: Mutual-energy inner product optimization method for constructing feature coordinates and image classification in Machine Learning
- Authors: Yuanxiu Wang
- Abstract summary: This paper proposes the mutual-energy inner product optimization method for constructing a feature coordinate system.
Expanding the mutual-energy inner product in a series of eigenfunctions reveals a significant advantage over the Euclidean inner product: low-frequency features are enhanced and high-frequency noise is suppressed.
A stable and efficient sequential linearization algorithm is constructed to solve the optimization model.
- Abstract: As a key task in machine learning, data classification essentially amounts to finding a suitable coordinate system in which to represent the features of samples from different classes. This paper proposes the mutual-energy inner product optimization method for constructing such a feature coordinate system. First, by analyzing the solution space and eigenfunctions of the partial differential equations describing a non-uniform membrane, the mutual-energy inner product is defined. Second, expanding the mutual-energy inner product in a series of eigenfunctions shows its significant advantage over the Euclidean inner product: low-frequency features are enhanced and high-frequency noise is suppressed. Then, a mutual-energy inner product optimization model is built to extract data features, and the convexity and concavity properties of its objective function are discussed. Next, by combining the finite element method, a stable and efficient sequential linearization algorithm is constructed to solve the optimization model. This algorithm only solves linear systems with symmetric positive definite matrices and linear programs with few constraints, and its vectorized implementation is discussed. Finally, the mutual-energy inner product optimization method is used to construct feature coordinates, and multi-class Gaussian classifiers are trained on the MNIST training set, achieving good prediction results on the MNIST test set.
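The claimed low-frequency enhancement of an eigenfunction-weighted inner product can be illustrated with a toy 1-D sketch. This is not the paper's non-uniform membrane construction: the Dirichlet Laplacian eigenfunctions on (0, 1) and the 1/λ_k modal weighting below are illustrative assumptions, not the paper's actual definition.

```python
import numpy as np

def modal_inner(f, g, n_modes=64):
    """Toy 1-D 'mutual-energy'-style inner product on (0, 1).

    Expands f and g in the Dirichlet Laplacian eigenfunctions
    phi_k(x) = sqrt(2)*sin(k*pi*x), with eigenvalues lam_k = (k*pi)^2,
    and weights each modal product by 1/lam_k, so high-frequency
    modes (large lam_k) are damped relative to the Euclidean case.
    """
    n = len(f)
    x = (np.arange(n) + 0.5) / n                          # midpoint grid
    k = np.arange(1, n_modes + 1)
    phi = np.sqrt(2.0) * np.sin(np.outer(k, np.pi * x))   # (n_modes, n)
    fk = phi @ f / n                                      # modal coefficients
    gk = phi @ g / n
    lam = (k * np.pi) ** 2
    return float(np.sum(fk * gk / lam))

x = (np.arange(256) + 0.5) / 256
signal = np.sin(np.pi * x)        # low-frequency mode (k = 1)
noise = np.sin(40 * np.pi * x)    # high-frequency mode (k = 40)

# Both directions carry identical Euclidean energy (mean square = 0.5),
# but the weighted product shrinks the noisy one by lam_1/lam_40 ~ 1/1600.
ratio_euclid = np.mean(noise * noise) / np.mean(signal * signal)   # ~ 1
ratio_energy = modal_inner(noise, noise) / modal_inner(signal, signal)  # ~ 1/1600
```

Under the Euclidean inner product the two directions are indistinguishable in norm; under the modal weighting the high-frequency direction is suppressed by three orders of magnitude, which is the qualitative effect the abstract describes.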
Related papers
- Operator SVD with Neural Networks via Nested Low-Rank Approximation [19.562492156734653]
This paper proposes a new optimization framework based on the low-rank approximation characterization of a truncated singular value decomposition.
A new technique called *nesting* is introduced for learning the top-$L$ singular values and singular functions in the correct order.
We demonstrate the effectiveness of the proposed framework for use cases in computational physics and machine learning.
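The idea of recovering the top-$L$ singular triplets in decreasing order can be sketched with plain linear algebra: power iteration plus deflation, where each component is extracted only after the larger ones are removed. This is a hypothetical stand-in, not the paper's method, which parameterizes singular functions with neural networks and a nested low-rank objective.

```python
import numpy as np

def top_l_svd_deflation(A, L, iters=300):
    """Recover the top-L singular triplets of A, one at a time and in
    decreasing order, via power iteration followed by deflation."""
    A = np.array(A, dtype=float)
    rng = np.random.default_rng(0)
    us, ss, vs = [], [], []
    for _ in range(L):
        v = rng.normal(size=A.shape[1])
        v /= np.linalg.norm(v)
        for _ in range(iters):
            u = A @ v
            u /= np.linalg.norm(u)
            v = A.T @ u
            s = np.linalg.norm(v)
            v /= s
        us.append(u)
        ss.append(s)
        vs.append(v)
        A -= s * np.outer(u, v)   # deflate the recovered rank-1 component
    return np.stack(us, axis=1), np.array(ss), np.stack(vs, axis=1)
```

Deflation guarantees the "correct order" property in this sketch: after subtracting the leading rank-1 term, the next power iteration converges to the second-largest singular pair, and so on.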
arXiv Detail & Related papers (2024-02-06T03:06:06Z)
- Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z)
- Structured model selection via $\ell_1-\ell_2$ optimization [1.933681537640272]
We develop a learning approach for identifying structured dynamical systems.
We show that if the set of candidate functions forms a bounded system, the recovery is stable and the recovery error is bounded.
arXiv Detail & Related papers (2023-05-27T12:51:26Z)
- An inexact LPA for DC composite optimization and application to matrix completions with outliers [5.746154410100363]
This paper concerns a class of composite optimization problems.
By leveraging the composite structure, we provide a condition for the potential function to have the KL property of $1/2$ at the iterate sequence.
arXiv Detail & Related papers (2023-03-29T16:15:34Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth components.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Optimal prediction for kernel-based semi-functional linear regression [5.827901300943599]
We establish minimax optimal rates of convergence for prediction in a semi-functional linear model.
Our results reveal that the smoother functional component can be learned with the minimax rate as if the nonparametric component were known.
arXiv Detail & Related papers (2021-10-29T04:55:44Z)
- On the Efficient Implementation of the Matrix Exponentiated Gradient Algorithm for Low-Rank Matrix Optimization [26.858608065417663]
Convex optimization over the spectrahedron has important applications in machine learning, signal processing and statistics.
We propose efficient implementations of MEG, which are tailored for optimization with low-rank matrices, and only use a single low-rank SVD on each iteration.
We also provide efficiently-computable certificates for the correct convergence of our methods.
arXiv Detail & Related papers (2020-12-18T19:14:51Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- Multi-View Spectral Clustering with High-Order Optimal Neighborhood Laplacian Matrix [57.11971786407279]
Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data.
This paper proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix.
Our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of the linear combination of both the first-order and high-order base Laplacian matrices.
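Combining first-order and high-order (multi-hop) similarity terms across views before a spectral cut can be sketched as follows. The fixed weights `beta` and `gamma` and the use of the squared similarity as the "high-order" term are simplifying assumptions; the paper instead *learns* the optimal neighborhood combination.

```python
import numpy as np

def multiview_high_order_labels(views, beta=0.5, gamma=0.3):
    """Toy two-way spectral cut from several similarity views.

    Builds a combined similarity as a fixed linear combination of each
    view's first-order matrix W and its second-order term W @ W
    (2-hop neighborhoods), then cuts by the sign of the Fiedler vector
    of the normalized Laplacian of the combined matrix.
    """
    n = views[0].shape[0]
    S = np.zeros((n, n))
    for W in views:
        W2 = W @ W
        S += beta * W + gamma * W2 / W2.max()   # rescale the 2-hop term
    d = S.sum(axis=1)
    Dm = np.diag(1.0 / np.sqrt(d))
    L = np.eye(n) - Dm @ S @ Dm                 # normalized Laplacian
    _, vecs = np.linalg.eigh(L)                 # ascending eigenvalues
    return (vecs[:, 1] > 0).astype(int)         # sign of the Fiedler vector
```

With two block-structured views the combined Laplacian's Fiedler vector is constant on each block, so its sign recovers the two clusters; the point of the paper is to replace the fixed `beta`/`gamma` mix with an optimized one.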
arXiv Detail & Related papers (2020-08-31T12:28:40Z)
- Clustering Binary Data by Application of Combinatorial Optimization Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
From a set of 16 data tables generated by a quasi-Monte Carlo experiment, a comparison is performed for one of the aggregation criteria, using L1 dissimilarity, against hierarchical clustering and a version of k-means: partitioning around medoids (PAM).
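The PAM comparison point can be sketched minimally: k-medoids under L1 dissimilarity, which on 0/1 data coincides with the Hamming distance. The random initialization and simple medoid-update rule below are illustrative simplifications, not the paper's exact heuristics.

```python
import numpy as np

def pam_l1(X, k=2, iters=20, seed=0):
    """Minimal PAM-style k-medoids for binary rows under L1 dissimilarity
    (on 0/1 data, L1 distance equals the Hamming distance).  Alternates
    nearest-medoid assignment with re-choosing each cluster's medoid as
    the member minimizing the summed within-cluster L1 distance."""
    rng = np.random.default_rng(seed)
    D = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)  # pairwise L1
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = []
        for j in range(k):
            members = np.flatnonzero(labels == j)
            within = D[np.ix_(members, members)].sum(axis=0)
            new.append(members[np.argmin(within)])
        new = np.array(new)
        if np.array_equal(np.sort(new), np.sort(medoids)):
            break                      # medoids stable: converged
        medoids = new
    labels = np.argmin(D[:, medoids], axis=1)
    return labels, medoids
```

Because medoids are always data points, the method needs only the precomputed dissimilarity matrix, which is why PAM pairs naturally with non-Euclidean measures such as L1 on binary tables.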
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.