Learning Geometric-Aware Quadrature Rules for Functional Minimization
- URL: http://arxiv.org/abs/2508.05445v1
- Date: Thu, 07 Aug 2025 14:37:23 GMT
- Title: Learning Geometric-Aware Quadrature Rules for Functional Minimization
- Authors: Costas Smaragdakis
- Abstract summary: We introduce QuadrANN, a Graph Neural Network (GNN) architecture designed to learn optimal quadrature weights directly from the underlying geometry of point clouds. This is critical for the optimization of energy functionals, leading to improved deep learning-based variational solvers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate numerical integration over non-uniform point clouds is a challenge for modern mesh-free machine learning solvers for partial differential equations (PDEs) based on variational principles. While standard Monte Carlo (MC) methods cannot handle a non-uniform point cloud, modern neural network architectures can process permutation-invariant inputs and thereby create quadrature rules for any point cloud. In this work, we introduce QuadrANN, a Graph Neural Network (GNN) architecture designed to learn optimal quadrature weights directly from the underlying geometry of point clouds. The model exploits a deep message-passing scheme in which the initial layer encodes rich local geometric features from absolute and relative positions as well as an explicit local density measure, while the following layers incorporate a global context vector. These architectural choices allow QuadrANN to generate a data-driven quadrature rule that is permutation-invariant and adaptive to both local point density and the overall domain shape. We test our methodology on a series of challenging cases, including integration over convex and non-convex domains and estimating the solution of the Heat and Fokker-Planck equations. Across all tests, QuadrANN reduces the variance of the integral estimate compared to standard Quasi-Monte Carlo methods by warping the point clouds to be denser in areas where the integrands exhibit singularities. This enhanced stability is essential for the optimization of energy functionals, leading to improved deep learning-based variational solvers.
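To make the architecture concrete, below is a minimal, hypothetical sketch of a message-passing network that maps a point cloud to positive, permutation-invariant quadrature weights. The k-NN graph, layer sizes, density proxy, and volume normalization are our assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class QuadratureGNN(nn.Module):
    def __init__(self, dim=2, hidden=64, k=8, layers=3):
        super().__init__()
        self.k = k
        # Encoder sees absolute position, mean relative offset, local density.
        self.encode = nn.Sequential(nn.Linear(dim + dim + 1, hidden), nn.SiLU())
        self.mp = nn.ModuleList(
            [nn.Sequential(nn.Linear(3 * hidden, hidden), nn.SiLU()) for _ in range(layers)]
        )
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, volume=1.0):
        # x: (N, dim) point cloud; returns (N,) positive quadrature weights.
        d = torch.cdist(x, x)                                    # pairwise distances
        knn = d.topk(self.k + 1, largest=False).indices[:, 1:]   # (N, k) neighbours
        rel = x[knn] - x[:, None, :]                             # relative offsets
        density = d.gather(1, knn).mean(dim=1, keepdim=True)     # local density proxy
        h = self.encode(torch.cat([x, rel.mean(dim=1), density], dim=-1))
        for layer in self.mp:
            msg = h[knn].mean(dim=1)                             # aggregate neighbour features
            ctx = h.mean(dim=0, keepdim=True).expand_as(h)       # global context vector
            h = h + layer(torch.cat([h, msg, ctx], dim=-1))      # residual update
        w = torch.nn.functional.softplus(self.head(h)).squeeze(-1)
        return w * (volume / w.sum())                            # weights sum to domain volume

# Usage: estimate an integral over [0,1]^2 from scattered points (weights untrained here).
pts = torch.rand(256, 2)
w = QuadratureGNN()(pts, volume=1.0)
est = (w * (pts ** 2).sum(dim=1)).sum()   # approximates the integral of x^2 + y^2, which is 2/3
```

Permutation invariance follows from using only symmetric aggregations (neighbour means, a global mean); training would tune the weights against integrals with known values.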
Related papers
- Point Cloud Denoising With Fine-Granularity Dynamic Graph Convolutional Networks [58.050130177241186]
Noise perturbations often corrupt 3-D point clouds, hindering downstream tasks such as surface reconstruction, rendering, and further processing.
This paper introduces fine-granularity dynamic graph convolutional networks, called GDGCN, a novel approach to denoising 3-D point clouds.
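As a rough illustration of what makes such graph convolutions "dynamic" (a sketch under our own assumptions, not the GDGCN code), the k-NN graph below is rebuilt in feature space on every call, so neighbourhoods adapt as the features evolve:

```python
import torch
import torch.nn as nn

class DynamicEdgeConv(nn.Module):
    """One EdgeConv-style layer whose k-NN graph is recomputed per call."""
    def __init__(self, in_ch, out_ch, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_ch, out_ch), nn.ReLU())

    def forward(self, f):                      # f: (N, in_ch) per-point features
        idx = torch.cdist(f, f).topk(self.k + 1, largest=False).indices[:, 1:]
        centre = f[:, None].expand(-1, self.k, -1)        # (N, k, in_ch)
        edge = torch.cat([centre, f[idx] - centre], -1)   # centre + relative feature
        return self.mlp(edge).max(dim=1).values           # max-aggregate messages

pts = torch.rand(512, 3)
feats = DynamicEdgeConv(3, 64)(pts)  # a denoising head would map these to per-point offsets
```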
arXiv Detail & Related papers (2024-11-21T14:19:32Z) - OCMG-Net: Neural Oriented Normal Refinement for Unstructured Point Clouds [18.234146052486054]
We present a robust refinement method for estimating oriented normals from unstructured point clouds.
Our framework incorporates sign orientation and data augmentation in the feature space to refine the initial oriented normals.
To address the issue of noise-caused direction inconsistency existing in previous approaches, we introduce a new metric called the Chamfer Normal Distance.
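A rough numpy sketch of the Chamfer Normal Distance idea (names and details are ours, not the paper's): evaluate each predicted normal against the ground-truth normal of the nearest clean point, rather than the normal annotated at the noisy point itself, which decouples the metric from noise-displaced correspondences.

```python
import numpy as np

def chamfer_normal_distance(noisy_pts, pred_normals, clean_pts, clean_normals):
    # Nearest clean point for every noisy point (normals assumed unit-length).
    d = np.linalg.norm(noisy_pts[:, None, :] - clean_pts[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    # Oriented angular error against the matched clean normal, in degrees.
    cos = np.sum(pred_normals * clean_normals[nearest], axis=1)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()
```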
arXiv Detail & Related papers (2024-09-02T09:30:02Z) - Geometrically Inspired Kernel Machines for Collaborative Learning Beyond Gradient Descent [36.59087823764832]
This paper develops a novel mathematical framework for collaborative learning by means of geometrically inspired kernel machines.
For classification problems, this approach allows us to learn bounded geometric structures around given data points.
arXiv Detail & Related papers (2024-07-05T08:20:27Z) - Message-Passing Monte Carlo: Generating low-discrepancy point sets via Graph Neural Networks [64.39488944424095]
We present the first machine learning approach to generating low-discrepancy point sets, named Message-Passing Monte Carlo (MPMC) points.
MPMC points are empirically shown to be either optimal or near-optimal with respect to the discrepancy for low dimension and small number of points.
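To ground the term "discrepancy", here is Warnock's closed form for the L2 star discrepancy of a point set in [0,1]^d; a differentiable measure of this kind is what a point-generating network can be trained to minimize. This helper is our illustration, not the MPMC training loss.

```python
import numpy as np

def l2_star_discrepancy(x):
    # Warnock's formula: the L2 norm of the local discrepancy over all
    # anchored boxes [0, t), computed in closed form.
    n, d = x.shape
    term1 = (1.0 / 3.0) ** d
    term2 = (2.0 / n) * np.prod((1.0 - x ** 2) / 2.0, axis=1).sum()
    pair_max = np.maximum(x[:, None, :], x[None, :, :])     # (n, n, d)
    term3 = np.prod(1.0 - pair_max, axis=2).sum() / n ** 2
    return np.sqrt(term1 - term2 + term3)

rng = np.random.default_rng(0)
print(l2_star_discrepancy(rng.random((128, 2))))  # i.i.d. points score worse than low-discrepancy sets
```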
arXiv Detail & Related papers (2024-05-23T21:17:20Z) - Operator Learning with Neural Fields: Tackling PDEs on General Geometries [15.65577053925333]
Machine learning approaches for solving partial differential equations require learning mappings between function spaces.
The new CORAL method leverages coordinate-based networks to solve PDEs on general geometries.
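For context, a coordinate-based network ("neural field") is just an MLP queried directly at spatial coordinates, so it needs no mesh and works on arbitrary geometries. The architecture below is a generic assumption, not CORAL itself.

```python
import torch
import torch.nn as nn

# u_theta: R^2 -> R, one field value for every queried coordinate.
field = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

xy = torch.rand(1000, 2)   # query points may be sampled from any domain shape
u = field(xy)              # (1000, 1) predicted field values, no mesh required
```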
arXiv Detail & Related papers (2023-06-12T17:52:39Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
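The claim that message passing subsumes finite differences can be checked directly: on a 1-D chain graph, summing the messages (u_j - u_i) from a node's two neighbours reproduces the second-difference stencil u_{i-1} - 2u_i + u_{i+1}. A toy verification, not the paper's solver:

```python
import numpy as np

n = 64
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.sin(np.pi * x)

# Message passing on a chain: each interior node sums (u_neighbour - u_self).
messages = (u[:-2] - u[1:-1]) + (u[2:] - u[1:-1])
laplacian = messages / h ** 2                    # approximates u'' = -pi^2 sin(pi x)

err = np.max(np.abs(laplacian + np.pi ** 2 * np.sin(np.pi * x[1:-1])))
print(err)                                       # O(h^2), a few 1e-3 here
```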
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Differentiable Convolution Search for Point Cloud Processing [114.66038862207118]
We propose a novel differentiable convolution search paradigm on point clouds.
It can work in a purely data-driven manner and thus is capable of auto-creating a group of suitable convolutions for geometric shape modeling.
We also propose a joint optimization framework for simultaneous search of internal convolution and external architecture, and introduce epsilon-greedy algorithm to alleviate the effect of discretization error.
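The epsilon-greedy step mentioned above, in a hedged sketch (candidate operations and scores are placeholders): with probability epsilon pick a random candidate, otherwise pick the highest-scoring one, which keeps some exploration while the architecture scores are still noisy from discretization.

```python
import random

def epsilon_greedy_pick(scores, epsilon=0.1):
    # scores: dict mapping candidate convolution name -> learned score
    if random.random() < epsilon:
        return random.choice(list(scores))   # explore a random candidate
    return max(scores, key=scores.get)       # exploit the current best

ops = {"conv_knn": 0.42, "conv_ball": 0.35, "identity": 0.18}
print(epsilon_greedy_pick(ops))
```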
arXiv Detail & Related papers (2021-08-29T14:42:03Z) - Cogradient Descent for Dependable Learning [64.02052988844301]
We propose dependable learning based on the Cogradient Descent (CoGD) algorithm to address the bilinear optimization problem.
CoGD is introduced to solve bilinear problems when one variable carries a sparsity constraint.
It can also be used to decompose the association of features and weights, which further generalizes our method to better train convolutional neural networks (CNNs).
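For background on the bilinear setting (this sketches the naive coupled gradient step, not the CoGD algorithm, whose contribution is precisely a smarter coupling of the two updates):

```python
import numpy as np

# Minimise ||A x - y||^2 jointly in A and x: a bilinear problem, since the
# residual is linear in each variable but not in both together.
rng = np.random.default_rng(0)
y = rng.normal(size=8)
A = rng.normal(size=(8, 4))
x = rng.normal(size=4)

lr = 0.01
for _ in range(2000):
    r = A @ x - y                 # shared residual couples the two variables
    A -= lr * np.outer(r, x)      # gradient of 0.5*||r||^2 w.r.t. A
    x -= lr * (A.T @ r)           # gradient of 0.5*||r||^2 w.r.t. x

print(np.linalg.norm(A @ x - y))  # residual shrinks toward zero
```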
arXiv Detail & Related papers (2021-06-20T04:28:20Z) - Neural Control Variates [71.42768823631918]
We show that a set of neural networks can face the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
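For readers new to control variates, the classical version that the neural variant generalizes: subtract a correlated function g with a known integral G, estimate only the lower-variance difference by Monte Carlo, then add G back. The choice of f and g below is our toy example.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(10_000)

f = np.exp(x)          # target: integral of e^x over [0,1] is e - 1
g = 1.0 + x            # control variate: known integral G = 1.5, correlated with f
G = 1.5

plain = f.mean()                 # plain Monte Carlo estimate
cv = (f - g).mean() + G          # same expectation, lower variance
print(plain, cv, np.e - 1)
print(f.var(), (f - g).var())    # variance drops several-fold
```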
arXiv Detail & Related papers (2020-06-02T11:17:55Z) - Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
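The classical baseline that a learned subdivision improves on is midpoint subdivision: split every triangle into four by inserting edge midpoints; a neural model would then predict displacements for the new vertices instead of leaving them at the midpoints. A self-contained sketch (our code, not the paper's):

```python
import numpy as np

def midpoint_subdivide(verts, faces):
    verts = [np.asarray(v, dtype=float) for v in verts]
    edge_mid, new_faces = {}, []

    def mid(a, b):
        # Cache one midpoint vertex per undirected edge.
        key = (min(a, b), max(a, b))
        if key not in edge_mid:
            edge_mid[key] = len(verts)
            verts.append((verts[a] + verts[b]) / 2.0)
        return edge_mid[key]

    for a, b, c in faces:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), new_faces

v, f = midpoint_subdivide([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(v.shape, len(f))   # (6, 3) vertices, 4 faces
```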
arXiv Detail & Related papers (2020-05-04T20:03:21Z)