Polaritonic Machine Learning for Graph-based Data Analysis
- URL: http://arxiv.org/abs/2507.10415v1
- Date: Mon, 14 Jul 2025 15:57:22 GMT
- Title: Polaritonic Machine Learning for Graph-based Data Analysis
- Authors: Yuan Wang, Stefano Scali, Oleksandr Kyriienko
- Abstract summary: Photonic and polaritonic systems offer a fast and efficient platform for accelerating machine learning (ML) through physics-based computing. We show how lattices of condensates can efficiently embed relational and topological information from point cloud datasets. This information is then incorporated into a pattern recognition workflow based on convolutional neural networks (CNNs).
- Score: 25.723282688367924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photonic and polaritonic systems offer a fast and efficient platform for accelerating machine learning (ML) through physics-based computing. To gain a computational advantage, however, polaritonic systems must: (1) exploit features that specifically favor nonlinear optical processing; (2) address problems that are computationally hard and depend on these features; (3) integrate photonic processing within broader ML pipelines. In this letter, we propose a polaritonic machine learning approach for solving graph-based data problems. We demonstrate how lattices of condensates can efficiently embed relational and topological information from point cloud datasets. This information is then incorporated into a pattern recognition workflow based on convolutional neural networks (CNNs), leading to significantly improved learning performance compared to physics-agnostic methods. Our extensive benchmarking shows that photonic machine learning achieves over 90% accuracy for Betti number classification and clique detection tasks - a substantial improvement over the 35% accuracy of bare CNNs. Our study introduces a distinct way of using photonic systems as fast tools for feature engineering, while building on top of high-performing digital machine learning.
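As a rough illustration of the kind of topological feature the abstract refers to, the zeroth Betti number of a point cloud (its number of connected components at a given length scale) can be computed digitally from an epsilon-neighborhood graph with a union-find pass. This stdlib-only sketch is our own illustration of the feature, not the authors' polaritonic embedding:

```python
from itertools import combinations

def betti_0(points, eps):
    """Zeroth Betti number (number of connected components) of the
    epsilon-neighborhood graph built over a 2D point cloud."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a, b in combinations(range(len(points)), 2):
        (xa, ya), (xb, yb) = points[a], points[b]
        if (xa - xb) ** 2 + (ya - yb) ** 2 <= eps ** 2:
            parent[find(a)] = find(b)  # merge the two clusters

    return len({find(i) for i in range(len(points))})

# Two well-separated pairs of points -> two connected components
cloud = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(betti_0(cloud, eps=0.5))  # -> 2
```

Higher Betti numbers (loops, voids) need a full simplicial-complex computation, which is where hardware acceleration of the feature-extraction step becomes attractive.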
Related papers
- Learning-Based Finite Element Methods Modeling for Complex Mechanical Systems [1.6977525619006286]
Simulating complex mechanical systems is important in many real-world applications.
Recent CNN- and GNN-based simulation models still struggle to effectively represent complex mechanical simulations.
In this paper, we propose a novel two-level mesh graph network.
arXiv Detail & Related papers (2024-08-30T15:56:50Z)
- Joint Feature and Differentiable $k$-NN Graph Learning using Dirichlet Energy [103.74640329539389]
We propose a deep FS method that simultaneously conducts feature selection and differentiable $k$-NN graph learning.
We employ Optimal Transport theory to address the non-differentiability issue of learning $k$-NN graphs in neural networks.
We validate the effectiveness of our model with extensive experiments on both synthetic and real-world datasets.
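The differentiable relaxation is that paper's contribution; the underlying k-NN graph it learns can be written plainly. This brute-force O(n² log n) sketch is our own illustration, not the paper's method:

```python
def knn_graph(points, k):
    """Directed k-nearest-neighbour graph: node i points to the
    indices of its k closest other points (squared Euclidean distance)."""
    edges = {}
    for i, p in enumerate(points):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(p, q)), j)
            for j, q in enumerate(points) if j != i
        )
        edges[i] = [j for _, j in dists[:k]]
    return edges

pts = [(0.0,), (1.0,), (1.1,), (5.0,)]
print(knn_graph(pts, 1))  # -> {0: [1], 1: [2], 2: [1], 3: [2]}
```

The non-differentiability the paper addresses comes from the hard `sorted(...)[:k]` selection above, which has zero gradient almost everywhere.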
arXiv Detail & Related papers (2023-05-21T08:15:55Z)
- Unlearning Graph Classifiers with Limited Data Resources [39.29148804411811]
Controlled data removal is becoming an important feature of machine learning models for data-sensitive Web applications.
It is still largely unknown how to perform efficient machine unlearning of graph neural networks (GNNs).
Our main contribution is the first known nonlinear approximate graph unlearning method based on GSTs.
Our second contribution is a theoretical analysis of the computational complexity of the proposed unlearning mechanism.
Our third contribution is a set of extensive simulation results showing that, compared to complete retraining of GNNs after each removal request, the new GST-based approach offers, on average, a 10.38x speed-up.
arXiv Detail & Related papers (2022-11-06T20:46:50Z)
- Scalable algorithms for physics-informed neural and graph networks [0.6882042556551611]
Physics-informed machine learning (PIML) has emerged as a promising new approach for simulating complex physical and biological systems.
In PIML, we can train such networks from additional information obtained by employing the physical laws and evaluating them at random points in the space-time domain.
We review some of the prevailing trends in embedding physics into machine learning, using physics-informed neural networks (PINNs) based primarily on feed-forward neural networks and automatic differentiation.
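The core PINN idea is a loss term that penalizes violation of the governing equation at collocation points. A toy stdlib-only sketch (our own illustration, using a finite-difference derivative in place of the automatic differentiation the review discusses) evaluates that residual for the ODE u'(x) = u(x):

```python
import math

def physics_residual(u, xs, h=1e-5):
    """Mean squared residual of the ODE u'(x) = u(x) at collocation
    points xs, with u'(x) approximated by central differences.
    In a real PINN, u is a neural network and this term joins the
    training loss alongside the data-fitting term."""
    res = 0.0
    for x in xs:
        du = (u(x + h) - u(x - h)) / (2 * h)
        res += (du - u(x)) ** 2
    return res / len(xs)

xs = [0.1 * i for i in range(10)]
print(physics_residual(math.exp, xs))  # ~0: exp satisfies u' = u
print(physics_residual(math.sin, xs))  # clearly nonzero: sin does not
```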
arXiv Detail & Related papers (2022-05-16T15:46:11Z)
- All-optical graph representation learning using integrated diffractive photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed the diffractive graph neural network (DGNN).
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Basic cross-platform tensor frameworks and script language engines do not by themselves supply the procedures and pipelines needed to deploy machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all these requirements while still using such basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Scalable Graph Embedding Learning On A Single GPU [18.142879223260785]
We introduce a hybrid CPU-GPU framework that addresses the challenges of learning embedding of large-scale graphs.
We show that our system can scale training to datasets an order of magnitude larger than a single machine's total memory capacity.
arXiv Detail & Related papers (2021-10-13T19:09:33Z)
- A Framework for Fast Scalable BNN Inference using Googlenet and Transfer Learning [0.0]
This thesis aims to achieve high accuracy in object detection with good real-time performance.
The binarized neural network has shown high performance in various vision tasks such as image classification, object detection, and semantic segmentation.
Results show that the accuracy of objects detected by the transfer-learning method is higher than that of existing methods.
arXiv Detail & Related papers (2021-01-04T06:16:52Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston house-price prediction and the training of a two-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
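Digitally, the "one computational step" corresponds to a closed-form solve. A pure-Python sketch of one-step least squares for a line fit (our own illustration; the paper's point is that the crosspoint array performs the equivalent solve physically, in parallel analog hardware):

```python
def one_step_linear_fit(xs, ys):
    """Closed-form least-squares fit y = w*x + b via the normal
    equations, computed in a single algebraic step rather than by
    iterative gradient descent."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    return w, b

w, b = one_step_linear_fit([0, 1, 2, 3], [1, 3, 5, 7])
print(w, b)  # -> slope 2.0, intercept 1.0
```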
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
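The mechanism can be sketched in one dimension: convolve a feature map with a low-pass kernel whose strength is annealed toward the identity as training proceeds. This is our own stdlib-only illustration of the idea, not the paper's 2D implementation:

```python
def smooth(signal, kernel):
    """Same-length 1D convolution with edge clamping (replicate padding)."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - k, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

# Curriculum: start with a strong low-pass kernel, anneal toward identity.
kernels = [[1/3, 1/3, 1/3], [0.25, 0.5, 0.25], [0.0, 1.0, 0.0]]
feature_map = [0.0, 0.0, 1.0, 0.0, 0.0]
for kern in kernels:
    print(smooth(feature_map, kern))  # spike sharpens as training progresses
```

Early kernels blur high-frequency detail so the network sees coarse structure first; the final identity kernel passes features through unchanged.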
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.