Photonic restricted Boltzmann machine for content generation tasks
- URL: http://arxiv.org/abs/2508.20472v1
- Date: Thu, 28 Aug 2025 06:40:33 GMT
- Title: Photonic restricted Boltzmann machine for content generation tasks
- Authors: Li Luo, Yisheng Fang, Wanyi Zhang, Zhichao Ruan
- Abstract summary: The high computational cost of Gibbs sampling in content generation tasks imposes significant bottlenecks on electronic implementations. We propose a photonic restricted Boltzmann machine that leverages photonic computing to accelerate Gibbs sampling. We experimentally validate the photonic-accelerated Gibbs sampling by simulating a two-dimensional Ising model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The restricted Boltzmann machine (RBM) is a neural network based on the Ising model, well known for its ability to learn probability distributions and stochastically generate new content. However, the high computational cost of Gibbs sampling in content generation tasks imposes significant bottlenecks on electronic implementations. Here, we propose a photonic restricted Boltzmann machine (PRBM) that leverages photonic computing to accelerate Gibbs sampling, enabling efficient content generation. By introducing an efficient encoding method, the PRBM eliminates the need for computationally intensive matrix decomposition and reduces the computational complexity of Gibbs sampling from $O(N)$ to $O(1)$. Moreover, its non-Von Neumann photonic computing architecture circumvents the memory storage of interaction matrices, providing substantial advantages for large-scale RBMs. We experimentally validate the photonic-accelerated Gibbs sampling by simulating a two-dimensional Ising model, where the observed phase transition temperature closely matches the theoretical predictions. Beyond physics-inspired tasks, the PRBM demonstrates robust capabilities in generating and restoring diverse content, including images and temporal sequences, even in the presence of noise and aberrations. The scalability and reduced training cost of the PRBM framework underscore its potential as a promising pathway for advancing photonic computing in generative artificial intelligence.
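To make the accelerated operation concrete, the following is a minimal sketch of the block Gibbs sampling procedure that the paper targets, for a toy binary RBM. The network sizes, weights, and chain length are illustrative only and are not taken from the paper; the matrix-vector products `v @ W` and `h @ W.T` are the per-step operations whose complexity the photonic architecture reduces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBM: 6 visible and 4 hidden binary units (illustrative sizes)
n_vis, n_hid = 6, 4
W = rng.normal(scale=0.1, size=(n_vis, n_hid))  # interaction matrix
b = np.zeros(n_vis)                              # visible biases
c = np.zeros(n_hid)                              # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, rng):
    """One round of block Gibbs sampling: v -> h -> v'."""
    p_h = sigmoid(v @ W + c)                     # P(h = 1 | v)
    h = (rng.random(n_hid) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b)                   # P(v = 1 | h)
    v_new = (rng.random(n_vis) < p_v).astype(float)
    return v_new, h

v = rng.integers(0, 2, size=n_vis).astype(float)
for _ in range(100):                             # burn-in chain
    v, h = gibbs_step(v, rng)
print(v)                                         # binary sample from the chain
```

Each step conditions one layer on the other, so the per-sample cost is dominated by the two matrix-vector products; this is the part that the proposed optical encoding evaluates in constant time rather than via explicit matrix storage and multiplication.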
Related papers
- Bayesian Interpolating Neural Network (B-INN): a scalable and reliable Bayesian model for large-scale physical systems [0.8593015118377854]
This paper proposes a scalable and reliable Bayesian surrogate model, termed the Bayesian Interpolating Neural Network (B-INN). B-INN combines high-order theory with tensor decomposition and an alternating direction algorithm to enable effective dimensionality reduction without compromising predictive accuracy. Numerical experiments demonstrate that B-INNs can be 20 to 10,000 times faster while providing robust uncertainty estimation.
arXiv Detail & Related papers (2026-01-30T11:38:42Z) - Hardware-inspired Continuous Variables Quantum Optical Neural Networks [0.27998963147546146]
In quantum optics, Gaussian operators induce affine mappings on the quadratures of optical modes. This work presents a novel, experimentally feasible framework for continuous-variable quantum optical neural networks.
arXiv Detail & Related papers (2025-12-04T19:20:27Z) - MPQ-DMv2: Flexible Residual Mixed Precision Quantization for Low-Bit Diffusion Models with Temporal Distillation [74.34220141721231]
We present MPQ-DMv2, an improved Mixed Precision Quantization framework for extremely low-bit Diffusion Models.
arXiv Detail & Related papers (2025-07-06T08:16:50Z) - HQViT: Hybrid Quantum Vision Transformer for Image Classification [48.72766405978677]
We propose a Hybrid Quantum Vision Transformer (HQViT) to accelerate model training while enhancing model performance. HQViT introduces whole-image processing with amplitude encoding to better preserve global image information without additional positional encoding. Experiments across various computer vision datasets demonstrate that HQViT outperforms existing models, achieving an improvement of up to 10.9% (on the MNIST 10-classification task) over the state of the art.
arXiv Detail & Related papers (2025-04-03T16:13:34Z) - Expressive equivalence of classical and quantum restricted Boltzmann machines [1.1639171061272031]
We propose a semi-quantum restricted Boltzmann machine (sqRBM) for classical data. The sqRBM is commuting in the visible subspace while remaining non-commuting in the hidden subspace. Our theoretical analysis predicts that, to learn a given probability distribution, an RBM requires three times as many hidden units as an sqRBM.
arXiv Detail & Related papers (2025-02-24T19:00:02Z) - Gaussian Models to Non-Gaussian Realms of Quantum Photonic Simulators [2.592307869002029]
Quantum photonic simulators have emerged as indispensable tools for modeling and optimizing quantum photonic circuits. This review explores the transition from Gaussian to non-Gaussian models and the computational challenges associated with simulating large-scale photonic systems. We evaluate the leading photonic quantum simulators, including Strawberry Fields, Piquasso, QuTiP, SimulaQron, Perceval, and QuantumOptics.jl.
arXiv Detail & Related papers (2025-02-07T15:04:42Z) - TensorGRaD: Tensor Gradient Robust Decomposition for Memory-Efficient Neural Operator Training [91.8932638236073]
We introduce TensorGRaD, a novel method that directly addresses the memory challenges associated with large structured weights. We show that sparseGRaD reduces total memory usage by over 50% while maintaining and sometimes even improving accuracy.
arXiv Detail & Related papers (2025-01-04T20:51:51Z) - Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction [45.84205238554709]
We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions.
We include the Gibbs-Duhem equation explicitly in the loss function for training neural networks.
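As an illustration of the kind of constraint such a loss enforces, the sketch below checks the isothermal, isobaric Gibbs-Duhem residual x1 d(ln γ1)/dx1 + x2 d(ln γ2)/dx1 = 0 for a toy two-suffix Margules model standing in for a trained network; the model, the parameter A, and the finite-difference derivatives are illustrative assumptions, not details from the paper (training code would use automatic differentiation instead).

```python
import numpy as np

# Two-suffix Margules activity-coefficient model (a thermodynamically
# consistent toy stand-in for a network's output), with x2 = 1 - x1:
A = 0.8
ln_g1 = lambda x1: A * (1 - x1) ** 2
ln_g2 = lambda x1: A * x1 ** 2

def gibbs_duhem_residual(x1, h=1e-5):
    """Residual of x1 d(ln g1)/dx1 + x2 d(ln g2)/dx1 at constant T, P.
    A Gibbs-Duhem-informed loss adds the squared residual to the data loss."""
    d1 = (ln_g1(x1 + h) - ln_g1(x1 - h)) / (2 * h)  # central differences
    d2 = (ln_g2(x1 + h) - ln_g2(x1 - h)) / (2 * h)  # stand in for autodiff
    return x1 * d1 + (1 - x1) * d2

x1 = np.linspace(0.05, 0.95, 19)
penalty = np.mean(gibbs_duhem_residual(x1) ** 2)    # ~0 for a consistent model
```

For this consistent model the penalty vanishes up to rounding error; for an unconstrained network it would generally be nonzero, which is what makes it usable as a physics-informed regularization term.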
arXiv Detail & Related papers (2023-05-31T07:36:45Z) - Simulating Gaussian Boson Sampling with Tensor Networks in the Heisenberg picture [0.9208007322096533]
We introduce a novel method for computing the probability distribution of boson sampling based on the time evolution of tensor networks in the Heisenberg picture.
Our results demonstrate the effectiveness of the method and its potential to advance quantum computing research.
arXiv Detail & Related papers (2023-05-18T18:00:00Z) - Stochastic Security as a Performance Metric for Quantum-enhanced Generative AI [0.0]
Deep energy-based models (EBMs) require continuous-domain Gibbs sampling both during training and inference. In lieu of fault-tolerant quantum computers that can execute quantum Gibbs sampling algorithms, we use Monte Carlo simulation of diffusion processes as a classical alternative. Our results show that increasing the computational budget of Gibbs sampling in persistent contrastive divergence improves both the calibration and adversarial robustness of the model.
arXiv Detail & Related papers (2023-05-13T17:33:01Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers. We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles. Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
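The probabilistic-representation idea can be sketched for the simplest case, the 1D heat equation u_t = u_xx, whose Feynman-Kac representation is u(x, t) = E[u0(x + sqrt(2t) Z)] with Z standard normal. The initial condition, evaluation point, and sample count below are illustrative choices, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def heat_mc(u0, x, t, n_samples=100_000):
    """Monte Carlo estimate of u(x, t) for u_t = u_xx via Feynman-Kac:
    average the initial condition over Brownian endpoints of variance 2t."""
    z = rng.standard_normal(n_samples)
    return u0(x + np.sqrt(2.0 * t) * z).mean()

# Gaussian initial condition has a closed-form solution to compare against:
# u(x, t) = exp(-x^2 / (2 (1 + 2t))) / sqrt(1 + 2t)
u0 = lambda x: np.exp(-x ** 2 / 2.0)
approx = heat_mc(u0, x=0.5, t=0.25)
exact = np.exp(-0.5 ** 2 / (2.0 * 1.5)) / np.sqrt(1.5)
print(approx, exact)
```

The same particle-ensemble view generalizes to convection-diffusion and related equations, which is what lets such solvers train a neural surrogate without a mesh-based reference solution.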
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Simulation of Entanglement Generation between Absorptive Quantum Memories [56.24769206561207]
We use the open-source Simulator of QUantum Network Communication (SeQUeNCe), developed by our team, to simulate entanglement generation between two atomic frequency comb (AFC) absorptive quantum memories.
We realize the representation of photonic quantum states within truncated Fock spaces in SeQUeNCe.
We observe varying fidelity with SPDC source mean photon number, and varying entanglement generation rate with both mean photon number and memory mode number.
arXiv Detail & Related papers (2022-12-17T05:51:17Z) - Scalable Nanophotonic-Electronic Spiking Neural Networks [3.9918594409417576]
Spiking neural networks (SNNs) provide a new computational paradigm capable of highly parallelized, real-time processing.
Photonic devices are ideal for the design of high-bandwidth, parallel architectures matching the SNN computational paradigm.
Co-integrated CMOS and SiPh technologies are well-suited to the design of scalable SNN computing architectures.
arXiv Detail & Related papers (2022-08-28T06:10:06Z) - Preparation of excited states for nuclear dynamics on a quantum computer [117.44028458220427]
We study two different methods to prepare excited states on a quantum computer.
We benchmark these techniques on emulated and real quantum devices.
These findings show that quantum techniques designed to achieve good scaling on fault tolerant devices might also provide practical benefits on devices with limited connectivity and gate fidelity.
arXiv Detail & Related papers (2020-09-28T17:21:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.