Programmable metasurfaces for future photonic artificial intelligence
- URL: http://arxiv.org/abs/2505.11659v1
- Date: Fri, 16 May 2025 19:50:01 GMT
- Title: Programmable metasurfaces for future photonic artificial intelligence
- Authors: Loubnan Abou-Hamdan, Emil Marinov, Peter Wiecha, Philipp del Hougne, Tianyu Wang, Patrice Genevet
- Abstract summary: Photonic neural networks (PNNs) could challenge traditional digital neural networks in terms of energy efficiency, latency, and throughput. We discuss how field-programmable metasurface technology may become a key hardware ingredient in achieving scalable photonic AI accelerators.
- Score: 5.066823502780349
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Photonic neural networks (PNNs), which share the inherent benefits of photonic systems, such as high parallelism and low power consumption, could challenge traditional digital neural networks in terms of energy efficiency, latency, and throughput. However, producing scalable photonic artificial intelligence (AI) solutions remains challenging. To make photonic AI models viable, the scalability problem needs to be solved. Large optical AI models implemented on PNNs are only commercially feasible if the advantages of optical computation outweigh the cost of their input-output overhead. In this Perspective, we discuss how field-programmable metasurface technology may become a key hardware ingredient in achieving scalable photonic AI accelerators and how it can compete with current digital electronic technologies. Programmability or reconfigurability is a pivotal component for PNN hardware, enabling in situ training and accommodating non-stationary use cases that require fine-tuning or transfer learning. Co-integration with electronics, 3D stacking, and large-scale manufacturing of metasurfaces would significantly improve PNN scalability and functionalities. Programmable metasurfaces could address some of the current challenges that PNNs face and enable next-generation photonic AI technology.
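The core computational role a programmable metasurface plays in a PNN can be illustrated with a toy numerical sketch: the layer is modeled as a reconfigurable complex transmission matrix acting on an input optical field, followed by intensity detection. All sizes, values, and variable names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of one programmable-metasurface layer: each meta-atom applies a
# tunable complex transmission coefficient, so the layer acts as a
# reconfigurable matrix W on the incident optical field. Reprogramming the
# meta-atoms (e.g. for in situ training) corresponds to updating W.
n_in, n_out = 8, 4
W = rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))

x = rng.normal(size=n_in)            # input encoded in field amplitudes
field_out = W @ x                    # coherent linear transform in one pass
intensity = np.abs(field_out) ** 2   # photodetection yields |field|^2
```

The point of the sketch is that the matrix-vector product happens in a single optical pass; the input-output overhead the abstract mentions corresponds to encoding `x` and reading out `intensity`.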
Related papers
- Hardware-Efficient Photonic Tensor Core: Accelerating Deep Neural Networks with Structured Compression [15.665630650382226]
We introduce a block-circulant photonic tensor core for a structure-compressed optical neural network (StrC-ONN) architecture. This work explores a new pathway toward practical and scalable ONNs, highlighting a promising route to address future computational efficiency challenges.
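The structured compression behind a block-circulant tensor core can be sketched in a few lines: a circulant block is fully defined by its first column, and its matrix-vector product reduces to FFTs. This is a generic NumPy illustration of the circulant trick under assumed sizes, not the StrC-ONN implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
c = rng.normal(size=n)                 # first column defines the whole block
x = rng.normal(size=n)

# Dense reference: column j of a circulant matrix is c rolled down by j.
C = np.column_stack([np.roll(c, j) for j in range(n)])
y_dense = C @ x

# FFT shortcut: C @ x is a circular convolution, diagonalized by the DFT.
y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

assert np.allclose(y_dense, y_fft)     # n parameters, O(n log n) matvec
```

An n x n block thus stores n parameters instead of n^2, which is what makes the compressed architecture hardware-efficient.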
arXiv Detail & Related papers (2025-02-01T17:03:45Z) - Optical Computing for Deep Neural Network Acceleration: Foundations, Recent Developments, and Emerging Directions [3.943289808718775]
We discuss the fundamentals and state-of-the-art developments in optical computing, with an emphasis on deep neural networks (DNNs).
Various promising approaches are described for engineering optical devices, enhancing optical circuits, and designing architectures that can adapt optical computing to a variety of DNN workloads.
Novel techniques for hardware/software co-design that can intelligently tune and map DNN models to improve performance and energy-efficiency on optical computing platforms across high performance and resource constrained embedded, edge, and IoT platforms are also discussed.
arXiv Detail & Related papers (2024-07-30T20:50:30Z) - NeuSpin: Design of a Reliable Edge Neuromorphic System Based on Spintronics for Green AI [0.22499166814992438]
Internet of Things (IoT) and smart wearable devices for personalized healthcare will require storing and computing ever-increasing amounts of data.
The key requirements for these devices are ultra-low-power, high-processing capabilities, autonomy at low cost, as well as reliability and accuracy to enable Green AI at the edge.
The NeuSPIN project aims to address these challenges through full-stack hardware and software co-design, developing novel algorithmic and circuit design approaches to enhance the performance, energy efficiency, and robustness of Bayesian neural networks (BayNNs) on spintronics-based compute-in-memory (CIM) platforms.
arXiv Detail & Related papers (2024-01-11T13:27:19Z) - Green Edge AI: A Contemporary Survey [46.11332733210337]
The transformative power of AI is derived from the utilization of deep neural networks (DNNs).
Deep learning (DL) is increasingly being transitioned to wireless edge networks in proximity to end-user devices (EUDs).
Despite its potential, edge AI faces substantial challenges, mostly due to the dichotomy between the resource limitations of wireless edge networks and the resource-intensive nature of DL.
arXiv Detail & Related papers (2023-12-01T04:04:37Z) - SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
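The neuron dynamics that such SNN toolkits wrap can be sketched with a minimal leaky integrate-and-fire (LIF) model: the membrane potential integrates input with leakage and emits a spike, then resets, when it crosses a threshold. This is a generic discrete-time sketch with assumed parameter values, not SpikingJelly's API.

```python
import numpy as np

def lif_run(current, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Simulate one LIF neuron over a sequence of input currents."""
    v, spikes = 0.0, []
    for i_t in current:
        v = v + (i_t - v) / tau      # leaky integration (discrete time)
        if v >= v_threshold:
            spikes.append(1)         # threshold crossing emits a spike
            v = v_reset              # hard reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

spikes = lif_run(np.full(10, 1.5))   # constant drive above threshold
```

A constant supra-threshold drive produces a regular spike train; on neuromorphic chips, only these sparse binary events are communicated, which is the source of the energy efficiency.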
arXiv Detail & Related papers (2023-10-25T13:15:17Z) - A Survey on Brain-Inspired Deep Learning via Predictive Coding [85.93245078403875]
Predictive coding (PC) has shown promising performance in machine intelligence tasks. PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - The role of all-optical neural networks [2.3204178451683264]
All-optical devices will be at an advantage for inference in large neural network models.
We consider the limitations of all-optical neural networks including footprint, strength of nonlinearity, optical signal degradation, limited precision of computations, and quantum noise.
arXiv Detail & Related papers (2023-06-11T09:26:08Z) - Physics Embedded Machine Learning for Electromagnetic Data Imaging [83.27424953663986]
Electromagnetic (EM) imaging is widely applied in sensing for security, biomedicine, geophysics, and various industries.
It is an ill-posed inverse problem whose solution is usually computationally expensive. Machine learning (ML) techniques, and especially deep learning (DL), show potential for fast and accurate imaging.
This article surveys various schemes to incorporate physics in learning-based EM imaging.
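The ill-posedness that motivates physics-embedded learning can be seen in a classical baseline: with fewer measurements than unknowns, a plain least-squares inversion is unstable, and a Tikhonov (ridge) penalty stabilizes it. The surveyed methods replace this hand-chosen penalty with learned priors; the operator, sizes, and regularization weight below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 20, 50                    # fewer measurements than unknowns: ill-posed
A = rng.normal(size=(m, n))      # stand-in for the EM forward operator
x_true = np.zeros(n)
x_true[::10] = 1.0               # sparse scatterers as a toy scene
y = A @ x_true + 0.01 * rng.normal(size=m)

# Tikhonov (ridge) solution: x_hat = (A^T A + lam * I)^{-1} A^T y.
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

Without the `lam * np.eye(n)` term, `A.T @ A` is rank-deficient here and the solve fails; the regularizer encodes prior knowledge, which is exactly the role physics plays in the surveyed schemes.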
arXiv Detail & Related papers (2022-07-26T02:10:15Z) - All-optical graph representation learning using integrated diffractive photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed diffractive graph neural network (DGNN).
We demonstrate the use of DGNN-extracted features for node- and graph-level classification tasks on benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - Silicon photonic subspace neural chip for hardware-efficient deep learning [11.374005508708995]
An optical neural network (ONN) is a promising candidate for next-generation neurocomputing.
We devise a hardware-efficient photonic subspace neural network architecture.
We experimentally demonstrate our PSNN on a butterfly-style programmable silicon photonic integrated circuit.
arXiv Detail & Related papers (2021-11-11T06:34:05Z) - Reservoir Computing with Magnetic Thin Films [35.32223849309764]
New unconventional computing hardware has emerged with the potential to exploit natural phenomena and gain efficiency.
Physical reservoir computing demonstrates this with a variety of unconventional systems.
We perform an initial exploration of three magnetic materials in thin-film geometries via microscale simulation.
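The reservoir computing scheme these physical systems implement can be sketched as an echo state network: a fixed dynamical system (here a random recurrent network standing in for the magnetic thin film) is driven by the input, and only a linear readout is trained by least squares. All sizes and the recall task below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_res, T = 50, 200

u = rng.uniform(-1, 1, size=T)                    # random input drive
W_in = rng.uniform(-0.5, 0.5, size=n_res)         # fixed input weights
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

# Drive the fixed reservoir and record its states; nothing inside is trained.
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train only the linear readout (least squares) to recall the previous input,
# discarding an initial washout period.
target = np.roll(u, 1)
W_out, *_ = np.linalg.lstsq(states[10:], target[10:], rcond=None)
pred = states[10:] @ W_out
```

Because only `W_out` is learned, the reservoir itself can be any sufficiently rich physical substrate, which is why thin-film magnetic dynamics are a candidate.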
arXiv Detail & Related papers (2021-01-29T17:37:17Z) - Photonics for artificial intelligence and neuromorphic computing [52.77024349608834]
Photonic integrated circuits have enabled ultrafast artificial neural networks.
Photonic neuromorphic systems offer sub-nanosecond latencies.
These systems could address the growing demand for machine learning and artificial intelligence.
arXiv Detail & Related papers (2020-10-30T21:41:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.