Mapping and Scheduling Spiking Neural Networks On Segmented Ladder Bus Architectures
- URL: http://arxiv.org/abs/2506.11286v1
- Date: Thu, 12 Jun 2025 20:44:50 GMT
- Title: Mapping and Scheduling Spiking Neural Networks On Segmented Ladder Bus Architectures
- Authors: Phu Khanh Huynh, Francky Catthoor, Anup Das
- Abstract summary: Large-scale neuromorphic architectures consist of computing tiles that communicate spikes using a shared interconnect. These characteristics require optimized interconnects to handle high-activity bursts while consuming minimal power during idle periods. We present a design methodology for a scenario-aware control plane tailored to a segmented ladder bus.
- Score: 2.732919960807485
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large-scale neuromorphic architectures consist of computing tiles that communicate spikes using a shared interconnect. The communication patterns in these systems are inherently sparse, asynchronous, and localized, as neural activity is characterized by temporal sparsity with occasional bursts of high traffic. These characteristics require optimized interconnects to handle high-activity bursts while consuming minimal power during idle periods. Among the proposed interconnect solutions, the dynamic segmented bus has gained attention due to its structural simplicity, scalability, and energy efficiency. Since the benefits of a dynamic segmented bus stem from its simplicity, it is essential to develop a streamlined control plane that can scale efficiently with the network. In this paper, we present a design methodology for a scenario-aware control plane tailored to a segmented ladder bus, with the aim of minimizing control overhead and optimizing energy and area utilization. We evaluated our approach using a combination of FPGA implementation and software simulation to assess scalability. The results demonstrated that our design process effectively reduces the control plane's area footprint compared to the data plane while maintaining scalability with network size.
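As a rough illustration of the scenario-aware control idea, the sketch below models a one-dimensional segmented bus in which a spike traveling between two tiles must close every segmentation switch along its path, and a controller precomputes one switch configuration per known traffic scenario so that runtime control reduces to a table lookup. The `Scenario` class, the switch numbering, and the scenario names are hypothetical; this is a toy sketch under those assumptions, not the paper's actual control-plane design.

```python
# Hypothetical toy model of a segmented ladder bus: tiles sit at segment
# boundaries, and a spike from tile `src` to tile `dst` requires closing
# every segmentation switch between the two positions. A scenario-aware
# controller precomputes the switch configuration for each known traffic
# scenario offline. All names and the cost model are illustrative
# assumptions, not the paper's actual design.

from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    name: str
    flows: tuple  # (src_tile, dst_tile) pairs active in this scenario

def switches_for_flow(src: int, dst: int) -> set:
    """Segment switches that must be closed to connect src and dst tiles."""
    lo, hi = sorted((src, dst))
    return set(range(lo, hi))  # switch i joins tile i and tile i + 1

def precompute_configs(scenarios):
    """Offline step: union of switch settings needed per scenario."""
    table = {}
    for sc in scenarios:
        closed = set()
        for src, dst in sc.flows:
            closed |= switches_for_flow(src, dst)
        table[sc.name] = closed
    return table

if __name__ == "__main__":
    scenarios = [
        Scenario("local_burst", ((0, 1), (2, 3))),   # short, parallel hops
        Scenario("long_range", ((0, 5),)),           # one long connection
    ]
    configs = precompute_configs(scenarios)
    for name, closed in configs.items():
        print(name, "-> close switches", sorted(closed))
```

Note that in the `local_burst` scenario the two flows close disjoint switch sets, which is precisely the property a segmented bus exploits to let short-range traffic proceed in parallel while unused segments stay idle.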
Related papers
- Predicting Large-scale Urban Network Dynamics with Energy-informed Graph Neural Diffusion [51.198001060683296]
Networked urban systems facilitate the flow of people, resources, and services. Current models such as graph neural networks have shown promise but face a trade-off between efficacy and efficiency. This paper addresses this trade-off by drawing inspiration from physical laws to inform essential model designs.
arXiv Detail & Related papers (2025-07-31T01:24:01Z) - Flow-Through Tensors: A Unified Computational Graph Architecture for Multi-Layer Transportation Network Optimization [20.685856719515026]
Flow-Through Tensors (FTT) is a unified computational graph architecture that connects origin-destination flows, path probabilities, and link travel times as interconnected tensors. Our framework makes three key contributions: first, it establishes a consistent mathematical structure that enables gradient-based optimization across previously separate modeling elements. Second, it supports multidimensional analysis of traffic patterns over time, space, and user groups with precise quantification of system efficiency.
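For intuition only, the snippet below chains origin-destination demand, path-choice probabilities, and a link-path incidence matrix into link flows and link travel times as plain tensor operations, which is the kind of end-to-end computation such a framework makes differentiable. The BPR-style cost function and all array values are illustrative assumptions, not the FTT formulation itself.

```python
# A minimal numpy sketch of chaining OD demand -> path flows -> link flows
# -> link travel times as tensor operations, in the spirit of a unified
# computational graph. All values and the BPR cost function are
# illustrative assumptions.

import numpy as np

od_demand = np.array([100.0, 50.0])             # demand per OD pair
path_prob = np.array([[0.7, 0.3, 0.0],          # P(path | OD pair), rows sum to 1
                      [0.0, 0.4, 0.6]])
path_link = np.array([[1, 0, 1],                # link-path incidence
                      [0, 1, 1],                # rows: links, cols: paths
                      [1, 1, 0]], dtype=float)

free_flow_time = np.array([1.0, 1.5, 2.0])
capacity = np.array([80.0, 60.0, 90.0])

path_flow = od_demand @ path_prob               # flow on each path
link_flow = path_link @ path_flow               # aggregate path flows onto links
link_time = free_flow_time * (1 + 0.15 * (link_flow / capacity) ** 4)  # BPR-style cost

print("path flows:", path_flow)
print("link flows:", link_flow)
print("link travel times:", link_time)
```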
arXiv Detail & Related papers (2025-06-30T06:42:23Z) - World Model-Based Learning for Long-Term Age of Information Minimization in Vehicular Networks [53.98633183204453]
In this paper, a novel world model-based learning framework is proposed to minimize packet-completeness-aware age of information (CAoI) in a vehicular network. The framework jointly learns a dynamic model of the mmWave V2X environment and uses it to imagine trajectories for learning how to perform link scheduling. In particular, the long-term policy is learned in differentiable imagined trajectories instead of environment interactions.
arXiv Detail & Related papers (2025-05-03T06:23:18Z) - Brain-Inspired Decentralized Satellite Learning in Space Computing Power Networks [42.67808523367945]
Space Computing Power Networks (Space-CPN) have emerged as a promising architecture to coordinate the computing capability of satellites and enable on-board data processing. We propose to employ spiking neural networks (SNNs), which are supported by the neuromorphic computing architecture, for on-board data processing. We put forward a decentralized neuromorphic learning framework, where a communication-efficient inter-plane model aggregation method is developed.
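As a minimal sketch of what a communication-efficient inter-plane aggregation could look like, the code below assumes a simple two-level averaging scheme: satellites average models within each orbital plane first, and only one plane-level model crosses plane boundaries. The constellation topology, the `aggregate` helper, and the unweighted averaging are assumptions for illustration, not the paper's actual method.

```python
# A hedged sketch of two-level model aggregation in the style of federated
# averaging: intra-plane averaging first, then one representative model per
# orbital plane is exchanged across planes.

import numpy as np

def aggregate(plane_models):
    """plane_models: list of planes, each a list of per-satellite parameter vectors."""
    # Intra-plane step: cheap local averaging within each orbital plane.
    plane_means = [np.mean(models, axis=0) for models in plane_models]
    # Inter-plane step: only one vector per plane crosses plane boundaries,
    # which keeps the cross-plane communication volume low.
    return np.mean(plane_means, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    planes = [[rng.normal(size=4) for _ in range(3)] for _ in range(2)]
    print("global model:", aggregate(planes))
```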
arXiv Detail & Related papers (2025-01-27T12:29:47Z) - SCoTT: Strategic Chain-of-Thought Tasking for Wireless-Aware Robot Navigation in Digital Twins [78.53885607559958]
We propose SCoTT, a wireless-aware path planning framework. We show that SCoTT achieves path gains within 2% of DP-WA* while consistently generating shorter trajectories. We also show the practical viability of our approach by deploying SCoTT as a ROS node within Gazebo simulations.
arXiv Detail & Related papers (2024-11-27T10:45:49Z) - Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - STGformer: Efficient Spatiotemporal Graph Transformer for Traffic Forecasting [11.208740750755025]
Traffic forecasting is a cornerstone of smart city management, enabling efficient resource allocation and transportation planning.
Deep learning, with its ability to capture complex nonlinear patterns in data, has emerged as a powerful tool for traffic forecasting.
Graph convolutional networks (GCNs) and transformer-based models have shown promise, but their computational demands often hinder their application to real-world networks.
We propose STGformer, a novel spatiotemporal graph transformer architecture, enabling efficient modeling of both global and local traffic patterns while maintaining a manageable computational footprint.
arXiv Detail & Related papers (2024-10-01T04:15:48Z) - Unifying Dimensions: A Linear Adaptive Approach to Lightweight Image Super-Resolution [6.857919231112562]
Window-based transformers have demonstrated outstanding performance in super-resolution tasks.
They exhibit higher computational complexity and inference latency than convolutional neural networks.
We construct a convolution-based Transformer framework named the linear adaptive mixer network (LAMNet).
arXiv Detail & Related papers (2024-09-26T07:24:09Z) - PDSketch: Integrated Planning Domain Programming and Learning [86.07442931141637]
We present a new domain definition language, named PDSketch.
It allows users to flexibly define high-level structures in the transition models.
Details of the transition model will be filled in by trainable neural networks.
arXiv Detail & Related papers (2023-03-09T18:54:12Z) - Architecture Aware Latency Constrained Sparse Neural Networks [35.50683537052815]
In this paper, we design an architecture-aware, latency-constrained sparse framework to prune and accelerate CNN models.
We also propose a novel sparse convolution algorithm for efficient computation.
Our system-algorithm co-design framework achieves a much better trade-off frontier between network accuracy and latency on resource-constrained mobile devices.
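As a toy illustration of latency-constrained pruning, the sketch below removes the smallest-magnitude weights until a linear-in-nonzeros latency proxy meets a budget. The proxy, the `prune_to_latency` helper, and the layer shapes are hypothetical stand-ins for the paper's architecture-aware cost model and sparse convolution kernels.

```python
# A hedged sketch of latency-aware magnitude pruning: prune the smallest
# weights until a toy latency estimate (linear in the number of nonzeros)
# meets the budget. The latency model is an illustrative assumption.

import numpy as np

def prune_to_latency(weights, per_nonzero_cost, budget):
    """weights: list of weight matrices; returns (pruned weights, threshold)."""
    flat = np.concatenate([np.abs(w).ravel() for w in weights])
    order = np.sort(flat)                      # candidate magnitude thresholds
    for thr in order:
        kept = sum(int((np.abs(w) > thr).sum()) for w in weights)
        if kept * per_nonzero_cost <= budget:  # smallest threshold meeting budget
            return [w * (np.abs(w) > thr) for w in weights], thr
    return [np.zeros_like(w) for w in weights], order[-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    layers = [rng.normal(size=(8, 8)), rng.normal(size=(16, 8))]
    pruned, thr = prune_to_latency(layers, per_nonzero_cost=1.0, budget=100)
    print("threshold:", round(float(thr), 3),
          "nonzeros:", sum(int((w != 0).sum()) for w in pruned))
```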
arXiv Detail & Related papers (2021-09-01T03:41:31Z) - Spatio-temporal Modeling for Large-scale Vehicular Networks Using Graph Convolutional Networks [110.80088437391379]
A graph-based framework called SMART is proposed to model and keep track of the statistics of vehicle-to-infrastructure (V2I) communication latency across a large geographical area.
We develop a graph reconstruction-based approach using a graph convolutional network integrated with a deep Q-network algorithm.
Our results show that the proposed method can significantly improve both the accuracy and efficiency for modeling and the latency performance of large vehicular networks.
arXiv Detail & Related papers (2021-03-13T06:56:29Z) - Dataflow Aware Mapping of Convolutional Neural Networks Onto Many-Core Platforms With Network-on-Chip Interconnect [0.0764671395172401]
Machine intelligence, especially using convolutional neural networks (CNNs), has become a large area of research over the past years.
Many-core platforms consisting of several homogeneous cores can alleviate limitations with regard to physical implementation at the expense of an increased dataflow mapping effort.
This work presents an automated mapping strategy starting at the single-core level with different optimization targets for minimal runtime and minimal off-chip memory accesses.
The strategy is then extended towards a suitable many-core mapping scheme and evaluated using a scalable system-level simulation with a network-on-chip interconnect.
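To give a flavor of the mapping step, here is a hedged sketch of a greedy longest-processing-time-first assignment of CNN layers to homogeneous cores that keeps the most loaded core as light as possible. The per-layer cost values are made up, and the paper's actual strategy additionally targets off-chip memory accesses and models the network-on-chip interconnect.

```python
# A hedged sketch of a greedy many-core mapping step: distribute CNN layers
# across homogeneous cores so that the most loaded core (the runtime bound)
# stays as light as possible. Layer costs and the core count are illustrative.

import heapq

def map_layers_to_cores(layer_costs, num_cores):
    """Longest-processing-time-first assignment of layers to cores."""
    heap = [(0.0, core) for core in range(num_cores)]  # (current load, core id)
    heapq.heapify(heap)
    assignment = {}
    for layer, cost in sorted(enumerate(layer_costs), key=lambda x: -x[1]):
        load, core = heapq.heappop(heap)        # least-loaded core so far
        assignment[layer] = core
        heapq.heappush(heap, (load + cost, core))
    return assignment

if __name__ == "__main__":
    costs = [5.0, 3.0, 8.0, 2.0, 6.0]           # per-layer compute cost (a.u.)
    print(map_layers_to_cores(costs, num_cores=3))
```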
arXiv Detail & Related papers (2020-06-18T17:13:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.