Ensembles of Compact, Region-specific & Regularized Spiking Neural
Networks for Scalable Place Recognition
- URL: http://arxiv.org/abs/2209.08723v3
- Date: Fri, 5 May 2023 07:01:17 GMT
- Title: Ensembles of Compact, Region-specific & Regularized Spiking Neural
Networks for Scalable Place Recognition
- Authors: Somayeh Hussaini, Michael Milford and Tobias Fischer
- Abstract summary: Spiking neural networks have significant potential in robotics due to their high energy efficiency on specialized hardware.
This paper introduces a novel modular ensemble network approach, where compact, localized spiking networks each learn and are solely responsible for recognizing places in a local region only.
This modularity comes at a performance cost: a lack of global regularization at deployment time leads to hyperactive neurons that erroneously respond to places outside their learned region.
We evaluate this new scalable modular system on benchmark localization datasets Nordland and Oxford RobotCar, with comparisons to standard techniques NetVLAD, DenseVLAD, and SAD, and a previous spiking neural network system.
- Score: 25.0834855255728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks have significant potential utility in robotics due to
their high energy efficiency on specialized hardware, but proof-of-concept
implementations have not yet typically achieved competitive performance or
capability with conventional approaches. In this paper, we tackle one of the
key practical challenges of scalability by introducing a novel modular ensemble
network approach, where compact, localized spiking networks each learn and are
solely responsible for recognizing places in a local region of the environment
only. This modular approach creates a highly scalable system. However, it comes
at a performance cost: a lack of global regularization at deployment time
leads to hyperactive neurons that erroneously respond to places
outside their learned region. Our second contribution introduces a
regularization approach that detects and removes these problematic hyperactive
neurons during the initial environmental learning phase. We evaluate this new
scalable modular system on benchmark localization datasets Nordland and Oxford
RobotCar, with comparisons to standard techniques NetVLAD, DenseVLAD, and SAD,
and a previous spiking neural network system. Our system substantially
outperforms the previous SNN system on its small dataset, and also maintains
performance on 27 times larger benchmark datasets where the operation of the
previous system is computationally infeasible, and performs competitively with
the conventional localization systems.
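The regularization idea described in the abstract, detecting and removing hyperactive neurons that respond to places outside their network's assigned region, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name `prune_hyperactive`, the out-of-region activity fraction, and the `threshold` criterion are hypothetical, not the paper's actual rule.

```python
import numpy as np

def prune_hyperactive(responses, region_labels, region_id, threshold=0.5):
    """Flag neurons that spend too much of their activity on places
    outside this expert network's assigned region.

    responses:     (n_places, n_neurons) spike counts from a calibration pass
    region_labels: (n_places,) region id of each calibration place
    region_id:     the region this compact network is responsible for
    threshold:     max allowed fraction of a neuron's total activity
                   spent on out-of-region places (assumed criterion)
    Returns a boolean mask of neurons to keep.
    """
    outside = region_labels != region_id
    total = responses.sum(axis=0) + 1e-9               # per-neuron activity
    outside_frac = responses[outside].sum(axis=0) / total
    return outside_frac <= threshold

# Toy calibration data: 6 places, 4 neurons, two regions (0 and 1).
responses = np.array([
    [5, 0, 2, 1],   # region 0 places
    [4, 1, 3, 0],
    [6, 0, 2, 1],
    [0, 5, 3, 0],   # region 1 places
    [1, 4, 2, 1],
    [0, 6, 3, 0],
], dtype=float)
labels = np.array([0, 0, 0, 1, 1, 1])

keep = prune_hyperactive(responses, labels, region_id=0)
print(keep)  # neurons 1 and 2 respond mostly outside region 0 -> pruned
```

In this toy run, neuron 1 is tuned to the other region and neuron 2 fires almost uniformly everywhere, so both exceed the out-of-region threshold and are removed before deployment.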
Related papers
- Neural Network with Local Converging Input (NNLCI) for Supersonic Flow
Problems with Unstructured Grids [0.9152133607343995]
We develop a neural network with local converging input (NNLCI) for high-fidelity prediction using unstructured data.
As a validation case, the NNLCI method is applied to study inviscid supersonic flows in channels with bumps.
arXiv Detail & Related papers (2023-10-23T19:03:37Z)
- TOPIQ: A Top-down Approach from Semantics to Distortions for Image
Quality Assessment [53.72721476803585]
Image Quality Assessment (IQA) is a fundamental task in computer vision that has witnessed remarkable progress with deep neural networks.
We propose a top-down approach that uses high-level semantics to guide the IQA network to focus on semantically important local distortion regions.
A key component of our approach is the proposed cross-scale attention mechanism, which calculates attention maps for lower level features.
arXiv Detail & Related papers (2023-08-06T09:08:37Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Generalization and Estimation Error Bounds for Model-based Neural
Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow the construction of model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
- Spatial Bias for Attention-free Non-local Neural Networks [11.320414512937946]
We introduce the spatial bias to learn global knowledge without self-attention in convolutional neural networks.
We show that the spatial bias achieves competitive performance, improving classification accuracy by +0.79% and +1.5% on the ImageNet-1K and CIFAR-100 datasets, respectively.
arXiv Detail & Related papers (2023-02-24T08:16:16Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural
Networks on Edge NPUs [74.83613252825754]
"Smart ecosystems" are being formed in which sensing happens concurrently rather than in standalone devices.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling approach that allows preemption at run time to account for the dynamics introduced by job arrival and exit processes.
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
- Locally Supervised Learning with Periodic Global Guidance [19.41730292017383]
We propose Periodically Guided local Learning (PGL) to reinstate the global objective repetitively into the local-loss based training of neural networks.
We show that a simple periodic guidance scheme begets significant performance gains while having a low memory footprint.
arXiv Detail & Related papers (2022-08-01T13:06:26Z)
- Learn to Communicate with Neural Calibration: Scalability and
Generalization [10.775558382613077]
We propose a scalable and generalizable neural calibration framework for future wireless system design.
The proposed neural calibration framework is applied to solve challenging resource management problems in massive multiple-input multiple-output (MIMO) systems.
arXiv Detail & Related papers (2021-10-01T09:00:25Z)
- A novel Deep Neural Network architecture for non-linear system
identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z)
- Revisiting the double-well problem by deep learning with a hybrid
network [7.308730248177914]
We propose a novel hybrid network which integrates two different kinds of neural networks: LSTM and ResNet.
Such a hybrid network can be applied for solving cooperative dynamics in a system with fast spatial or temporal modulations.
arXiv Detail & Related papers (2021-04-25T07:51:43Z)
- Exploiting Heterogeneity in Operational Neural Networks by Synaptic
Plasticity [87.32169414230822]
The recently proposed Operational Neural Networks (ONNs) generalize conventional Convolutional Neural Networks (CNNs).
This study focuses on searching for the best-possible operator set(s) for the hidden neurons of the network based on the Synaptic Plasticity paradigm, the essential learning theory of biological neurons.
Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, achieve superior learning performance compared to GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.