PLSM: A Parallelized Liquid State Machine for Unintentional Action
Detection
- URL: http://arxiv.org/abs/2105.09909v1
- Date: Thu, 6 May 2021 08:10:35 GMT
- Title: PLSM: A Parallelized Liquid State Machine for Unintentional Action
Detection
- Authors: Dipayan Das, Saumik Bhattacharya, Umapada Pal, and Sukalpa Chanda
- Abstract summary: Reservoir Computing (RC) offers a viable option to deploy AI algorithms on low-end embedded system platforms.
Liquid State Machine (LSM) is a bio-inspired RC model that mimics cortical microcircuits and uses spiking neural networks (SNN) that can be directly realized on neuromorphic hardware.
- Score: 14.873546762084063
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reservoir Computing (RC) offers a viable option to deploy AI algorithms on
low-end embedded system platforms. Liquid State Machine (LSM) is a bio-inspired
RC model that mimics the cortical microcircuits and uses spiking neural
networks (SNN) that can be directly realized on neuromorphic hardware. In this
paper, we present a novel Parallelized LSM (PLSM) architecture that
incorporates a spatio-temporal read-out layer and semantic constraints on the
model output. To the best of our knowledge, this formulation is the first of
its kind in the literature, and it offers a computationally lighter
alternative to traditional deep-learning models. Additionally, we present a
comprehensive algorithm for the implementation of parallelizable SNNs and LSMs
that are GPU-compatible. We implement the PLSM model to classify
unintentional/accidental video clips, using the Oops dataset. From the
experimental results on detecting unintentional action in video, it can be
observed that our proposed model outperforms a self-supervised model and a
fully supervised traditional deep learning model. All implementation code is
available in our repository:
https://github.com/anonymoussentience2020/Parallelized_LSM_for_Unintentional_Action_Recognition.
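The abstract's central implementation claim is that SNN/LSM dynamics can be expressed in a parallelizable, GPU-compatible form. A minimal sketch of that idea, assuming a batched leaky integrate-and-fire (LIF) reservoir updated with matrix operations instead of per-neuron loops (all names, sizes, and constants below are hypothetical, not the authors' actual code):

```python
# Illustrative sketch (not the paper's implementation): one Euler step of a
# batched LIF reservoir. Because the update is pure matrix algebra over a
# (batch, neurons) state, many clips can be simulated in parallel, and the
# same code maps directly onto a GPU tensor library.
import numpy as np

rng = np.random.default_rng(0)
batch, n_in, n_res = 4, 32, 100              # hypothetical sizes

W_in = rng.normal(0.0, 0.5, (n_in, n_res))   # input -> reservoir weights
W_res = rng.normal(0.0, 0.1, (n_res, n_res)) # recurrent reservoir weights

def lif_step(v, spikes, x, tau=20.0, v_th=1.0):
    """Advance a batched LIF reservoir by one step.
    v, spikes: (batch, n_res); x: (batch, n_in) input spikes."""
    current = x @ W_in + spikes @ W_res       # one matmul each, no neuron loops
    v = v + (-v + current) / tau              # leaky membrane integration
    new_spikes = (v >= v_th).astype(v.dtype)  # threshold crossing emits a spike
    v = v * (1.0 - new_spikes)                # reset neurons that fired
    return v, new_spikes

v = np.zeros((batch, n_res))
s = np.zeros((batch, n_res))
for t in range(50):                           # drive with random Bernoulli spikes
    x = (rng.random((batch, n_in)) < 0.1).astype(float)
    v, s = lif_step(v, s, x)
```

The accumulated spike states over time would then feed a read-out layer; the paper's spatio-temporal read-out and semantic constraints are beyond this sketch.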
Related papers
- Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling [51.40150411616207]
We introduce Latent Particle World Model (LPWM), a self-supervised object-centric world model scaled to real-world multi-object datasets. LPWM autonomously discovers keypoints, bounding boxes, and object masks directly from video data. Our architecture is trained end-to-end purely from videos and supports flexible conditioning on actions, language, and image goals.
arXiv Detail & Related papers (2026-03-04T19:36:08Z) - A Lightweight Library for Energy-Based Joint-Embedding Predictive Architectures [58.26804959656713]
We present EB-JEPA, an open-source library for learning representations and world models using Joint-Embedding Predictive Architectures (JEPAs). JEPAs learn to predict in representation space rather than pixel space, avoiding the pitfalls of generative modeling. We show how these representations can drive action-conditioned world models, achieving a 97% planning success rate on the Two Rooms navigation task.
arXiv Detail & Related papers (2026-02-03T14:56:24Z) - Modulating Reservoir Dynamics via Reinforcement Learning for Efficient Robot Skill Synthesis [0.0]
A random recurrent neural network, called a reservoir, can be used to learn robot movements conditioned on context inputs.
In this work, we propose a novel RC-based Learning from Demonstration (LfD) framework.
arXiv Detail & Related papers (2024-11-17T07:25:54Z) - NNsight and NDIF: Democratizing Access to Open-Weight Foundation Model Internals [58.83169560132308]
We introduce NNsight and NDIF, technologies that work in tandem to enable scientific study of the representations and computations learned by very large neural networks.
arXiv Detail & Related papers (2024-07-18T17:59:01Z) - From system models to class models: An in-context learning paradigm [0.0]
We introduce a novel paradigm for system identification, addressing two primary tasks: one-step-ahead prediction and multi-step simulation.
We learn a meta model that represents a class of dynamical systems.
For one-step prediction, a GPT-like decoder-only architecture is utilized, whereas the simulation problem employs an encoder-decoder structure.
arXiv Detail & Related papers (2023-08-25T13:50:17Z) - Sparse Modular Activation for Efficient Sequence Modeling [94.11125833685583]
Recent models combining Linear State Space Models with self-attention mechanisms have demonstrated impressive results across a range of sequence modeling tasks.
Current approaches apply attention modules statically and uniformly to all elements in the input sequences, leading to sub-optimal quality-efficiency trade-offs.
We introduce Sparse Modular Activation (SMA), a general mechanism enabling neural networks to sparsely activate sub-modules for sequence elements in a differentiable manner.
arXiv Detail & Related papers (2023-06-19T23:10:02Z) - IR-MCL: Implicit Representation-Based Online Global Localization [31.77645160411745]
In this paper, we address the problem of estimating the robot's pose in an indoor environment using 2D LiDAR data.
We propose a neural occupancy field (NOF) to implicitly represent the scene using a neural network.
We show that we can accurately and efficiently localize a robot using our approach surpassing the localization performance of state-of-the-art methods.
arXiv Detail & Related papers (2022-10-06T17:59:08Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - Adaptive Convolutional Dictionary Network for CT Metal Artifact
Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z) - Real-time Neural-MPC: Deep Learning Model Predictive Control for
Quadrotors and Agile Robotic Platforms [59.03426963238452]
We present Real-time Neural MPC, a framework to efficiently integrate large, complex neural network architectures as dynamics models within a model-predictive control pipeline.
We show the feasibility of our framework on real-world problems by reducing the positional tracking error by up to 82% when compared to state-of-the-art MPC approaches without neural network dynamics.
arXiv Detail & Related papers (2022-03-15T09:38:15Z) - A unified software/hardware scalable architecture for brain-inspired
computing based on self-organizing neural models [6.072718806755325]
We develop an original brain-inspired neural model associating Self-Organizing Maps (SOM) and Hebbian learning in the Reentrant SOM (ReSOM) model.
This work also demonstrates the distributed and scalable nature of the model through both simulation results and hardware execution on a dedicated FPGA-based platform.
arXiv Detail & Related papers (2022-01-06T22:02:19Z) - Recurrent neural network-based Internal Model Control of unknown
nonlinear stable systems [0.30458514384586394]
Gated Recurrent Neural Networks (RNNs) have become popular tools for learning dynamical systems.
This paper aims to discuss how these networks can be adopted for the synthesis of Internal Model Control (IMC) architectures.
arXiv Detail & Related papers (2021-08-10T11:02:25Z) - Deep Imitation Learning for Bimanual Robotic Manipulation [70.56142804957187]
We present a deep imitation learning framework for robotic bimanual manipulation.
A core challenge is to generalize the manipulation skills to objects in different locations.
We propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control.
arXiv Detail & Related papers (2020-10-11T01:40:03Z) - Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.