A Population-Level Analysis of Neural Dynamics in Robust Legged Robots
- URL: http://arxiv.org/abs/2306.15793v1
- Date: Tue, 27 Jun 2023 20:41:59 GMT
- Title: A Population-Level Analysis of Neural Dynamics in Robust Legged Robots
- Authors: Eugene R. Rush, Christoffer Heckman, Kaushik Jayaram, J. Sean Humbert
- Abstract summary: We investigate population-level activity of robust robot locomotion controllers.
We find that fragile controllers have a higher number of fixed points with unstable directions, resulting in poorer balance when instructed to stand in place.
We find evidence that recurrent state dynamics are structured and low-dimensional during walking, which aligns with primate studies.
- Score: 6.107812768939554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recurrent neural network-based reinforcement learning systems are capable of
complex motor control tasks such as locomotion and manipulation; however,
many of their underlying mechanisms remain difficult to interpret. Our aim is
to leverage computational neuroscience methodologies to understand the
population-level activity of robust robot locomotion controllers. Our
investigation begins by analyzing topological structure, discovering that
fragile controllers have a higher number of fixed points with unstable
directions, resulting in poorer balance when instructed to stand in place.
Next, we analyze the forced response of the system by applying targeted neural
perturbations along directions of dominant population-level activity. We find
evidence that recurrent state dynamics are structured and low-dimensional
during walking, which aligns with primate studies. Additionally, when recurrent
states are perturbed to zero, fragile agents continue to walk, which is
indicative of a stronger reliance on sensory input and weaker recurrence.
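The fixed-point analysis described in the abstract can be illustrated on a toy recurrent system: locate a fixed point of the state update, then count unstable directions from the eigenvalues of the linearized dynamics. This is a minimal numpy sketch with random placeholder weights, not the paper's controller; the weight scale is chosen contractive so plain iteration converges (gradient-based fixed-point finding generalizes to unstable fixed points):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # hypothetical hidden-state size

# Toy autonomous RNN update h' = tanh(W h + b); weights are illustrative
# placeholders, scaled so the map is contractive and simple iteration
# converges to a fixed point.
W = rng.normal(scale=0.3 / np.sqrt(N), size=(N, N))
b = rng.normal(scale=0.1, size=N)

def step(h):
    return np.tanh(W @ h + b)

h = rng.normal(size=N)
for _ in range(1000):          # fixed-point iteration h <- step(h)
    h = step(h)
residual = np.linalg.norm(step(h) - h)

# Linearize around the fixed point: Jacobian J = diag(1 - tanh^2) W.
# Eigenvalues with |lambda| > 1 correspond to unstable directions.
J = (1.0 - step(h) ** 2)[:, None] * W
n_unstable = int(np.sum(np.abs(np.linalg.eigvals(J)) > 1.0))
print(residual, n_unstable)
```

For this deliberately stable toy the count is zero; the paper's point is that fragile controllers exhibit more fixed points with such unstable directions.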
Related papers
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation.
Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems.
Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
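The underdamped Langevin dynamics referenced above can be sketched with a plain Euler-Maruyama integrator. The damping, temperature, and quadratic stand-in for the learned potential are all illustrative choices, not the LangevinFlow model itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Underdamped Langevin dynamics for a latent z with velocity v:
#   dz = v dt
#   dv = (-grad U(z) - gamma * v) dt + sqrt(2 * gamma * T) dW
gamma, T, dt = 1.0, 0.1, 0.01

def grad_U(z):
    return z  # U(z) = 0.5*||z||^2, a placeholder for a learned potential

def langevin_step(z, v):
    z_next = z + v * dt
    v_next = v + (-grad_U(z) - gamma * v) * dt \
             + np.sqrt(2 * gamma * T * dt) * rng.normal(size=v.shape)
    return z_next, v_next

z, v = np.ones(3), np.zeros(3)
for _ in range(10_000):
    z, v = langevin_step(z, v)
# Long-run samples concentrate near the potential minimum at the origin.
print(np.linalg.norm(z))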
arXiv Detail & Related papers (2025-07-15T17:57:48Z)
- Humanoid Whole-Body Locomotion on Narrow Terrain via Dynamic Balance and Reinforcement Learning [54.26816599309778]
We propose a novel whole-body locomotion algorithm based on dynamic balance and Reinforcement Learning (RL)
Specifically, we introduce a dynamic balance mechanism by leveraging an extended measure of Zero-Moment Point (ZMP)-driven rewards and task-driven rewards in a whole-body actor-critic framework.
Experiments conducted on a full-sized Unitree H1-2 robot verify the ability of our method to maintain balance on extremely narrow terrains.
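As a rough illustration of ZMP-driven rewards, the Zero-Moment Point under the common linear inverted pendulum approximation, together with a hypothetical reward that prefers keeping it inside the support region, might look as follows. The support-region size and reward shaping are invented for illustration, not the paper's formulation:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def zmp_xy(com_pos, com_acc):
    """ZMP under the linear inverted pendulum approximation:
    p_zmp = p_com,xy - (z_com / g) * a_com,xy."""
    x, y, z = com_pos
    ax, ay, _ = com_acc
    return np.array([x - z / G * ax, y - z / G * ay])

def balance_reward(com_pos, com_acc, half_width=0.05):
    """Hypothetical ZMP reward: 1 inside a square support region of the
    given half-width centered under the feet, decaying outside it."""
    p = zmp_xy(com_pos, com_acc)
    excess = np.maximum(np.abs(p) - half_width, 0.0)
    return float(np.exp(-10.0 * np.linalg.norm(excess)))

# Standing still with zero CoM acceleration: ZMP sits under the CoM.
r = balance_reward(np.array([0.0, 0.0, 0.9]), np.zeros(3))
print(r)  # -> 1.0
```

Accelerating the CoM pushes the ZMP away from the support region and the reward drops, which is the signal the actor-critic framework can exploit.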
arXiv Detail & Related papers (2025-02-24T14:53:45Z)
- Transformer Dynamics: A neuroscientific approach to interpretability of large language models [0.0]
We focus on the residual stream (RS) in transformer models, conceptualizing it as a dynamical system evolving across layers.
We find that activations of individual RS units exhibit strong continuity across layers, despite the RS being a non-privileged basis.
In reduced-dimensional spaces, the RS follows a curved trajectory with attractor-like dynamics in the lower layers.
arXiv Detail & Related papers (2025-02-17T18:49:40Z)
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
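For context, the classical Kuramoto phase dynamics that AKOrN builds on can be simulated in a few lines. Identical natural frequencies and the coupling strength below are illustrative choices that make the population synchronize; this is the textbook model, not the AKOrN unit itself:

```python
import numpy as np

# Kuramoto phase update: dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
rng = np.random.default_rng(2)
N, K, dt = 32, 2.0, 0.01
omega = np.zeros(N)                   # identical natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)  # random initial phases

def order_parameter(theta):
    """|r| = 1 means full synchrony, ~0 means incoherence."""
    return np.abs(np.mean(np.exp(1j * theta)))

r0 = order_parameter(theta)
for _ in range(5000):
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + (K / N) * coupling)
r1 = order_parameter(theta)
print(r0, r1)  # coupling drives the population from incoherence toward synchrony
```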
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Rethinking Robustness Assessment: Adversarial Attacks on Learning-based Quadrupedal Locomotion Controllers [33.50779001548997]
Legged locomotion has recently achieved remarkable success with the progress of machine learning techniques.
We propose a computational method that leverages sequential adversarial attacks to identify weaknesses in learned locomotion controllers.
Our research demonstrates that even state-of-the-art robust controllers can fail significantly under well-designed, low-magnitude adversarial sequences.
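The idea of searching for a low-magnitude worst-case perturbation sequence can be demonstrated on a toy stabilized linear system with simple random search. This is a stand-in for the paper's sequential attack; the dynamics, perturbation bound, and search budget are all invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy closed-loop system: a linear "controller" u = -K*x stabilizes x toward 0.
A, B, K = 0.95, 1.0, 0.4
H, eps = 20, 0.05  # horizon and per-step perturbation bound

def rollout(delta):
    """Final state deviation when perturbation delta[t] is added to each action."""
    x = 1.0
    for d in delta:
        u = -K * x + d
        x = A * x + B * u
    return abs(x)

# (1+1)-style random search for the worst bounded perturbation sequence.
best = np.zeros(H)
for _ in range(2000):
    cand = np.clip(best + 0.01 * rng.normal(size=H), -eps, eps)
    if rollout(cand) > rollout(best):
        best = cand

print(rollout(np.zeros(H)), rollout(best))  # attack greatly increases deviation
```

Even with each perturbation bounded by 0.05, the accumulated sequence drives the final state far from where the unperturbed controller would leave it, mirroring the paper's finding at toy scale.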
arXiv Detail & Related papers (2024-05-21T00:26:11Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture compatible and scalable with deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Learning low-dimensional dynamics from whole-brain data improves task capture [2.82277518679026]
We introduce a novel approach to learning low-dimensional approximations of neural dynamics by using a sequential variational autoencoder (SVAE)
Our method finds smooth dynamics that can predict cognitive processes with accuracy higher than classical methods.
We evaluate our approach on various task-fMRI datasets, including motor, working memory, and relational processing tasks.
arXiv Detail & Related papers (2023-05-18T18:43:13Z)
- From Data-Fitting to Discovery: Interpreting the Neural Dynamics of Motor Control through Reinforcement Learning [3.6159844753873087]
We study structured neural activity of a virtual robot performing legged locomotion.
We find that embodied agents trained to walk exhibit smooth dynamics that avoid tangling -- or opposing neural trajectories in neighboring neural space.
arXiv Detail & Related papers (2023-05-18T16:52:27Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Learning threshold neurons via the "edge of stability" [33.64379851307296]
Existing analyses of neural network training often operate under the unrealistic assumption of an extremely small learning rate.
"Edge of stability" or "unstable dynamics" works on two-layer neural networks.
This paper performs a detailed analysis of gradient descent for simplified models of two-layer neural networks.
arXiv Detail & Related papers (2022-12-14T19:27:03Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
- Deep Reinforcement Learning for Neural Control [4.822598110892847]
We present a novel methodology for control of neural circuits based on deep reinforcement learning.
We map neural circuits and their connectomes into a grid-world-like setting and infer the actions needed to achieve a target behavior.
Our framework successfully infers neuropeptidic currents and synaptic architectures for control of chemotaxis.
arXiv Detail & Related papers (2020-06-12T17:41:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.