Unsupervised Hebbian Learning on Point Sets in StarCraft II
- URL: http://arxiv.org/abs/2207.12323v1
- Date: Wed, 13 Jul 2022 13:09:48 GMT
- Title: Unsupervised Hebbian Learning on Point Sets in StarCraft II
- Authors: Beomseok Kang, Harshit Kumar, Saurabh Dash, Saibal Mukhopadhyay
- Abstract summary: We present a novel Hebbian learning method to extract the global feature of point sets formed by StarCraft II game units.
Our model consists of an encoder, an LSTM, and a decoder, and we train the encoder with an unsupervised learning method.
- Score: 12.095363582092904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning the evolution of a real-time strategy (RTS) game is a challenging
problem for artificial intelligence (AI) systems. In this paper, we present a
novel Hebbian learning method to extract the global feature of point sets
formed by StarCraft II game units, and apply it to predict the movement of the
points. Our model consists of an encoder, an LSTM, and a decoder, and we train
the encoder with an unsupervised learning method. We introduce the concept of
neuron-activity-aware learning combined with k-Winner-Takes-All. The optimal
value of neuron activity is derived mathematically, and experiments support the
effectiveness of the concept on the downstream task. Our Hebbian learning rule
benefits the prediction with lower loss compared to self-supervised learning.
Our model also significantly reduces computational cost, in terms of activations
and FLOPs, compared to a frame-based approach.
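The abstract does not reproduce the exact update equations; as a rough sketch of the core idea, Hebbian learning gated by k-Winner-Takes-All on a linear encoder might look as follows (the Oja-style stabilizer and all names and constants are illustrative assumptions, not the paper's derivation):

```python
import numpy as np

def kwta_hebbian_step(W, x, k=16, lr=0.01):
    """One unsupervised update of a linear encoder W (units x inputs).
    Hebbian rule gated by k-Winner-Takes-All: only the k most active
    units move their weights toward the current input. Illustrative
    sketch, not the paper's exact neuron-activity-aware rule."""
    y = W @ x                         # unit activations
    winners = np.argsort(y)[-k:]      # indices of the k strongest units
    for i in winners:
        # Oja-style term keeps weight norms bounded (assumption)
        W[i] += lr * y[i] * (x - y[i] * W[i])
    return W

# toy usage: 64 units learning from random 128-dim point-set features
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 128))
for _ in range(1000):
    W = kwta_hebbian_step(W, rng.normal(size=128))
```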
Related papers
- Game Theory Meets Statistical Mechanics in Deep Learning Design [0.06990493129893112]
We present a novel deep representation that seamlessly merges principles of game theory with laws of statistical mechanics.
It performs feature extraction, dimensionality reduction, and pattern classification within a single learning framework.
arXiv Detail & Related papers (2024-10-16T06:02:18Z)
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
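The framework's actual differential equations are not given in this summary; as a generic illustration of solver-free integration, a simple neural state ODE can be advanced with a hand-rolled forward Euler step (the dynamics and constants below are assumptions):

```python
import numpy as np

def integrate_neural_state(W, x0, dt=0.01, steps=500):
    """Advance the leaky neural dynamics dx/dt = -x + tanh(W @ x)
    with explicit forward Euler; no external ODE solver is needed.
    The dynamics here are a placeholder, not the paper's equations."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + np.tanh(W @ x))
    return x

rng = np.random.default_rng(0)
x_final = integrate_neural_state(0.5 * rng.normal(size=(8, 8)),
                                 rng.normal(size=8))
```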
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Neural Population Learning beyond Symmetric Zero-sum Games [52.20454809055356]
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z)
- Bridging Logic and Learning: A Neural-Symbolic Approach for Enhanced Reasoning in Neural Models (ASPER) [0.13053649021965597]
This paper introduces an approach designed to improve the performance of neural models in learning reasoning tasks.
It achieves this by integrating Answer Set Programming solvers and domain-specific expertise.
The model shows a significant improvement in solving Sudoku puzzles using only 12 puzzles for training and testing.
arXiv Detail & Related papers (2023-12-18T19:06:00Z)
- Learning Two-Player Mixture Markov Games: Kernel Function Approximation and Correlated Equilibrium [157.0902680672422]
We consider learning Nash equilibria in two-player zero-sum Markov Games with nonlinear function approximation.
We propose a novel online learning algorithm to find a Nash equilibrium by minimizing the duality gap.
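For context, the duality gap that such an algorithm drives to zero can be written as below; the value function V and the policy symbols are generic zero-sum-game notation assumed here, not taken from the paper:

```latex
% Duality gap of a strategy pair (\pi, \nu) in a two-player zero-sum
% game with value V(\pi, \nu) for the max-player (generic notation).
\[
  \mathrm{gap}(\pi, \nu)
    \;=\; \max_{\pi'} V(\pi', \nu) \;-\; \min_{\nu'} V(\pi, \nu')
    \;\ge\; 0 .
\]
% The gap vanishes exactly at a Nash equilibrium, so minimizing it
% during online play converges to equilibrium strategies.
```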
arXiv Detail & Related papers (2022-08-10T14:21:54Z)
- Hebbian Continual Representation Learning [9.54473759331265]
Continual Learning aims to bring machine learning into a more realistic scenario.
We investigate whether biologically inspired Hebbian learning is useful for tackling continual learning challenges.
arXiv Detail & Related papers (2022-06-28T09:21:03Z)
- Unsupervised Learning of Neurosymbolic Encoders [40.3575054882791]
We present a framework for the unsupervised learning of neurosymbolic encoders, i.e., encoders obtained by composing neural networks with symbolic programs from a domain-specific language.
Such a framework can naturally incorporate symbolic expert knowledge into the learning process and lead to more interpretable and factorized latent representations than fully neural encoders.
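As a toy illustration of the composition (not the paper's DSL or API), a neurosymbolic encoder can be sketched as a neural feature extractor whose outputs are routed through a small, human-readable symbolic program; every rule and name below is hypothetical:

```python
import numpy as np

def neural_features(x, W):
    """Neural half of the encoder: a learned feature extractor."""
    return np.tanh(W @ x)

def symbolic_program(h):
    """Symbolic half: a tiny hand-readable program from a toy DSL.
    Guarded rules like these make the latent code interpretable."""
    moving = 1.0 if h[0] > 0.0 else 0.0       # discrete predicate
    speed = abs(h[1]) if moving else 0.0      # guarded attribute
    return np.array([moving, speed])

# composed encoder: x -> neural features -> symbolic latent code
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))
z = symbolic_program(neural_features(rng.normal(size=8), W))
```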
arXiv Detail & Related papers (2021-07-28T02:16:14Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
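The summary does not spell out the update equations; as a generic sketch of the predictive-coding family that neural generative coding belongs to, learning can proceed from purely local prediction errors (names and constants below are illustrative, not the paper's exact ANGC updates):

```python
import numpy as np

def local_generative_step(W, z, x, lr=0.01, eta=0.1, settle=20):
    """One generic predictive-coding-style step. The layer generates a
    prediction of x; the local error updates both the latent state and
    the weights, with no backpropagation through a global graph."""
    for _ in range(settle):
        e = x - W @ z              # local prediction error
        z = z + eta * (W.T @ e)    # settle the latent state
    e = x - W @ z                  # final error after settling
    W = W + lr * np.outer(e, z)    # Hebbian-like local weight update
    return W, z

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 8))
W, z = local_generative_step(W, rng.normal(size=8), rng.normal(size=16))
```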
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms that generalize well across classical control tasks, gridworld-type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Towards Efficient Processing and Learning with Spikes: New Approaches for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules that outperform other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without requiring task-specific constraints.
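The proposed multi-spike rules are not given in this summary; for reference, the classic pairwise STDP rule that the paper re-examines can be sketched as follows (textbook form with illustrative constants):

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Classic pairwise STDP weight change for a spike-time difference
    dt = t_post - t_pre (textbook rule, not the paper's new rules).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)

# example: a pre spike 5 ms before a post spike strengthens the synapse
print(stdp_dw(5.0))   # ~ +0.0078
```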
arXiv Detail & Related papers (2020-05-02T06:41:20Z)
- Integration of Leaky-Integrate-and-Fire-Neurons in Deep Learning Architectures [0.0]
We show that biologically inspired neuron models provide novel and efficient ways of encoding information.
We derive simple update rules for the LIF units from their differential equations, which are easy to integrate numerically.
We apply our method to the IRIS blossoms image data set and show that the training technique can be used to train LIF neurons on image classification tasks.
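The paper's exact derivation is not reproduced here; a standard forward-Euler discretization of the LIF membrane equation, with illustrative constants, looks like this:

```python
import numpy as np

def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """One forward-Euler step of a leaky integrate-and-fire neuron:
        tau * dv/dt = -(v - v_rest) + i_in
    The neuron spikes and resets when v crosses the threshold.
    Constants here are illustrative, not taken from the paper."""
    v = v + (dt / tau) * (-(v - v_rest) + i_in)
    spike = v >= v_th
    v = np.where(spike, v_reset, v)
    return v, spike

# drive a single neuron with constant input until it spikes
v = np.array(0.0)
for t in range(100):
    v, s = lif_step(v, i_in=1.5)
    if s:
        break
```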
arXiv Detail & Related papers (2020-04-28T13:57:42Z)