Unsupervised Hebbian Learning on Point Sets in StarCraft II
- URL: http://arxiv.org/abs/2207.12323v1
- Date: Wed, 13 Jul 2022 13:09:48 GMT
- Title: Unsupervised Hebbian Learning on Point Sets in StarCraft II
- Authors: Beomseok Kang, Harshit Kumar, Saurabh Dash, Saibal Mukhopadhyay
- Abstract summary: We present a novel Hebbian learning method to extract the global feature of point sets in StarCraft II game units.
Our model comprises an encoder, an LSTM, and a decoder, and we train the encoder with an unsupervised learning method.
- Score: 12.095363582092904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning the evolution of real-time strategy (RTS) games is a
challenging problem in artificial intelligence (AI) systems. In this paper, we
present a novel Hebbian learning method to extract the global feature of point
sets in StarCraft II game units, and its application to predicting the movement
of the points. Our model includes an encoder, an LSTM, and a decoder, and we
train the encoder with an unsupervised learning method. We introduce the
concept of neuron activity aware learning combined with k-Winner-Takes-All. The
optimal value of neuron activity is mathematically derived, and experiments
support the effectiveness of the concept on the downstream task. Our Hebbian
learning rule yields lower prediction loss than self-supervised learning. Our
model also significantly reduces computational cost, such as activations and
FLOPs, compared to a frame-based approach.
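The abstract's combination of a Hebbian rule with k-Winner-Takes-All can be illustrated with a minimal sketch: only the k most active neurons update their weights on each input. The layer sizes, `k`, the learning rate, and the Oja-style normalization below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, k, lr = 16, 32, 4, 0.01   # assumed sizes, not from the paper
W = rng.normal(scale=0.1, size=(n_out, n_in))

def hebbian_kwta_step(W, x, k=k, lr=lr):
    """One Hebbian update where only the k most active neurons learn."""
    y = W @ x                         # pre-activations, shape (n_out,)
    winners = np.argsort(y)[-k:]      # indices of the k largest activations
    mask = np.zeros_like(y)
    mask[winners] = 1.0               # k-Winner-Takes-All gating
    # Oja-style normalized Hebbian rule keeps the winners' weights bounded;
    # non-winner rows get exactly zero update because mask * y is zero there.
    g = (mask * y)[:, None]
    dW = lr * g * (x[None, :] - g * W)
    return W + dW

for _ in range(100):
    x = rng.normal(size=n_in)
    W = hebbian_kwta_step(W, x)
```

Because the update is gated by `mask`, at most `k` rows of `W` change per step, which is the sparsity that makes k-WTA Hebbian learning cheap relative to a dense update.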
Related papers
- Neural Population Learning beyond Symmetric Zero-sum Games [52.20454809055356]
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z) - Bridging Logic and Learning: A Neural-Symbolic Approach for Enhanced Reasoning in Neural Models (ASPER) [0.13053649021965597]
This paper introduces an approach designed to improve the performance of neural models in learning reasoning tasks.
It achieves this by integrating Answer Set Programming solvers and domain-specific expertise.
The model shows a significant improvement in solving Sudoku puzzles using only 12 puzzles for training and testing.
arXiv Detail & Related papers (2023-12-18T19:06:00Z) - Learning Objective-Specific Active Learning Strategies with Attentive Neural Processes [72.75421975804132]
Learning Active Learning (LAL) proposes to learn the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z) - Neuro-Symbolic Sudoku Solver [0.0]
We extend the functionality of the Neuro Logic Machine (NLM) to solve a 9×9 game of Sudoku.
In our study, we showcase an NLM which is capable of obtaining 100% accuracy for solving a Sudoku with empty cells ranging from 3 to 10.
We analyze the behaviour of NLMs with a backtracking algorithm by comparing the convergence time using a graph plot on the same problem.
arXiv Detail & Related papers (2023-07-02T20:04:01Z) - Learning Two-Player Mixture Markov Games: Kernel Function Approximation
and Correlated Equilibrium [157.0902680672422]
We consider learning Nash equilibria in two-player zero-sum Markov Games with nonlinear function approximation.
We propose a novel online learning algorithm to find a Nash equilibrium by minimizing the duality gap.
arXiv Detail & Related papers (2022-08-10T14:21:54Z) - Hebbian Continual Representation Learning [9.54473759331265]
Continual Learning aims to bring machine learning into a more realistic scenario.
We investigate whether biologically inspired Hebbian learning is useful for tackling continual challenges.
arXiv Detail & Related papers (2022-06-28T09:21:03Z) - Unsupervised Learning of Neurosymbolic Encoders [40.3575054882791]
We present a framework for the unsupervised learning of neurosymbolic encoders, i.e., encoders obtained by composing neural networks with symbolic programs from a domain-specific language.
Such a framework can naturally incorporate symbolic expert knowledge into the learning process and lead to more interpretable and factorized latent representations than fully neural encoders.
arXiv Detail & Related papers (2021-07-28T02:16:14Z) - Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z) - Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z) - Towards Efficient Processing and Learning with Spikes: New Approaches for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance over other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP with its limitations being presented.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
arXiv Detail & Related papers (2020-05-02T06:41:20Z) - Integration of Leaky-Integrate-and-Fire Neurons in Deep Learning Architectures [0.0]
We show that biologically inspired neuron models provide novel and efficient ways of information encoding.
We derived simple update-rules for the LIF units from the differential equations, which are easy to numerically integrate.
We apply our method to the IRIS blossoms image data set and show that the training technique can be used to train LIF neurons on image classification tasks.
arXiv Detail & Related papers (2020-04-28T13:57:42Z)
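The LIF entry above mentions deriving simple, easy-to-integrate update rules from the neuron's differential equation. A minimal sketch of that idea is a forward-Euler discretization of the membrane equation tau * dv/dt = -v + i_in with threshold-and-reset; the time constant, threshold, and step size here are illustrative assumptions, not the paper's values.

```python
import numpy as np

def lif_step(v, i_in, dt=1.0, tau=10.0, v_th=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    Integrates tau * dv/dt = -v + i_in, then emits a spike and resets
    the membrane potential wherever it crosses the threshold v_th.
    """
    v = v + (dt / tau) * (-v + i_in)   # leaky integration
    spiked = v >= v_th                 # threshold crossing
    v = np.where(spiked, v_reset, v)   # reset after a spike
    return v, spiked

# Drive a single neuron with a constant supra-threshold input and
# record its spike train; it charges up, fires, resets, and repeats.
v, spikes = np.array(0.0), []
for t in range(100):
    v, s = lif_step(v, 1.5)
    spikes.append(bool(s))
```

With a constant input of 1.5 the potential relaxes toward 1.5, crosses the threshold of 1.0, and the neuron fires periodically, which is the regular spiking behavior such update rules are meant to capture.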
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.