Intrinsically motivated graph exploration using network theories of
human curiosity
- URL: http://arxiv.org/abs/2307.04962v4
- Date: Fri, 1 Dec 2023 13:35:01 GMT
- Title: Intrinsically motivated graph exploration using network theories of
human curiosity
- Authors: Shubhankar P. Patankar, Mathieu Ouellet, Juan Cervino, Alejandro
Ribeiro, Kieran A. Murphy and Dani S. Bassett
- Abstract summary: We propose a novel approach for exploring graph-structured data motivated by two theories of human curiosity.
We use these proposed features as rewards for graph neural-network-based reinforcement learning.
- Score: 71.2717061477241
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intrinsically motivated exploration has proven useful for reinforcement
learning, even without additional extrinsic rewards. When the environment is
naturally represented as a graph, how to guide exploration best remains an open
question. In this work, we propose a novel approach for exploring
graph-structured data motivated by two theories of human curiosity: the
information gap theory and the compression progress theory. The theories view
curiosity as an intrinsic motivation to optimize for topological features of
subgraphs induced by nodes visited in the environment. We use these proposed
features as rewards for graph neural-network-based reinforcement learning. On
multiple classes of synthetically generated graphs, we find that trained agents
generalize to longer exploratory walks and larger environments than are seen
during training. Our method computes more efficiently than the greedy
evaluation of the relevant topological properties. The proposed intrinsic
motivations bear particular relevance for recommender systems. We demonstrate
that next-node recommendations considering curiosity are more predictive of
human choices than PageRank centrality in several real-world graph
environments.
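The core idea of the abstract can be sketched as follows: as an agent walks a graph, each candidate next node is scored by how it changes a topological feature of the subgraph induced by the visited nodes, and that change serves as an intrinsic reward. This is a minimal illustration only; the graph (NetworkX's karate club), the choice of edge density as the feature, and the function names are all assumptions for the sketch and stand in for the curiosity-theoretic features and learned GNN policy used in the paper.

```python
# Hypothetical sketch: reward exploration by the change in a topological
# feature of the subgraph induced by the visited nodes.
import networkx as nx

def subgraph_feature(G, visited):
    """Placeholder topological feature: edge density of the induced subgraph.
    (The paper optimizes curiosity-theoretic features instead.)"""
    H = G.subgraph(visited)
    n = H.number_of_nodes()
    if n < 2:
        return 0.0
    return H.number_of_edges() / (n * (n - 1) / 2)

def intrinsic_reward(G, visited, next_node):
    """Intrinsic reward: how the feature changes when next_node is added."""
    before = subgraph_feature(G, visited)
    after = subgraph_feature(G, visited | {next_node})
    return after - before

# Greedy one-step illustration on a small example graph.
G = nx.karate_club_graph()
visited = {0, 1, 2}
candidates = set(G.neighbors(2)) - visited
best = max(candidates, key=lambda v: intrinsic_reward(G, visited, v))
```

A learned policy, as in the paper, replaces this greedy evaluation: a GNN trained with these rewards amortizes the feature computation, which is why the authors report better efficiency than greedily evaluating the topological properties at every step.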
Related papers
- On Discrepancies between Perturbation Evaluations of Graph Neural
Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
arXiv Detail & Related papers (2024-01-01T02:03:35Z)
- Neural-Symbolic Recommendation with Graph-Enhanced Information [7.841447116972524]
We build a neuro-symbolic recommendation model with both global implicit reasoning ability and local explicit logic reasoning ability.
We transform user behavior into propositional logic expressions to achieve recommendations from the perspective of cognitive reasoning.
arXiv Detail & Related papers (2023-07-11T06:29:31Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Saliency Prediction with External Knowledge [27.75589849982756]
We develop a new Graph Semantic Saliency Network (GraSSNet) that constructs a graph that encodes semantic relationships learned from external knowledge.
A Spatial Graph Attention Network is then developed to update saliency features based on the learned graph.
Experiments show that the proposed model learns to predict saliency from the external knowledge and outperforms the state-of-the-art on four saliency benchmarks.
arXiv Detail & Related papers (2020-07-27T20:12:28Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
However, performance often deteriorates as more layers are stacked; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for
Question Answering Over Dynamic Contexts [81.4757750425247]
We study question answering over a dynamic textual environment.
We develop a graph neural network over the constructed graph, and train the model in an end-to-end manner.
arXiv Detail & Related papers (2020-04-25T04:53:54Z) - EvoNet: A Neural Network for Predicting the Evolution of Dynamic Graphs [26.77596449192451]
We propose a model that predicts the evolution of dynamic graphs.
Specifically, we use a graph neural network along with a recurrent architecture to capture the temporal evolution patterns of dynamic graphs.
We evaluate the proposed model on several artificial datasets following common network evolving dynamics, as well as on real-world datasets.
arXiv Detail & Related papers (2020-03-02T12:59:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.