Graphs for deep learning representations
- URL: http://arxiv.org/abs/2012.07439v1
- Date: Mon, 14 Dec 2020 11:51:23 GMT
- Title: Graphs for deep learning representations
- Authors: Carlos Lassance
- Abstract summary: We introduce a graph formalism based on the recent advances in Graph Signal Processing (GSP).
Namely, we use graphs to represent the latent spaces of deep neural networks.
We showcase that this graph formalism allows us to answer various questions including: ensuring generalization abilities, reducing the amount of arbitrary choices in the design of the learning process, improving robustness to small perturbations added to the inputs, and reducing computational complexity.
- Score: 1.0152838128195467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Deep Learning methods have achieved state of the art
performance in a vast range of machine learning tasks, including image
classification and multilingual automatic text translation. These architectures
are trained to solve machine learning tasks in an end-to-end fashion. In order
to reach top-tier performance, these architectures often require a very large
number of trainable parameters. This has multiple undesirable consequences, and
in order to tackle these issues, it is desirable to be able to open the black
boxes of deep learning architectures. Problematically, doing so is difficult
due to the high dimensionality of representations and the stochasticity of the
training process. In this thesis, we investigate these architectures by
introducing a graph formalism based on the recent advances in Graph Signal
Processing (GSP). Namely, we use graphs to represent the latent spaces of deep
neural networks. We showcase that this graph formalism allows us to answer
various questions including: ensuring generalization abilities, reducing the
amount of arbitrary choices in the design of the learning process, improving
robustness to small perturbations added to the inputs, and reducing
computational complexity.
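To make the formalism concrete: one recurring construction in this line of work builds a similarity graph over a batch of latent representations and measures how smoothly the label signal varies on it via the Laplacian quadratic form. The sketch below is a minimal illustration under assumed choices (k-NN cosine graph, combinatorial Laplacian, one-hot label signals), not the thesis's exact construction.

```python
import numpy as np

def latent_graph_smoothness(features, labels, k=5):
    """Build a k-NN cosine-similarity graph over latent representations
    and score the smoothness of the label signal with the Laplacian
    quadratic form s^T L s (lower = smoother = classes better separated)."""
    # Cosine similarity between all pairs of latent vectors.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)

    # Keep each node's k strongest (nonnegative) neighbors, then symmetrize.
    n = features.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sim[i])[-k:]
        adj[i, nbrs] = np.maximum(sim[i, nbrs], 0.0)
    adj = np.maximum(adj, adj.T)

    # Combinatorial Laplacian L = D - A.
    lap = np.diag(adj.sum(axis=1)) - adj

    # One-hot label signal; trace(S^T L S) sums smoothness over classes.
    onehot = np.eye(labels.max() + 1)[labels]
    return np.trace(onehot.T @ lap @ onehot)
```

Computed layer by layer in a trained network, a decreasing score would suggest that same-class inputs cluster progressively in the latent space, which is the kind of property a graph formalism of this sort can monitor.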
Related papers
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction that is orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- MultiScale MeshGraphNets [65.26373813797409]
We propose two complementary approaches to improve the framework from MeshGraphNets.
First, we demonstrate that it is possible to learn accurate surrogate dynamics of a high-resolution system on a much coarser mesh.
Second, we introduce a hierarchical approach (MultiScale MeshGraphNets) which passes messages on two different resolutions.
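As a toy illustration of message passing at two resolutions (not the paper's actual learned architecture, which uses edge and node MLPs), assume a fine mesh graph, a coarser graph, and an assignment matrix mapping fine nodes to coarse nodes:

```python
import numpy as np

def two_level_message_pass(adj_fine, adj_coarse, restrict, x_fine):
    """One round of two-resolution message passing: aggregate on the fine
    graph, restrict to the coarse graph, aggregate there, and map back.
    `restrict` is an assumed (n_coarse, n_fine) assignment matrix."""
    h_fine = adj_fine @ x_fine                   # fine-level messages
    h_coarse = adj_coarse @ (restrict @ h_fine)  # coarse-level messages
    return h_fine + restrict.T @ h_coarse        # fuse both scales
```

The coarse pass is what lets information travel long distances in few steps, which is the motivation the summary gives for the hierarchy.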
arXiv Detail & Related papers (2022-10-02T20:16:20Z)
- Convolutional Learning on Multigraphs [153.20329791008095]
We develop convolutional information processing on multigraphs and introduce convolutional multigraph neural networks (MGNNs).
To capture the complex dynamics of information diffusion within and across each of the multigraph's classes of edges, we formalize a convolutional signal processing model.
We develop a multigraph learning architecture, including a sampling procedure to reduce computational complexity.
The introduced architecture is applied towards optimal wireless resource allocation and a hate speech localization task, offering improved performance over traditional graph neural networks.
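A rough sketch of what a multigraph convolution can look like, assuming the multigraph is given as one adjacency matrix per edge class and each class gets its own polynomial graph filter; the filter form and names here are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

def multigraph_conv(adjs, x, weights):
    """One convolutional layer on a multigraph.

    adjs    : list of (n, n) adjacency matrices, one per edge class
    x       : (n, d_in) node signal
    weights : per edge class, a list of (d_in, d_out) polynomial filter taps
    Output  : (n, d_out) filtered signal, summed over edge classes.
    """
    out = 0.0
    for adj, taps in zip(adjs, weights):
        shifted = x
        for w in taps:
            out = out + shifted @ w   # tap k applied to A^k x
            shifted = adj @ shifted   # next graph shift
    return np.maximum(out, 0.0)       # ReLU nonlinearity
```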
arXiv Detail & Related papers (2022-09-23T00:33:04Z)
- Reinforced Continual Learning for Graphs [18.64268861430314]
This paper proposes a graph continual learning strategy that combines the architecture-based and memory-based approaches.
It is numerically validated with several graph continual learning benchmark problems in both task-incremental learning and class-incremental learning settings.
arXiv Detail & Related papers (2022-09-04T07:49:59Z)
- Metric Based Few-Shot Graph Classification [18.785949422663233]
Few-shot learning allows employing modern deep learning models in scarce data regimes without waiving their effectiveness.
We show that a simple distance metric learning baseline with a state-of-the-art graph embedder obtains competitive results on the task.
We also propose a MixUp-based online data augmentation technique acting in the latent space and show its effectiveness on the task.
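The latent-space MixUp idea reduces to convex combinations of embeddings and their labels; a minimal sketch, with the Beta parameter and random pairing as assumed choices:

```python
import numpy as np

def latent_mixup(embeddings, labels_onehot, alpha=0.2, rng=None):
    """Mix random pairs of graph embeddings (and their one-hot labels)
    with a Beta(alpha, alpha)-distributed coefficient."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(embeddings))
    mixed_z = lam * embeddings + (1 - lam) * embeddings[perm]
    mixed_y = lam * labels_onehot + (1 - lam) * labels_onehot[perm]
    return mixed_z, mixed_y
```

Because the mixing happens after the graph embedder, no synthetic graphs ever need to be constructed, which is what makes the augmentation cheap.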
arXiv Detail & Related papers (2022-06-08T06:29:46Z)
- Neural Architecture Search for Dense Prediction Tasks in Computer Vision [74.9839082859151]
Deep learning has led to a rising demand for neural network architecture engineering.
Neural architecture search (NAS) aims at automatically designing neural network architectures in a data-driven manner rather than manually.
NAS has become applicable to a much wider range of problems in computer vision.
arXiv Detail & Related papers (2022-02-15T08:06:50Z)
- Learning through structure: towards deep neuromorphic knowledge graph embeddings [0.5906031288935515]
We propose a strategy to map deep graph learning architectures for knowledge graph reasoning to neuromorphic architectures.
Based on the insight that random and untrained graph neural networks are able to preserve local graph structures, we compose a frozen neural network with shallow knowledge graph embedding models.
We experimentally show that already on conventional computing hardware, this leads to a significant speedup and memory reduction while maintaining a competitive performance level.
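A toy version of the composition described above: a randomly initialized, untrained graph convolution serves as a frozen feature extractor, and only a shallow embedding head on top would be trained. The propagation rule, widths, and normalization here are assumptions for illustration.

```python
import numpy as np

def frozen_random_gcn(adj, x, depth=2, width=64, seed=0):
    """Propagate features through `depth` random, untrained GCN-style
    layers; weights are fixed at initialization and never updated."""
    rng = np.random.default_rng(seed)
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    norm_adj = adj / deg                      # simple row normalization
    h = x
    for _ in range(depth):
        w = rng.normal(0, h.shape[1] ** -0.5, size=(h.shape[1], width))
        h = np.tanh(norm_adj @ h @ w)         # frozen layer
    return h                                  # input to a trainable shallow head
```

Since the frozen part is never trained, it can be run once (or offloaded to specialized hardware), which is where the reported speedup and memory savings come from.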
arXiv Detail & Related papers (2021-09-21T18:01:04Z)
- SIGN: Scalable Inception Graph Neural Networks [4.5158585619109495]
We propose a new, efficient and scalable graph deep learning architecture that sidesteps the need for graph sampling.
Our architecture allows using different local graph operators to best suit the task at hand.
We obtain state-of-the-art results on ogbn-papers100M, the largest public graph dataset, with over 110 million nodes and 1.5 billion edges.
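The "no graph sampling" property comes from precomputation: operator-feature products such as A^k X are computed once offline, so minibatch training touches only node-wise features and a plain MLP. A minimal sketch, assuming symmetrically normalized adjacency powers as the local operators (the paper also allows other operators, e.g. PPR-based or triangle-based ones):

```python
import numpy as np

def sign_precompute(adj, x, num_hops=3):
    """Precompute [X, AX, A^2 X, ...] with a symmetrically normalized
    adjacency; done once, so minibatch training needs no graph access."""
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0                # guard isolated nodes
    d_inv_sqrt = deg ** -0.5
    norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    feats, h = [x], x
    for _ in range(num_hops):
        h = norm_adj @ h
        feats.append(h)
    return np.concatenate(feats, axis=1)  # input to a standard MLP
```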
arXiv Detail & Related papers (2020-04-23T14:46:10Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, some kinds of graph applications, such as graph compression and edge partition, are very hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.