Directed Acyclic Graph Convolutional Networks
- URL: http://arxiv.org/abs/2506.12218v1
- Date: Fri, 13 Jun 2025 20:40:50 GMT
- Title: Directed Acyclic Graph Convolutional Networks
- Authors: Samuel Rey, Hamed Ajorlou, Gonzalo Mateos
- Abstract summary: Directed acyclic graphs (DAGs) are central to science and engineering applications including causal inference, scheduling, and neural architecture search. We introduce the DAG Convolutional Network (DCN), a novel graph neural network (GNN) architecture designed specifically for convolutional learning from signals supported on DAGs.
- Score: 10.282099295800322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Directed acyclic graphs (DAGs) are central to science and engineering applications including causal inference, scheduling, and neural architecture search. In this work, we introduce the DAG Convolutional Network (DCN), a novel graph neural network (GNN) architecture designed specifically for convolutional learning from signals supported on DAGs. The DCN leverages causal graph filters to learn nodal representations that account for the partial ordering inherent to DAGs, a strong inductive bias not present in conventional GNNs. Unlike prior art in machine learning over DAGs, DCN builds on formal convolutional operations that admit spectral-domain representations. We further propose the Parallel DCN (PDCN), a model that feeds input DAG signals to a parallel bank of causal graph-shift operators and processes these DAG-aware features using a shared multilayer perceptron. This way, PDCN decouples model complexity from graph size while maintaining satisfactory predictive performance. The architectures' permutation equivariance and expressive power properties are also established. Comprehensive numerical tests across several tasks, datasets, and experimental conditions demonstrate that (P)DCN compares favorably with state-of-the-art baselines in terms of accuracy, robustness, and computational efficiency. These results position (P)DCN as a viable framework for deep learning from DAG-structured data that is designed from first (graph) signal processing principles.
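The core operation is easy to state. Below is a minimal, hypothetical sketch (not the authors' code) of a causal graph filter: a polynomial in the DAG's adjacency matrix, which is strictly lower triangular once nodes are topologically sorted and therefore nilpotent, so only finitely many taps contribute. Function and variable names are illustrative.

```python
# Minimal sketch (not the authors' code) of a causal graph filter on a DAG:
# a polynomial in the adjacency matrix A. With nodes topologically sorted,
# A is strictly lower triangular and hence nilpotent, so only finitely many
# taps contribute.
import numpy as np

def causal_graph_filter(A, x, taps):
    """Return y = sum_k taps[k] * A^k @ x."""
    y = np.zeros_like(x, dtype=float)
    Ak_x = x.astype(float)              # A^0 @ x
    for h in taps:
        y += h * Ak_x
        Ak_x = A @ Ak_x                 # advance to the next shift power
    return y

# Toy DAG 0 -> 1 -> 2, already in topological order.
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
x = np.array([1., 0., 0.])              # impulse at the source node
print(causal_graph_filter(A, x, taps=[0.5, 0.3, 0.2]))  # [0.5 0.3 0.2]
```

In this picture, PDCN would amount to computing the bank {x, Ax, A²x, ...} once in parallel and passing the stacked features through a shared MLP, which is how it decouples parameter count from graph size.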
Related papers
- LayerDAG: A Layerwise Autoregressive Diffusion Model for Directed Acyclic Graph Generation [17.94316378710172]
This paper introduces LayerDAG, an autoregressive diffusion model, to generate realistic directed acyclic graphs (DAGs). By interpreting the partial order of nodes as a sequence of bipartite graphs, LayerDAG decouples the strong node dependencies into manageable units that can be processed sequentially (see the layering sketch below). Experiments on both synthetic and real-world flow graphs from various computing platforms show that LayerDAG generates valid DAGs with superior statistical properties and benchmarking performance.
arXiv Detail & Related papers (2024-11-04T17:47:15Z)
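As a hedged illustration of the layerwise view LayerDAG builds on (names are ours, not the paper's), the sketch below groups a DAG's nodes into topological layers with Kahn's algorithm; consecutive layers then form the bipartite graphs that can be generated one at a time.

```python
# Group DAG nodes into topological layers, one layer per step of Kahn's
# algorithm; consecutive layers form bipartite graphs.
from collections import defaultdict

def topological_layers(num_nodes, edges):
    indeg = [0] * num_nodes
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    frontier = [v for v in range(num_nodes) if indeg[v] == 0]
    layers = []
    while frontier:
        layers.append(frontier)
        nxt = []
        for u in frontier:
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        frontier = nxt
    return layers

# Diamond DAG 0 -> {1, 2} -> 3 yields layers [[0], [1, 2], [3]].
print(topological_layers(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))
```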
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
However, backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs, with a case study of semi-supervised learning (see the sketch below).
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
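For intuition, here is a minimal, hedged sketch of direct feedback alignment (the credit-assignment scheme DFA-GNN adapts to GNNs) on a plain two-layer network: the output error reaches the hidden layer through a fixed random matrix B1 rather than the transposed forward weights. All sizes and names are illustrative.

```python
# Direct feedback alignment on a toy two-layer network: the hidden layer
# receives the output error through a FIXED random matrix B1 instead of
# backpropagated gradients.
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_h, d_out = 32, 8, 16, 4
X = rng.normal(size=(n, d_in))
Y = rng.normal(size=(n, d_out))

W1 = 0.1 * rng.normal(size=(d_in, d_h))
W2 = 0.1 * rng.normal(size=(d_h, d_out))
B1 = rng.normal(size=(d_out, d_h))       # fixed random feedback, never trained

lr = 1e-2
for _ in range(100):
    H = np.tanh(X @ W1)                  # forward pass
    Y_hat = H @ W2
    e = Y_hat - Y                        # output error
    dW2 = H.T @ e / n
    dH = (e @ B1) * (1.0 - H**2)         # error routed through B1, not W2.T
    dW1 = X.T @ dH / n
    W1 -= lr * dW1
    W2 -= lr * dW2
print(float((e**2).mean()))              # loss after 100 DFA updates
```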
- Convolutional Learning on Directed Acyclic Graphs [10.282099295800322]
We develop a novel convolutional architecture tailored for learning from data defined over directed acyclic graphs (DAGs).
The resulting convolutional graph neural network integrates learnable DAG filters to account for the partial ordering induced by the graph topology.
arXiv Detail & Related papers (2024-05-05T21:30:18Z)
- Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs [94.44374472696272]
We investigate neural tangent kernels (NTKs) and alignment in the context of graph neural networks (GNNs).
Our results establish theoretical guarantees on the optimality of the alignment for a two-layer GNN.
These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data (see the sketch below).
arXiv Detail & Related papers (2023-10-16T19:54:21Z)
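Our loose reading of the takeaway, sketched below with illustrative names: build the graph shift operator from the empirical cross-covariance between input and output data. The symmetrization step is our simplification, not necessarily the paper's construction.

```python
# Build a shift operator from the empirical input-output cross-covariance.
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 6
X = rng.normal(size=(n, d))                      # rows are samples
Y = X @ rng.normal(size=(d, d)) + 0.1 * rng.normal(size=(n, d))

C_xy = X.T @ Y / n                               # empirical cross-covariance
S = 0.5 * (C_xy + C_xy.T)                        # symmetric shift operator
print(np.round(S, 2))
```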
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [72.33336385797944]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias. We show that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- PACE: A Parallelizable Computation Encoder for Directed Acyclic Graphs [30.294095901315746]
We propose a Parallelizable Attention-based structure (PACE) that processes nodes simultaneously and encodes DAGs in parallel.
PACE not only improves effectiveness over previous sequential DAG encoders, with significantly faster training and inference, but also generates smooth latent (DAG encoding) spaces (see the masked-attention sketch below).
arXiv Detail & Related papers (2022-03-19T11:56:51Z)
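A speculative sketch of the parallel-encoding idea (not PACE's actual architecture): run self-attention over all nodes at once, but mask the attention so each node sees only itself and its ancestors, preserving the partial order without sequential processing.

```python
# Self-attention over all DAG nodes in parallel, masked to self + ancestors.
import numpy as np

def dag_attention(Xf, A):
    """Xf: (n, d) node features; A[i, j] = 1 encodes an edge j -> i."""
    n = A.shape[0]
    reach = np.linalg.matrix_power(A + np.eye(n), n - 1) > 0  # self + ancestors
    scores = Xf @ Xf.T / np.sqrt(Xf.shape[1])
    scores[~reach] = -np.inf                                  # mask the rest
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ Xf

A = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])      # chain 0 -> 1 -> 2
Xf = np.random.default_rng(2).normal(size=(3, 4))
print(dag_attention(Xf, A))
```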
- Graph Neural Networks with Parallel Neighborhood Aggregations for Graph Classification [14.112444998191698]
We focus on graph classification using a graph neural network (GNN) model that precomputes node features using a bank of neighborhood aggregation graph operators arranged in parallel (see the sketch below).
These GNN models have a natural advantage of reduced training and inference time due to the precomputations.
We demonstrate via numerical experiments that the developed model achieves state-of-the-art performance on many diverse real-world datasets.
arXiv Detail & Related papers (2021-11-22T19:19:40Z)
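The precomputation idea resembles SIGN-style pipelines; the hedged sketch below applies a bank of powers of a row-normalized adjacency to the node features once, offline, so training only touches a cheap downstream classifier. The operator choice is ours, not necessarily the paper's.

```python
# Precompute a parallel bank of k-hop aggregated features once, offline.
import numpy as np

def precompute_features(A, X, K=3):
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    P = A / deg                               # row-normalized aggregation
    feats, Z = [X], X
    for _ in range(K):
        Z = P @ Z                             # one more hop of aggregation
        feats.append(Z)
    return np.concatenate(feats, axis=1)      # (n, (K+1)*d), computed once

A = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
X = np.eye(3)
print(precompute_features(A, X).shape)        # (3, 12)
```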
- Directed Acyclic Graph Neural Networks [9.420935957200518]
We focus on a special, yet widely used, type of graph -- DAGs -- and inject a stronger inductive bias -- partial ordering -- into the neural network design.
We propose the directed acyclic graph neural network (DAGNN), an architecture that processes information according to the flow defined by the partial order (see the sketch below).
arXiv Detail & Related papers (2021-01-20T04:50:16Z)
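A heavily simplified, hypothetical sketch in DAGNN's spirit: visit nodes in topological order and combine each node's features with the hidden states of its direct predecessors. DAGNN itself uses attention-based aggregation; the plain sum here is our simplification.

```python
# Process a DAG in topological order, aggregating predecessor states.
import numpy as np

def dag_forward(order, preds, X, W_in, W_h):
    """order: topological order; preds[v]: list of direct predecessors of v."""
    H = np.zeros((X.shape[0], W_in.shape[1]))
    for v in order:                           # predecessors finished first
        agg = H[preds[v]].sum(axis=0)         # empty list sums to zero
        H[v] = np.tanh(X[v] @ W_in + agg @ W_h)
    return H

rng = np.random.default_rng(3)
X = rng.normal(size=(3, 4))
preds = {0: [], 1: [0], 2: [1]}               # chain 0 -> 1 -> 2
print(dag_forward([0, 1, 2], preds, X,
                  rng.normal(size=(4, 5)), rng.normal(size=(5, 5))))
```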
- Data-Driven Learning of Geometric Scattering Networks [74.3283600072357]
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations (see the sketch below).
arXiv Detail & Related papers (2020-10-06T01:20:27Z)
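For context, a loose sketch of the fixed geometric scattering wavelets that LEGS relaxes: band-pass responses |P^(2^(j-1))x - P^(2^j)x| built from a lazy random-walk operator P, followed by the modulus nonlinearity. LEGS makes the dyadic scale selection learnable; this sketch hard-codes it.

```python
# Fixed diffusion wavelets at dyadic scales, with modulus nonlinearity.
import numpy as np

def scattering_wavelets(A, x, J=3):
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    P = 0.5 * (np.eye(len(A)) + A / deg)      # lazy random-walk operator
    outs, prev = [], P @ x                    # P^(2^0) @ x
    for j in range(1, J + 1):
        cur = np.linalg.matrix_power(P, 2**j) @ x
        outs.append(np.abs(prev - cur))       # wavelet response at scale j
        prev = cur
    return np.stack(outs)

A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])   # path graph
x = np.array([1., 0., 0.])
print(scattering_wavelets(A, x))
```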
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn binary representations of the nodes with binary network parameters (see the sketch below).
Our proposed method can be seamlessly integrated into existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
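One standard recipe for this kind of binarization (illustrative, likely not BGN's exact scheme) is sign-binarized weights trained with a straight-through estimator: the forward pass uses ±1 weights while full-precision weights receive the updates.

```python
# Sign binarization with a straight-through estimator (PyTorch sketch).
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)                     # binary weights going forward

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float() # pass gradient where |w| <= 1

w = torch.randn(4, 4, requires_grad=True)
x = torch.randn(8, 4)
y = x @ BinarizeSTE.apply(w)                     # acts like a binarized layer
y.sum().backward()
print(w.grad)                                    # full-precision update signal
```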
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology (verified numerically in the sketch below).
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
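The permutation-equivariance property is easy to verify numerically for polynomial graph filters; the hedged sketch below checks H(PSPᵀ)(Px) = P·H(S)x for a random filter and permutation.

```python
# Numerical check of permutation equivariance for a polynomial graph filter.
import numpy as np

def graph_filter(S, x, h):
    """H(S) x = sum_k h[k] * S^k @ x."""
    return sum(c * np.linalg.matrix_power(S, k) @ x for k, c in enumerate(h))

rng = np.random.default_rng(4)
n = 5
S = rng.normal(size=(n, n)); S = S + S.T          # symmetric shift operator
x = rng.normal(size=n)
h = [0.7, -0.2, 0.1]                              # filter taps

P = np.eye(n)[rng.permutation(n)]                 # random permutation matrix
lhs = graph_filter(P @ S @ P.T, P @ x, h)
rhs = P @ graph_filter(S, x, h)
print(np.allclose(lhs, rhs))                      # True
```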
This list is automatically generated from the titles and abstracts of the papers on this site.