Permutation-Invariant Set Autoencoders with Fixed-Size Embeddings for
Multi-Agent Learning
- URL: http://arxiv.org/abs/2302.12826v1
- Date: Fri, 24 Feb 2023 18:59:13 GMT
- Title: Permutation-Invariant Set Autoencoders with Fixed-Size Embeddings for
Multi-Agent Learning
- Authors: Ryan Kortvelesy, Steven Morad, Amanda Prorok
- Abstract summary: We introduce a Permutation-Invariant Set Autoencoder (PISA).
PISA produces encodings with significantly lower reconstruction error than existing baselines.
We demonstrate its usefulness in a multi-agent application.
- Score: 7.22614468437919
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of permutation-invariant learning over set representations is
particularly relevant in the field of multi-agent systems -- a few potential
applications include unsupervised training of aggregation functions in graph
neural networks (GNNs), neural cellular automata on graphs, and prediction of
scenes with multiple objects. Yet existing approaches to set encoding and
decoding tasks present a host of issues, including non-permutation-invariance,
fixed-length outputs, reliance on iterative methods, non-deterministic outputs,
computationally expensive loss functions, and poor reconstruction accuracy. In
this paper we introduce a Permutation-Invariant Set Autoencoder (PISA), which
tackles these problems and produces encodings with significantly lower
reconstruction error than existing baselines. PISA also provides other
desirable properties, including a similarity-preserving latent space, and the
ability to insert or remove elements from the encoding. After evaluating PISA
against baseline methods, we demonstrate its usefulness in a multi-agent
application. Using PISA as a subcomponent, we introduce a novel GNN
architecture which serves as a generalised communication scheme, allowing
agents to use communication to gain full observability of a system.
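To make the core idea concrete, the sketch below shows a minimal permutation-invariant set autoencoder in PyTorch: a per-element encoder whose outputs are summed into a fixed-size embedding, and a slot-query decoder that reads elements back out. This is only a generic Deep-Sets-style illustration under assumed layer sizes and a hypothetical slot-query readout; it is not the PISA architecture from the paper.

```python
import torch
import torch.nn as nn


class SetAutoencoder(nn.Module):
    """Minimal permutation-invariant set autoencoder (illustrative sketch only).

    NOTE: this is a generic Deep-Sets-style construction, NOT the PISA
    architecture: layer sizes, the slot-query decoder, and the additive
    insert/remove trick are assumptions made for the example.
    """

    def __init__(self, elem_dim: int, latent_dim: int, max_set_size: int):
        super().__init__()
        # per-element encoder phi
        self.phi = nn.Sequential(
            nn.Linear(elem_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )
        # learned "slot" queries used to read individual elements back out
        self.queries = nn.Parameter(torch.randn(max_set_size, latent_dim))
        # per-slot decoder rho
        self.rho = nn.Sequential(
            nn.Linear(2 * latent_dim, 128), nn.ReLU(), nn.Linear(128, elem_dim)
        )

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n, elem_dim). Summing the per-element codes makes the embedding
        # permutation-invariant and fixed-size regardless of n.
        return self.phi(x).sum(dim=0)

    def decode(self, z: torch.Tensor, n: int) -> torch.Tensor:
        # Pair the shared embedding with n slot queries to reconstruct n elements.
        z_rep = z.unsqueeze(0).expand(n, -1)
        return self.rho(torch.cat([self.queries[:n], z_rep], dim=-1))

    def insert(self, z: torch.Tensor, new_elem: torch.Tensor) -> torch.Tensor:
        # With a sum-based encoding, inserting an element just adds its
        # contribution (removal would subtract it).
        return z + self.phi(new_elem)


# Example usage: encode a set of 5 three-dimensional elements, then decode it.
model = SetAutoencoder(elem_dim=3, latent_dim=64, max_set_size=16)
x = torch.randn(5, 3)
z = model.encode(x)          # fixed-size embedding, shape (64,)
x_hat = model.decode(z, 5)   # reconstruction, shape (5, 3)
```

Because the decoded elements come out in an arbitrary slot order, training a sketch like this would need a permutation-invariant reconstruction loss (e.g. a matching-based one); avoiding that kind of computationally expensive loss is one of the issues the abstract says PISA addresses.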
Related papers
- Multilevel CNNs for Parametric PDEs based on Adaptive Finite Elements [0.0]
A neural network architecture is presented that exploits the multilevel properties of high-dimensional parameter-dependent partial differential equations.
The network is trained with data on adaptively refined finite element meshes.
A complete convergence and complexity analysis is carried out for the adaptive multilevel scheme.
arXiv Detail & Related papers (2024-08-20T13:32:11Z) - The Balanced-Pairwise-Affinities Feature Transform [2.3020018305241337]
The BPA feature transform is designed to upgrade the features of a set of input items to facilitate downstream matching or grouping tasks; a loose, generic sketch of the idea appears after this list.
A particular min-cost-max-flow fractional matching problem leads to a transform which is efficient, differentiable, equivariant, parameterless and probabilistically interpretable.
Empirically, the transform is highly effective and flexible in its use and consistently improves networks it is inserted into, in a variety of tasks and training schemes.
arXiv Detail & Related papers (2024-06-25T14:28:05Z) - GEC-DePenD: Non-Autoregressive Grammatical Error Correction with
Decoupled Permutation and Decoding [52.14832976759585]
Grammatical error correction (GEC) is an important NLP task that is usually solved with autoregressive sequence-to-sequence models.
We propose a novel non-autoregressive approach to GEC that decouples the architecture into a permutation network and a decoder network.
We show that the resulting network improves over previously known non-autoregressive methods for GEC.
arXiv Detail & Related papers (2023-11-14T14:24:36Z) - Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - PowerQuant: Automorphism Search for Non-Uniform Quantization [37.82255888371488]
We identify the uniformity of the quantization operator as a limitation of existing approaches, and propose a data-free non-uniform method.
We show that our approach, dubbed PowerQuant, only requires simple modifications in the quantized DNN activation functions.
arXiv Detail & Related papers (2023-01-24T08:30:14Z) - Task-Oriented Sensing, Computation, and Communication Integration for
Multi-Device Edge AI [108.08079323459822]
This paper studies a new multi-device edge artificial intelligence (AI) system, which jointly exploits AI model split inference and integrated sensing and communication (ISAC).
We measure the inference accuracy by adopting an approximate but tractable metric, namely discriminant gain.
arXiv Detail & Related papers (2022-07-03T06:57:07Z) - Set Interdependence Transformer: Set-to-Sequence Neural Networks for
Permutation Learning and Structure Prediction [6.396288020763144]
Set-to-sequence problems occur in natural language processing, computer vision and structure prediction.
Previous attention-based methods require $n$ layers of their set transformations to explicitly represent $n$-th order relations.
We propose a novel neural set encoding method called the Set Interdependence Transformer, capable of relating the set's permutation invariant representation to its elements within sets of any cardinality.
arXiv Detail & Related papers (2022-06-08T07:46:49Z) - Improving the Sample-Complexity of Deep Classification Networks with
Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z) - Learn to Predict Sets Using Feed-Forward Neural Networks [63.91494644881925]
This paper addresses the task of set prediction using deep feed-forward neural networks.
We present a novel approach for learning to predict sets with unknown permutation and cardinality.
We demonstrate the validity of our set formulations on relevant vision problems.
arXiv Detail & Related papers (2020-01-30T01:52:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
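For the Balanced-Pairwise-Affinities entry above, the following is a loose, generic sketch of the underlying idea: normalize a pairwise similarity matrix towards a doubly-stochastic "balanced" affinity matrix and append it to the item features. It uses a Sinkhorn-style alternating normalization purely for illustration; the BPA paper itself derives its transform from a min-cost-max-flow fractional matching problem, so the cosine similarity, temperature, and iteration count below are assumptions.

```python
import torch
import torch.nn.functional as F


def balanced_affinity_features(x: torch.Tensor, n_iters: int = 20, tau: float = 0.1) -> torch.Tensor:
    """Loose sketch of a 'balanced pairwise affinities' style transform.

    Alternating row/column normalization (Sinkhorn-like) pushes a pairwise
    similarity matrix towards a doubly-stochastic affinity matrix, which is
    then appended to the item features. Not the actual BPA formulation.
    """
    x = F.normalize(x, dim=-1)                 # x: (n, d) item features
    log_p = x @ x.t() / tau                    # scaled pairwise cosine similarities
    log_p.fill_diagonal_(float("-inf"))        # ignore self-affinity
    for _ in range(n_iters):                   # alternate row/column normalization
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)
    p = log_p.exp()                            # approximately doubly stochastic
    return torch.cat([x, p], dim=-1)           # (n, d + n): features + affinities
```

Each row of the appended affinity matrix summarizes how strongly an item matches every other item in the set, which is the kind of feature upgrade the entry above describes.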