On the (Non) Injectivity of Piecewise Linear Janossy Pooling
- URL: http://arxiv.org/abs/2505.20150v1
- Date: Mon, 26 May 2025 15:53:09 GMT
- Title: On the (Non) Injectivity of Piecewise Linear Janossy Pooling
- Authors: Ilai Reshef, Nadav Dym
- Abstract summary: We consider the family of k-ary Janossy pooling, which includes many of the most popular multiset models, and prove that no piecewise linear Janossy pooling function can be injective. On the positive side, we show that when restricted to multisets without multiplicities, even simple deep-sets models suffice for injectivity and bi-Lipschitzness.
- Score: 3.396731589928944
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Multiset functions, which are functions that map multisets to vectors, are a fundamental tool in the construction of neural networks for multisets and graphs. To guarantee that the vector representation of the multiset is faithful, it is often desirable to have multiset mappings that are both injective and bi-Lipschitz. Currently, there are several constructions of multiset functions achieving both these guarantees, leading to improved performance in some tasks but often also to higher compute time than standard constructions. Accordingly, it is natural to inquire whether simpler multiset functions achieving the same guarantees are available. In this paper, we make a large step towards giving a negative answer to this question. We consider the family of k-ary Janossy pooling, which includes many of the most popular multiset models, and prove that no piecewise linear Janossy pooling function can be injective. On the positive side, we show that when restricted to multisets without multiplicities, even simple deep-sets models suffice for injectivity and bi-Lipschitzness.
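To make the two models in the abstract concrete, here is a minimal sketch (illustrative, not the authors' implementation): a sum-decomposable Deep Sets model rho(sum_i phi(x_i)) and k-ary Janossy pooling, which averages a function f over all ordered k-tuples of elements. The ReLU embedding `phi` below is a hypothetical example of the piecewise linear class the paper studies.

```python
import itertools

def deep_sets(multiset, phi, rho):
    # Sum-decomposable model rho(sum_i phi(x_i)); phi maps each element
    # to a latent vector, rho decodes the pooled sum. The sum makes the
    # output invariant to the order of the elements.
    dim = len(phi(multiset[0]))
    pooled = [0.0] * dim
    for x in multiset:
        pooled = [p + e for p, e in zip(pooled, phi(x))]
    return rho(pooled)

def janossy_pooling(multiset, f, k):
    # k-ary Janossy pooling: average f over all ordered k-tuples of
    # elements drawn without replacement; k = 1 recovers sum pooling
    # up to normalization.
    tuples = list(itertools.permutations(multiset, k))
    return sum(f(t) for t in tuples) / len(tuples)

# A piecewise linear (ReLU) embedding -- an instance of the function
# class the paper proves cannot yield injective Janossy pooling on
# general multisets. The coefficients are arbitrary toy values.
relu = lambda v: [max(0.0, u) for u in v]
phi = lambda x: relu([x, -x, 2.0 * x - 1.0])

out = deep_sets([0.2, 0.7, 0.7], phi, rho=lambda v: v)
```

Both constructions are permutation invariant by design; the paper's negative result concerns whether such piecewise linear poolings can additionally be injective on multisets with repeated elements.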
Related papers
- MambaPro: Multi-Modal Object Re-Identification with Mamba Aggregation and Synergistic Prompt [60.10555128510744]
Multi-modal object Re-IDentification (ReID) aims to retrieve specific objects by utilizing complementary image information from different modalities. Recently, large-scale pre-trained models like CLIP have demonstrated impressive performance in traditional single-modal object ReID tasks. We introduce a novel framework called MambaPro for multi-modal object ReID.
arXiv Detail & Related papers (2024-12-14T06:33:53Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- Multilinear Operator Networks [60.7432588386185]
Polynomial Networks are a class of models that do not require activation functions.
We propose MONet, which relies solely on multilinear operators.
arXiv Detail & Related papers (2024-01-31T16:52:19Z)
- Universal Representation of Permutation-Invariant Functions on Vectors and Tensors [11.345796608258434]
A main object of our study is multiset functions -- that is, permutation-invariant functions over inputs of varying sizes.
Deep Sets, proposed by Zaheer et al. (2017), provides a universal representation for continuous multiset functions on scalars via a sum-decomposable model.
We prove that universal representation is guaranteed for continuous and discontinuous multiset functions with a latent space dimension of $O(ND)$.
arXiv Detail & Related papers (2023-10-20T22:00:59Z)
- Neural Injective Functions for Multisets, Measures and Graphs via a Finite Witness Theorem [4.416503115535553]
Injective multiset functions play a key role in the theory of machine learning on multisets and graphs.
Yet, there remains a gap between the provably injective multiset functions considered in theory and those considered in practice.
We show that moments cannot be used to construct injective neural multiset functions.
arXiv Detail & Related papers (2023-06-10T21:55:28Z)
- Multi-Objective GFlowNets [59.16787189214784]
We study the problem of generating diverse candidates in the context of Multi-Objective Optimization.
In many applications of machine learning such as drug discovery and material design, the goal is to generate candidates which simultaneously optimize a set of potentially conflicting objectives.
We propose Multi-Objective GFlowNets (MOGFNs), a novel method for generating diverse optimal solutions, based on GFlowNets.
arXiv Detail & Related papers (2022-10-23T16:15:36Z)
- Universal Approximation of Functions on Sets [39.35754251872388]
Deep Sets is known to be a universal approximator for continuous set functions.
Deep Sets may be viewed as the most efficient incarnation of the Janossy pooling paradigm.
arXiv Detail & Related papers (2021-07-05T11:56:26Z)
- Recursive Multi-model Complementary Deep Fusion for Robust Salient Object Detection via Parallel Sub-Networks [62.26677215668959]
Fully convolutional networks have shown outstanding performance in the salient object detection (SOD) field.
This paper proposes a "wider" network architecture consisting of parallel sub-networks with totally different network architectures.
Experiments on several famous benchmarks clearly demonstrate the superior performance, good generalization, and powerful learning ability of the proposed wider framework.
arXiv Detail & Related papers (2020-08-07T10:39:11Z)
- From Sets to Multisets: Provable Variational Inference for Probabilistic Integer Submodular Models [82.95892656532696]
Submodular functions have been studied extensively in machine learning and data mining.
In this work, we propose a continuous DR-submodular extension for integer submodular functions.
We formulate a new probabilistic model which is defined through integer submodular functions.
arXiv Detail & Related papers (2020-06-01T22:20:45Z)
- Deep Multi-Modal Sets [29.983311598563542]
Deep Multi-Modal Sets is a technique that represents a collection of features as an unordered set rather than one long ever-growing fixed-size vector.
We demonstrate a scalable, multi-modal framework that reasons over different modalities to learn various types of tasks.
arXiv Detail & Related papers (2020-03-03T15:48:44Z)
- Representing Unordered Data Using Complex-Weighted Multiset Automata [23.68657135308002]
We show how the multiset representations of certain existing neural architectures can be viewed as special cases of ours.
Namely, we provide a new theoretical and intuitive justification for the Transformer model's representation of positions using sinusoidal functions.
We extend the DeepSets model to use complex numbers, enabling it to outperform the existing model on an extension of one of their tasks.
arXiv Detail & Related papers (2020-01-02T20:04:45Z)
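The multiset automata entry above refers to the Transformer's sinusoidal position encoding. For reference, that well-known encoding (Vaswani et al., 2017) can be sketched as follows; even dimensions use sine and odd dimensions use cosine at geometrically spaced frequencies:

```python
import math

def sinusoidal_encoding(pos, d_model):
    # Standard Transformer positional encoding: dimension 2i uses
    # sin(pos / 10000^(2i/d_model)), dimension 2i+1 uses the
    # corresponding cosine at the same frequency.
    enc = []
    for i in range(d_model):
        angle = pos / (10000 ** ((i // 2 * 2) / d_model))
        enc.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return enc
```

Because each sin/cos pair rotates linearly with position, shifting positions corresponds to a fixed linear map, which is the property the automata view explains.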
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.