The Balanced-Pairwise-Affinities Feature Transform
- URL: http://arxiv.org/abs/2407.01467v1
- Date: Tue, 25 Jun 2024 14:28:05 GMT
- Title: The Balanced-Pairwise-Affinities Feature Transform
- Authors: Daniel Shalam, Simon Korman
- Abstract summary: The BPA feature transform is designed to upgrade the features of a set of input items to facilitate downstream matching- or grouping-related tasks.
A particular min-cost-max-flow fractional matching problem leads to a transform which is efficient, differentiable, equivariant, parameterless and probabilistically interpretable.
Empirically, the transform is highly effective and flexible in its use and consistently improves networks it is inserted into, in a variety of tasks and training schemes.
- Score: 2.3020018305241337
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The Balanced-Pairwise-Affinities (BPA) feature transform is designed to upgrade the features of a set of input items to facilitate downstream matching- or grouping-related tasks. The transformed set encodes a rich representation of high-order relations between the input features. A particular min-cost-max-flow fractional matching problem, whose entropy-regularized version can be approximated by an optimal transport (OT) optimization, leads to a transform which is efficient, differentiable, equivariant, parameterless and probabilistically interpretable. While the Sinkhorn OT solver has been adapted extensively in many contexts, we use it differently: by minimizing the transport cost from a set of features to *itself* and using the transport plan's *rows* as the new representation. Empirically, the transform is highly effective and flexible in its use and consistently improves networks it is inserted into, in a variety of tasks and training schemes. We demonstrate state-of-the-art results in few-shot classification, unsupervised image clustering and person re-identification. Code is available at https://github.com/DanielShalam/BPA.
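To make the mechanism concrete, below is a minimal sketch of the self-OT idea in PyTorch. It is not the authors' implementation (that is in the linked repository): the cosine-distance cost, the large diagonal penalty, the entropy weight `eps` and the iteration count `n_iters` are illustrative assumptions. Sinkhorn is run from the feature set to itself under uniform marginals, and each row of the rescaled transport plan becomes that item's new representation.

```python
import math
import torch

def bpa_like_transform(features, eps=0.1, n_iters=20):
    """Sketch of a BPA-style self-OT transform.

    Hyperparameters and the cost construction are illustrative
    assumptions, not the paper's exact settings.
    """
    n = features.shape[0]
    f = torch.nn.functional.normalize(features, dim=1)
    cost = 1.0 - f @ f.T              # pairwise cosine distances
    cost.fill_diagonal_(1e3)          # discourage trivial self-matching
    # Entropy-regularized OT between two uniform marginals over the set,
    # solved with standard log-space Sinkhorn iterations.
    log_k = -cost / eps
    log_u = torch.zeros(n)
    log_v = torch.zeros(n)
    log_marg = torch.full((n,), -math.log(n))
    for _ in range(n_iters):
        log_u = log_marg - torch.logsumexp(log_k + log_v[None, :], dim=1)
        log_v = log_marg - torch.logsumexp(log_k + log_u[:, None], dim=0)
    plan = torch.exp(log_u[:, None] + log_k + log_v[None, :])
    return n * plan                   # row i: item i's affinity distribution

# Usage: e.g. embeddings of a support-plus-query set in few-shot learning.
x = torch.randn(30, 64)
z = bpa_like_transform(x)
print(z.shape)                        # torch.Size([30, 30]); rows sum to ~1
```

Note that the output dimension equals the set size and every step is differentiable, so a transform of this kind can sit inside a network and be trained end-to-end, as the abstract describes.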
Related papers
- Variable-size Symmetry-based Graph Fourier Transforms for image compression [65.7352685872625]
We propose a new family of Symmetry-based Graph Fourier Transforms (SBGFTs) of variable sizes within a coding framework.
Our proposed algorithm generates symmetric graphs on the grid by adding specific symmetrical connections between nodes.
Experiments show that SBGFTs outperform the primary transforms integrated in the explicit Multiple Transform Selection.
arXiv Detail & Related papers (2024-11-24T13:00:44Z)
- Graph Invariant Learning with Subgraph Co-mixup for Out-Of-Distribution Generalization [51.913685334368104]
We propose a novel graph invariant learning method based on an invariant- and variant-patterns co-mixup strategy.
Our method significantly outperforms the state of the art under various distribution shifts.
arXiv Detail & Related papers (2023-12-18T07:26:56Z)
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance, and in particular the sample complexity, of deep neural networks.
Group-equivariant convolutions are a popular approach for obtaining equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Permutation-Invariant Set Autoencoders with Fixed-Size Embeddings for Multi-Agent Learning [7.22614468437919]
We introduce a Permutation-Invariant Set Autoencoder (PISA).
PISA produces encodings with significantly lower reconstruction error than existing baselines.
We demonstrate its usefulness in a multi-agent application.
arXiv Detail & Related papers (2023-02-24T18:59:13Z)
- B-cos Networks: Alignment is All We Need for Interpretability [136.27303006772294]
We present a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training.
A B-cos transform induces a single linear transform that faithfully summarises the full model computations.
We show that it can easily be integrated into common models such as VGGs, ResNets, InceptionNets, and DenseNets.
arXiv Detail & Related papers (2022-05-20T16:03:29Z)
- The Self-Optimal-Transport Feature Transform [2.804721532913997]
We show how to upgrade the set of features of a data instance to facilitate downstream matching- or grouping-related tasks.
A particular min-cost-max-flow fractional matching problem, whose entropy-regularized version can be approximated by an optimal transport (OT) optimization, results in our transductive transform.
Empirically, the transform is highly effective and flexible in its use, consistently improving networks it is inserted into.
arXiv Detail & Related papers (2022-04-06T20:00:39Z)
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge of intra-class variance due to transformations is a powerful way to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- OneDConv: Generalized Convolution for Transform-Invariant Representation [76.15687106423859]
We propose a novel generalized one-dimensional convolutional operator (OneDConv).
It dynamically transforms the convolution kernels based on the input features in a computationally and parametrically efficient manner.
It improves the robustness and generalization of convolution without sacrificing performance on common images.
arXiv Detail & Related papers (2022-01-15T07:44:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.