Partial Soft-Matching Distance for Neural Representational Comparison with Partial Unit Correspondence
- URL: http://arxiv.org/abs/2602.19331v1
- Date: Sun, 22 Feb 2026 20:31:35 GMT
- Title: Partial Soft-Matching Distance for Neural Representational Comparison with Partial Unit Correspondence
- Authors: Chaitanya Kapoor, Alex H. Williams, Meenakshi Khosla
- Abstract summary: We extend the soft-matching distance to a partial optimal transport setting that allows some neurons to remain unmatched. It preserves correct matches under outliers and reliably selects the correct model in noise-corrupted identification tasks. It achieves higher alignment precision across brain areas than standard soft-matching, which is forced to match all units regardless of quality.
- Score: 6.914720821302567
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Representational similarity metrics typically force all units to be matched, making them susceptible to noise and outliers common in neural representations. We extend the soft-matching distance to a partial optimal transport setting that allows some neurons to remain unmatched, yielding rotation-sensitive but robust correspondences. This partial soft-matching distance provides theoretical advantages -- relaxing strict mass conservation while maintaining interpretable transport costs -- and practical benefits through efficient neuron ranking in terms of cross-network alignment without costly iterative recomputation. In simulations, it preserves correct matches under outliers and reliably selects the correct model in noise-corrupted identification tasks. On fMRI data, it automatically excludes low-reliability voxels and produces voxel rankings by alignment quality that closely match computationally expensive brute-force approaches. It achieves higher alignment precision across homologous brain areas than standard soft-matching, which is forced to match all units regardless of quality. In deep networks, highly matched units exhibit similar maximally exciting images, while unmatched units show divergent patterns. This ability to partition by match quality enables focused analyses, e.g., testing whether networks have privileged axes even within their most aligned subpopulations. Overall, partial soft-matching provides a principled and practical method for representational comparison under partial correspondence.
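The partial-matching idea in the abstract can be illustrated with a small, self-contained sketch. This is not the authors' algorithm: it approximates partial optimal transport with a hard one-to-one assignment, padding the unit-by-unit cost matrix with dummy "unmatched" entries at a fixed cost threshold (`unmatched_cost` is a free parameter introduced here for illustration, not taken from the paper).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def partial_match(X, Y, unmatched_cost):
    """Match units (columns) of X to units of Y, allowing non-matches.

    X: (samples, n) responses of network A; Y: (samples, m) of network B.
    unmatched_cost: cost above which a unit is better left unmatched
    (a free parameter in this sketch, not from the paper).
    """
    n, m = X.shape[1], Y.shape[1]
    # Pairwise squared-Euclidean cost between unit response profiles.
    C = ((X[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)
    # Pad to a square problem with dummy units: pairing a real unit
    # with a dummy one means leaving it unmatched at a fixed cost.
    size = n + m
    P = np.full((size, size), float(unmatched_cost))
    P[:n, :m] = C
    P[n:, m:] = 0.0  # dummy-dummy pairs are free
    rows, cols = linear_sum_assignment(P)
    # Keep only real-to-real pairings; everything else is unmatched.
    return [(i, j) for i, j in zip(rows, cols) if i < n and j < m]

# Toy example: two near-identical populations plus one outlier unit in X.
rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 4))
X = np.column_stack([Y + 0.01 * rng.normal(size=Y.shape),
                     10 + rng.normal(size=50)])
matches = partial_match(X, Y, unmatched_cost=5.0)
print(matches)  # → [(0, 0), (1, 1), (2, 2), (3, 3)]; outlier unit 4 unmatched
```

Padding with dummy rows and columns is a standard way to turn a partial assignment into a square assignment problem: any unit whose best available match would cost more than the threshold is routed to a dummy partner and excluded, which is the behavior the abstract describes for outlier neurons.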
Related papers
- SWAN: Self-supervised Wavelet Neural Network for Hyperspectral Image Unmixing [0.2624902795082451]
We present SWAN: a three-stage, self-supervised wavelet neural network for estimating endmembers and abundances from hyperspectral imagery. The idea is to exploit latent symmetries from the resulting invariant and covariant features using a self-supervised learning paradigm. Experiments are conducted on two benchmark synthetic data sets with different signal-to-noise ratios, as well as on three real benchmark hyperspectral data sets.
arXiv Detail & Related papers (2025-10-26T10:05:48Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Soft Matching Distance: A metric on neural representations that captures single-neuron tuning [6.5714523708869566]
Common measures of neural representational (dis)similarity are designed to be insensitive to rotations and reflections of the neural activation space.
We propose a new metric to measure distances between networks with different sizes.
arXiv Detail & Related papers (2023-11-16T00:13:00Z) - Neural Matching Fields: Implicit Representation of Matching Fields for Visual Correspondence [41.39740414165091]
We present a novel method for semantic correspondence, called Neural Matching Field (NeMF)
We learn a high-dimensional matching field; naive exhaustive inference, however, would require querying all pixels in the 4D space to infer pixel-wise correspondences.
With these combined, competitive results are attained on several standard benchmarks for semantic correspondence.
arXiv Detail & Related papers (2022-10-06T05:38:27Z) - Contextual Model Aggregation for Fast and Robust Federated Learning in Edge Computing [88.76112371510999]
Federated learning is a prime candidate for distributed machine learning at the network edge.
Existing algorithms face issues with slow convergence and/or robustness of performance.
We propose a contextual aggregation scheme that achieves the optimal context-dependent bound on loss reduction.
arXiv Detail & Related papers (2022-03-23T21:42:31Z) - On the training of sparse and dense deep neural networks: less parameters, same performance [0.0]
We propose a variant of the spectral learning method that appeared in Giambagli et al., Nat. Commun. 2021.
The eigenvalues act as veritable knobs which can be freely tuned so as to enhance, or alternatively silence, the contribution of the input nodes.
Each spectral parameter reflects back on the whole set of inter-node weights, an attribute which we shall effectively exploit to yield sparse networks with stunning classification abilities.
arXiv Detail & Related papers (2021-06-17T14:54:23Z) - Consensus-Guided Correspondence Denoising [67.35345850146393]
We propose to denoise correspondences with a local-to-global consensus learning framework to robustly identify correct correspondences.
A novel "pruning" block is introduced to distill reliable candidates from initial matches according to their consensus scores estimated by dynamic graphs from local to global regions.
Our method outperforms the state of the art on robust line fitting, wide-baseline image matching, and image localization benchmarks by noticeable margins.
arXiv Detail & Related papers (2021-01-03T09:10:00Z) - GOCor: Bringing Globally Optimized Correspondence Volumes into Your Neural Network [176.3781969089004]
The feature correlation layer serves as a key neural network module in computer vision problems that involve dense correspondences between image pairs.
We propose GOCor, a fully differentiable dense matching module, acting as a direct replacement for the feature correlation layer.
Our approach significantly outperforms the feature correlation layer for the tasks of geometric matching, optical flow, and dense semantic matching.
arXiv Detail & Related papers (2020-09-16T17:33:01Z) - Adaptive feature recombination and recalibration for semantic segmentation with Fully Convolutional Networks [57.64866581615309]
We propose a recombination of features and a spatially adaptive recalibration block adapted for semantic segmentation with Fully Convolutional Networks.
Results indicate that Recombination and Recalibration improve the results of a competitive baseline, and generalize across three different problems.
arXiv Detail & Related papers (2020-06-19T15:45:03Z) - Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.