Immiscible Color Flows in Optimal Transport Networks for Image
Classification
- URL: http://arxiv.org/abs/2205.02938v1
- Date: Wed, 4 May 2022 12:41:36 GMT
- Title: Immiscible Color Flows in Optimal Transport Networks for Image
Classification
- Authors: Alessandro Lonardi, Diego Baptista, Caterina De Bacco
- Abstract summary: We propose a physics-inspired system that adapts Optimal Transport principles to leverage color distributions of images.
Our dynamics regulates immiscible fluxes of colors traveling on a network built from images.
Our method outperforms competitor algorithms on image classification tasks in datasets where color information matters.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In classification tasks, it is crucial to meaningfully exploit information
contained in data. Here, we propose a physics-inspired dynamical system that
adapts Optimal Transport principles to effectively leverage color distributions
of images. Our dynamics regulates immiscible fluxes of colors traveling on a
network built from images. Instead of aggregating colors together, it treats
them as different commodities that interact through a shared capacity on edges.
Our method outperforms competitor algorithms on image classification tasks in
datasets where color information matters.
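
As a rough illustration of the idea, the sketch below runs a multicommodity optimal-transport dynamics on a toy graph, treating each color as a separate commodity whose flux is coupled to the others only through a shared conductivity per edge. The graph, the per-color sources and sinks, the exponent beta, the time step, and the specific conductivity-update rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy graph: 4 nodes, 5 edges with unit lengths.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
lengths = np.ones(len(edges))
n_nodes, n_colors = 4, 3

# Signed node-edge incidence matrix B.
B = np.zeros((n_nodes, len(edges)))
for e, (u, v) in enumerate(edges):
    B[u, e], B[v, e] = 1.0, -1.0

# One source/sink vector per color (each column sums to zero: mass balance).
S = np.zeros((n_nodes, n_colors))
S[0, 0], S[3, 0] = 1.0, -1.0   # color 0 enters at node 0, exits at node 3
S[1, 1], S[2, 1] = 1.0, -1.0   # color 1
S[0, 2], S[2, 2] = 1.0, -1.0   # color 2

mu = np.ones(len(edges))       # shared conductivities: one capacity per edge
beta, dt = 1.0, 0.1            # traffic exponent and time step (assumed values)

for _ in range(200):
    # Kirchhoff's law per color: solve L(mu) p_i = S_i with the weighted graph Laplacian.
    L = B @ np.diag(mu / lengths) @ B.T
    p = np.linalg.pinv(L) @ S                    # node potentials, one column per color
    dp = B.T @ p                                 # potential drops along edges
    flux = (mu / lengths)[:, None] * dp          # immiscible color fluxes F_e^i
    # Conductivities grow with the total dissipation of all colors and decay otherwise,
    # so colors compete for the same shared capacity instead of being aggregated.
    mu += dt * (mu**beta * (dp**2).sum(axis=1) / lengths**2 - mu)

print("conductivities:", np.round(mu, 3))
print("color fluxes per edge:\n", np.round(flux, 3))
```

Varying beta changes how strongly the shared conductivities consolidate traffic on few edges; the values above are only meant to make the toy dynamics run.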
Related papers
- Color-Oriented Redundancy Reduction in Dataset Distillation [39.0015492336067]
We propose a framework that minimizes color redundancy at the individual image and overall dataset levels.
At the image level, we employ a palette network, a specialized neural network, to dynamically allocate colors from a reduced color space to each pixel.
A comprehensive performance study demonstrates the superior performance of our color-aware dataset distillation (DD) framework compared to existing DD methods.
arXiv Detail & Related papers (2024-11-18T06:48:11Z)
- Color Equivariant Convolutional Networks [50.655443383582124]
CNNs struggle if there is data imbalance between color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs in terms of downstream performance to various tasks and improved robustness to color changes, including train-test distribution shifts.
arXiv Detail & Related papers (2023-10-30T09:18:49Z)
- Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer [62.75343115345667]
We propose a novel colour quantisation transformer, CQFormer, that quantises colour space while maintaining machine recognition on the quantised images.
We observe the consistent evolution pattern between our artificial colour system and basic colour terms across human languages.
Our colour quantisation method also offers an efficient quantisation method that effectively compresses the image storage.
arXiv Detail & Related papers (2022-12-07T03:39:18Z)
- Collaboration among Image and Object Level Features for Image Colourisation [25.60139324272782]
Image colourisation is an ill-posed problem, with multiple correct solutions which depend on the context and object instances present in the input datum.
Previous approaches attacked the problem either by requiring intense user interactions or by exploiting the ability of convolutional neural networks (CNNs) in learning image level (context) features.
We propose a single network, named UCapsNet, that separates image-level features obtained through convolutions from object-level features captured by means of capsules.
Then, by skip connections over different layers, we enforce collaboration between such disentangling factors to produce high quality and plausible image colourisation.
arXiv Detail & Related papers (2021-01-19T11:48:12Z)
- Visualizing Color-wise Saliency of Black-Box Image Classification Models [0.0]
A classification result given by an advanced method, including deep learning, is often hard to interpret.
We propose MC-RISE (Multi-Color RISE), which is an enhancement of RISE to take color information into account in an explanation.
Our method not only shows the saliency of each pixel in a given image, as the original RISE does, but also the significance of the color components of each pixel; such a color-aware saliency map is especially useful in domains where color information matters.
arXiv Detail & Related papers (2020-10-06T04:27:18Z)
- Learning to Compose Hypercolumns for Visual Correspondence [57.93635236871264]
We introduce a novel approach to visual correspondence that dynamically composes effective features by leveraging relevant layers conditioned on the images to match.
The proposed method, dubbed Dynamic Hyperpixel Flow, learns to compose hypercolumn features on the fly by selecting a small number of relevant layers from a deep convolutional neural network.
arXiv Detail & Related papers (2020-07-21T04:03:22Z)
- Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict the semantic information while learning to colorize black-and-white images.
In this study, we simulate this human-like behavior by letting our network first learn to understand the photo and then colorize it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
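
Purely to make the fusion idea in the entry above concrete, here is a small PyTorch-style sketch, an assumption rather than the paper's code, in which per-instance feature maps obtained from detector crops are resized back into their bounding boxes and blended with the full-image features through softmax-normalized per-pixel weight maps. The function name `fuse_features`, its arguments, and the toy shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def fuse_features(full_feat, inst_feats, boxes, full_w, inst_ws):
    """full_feat: (C, H, W) full-image features; inst_feats: list of (C, h, w)
    per-instance features; boxes: list of (x1, y1, x2, y2) in feature coordinates;
    full_w: (1, H, W) weight map; inst_ws: list of (1, h, w) weight maps."""
    C, H, W = full_feat.shape
    stacked_feats = [full_feat]
    stacked_ws = [full_w]
    for feat, w, (x1, y1, x2, y2) in zip(inst_feats, inst_ws, boxes):
        canvas_f = torch.zeros(C, H, W)
        canvas_w = torch.full((1, H, W), float("-inf"))  # -inf -> zero weight after softmax
        bh, bw = y2 - y1, x2 - x1
        # Resize the instance feature and weight maps back into the detected box.
        canvas_f[:, y1:y2, x1:x2] = F.interpolate(feat[None], size=(bh, bw),
                                                  mode="bilinear", align_corners=False)[0]
        canvas_w[:, y1:y2, x1:x2] = F.interpolate(w[None], size=(bh, bw),
                                                  mode="bilinear", align_corners=False)[0]
        stacked_feats.append(canvas_f)
        stacked_ws.append(canvas_w)
    # Normalize weights across the full image and all instances at every pixel.
    weights = torch.softmax(torch.stack(stacked_ws), dim=0)
    return (torch.stack(stacked_feats) * weights).sum(dim=0)  # fused (C, H, W) features

# Toy usage: one detected object pasted into an 8x8 feature map.
fused = fuse_features(torch.randn(16, 8, 8), [torch.randn(16, 4, 4)],
                      [(2, 2, 6, 6)], torch.randn(1, 8, 8), [torch.randn(1, 4, 4)])
print(fused.shape)  # torch.Size([16, 8, 8])
```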