Applications of the Streaming Networks
- URL: http://arxiv.org/abs/2004.11805v1
- Date: Fri, 27 Mar 2020 08:13:17 GMT
- Title: Applications of the Streaming Networks
- Authors: Sergey Tarasenko and Fumihiko Takahashi
- Abstract summary: Streaming Networks (STnets) have been introduced as a mechanism for robust classification of noise-corrupted images.
In this paper, we demonstrate that STnets are capable of high-accuracy classification of images corrupted with noise.
We also introduce a new type of STnets called Hybrid STnets.
- Score: 0.2538209532048866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Streaming Networks (STnets) have recently been introduced as a
mechanism for robust classification of noise-corrupted images. STnets are a
family of convolutional neural networks consisting of multiple sub-networks
(streams) that receive different inputs; the streams' outputs are concatenated
and fed into a single joint classifier. The original paper illustrated how
STnets can successfully classify images from the Cifar10, EuroSat and UCmerced
datasets when the images are corrupted with various levels of random zero noise.
In this paper, we demonstrate that STnets achieve high-accuracy
classification of images corrupted with Gaussian noise, fog, snow, etc.
(Cifar10-corrupted dataset) and of low-light images (a subset of the Carvana dataset).
We also introduce a new type of STnets called Hybrid STnets. Thus, we
illustrate that STnets are a universal tool for image classification when the
original training dataset is corrupted with noise or other transformations
that lead to information loss from the original images.
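The abstract describes the architecture only at a high level: several streams process different views of an image, and their outputs are concatenated before a single joint classifier. The following is a minimal sketch of that idea in PyTorch; the number of streams, layer sizes, and the way per-stream views are produced are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class Stream(nn.Module):
    # One small convolutional feature extractor; an STnet uses one such
    # sub-network per input stream.
    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))

class MultiStreamNet(nn.Module):
    # Several streams, each seeing its own view of the image; their outputs
    # are concatenated and fed into a single joint classifier.
    def __init__(self, num_streams=3, num_classes=10, feat_dim=128):
        super().__init__()
        self.streams = nn.ModuleList(
            [Stream(feat_dim=feat_dim) for _ in range(num_streams)])
        self.classifier = nn.Linear(num_streams * feat_dim, num_classes)

    def forward(self, views):
        # `views` holds one tensor per stream, e.g. differently corrupted
        # or cropped versions of the same batch of images.
        feats = [stream(v) for stream, v in zip(self.streams, views)]
        return self.classifier(torch.cat(feats, dim=1))

# Example: three views of a batch of Cifar10-sized (3x32x32) images.
views = [torch.randn(4, 3, 32, 32) for _ in range(3)]
logits = MultiStreamNet()(views)
print(logits.shape)  # torch.Size([4, 10])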
Related papers
- RATLIP: Generative Adversarial CLIP Text-to-Image Synthesis Based on Recurrent Affine Transformations [0.0]
Conditional affine transformations (CAT) have been applied to different layers of GANs to control content synthesis in images.
We first model CAT and a recurrent neural network (RAT) to ensure that different layers can access global information.
We then introduce shuffle attention between RATs to mitigate information forgetting in recurrent neural networks.
arXiv Detail & Related papers (2024-05-13T18:49:18Z) - Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z) - Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z) - Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module to make the synthesis semantic-gradient-aware and produce plausible images.
We show that our method is also applicable to text-to-image generation by leveraging image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z) - A Perturbation Resistant Transformation and Classification System for Deep Neural Networks [0.685316573653194]
Deep convolutional neural networks accurately classify a diverse range of natural images, but may be easily deceived by deliberately designed perturbations.
In this paper, we design a multi-pronged training, unbounded input transformation, and image ensemble system that is attack-agnostic and not easily estimated.
arXiv Detail & Related papers (2022-08-25T02:58:47Z) - SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers [91.09957836250209]
Hyperspectral (HS) images are characterized by approximately contiguous spectral information.
CNNs have proven to be powerful feature extractors for HS image classification.
We propose a novel backbone network called SpectralFormer for HS image classification.
arXiv Detail & Related papers (2021-07-07T02:59:21Z) - Learning degraded image classification with restoration data fidelity [0.0]
We investigate the influence of degradation types and levels on four widely-used classification networks.
We propose a novel method leveraging a fidelity map to calibrate the image features obtained by pre-trained networks.
Our results reveal that the proposed method is a promising solution to mitigate the effect caused by image degradation.
arXiv Detail & Related papers (2021-01-23T23:47:03Z) - Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z) - Sparsifying and Down-scaling Networks to Increase Robustness to Distortions [0.0]
Streaming Network (STNet) is a novel architecture capable of robust classification of distorted images.
Recent results show that STNet is robust to 20 types of noise and distortion.
The new STNets exhibit accuracy equal to or higher than that of the original networks.
arXiv Detail & Related papers (2020-06-08T03:58:27Z) - Networks with pixels embedding: a method to improve noise resistance in images classification [6.399560915757414]
We provide a noise-resistant network for image classification by introducing a technique of pixel embedding.
We test the network with pixel embedding, abbreviated as the network with PE, on the MNIST database of handwritten digits.
arXiv Detail & Related papers (2020-05-24T07:55:08Z) - AdarGCN: Adaptive Aggregation GCN for Few-Shot Learning [112.95742995816367]
We propose a new few-shot learning setting termed few-shot few-shot learning (FSFSL).
Under FSFSL, both the source and target classes have limited training samples.
We also propose a graph convolutional network (GCN)-based label denoising (LDN) method to remove irrelevant images.
arXiv Detail & Related papers (2020-02-28T10:34:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.