Graph-based Active Learning for Semi-supervised Classification of SAR
Data
- URL: http://arxiv.org/abs/2204.00005v1
- Date: Thu, 31 Mar 2022 00:14:06 GMT
- Title: Graph-based Active Learning for Semi-supervised Classification of SAR
Data
- Authors: Kevin Miller, John Mauro, Jason Setiadi, Xoaquin Baca, Zhan Shi, Jeff
Calder, Andrea L. Bertozzi
- Abstract summary: We present a novel method for classification of Synthetic Aperture Radar (SAR) data by combining ideas from graph-based learning and neural network methods.
The CNNVAE feature embedding and graph construction require no labeled data, which reduces overfitting.
The method easily incorporates a human-in-the-loop for active learning in the data-labeling process.
- Score: 8.92985438874948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel method for classification of Synthetic Aperture Radar
(SAR) data by combining ideas from graph-based learning and neural network
methods within an active learning framework. Graph-based methods in machine
learning are based on a similarity graph constructed from the data. When the
data consists of raw images composed of scenes, extraneous information can make
the classification task more difficult. In recent years, neural network methods
have been shown to provide a promising framework for extracting patterns from
SAR images. These methods, however, require ample training data to avoid
overfitting. At the same time, such training data are often unavailable for
applications of interest, such as automatic target recognition (ATR) on SAR
data. We use a Convolutional Neural Network Variational Autoencoder (CNNVAE) to
embed SAR data into a feature space, and then construct a similarity graph from
the embedded data and apply graph-based semi-supervised learning techniques.
The CNNVAE feature embedding and graph construction require no labeled data,
which reduces overfitting and improves the generalization performance of graph
learning at low label rates. Furthermore, the method easily incorporates a
human-in-the-loop for active learning in the data-labeling process. We present
promising results and compare them to other standard machine learning methods
on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset
for ATR with small amounts of labeled data.
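The sketch below is a minimal, hypothetical rendering of the pipeline described in the abstract: features stand in for CNNVAE embeddings, a k-nearest-neighbor similarity graph with Gaussian edge weights is built from them, Laplace (harmonic) learning propagates the few available labels over the graph, and a simple smallest-margin rule selects the next point to hand to a human labeler. Function names, parameters, and the acquisition rule are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch: k-NN similarity graph + Laplace learning + margin-based
# query selection. Embeddings are random stand-ins; in the paper a CNNVAE
# trained without labels would supply them.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu
from sklearn.neighbors import kneighbors_graph


def knn_similarity_graph(features, k=10, sigma=None):
    """Symmetric k-NN graph with Gaussian edge weights on the embedded features."""
    dist = kneighbors_graph(features, n_neighbors=k, mode="distance")
    if sigma is None:
        sigma = dist.data.mean()                 # simple data-dependent bandwidth
    dist.data = np.exp(-dist.data ** 2 / (2 * sigma ** 2))
    return 0.5 * (dist + dist.T)                 # symmetrize


def laplace_learning(W, labeled_idx, labels, n_classes, tau=1e-8):
    """Propagate one-hot labels by solving the harmonic extension on unlabeled nodes."""
    n = W.shape[0]
    L = diags(np.asarray(W.sum(axis=1)).ravel()) - W         # graph Laplacian D - W
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    Y = np.zeros((len(labeled_idx), n_classes))
    Y[np.arange(len(labeled_idx)), labels] = 1.0              # one-hot labeled data
    A = (L + tau * identity(n)).tocsr()                       # small shift for stability
    A_uu = A[unlabeled][:, unlabeled].tocsc()
    rhs = -A[unlabeled][:, labeled_idx] @ Y
    U = splu(A_uu).solve(rhs)                                 # solve L_uu U = -L_ul Y
    scores = np.zeros((n, n_classes))
    scores[labeled_idx], scores[unlabeled] = Y, U
    return scores


def next_query(scores, labeled_idx):
    """Toy acquisition rule: ask a human to label the node with the smallest margin."""
    top_two = np.sort(scores, axis=1)[:, -2:]
    margin = top_two[:, 1] - top_two[:, 0]
    margin[labeled_idx] = np.inf                              # never re-query labeled nodes
    return int(np.argmin(margin))


# Usage with random stand-in embeddings and three initial labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))                # 200 "SAR chips", 32-dim features
labeled_idx, labels = np.array([0, 1, 2]), np.array([0, 1, 0])
W = knn_similarity_graph(X, k=10)
scores = laplace_learning(W, labeled_idx, labels, n_classes=2)
print("next point to label:", next_query(scores, labeled_idx))
```

In an active learning loop, the queried point would be labeled by a human, added to the labeled set, and the graph learning step rerun; only the (cheap) solve is repeated, since the embedding and graph are fixed.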
Related papers
- Novel Representation Learning Technique using Graphs for Performance
Analytics [0.0]
We propose transforming performance data into graphs to leverage recent advances in Graph Neural Network-based (GNN) techniques.
In contrast to other Machine Learning application domains, such as social networks, the graph is not given; instead, we need to build it.
We evaluate the effectiveness of the generated embeddings from GNNs based on how well they make even a simple feed-forward neural network perform for regression tasks.
arXiv Detail & Related papers (2024-01-19T16:34:37Z) - GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z) - Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Towards Open-World Feature Extrapolation: An Inductive Graph Learning
Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - A Unifying Generative Model for Graph Learning Algorithms: Label
Propagation, Graph Convolutions, and Combinations [39.8498896531672]
Semi-supervised learning on graphs is a widely applicable problem in network science and machine learning.
We develop a Markov random field model for the data generation process of node attributes.
We show that label propagation, a linearized graph convolutional network, and their combination can all be derived as conditional expectations.
arXiv Detail & Related papers (2021-01-19T17:07:08Z) - Sparse Signal Models for Data Augmentation in Deep Learning ATR [0.8999056386710496]
We propose a data augmentation approach to incorporate domain knowledge and improve the generalization power of a data-intensive learning algorithm.
We exploit the sparsity of the scattering centers in the spatial domain and the smoothly-varying structure of the scattering coefficients in the azimuthal domain to solve the ill-posed problem of over-parametrized model fitting.
arXiv Detail & Related papers (2020-12-16T21:46:33Z) - Dataset Meta-Learning from Kernel Ridge-Regression [18.253682891579402]
Kernel Inducing Points (KIP) can compress datasets by one or two orders of magnitude.
KIP-learned datasets are transferable to the training of finite-width neural networks even beyond the lazy-training regime.
arXiv Detail & Related papers (2020-10-30T18:54:04Z) - Active Learning on Attributed Graphs via Graph Cognizant Logistic
Regression and Preemptive Query Generation [37.742218733235084]
We propose a novel graph-based active learning algorithm for the task of node classification in attributed graphs.
Our algorithm uses graph cognizant logistic regression, equivalent to a linearized graph convolutional neural network (GCN), for the prediction phase, and maximizes the expected error reduction in the query phase.
We conduct experiments on five public benchmark datasets, demonstrating a significant improvement over state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-09T18:00:53Z) - Data-Driven Factor Graphs for Deep Symbol Detection [107.63351413549992]
We propose to implement factor graph methods in a data-driven manner.
In particular, we propose to use machine learning (ML) tools to learn the factor graph.
We demonstrate that the proposed system, referred to as BCJRNet, learns to implement the BCJR algorithm from a small training set.
arXiv Detail & Related papers (2020-01-31T09:23:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.