Spatial Classification With Limited Observations Based On Physics-Aware
Structural Constraint
- URL: http://arxiv.org/abs/2009.01072v1
- Date: Tue, 25 Aug 2020 20:07:28 GMT
- Title: Spatial Classification With Limited Observations Based On Physics-Aware
Structural Constraint
- Authors: Arpan Man Sainju, Wenchong He, Zhe Jiang, Da Yan and Haiquan Chen
- Abstract summary: Spatial classification with limited feature observations has been a challenging problem in machine learning.
This paper extends our recent approach by allowing feature values of samples in each class to follow a multi-modal distribution.
We propose learning algorithms for the extended model with multi-modal distributions.
- Score: 18.070762916388272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatial classification with limited feature observations has been a
challenging problem in machine learning. The problem exists in applications
where only a subset of sensors are deployed at certain spots or partial
responses are collected in field surveys. Existing research mostly focuses on
addressing incomplete or missing data, e.g., via data cleaning and imputation,
classification models that allow for missing feature values, or models that
treat missing features as hidden variables in the EM algorithm. These methods, however,
assume that incomplete feature observations only happen on a small subset of
samples, and thus cannot solve problems where the vast majority of samples have
missing feature observations. To address this issue, we recently proposed a new
approach that incorporates physics-aware structural constraint into the model
representation. Our approach assumes that a spatial contextual feature is
observed for all sample locations and establishes spatial structural constraint
from the underlying spatial contextual feature map. We design efficient
algorithms for model parameter learning and class inference. This paper extends
our recent approach by allowing feature values of samples in each class to
follow a multi-modal distribution. We propose learning algorithms for the
extended model with multi-modal distribution. Evaluations on real-world
hydrological applications show that our approach significantly outperforms
baseline methods in classification accuracy, and the multi-modal extension is
more robust than our earlier single-modal version, especially when the feature
distribution in the training samples is multi-modal. Computational experiments show
that the proposed solution is computationally efficient on large datasets.
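The multi-modal extension described in the abstract can be illustrated with a small sketch (not the authors' implementation, and ignoring the spatial structural constraint): each class's feature distribution is modeled as a Gaussian mixture, and class inference picks the class whose mixture best explains an observed feature. All data and parameters below are made up.

```python
# Illustrative sketch: per-class Gaussian mixtures allow multi-modal
# feature distributions within a class, unlike a single-Gaussian model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical data: class 0 features are bimodal, class 1 unimodal.
X0 = np.concatenate([rng.normal(-2, 0.5, (100, 1)),
                     rng.normal(3, 0.5, (100, 1))])
X1 = rng.normal(0.5, 0.7, (200, 1))

# Fit one mixture per class (2 components for the bimodal class).
gmm0 = GaussianMixture(n_components=2, random_state=0).fit(X0)
gmm1 = GaussianMixture(n_components=1, random_state=0).fit(X1)

def classify(x):
    # Bayes' rule with equal priors: choose the class whose mixture
    # assigns the higher log-likelihood to the observed feature.
    log_p = np.array([gmm0.score_samples(x), gmm1.score_samples(x)])
    return log_p.argmax(axis=0)

print(classify(np.array([[-2.0], [0.5], [3.0]])))  # -> [0 1 0]
```

A single-Gaussian model fitted to class 0 would center its mass between the two modes and misclassify points near either mode, which is the failure case the multi-modal extension addresses.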
Related papers
- Anomaly Detection Under Uncertainty Using Distributionally Robust
Optimization Approach [0.9217021281095907]
Anomaly detection is defined as the problem of finding data points that do not follow the patterns of the majority.
The one-class Support Vector Machines (SVM) method aims to find a decision boundary to distinguish between normal data points and anomalies.
A distributionally robust chance-constrained model is proposed in which the probability of misclassification is constrained to be low.
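The one-class SVM baseline mentioned above can be sketched briefly (a toy illustration, not this paper's robust chance-constrained model; data and parameters are made up):

```python
# One-class SVM: fit a boundary around "normal" training data only;
# points outside the learned region are flagged as anomalies.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
X_train = rng.normal(0, 1, (200, 2))   # normal data only, no labels

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)

# predict() returns +1 for inliers and -1 for anomalies.
X_test = np.array([[0.1, -0.2],   # typical point
                   [6.0, 6.0]])   # far-away anomaly
print(clf.predict(X_test))        # -> [ 1 -1]
```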
arXiv Detail & Related papers (2023-12-03T06:13:22Z)
- Generating collective counterfactual explanations in score-based classification via mathematical optimization [4.281723404774889]
A counterfactual explanation of an instance indicates how this instance should be minimally modified so that the perturbed instance is classified in the desired class.
Most of the Counterfactual Analysis literature focuses on the single-instance single-counterfactual setting.
By means of novel Mathematical Optimization models, we provide a counterfactual explanation for each instance in a group of interest.
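For a linear score-based classifier, the single-instance counterfactual has a simple closed form that illustrates the idea (a toy stand-in for the paper's mathematical-optimization models; all values are made up):

```python
# Minimal-L2 counterfactual for a linear classifier w.x + b: project x
# onto the decision hyperplane, then nudge slightly past the boundary.
import numpy as np

w = np.array([2.0, -1.0])
b = -1.0
x = np.array([0.0, 1.0])           # currently scored negative

score = w @ x + b                   # -2.0
delta = (-score / (w @ w)) * w      # minimal shift onto the hyperplane
x_cf = x + 1.01 * delta             # perturbed instance, just across

print(np.sign(w @ x_cf + b))        # counterfactual now classified positive
```

The collective setting in the paper couples such perturbations across a whole group of instances, which a per-instance projection like this cannot capture.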
arXiv Detail & Related papers (2023-10-19T15:18:42Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Domain Generalization via Domain-based Covariance Minimization [4.414778226415752]
We propose a novel variance measurement for multiple domains so as to minimize the difference between conditional distributions across domains.
We show that for small-scale datasets, we are able to achieve better quantitative results indicating better generalization performance over unseen test datasets.
arXiv Detail & Related papers (2021-10-12T19:30:15Z)
- Low-rank Dictionary Learning for Unsupervised Feature Selection [11.634317251468968]
We introduce a novel unsupervised feature selection approach by applying dictionary learning ideas in a low-rank representation.
A unified objective function for unsupervised feature selection is proposed, with sparsity induced by an $\ell_{2,1}$-norm regularization.
Our experimental findings reveal that the proposed method outperforms the state-of-the-art algorithm.
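The $\ell_{2,1}$ norm used for regularization above sums the $\ell_2$ norms of a matrix's rows, which drives entire rows toward zero and thereby discards whole features. A minimal computation (example matrix is made up):

```python
# l2,1 norm: sum over rows of each row's Euclidean (l2) norm.
import numpy as np

W = np.array([[3.0, 4.0],    # row norm 5
              [0.0, 0.0],    # row norm 0 (this feature is dropped)
              [1.0, 0.0]])   # row norm 1

l21 = np.sum(np.linalg.norm(W, axis=1))
print(l21)  # -> 6.0
```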
arXiv Detail & Related papers (2021-06-21T13:39:10Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.