Regressing Relative Fine-Grained Change for Sub-Groups in Unreliable
Heterogeneous Data Through Deep Multi-Task Metric Learning
- URL: http://arxiv.org/abs/2208.05800v1
- Date: Thu, 11 Aug 2022 12:57:11 GMT
- Title: Regressing Relative Fine-Grained Change for Sub-Groups in Unreliable
Heterogeneous Data Through Deep Multi-Task Metric Learning
- Authors: Niall O' Mahony, Sean Campbell, Lenka Krpalkova, Joseph Walsh, Daniel
Riordan
- Abstract summary: We investigate how techniques in multi-task metric learning can be applied for the regression of fine-grained change in real data.
The techniques investigated are specifically tailored for handling heterogeneous data sources.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fine-Grained Change Detection and Regression Analysis are essential
in many applications of Artificial Intelligence. In practice, this task is
often challenging owing to the lack of reliable ground truth information and
the complexity arising from interactions between the many underlying factors
affecting a system. Therefore, developing a framework which can represent the
relatedness and reliability of multiple sources of information becomes
critical. In this paper, we investigate how techniques in multi-task metric
learning can be applied for the regression of fine-grained change in real
data. The key idea is that if we incorporate the incremental change in a
metric of interest between specific instances of an individual object as one
of the tasks in a multi-task metric learning framework, then interpreting that
dimension will allow the user to be alerted to fine-grained change invariant
to what the overall metric is generalised to be. The techniques investigated
are specifically tailored for handling heterogeneous data sources, i.e. the
input data for each of the tasks might contain missing values, the scale and
resolution of the values are not consistent across tasks, and the data
contains non-independent and identically distributed (non-IID) instances. We
present the results of our initial experimental implementations of this idea
and discuss related research in this domain which may offer direction for
further research.
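The key idea above can be illustrated with a minimal sketch. This is a hypothetical setup with synthetic data: a one-layer linear embedding stands in for the deep model, and all names, shapes, and the data-generating process are invented for illustration. One embedding dimension is fit to regress the incremental change between two observations of the same object, with missing labels masked out of the fit, so that inspecting that single dimension surfaces fine-grained change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n objects, each observed at two points in time.
n, d, k = 200, 8, 3                      # instances, input dim, embedding dim
X_t0 = rng.normal(size=(n, d))
X_t1 = X_t0 + 0.1 * rng.normal(size=(n, d))

# Synthetic targets for the sketch: an absolute metric per instance, and the
# incremental change in a metric between the two observations of each object.
W_true = rng.normal(size=(d, 2))
y_metric = X_t0 @ W_true[:, 0]
y_delta = (X_t1 - X_t0) @ W_true[:, 1]
y_metric[rng.random(n) < 0.3] = np.nan   # unreliable data: missing labels

# Shared embedding W: dedicate dimension 0 to the absolute metric and
# dimension 1 to incremental change, so interpreting that one dimension
# alerts the user to fine-grained change regardless of the overall metric.
W = np.zeros((d, k))
seen = ~np.isnan(y_metric)               # mask out missing targets
W[:, 0] = np.linalg.lstsq(X_t0[seen], y_metric[seen], rcond=None)[0]
W[:, 1] = np.linalg.lstsq(X_t1 - X_t0, y_delta, rcond=None)[0]

# The "change dimension" of the embedding difference is the per-object
# fine-grained change score.
change_score = (X_t1 - X_t0) @ W[:, 1]
print(float(np.abs(change_score - y_delta).max()))
```

In the paper's setting the embedding would be a deep network trained jointly on all tasks; the point of the sketch is only the task decomposition and the masking of unreliable labels.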
Related papers
- A Multitask Deep Learning Model for Classification and Regression of Hyperspectral Images: Application to the large-scale dataset [44.94304541427113]
We propose a multitask deep learning model to perform multiple classification and regression tasks simultaneously on hyperspectral images.
We validated our approach on a large hyperspectral dataset called TAIGA.
A comprehensive qualitative and quantitative analysis of the results shows that the proposed method significantly outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-23T11:14:54Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks with little, or non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real world distribution shift benchmarks, and different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z)
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain specific solutions to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization in which data collected across different domains help improving the learning performance at each other task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Understanding and Exploiting Dependent Variables with Deep Metric Learning [0.5025737475817937]
Deep Metric Learning (DML) approaches learn to represent inputs to a lower-dimensional latent space.
This paper investigates how the mapping element of DML may be exploited in situations where the salient features in arbitrary classification problems vary over time.
arXiv Detail & Related papers (2020-09-08T15:30:45Z)
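The parameter coupling described in the cross-learning entry above can be sketched as follows. This is a hypothetical linear example on synthetic data (`cross_learn` and every name in it are invented): each task keeps its own weights, but a proximity penalty pulls the two weight vectors toward each other, so data collected for one task regularises the other.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

# Hypothetical tasks: similar ground-truth weights, few samples each.
w_shared = rng.normal(size=d)
w1_true = w_shared + 0.05 * rng.normal(size=d)
w2_true = w_shared + 0.05 * rng.normal(size=d)
X1, X2 = rng.normal(size=(15, d)), rng.normal(size=(15, d))
y1 = X1 @ w1_true + 0.1 * rng.normal(size=15)
y2 = X2 @ w2_true + 0.1 * rng.normal(size=15)

def cross_learn(lam):
    """Jointly minimise ||X1 w1 - y1||^2 + ||X2 w2 - y2||^2 + lam ||w1 - w2||^2
    by solving the stationarity conditions as one block-linear system."""
    I = np.eye(d)
    A = np.block([[X1.T @ X1 + lam * I, -lam * I],
                  [-lam * I, X2.T @ X2 + lam * I]])
    b = np.concatenate([X1.T @ y1, X2.T @ y2])
    w = np.linalg.solve(A, b)
    return w[:d], w[d:]

w1, w2 = cross_learn(lam=5.0)              # coupled task solutions
w1_indep, w2_indep = cross_learn(lam=0.0)  # uncoupled baseline
```

Setting `lam=0` recovers two independent least-squares fits; increasing it shrinks the gap between the two task solutions, which is the cross-fertilisation effect the entry describes.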
This list is automatically generated from the titles and abstracts of the papers in this site.