Collaborative Anomaly Detection
- URL: http://arxiv.org/abs/2209.09923v1
- Date: Tue, 20 Sep 2022 18:01:07 GMT
- Title: Collaborative Anomaly Detection
- Authors: Ke Bai, Aonan Zhang, Zhizhong Li, Ricardo Henao, Chong Wang, Lawrence
Carin
- Abstract summary: We propose collaborative anomaly detection (CAD) to jointly learn all tasks with an embedding encoding correlations among tasks.
We explore CAD with conditional density estimation and conditional likelihood ratio estimation.
It is beneficial to select a small number of tasks in advance to learn a task embedding model, and then use it to warm-start all task embeddings.
- Score: 66.51075412012581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recommendation systems, items are likely to be exposed to various users
and we would like to learn about the familiarity of a new user with an existing
item. This can be formulated as an anomaly detection (AD) problem
distinguishing between "common users" (nominal) and "fresh users" (anomalous).
Considering the sheer volume of items and the sparsity of user-item paired
data, independently applying conventional single-task detection methods on each
item quickly becomes difficult, while correlations between items are ignored.
To address this multi-task anomaly detection problem, we propose collaborative
anomaly detection (CAD) to jointly learn all tasks with an embedding encoding
correlations among tasks. We explore CAD with conditional density estimation
and conditional likelihood ratio estimation. We found that: (i) estimating a
likelihood ratio enjoys more efficient learning and yields better results than
density estimation; (ii) it is beneficial to select a small number of tasks in
advance to learn a task embedding model, and then use it to warm-start all task
embeddings. Consequently, these embeddings can capture correlations between
tasks and generalize to new correlated tasks.
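The core idea above (a single likelihood-ratio estimator shared across all tasks, conditioned on a learned per-task embedding) can be illustrated with a minimal NumPy sketch. The synthetic data, dimensions, and training loop below are illustrative assumptions, not the paper's implementation: each "item" is a task with its own nominal cluster, anomalies share a common offset, and a logistic classifier on (features, task embedding) estimates the nominal-vs-anomalous likelihood ratio via the density-ratio trick.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: T tasks (items), D_x-dim user features,
# D_e-dim task embeddings, N samples per class per task.
T, D_x, D_e, N = 4, 3, 2, 300

means = rng.normal(0, 1, (T, D_x))   # each task's nominal cluster centre
shift = 1.5 * np.ones(D_x)           # common direction anomalies drift in
X, y, tid = [], [], []
for t in range(T):
    X.append(means[t] + 0.3 * rng.normal(size=(N, D_x)))          # nominal
    X.append(means[t] + shift + 0.3 * rng.normal(size=(N, D_x)))  # anomalous
    y += [1] * N + [0] * N
    tid += [t] * (2 * N)
X = np.vstack(X); y = np.array(y, float); tid = np.array(tid)

emb = 0.01 * rng.normal(size=(T, D_e))    # task embeddings, learned jointly
W = 0.01 * rng.normal(size=(D_x + D_e,))  # one classifier shared by all tasks
b, lr = 0.0, 0.2

for _ in range(500):
    Z = np.hstack([X, emb[tid]])               # user features + task embedding
    p = 1 / (1 + np.exp(-(Z @ W + b)))         # P(nominal | x, task)
    g = p - y                                  # logistic-loss gradient
    W -= lr * (Z.T @ g) / len(y)
    b -= lr * g.mean()
    ge = g[:, None] * W[None, D_x:]            # gradient w.r.t. each embedding
    for t in range(T):
        emb[t] -= lr * ge[tid == t].mean(axis=0)

# By the density-ratio trick, p / (1 - p) estimates the likelihood ratio
# p_nominal(x) / p_anomalous(x): high ratio -> "common user", low -> "fresh".
Z = np.hstack([X, emb[tid]])
p = 1 / (1 + np.exp(-(Z @ W + b)))
acc = ((p > 0.5) == (y == 1)).mean()
```

Because the task embedding only shifts the classifier's bias per task here, it lets one shared detector adapt its threshold to each item's nominal cluster, which is the kind of cross-task sharing the abstract argues for over fitting an independent detector per item.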
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little or even non-overlapping annotation.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z) - Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain specific solutions to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z) - Regressing Relative Fine-Grained Change for Sub-Groups in Unreliable
Heterogeneous Data Through Deep Multi-Task Metric Learning [0.5999777817331317]
We investigate how techniques in multi-task metric learning can be applied for the regression of fine-grained change in real data.
The techniques investigated are specifically tailored for handling heterogeneous data sources.
arXiv Detail & Related papers (2022-08-11T12:57:11Z) - Learning Multiple Dense Prediction Tasks from Partially Annotated Data [41.821234589075445]
We look at the joint learning of multiple dense prediction tasks on partially annotated data, which we call multi-task partially-supervised learning.
We propose a multi-task training procedure that successfully leverages task relations to supervise its multi-task learning when data is partially annotated.
We rigorously demonstrate that our proposed method effectively exploits the images with unlabelled tasks and outperforms existing semi-supervised learning approaches and related methods on three standard benchmarks.
arXiv Detail & Related papers (2021-11-29T19:03:12Z) - Distribution Matching for Heterogeneous Multi-Task Learning: a
Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z) - Active Multitask Learning with Committees [15.862634213775697]
The cost of annotating training data has traditionally been a bottleneck for supervised learning approaches.
We propose an active multitask learning algorithm that achieves knowledge transfer between tasks.
Our approach reduces the number of queries needed during training while maintaining high accuracy on test data.
arXiv Detail & Related papers (2021-03-24T18:07:23Z) - Robust Learning Through Cross-Task Consistency [92.42534246652062]
We propose a broadly applicable and fully computational method for augmenting learning with Cross-Task Consistency.
We observe that learning with cross-task consistency leads to more accurate predictions and better generalization to out-of-distribution inputs.
arXiv Detail & Related papers (2020-06-07T09:24:33Z) - Mining Implicit Entity Preference from User-Item Interaction Data for
Knowledge Graph Completion via Adversarial Learning [82.46332224556257]
We propose a novel adversarial learning approach by leveraging user interaction data for the Knowledge Graph Completion task.
Our generator is isolated from user interaction data, and serves to improve the performance of the discriminator.
To discover implicit entity preference of users, we design an elaborate collaborative learning algorithm based on graph neural networks.
arXiv Detail & Related papers (2020-03-28T05:47:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.