Toward Defining a Domain Complexity Measure Across Domains
- URL: http://arxiv.org/abs/2303.04141v1
- Date: Tue, 7 Mar 2023 18:56:50 GMT
- Title: Toward Defining a Domain Complexity Measure Across Domains
- Authors: Katarina Doctor, Christine Task, Eric Kildebeck, Mayank Kejriwal,
Lawrence Holder, and Russell Leong
- Abstract summary: The transition from simulators, testbeds, and benchmark datasets to more open-world domains poses significant challenges to AI systems.
We propose a path to a general, domain-independent measure of domain complexity level.
- Score: 5.427513049774693
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Artificial Intelligence (AI) systems planned for deployment in real-world
applications are frequently researched and developed in closed simulation
environments, where all variables are controlled and known to the simulator, or
on labeled benchmark datasets. The transition from these simulators,
testbeds, and benchmark datasets to more open-world domains poses significant
challenges to AI systems, including substantial increases in the complexity of
the domain and the inclusion of real-world novelties; the open-world
environment contains numerous out-of-distribution elements that are not part of
the AI systems' training set. Here, we propose a path to a general,
domain-independent measure of domain complexity level. We distinguish two
aspects of domain complexity: intrinsic and extrinsic. The intrinsic domain
complexity is the complexity that exists by itself without any action or
interaction from an AI agent performing a task on that domain. This is an
agent-independent aspect of the domain complexity. The extrinsic domain
complexity is agent- and task-dependent. Intrinsic and extrinsic elements
combined capture the overall complexity of the domain. We frame the components
that define and impact domain complexity levels in a domain-independent light.
Domain-independent measures of complexity could enable quantitative predictions
of the difficulty posed to AI systems when transitioning from one testbed or
environment to another, when facing out-of-distribution data in open-world
tasks, and when navigating the rapidly expanding solution and search spaces
encountered in open-world domains.
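The abstract only sketches this decomposition, so the following minimal Python sketch is purely illustrative: the component names (state_space_size, novelty_rate, task_solution_space, observation_gap) and the simple additive aggregation are assumptions for exposition, not the paper's actual measure; the paper only states that the agent-independent intrinsic part and the agent- and task-dependent extrinsic part together capture overall domain complexity.
```python
from dataclasses import dataclass

@dataclass
class IntrinsicComplexity:
    """Agent-independent properties of the domain itself (hypothetical fields)."""
    state_space_size: float  # assumed: size/dimensionality of the environment
    novelty_rate: float      # assumed: rate at which out-of-distribution elements appear

@dataclass
class ExtrinsicComplexity:
    """Agent- and task-dependent properties (hypothetical fields)."""
    task_solution_space: float  # assumed: size of the solution/search space for the task
    observation_gap: float      # assumed: mismatch between agent sensors and the domain

def overall_complexity(intrinsic: IntrinsicComplexity,
                       extrinsic: ExtrinsicComplexity) -> float:
    """Toy aggregation: a plain sum is used here only to show that both
    aspects contribute; the paper does not prescribe this formula."""
    return (intrinsic.state_space_size + intrinsic.novelty_rate
            + extrinsic.task_solution_space + extrinsic.observation_gap)

# Example: comparing a closed testbed with a more open-world domain.
testbed = overall_complexity(IntrinsicComplexity(10.0, 0.0),
                             ExtrinsicComplexity(5.0, 1.0))
open_world = overall_complexity(IntrinsicComplexity(10.0, 3.0),
                                ExtrinsicComplexity(50.0, 4.0))
print(testbed, open_world)  # the open-world domain scores higher
```
Under these assumptions, such a score could support the quantitative predictions mentioned above, e.g. comparing a closed testbed against an open-world domain before transferring an AI system between them.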
Related papers
- Understanding and Estimating Domain Complexity Across Domains [2.1613662656419406]
We propose a general framework for estimating domain complexity across diverse environments.
By analyzing dimensionality, sparsity, and diversity within these categories, we offer a comprehensive view of domain challenges.
arXiv Detail & Related papers (2023-12-20T23:47:17Z)
- Context-aware Domain Adaptation for Time Series Anomaly Detection [69.3488037353497]
Time series anomaly detection is a challenging task with a wide range of real-world applications.
Recent efforts have been devoted to time series domain adaptation to leverage knowledge from similar domains.
We propose a framework that combines context sampling and anomaly detection into a joint learning procedure.
arXiv Detail & Related papers (2023-04-15T02:28:58Z)
- Towards Generalization on Real Domain for Single Image Dehazing via Meta-Learning [41.99615673136883]
Internal information learned from synthesized images is usually sub-optimal in real domains.
We present a domain generalization framework based on meta-learning to dig out representative internal properties of real hazy domains.
Our proposed method has better generalization ability than the state-of-the-art competitors.
arXiv Detail & Related papers (2022-11-14T07:04:00Z)
- Generalizing to Evolving Domains with Latent Structure-Aware Sequential Autoencoder [32.46804768486719]
We introduce a probabilistic framework called Latent Structure-aware Sequential Autoencoder (LSSAE) to tackle the problem of evolving domain generalization.
Experimental results on both synthetic and real-world datasets show that LSSAE can lead to superior performances.
arXiv Detail & Related papers (2022-05-16T13:11:29Z)
- Understanding the Domain Gap in LiDAR Object Detection Networks [1.6661840375100232]
We show two distinct domain gaps - an inference domain gap and a training domain gap.
The inference gap is characterised by a strong dependence on the number of LiDAR points per object, while the training gap shows no such dependence.
These findings show that different approaches are required to close the inference and training domain gaps.
arXiv Detail & Related papers (2022-04-21T11:18:48Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
Our DDF method outperforms state-of-the-art methods on four benchmark UDA object detection tasks, demonstrating its effectiveness and wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Learning Task-oriented Disentangled Representations for Unsupervised Domain Adaptation [165.61511788237485]
Unsupervised domain adaptation (UDA) aims to address the domain-shift problem between a labeled source domain and an unlabeled target domain.
We propose a dynamic task-oriented disentangling network (DTDN) to learn disentangled representations in an end-to-end fashion for UDA.
arXiv Detail & Related papers (2020-07-27T01:21:18Z)
- Spatial Attention Pyramid Network for Unsupervised Domain Adaptation [66.75008386980869]
Unsupervised domain adaptation is critical in various computer vision tasks.
We design a new spatial attention pyramid network for unsupervised domain adaptation.
Our method performs favorably against the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-03-29T09:03:23Z)