TransCORALNet: A Two-Stream Transformer CORAL Networks for Supply Chain
Credit Assessment Cold Start
- URL: http://arxiv.org/abs/2311.18749v1
- Date: Thu, 30 Nov 2023 17:47:02 GMT
- Title: TransCORALNet: A Two-Stream Transformer CORAL Networks for Supply Chain
Credit Assessment Cold Start
- Authors: Jie Shi, Arno P. J. M. Siebes, Siamak Mehrkanoon
- Abstract summary: The model aims to provide accurate credit assessment prediction for new supply chain borrowers with limited historical data.
The proposed model addresses four significant supply chain credit assessment challenges: domain shift, cold start, class imbalance, and interpretability.
- Score: 5.0299791897740675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes an interpretable two-stream transformer CORAL
network (TransCORALNet) for supply chain credit assessment under the segment
industry and cold start problems. The model aims to provide accurate credit
assessment predictions for new supply chain borrowers with limited historical
data. Here, a two-stream domain adaptation architecture with correlation
alignment (CORAL) loss is used as the core model and is equipped with a
transformer, which provides insights into the learned features and allows
efficient parallelization during training. Thanks to the domain adaptation
capability of the proposed model, the domain shift between the source and
target domains is minimized. Therefore, the model generalizes well when the
source and target do not follow the same distribution and only a limited
number of labeled target instances exist. Furthermore, we employ Local
Interpretable Model-agnostic Explanations (LIME) to provide more insight into
the model predictions and to identify the key features contributing to supply
chain credit assessment decisions. The proposed model addresses four
significant supply chain credit assessment challenges: domain shift, cold
start, class imbalance, and interpretability. Experimental results on a
real-world data set demonstrate the superiority of TransCORALNet over a number
of state-of-the-art baselines in terms of accuracy. The code is available on
GitHub at https://github.com/JieJieNiu/TransCORALN .
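
The correlation alignment (CORAL) loss mentioned in the abstract is a standard domain adaptation objective (Sun & Saenko, 2016): it aligns the second-order statistics of source and target features by penalizing the distance between their covariance matrices. Below is a minimal PyTorch-style sketch of that loss, assuming batch-by-feature tensors from the two streams; the function name, shapes, and the commented training-step usage are illustrative assumptions, not the released implementation.

```python
import torch

def coral_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """CORAL loss: squared Frobenius distance between the covariance matrices
    of source features (n_s, d) and target features (n_t, d), scaled by
    1 / (4 d^2) as in Sun & Saenko (2016)."""
    d = source_feats.size(1)

    # Covariance of the source batch.
    s_centered = source_feats - source_feats.mean(dim=0, keepdim=True)
    cov_s = s_centered.t() @ s_centered / (source_feats.size(0) - 1)

    # Covariance of the target batch.
    t_centered = target_feats - target_feats.mean(dim=0, keepdim=True)
    cov_t = t_centered.t() @ t_centered / (target_feats.size(0) - 1)

    # Frobenius-norm mismatch between the second-order statistics.
    return ((cov_s - cov_t) ** 2).sum() / (4 * d * d)

# Hypothetical training step: the CORAL term is weighted and added to the
# supervised loss computed on the labeled data.
# total_loss = cls_loss + lambda_coral * coral_loss(h_source, h_target)
```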
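For the interpretability component, LIME explains individual predictions by fitting a local surrogate model around each instance. The self-contained sketch below uses the `lime` package; the stand-in classifier, random data, and feature names are hypothetical placeholders for the trained TransCORALNet and the borrowers' credit attributes.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.linear_model import LogisticRegression

# Stand-in data and classifier; in practice these would be the borrowers'
# credit features and the trained credit-assessment model.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X_train, y_train)

explainer = LimeTabularExplainer(
    training_data=X_train,
    feature_names=["loan_amount", "repayment_history", "industry_segment"],  # illustrative
    class_names=["non-default", "default"],
    mode="classification",
)

# Explain one instance (e.g. a new, cold-start borrower): LIME perturbs the
# input, queries the model, and reports the locally most influential features.
explanation = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=3)
print(explanation.as_list())
```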
Related papers
- Gradually Vanishing Gap in Prototypical Network for Unsupervised Domain Adaptation [32.58201185195226]
We propose an efficient UDA framework named Gradually Vanishing Gap in Prototypical Network (GVG-PN).
Our model achieves transfer learning from both global and local perspectives.
Experiments on several UDA benchmarks validated that the proposed GVG-PN can clearly outperform the SOTA models.
arXiv Detail & Related papers (2024-05-28T03:03:32Z)
- Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with Differentiable Expected Calibration Error [50.86671887712424]
The prevalence of domain adaptive semantic segmentation has prompted concerns regarding source domain data leakage.
To circumvent the requirement for source data, source-free domain adaptation has emerged as a viable solution.
We propose a novel calibration-guided source-free domain adaptive semantic segmentation framework.
arXiv Detail & Related papers (2023-08-06T03:28:34Z)
- Cross-Inferential Networks for Source-free Unsupervised Domain Adaptation [17.718392065388503]
We propose to explore a new method called cross-inferential networks (CIN).
Our main idea is that, when we adapt the network model to predict the sample labels from encoded features, we use these prediction results to construct new training samples with derived labels.
Our experimental results on benchmark datasets demonstrate that our proposed CIN approach can significantly improve the performance of source-free UDA.
arXiv Detail & Related papers (2023-06-29T14:04:24Z)
- Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation [62.458968086881555]
Continuous Video Domain Adaptation (CVDA) is a scenario where a source model is required to adapt to a series of individually available changing target domains.
We propose a Confidence-Attentive network with geneRalization enhanced self-knowledge disTillation (CART) to address the challenge in CVDA.
arXiv Detail & Related papers (2023-03-18T16:40:10Z)
- Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
The Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes its decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z)
- ACDC: Online Unsupervised Cross-Domain Adaptation [15.72925931271688]
We propose ACDC, an adversarial unsupervised domain adaptation framework.
ACDC encapsulates three modules into a single model: A denoising autoencoder that extracts features, an adversarial module that performs domain conversion, and an estimator that learns the source stream and predicts the target stream.
Our experimental results under the prequential test-then-train protocol indicate an improvement in target accuracy over the baseline methods, achieving more than a 10% increase in some cases.
arXiv Detail & Related papers (2021-10-04T11:08:32Z)
- Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring Network [58.05473757538834]
This paper proposes a novel adversarial scoring network (ASNet) to bridge the gap across domains from coarse to fine granularity.
Three sets of migration experiments show that the proposed methods achieve state-of-the-art counting performance.
arXiv Detail & Related papers (2021-07-27T14:47:24Z)
- Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization [89.73665256847858]
We show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts.
Specifically, we demonstrate strong correlations between in-distribution and out-of-distribution performance on variants of CIFAR-10 & ImageNet.
We also investigate cases where the correlation is weaker, for instance some synthetic distribution shifts from CIFAR-10-C and the tissue classification dataset Camelyon17-WILDS.
arXiv Detail & Related papers (2021-07-09T19:48:23Z)
- Explaining a Series of Models by Propagating Local Feature Attributions [9.66840768820136]
Pipelines involving several machine learning models improve performance in many domains but are difficult to understand.
We introduce a framework to propagate local feature attributions through complex pipelines of models based on a connection to the Shapley value.
Our framework enables us to draw higher-level conclusions based on groups of gene expression features for Alzheimer's and breast cancer histologic grade prediction.
arXiv Detail & Related papers (2021-04-30T22:20:58Z)
- Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose the Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model improves overall accuracy by more than 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.