Deeply-Learned Generalized Linear Models with Missing Data
- URL: http://arxiv.org/abs/2207.08911v3
- Date: Thu, 26 Oct 2023 18:31:03 GMT
- Title: Deeply-Learned Generalized Linear Models with Missing Data
- Authors: David K Lim and Naim U Rashid and Junier B Oliva and Joseph G Ibrahim
- Abstract summary: We provide a formal treatment of missing data in the context of deeply learned generalized linear models.
We propose a new architecture, dlglm, that is able to flexibly account for both ignorable and non-ignorable patterns of missingness.
We conclude with a case study of a Bank Marketing dataset from the UCI Machine Learning Repository.
- Score: 6.302686933168439
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Learning (DL) methods have dramatically increased in popularity in
recent years, with significant growth in their application to supervised
learning problems in the biomedical sciences. However, the greater prevalence
and complexity of missing data in modern biomedical datasets present
significant challenges for DL methods. Here, we provide a formal treatment of
missing data in the context of deeply learned generalized linear models, a
supervised DL architecture for regression and classification problems. We
propose a new architecture, dlglm, that is one of the first to be able
to flexibly account for both ignorable and non-ignorable patterns of
missingness in input features and response at training time. We demonstrate
through statistical simulation that our method outperforms existing approaches
for supervised learning tasks in the presence of missing not at random (MNAR)
missingness. We conclude with a case study of a Bank Marketing dataset from the
UCI Machine Learning Repository, in which we predict whether clients subscribed
to a product based on phone survey data. Supplementary materials for this
article are available online.
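To make the setup concrete, here is a minimal, hypothetical sketch of the kind of architecture the abstract describes (not the authors' released implementation; all layer sizes and names are invented). It pairs a feed-forward "deeply learned GLM" for p(y | x) with a second network for the missingness indicators p(m | x, y), the standard selection-model factorization; letting the missingness model depend on (x, y) is what admits non-ignorable (MNAR) patterns.

```python
import torch
import torch.nn as nn

class ToyDeepGLM(nn.Module):
    """Toy selection-model sketch: factor the joint as p(y | x) * p(m | x, y),
    where m is the vector of per-feature missingness indicators."""

    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        # "Deeply learned" GLM: nonlinear feature map + linear head
        # with a logit link for a binary response.
        self.body = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.glm_head = nn.Linear(hidden, 1)
        # Missingness mechanism: one Bernoulli logit per feature,
        # allowed to depend on (x, y) -- the MNAR case.
        self.miss_net = nn.Sequential(
            nn.Linear(n_features + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )

    def forward(self, x, y):
        y_logit = self.glm_head(self.body(x)).squeeze(-1)
        m_logit = self.miss_net(torch.cat([x, y.unsqueeze(-1)], dim=-1))
        return y_logit, m_logit

bce = nn.functional.binary_cross_entropy_with_logits
model = ToyDeepGLM(n_features=8)
x = torch.randn(4, 8)                    # complete data, for the demo only
y = torch.randint(0, 2, (4,)).float()
m = torch.randint(0, 2, (4, 8)).float()  # 1 = observed, 0 = missing
y_logit, m_logit = model(x, y)
loss = bce(y_logit, y) + bce(m_logit, m)
loss.backward()
```

In the paper itself the missing entries of x are latent and are integrated out variationally at training time; the toy above dodges that by pretending x is fully observed, which is exactly the gap dlglm is built to close.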
Related papers
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
- Multi-OCT-SelfNet: Integrating Self-Supervised Learning with Multi-Source Data Fusion for Enhanced Multi-Class Retinal Disease Classification [2.5091334993691206]
Development of a robust deep-learning model for retinal disease diagnosis requires a substantial dataset for training.
The capacity to generalize effectively on smaller datasets remains a persistent challenge.
We combine a wide range of data sources to improve performance and generalization to new data.
arXiv Detail & Related papers (2024-09-17T17:22:35Z)
- Not Another Imputation Method: A Transformer-based Model for Missing Values in Tabular Datasets [1.02138250640885]
"Not Another Imputation Method" (NAIM) is a transformer-based model designed to handle missing values without traditional imputation techniques.
NAIM employs feature-specific embeddings and a masked self-attention mechanism that effectively learns from available data.
We extensively evaluated NAIM on 5 publicly available datasets.
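As a rough illustration of the mechanism that summary describes (not NAIM's actual code; tensor shapes and names here are invented), masked self-attention over per-feature embeddings can simply bar missing features from the softmax, so the model learns from available values without an imputation step:

```python
import torch
import torch.nn.functional as F

def masked_feature_attention(tokens, observed):
    """Self-attention over per-feature tokens that skips missing features.

    tokens:   (batch, n_features, d) feature-specific embeddings
    observed: (batch, n_features) bool, True where the value is present
    """
    d = tokens.size(-1)
    scores = tokens @ tokens.transpose(-2, -1) / d ** 0.5       # (b, f, f)
    # Missing features get -inf as attention *keys*, so every query's
    # softmax mass lands only on observed features.
    bias = torch.full(observed.shape, float("-inf")).masked_fill(observed, 0.0)
    weights = F.softmax(scores + bias.unsqueeze(1), dim=-1)
    return weights @ tokens

tokens = torch.randn(2, 5, 16)
observed = torch.tensor([[True, True, False, True, False],
                         [True, False, True, True, True]])
print(masked_feature_attention(tokens, observed).shape)  # (2, 5, 16)
```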
arXiv Detail & Related papers (2024-07-16T09:43:47Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Predicting Seriousness of Injury in a Traffic Accident: A New Imbalanced Dataset and Benchmark [62.997667081978825]
The paper introduces a new dataset to assess the performance of machine learning algorithms in the prediction of the seriousness of injury in a traffic accident.
The dataset is created by aggregating publicly available datasets from the UK Department for Transport.
arXiv Detail & Related papers (2022-05-20T21:15:26Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of sensory pattern data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Categorical EHR Imputation with Generative Adversarial Nets [11.171712535005357]
We propose a simple and yet effective approach that is based on previous work on GANs for data imputation.
We show that our imputation approach substantially improves prediction accuracy compared to more traditional data imputation approaches.
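The summary leaves the architecture implicit, but the prior work it builds on (GAIN-style GAN imputation) is easy to sketch: a generator fills in missing entries from observed values plus noise, and a discriminator tries to recover the missingness mask. Continuous toy data stands in below for the paper's categorical EHR codes, and all sizes and names are made up for the example:

```python
import torch
import torch.nn as nn

d = 10  # number of features (illustrative)
G = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, d))
D = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, d))
bce = nn.functional.binary_cross_entropy_with_logits

x = torch.rand(32, d)                        # ground truth (demo only)
m = (torch.rand(32, d) > 0.3).float()        # 1 = observed, 0 = missing
x_obs = m * x + (1 - m) * torch.rand(32, d)  # missing slots seeded with noise

# Generator proposes values everywhere; observed entries are kept as-is.
x_hat = G(torch.cat([x_obs, m], dim=1))
x_imp = m * x_obs + (1 - m) * x_hat

# Hint vector: reveal most of the mask to D, mark a random 10% as unknown.
hint = m.clone()
hint[torch.rand(32, d) < 0.1] = 0.5

# D learns to recover the mask from the imputed table; G is rewarded
# (on the missing entries only) for making D call them observed.
d_loss = bce(D(torch.cat([x_imp.detach(), hint], dim=1)), m)
g_loss = bce(D(torch.cat([x_imp, hint], dim=1)),
             torch.ones_like(m), weight=1 - m)
```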
arXiv Detail & Related papers (2021-08-03T18:50:26Z)
- On the Pitfalls of Learning with Limited Data: A Facial Expression Recognition Case Study [0.5249805590164901]
We focus on the problem of Facial Expression Recognition from videos.
We performed an extensive study with four databases of differing complexity and nine deep-learning architectures for video classification.
We found that complex training sets translate better to more stable test sets when trained with transfer learning and synthetically generated data.
arXiv Detail & Related papers (2021-04-02T18:53:41Z)
- Handling Non-ignorably Missing Features in Electronic Health Records Data Using Importance-Weighted Autoencoders [8.518166245293703]
We propose a novel extension of VAEs, based on Importance-Weighted Autoencoders (IWAEs), to flexibly handle Missing Not At Random patterns in the PhysioNet data.
Our proposed method models the missingness mechanism using an embedded neural network, eliminating the need to specify the exact form of the missingness mechanism a priori.
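For concreteness, the importance-weighted bound that gives IWAEs their name is a logsumexp over K posterior samples, and adding a log p(m | ·) term is one way a learned missingness network enters the objective. A generic sketch under those assumptions, not the paper's exact factorization:

```python
import math
import torch

def iwae_bound(log_p_x, log_p_m, log_p_z, log_q_z):
    """K-sample importance-weighted bound with a missingness term.

    Each argument is a (K, batch) tensor of log-densities evaluated at
    K samples z_k ~ q(z | ...): the data model log p(x | z), the
    missingness model log p(m | x, z), the prior log p(z), and the
    encoder log q(z | ...). Returns a (batch,) lower bound.
    """
    log_w = log_p_x + log_p_m + log_p_z - log_q_z   # log importance weights
    K = log_w.size(0)
    return torch.logsumexp(log_w, dim=0) - math.log(K)

# Shape check with random log-densities: K = 5 samples, batch of 3.
K, B = 5, 3
args = [torch.randn(K, B) for _ in range(4)]
print(iwae_bound(*args).shape)  # torch.Size([3])
```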
arXiv Detail & Related papers (2021-01-18T22:53:29Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By embedding samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.