Network Module Detection from Multi-Modal Node Features with a Greedy
Decision Forest for Actionable Explainable AI
- URL: http://arxiv.org/abs/2108.11674v1
- Date: Thu, 26 Aug 2021 09:42:44 GMT
- Authors: Bastian Pfeifer, Anna Saranti and Andreas Holzinger
- Abstract summary: In this work, we demonstrate subnetwork detection based on multi-modal node features using a new Greedy Decision Forest.
Our glass-box approach could help to uncover disease-causing network modules from multi-omics data to better understand diseases such as cancer.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Network-based algorithms are used in most domains of research and industry in
a wide variety of applications and are of great practical use. In this work, we
demonstrate subnetwork detection based on multi-modal node features using a new
Greedy Decision Forest for better interpretability. The latter will be a
crucial factor in retaining experts and gaining their trust in such algorithms
in the future. To demonstrate a concrete application example, we focus in this
paper on bioinformatics and systems biology with a special focus on
biomedicine. However, our methodological approach is applicable in many other
domains as well. Systems biology serves as a very good example of a field in
which statistical data-driven machine learning enables the analysis of large
amounts of multi-modal biomedical data. This is important to reach the future
goal of precision medicine, where the complexity of patients is modeled on a
system level to best tailor medical decisions, health practices and therapies
to the individual patient. Our glass-box approach could help to uncover
disease-causing network modules from multi-omics data to better understand
diseases such as cancer.
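To make the abstract's idea concrete, the following is a minimal, hypothetical sketch of greedy subnetwork (module) selection over a graph with node features. It is not the authors' Greedy Decision Forest implementation: the graph, the decision-stump score, and the `greedy_module` routine are illustrative assumptions, showing only the general pattern of growing a connected node set while a simple interpretable classifier's score improves.

```python
# Hypothetical sketch: greedily grow a connected network module whose node
# features best separate two sample classes, using single-feature decision
# stumps as an interpretable (glass-box) scoring rule.

def stump_accuracy(values, labels):
    """Best accuracy of a one-threshold split on a single node feature."""
    best = 0.0
    for t in sorted(set(values)):
        pred = [1 if v >= t else 0 for v in values]
        acc = sum(p == y for p, y in zip(pred, labels)) / len(labels)
        best = max(best, acc, 1.0 - acc)  # either side may be class 1
    return best

def module_score(module, features, labels):
    """Score a node set by its best single-node stump (simplification)."""
    return max(stump_accuracy([row[n] for row in features], labels)
               for n in module)

def greedy_module(adj, features, labels, seed):
    """Grow a connected module from `seed` while the score improves.

    adj      : dict node -> set of neighbour nodes
    features : list of samples, each a list of per-node feature values
    labels   : binary class label per sample
    """
    module = {seed}
    score = module_score(module, features, labels)
    improved = True
    while improved:
        improved = False
        frontier = set().union(*(adj[n] for n in module)) - module
        for cand in sorted(frontier):
            s = module_score(module | {cand}, features, labels)
            if s > score:
                module.add(cand)
                score = s
                improved = True
                break
    return module, score

# Toy example (made-up data): a 4-node path graph 0-1-2-3, four samples.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
features = [[0.1, 0.9, 0.2, 0.3],   # class 1 sample
            [0.2, 0.8, 0.1, 0.4],   # class 1 sample
            [0.3, 0.1, 0.2, 0.5],   # class 0 sample
            [0.1, 0.2, 0.3, 0.4]]   # class 0 sample
labels = [1, 1, 0, 0]
module, acc = greedy_module(adj, features, labels, seed=0)
# module -> {0, 1}, acc -> 1.0: node 1's feature separates the classes
```

In the paper's multi-omics setting, each node would carry multiple feature modalities and the stump score would be replaced by decision trees over those modalities; the sketch only conveys the greedy, connectivity-constrained search that makes the selected module directly inspectable.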
Related papers
- Automated Ensemble Multimodal Machine Learning for Healthcare [52.500923923797835]
We introduce a multimodal framework, AutoPrognosis-M, that enables the integration of structured clinical (tabular) data and medical imaging using automated machine learning.
AutoPrognosis-M incorporates 17 imaging models, including convolutional neural networks and vision transformers, and three distinct multimodal fusion strategies.
arXiv Detail & Related papers (2024-07-25T17:46:38Z)
- Simplicity within biological complexity [0.0]
We survey the literature and argue for the development of a comprehensive framework for embedding of multi-scale molecular network data.
Network embedding methods map nodes to points in low-dimensional space, so that proximity in the learned space reflects the network's topology-function relationships.
We propose to develop a general, comprehensive embedding framework for multi-omic network data, from models to efficient and scalable software implementation.
arXiv Detail & Related papers (2024-05-15T13:32:45Z)
- An Evaluation of Large Language Models in Bioinformatics Research [52.100233156012756]
We study the performance of large language models (LLMs) on a wide spectrum of crucial bioinformatics tasks.
These tasks include the identification of potential coding regions, extraction of named entities for genes and proteins, detection of antimicrobial and anti-cancer peptides, molecular optimization, and resolution of educational bioinformatics problems.
Our findings indicate that, given appropriate prompts, LLMs like GPT variants can successfully handle most of these tasks.
arXiv Detail & Related papers (2024-02-21T11:27:31Z)
- Diversifying Knowledge Enhancement of Biomedical Language Models using
Adapter Modules and Knowledge Graphs [54.223394825528665]
We develop an approach that uses lightweight adapter modules to inject structured biomedical knowledge into pre-trained language models.
We use two large KGs, the biomedical knowledge system UMLS and the novel biochemical OntoChem, with two prominent biomedical PLMs, PubMedBERT and BioLinkBERT.
We show that our methodology improves performance in several instances while keeping computing requirements low.
arXiv Detail & Related papers (2023-12-21T14:26:57Z)
- Multimodal Data Integration for Oncology in the Era of Deep Neural Networks: A Review [0.0]
Integrating diverse data types can improve the accuracy and reliability of cancer diagnosis and treatment.
Deep neural networks have facilitated the development of sophisticated multimodal data fusion approaches.
Recent deep learning frameworks such as Graph Neural Networks (GNNs) and Transformers have shown remarkable success in multimodal learning.
arXiv Detail & Related papers (2023-03-11T17:52:03Z)
- Multimodal Machine Learning in Precision Health [10.068890037410316]
This review was conducted to summarize this field and identify topics ripe for future research.
We used a combination of content analysis and literature searches to establish search strings and databases of PubMed, Google Scholar, and IEEEXplore from 2011 to 2021.
The most common form of information fusion was early fusion. Notably, predictive performance improved when heterogeneous data were fused.
arXiv Detail & Related papers (2022-04-10T21:56:07Z)
- The Medkit-Learn(ing) Environment: Medical Decision Modelling through
Simulation [81.72197368690031]
We present a new benchmarking suite designed specifically for medical sequential decision making.
The Medkit-Learn(ing) Environment is a publicly available Python package providing simple and easy access to high-fidelity synthetic medical data.
arXiv Detail & Related papers (2021-06-08T10:38:09Z)
- Analyzing the Effect of Multi-task Learning for Biomedical Named Entity
Recognition [6.09170287691728]
State-of-the-art deep-learning based solutions for entity recognition often require large annotated datasets.
We performed an extensive analysis to understand the transferability between different biomedical entity datasets.
We propose combining transfer learning and multi-task learning to improve the performance of biomedical named entity recognition systems.
arXiv Detail & Related papers (2020-11-01T04:52:56Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using
Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
Clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Domain Generalization for Medical Imaging Classification with
Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
- A Survey on Incorporating Domain Knowledge into Deep Learning for
Medical Image Analysis [38.90186125141749]
The small size of medical datasets remains a major bottleneck in deep learning.
Traditional approaches leverage the information from natural images via transfer learning.
More recent works utilize the domain knowledge from medical doctors to create networks that resemble how medical doctors are trained.
arXiv Detail & Related papers (2020-04-25T14:27:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.