Biologically-informed deep learning models for cancer: fundamental
trends for encoding and interpreting oncology data
- URL: http://arxiv.org/abs/2207.00812v1
- Date: Sat, 2 Jul 2022 12:11:35 GMT
- Title: Biologically-informed deep learning models for cancer: fundamental
trends for encoding and interpreting oncology data
- Authors: Magdalena Wysocka, Oskar Wysocki, Marie Zufferey, Dónal Landers,
André Freitas
- Abstract summary: We provide a structured literature analysis focused on Deep Learning (DL) models used to support inference in cancer biology.
The work focuses on how existing models address the need for better dialogue with prior knowledge, biological plausibility and interpretability.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper we provide a structured literature analysis focused on Deep
Learning (DL) models used to support inference in cancer biology with a
particular emphasis on multi-omics analysis. The work focuses on how existing
models address the need for better dialogue with prior knowledge, biological
plausibility and interpretability, fundamental properties in the biomedical
domain. We discuss the recent evolutionary arc of DL models in the direction
of integrating prior biological relational and network knowledge to support
better generalisation (e.g. pathways or Protein-Protein Interaction networks)
and interpretability. This represents a fundamental functional shift towards
models which can integrate mechanistic and statistical inference aspects. We
discuss representational methodologies for the integration of domain prior
knowledge in such models. The paper also provides a critical outlook into
contemporary methods for explainability and interpretability. This analysis
points in the direction of a convergence between encoding prior knowledge and
improved interpretability.
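The integration of pathway or PPI prior knowledge described above is often realised as a sparsity-constrained layer, where a gene-to-pathway connection exists only if the gene is annotated to that pathway. A minimal sketch of this idea follows; the gene names, pathway names, and membership mask are simplified assumptions for illustration, not taken from any particular model or database.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gene-level inputs and pathway-level units.
genes = ["TP53", "EGFR", "KRAS", "BRCA1"]
pathways = ["apoptosis", "MAPK_signaling"]

# Prior-knowledge membership mask (1 = gene annotated to pathway).
# These memberships are simplified for illustration only.
mask = np.array([
    [1, 0],  # TP53  -> apoptosis
    [0, 1],  # EGFR  -> MAPK_signaling
    [0, 1],  # KRAS  -> MAPK_signaling
    [1, 0],  # BRCA1 -> apoptosis
], dtype=float)

# Dense trainable weights; the mask zeroes out biologically
# implausible edges, so only annotated connections carry signal.
W = rng.normal(size=mask.shape)

def pathway_layer(x, W, mask):
    """Forward pass of a sparsity-constrained layer: weights for
    absent gene-pathway edges are removed by elementwise masking."""
    return np.tanh(x @ (W * mask))

x = rng.normal(size=(1, len(genes)))  # one sample of gene-level features
out = pathway_layer(x, W, mask)
print(out.shape)  # (1, 2): one activation per pathway
```

Because each pathway unit only aggregates its member genes, the resulting activations remain attributable to named biological entities, which is the interpretability property the surveyed models exploit.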
Related papers
- Causal Representation Learning from Multimodal Biological Observations [57.00712157758845]
We aim to develop flexible identification conditions for multimodal data.
We establish identifiability guarantees for each latent component, extending the subspace identification results from prior work.
Our key theoretical ingredient is the structural sparsity of the causal connections among distinct modalities.
arXiv Detail & Related papers (2024-11-10T16:40:27Z) - Explainable AI Methods for Multi-Omics Analysis: A Survey [3.885941688264509]
Multi-omics refers to the integrative analysis of data derived from multiple 'omes' (e.g. genome, transcriptome, proteome).
Deep learning methods are increasingly utilized to integrate multi-omics data, offering insights into molecular interactions and enhancing research into complex diseases.
These models, with their numerous interconnected layers and nonlinear relationships, often function as black boxes, lacking transparency in decision-making processes.
This review explores how xAI can improve the interpretability of deep learning models in multi-omics research, highlighting its potential to provide clinicians with clear insights.
arXiv Detail & Related papers (2024-10-15T05:01:17Z) - Ontology Embedding: A Survey of Methods, Applications and Resources [54.3453925775069]
Ontologies are widely used for representing domain knowledge and meta data.
One straightforward solution is to integrate statistical analysis and machine learning.
Numerous papers have been published on embedding, but a lack of systematic reviews hinders researchers from gaining a comprehensive understanding of this field.
arXiv Detail & Related papers (2024-06-16T14:49:19Z) - Exploration of Attention Mechanism-Enhanced Deep Learning Models in the Mining of Medical Textual Data [3.22071437711162]
The research explores the utilization of a deep learning model employing an attention mechanism in medical text mining.
It aims to enhance the model's capability to identify essential medical information by incorporating deep learning and attention mechanisms.
arXiv Detail & Related papers (2024-05-23T00:20:14Z) - Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z) - Leveraging Biomolecule and Natural Language through Multi-Modal
Learning: A Survey [75.47055414002571]
The integration of biomolecular modeling with natural language (BL) has emerged as a promising interdisciplinary area at the intersection of artificial intelligence, chemistry and biology.
We provide an analysis of recent advancements achieved through cross modeling of biomolecules and natural language.
arXiv Detail & Related papers (2024-03-03T14:59:47Z) - Investigating the Role of Centering Theory in the Context of Neural
Coreference Resolution Systems [71.57556446474486]
We investigate the connection between centering theory and modern coreference resolution systems.
We show that high-quality neural coreference resolvers may not benefit much from explicitly modeling centering ideas.
We formulate a version of CT that also models recency and show that it captures coreference information better compared to vanilla CT.
arXiv Detail & Related papers (2022-10-26T12:55:26Z) - Testing Pre-trained Language Models' Understanding of Distributivity via
Causal Mediation Analysis [13.07356367140208]
We introduce DistNLI, a new diagnostic dataset for natural language inference.
We find that the extent of models' understanding is associated with model size and vocabulary size.
arXiv Detail & Related papers (2022-09-11T00:33:28Z) - Generalized Shape Metrics on Neural Representations [26.78835065137714]
We provide a family of metric spaces that quantify representational dissimilarity.
We modify existing representational similarity measures based on canonical correlation analysis to satisfy the triangle inequality.
We identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
arXiv Detail & Related papers (2021-10-27T19:48:55Z) - Multimodal Graph-based Transformer Framework for Biomedical Relation
Extraction [21.858440542249934]
We introduce a novel framework that enables the model to learn multi-omics biological information about entities (proteins) with the help of additional multi-modal cues like molecular structure.
We evaluate our proposed method on the Protein-Protein Interaction task from the biomedical corpus.
arXiv Detail & Related papers (2021-07-01T16:37:17Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.