Reusability report: Prostate cancer stratification with diverse
biologically-informed neural architectures
- URL: http://arxiv.org/abs/2309.16645v2
- Date: Mon, 30 Oct 2023 21:48:22 GMT
- Title: Reusability report: Prostate cancer stratification with diverse
biologically-informed neural architectures
- Authors: Christian Pedersen, Tiberiu Tesileanu, Tinghui Wu, Siavash Golkar,
Miles Cranmer, Zijun Zhang, Shirley Ho
- Abstract summary: A feedforward neural network with biologically informed, sparse connections (P-NET) was presented to model the state of prostate cancer.
We quantified the contribution of network sparsification by Reactome biological pathways, and confirmed its importance to P-NET's superior performance.
We experimented with three types of graph neural networks on the same training data, and investigated the clinical prediction agreement between different models.
- Score: 7.417447233454902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Elmarakeby et al., "Biologically informed deep neural network for prostate
cancer discovery", a feedforward neural network with biologically informed,
sparse connections (P-NET) was presented to model the state of prostate cancer.
We verified the reproducibility of the study conducted by Elmarakeby et al.,
using both their original codebase, and our own re-implementation using more
up-to-date libraries. We quantified the contribution of network sparsification
by Reactome biological pathways, and confirmed its importance to P-NET's
superior performance. Furthermore, we explored alternative neural architectures
and approaches to incorporating biological information into the networks. We
experimented with three types of graph neural networks on the same training
data, and investigated the clinical prediction agreement between different
models. Our analyses demonstrated that deep neural networks with distinct
architectures make incorrect predictions for individual patients that are
persistent across different initializations of a specific neural architecture.
This suggests that different neural architectures are sensitive to different
aspects of the data, an important yet under-explored challenge for clinical
prediction tasks.
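The pathway-based sparsification described above can be sketched as a masked linear layer: a dense weight matrix is multiplied elementwise by a binary gene-to-pathway membership mask, so each hidden unit (pathway) receives input only from its member genes. This is an illustrative reconstruction, not the authors' implementation; the membership matrix below is a hypothetical stand-in for the Reactome gene-to-pathway map.

```python
# Sketch of a P-NET-style biologically informed sparse layer in plain NumPy.
import numpy as np

rng = np.random.default_rng(0)

n_genes, n_pathways = 6, 2
# Hypothetical membership mask: mask[g, p] = 1 if gene g belongs to pathway p
# (in P-NET this would come from Reactome annotations).
mask = np.array([
    [1, 0],
    [1, 0],
    [1, 0],
    [0, 1],
    [0, 1],
    [1, 1],
], dtype=float)

# Dense weights, then sparsified by the biological mask: connections that
# have no pathway-membership support are zeroed out.
W = rng.normal(size=(n_genes, n_pathways))
W_sparse = W * mask

x = rng.normal(size=(1, n_genes))        # one patient's gene-level features
hidden = np.maximum(x @ W_sparse, 0.0)   # ReLU pathway-level activations

# Every connection absent from the mask stays exactly zero.
assert np.all(W_sparse[mask == 0] == 0)
print(hidden.shape)  # one activation per pathway
```

In training, the mask would be reapplied after each gradient update (or folded into the forward pass) so the zeroed connections never reappear; this single masked layer is the building block that the full network stacks through the pathway hierarchy.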
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Deception Detection from Linguistic and Physiological Data Streams Using Bimodal Convolutional Neural Networks [19.639533220155965]
This paper explores the application of convolutional neural networks for the purpose of multimodal deception detection.
We use a dataset built by interviewing 104 subjects about two topics, with one truthful and one falsified response from each subject about each topic.
arXiv Detail & Related papers (2023-11-18T02:44:33Z)
- Decoding Neuronal Networks: A Reservoir Computing Approach for Predicting Connectivity and Functionality [0.0]
Our model deciphers data obtained from electrophysiological measurements of neuronal cultures.
Notably, our model outperforms common methods like Cross-Correlation and Transfer-Entropy in predicting the network's connectivity map.
arXiv Detail & Related papers (2023-11-06T14:28:11Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Graph Neural Operators for Classification of Spatial Transcriptomics Data [1.408706290287121]
We propose a study incorporating various graph neural network approaches to validate the efficacy of applying neural operators towards prediction of brain regions in mouse brain tissue samples.
We were able to achieve an F1 score of nearly 72% for the graph neural operator approach which outperformed all baseline and other graph network approaches.
arXiv Detail & Related papers (2023-02-01T18:32:06Z)
- Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Persistent Homology Captures the Generalization of Neural Networks Without A Validation Set [0.0]
We suggest studying the training of neural networks with Algebraic Topology, specifically Persistent Homology.
Using simplicial complex representations of neural networks, we study the PH diagram distance evolution on the neural network learning process.
Results show that the PH diagram distance between consecutive neural network states correlates with the validation accuracy.
arXiv Detail & Related papers (2021-05-31T09:17:31Z)
- On the Exploitation of Neuroevolutionary Information: Analyzing the Past for a More Efficient Future [60.99717891994599]
We propose an approach that extracts information from neuroevolutionary runs, and use it to build a metamodel.
We inspect the best structures found during neuroevolutionary searches of generative adversarial networks with varying characteristics.
arXiv Detail & Related papers (2021-05-26T20:55:29Z)
- Can you tell? SSNet -- a Sagittal Stratum-inspired Neural Network Framework for Sentiment Analysis [1.0312968200748118]
We propose a neural network architecture that combines predictions of different models on the same text to construct robust, accurate and computationally efficient classifiers for sentiment analysis.
In particular, we propose a systematic new approach to combining multiple predictions based on a dedicated neural network, and develop a mathematical analysis of it along with state-of-the-art experimental results.
arXiv Detail & Related papers (2020-06-23T12:55:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.