Abstractive summarization of hospitalisation histories with transformer networks
- URL: http://arxiv.org/abs/2204.02208v1
- Date: Tue, 5 Apr 2022 13:38:39 GMT
- Title: Abstractive summarization of hospitalisation histories with transformer networks
- Authors: Alexander Yalunin, Dmitriy Umerenkov, Vladimir Kokh
- Abstract summary: We present a novel approach to abstractive summarization of patient hospitalisation histories.
We applied an encoder-decoder framework with a Longformer neural network as the encoder and BERT as the decoder.
- Score: 68.96380145211093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present a novel approach to abstractive summarization of patient hospitalisation histories. We applied an encoder-decoder framework with a Longformer neural network as the encoder and BERT as the decoder. Our experiments show improved quality on some summarization tasks compared with pointer-generator networks (PGN). We also conducted a study in which experienced physicians evaluated the output of our model against the PGN baseline and human-generated abstracts; the study confirmed the effectiveness of our model.
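The encoder-decoder setup described in the abstract can be approximated with off-the-shelf components. The sketch below wires a Longformer encoder to a BERT decoder via Hugging Face's EncoderDecoderModel; it is a minimal illustration only, and the checkpoint names, sequence lengths and generation settings are assumptions, not the authors' actual configuration. The decoder's cross-attention weights are newly initialized, so the model would need fine-tuning on hospitalisation summaries before its output is meaningful.

```python
# Minimal sketch: Longformer encoder + BERT decoder for abstractive summarization.
# Assumes the Hugging Face `transformers` library; checkpoints and hyperparameters
# are illustrative, not taken from the paper.
from transformers import AutoTokenizer, EncoderDecoderModel

enc_name, dec_name = "allenai/longformer-base-4096", "bert-base-uncased"
enc_tok = AutoTokenizer.from_pretrained(enc_name)   # tokenizes the long input history
dec_tok = AutoTokenizer.from_pretrained(dec_name)   # decodes the generated summary

# Tie a pretrained Longformer encoder to a pretrained BERT decoder; the decoder's
# cross-attention layers are randomly initialized and require fine-tuning.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(enc_name, dec_name)
model.config.decoder_start_token_id = dec_tok.cls_token_id
model.config.eos_token_id = dec_tok.sep_token_id
model.config.pad_token_id = dec_tok.pad_token_id

def summarize(history: str, max_summary_tokens: int = 128) -> str:
    """Generate an abstractive summary of one hospitalisation history."""
    inputs = enc_tok(history, return_tensors="pt", truncation=True, max_length=4096)
    summary_ids = model.generate(inputs.input_ids,
                                 attention_mask=inputs.attention_mask,
                                 max_length=max_summary_tokens,
                                 num_beams=4)
    return dec_tok.decode(summary_ids[0], skip_special_tokens=True)
```

The 4096-token Longformer window is what makes whole hospitalisation histories fit into a single encoder pass; beam search on the BERT side then produces the abstractive summary.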
Related papers
- Deep Generative Models Unveil Patterns in Medical Images Through Vision-Language Conditioning [3.4299097748670255]
Deep generative models have significantly advanced medical imaging analysis by enhancing dataset size and quality.
We employ a generative structure with hybrid conditions, combining clinical data and segmentation masks to guide the image synthesis process.
Our approach differs from traditional medical report-guided synthesis and presents a more challenging task, because our clinical information is less visually correlated with the images.
arXiv Detail & Related papers (2024-10-17T17:48:36Z)
- Intraoperative Registration by Cross-Modal Inverse Neural Rendering [61.687068931599846]
We present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering.
Our approach separates implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively.
We tested our method on retrospective patients' data from clinical cases, showing that it outperforms the state of the art while meeting current clinical standards for registration.
arXiv Detail & Related papers (2024-09-18T13:40:59Z)
- Neuro-TransUNet: Segmentation of stroke lesion in MRI using transformers [0.6554326244334866]
This study introduces the Neuro-TransUNet framework, which synergizes the U-Net's spatial feature extraction with SwinUNETR's global contextual processing ability.
The proposed Neuro-TransUNet model, trained with the ATLAS v2.0 training dataset, outperforms existing deep learning algorithms and establishes a new benchmark in stroke lesion segmentation.
arXiv Detail & Related papers (2024-06-10T04:36:21Z)
- A Sentiment Analysis of Medical Text Based on Deep Learning [1.8130068086063336]
This paper focuses on the medical domain, using bidirectional encoder representations from transformers (BERT) as the basic pre-trained model.
Experiments and analyses were conducted on the METS-CoV dataset to explore the training performance after integrating different deep learning networks.
CNN models outperform other networks when trained on smaller medical text datasets in combination with pre-trained models like BERT.
arXiv Detail & Related papers (2024-04-16T12:20:49Z)
- METGAN: Generative Tumour Inpainting and Modality Synthesis in Light Sheet Microscopy [4.872960046536882]
We introduce a novel generative method which leverages real anatomical information to generate realistic image-label pairs of tumours.
We construct a dual-pathway generator, for the anatomical image and label, trained in a cycle-consistent setup, constrained by an independent, pretrained segmentor.
The generated images yield significant quantitative improvement compared to existing methods.
arXiv Detail & Related papers (2021-04-22T11:18:17Z)
- Transformer-based Methods for Recognizing Ultra Fine-grained Entities (RUFES) [1.456207068672607]
This paper summarizes the participation of the Laboratoire Informatique, Image et Interaction (L3i laboratory) of the University of La Rochelle in the Recognizing Ultra Fine-grained Entities (RUFES) track within the Text Analysis Conference (TAC) series of evaluation workshops.
Our participation relies on two neural-based models, one based on a pre-trained and fine-tuned language model with a stack of Transformer layers for fine-grained entity extraction and one out-of-the-box model for within-document entity coreference.
arXiv Detail & Related papers (2021-04-13T09:23:16Z)
- Learning to Segment Human Body Parts with Synthetically Trained Deep Convolutional Networks [58.0240970093372]
This paper presents a new framework for human body part segmentation based on Deep Convolutional Neural Networks trained using only synthetic data.
The proposed approach achieves cutting-edge results without the need to train the models on real annotated data of human body parts.
arXiv Detail & Related papers (2021-02-02T12:26:50Z)
- Predicting Clinical Diagnosis from Patients Electronic Health Records Using BERT-based Neural Networks [62.9447303059342]
We show the importance of this problem in the medical community.
We present a modification of the Bidirectional Encoder Representations from Transformers (BERT) model for sequence classification.
We use a large-scale Russian EHR dataset consisting of about 4 million unique patient visits.
arXiv Detail & Related papers (2020-07-15T09:22:55Z)
- Deep Residual 3D U-Net for Joint Segmentation and Texture Classification of Nodules in Lung [91.3755431537592]
We present a method for lung nodule segmentation, texture classification and subsequent follow-up recommendation from the CT image of the lung.
Our method consists of a neural network model based on the popular U-Net architecture family, modified for the joint nodule segmentation and texture classification tasks, and an ensemble-based model for the follow-up recommendation.
arXiv Detail & Related papers (2020-06-25T07:20:41Z)
- The efficiency of deep learning algorithms for detecting anatomical reference points on radiological images of the head profile [55.41644538483948]
A U-Net neural network detects anatomical reference points more accurately than a fully convolutional neural network.
The reference points detected by the U-Net neural network are also closer to the average detections produced by a group of orthodontists.
arXiv Detail & Related papers (2020-05-25T13:51:03Z)