MeDaS: An open-source platform as service to help break the walls
between medicine and informatics
- URL: http://arxiv.org/abs/2007.06013v2
- Date: Tue, 14 Jul 2020 01:59:08 GMT
- Title: MeDaS: An open-source platform as service to help break the walls
between medicine and informatics
- Authors: Liang Zhang, Johann Li, Ping Li, Xiaoyuan Lu, Peiyi Shen, Guangming
Zhu, Syed Afaq Shah, Mohammed Bennamoun, Kun Qian, Björn W. Schuller
- Abstract summary: We propose MeDaS -- the MeDical open-source platform as Service.
MeDaS is a collaborative and interactive service that helps researchers from a medical background easily use DL-related toolkits.
Built on a series of toolkits and utilities following the idea of RINV, the proposed MeDaS platform covers pre-processing, post-processing, augmentation, visualization, and the other phases needed in medical image analysis.
- Score: 20.618938647463654
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the past decade, deep learning (DL) has achieved unprecedented
success in numerous fields, including computer vision, natural language
processing, and healthcare. In particular, DL applications for advanced
medical image analysis, such as segmentation and classification, are
developing rapidly. On the one hand, there is a tremendous need for
researchers with medical, clinical, and informatics backgrounds to jointly
leverage the power of DL for medical image analysis and to share their
expertise, knowledge, skills, and experience. On the other hand, barriers
between these disciplines often hamper full and efficient collaboration. To
this end, we propose a novel open-source platform, MeDaS -- the MeDical
open-source platform as Service. To the best of our knowledge, MeDaS is the
first open-source platform providing a collaborative and interactive service
that lets researchers with a medical background easily use DL-related
toolkits, while helping scientists and engineers from the information
sciences understand the medical side. Built on a series of toolkits and
utilities following the idea of RINV (Rapid Implementation aNd Verification),
MeDaS covers pre-processing, post-processing, augmentation, visualization,
and the other phases needed in medical image analysis. Five tasks, covering
the lung, liver, brain, chest, and pathology, are validated and demonstrated
to be efficiently realisable with MeDaS.
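The abstract names the pipeline phases MeDaS bundles without showing how such a phase is invoked. The short Python sketch below is not MeDaS code; it is a minimal, self-contained illustration of what a typical pre-processing plus augmentation step for CT volumes looks like, and the intensity window, helper names, and synthetic input are all illustrative assumptions.

```python
# Illustrative sketch (not the MeDaS API): the kind of pre-processing and
# augmentation phases a medical-imaging pipeline typically chains together.
import numpy as np


def window_and_normalize(volume: np.ndarray,
                         low: float = -1000.0,
                         high: float = 400.0) -> np.ndarray:
    """Clip a CT volume to an intensity window and rescale it to [0, 1]."""
    clipped = np.clip(volume, low, high)
    return (clipped - low) / (high - low)


def random_augment(volume: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple, label-preserving augmentations: random flips plus noise."""
    for axis in range(volume.ndim):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
    volume = volume + rng.normal(0.0, 0.02, size=volume.shape)
    return volume.astype(np.float32)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for a CT scan; a real pipeline would load DICOM/NIfTI here.
    ct = rng.uniform(-1200.0, 600.0, size=(32, 64, 64))
    ready = random_augment(window_and_normalize(ct), rng)
    print(ready.shape, float(ready.min()), float(ready.max()))
```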
Related papers
- Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning [57.873833577058]
We build a multimodal dataset enriched with extensive medical knowledge. We then introduce our medical-specialized MLLM: Lingshu. Lingshu undergoes multi-stage training to embed medical expertise and enhance its task-solving capabilities.
arXiv Detail & Related papers (2025-06-08T08:47:30Z) - MediTools -- Medical Education Powered by LLMs [0.0]
This research project leverages large language models to enhance medical education and address workflow challenges.
Our first tool is a dermatology case simulation tool that uses real patient images depicting various dermatological conditions.
The application also features two additional tools: an AI-enhanced tool for engaging with LLMs to gain deeper insights into research papers, and a Google News tool that offers LLM generated summaries of articles for various medical specialties.
arXiv Detail & Related papers (2025-03-28T03:57:32Z) - Beyond Knowledge Silos: Task Fingerprinting for Democratization of Medical Imaging AI [0.36366740831145616]
We propose a framework for secure knowledge transfer in the field of medical image analysis.
Key to our approach is dataset "fingerprints", structured representations of feature distributions.
Our method outperforms traditional methods for identifying relevant knowledge.
arXiv Detail & Related papers (2024-12-11T20:28:42Z) - A Survey of Medical Vision-and-Language Applications and Their Techniques [48.268198631277315]
Medical vision-and-language models (MVLMs) have attracted substantial interest due to their capability to offer a natural language interface for interpreting complex medical data.
Here, we provide a comprehensive overview of MVLMs and the various medical tasks to which they have been applied.
We also examine the datasets used for these tasks and compare the performance of different models based on standardized evaluation metrics.
arXiv Detail & Related papers (2024-11-19T03:27:05Z) - MedKP: Medical Dialogue with Knowledge Enhancement and Clinical Pathway
Encoding [48.348511646407026]
We introduce the Medical dialogue with Knowledge enhancement and clinical Pathway encoding framework.
The framework integrates an external knowledge enhancement module through a medical knowledge graph and an internal clinical pathway encoding via medical entities and physician actions.
arXiv Detail & Related papers (2024-03-11T10:57:45Z) - Review of multimodal machine learning approaches in healthcare [0.0]
Clinicians rely on a variety of data sources to make informed decisions.
Recent advances in machine learning have facilitated the more efficient incorporation of multimodal data.
arXiv Detail & Related papers (2024-02-04T12:21:38Z) - A scoping review on multimodal deep learning in biomedical images and
texts [29.10320016193946]
Multimodal deep learning has the potential to revolutionize the analysis and interpretation of biomedical data.
This study reviewed the current uses of multimodal deep learning on five tasks.
arXiv Detail & Related papers (2023-07-14T14:08:54Z) - Towards Medical Artificial General Intelligence via Knowledge-Enhanced
Multimodal Pretraining [121.89793208683625]
Medical artificial general intelligence (MAGI) enables one foundation model to solve different medical tasks.
We propose a new paradigm called Medical-knowledge-enhanced mulTimOdal pretRaining (MOTOR).
arXiv Detail & Related papers (2023-04-26T01:26:19Z) - ChatCAD: Interactive Computer-Aided Diagnosis on Medical Image using
Large Language Models [53.73049253535025]
Large language models (LLMs) have recently demonstrated their potential in clinical applications.
This paper presents a method for integrating LLMs into medical-image CAD networks.
The goal is to merge the strengths of LLMs' medical domain knowledge and logical reasoning with the vision understanding capability of existing medical-image CAD models.
arXiv Detail & Related papers (2023-02-14T18:54:06Z) - Align, Reason and Learn: Enhancing Medical Vision-and-Language
Pre-training with Knowledge [68.90835997085557]
We propose a systematic and effective approach to enhance structured medical knowledge from three perspectives.
First, we align the representations of the vision encoder and the language encoder through knowledge.
Second, we inject knowledge into the multi-modal fusion model to enable the model to perform reasoning using knowledge as the supplementation of the input image and text.
Third, we guide the model to put emphasis on the most critical information in images and texts by designing knowledge-induced pretext tasks.
arXiv Detail & Related papers (2022-09-15T08:00:01Z) - An overview of deep learning in medical imaging [0.0]
Deep learning (DL) systems are cutting-edge ML systems spanning a broad range of disciplines.
Recent advances can bring tremendous improvement to the medical field.
Recent developments in DL for medical imaging, together with the relevant open problems, are reviewed.
arXiv Detail & Related papers (2022-02-17T09:44:57Z) - Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
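The CMIM entry above concerns representations that stay useful when a modality is missing at test-time. Its actual objective is information-theoretic; the hedged sketch below only illustrates the generic modality-dropout idea it relates to, and every name in it is an illustrative assumption rather than CMIM's code.

```python
# Illustrative sketch (not the CMIM method): a common training-time trick for
# robustness to missing modalities is to randomly drop one modality's features
# and let a shared fusion step learn to compensate.
import numpy as np


def fuse(image_feat: np.ndarray, text_feat: np.ndarray) -> np.ndarray:
    """Toy fusion: concatenate the two modality feature vectors."""
    return np.concatenate([image_feat, text_feat])


def modality_dropout(image_feat: np.ndarray, text_feat: np.ndarray,
                     rng: np.random.Generator, p_drop: float = 0.3):
    """With probability p_drop, zero out one randomly chosen modality."""
    if rng.random() < p_drop:
        if rng.random() < 0.5:
            image_feat = np.zeros_like(image_feat)
        else:
            text_feat = np.zeros_like(text_feat)
    return image_feat, text_feat


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    img = rng.normal(size=128)  # stand-in for an image encoder's output
    txt = rng.normal(size=128)  # stand-in for a report/text encoder's output
    img_aug, txt_aug = modality_dropout(img, txt, rng)
    print(fuse(img_aug, txt_aug).shape)  # (256,)
```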
This list is automatically generated from the titles and abstracts of the papers on this site.