The Role of AI in Drug Discovery: Challenges, Opportunities, and
Strategies
- URL: http://arxiv.org/abs/2212.08104v1
- Date: Thu, 8 Dec 2022 23:23:39 GMT
- Title: The Role of AI in Drug Discovery: Challenges, Opportunities, and
Strategies
- Authors: Alexandre Blanco-González, Alfonso Cabezón, Alejandro Seco-González,
Daniel Conde-Torres, Paula Antelo-Riveiro, Ángel Piñeiro, Rebeca
García-Fandiño
- Abstract summary: The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
- Score: 97.5153823429076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) has the potential to revolutionize the drug
discovery process, offering improved efficiency, accuracy, and speed. However,
the successful application of AI is dependent on the availability of
high-quality data, the addressing of ethical concerns, and the recognition of
the limitations of AI-based approaches. In this article, the benefits,
challenges and drawbacks of AI in this field are reviewed, and possible
strategies and approaches for overcoming the present obstacles are proposed.
The use of data augmentation, explainable AI, and the integration of AI with
traditional experimental methods, as well as the potential advantages of AI in
pharmaceutical research, are also discussed. Overall, this review highlights the
potential of AI in drug discovery and provides insights into the challenges and
opportunities for realizing its potential in this field.
Note from the human authors: This article was created to test the ability of
ChatGPT, a chatbot based on the GPT-3.5 language model, to assist human authors
in writing review articles. The text generated by the AI following our
instructions (see Supporting Information) was used as a starting point, and its
ability to automatically generate content was evaluated. After conducting a
thorough review, human authors practically rewrote the manuscript, striving to
maintain a balance between the original proposal and scientific criteria. The
advantages and limitations of using AI for this purpose are discussed in the
last section.
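The abstract names data augmentation as one strategy for coping with scarce, high-quality training data, but the review discusses it only at a conceptual level. Purely as an illustration, the sketch below shows one common form of molecular data augmentation, SMILES enumeration; it assumes the open-source RDKit toolkit, and the function name, parameters, and example molecule are this sketch's own choices rather than anything specified in the paper.

```python
# Illustrative sketch only: SMILES enumeration as a simple data-augmentation
# step for molecular property models. Assumes RDKit is installed; nothing
# here is taken from the paper itself.
from rdkit import Chem


def enumerate_smiles(smiles: str, n_variants: int = 5) -> list[str]:
    """Return up to n_variants non-canonical SMILES strings encoding the
    same molecule, usable as augmented inputs for sequence-based models."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:  # unparsable input -> nothing to augment
        return []
    variants: set[str] = set()
    attempts = 0
    # doRandom=True makes RDKit start the atom traversal at a random atom,
    # so repeated calls usually produce different strings for one molecule.
    while len(variants) < n_variants and attempts < 10 * n_variants:
        variants.add(Chem.MolToSmiles(mol, canonical=False, doRandom=True))
        attempts += 1
    return sorted(variants)


if __name__ == "__main__":
    # Aspirin as a small worked example.
    for s in enumerate_smiles("CC(=O)Oc1ccccc1C(=O)O"):
        print(s)
```
Several equivalent string encodings give a sequence model multiple views of the same molecule, which is the sense in which data augmentation is typically applied to limited chemical datasets.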
Related papers
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Emotional Intelligence Through Artificial Intelligence : NLP and Deep Learning in the Analysis of Healthcare Texts [1.9374282535132377]
This manuscript presents a methodical examination of the utilization of Artificial Intelligence in the assessment of emotions in texts related to healthcare.
We scrutinize numerous research studies that employ AI to augment sentiment analysis, categorize emotions, and forecast patient outcomes.
There persist challenges, which encompass ensuring the ethical application of AI, safeguarding patient confidentiality, and addressing potential biases in algorithmic procedures.
arXiv Detail & Related papers (2024-03-14T15:58:13Z)
- Generative AI in Writing Research Papers: A New Type of Algorithmic Bias and Uncertainty in Scholarly Work [0.38850145898707145]
Large language models (LLMs) and generative AI tools present challenges in identifying and addressing biases.
Generative AI tools are susceptible to goal misgeneralization, hallucinations, and adversarial attacks such as red teaming prompts.
We find that incorporating generative AI in the process of writing research manuscripts introduces a new type of context-induced algorithmic bias.
arXiv Detail & Related papers (2023-12-04T04:05:04Z)
- Towards Possibilities & Impossibilities of AI-generated Text Detection: A Survey [97.33926242130732]
Large Language Models (LLMs) have revolutionized the domain of natural language processing (NLP) with remarkable capabilities of generating human-like text responses.
Despite these advancements, several works in the existing literature have raised serious concerns about the potential misuse of LLMs.
To address these concerns, a consensus among the research community is to develop algorithmic solutions to detect AI-generated text.
arXiv Detail & Related papers (2023-10-23T18:11:32Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- AI and Non AI Assessments for Dementia [11.5631890541199]
Current progress in the artificial intelligence domain has led to the development of various types of AI-powered dementia assessments.
This paper fills the gap in the literature in explaining the existing solutions for the recognition of dementia to clinicians.
It follows a review of papers on AI and non-AI assessments for dementia to provide valuable information about various dementia assessments for both the AI and medical communities.
arXiv Detail & Related papers (2023-06-30T03:28:47Z)
- HiTZ@Antidote: Argumentation-driven Explainable Artificial Intelligence for Digital Medicine [7.089952396422835]
ANTIDOTE fosters an integrated vision of explainable AI, where low-level characteristics of the deep learning process are combined with higher level schemes proper of the human argumentation capacity.
As a first result of the project, we publish the Antidote CasiMedicos dataset to facilitate research on explainable AI in general, and argumentation in the medical domain in particular.
arXiv Detail & Related papers (2023-06-09T16:50:02Z)
- Why we do need Explainable AI for Healthcare [0.0]
Despite valid concerns, we argue that the Explainable AI research program is still central to human-machine interaction.
arXiv Detail & Related papers (2022-06-30T15:35:50Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.