Friend or Foe? Exploring the Implications of Large Language Models on
the Science System
- URL: http://arxiv.org/abs/2306.09928v1
- Date: Fri, 16 Jun 2023 15:50:17 GMT
- Title: Friend or Foe? Exploring the Implications of Large Language Models on
the Science System
- Authors: Benedikt Fecher, Marcel Hebing, Melissa Laufer, Jörg Pohle, Fabian Sofsky
- Abstract summary: ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education.
This research contributes to informed discussions on the impact of generative AI in science.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advent of ChatGPT by OpenAI has prompted extensive discourse on its
potential implications for science and higher education. While the impact on
education has been a primary focus, there is limited empirical research on the
effects of large language models (LLMs) and LLM-based chatbots on science and
scientific practice. To investigate this further, we conducted a Delphi study
involving 72 experts specialising in research and AI. The study focused on
applications and limitations of LLMs, their effects on the science system,
ethical and legal considerations, and the required competencies for their
effective use. Our findings highlight the transformative potential of LLMs in
science, particularly in administrative, creative, and analytical tasks.
However, risks related to bias, misinformation, and quality assurance need to
be addressed through proactive regulation and science education. This research
contributes to informed discussions on the impact of generative AI in science
and helps identify areas for future action.
Related papers
- Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation [58.064940977804596]
A plethora of new AI models and tools has been proposed, promising to empower researchers and academics worldwide to conduct their research more effectively and efficiently.
Ethical concerns regarding shortcomings of these tools and potential for misuse take a particularly prominent place in our discussion.
arXiv Detail & Related papers (2025-02-07T18:26:45Z) - Position: Multimodal Large Language Models Can Significantly Advance Scientific Reasoning [51.11965014462375]
Multimodal Large Language Models (MLLMs) integrate text, images, and other modalities.
This paper argues that MLLMs can significantly advance scientific reasoning across disciplines such as mathematics, physics, chemistry, and biology.
arXiv Detail & Related papers (2025-02-05T04:05:27Z) - Bridging AI and Science: Implications from a Large-Scale Literature Analysis of AI4Science [25.683422870223076]
We present a large-scale analysis of the AI4Science literature.
We quantitatively highlight key disparities between AI methods and scientific problems.
We explore the potential and challenges of facilitating collaboration between AI and scientific communities.
arXiv Detail & Related papers (2024-11-27T00:40:51Z) - Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - SciKnowEval: Evaluating Multi-level Scientific Knowledge of Large Language Models [35.98892300665275]
We introduce the SciKnowEval benchmark, a framework that evaluates large language models (LLMs) across five progressive levels of scientific knowledge.
These levels aim to assess the breadth and depth of scientific knowledge in LLMs, including memory, comprehension, reasoning, discernment, and application.
We benchmark 26 advanced open-source and proprietary LLMs using zero-shot and few-shot prompting strategies (a brief sketch of these two prompting styles appears after this list).
arXiv Detail & Related papers (2024-06-13T13:27:52Z) - Scientific Large Language Models: A Survey on Biological & Chemical Domains [47.97810890521825]
Large Language Models (LLMs) have emerged as a transformative power in enhancing natural language comprehension.
The application of LLMs extends beyond conventional linguistic boundaries, encompassing specialized linguistic systems developed within various scientific disciplines.
As a burgeoning area in the community of AI for Science, scientific LLMs warrant comprehensive exploration.
arXiv Detail & Related papers (2024-01-26T05:33:34Z) - SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models [70.5763210869525]
We introduce SciBench, an expansive benchmark suite for Large Language Models (LLMs).
SciBench contains a dataset featuring a range of collegiate-level scientific problems from mathematics, chemistry, and physics domains.
The results reveal that the current LLMs fall short of delivering satisfactory performance, with the best overall score of merely 43.22%.
arXiv Detail & Related papers (2023-07-20T07:01:57Z) - Learnings from Frontier Development Lab and SpaceML -- AI Accelerators
for NASA and ESA [57.06643156253045]
Research with AI and ML technologies lives in a variety of settings with often asynchronous goals and timelines.
We perform a case study of the Frontier Development Lab (FDL), an AI accelerator under a public-private partnership from NASA and ESA.
FDL research follows principled practices that are grounded in responsible development, conduct, and dissemination of AI research.
arXiv Detail & Related papers (2020-11-09T21:23:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.