Friend or Foe? Exploring the Implications of Large Language Models on
the Science System
- URL: http://arxiv.org/abs/2306.09928v1
- Date: Fri, 16 Jun 2023 15:50:17 GMT
- Authors: Benedikt Fecher, Marcel Hebing, Melissa Laufer, Jörg Pohle, Fabian Sofsky
- Abstract summary: ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education.
This research contributes to informed discussions on the impact of generative AI in science.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advent of ChatGPT by OpenAI has prompted extensive discourse on its
potential implications for science and higher education. While the impact on
education has been a primary focus, there is limited empirical research on the
effects of large language models (LLMs) and LLM-based chatbots on science and
scientific practice. To investigate this further, we conducted a Delphi study
involving 72 experts specialising in research and AI. The study focused on
applications and limitations of LLMs, their effects on the science system,
ethical and legal considerations, and the required competencies for their
effective use. Our findings highlight the transformative potential of LLMs in
science, particularly in administrative, creative, and analytical tasks.
However, risks related to bias, misinformation, and quality assurance need to
be addressed through proactive regulation and science education. This research
contributes to informed discussions on the impact of generative AI in science
and helps identify areas for future action.
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions [1.1970409518725493]
The article highlights the application areas that could have a positive impact on society along with the ethical considerations.
It includes responsible development considerations, algorithmic improvements, ethical challenges, and societal implications.
arXiv Detail & Related papers (2024-09-25T14:36:30Z)
- SciKnowEval: Evaluating Multi-level Scientific Knowledge of Large Language Models [35.98892300665275]
We introduce the SciKnowEval benchmark, a framework that evaluates large language models (LLMs) across five progressive levels of scientific knowledge.
These levels aim to assess the breadth and depth of scientific knowledge in LLMs, including memory, comprehension, reasoning, discernment, and application.
We benchmark 26 advanced open-source and proprietary LLMs using zero-shot and few-shot prompting strategies.
arXiv Detail & Related papers (2024-06-13T13:27:52Z)
- LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery [141.39722070734737]
We propose to enhance the knowledge-driven, abstract reasoning abilities of Large Language Models with the computational strength of simulations.
We introduce Scientific Generative Agent (SGA), a bilevel optimization framework.
We conduct experiments to demonstrate our framework's efficacy in law discovery and molecular design.
arXiv Detail & Related papers (2024-05-16T03:04:10Z)
- Scientific Large Language Models: A Survey on Biological & Chemical Domains [47.97810890521825]
Large Language Models (LLMs) have emerged as a transformative power in enhancing natural language comprehension.
The application of LLMs extends beyond conventional linguistic boundaries, encompassing specialized linguistic systems developed within various scientific disciplines.
As a burgeoning area in the community of AI for Science, scientific LLMs warrant comprehensive exploration.
arXiv Detail & Related papers (2024-01-26T05:33:34Z)
- Taking the Next Step with Generative Artificial Intelligence: The Transformative Role of Multimodal Large Language Models in Science Education [13.87944568193996]
Multimodal Large Language Models (MLLMs) are capable of processing multimodal data including text, sound, and visual inputs.
This paper explores the transformative role of MLLMs in central aspects of science education by presenting exemplary innovative learning scenarios.
arXiv Detail & Related papers (2024-01-01T18:11:43Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models [70.5763210869525]
We introduce SciBench, an expansive benchmark suite for Large Language Models (LLMs).
SciBench contains a dataset featuring a range of collegiate-level scientific problems from mathematics, chemistry, and physics domains.
The results reveal that the current LLMs fall short of delivering satisfactory performance, with the best overall score of merely 43.22%.
arXiv Detail & Related papers (2023-07-20T07:01:57Z)
- Science in the Era of ChatGPT, Large Language Models and Generative AI: Challenges for Research Ethics and How to Respond [3.3504365823045044]
This paper reviews the challenges and the ethical and integrity risks that the advent of generative AI poses to the conduct of science.
The role of AI language models as a research instrument and subject is scrutinized along with ethical implications for scientists, participants and reviewers.
arXiv Detail & Related papers (2023-05-24T16:23:46Z)
- Learnings from Frontier Development Lab and SpaceML -- AI Accelerators for NASA and ESA [57.06643156253045]
Research with AI and ML technologies takes place in a variety of settings, often with asynchronous goals and timelines.
We perform a case study of the Frontier Development Lab (FDL), an AI accelerator under a public-private partnership from NASA and ESA.
FDL research follows principled practices that are grounded in responsible development, conduct, and dissemination of AI research.
arXiv Detail & Related papers (2020-11-09T21:23:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.