A Large-Scale, Automated Study of Language Surrounding Artificial
Intelligence
- URL: http://arxiv.org/abs/2102.12516v1
- Date: Wed, 24 Feb 2021 19:14:53 GMT
- Title: A Large-Scale, Automated Study of Language Surrounding Artificial
Intelligence
- Authors: Autumn Toney
- Abstract summary: This work presents a large-scale analysis of artificial intelligence (AI) and machine learning (ML) references within news articles and scientific publications between 2011 and 2019.
We implement word association measurements that automatically identify shifts in language co-occurring with AI/ML and quantify the strength of these word associations.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work presents a large-scale analysis of artificial intelligence (AI) and
machine learning (ML) references within news articles and scientific
publications between 2011 and 2019. We implement word association measurements
that automatically identify shifts in language co-occurring with AI/ML and
quantify the strength of these word associations. Our results highlight the
evolution of perceptions and definitions around AI/ML and detect emerging
application areas, models, and systems (e.g., blockchain and cybersecurity).
Recent small-scale, manual studies have explored AI/ML discourse within the
general public, the policymaker community, and researcher community, but are
limited in their scalability and longevity. Our methods provide new views into
public perceptions and subject-area expert discussions of AI/ML and greatly
exceed the explanative power of prior work.
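The abstract does not specify which word-association measurement the authors implement; a minimal sketch of one common measure, pointwise mutual information (PMI) computed over document-level co-occurrence, might look like the following (the corpora, target term, and function name here are illustrative assumptions, not the paper's actual implementation):

```python
from collections import Counter
import math

def pmi_associations(documents, target):
    """Compute pointwise mutual information (PMI) between a target term
    and every other term, treating each document as one co-occurrence
    window: PMI(x, y) = log(p(x, y) / (p(x) * p(y))). Higher values mean
    the terms co-occur more often than chance would predict.
    """
    n_docs = len(documents)
    term_counts = Counter()   # number of documents containing each term
    pair_counts = Counter()   # number of documents containing target AND term
    for doc in documents:
        terms = set(doc.lower().split())
        term_counts.update(terms)
        if target in terms:
            for term in terms - {target}:
                pair_counts[term] += 1
    p_target = term_counts[target] / n_docs
    scores = {}
    for term, joint in pair_counts.items():
        p_term = term_counts[term] / n_docs
        p_joint = joint / n_docs
        scores[term] = math.log(p_joint / (p_target * p_term))
    return scores

# Hypothetical toy corpora standing in for the news/publication snapshots.
corpus_2011 = [
    "ai systems beat humans at chess",
    "robotics and ai research funding grows",
    "chess engines use ai search",
]
corpus_2019 = [
    "ai and blockchain secure transactions",
    "cybersecurity firms adopt ai detection",
    "ai blockchain startups raise funding",
]
early = pmi_associations(corpus_2011, "ai")
late = pmi_associations(corpus_2019, "ai")
```

Terms that associate with "ai" in the later corpus but not the earlier one (here, "blockchain" and "cybersecurity") are the kind of shift the paper's method is designed to surface as emerging application areas.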
Related papers
- How to Measure the Intelligence of Large Language Models?
We argue that the intelligence of language models should not be assessed solely by task-specific statistical metrics.
We show that the choice of metrics dramatically influences assessments of potential intelligence emergence.
arXiv Detail & Related papers (2024-07-30T13:53:48Z) - Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z) - Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal.
We identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI.
arXiv Detail & Related papers (2024-04-17T02:57:42Z) - Position Paper: Agent AI Towards a Holistic Intelligence
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Deep Learning Approaches for Improving Question Answering Systems in
Hepatocellular Carcinoma Research
In recent years, advancements in natural language processing (NLP) have been fueled by deep learning techniques.
BERT and GPT-3, trained on vast amounts of data, have revolutionized language understanding and generation.
This paper delves into the current landscape and future prospects of large-scale model-based NLP.
arXiv Detail & Related papers (2024-02-25T09:32:17Z) - LB-KBQA: Large-language-model and BERT based Knowledge-Based Question
and Answering System
We propose LB-KBQA, a novel KBQA system based on a Large Language Model (LLM) and BERT.
With the help of generative AI, our proposed method can detect newly appearing intents and acquire new knowledge.
In experiments on financial domain question answering, our model has demonstrated superior effectiveness.
arXiv Detail & Related papers (2024-02-05T16:47:17Z) - AI for social science and social science of AI: A Survey
Recent advancements in artificial intelligence have sparked a rethinking of artificial general intelligence possibilities.
The increasing human-like capabilities of AI are also attracting attention in social science research.
arXiv Detail & Related papers (2024-01-22T10:57:09Z) - Large Language Models for Scientific Synthesis, Inference and
Explanation
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment this "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z) - A Survey of Large Language Models
Language modeling has been widely studied for language understanding and generation in the past two decades.
Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora.
To distinguish models by parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
arXiv Detail & Related papers (2023-03-31T17:28:46Z) - Measuring Ethics in AI with AI: A Methodology and Dataset Construction
We propose to use such newfound capabilities of AI technologies to augment our AI measuring capabilities.
We do so by training a model to classify publications related to ethical issues and concerns.
We highlight the implications of AI metrics, in particular their contribution towards developing trustworthy and fair AI-based tools and technologies.
arXiv Detail & Related papers (2021-07-26T00:26:12Z) - LioNets: A Neural-Specific Local Interpretation Technique Exploiting
Penultimate Layer Information
Interpretable machine learning (IML) is an urgent topic of research.
This paper focuses on a local-based, neural-specific interpretation process applied to textual and time-series data.
arXiv Detail & Related papers (2021-04-13T09:39:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.