Scientia Potentia Est -- On the Role of Knowledge in Computational
Argumentation
- URL: http://arxiv.org/abs/2107.00281v1
- Date: Thu, 1 Jul 2021 08:12:41 GMT
- Title: Scientia Potentia Est -- On the Role of Knowledge in Computational
Argumentation
- Authors: Anne Lauscher, Henning Wachsmuth, Iryna Gurevych, and Goran Glavaš
- Abstract summary: We propose a pyramid of types of knowledge required in computational argumentation.
We briefly discuss the state of the art on the role and integration of these types in the field.
- Score: 52.903665881174845
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Despite extensive research in the past years, the computational modeling of
argumentation remains challenging. The primary reason lies in the inherent complexity of the underlying human processes, which commonly require the integration of extensive knowledge far beyond what is needed for many other natural language understanding tasks. Existing work on the mining, assessment,
reasoning, and generation of arguments acknowledges this issue, calling for
more research on the integration of common sense and world knowledge into
computational models. However, a systematic effort to collect and organize the
types of knowledge needed is still missing, hindering targeted progress in the
field. In this opinionated survey paper, we address the issue by (1) proposing
a pyramid of types of knowledge required in computational argumentation, (2)
briefly discussing the state of the art on the role and integration of these
types in the field, and (3) outlining the main challenges for future work.
Related papers
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Machine-assisted quantitizing designs: augmenting humanities and social sciences with artificial intelligence [0.0]
Large language models (LLMs) have been shown to present an unprecedented opportunity to scale up data analytics in the humanities and social sciences.
We build on mixed-methods quantitizing and converting design principles, together with feature analysis from linguistics, to transparently integrate human expertise and machine scalability.
The approach is discussed and demonstrated in over a dozen LLM-assisted case studies, covering 9 diverse languages, multiple disciplines and tasks.
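In mixed-methods terminology, "quantitizing" means converting qualitative codes into quantities that can be analysed statistically. As a purely hypothetical illustration of that step (not the paper's pipeline), the sketch below uses a trivial rule-based annotator as a stand-in for an LLM or human coder and tabulates the resulting codes into counts and proportions; all texts, labels, and function names are invented for the example.

```python
from collections import Counter

def annotate(text: str) -> str:
    """Stand-in for an LLM or human coder assigning a qualitative code to a text.

    A trivial keyword rule is used here purely so the example runs end to end.
    """
    return "stance:positive" if "good" in text.lower() else "stance:other"

documents = [
    "This policy is a good idea.",
    "The proposal is good for small towns.",
    "The costs are unclear.",
]

# Quantitize: turn the qualitative codes into counts and proportions.
counts = Counter(annotate(doc) for doc in documents)
for code, count in counts.items():
    print(f"{code}: {count} ({count / len(documents):.0%})")
```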
arXiv Detail & Related papers (2023-09-24T14:21:50Z)
- Reinforcement Learning with Knowledge Representation and Reasoning: A Brief Survey [24.81327556378729]
Reinforcement Learning has developed tremendously in recent years, yet it still faces significant obstacles in addressing complex real-life problems.
Recently, there has been a rapidly growing interest in the use of Knowledge Representation and Reasoning.
arXiv Detail & Related papers (2023-04-24T13:35:11Z)
- A Comprehensive Survey of Continual Learning: Theory, Method and Application [64.23253420555989]
We present a comprehensive survey of continual learning, seeking to bridge the basic settings, theoretical foundations, representative methods, and practical applications.
We summarize the general objectives of continual learning as ensuring a proper stability-plasticity trade-off and adequate intra-/inter-task generalizability in the context of resource efficiency.
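The stability-plasticity trade-off can be made concrete with a regularization-based update in which learning a new task is penalized for drifting away from parameters fitted to an earlier task. The NumPy sketch below is a minimal toy illustration of that idea under assumed quadratic task losses, not a method from the survey; the `stability_weight` knob controls the trade-off.

```python
import numpy as np

def continual_update(theta, old_theta, target, stability_weight=1.0, lr=0.1, steps=100):
    """Gradient descent on: toy new-task loss + penalty anchoring to old parameters.

    stability_weight -> 0:   maximal plasticity (fit the new task, forget the old one).
    stability_weight large:  maximal stability (stay near the old parameters).
    """
    theta = theta.copy()
    for _ in range(steps):
        grad_new = theta - target                                 # gradient of 0.5*||theta - target||^2
        grad_stability = stability_weight * (theta - old_theta)   # gradient of the anchor penalty
        theta -= lr * (grad_new + grad_stability)
    return theta

old_theta = np.array([1.0, 0.0])   # parameters after "task A"
target_b = np.array([0.0, 1.0])    # optimum of "task B"

for w in (0.0, 1.0, 10.0):
    theta_b = continual_update(old_theta, old_theta, target_b, stability_weight=w)
    print(f"stability_weight={w:4}: theta = {theta_b.round(3)}")
```

Sweeping the penalty weight traces out exactly the trade-off named above: small weights favour plasticity on the new task, large weights favour stability on the old one.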
arXiv Detail & Related papers (2023-01-31T11:34:56Z)
- Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, covering the content level and the structure level, which generate question pairs that differ in surface form but serve similar purposes.
Then, to fully exploit the hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware ranking strategy.
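The summary above names the contrastive objective but not its form; as a generic illustration (not QuesCo's actual implementation), the PyTorch sketch below computes an InfoNCE-style loss over a batch of embedding pairs, treating each question and its augmented counterpart as the only positive pair. The encoder is replaced by random tensors, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Contrastive loss over a batch of paired embeddings.

    z_a[i] and z_b[i] embed two augmented views of question i; every other
    question in the batch acts as a negative for both.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature     # scaled cosine similarities
    targets = torch.arange(z_a.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random tensors stand in for encoder outputs of the two augmentation levels.
batch_size, dim = 8, 64
z_content = torch.randn(batch_size, dim)     # e.g. content-level augmentation
z_structure = torch.randn(batch_size, dim)   # e.g. structure-level augmentation
print(info_nce_loss(z_content, z_structure).item())
```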
arXiv Detail & Related papers (2023-01-18T14:23:29Z)
- ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering [70.6359636116848]
We propose a new large-scale dataset, ConvFinQA, to study the chain of numerical reasoning in conversational question answering.
Our dataset poses a great challenge for modeling long-range, complex numerical reasoning paths in real-world conversations.
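To make a "chain of numerical reasoning" concrete, the sketch below evaluates a small chained program in which later steps refer to earlier results via #0, #1, and so on. This is a hypothetical mini-interpreter written for illustration; the operation names and reference syntax are assumptions, not necessarily ConvFinQA's annotation format.

```python
# Hypothetical interpreter for a chained numerical-reasoning program.
OPS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
}

def run_program(steps):
    """Each step is (op_name, arg1, arg2); an argument may be a number or a
    "#i" string referring to the result of step i."""
    results = []
    for op_name, *args in steps:
        resolved = [results[int(a[1:])] if isinstance(a, str) and a.startswith("#") else a
                    for a in args]
        results.append(OPS[op_name](*resolved))
    return results[-1]

# "Revenue rose from 80 to 100; by what fraction did it grow?"
program = [("subtract", 100, 80), ("divide", "#0", 80)]
print(run_program(program))  # 0.25
```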
arXiv Detail & Related papers (2022-10-07T23:48:50Z)
- Solving Quantitative Reasoning Problems with Language Models [53.53969870599973]
We introduce Minerva, a large language model pretrained on general natural language data and further trained on technical content.
The model achieves state-of-the-art performance on technical benchmarks without the use of external tools.
We also evaluate our model on over two hundred undergraduate-level problems in physics, biology, chemistry, economics, and other sciences.
arXiv Detail & Related papers (2022-06-29T18:54:49Z)
- Computational Argumentation and Cognition [0.3867363075280543]
This paper stems from the 1st Workshop on Computational Argumentation and Cognition (COGNITAR).
It argues that, within the context of Human-Centric AI, the use of theory and methods from Computational Argumentation for the study of Cognition is a promising avenue to pursue.
The paper presents the main problems and challenges in the area that need to be addressed, both at the scientific level and at the level of synthesizing ideas and approaches from the various disciplines involved.
arXiv Detail & Related papers (2021-11-12T21:44:30Z)
- A Data-Driven Study of Commonsense Knowledge using the ConceptNet Knowledge Base [8.591839265985412]
Acquiring commonsense knowledge and reasoning is recognized as an important frontier in achieving general Artificial Intelligence (AI).
In this paper, we propose and conduct a systematic study to enable a deeper understanding of commonsense knowledge by doing an empirical and structural analysis of the ConceptNet knowledge base.
Detailed experimental results on three carefully designed research questions, using state-of-the-art unsupervised graph representation learning ('embedding') and clustering techniques, reveal deep substructures in ConceptNet relations.
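As a toy analogue of that pipeline (embed the graph, then cluster), the sketch below builds a tiny hand-written set of commonsense triples, embeds the resulting graph spectrally, and groups concepts with k-means. The triples, the choice of spectral embedding, and the cluster count are illustrative assumptions; the paper itself works on the full ConceptNet graph with state-of-the-art embedding techniques.

```python
import networkx as nx
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import KMeans

# Hand-written triples standing in for ConceptNet (head, relation, tail).
triples = [
    ("dog", "IsA", "animal"), ("cat", "IsA", "animal"),
    ("dog", "CapableOf", "bark"), ("cat", "CapableOf", "meow"),
    ("dog", "AtLocation", "kitchen"),
    ("fork", "UsedFor", "eating"), ("spoon", "UsedFor", "eating"),
    ("fork", "AtLocation", "kitchen"), ("spoon", "AtLocation", "kitchen"),
]

# Undirected concept graph; relation types are ignored for simplicity.
graph = nx.Graph()
graph.add_edges_from((head, tail) for head, _, tail in triples)
nodes = list(graph.nodes())
adjacency = nx.to_numpy_array(graph, nodelist=nodes)

# Low-dimensional node embeddings from graph structure, then k-means clusters.
embedding = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(adjacency)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)

for node, label in sorted(zip(nodes, labels), key=lambda pair: pair[1]):
    print(label, node)
```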
arXiv Detail & Related papers (2020-11-28T08:08:25Z)