Connecting the Dots of Knowledge in Agile Software Development
- URL: http://arxiv.org/abs/2306.05742v1
- Date: Fri, 9 Jun 2023 08:19:07 GMT
- Title: Connecting the Dots of Knowledge in Agile Software Development
- Authors: Raquel Ouriques, Tony Gorschek, Daniel Mendez, Fabian Fagerholm
- Abstract summary: This article discusses the importance of managing knowledge as a resource due to its great potential to create economic value.
We detail the types of knowledge resources, the challenges associated with their management, and potential solutions to maximise their utility.
- Score: 2.233835326994069
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This article discusses the importance of managing knowledge as a resource due to its great potential to create economic value. We detail the types of knowledge resources, the challenges associated with their management, and potential solutions to maximise their utility. Our contribution is based on empirical studies performed in an industry context.
Related papers
- GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning framework that integrates parametric and non-parametric memories.
Our method facilitates a more logical and step-wise reasoning approach akin to experts' problem-solving, rather than gold answer retrieval.
arXiv Detail & Related papers (2024-10-11T03:05:06Z)
- Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
- Towards Knowledge-Grounded Natural Language Understanding and Generation [1.450405446885067]
This thesis investigates how natural language understanding and generation with transformer models can benefit from grounding the models with knowledge representations.
Studies in this thesis find that incorporating relevant and up-to-date knowledge of entities benefits fake news detection.
It is established that other general forms of knowledge, such as parametric and distilled knowledge, enhance multimodal and multilingual knowledge-intensive tasks.
arXiv Detail & Related papers (2024-03-22T17:32:43Z)
- InfuserKI: Enhancing Large Language Models with Knowledge Graphs via Infuser-Guided Knowledge Integration [61.554209059971576]
Large Language Models (LLMs) have shown remarkable open-generation capabilities across diverse domains.
Injecting new knowledge poses the risk of forgetting previously acquired knowledge.
We propose a novel Infuser-Guided Knowledge Integration framework.
arXiv Detail & Related papers (2024-02-18T03:36:26Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Worth of knowledge in deep learning [3.132595571344153]
We present a framework inspired by interpretable machine learning to evaluate the worth of knowledge.
Our findings elucidate the complex relationship between data and knowledge, including dependence, synergy, and substitution effects.
Our model-agnostic framework can be applied to a variety of common network architectures, providing a comprehensive understanding of the role of prior knowledge in deep learning models.
arXiv Detail & Related papers (2023-07-03T02:25:19Z)
- UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, which provides a unified perspective for exploiting both structured and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z)
- Materialized Knowledge Bases from Commonsense Transformers [8.678138390075077]
Although commonsense transformers can generate knowledge assertions on the fly, no materialized resource of commonsense knowledge generated this way is publicly available.
This paper fills this gap, and uses the materialized resources to perform a detailed analysis of the potential of this approach in terms of precision and recall.
We identify common problem cases, and outline use cases enabled by materialized resources.
arXiv Detail & Related papers (2021-12-29T20:22:05Z)
- Dimensions of Commonsense Knowledge [60.49243784752026]
We survey a wide range of popular commonsense sources with a special focus on their relations.
We consolidate these relations into 13 knowledge dimensions, each abstracting over more specific relations found in sources.
arXiv Detail & Related papers (2021-01-12T17:52:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.