A Contextual Approach to Technological Understanding and Its Assessment
- URL: http://arxiv.org/abs/2503.21437v1
- Date: Thu, 27 Mar 2025 12:23:25 GMT
- Title: A Contextual Approach to Technological Understanding and Its Assessment
- Authors: Eline de Jong, Sebastian De Haro
- Abstract summary: We refine De Jong and De Haro's notion of technological understanding as the ability to realise an aim by using a technological artefact. We extend its original specification for a design context by introducing two additional contexts: operation and innovation. To further clarify the nature of technological understanding, we propose an assessment framework based on counterfactual reasoning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Technological understanding is not a singular concept but varies depending on the context. Building on De Jong and De Haro's (2025) notion of technological understanding as the ability to realise an aim by using a technological artefact, this paper further refines the concept as an ability that varies by context and degree. We extend its original specification for a design context by introducing two additional contexts: operation and innovation. Each context represents a distinct way of realising an aim through technology, resulting in three types (specifications) of technological understanding. To further clarify the nature of technological understanding, we propose an assessment framework based on counterfactual reasoning. Each type of understanding is associated with the ability to answer a specific set of what-if questions, addressing changes in an artefact's structure, performance, or appropriateness. Explicitly distinguishing these different types helps to focus efforts to improve technological understanding, clarifies the epistemic requirements for different forms of engagement with technology, and promotes a pluralistic perspective on expertise.
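As an illustrative sketch only (not part of the paper), the framework's structure can be represented as a mapping from each context of technological understanding to the counterfactual questions used to assess it. The specific pairing of contexts with question types below, and all names in the code, are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical encoding of the paper's three contexts of technological
# understanding (operation, design, innovation) and the what-if questions
# used to assess each. The context-to-question pairing is an illustrative
# assumption, not taken verbatim from the paper.

@dataclass
class UnderstandingType:
    context: str                  # "operation", "design", or "innovation"
    aim: str                      # how an aim is realised through the artefact
    what_if_questions: List[str]  # counterfactual questions used for assessment


ASSESSMENT_FRAMEWORK: Dict[str, UnderstandingType] = {
    "operation": UnderstandingType(
        context="operation",
        aim="realise an aim by using an existing artefact",
        what_if_questions=[
            "What if the artefact's performance changed under these conditions?",
        ],
    ),
    "design": UnderstandingType(
        context="design",
        aim="realise an aim by designing an artefact",
        what_if_questions=[
            "What if the artefact's structure were modified?",
        ],
    ),
    "innovation": UnderstandingType(
        context="innovation",
        aim="realise an aim through a new technological solution",
        what_if_questions=[
            "What if a different artefact were more appropriate for this aim?",
        ],
    ),
}


def assessment_questions(context: str) -> List[str]:
    """Return the what-if questions associated with a given context."""
    return ASSESSMENT_FRAMEWORK[context].what_if_questions


if __name__ == "__main__":
    for name, understanding in ASSESSMENT_FRAMEWORK.items():
        print(f"{name}: {understanding.what_if_questions}")
```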
Related papers
- Form-Substance Discrimination: Concept, Cognition, and Pedagogy [55.2480439325792]
This paper examines form-substance discrimination as an essential learning outcome for curriculum development in higher education.
We propose practical strategies for fostering this ability through curriculum design, assessment practices, and explicit instruction.
arXiv Detail & Related papers (2025-04-01T04:15:56Z)
- Explainability in AI Based Applications: A Framework for Comparing Different Techniques [2.5874041837241304]
In business applications, the challenge lies in selecting an appropriate explainability method that balances comprehensibility with accuracy.
This paper proposes a novel method for the assessment of the agreement of different explainability techniques.
By providing a practical framework for understanding the agreement of diverse explainability techniques, our research aims to facilitate the broader integration of interpretable AI systems in business applications.
arXiv Detail & Related papers (2024-10-28T09:45:34Z)
- Diffusion-Based Visual Art Creation: A Survey and New Perspectives [51.522935314070416]
This survey explores the emerging realm of diffusion-based visual art creation, examining its development from both artistic and technical perspectives.
Our findings reveal how artistic requirements are transformed into technical challenges and highlight the design and application of diffusion-based methods within visual art creation.
We aim to shed light on the mechanisms through which AI systems emulate and, possibly, enhance human capacities in artistic perception and creativity.
arXiv Detail & Related papers (2024-08-22T04:49:50Z)
- Mapping acceptance: micro scenarios as a dual-perspective approach for assessing public opinion and individual differences in technology perception [0.0]
This article introduces micro scenarios as an integrative method to evaluate mental models and social acceptance across numerous technologies and concepts.
Each participant's average evaluations can be interpreted as individual differences, providing reflexive measurements across technologies or topics.
This paper aims to bridge the gap between technological advancement and societal perception, offering a tool for more informed decision-making in technology development and policy-making.
arXiv Detail & Related papers (2024-02-02T16:43:32Z)
- Forms of Understanding of XAI-Explanations [2.887772793510463]
This article aims to present a model of forms of understanding in the context of Explainable Artificial Intelligence (XAI).
Two types of understanding are considered as possible outcomes of explanations, namely enabledness and comprehension.
Special challenges of understanding in XAI are discussed.
arXiv Detail & Related papers (2023-11-15T08:06:51Z)
- An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z)
- Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions [68.6358773622615]
This paper provides an overview of the computational and theoretical foundations of multimodal machine learning.
We propose a taxonomy of 6 core technical challenges: representation, alignment, reasoning, generation, transference, and quantification.
Recent technical achievements will be presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches.
arXiv Detail & Related papers (2022-09-07T19:21:19Z)
- Learning to Express in Knowledge-Grounded Conversation [62.338124154016825]
We consider two aspects of knowledge expression, namely the structure of the response and style of the content in each part.
We propose a segmentation-based generation model and optimize the model by a variational approach to discover the underlying pattern of knowledge expression in a response.
arXiv Detail & Related papers (2022-04-12T13:43:47Z)
- Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence [11.472707084860875]
We define explainability as (logical) reasoning applied to transparent insights (into black boxes) interpreted under certain background knowledge.
We revisit the trade-off between transparency and predictive power and its implications for ante-hoc and post-hoc explainers.
We discuss components of the machine learning workflow that may be in need of interpretability, building on a range of ideas from human-centred explainability.
arXiv Detail & Related papers (2021-12-29T09:21:33Z)
- Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.