Model Positionality and Computational Reflexivity: Promoting Reflexivity
in Data Science
- URL: http://arxiv.org/abs/2203.07031v1
- Date: Tue, 8 Mar 2022 16:02:03 GMT
- Title: Model Positionality and Computational Reflexivity: Promoting Reflexivity
in Data Science
- Authors: Scott Allen Cambo, Darren Gergle
- Abstract summary: We describe how the concepts of positionality and reflexivity can be adapted to provide a framework for understanding data science work.
We describe the challenges of adapting these concepts for data science work and offer annotator fingerprinting and position mining as promising solutions.
- Score: 10.794642538442107
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Data science and machine learning provide indispensable techniques for
understanding phenomena at scale, but the discretionary choices made when doing
this work are often not recognized. Drawing from qualitative research
practices, we describe how the concepts of positionality and reflexivity can be
adapted to provide a framework for understanding, discussing, and disclosing
the discretionary choices and subjectivity inherent to data science work. We
first introduce the concepts of model positionality and computational
reflexivity that can help data scientists to reflect on and communicate the
social and cultural context of a model's development and use, the data
annotators and their annotations, and the data scientists themselves. We then
describe the unique challenges of adapting these concepts for data science work
and offer annotator fingerprinting and position mining as promising solutions.
Finally, we demonstrate these techniques in a case study of the development of
classifiers for toxic commenting in online communities.
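The abstract names annotator fingerprinting and position mining without detailing them. As a rough illustration only (not the authors' implementation), one could summarize each annotator by a small feature vector built from their labels and then cluster those vectors into candidate "positions"; the feature choices and the toy toxicity labels below are assumptions for the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def annotator_fingerprints(labels):
    """Summarize each annotator by a small feature vector.

    labels: dict of annotator id -> {item id: binary label}.
    The fingerprint here is (positive-label rate, agreement with the
    per-item majority vote); richer features are possible.
    """
    votes_by_item = {}
    for votes in labels.values():
        for item, y in votes.items():
            votes_by_item.setdefault(item, []).append(y)
    majority = {i: int(2 * sum(v) >= len(v)) for i, v in votes_by_item.items()}

    fingerprints = {}
    for ann, votes in labels.items():
        pos_rate = float(np.mean(list(votes.values())))
        agreement = float(np.mean([y == majority[i] for i, y in votes.items()]))
        fingerprints[ann] = np.array([pos_rate, agreement])
    return fingerprints

# Toy toxicity annotations from three hypothetical annotators
labels = {
    "a1": {"x1": 1, "x2": 0, "x3": 1},
    "a2": {"x1": 1, "x2": 0, "x3": 0},
    "a3": {"x1": 0, "x2": 1, "x3": 1},
}
fp = annotator_fingerprints(labels)
X = np.stack([fp[a] for a in sorted(fp)])
# "Position mining" sketch: cluster fingerprints into candidate positions
positions = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

The clusters group annotators whose labeling behavior looks similar, which is one way to surface shared annotator positions for reflexive discussion.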
Related papers
- User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study [5.775094401949666]
  This study is situated in Human-Centered Artificial Intelligence (HCAI). It focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms.
  (arXiv, 2024-10-21)
- A review on data-driven constitutive laws for solids [0.0]
  This review highlights state-of-the-art data-driven techniques to discover, encode, surrogate, or emulate constitutive laws. Our objective is to provide an organized taxonomy for the large spectrum of methodologies developed in the past decades.
  (arXiv, 2024-05-06)
- Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models [51.43538150982291]
  We study how to learn human-interpretable concepts from data. Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
  (arXiv, 2024-02-14)
- Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective [35.620874971064765]
  This work takes a "data-centric" view, examining how data collection, processing, and analysis contribute to explainable AI (XAI). We categorize existing work into three categories according to their purposes: interpretations of deep models, influences of training data, and insights from domain knowledge. Specifically, we distill XAI methodologies into data mining operations on training and testing data across modalities.
  (arXiv, 2024-01-09)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
  Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world. Models learned to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompting capabilities at test time. The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, holding interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
  (arXiv, 2023-07-25)
- A Vision for Semantically Enriched Data Science [19.604667287258724]
  Key areas such as utilizing domain knowledge and data semantics have seen little automation. We envision how leveraging "semantic" understanding and reasoning on data, in combination with novel tools for data science automation, can help with consistent and explainable data augmentation and transformation.
  (arXiv, 2023-03-02)
- Data Centred Intelligent Geosciences: Research Agenda and Opportunities, Position Paper [1.3632312903156156]
  This knowledge is produced by applying statistical modelling, machine learning, and modern data analytics methods to geodata collections. The problems address open methodological questions in model building, model assessment, prediction, and forecasting.
  (arXiv, 2022-08-20)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
  Lack of interpretability, robustness, and out-of-distribution generalization are becoming challenges for existing visual models. Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms. This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and bring to the forefront the urgency of developing novel causal reasoning methods.
  (arXiv, 2022-04-26)
- Information-Theoretic Odometry Learning [83.36195426897768]
  We propose a unified information-theoretic framework for learning-motivated methods aimed at odometry estimation. The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language.
  (arXiv, 2022-03-11)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
  We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP). By utilizing generative models conditioned on different attributes, counterfactuals with desired labels can be obtained effectively and efficiently. Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of our framework.
  (arXiv, 2021-01-18)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
  We develop a list of diagnostic properties for evaluating existing explainability techniques. We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
  (arXiv, 2020-09-25)
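The diagnostic study listed last compares explainer saliency scores against human-marked salient regions. A minimal sketch of one such agreement measure (a top-k overlap of my own choosing, not necessarily the paper's exact metric):

```python
import numpy as np

def rationale_agreement(saliency, human_mask):
    """Fraction of the top-k model-salient tokens that a human also
    marked as salient, where k is the number of human-marked tokens."""
    saliency = np.asarray(saliency, dtype=float)
    human_mask = np.asarray(human_mask, dtype=int)
    k = int(human_mask.sum())
    top_k = np.argsort(saliency)[::-1][:k]  # indices of the k highest scores
    return float(human_mask[top_k].mean())

# Toy example: explainer scores for five tokens vs. a human rationale mask
scores = [0.9, 0.1, 0.7, 0.2, 0.05]
mask = [1, 0, 1, 0, 0]
agreement = rationale_agreement(scores, mask)
```

Here the explainer's two highest-scoring tokens coincide with the two human-marked tokens, so the agreement is perfect; a mismatched ranking would drive the score toward zero.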
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.