Designing for Critical Algorithmic Literacies
- URL: http://arxiv.org/abs/2008.01719v1
- Date: Tue, 4 Aug 2020 17:51:02 GMT
- Title: Designing for Critical Algorithmic Literacies
- Authors: Sayamindu Dasgupta and Benjamin Mako Hill
- Abstract summary: Children's ability to interrogate computational algorithms has become crucially important.
A growing body of work has attempted to articulate a set of "literacies" to describe the intellectual tools that children can use to understand, interrogate, and critique the algorithmic systems that shape their lives.
We present four design principles that we argue can help children develop literacies that allow them to understand not only how algorithms work, but also to critique and question them.
- Score: 11.6402289139761
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As pervasive data collection and powerful algorithms increasingly shape
children's experience of the world and each other, their ability to interrogate
computational algorithms has become crucially important. A growing body of work
has attempted to articulate a set of "literacies" to describe the intellectual
tools that children can use to understand, interrogate, and critique the
algorithmic systems that shape their lives. Unfortunately, because many
algorithms are invisible, only a small number of children develop the
literacies required to critique these systems. How might designers support the
development of critical algorithmic literacies? Based on our experience
designing two data programming systems, we present four design principles that
we argue can help children develop literacies that allow them to understand not
only how algorithms work, but also to critique and question them.
Related papers
- Learning About Algorithm Auditing in Five Steps: Scaffolding How High School Youth Can Systematically and Critically Evaluate Machine Learning Applications [0.41942958779358674]
Algorithm auditing is a method for understanding algorithmic systems' opaque inner workings and external impacts from the outside in.
This paper proposes five steps that can support young people in auditing algorithms.
arXiv Detail & Related papers (2024-12-09T20:55:54Z)
- Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners? [140.9751389452011]
We study the biases of large language models (LLMs) in relation to those known in children when solving arithmetic word problems.
We generate a novel set of word problems for each of these tests, using a neuro-symbolic approach that enables fine-grained control over the problem features.
arXiv Detail & Related papers (2024-01-31T18:48:20Z)
- Problem-Solving Guide: Predicting the Algorithm Tags and Difficulty for Competitive Programming Problems [7.955313479061445]
Most tech companies, including Google, Meta, and Amazon, require the ability to solve algorithm problems.
Our study addresses the task of predicting algorithm tags, a useful tool for engineers and developers.
We also consider predicting the difficulty levels of algorithm problems, which can serve as guidance for estimating the time required to solve a problem.
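As an illustration of this kind of task (not the paper's actual models or data), a baseline tag predictor can be sketched with off-the-shelf text classification. The problem statements, tags, and library choices below are assumptions made for the example.

```python
# Hypothetical baseline for algorithm-tag prediction; the paper's actual
# models and dataset are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy problem statements and tags standing in for a real competitive-programming corpus.
statements = [
    "Find the shortest path between two nodes in a weighted graph.",
    "Count the number of ways to make change for a given amount.",
    "Sort the intervals and merge the overlapping ones.",
    "Find the longest common subsequence of two strings.",
]
tags = ["graphs", "dp", "greedy", "dp"]

# TF-IDF features with a linear classifier as a simple, interpretable baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(statements, tags)

print(model.predict(["Compute the minimum spanning tree of a weighted graph."]))
```

Difficulty prediction can be framed the same way, with difficulty levels in place of tags.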
arXiv Detail & Related papers (2023-10-09T15:26:07Z)
- When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose the complexity-impacted reasoning score (CIRS) to measure the correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode structural information and calculate logical complexity.
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
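The summary above does not give the CIRS formula, so the snippet below is only a rough sketch of the general idea: parse a piece of reasoning code with Python's ast module and derive a simple structural-complexity proxy from node counts and nesting depth. The specific proxy is an assumption for illustration, not the paper's metric.

```python
# Illustrative structural-complexity proxy (not the paper's CIRS definition).
import ast

def structural_complexity(source: str) -> int:
    """Count control-flow, function, and call nodes, plus the maximum nesting depth."""
    tree = ast.parse(source)
    logic_nodes = (ast.If, ast.For, ast.While, ast.FunctionDef, ast.Call)

    def depth(node, d=0):
        children = list(ast.iter_child_nodes(node))
        return d if not children else max(depth(child, d + 1) for child in children)

    n_logic = sum(isinstance(n, logic_nodes) for n in ast.walk(tree))
    return n_logic + depth(tree)

print(structural_complexity("def f(n):\n    return sum(i * i for i in range(n))"))
```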
arXiv Detail & Related papers (2023-08-29T17:22:39Z)
- Language Model Decoding as Likelihood-Utility Alignment [54.70547032876017]
We introduce a taxonomy that groups decoding strategies based on their implicit assumptions about how well the model's likelihood is aligned with the task-specific notion of utility.
Specifically, by analyzing the correlation between the likelihood and the utility of predictions across a diverse set of tasks, we provide the first empirical evidence supporting the proposed taxonomy.
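In the same spirit, the likelihood-utility relationship can be illustrated with a toy calculation. The numbers and the choice of Spearman rank correlation below are assumptions for the example, not the paper's experimental setup.

```python
# Toy illustration of checking how well model likelihood tracks task utility.
from scipy.stats import spearmanr

# Hypothetical per-prediction scores: sequence log-likelihoods from a model and
# task utilities (e.g., 1.0 for an exact match, 0.0 otherwise).
log_likelihoods = [-2.1, -5.4, -1.3, -7.8, -3.0]
utilities = [1.0, 0.0, 1.0, 0.0, 1.0]

rho, p_value = spearmanr(log_likelihoods, utilities)
print(f"rank correlation = {rho:.2f} (p = {p_value:.2f})")
# A strong positive correlation suggests likelihood-maximizing decoding (e.g.,
# beam search) is reasonable; a weak one suggests sampling or utility-aware
# reranking may be a better fit.
```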
arXiv Detail & Related papers (2022-10-13T17:55:51Z)
- An Approach for Automatic Construction of an Algorithmic Knowledge Graph from Textual Resources [3.723553383515688]
We introduce an approach for automatically developing a knowledge graph for algorithmic problems from unstructured data.
An algorithmic knowledge graph (KG) will give additional context and explainability to algorithm metadata.
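As a minimal, hypothetical sketch of what such a graph might look like (the paper's extraction pipeline and schema are not reproduced here), algorithm metadata can be stored as labeled edges, for example with networkx; the triples below are invented for illustration.

```python
# Tiny, hand-written algorithmic knowledge graph for illustration only.
import networkx as nx

kg = nx.DiGraph()

# (subject, relation, object) triples, e.g. as extracted from textual resources.
triples = [
    ("Dijkstra's algorithm", "solves", "single-source shortest path"),
    ("Dijkstra's algorithm", "has_time_complexity", "O((V + E) log V)"),
    ("single-source shortest path", "is_a", "graph problem"),
]
for subject, relation, obj in triples:
    kg.add_edge(subject, obj, relation=relation)

# Query: what does a given algorithm relate to, and how?
for _, obj, data in kg.out_edges("Dijkstra's algorithm", data=True):
    print(data["relation"], "->", obj)
```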
arXiv Detail & Related papers (2022-05-13T18:59:23Z)
- How to transfer algorithmic reasoning knowledge to learn new algorithms? [23.335939830754747]
We investigate how we can use algorithms for which we have access to the execution trace to learn to solve similar tasks for which we do not.
We create a dataset including 9 algorithms and 3 different graph types.
We validate this empirically and show how multi-task learning can instead be used to achieve the transfer of algorithmic reasoning knowledge.
arXiv Detail & Related papers (2021-10-26T22:14:47Z)
- Beyond Algorithmic Bias: A Socio-Computational Interrogation of the Google Search by Image Algorithm [0.799536002595393]
We audit the algorithm by presenting it with more than 40,000 faces of all ages and more than four races.
We find that the algorithm reproduces white male patriarchal structures, often simplifying, stereotyping, and discriminating against women and non-white individuals.
arXiv Detail & Related papers (2021-05-26T21:40:43Z)
- HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem Solving [104.79156980475686]
Humans learn compositional and causal abstraction, i.e., knowledge, in response to the structure of naturalistic tasks.
We argue that there should be three levels of generalization in how an agent represents its knowledge: perceptual, conceptual, and algorithmic.
This benchmark is centered around a novel task domain, HALMA, for visual concept development and rapid problem-solving.
arXiv Detail & Related papers (2021-02-22T20:37:01Z)
- A Brief Look at Generalization in Visual Meta-Reinforcement Learning [56.50123642237106]
We evaluate the generalization performance of meta-reinforcement learning algorithms.
We find that these algorithms can display strong overfitting when they are evaluated on challenging tasks.
arXiv Detail & Related papers (2020-06-12T15:17:17Z)
- Machine Number Sense: A Dataset of Visual Arithmetic Problems for Abstract and Relational Reasoning [95.18337034090648]
We propose a dataset, Machine Number Sense (MNS), consisting of visual arithmetic problems automatically generated using a grammar model, the And-Or Graph (AOG).
These visual arithmetic problems are in the form of geometric figures.
We benchmark the MNS dataset using four predominant neural network models as baselines in this visual reasoning task.
arXiv Detail & Related papers (2020-04-25T17:14:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.