Information Access of the Oppressed: A Problem-Posing Framework for Envisioning Emancipatory Information Access Platforms
- URL: http://arxiv.org/abs/2601.09600v1
- Date: Wed, 14 Jan 2026 16:15:26 GMT
- Title: Information Access of the Oppressed: A Problem-Posing Framework for Envisioning Emancipatory Information Access Platforms
- Authors: Bhaskar Mitra, Nicola Neophytou, Sireesh Gururaja,
- Abstract summary: Online information access platforms are targets of authoritarian capture. We explore this question through the lens of Paulo Freire's theories of emancipatory pedagogy.
- Score: 5.801539233803859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online information access (IA) platforms are targets of authoritarian capture. These concerns are particularly serious and urgent today in light of the rising levels of democratic erosion worldwide, the emerging capabilities of generative AI technologies such as AI persuasion, and the increasing concentration of economic and political power in the hands of Big Tech. This raises the question of what alternative IA infrastructure we must reimagine and build to mitigate the risks of authoritarian capture of our information ecosystems. We explore this question through the lens of Paulo Freire's theories of emancipatory pedagogy. Freire's theories provide a radically different lens for exploring IA's sociotechnical concerns relative to the currently dominant frames of fairness, accountability, confidentiality, transparency, and safety. We make explicit, with the intention to challenge, the dichotomy of how we relate to technology as either technologists (who envision and build technology) or as its users. We posit that this mirrors the teacher-student relationship in Freire's analysis. By extending Freire's analysis to IA, we challenge the notion that it is the burden of the (altruistic) technologists to come up with interventions to mitigate the risks that emerging technologies pose to marginalized communities. Instead, we advocate that the first task for the technologists is to pose these as problems to the marginalized communities, to encourage them to make and unmake the technology as part of their material struggle against oppression. Their second task is to redesign our online technology stacks to structurally expose spaces for community members to co-opt and co-construct the technology in aid of their emancipatory struggles. We operationalize Freire's theories to develop a problem-posing framework for envisioning emancipatory IA platforms of the future.
Related papers
- Should AI Become an Intergenerational Civil Right? [2.7937298764423573]
We argue that access to AI should not be treated solely as a commercial service, but as a fundamental civil interest requiring explicit protection. We propose recognizing access to AI as an Intergenerational Civil Right, establishing a legal and ethical framework that safeguards present-day inclusion and the rights of future generations.
arXiv Detail & Related papers (2025-12-09T20:22:16Z) - Deploying AI for Signal Processing education: Selected challenges and intriguing opportunities [44.18936398140735]
The article explores the use of AI tools to facilitate and enhance education. Primers are provided on several core technical issues that arise when using AI in educational settings. The article serves as a resource for researchers and educators seeking to advance AI's role in engineering education.
arXiv Detail & Related papers (2025-09-10T19:19:26Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Rethinking Data Protection in the (Generative) Artificial Intelligence Era [138.07763415496288]
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z) - Mapping Technological Futures: Anticipatory Discourse Through Text Mining [1.998140290950519]
This study examines anticipatory discourse surrounding technological futures by analysing 1.5 million posts from 400 key opinion leaders (KOLs) published on the X platform from 2021 to 2023. Using advanced text mining techniques, including BERTopic modelling and sentiment, emotion, and attitude analyses, the research identifies 100 distinct topics reflecting anticipated tech-driven futures.
arXiv Detail & Related papers (2025-03-25T15:20:15Z) - Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI [0.0]
As AI technologies advance, ethical considerations, transparency, data privacy, and their impact on human labor intersect with the drive for innovation and efficiency. Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny. Specifically, we examine Gab AI, a platform that centers around unrestricted communication and privacy, allowing users to interact freely without censorship.
arXiv Detail & Related papers (2024-12-19T17:40:58Z) - Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z) - Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models [7.835719708227145]
Deepfakes and the spread of m/disinformation have emerged as formidable threats to the integrity of information ecosystems worldwide.
We highlight the mechanisms through which generative AI based on large models (LM-based GenAI) craft seemingly convincing yet fabricated contents.
We introduce an integrated framework that combines advanced detection algorithms, cross-platform collaboration, and policy-driven initiatives.
arXiv Detail & Related papers (2023-11-29T06:47:58Z) - Human-AI Coevolution [48.74579595505374]
Coevolution AI is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Interdisciplinary Approaches to Understanding Artificial Intelligence's Impact on Society [7.016365171255391]
AI has come with a storm of unanticipated socio-technical problems.
We need tighter coupling of computer science and those disciplines that study society and societal values.
arXiv Detail & Related papers (2020-12-11T00:43:47Z) - The Short Anthropological Guide to the Study of Ethical AI [91.3755431537592]
This short guide serves as both an introduction to AI ethics and to social science and anthropological perspectives on the development of AI. It aims to provide those unfamiliar with the field with an insight into the societal impact of AI systems and how, in turn, these systems can lead us to rethink how our world operates.
arXiv Detail & Related papers (2020-10-07T12:25:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.