How Technology Impacts and Compares to Humans in Socially Consequential
Arenas
- URL: http://arxiv.org/abs/2211.03554v1
- Date: Wed, 2 Nov 2022 18:01:11 GMT
- Title: How Technology Impacts and Compares to Humans in Socially Consequential
Arenas
- Authors: Samuel Dooley
- Abstract summary: I make comparative analyses between humans and machines in three scenarios.
I seek to understand how sentiment about a technology, performance of that technology, and the impacts of that technology combine to influence how one decides to answer my main research question.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the main promises of technology development is for it to be adopted by
people, organizations, societies, and governments -- incorporated into their
life, work stream, or processes. Often, this is socially beneficial as it
automates mundane tasks, frees up time for more important things, or
otherwise improves the lives of those who use the technology. However, these
beneficial results do not apply in every scenario and may not impact everyone
in a system the same way. Sometimes a technology is developed that both
produces benefits and inflicts some harm. These harms may come at a higher cost to
some people than others, raising the question: how are benefits and harms
weighed when deciding if and how a socially consequential technology gets
developed? The most natural way to answer this question, and in fact how
people first approach it, is to compare the new technology to what used to
exist. As such, in this work, I make comparative analyses between humans and
machines in three scenarios and seek to understand how sentiment about a
technology, performance of that technology, and the impacts of that technology
combine to influence how one decides to answer my main research question.
Related papers
- Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions [67.60397632819202]
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal.
We identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI.
arXiv Detail & Related papers (2024-04-17T02:57:42Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- Responsible and Inclusive Technology Framework: A Formative Framework to Promote Societal Considerations in Information Technology Contexts [1.9991645269305982]
This paper contributes a formative framework -- the Responsible and Inclusive Technology Framework -- that orients critical reflection around the social contexts of technology creation and use.
We expect that the implementation of the Responsible and Inclusive Technology framework, especially in business-to-business industry settings, will serve as a catalyst for more intentional and socially-grounded practices.
arXiv Detail & Related papers (2023-02-22T18:59:04Z) - Redefining Relationships in Music [55.478320310047785]
We argue that AI tools will fundamentally reshape our music culture.
People working in this space can mitigate the possible negative impacts on the practice, consumption, and meaning of music.
arXiv Detail & Related papers (2022-12-13T19:44:32Z)
- Technology and COVID-19: How Reliant is Society on Technology? [0.0]
Social media and messaging platforms have become a support system for those in fear of COVID-19.
This article may be useful for people of all ages and backgrounds who are interested in understanding the impact of technology on society.
arXiv Detail & Related papers (2022-10-21T00:43:59Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- From Learning to Relearning: A Framework for Diminishing Bias in Social Robot Navigation [3.3511723893430476]
We argue that social navigation models can replicate, promote, and amplify societal unfairness such as discrimination and segregation.
Our proposed framework consists of two components: learning, which incorporates social context into the learning process to account for safety and comfort, and relearning, which detects and corrects potentially harmful outcomes before onset.
arXiv Detail & Related papers (2021-01-07T17:42:35Z)
- Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies [13.174512123890015]
Many powerful computing technologies rely on implicit and explicit data contributions from the public.
This dependency suggests a potential source of leverage for the public in its relationship with technology companies.
We present a framework for understanding data leverage that highlights new opportunities to change technology company behavior.
arXiv Detail & Related papers (2020-12-18T00:46:26Z)
- The Short Anthropological Guide to the Study of Ethical AI [91.3755431537592]
This short guide serves as both an introduction to AI ethics and to social science and anthropological perspectives on the development of AI.
It aims to give those unfamiliar with the field insight into the societal impact of AI systems and how, in turn, these systems can lead us to rethink how our world operates.
arXiv Detail & Related papers (2020-10-07T12:25:03Z)
- Towards a framework for understanding societal and ethical implications of Artificial Intelligence [2.28438857884398]
The objective of this paper is to identify the main societal and ethical challenges implied by a massive uptake of AI.
We have surveyed the literature for the most common challenges and classified them in seven groups: 1) Non-desired effects, 2) Liability, 3) Unknown consequences, 4) Relation people-robots, 5) Concentration of power and wealth, 6) Intentional bad uses, and 7) AI for weapons and warfare.
arXiv Detail & Related papers (2020-01-03T17:55:15Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.