Robots in the Danger Zone: Exploring Public Perception through
Engagement
- URL: http://arxiv.org/abs/2004.00689v1
- Date: Wed, 1 Apr 2020 20:10:53 GMT
- Title: Robots in the Danger Zone: Exploring Public Perception through
Engagement
- Authors: David A. Robb, Muneeb I. Ahmad, Carlo Tiseo, Simona Aracri, Alistair
C. McConnell, Vincent Page, Christian Dondrup, Francisco J. Chiyah Garcia,
Hai-Nguyen Nguyen, Èric Pairet, Paola Ardón Ramírez, Tushar Semwal,
Hazel M. Taylor, Lindsay J. Wilson, David Lane, Helen Hastie, Katrin Lohan
- Abstract summary: Public perceptions of Robotics and Artificial Intelligence (RAI) are important in the acceptance, uptake, government regulation and research funding of this technology.
Recent research has shown that the public's understanding of RAI can be negative or inaccurate.
We describe our first iteration of a high throughput in-person public engagement activity.
- Score: 4.051559940977775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Public perceptions of Robotics and Artificial Intelligence (RAI) are
important in the acceptance, uptake, government regulation and research funding
of this technology. Recent research has shown that the public's understanding
of RAI can be negative or inaccurate. We believe effective public engagement
can help ensure that public opinion is better informed. In this paper, we
describe our first iteration of a high throughput in-person public engagement
activity. We describe the use of a light touch quiz-format survey instrument to
integrate in-the-wild research participation into the engagement, allowing us
to probe both the effectiveness of our engagement strategy, and public
perceptions of the future roles of robots and humans working in dangerous
settings, such as in the off-shore energy sector. We critique our methods and
share interesting results on generational differences in the public's view of
the future of Robotics and AI in hazardous environments. These findings include
that older people's views about the future of robots in hazardous environments
were not swayed by exposure to our exhibit, while the views of younger people
were affected by it, leading us to consider carefully in future iterations how
to engage with and inform older people more effectively.
Related papers
- Public sentiments on the fourth industrial revolution: An unsolicited public opinion poll from Twitter [0.0]
This article explores public perceptions on the Fourth Industrial Revolution (4IR) through an analysis of social media discourse across six European countries.
Using sentiment analysis and machine learning techniques, we assess how the public reacts to the integration of technologies such as artificial intelligence, robotics, and blockchain into society.
The results highlight a significant polarization of opinions, with a shift from neutral to more definitive stances either embracing or resisting technological impacts.
arXiv Detail & Related papers (2024-11-21T15:39:53Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Misrepresented Technological Solutions in Imagined Futures: The Origins and Dangers of AI Hype in the Research Community [0.060998359915727114]
We look at the origins and risks of AI hype to the research community and society more broadly.
We propose a set of measures that researchers, regulators, and the public can take to mitigate these risks and reduce the prevalence of unfounded claims about the technology.
arXiv Detail & Related papers (2024-08-08T20:47:17Z)
- Public Perception of AI: Sentiment and Opportunity [0.0]
We present results of public perception of AI from a survey conducted with 10,000 respondents across ten countries in four continents around the world.
Results show that the percentage of respondents who believe AI will change the world as we know it currently equals the percentage who believe AI needs to be heavily regulated.
arXiv Detail & Related papers (2024-07-22T19:11:28Z)
- Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the project is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z)
- Anticipating Impacts: Using Large-Scale Scenario Writing to Explore Diverse Implications of Generative AI in the News Environment [3.660182910533372]
We aim to broaden the perspective and capture the expectations of three stakeholder groups about the potential negative impacts of generative AI.
We apply scenario writing and use participatory foresight to delve into cognitively diverse imaginations of the future.
We conclude by discussing the usefulness of scenario-writing and participatory foresight as a toolbox for generative AI impact assessment.
arXiv Detail & Related papers (2023-10-10T06:59:27Z)
- Human-Centered Responsible Artificial Intelligence: Current & Future Trends [76.94037394832931]
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence.
All of this work is aimed at developing AI that benefits humanity while being grounded in human rights and ethics, and reducing the potential harms of AI.
In this special interest group, we aim to bring together researchers from academia and industry interested in these topics to map current and future research trends.
arXiv Detail & Related papers (2023-02-16T08:59:42Z)
- The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [64.77385130665128]
The aim of this workshop is to foster the exchange of insights on past and ongoing research towards effective and long-lasting collaborations between humans and robots.
We particularly focus on AI techniques required to implement autonomous and proactive interactions.
arXiv Detail & Related papers (2022-06-07T11:12:45Z)
- Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders [89.6319385008397]
We conducted a set of seven design workshops with 35 stakeholders who have been impacted by the child welfare system.
We found that participants worried current PRMs perpetuate or exacerbate existing problems in child welfare.
Participants suggested new ways to use data and data-driven tools to better support impacted communities.
arXiv Detail & Related papers (2022-05-18T13:49:55Z)
- Proposing an Interactive Audit Pipeline for Visual Privacy Research [0.0]
We argue for the use of fairness to discover bias and fairness issues in systems, assert the need for a responsible human-over-the-loop, and reflect on the need to explore research agendas that have harmful societal impacts.
Our goal is to provide a systematic analysis of the machine learning pipeline for visual privacy and bias issues.
arXiv Detail & Related papers (2021-11-07T01:51:43Z)
- Empowering Local Communities Using Artificial Intelligence [70.17085406202368]
It has become an important topic to explore the impact of AI on society from a people-centered perspective.
Previous works in citizen science have identified methods of using AI to engage the public in research.
This article discusses the challenges of applying AI in Community Citizen Science.
arXiv Detail & Related papers (2021-10-05T12:51:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.