How the Future Works at SOUPS: Analyzing Future Work Statements and Their Impact on Usable Security and Privacy Research
- URL: http://arxiv.org/abs/2405.20785v1
- Date: Thu, 30 May 2024 07:07:18 GMT
- Title: How the Future Works at SOUPS: Analyzing Future Work Statements and Their Impact on Usable Security and Privacy Research
- Authors: Jacques Suray, Jan H. Klemmer, Juliane Schmüser, Sascha Fahl
- Abstract summary: We reviewed all 27 papers from the 2019 SOUPS proceedings and analyzed their future work statements.
We find that most papers from the SOUPS 2019 proceedings include future work statements. However, they are often unspecific or ambiguous, and not always easy to find.
We conclude with recommendations for the usable security and privacy community to improve the utility of future work statements.
- Score: 9.307988641609834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extending knowledge by identifying and investigating valuable research questions and problems is a core function of research. Research publications often suggest avenues for future work to extend and build upon their results. Considering these suggestions can contribute to developing research ideas that build upon previous work and produce results that tie into existing knowledge. Usable security and privacy researchers commonly add future work statements to their publications. However, our community lacks an in-depth understanding of their prevalence, quality, and impact on future research. Our work aims to address this gap in the research literature. We reviewed all 27 papers from the 2019 SOUPS proceedings and analyzed their future work statements. Additionally, we analyzed 978 publications that cite any paper from SOUPS 2019 proceedings to assess their future work statements' impact. We find that most papers from the SOUPS 2019 proceedings include future work statements. However, they are often unspecific or ambiguous, and not always easy to find. Therefore, the citing publications often matched the future work statements' content thematically, but rarely explicitly acknowledged them, indicating a limited impact. We conclude with recommendations for the usable security and privacy community to improve the utility of future work statements by making them more tangible and actionable, and avenues for future work.
Related papers
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, there is little attention on privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z)
- Tackling Cyberattacks through AI-based Reactive Systems: A Holistic Review and Future Vision [0.10923877073891446]
This paper presents a comprehensive survey of recent advancements in AI-driven threat response systems.
The most recent survey covering the AI reaction domain was conducted in 2017.
A total of seven research challenges have been identified, pointing out potential gaps and suggesting possible areas of development.
arXiv Detail & Related papers (2023-12-11T09:17:01Z)
- Privacy Issues in Large Language Models: A Survey [2.707979363409351]
This is the first survey of the active area of AI research that focuses on privacy issues in Large Language Models (LLMs).
We focus on work that red-teams models to highlight privacy risks, attempts to build privacy into the training or inference process, and tries to mitigate copyright issues.
arXiv Detail & Related papers (2023-12-11T01:26:53Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning [76.47138162283714]
Forgetting refers to the loss or deterioration of previously acquired information or knowledge.
Forgetting is a prevalent phenomenon observed across many research domains within deep learning.
The survey argues that forgetting is a double-edged sword that can be beneficial and even desirable in certain cases.
arXiv Detail & Related papers (2023-07-16T16:27:58Z)
- "That's important, but...": How Computer Science Researchers Anticipate Unintended Consequences of Their Research Innovations [12.947525301829835]
We show that considering unintended consequences is generally seen as important but rarely practiced.
Principal barriers are a lack of formal process and strategy as well as the academic practice that prioritizes fast progress and publications.
We intend for our work to pave the way for routine explorations of the societal implications of technological innovations before, during, and after the research process.
arXiv Detail & Related papers (2023-03-27T18:21:29Z)
- A Major Obstacle for NLP Research: Let's Talk about Time Allocation! [25.820755718678786]
This paper argues that we have been less successful than we should have been in the field of natural language processing.
We demonstrate that, in recent years, subpar time allocation has been a major obstacle for NLP research.
arXiv Detail & Related papers (2022-11-30T10:00:12Z)
- Fairness in Recommender Systems: Research Landscape and Future Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z)
- Automatic Related Work Generation: A Meta Study [5.025654873456755]
In natural language processing, a literature review is usually conducted under the "Related Work" section.
The task of automatic related work generation aims to automatically generate the "Related Work" section.
We conduct a meta-study to compare the existing literature on related work generation from the perspectives of problem formulation, dataset collection, methodological approach, performance evaluation, and future prospects.
arXiv Detail & Related papers (2021-06-03T03:00:12Z)
- CitationIE: Leveraging the Citation Graph for Scientific Information Extraction [89.33938657493765]
We use the citation graph of referential links between citing and cited papers.
We observe a sizable improvement in end-to-end information extraction over the state-of-the-art.
arXiv Detail & Related papers (2021-06-03T03:00:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.