PreDefense: Defending Underserved AI Students and Researchers from
Predatory Conferences
- URL: http://arxiv.org/abs/2201.13268v1
- Date: Wed, 26 Jan 2022 02:04:19 GMT
- Title: PreDefense: Defending Underserved AI Students and Researchers from
Predatory Conferences
- Authors: Thomas Y. Chen
- Abstract summary: Mentorship in the AI community is crucial to maintaining and increasing diversity.
There is not sufficient emphasis on the submission, presentation, and publication process.
PreDefense is a mentorship program that seeks to guide underrepresented students through the scientific conference and workshop process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mentorship in the AI community is crucial to maintaining and increasing
diversity, especially with respect to fostering the academic growth of
underserved students. While the research process itself is important, there is
not sufficient emphasis on the submission, presentation, and publication
process, which is a cause for concern given the meteoric rise of predatory
scientific conferences, which are based on profit only and have little to no
peer review. These conferences are a direct threat to integrity in science by
promoting work with little to no scientific merit. However, they also threaten
diversity in the AI community by marginalizing underrepresented groups away
from legitimate conferences due to convenience and targeting mechanisms like
e-mail invitations. Due to the importance of conference presentation in AI
research, this very specific problem must be addressed through direct
mentorship. In this work, we propose PreDefense, a mentorship program that
seeks to guide underrepresented students through the scientific conference and
workshop process, with an emphasis on choosing legitimate venues that align
with the specific work that the students are focused on and preparing students
of all backgrounds for future successful AI research careers grounded in integrity.
Related papers
- Position: The Current AI Conference Model is Unsustainable! Diagnosing the Crisis of Centralized AI Conference [40.70597237357474]
This paper offers a data-driven diagnosis of a structural crisis that threatens the foundational goals of scientific dissemination, equity, and community well-being. We identify four key areas of strain: (1) scientifically, with per-author publication rates more than doubling over the past decade to over 4.5 papers annually; (2) environmentally, with the carbon footprint of a single conference exceeding the daily emissions of its host city; and (3) psychologically, with 71% of online community discourse reflecting negative sentiment and 35% referencing mental health concerns. In response, we propose the Community-Federated Conference (CFC) model, which separates peer review, presentation,
arXiv Detail & Related papers (2025-08-06T16:08:27Z) - Report on NSF Workshop on Science of Safe AI [75.96202715567088]
New advances in machine learning are leading to new opportunities to develop technology-based solutions to societal problems. To fulfill the promise of AI, we must address how to develop AI-based systems that are accurate and performant but also safe and trustworthy. This report is the result of the discussions in the working groups that addressed different aspects of safety at the workshop.
arXiv Detail & Related papers (2025-06-24T18:55:29Z) - Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice [57.94036023167952]
We argue that the efforts aiming to study AI's ethical ramifications should be made in tandem with those evaluating its impacts on the environment.
We propose best practices to better integrate AI ethics and sustainability in AI research and practice.
arXiv Detail & Related papers (2025-04-01T13:53:11Z) - Stop treating `AGI' as the north-star goal of AI research [7.292737756666293]
We argue that focusing on the topic of 'artificial general intelligence' ('AGI') undermines our ability to choose effective goals.
We identify six key traps -- obstacles to productive goal setting -- that are aggravated by AGI discourse.
arXiv Detail & Related papers (2025-02-06T00:49:16Z) - From Stem to Stern: Contestability Along AI Value Chains [21.781422547251676]
This workshop will grow and consolidate a community of interdisciplinary CSCW researchers focusing on the topic of contestable AI.
As an outcome of the workshop, we will synthesize the most pressing opportunities and challenges for contestability along AI value chains in the form of a research roadmap.
Considering the length and depth of AI value chains, it will especially spur discussions around the contestability of AI systems at various sites along such chains.
arXiv Detail & Related papers (2024-08-02T06:57:52Z) - A University Framework for the Responsible use of Generative AI in Research [0.0]
Generative Artificial Intelligence (generative AI) poses both opportunities and risks for the integrity of research.
We propose a framework to help institutions promote and facilitate the responsible use of generative AI.
arXiv Detail & Related papers (2024-04-30T04:00:15Z) - Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the project is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z) - An Undergraduate Consortium for Addressing the Leaky Pipeline to Computing Research [1.9336815376402718]
This experience report describes a first-of-its-kind Undergraduate Consortium (UC).
The UC aims to broaden participation in the AI research community by recruiting students, particularly those from historically marginalized groups.
This paper presents our program design, inspired by a rich set of evidence-based practices, and a preliminary evaluation of the first years that points to the UC achieving many of its desired outcomes.
arXiv Detail & Related papers (2024-03-25T21:43:43Z) - Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw)
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
arXiv Detail & Related papers (2023-11-11T04:13:37Z) - Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z) - A Systematic Literature Review of Human-Centered, Ethical, and
Responsible AI [12.456385305888341]
We review and analyze 164 research papers from leading conferences in ethical, social, and human factors of AI.
We find that the current emphasis on governance and fairness in AI research may not adequately address the potential unforeseen and unknown implications of AI.
arXiv Detail & Related papers (2023-02-10T14:47:33Z) - Teaching Computer Science Students to Communicate Scientific Findings
More Effectively [8.832687148248716]
Science communication forms the bridge between computer science researchers and their target audience.
The necessary skills for good science communication must also be taught, and this has so far been neglected in the field of software engineering education.
We designed and implemented a science communication seminar for bachelor students of computer science curricula.
arXiv Detail & Related papers (2023-01-16T11:54:23Z) - Coordinated Science Laboratory 70th Anniversary Symposium: The Future of
Computing [80.72844751804166]
In 2021, the Coordinated Science Laboratory CSL hosted the Future of Computing Symposium to celebrate its 70th anniversary.
We summarize the major technological points, insights, and directions that speakers brought forward during the symposium.
Participants discussed topics related to new computing paradigms, technologies, algorithms, behaviors, and research challenges to be expected in the future.
arXiv Detail & Related papers (2022-10-04T17:32:27Z) - The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [64.77385130665128]
The aim of this workshop is to foster the exchange of insights on past and ongoing research towards effective and long-lasting collaborations between humans and robots.
We particularly focus on AI techniques required to implement autonomous and proactive interactions.
arXiv Detail & Related papers (2022-06-07T11:12:45Z) - Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and
Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.