Controversy and consensus: common ground and best practices for life cycle assessment of emerging technologies
- URL: http://arxiv.org/abs/2501.10382v3
- Date: Wed, 02 Jul 2025 23:48:03 GMT
- Title: Controversy and consensus: common ground and best practices for life cycle assessment of emerging technologies
- Authors: Rachel Woods-Robinson, Mik Carbajales-Dale, Anthony Cheng, Gregory Cooney, Abby Kirchofer, Heather P. H. Liddell, Lisa Peterson, I. Daniel Posen, Sheikh Moni, Sylvia Sleep, Liz Wachs, Shiva Zargar, Joule Bergerson
- Abstract summary: Life cycle assessment (LCA) can guide design choices, steer innovation, and avoid "lock-in" of adverse environmental impacts. This paper examines unresolved challenges around best practices for assessing sustainability at early stages of technology development. Rather than a comprehensive review with definitive conclusions, this paper adopts a Faraday Discussion-style approach to spotlight areas of agreement and disagreement among our network of LCA experts.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The past decade has seen a surge in public and private applications of life cycle assessment (LCA), accelerated by emerging policies and disclosure practices mandating its use for sustainability impact reporting. Simultaneously, the magnitude and diversity of stakeholder groups affected by decisions informed by LCA have expanded rapidly. This has intensified the need for LCA to be conducted more quickly, accurately, and--crucially--earlier in the technology development cycle, when products and materials can still be readily modified, replaced, or optimized. When applied early, LCA has the potential to guide design choices, steer innovation, and avoid "lock-in" of adverse environmental impacts. However, this growing demand has surfaced several unresolved challenges around best practices for assessing sustainability at early stages of technology development. In this paper, we examine six such controversial topics--(1) appropriate use of LCA, (2) uncertainty assessment, (3) comparison with incumbents, (4) methodological standardization, (5) scale-up from laboratory or pilot data, and (6) stakeholder engagement--selected to highlight key debates from a series of workshop-style discussions convened by the LCA of Emerging Technologies Research Network. Rather than a comprehensive review with definitive conclusions, this paper adopts a Faraday Discussion-style approach to spotlight areas of agreement and disagreement among our network of LCA experts. For each issue, we present a declarative resolution, summarize key arguments for and against it, identify points of consensus, and provide recommendations. We aim to raise awareness of shared challenges in emerging technology assessment and foster more transparent, evidence-based, and context-informed approaches within the LCA community.
Related papers
- Point of Interest Recommendation: Pitfalls and Viable Solutions [44.68478552919453]
Point of interest (POI) recommendation can play a pivotal role in enriching tourists' experiences. POI recommendation is inherently high-stakes: users invest significant time, money, and effort to search, choose, and consume these suggested POIs. Despite the numerous research works in the area, several fundamental issues remain unresolved.
arXiv Detail & Related papers (2025-07-18T08:10:09Z) - The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority. We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting ACs in decision-making.
arXiv Detail & Related papers (2025-06-09T18:37:14Z) - OpenReview Should be Protected and Leveraged as a Community Asset for Research in the Era of Large Language Models [55.21589313404023]
OpenReview is a continually evolving repository of research papers, peer reviews, author rebuttals, meta-reviews, and decision outcomes. We highlight three promising areas in which OpenReview can uniquely contribute: enhancing the quality, scalability, and accountability of peer review processes; enabling meaningful, open-ended benchmarks rooted in genuine expert deliberation; and supporting alignment research through real-world interactions reflecting expert assessment, intentions, and scientific values. We suggest the community collaboratively explore standardized benchmarks and usage guidelines around OpenReview, inviting broader dialogue on responsible data use, ethical considerations, and collective stewardship.
arXiv Detail & Related papers (2025-05-24T09:07:13Z) - Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice [57.94036023167952]
We argue that the efforts aiming to study AI's ethical ramifications should be made in tandem with those evaluating its impacts on the environment. We propose best practices to better integrate AI ethics and sustainability in AI research and practice.
arXiv Detail & Related papers (2025-04-01T13:53:11Z) - Evaluating System 1 vs. 2 Reasoning Approaches for Zero-Shot Time Series Forecasting: A Benchmark and Insights [21.663682332422216]
Self-consistency emerges as the most effective test-time reasoning strategy. Group-relative policy optimization emerges as a more suitable approach for incentivizing reasoning ability during post-training.
arXiv Detail & Related papers (2025-02-27T23:27:37Z) - Towards Trustworthy Retrieval Augmented Generation for Large Language Models: A Survey [92.36487127683053]
Retrieval-Augmented Generation (RAG) is an advanced technique designed to address the challenges of Artificial Intelligence-Generated Content (AIGC).
RAG provides reliable and up-to-date external knowledge, reduces hallucinations, and ensures relevant context across a wide range of tasks.
Despite RAG's success and potential, recent studies have shown that the RAG paradigm also introduces new risks, including privacy concerns, adversarial attacks, and accountability issues.
arXiv Detail & Related papers (2025-02-08T06:50:47Z) - SoK: "Interoperability vs Security" Arguments: A Technical Framework [1.4049479722250835]
Concerns about big tech's monopoly power have featured prominently in recent media and policy discourse. Regulators across the EU, the US, and beyond have ramped up efforts to promote healthier market competition. Unsurprisingly, interoperability initiatives have generally been met with resistance by big tech companies.
arXiv Detail & Related papers (2025-02-06T22:21:14Z) - From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate [69.05573887799203]
We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socio-economic analyses. We contend that a narrow focus on direct emissions misrepresents AI's true climate footprint, limiting the scope for meaningful interventions.
arXiv Detail & Related papers (2025-01-27T22:45:06Z) - Revisiting Technical Bias Mitigation Strategies [0.11510009152620666]
Efforts to mitigate bias and enhance fairness in the artificial intelligence (AI) community have predominantly focused on technical solutions.
While numerous reviews have addressed bias in AI, this review uniquely focuses on the practical limitations of technical solutions in healthcare settings.
We illustrate each limitation with empirical studies focusing on healthcare and biomedical applications.
arXiv Detail & Related papers (2024-10-22T21:17:19Z) - A Survey of Ontology Expansion for Conversational Understanding [25.39780882479585]
This survey paper provides a comprehensive review of the state-of-the-art techniques in OnExp for conversational understanding.
It categorizes the existing literature into three main areas: (1) New Discovery, (2) New Slot-Value Discovery, and (3) Joint OnExp.
arXiv Detail & Related papers (2024-10-19T07:27:30Z) - The Imperative of Conversation Analysis in the Era of LLMs: A Survey of Tasks, Techniques, and Trends [64.99423243200296]
Conversation Analysis (CA) strives to uncover and analyze critical information from conversation data.
In this paper, we perform a thorough review and systematize CA task to summarize the existing related work.
We derive four key steps of CA: conversation scene reconstruction, in-depth attribution analysis, targeted training, and finally conversation generation.
arXiv Detail & Related papers (2024-09-21T16:52:43Z) - Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z) - Exploring Links between Conversational Agent Design Challenges and Interdisciplinary Collaboration [0.0]
The paper focuses on the socio-technical challenges of Conversational Agents (CA) creation.
It proposes a taxonomy of CA design challenges using interdisciplinary collaboration (IDC) as a lens.
It proposes practical strategies to overcome these challenges, complementing existing design principles.
arXiv Detail & Related papers (2023-11-15T10:20:49Z) - Regulation and NLP (RegNLP): Taming Large Language Models [51.41095330188972]
We argue how NLP research can benefit from proximity to regulatory studies and adjacent fields.
We advocate for the development of a new multidisciplinary research space on regulation and NLP.
arXiv Detail & Related papers (2023-10-09T09:22:40Z) - Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research [75.84463664853125]
We provide a first attempt to quantify concerns regarding three topics, namely, environmental impact, equity, and impact on peer reviewing.
We capture existing (dis)parities between different and within groups with respect to seniority, academia, and industry.
We devise recommendations to mitigate the disparities found, some of which have already been successfully implemented.
arXiv Detail & Related papers (2023-06-29T12:44:53Z) - Fairness in Recommender Systems: Research Landscape and Future Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z) - A Methodology for Assessing the Environmental Effects Induced by ICT Services. Part I: Single Services [0.0]
Information and communication technologies (ICT) are increasingly seen as key enablers for climate change mitigation measures.
Different initiatives have started to estimate the environmental effects of ICT services.
This article identifies the shortcomings of existing methodologies and proposes solutions.
arXiv Detail & Related papers (2020-06-18T19:55:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.