Related papers
- Securing External Deeper-than-black-box GPAI Evaluations [49.1574468325115]
This paper examines the critical challenges and potential solutions for conducting secure and effective external evaluations of general-purpose AI (GPAI) models.
With the exponential growth in size, capability, reach, and accompanying risk, ensuring accountability, safety, and public trust requires frameworks that go beyond traditional black-box methods.
arXiv Detail & Related papers (2025-03-10T16:13:45Z)
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z)
- Position: A taxonomy for reporting and describing AI security incidents [57.98317583163334]
We argue that a specific taxonomy is required to describe and report security incidents of AI systems.
Existing frameworks for either non-AI security or generic AI safety incident reporting are insufficient to capture the specific properties of AI security.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- Understanding the First Wave of AI Safety Institutes: Characteristics, Functions, and Challenges [0.0]
In November 2023, the UK and US announced the creation of their AI Safety Institutes.
This primer describes one cluster of similar institutes, the "first wave".
First-wave AI Safety Institutes share several fundamental characteristics.
arXiv Detail & Related papers (2024-10-11T19:50:23Z)
- The potential functions of an international institution for AI safety. Insights from adjacent policy areas and recent trends [0.0]
The OECD, the G7, the G20, UNESCO, and the Council of Europe have already started developing frameworks for ethical and responsible AI governance.
This chapter reflects on what functions an international AI safety institute could perform.
arXiv Detail & Related papers (2024-08-31T10:04:53Z)
- Unveiling Legitimacy in the unexpected events context: An Inquiry into Information System Consultancy companies and international organizations through Topic Modeling Analysis [0.0]
This study focuses on the communication of two key stakeholders: IS consultancy companies and international organizations.
To achieve this objective, we examined a diverse array of publications released by both actors.
arXiv Detail & Related papers (2024-07-08T07:44:03Z)
- "Model Cards for Model Reporting" in 2024: Reclassifying Category of Ethical Considerations in Terms of Trustworthiness and Risk Management [0.0]
In 2019, the paper entitled "Model Cards for Model Reporting" introduced a new tool for documenting model performance.
One of the categories detailed in that paper is ethical considerations, which includes the subcategories of data, human life, mitigations, risks and harms, and use cases.
We propose to reclassify this category in the original model card due to the recent maturing of the field known as trustworthy AI.
arXiv Detail & Related papers (2024-02-15T14:56:00Z)
- Service Level Agreements and Security SLA: A Comprehensive Survey [51.000851088730684]
This survey paper identifies the state of the art covering concepts, approaches, and open problems of SLA management.
It contributes by carrying out a comprehensive review and covering the gap between the analyses proposed in existing surveys and the most recent literature on this topic.
It proposes a novel classification criterion to organize the analysis based on SLA life cycle phases.
arXiv Detail & Related papers (2024-01-31T12:33:41Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Complying with the EU AI Act [0.0]
The EU AI Act is proposed EU legislation concerning AI systems.
This paper identifies several categories of the AI Act.
The influence of organization characteristics, such as size and sector, is examined to determine the impact on compliance.
arXiv Detail & Related papers (2023-07-19T21:04:46Z)
- Information Retrieval Meets Large Language Models: A Strategic Report from Chinese IR Community [180.28262433004113]
Large Language Models (LLMs) have demonstrated exceptional capabilities in text understanding, generation, and knowledge inference.
Together, LLMs and humans form a new technical paradigm that is more powerful for information seeking.
To thoroughly discuss the transformative impact of LLMs on IR research, the Chinese IR community conducted a strategic workshop in April 2023.
arXiv Detail & Related papers (2023-07-19T05:23:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.