Artificial Intelligence Security Competition (AISC)
- URL: http://arxiv.org/abs/2212.03412v1
- Date: Wed, 7 Dec 2022 02:45:27 GMT
- Title: Artificial Intelligence Security Competition (AISC)
- Authors: Yinpeng Dong, Peng Chen, Senyou Deng, Lianji Li, Yi Sun, Hanyu Zhao,
Jiaxing Li, Yunteng Tan, Xinyu Liu, Yangyi Dong, Enhui Xu, Jincai Xu, Shu Xu,
Xuelin Fu, Changfeng Sun, Haoliang Han, Xuchong Zhang, Shen Chen, Zhimin Sun,
Junyi Cao, Taiping Yao, Shouhong Ding, Yu Wu, Jian Lin, Tianpeng Wu, Ye Wang,
Yu Fu, Lin Feng, Kangkang Gao, Zeyu Liu, Yuanzhe Pang, Chengqi Duan, Huipeng
Zhou, Yajie Wang, Yuhang Zhao, Shangbo Wu, Haoran Lyu, Zhiyu Lin, Yifei Gao,
Shuang Li, Haonan Wang, Jitao Sang, Chen Ma, Junhao Zheng, Yijia Li, Chao
Shen, Chenhao Lin, Zhichao Cui, Guoshuai Liu, Huafeng Shi, Kun Hu, Mengxin
Zhang
- Abstract summary: The Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI.
The competition consists of three tracks, including Deepfake Security Competition, Autonomous Driving Security Competition, and Face Recognition Security Competition.
This report will introduce the competition rules of these three tracks and the solutions of top-ranking teams in each track.
- Score: 52.20676747225118
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The security of artificial intelligence (AI) is an important research area
towards safe, reliable, and trustworthy AI systems. To accelerate the research
on AI security, the Artificial Intelligence Security Competition (AISC) was
organized by the Zhongguancun Laboratory, China Industrial Control Systems
Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua
University, and RealAI as part of the Zhongguancun International Frontier
Technology Innovation Competition (https://www.zgc-aisc.com/en). The
competition consists of three tracks, including Deepfake Security Competition,
Autonomous Driving Security Competition, and Face Recognition Security
Competition. This report will introduce the competition rules of these three
tracks and the solutions of top-ranking teams in each track.
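All three tracks center on crafting adversarial inputs that evade or mislead deployed AI models. As a rough illustration of the family of white-box attacks such competitions build on, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch; the classifier, the [0, 1] input range, and the budget eps are placeholder assumptions, not the competition's actual target models or scoring protocol.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=8 / 255):
    # One-step FGSM: nudge every pixel in the direction that increases
    # the classification loss, then clamp back to the valid image range.
    # eps (the perturbation budget) is an assumed value; competitions
    # define their own budgets and constraints.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Top-ranking solutions in tracks like these typically go well beyond this one-step baseline (e.g., iterative, momentum-based, or transfer attacks), but the gradient-sign step is the common primitive.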
Related papers
- The Role of AI Safety Institutes in Contributing to International Standards for Frontier AI Safety [0.0]
We argue that the AI Safety Institutes (AISIs) are well-positioned to contribute to the international standard-setting processes for AI safety.
We propose and evaluate three models for involvement: Seoul Declaration Signatories; the US (and other Seoul Declaration Signatories) and China; and Globally Inclusive.
arXiv Detail & Related papers (2024-09-17T16:12:54Z)
- Trustworthy, Responsible, and Safe AI: A Comprehensive Architectural Framework for AI Safety with Challenges and Mitigations [14.150792596344674]
AI Safety is an emerging area of critical importance to the safe adoption and deployment of AI systems.
Our goal is to promote advancement in AI safety research, and ultimately enhance people's trust in digital transformation.
arXiv Detail & Related papers (2024-08-23T09:33:48Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them; a hypothetical sketch of this three-part interface appears after this list.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Quantifying AI Vulnerabilities: A Synthesis of Complexity, Dynamical Systems, and Game Theory [0.0]
We propose a novel approach that introduces three metrics: System Complexity Index (SCI), Lyapunov Exponent for AI Stability (LEAIS), and Nash Equilibrium Robustness (NER).
SCI quantifies the inherent complexity of an AI system, LEAIS captures its stability and sensitivity to perturbations, and NER evaluates its strategic robustness against adversarial manipulation; a toy LEAIS-style estimate is sketched after this list.
arXiv Detail & Related papers (2024-04-07T07:05:59Z)
- Taking control: Policies to address extinction risks from AI [0.0]
We argue that voluntary commitments from AI companies would be an inappropriate and insufficient response.
We describe three policy proposals that would meaningfully address the threats from advanced AI.
arXiv Detail & Related papers (2023-10-31T15:53:14Z)
- The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward [56.16884466478886]
This paper reviews emerging issues with opaque and uncontrollable AI systems.
It proposes an integrative framework called violet teaming to develop reliable and responsible AI.
Violet teaming emerged from AI safety research and manages risks proactively by design.
arXiv Detail & Related papers (2023-08-28T02:10:38Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Adversarial Attacks on ML Defense Models Competition [82.37504118766452]
The TSAIL group at Tsinghua University and the Alibaba Security group organized this competition.
The purpose of this competition is to motivate novel attack algorithms for evaluating adversarial robustness.
arXiv Detail & Related papers (2021-10-15T12:12:41Z)
- TanksWorld: A Multi-Agent Environment for AI Safety Research [5.218815947097599]
The ability to create artificial intelligence capable of performing complex tasks is rapidly outpacing our ability to ensure the safe and assured operation of AI-enabled systems.
Existing simulation environments that illustrate AI safety risks are either relatively simple or narrowly focused on a particular issue.
We introduce the AI safety TanksWorld as an environment for AI safety research with three essential aspects.
arXiv Detail & Related papers (2020-02-25T21:00:52Z)
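For the Towards Guaranteed Safe AI entry above, the GS family of approaches combines three core components: a world model, a safety specification, and a verifier. The sketch below is a hypothetical Python interface for how those pieces might fit together; every name in it (WorldModel, SafetySpecification, Verifier, certify) is an illustrative assumption, not an API from the paper.

```python
from typing import Any, Protocol

class WorldModel(Protocol):
    # Predictive model of how the environment evolves under an action.
    def step(self, state: Any, action: Any) -> Any: ...

class SafetySpecification(Protocol):
    # Encodes which states count as acceptable.
    def satisfied(self, state: Any) -> bool: ...

class Verifier(Protocol):
    # Produces an auditable, quantitative bound (e.g., a probability of
    # violating the specification) for a policy relative to the world
    # model -- the "high-assurance quantitative safety guarantee".
    def certify(self, policy: Any, world: WorldModel,
                spec: SafetySpecification) -> float: ...
```

The design point is the separation of concerns: any certificate the verifier produces is only as trustworthy as the world model and the specification, which is why the paper treats all three as first-class components.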
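For the Quantifying AI Vulnerabilities entry, the summary does not state how LEAIS is computed. The sketch below is a toy finite-difference estimate of Lyapunov-style sensitivity, measuring how strongly a model amplifies small random input perturbations; the function name, probe count, and perturbation scale are assumptions made for illustration, not the paper's definition.

```python
import torch

@torch.no_grad()
def estimate_sensitivity(model, x, delta=1e-3, n_probes=16):
    # Toy LEAIS-style estimate: average log ratio of output divergence
    # to input perturbation size over random probe directions. Positive
    # values suggest small input changes are amplified (instability);
    # negative values suggest they are damped.
    y = model(x)
    rates = []
    for _ in range(n_probes):
        d = torch.randn_like(x)
        d = delta * d / d.norm()          # random direction of norm delta
        y_pert = model(x + d)
        rates.append(torch.log((y_pert - y).norm() / delta + 1e-12))
    return torch.stack(rates).mean()
```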
This list is automatically generated from the titles and abstracts of the papers on this site.