ExTRUST: Reducing Exploit Stockpiles with a Privacy-Preserving Depletion
System for Inter-State Relationships
- URL: http://arxiv.org/abs/2306.00589v1
- Date: Thu, 1 Jun 2023 12:02:17 GMT
- Title: ExTRUST: Reducing Exploit Stockpiles with a Privacy-Preserving Depletion
System for Inter-State Relationships
- Authors: Thomas Reinhold, Philipp Kuehn, Daniel Günther, Thomas Schneider,
Christian Reuter
- Abstract summary: This paper proposes a privacy-preserving approach that allows multiple state parties to privately compare their stock of vulnerabilities and exploits.
We call our system ExTRUST and show that it is scalable and can withstand several attack scenarios.
- Score: 4.349142920611964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cyberspace is a fragile construct threatened by malicious cyber operations of
different actors, with vulnerabilities in IT hardware and software forming the
basis for such activities, thus also posing a threat to global IT security.
Advancements in the field of artificial intelligence accelerate this
development, whether through artificial intelligence-enabled cyber weapons,
automated cyber defense measures, or artificial intelligence-based threat and
vulnerability detection. Especially state actors, with their long-term
strategic security interests, often stockpile such knowledge of vulnerabilities
and exploits to enable their military or intelligence service cyberspace
operations. While treaties and regulations to limit these developments and to
enhance global IT security by disclosing vulnerabilities are currently being
discussed on the international level, these efforts are hindered by state
concerns about the disclosure of unique knowledge and about giving up tactical
advantages. As a result, multiple states are likely to stockpile at least some
identical exploits, yet no technical measures exist that would enable a
depletion process for these stockpiles while preserving state secrecy interests
and accounting for the special constraints and requirements of interacting
states. This paper proposes
such a privacy-preserving approach that allows multiple state parties to
privately compare their stock of vulnerabilities and exploits to check for
items that occur in multiple stockpiles without revealing them so that their
disclosure can be considered. We call our system ExTRUST and show that it is
scalable and can withstand several attack scenarios. Beyond the
intergovernmental setting, ExTRUST can also be used for other zero-trust use
cases, such as bug-bounty programs.
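The core idea of the abstract, checking which items occur in multiple stockpiles without revealing the stockpiles themselves, is essentially a multi-party private set intersection. The following is a minimal illustrative sketch using salted hashing, not the paper's actual protocol (ExTRUST relies on cryptographic techniques with stronger guarantees; naive hashing of guessable identifiers remains vulnerable to dictionary attacks). All names and CVE identifiers below are hypothetical.

```python
import hashlib

def digest(item: str, salt: bytes) -> str:
    # Hash each stockpile entry so parties exchange digests, not raw exploit data.
    return hashlib.sha256(salt + item.encode()).hexdigest()

def common_items(stockpiles: list[set[str]], salt: bytes) -> set[str]:
    # Build one digest -> original-item table per party, then intersect the
    # digest sets; only items present in every stockpile are recovered.
    tables = [{digest(item, salt): item for item in s} for s in stockpiles]
    shared = set(tables[0])
    for table in tables[1:]:
        shared &= set(table)
    return {tables[0][d] for d in shared}

# Three parties compare their stockpiles; only the common entry surfaces.
a = {"CVE-2021-0001", "CVE-2022-1234", "CVE-2023-9999"}
b = {"CVE-2022-1234", "CVE-2020-5555"}
c = {"CVE-2022-1234", "CVE-2023-9999"}
print(common_items([a, b, c], b"shared-salt"))  # prints {'CVE-2022-1234'}
```

In a real deployment each party would compute digests locally and only the digests would cross trust boundaries; the sketch centralizes this for brevity.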
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models (FMs) present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z)
- Security in IS and social engineering -- an overview and state of the art [0.6345523830122166]
The digitization of all processes and the opening to IoT devices has fostered the emergence of a new form of crime, i.e. cybercrime.
The maliciousness of such attacks lies in the fact that they turn users into facilitators of cyber-attacks, to the point of being perceived as the "weak link" of cybersecurity.
Anticipating threats, identifying weak signals and outliers, detecting computer crime early, and reacting quickly are therefore priority issues requiring a prevention and cooperation approach.
arXiv Detail & Related papers (2024-06-17T13:25:27Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Generative AI in Cybersecurity [0.0]
Generative Artificial Intelligence (GAI) has been pivotal in reshaping the field of data analysis, pattern recognition, and decision-making processes.
As GAI rapidly progresses, it outstrips the current pace of cybersecurity protocols and regulatory frameworks.
The study highlights the critical need for organizations to proactively identify and develop more complex defensive strategies to counter the sophisticated employment of GAI in malware creation.
arXiv Detail & Related papers (2024-05-02T19:03:11Z)
- Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications [0.4665186371356556]
In July 2022, the Center for Security and Emerging Technology at Georgetown University and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center convened a workshop of experts to examine the relationship between vulnerabilities in artificial intelligence systems and more traditional types of software vulnerabilities.
Topics discussed included the extent to which AI vulnerabilities can be handled under standard cybersecurity processes, the barriers currently preventing the accurate sharing of information about AI vulnerabilities, legal issues associated with adversarial attacks on AI systems, and potential areas where government support could improve AI vulnerability management and mitigation.
arXiv Detail & Related papers (2023-05-23T22:27:53Z)
- Exploring the Limits of Transfer Learning with Unified Model in the Cybersecurity Domain [17.225973170682604]
We introduce a generative multi-task model, Unified Text-to-Text Cybersecurity (UTS).
UTS is trained on malware reports, phishing site URLs, programming code constructs, social media data, blogs, news articles, and public forum posts.
We show UTS improves the performance of some cybersecurity datasets.
arXiv Detail & Related papers (2023-02-20T22:21:26Z)
- ThreatKG: An AI-Powered System for Automated Open-Source Cyber Threat Intelligence Gathering and Management [65.0114141380651]
ThreatKG is an automated system for OSCTI gathering and management.
It efficiently collects a large number of OSCTI reports from multiple sources.
It uses specialized AI-based techniques to extract high-quality knowledge about various threat entities.
arXiv Detail & Related papers (2022-12-20T16:13:59Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- A System for Automated Open-Source Threat Intelligence Gathering and Management [53.65687495231605]
SecurityKG is a system for automated OSCTI gathering and management.
It uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors.
arXiv Detail & Related papers (2021-01-19T18:31:35Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.