SoK: "Interoperability vs Security" Arguments: A Technical Framework
- URL: http://arxiv.org/abs/2502.04538v2
- Date: Tue, 24 Jun 2025 22:44:35 GMT
- Title: SoK: "Interoperability vs Security" Arguments: A Technical Framework
- Authors: Daji Landis, Elettra Bietti, Sunoo Park
- Abstract summary: Concerns about big tech's monopoly power have featured prominently in recent media and policy discourse. Regulators across the EU, the US, and beyond have ramped up efforts to promote healthier market competition. Unsurprisingly, interoperability initiatives have generally been met with resistance by big tech companies.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Concerns about big tech's monopoly power have featured prominently in recent media and policy discourse, as regulators across the EU, the US, and beyond have ramped up efforts to promote healthier market competition. One favored approach is to require certain kinds of interoperation between platforms, to mitigate the current concentration of power in the biggest companies. Unsurprisingly, interoperability initiatives have generally been met with resistance by big tech companies. Perhaps more surprisingly, a significant part of that pushback has been in the name of security -- that is, arguing against interoperation on the basis that it will undermine security. We conduct a systematic examination of "security vs. interoperability" (SvI) discourse in the context of EU antitrust proceedings. Our resulting contributions are threefold. First, we propose a taxonomy of SvI concerns in three categories: engineering, vetting, and hybrid. Second, we present an analytical framework for assessing real-world SvI concerns, and illustrate its utility by analyzing several case studies spanning our three taxonomy categories. Third, we undertake a comparative analysis that highlights key considerations around the interplay of economic incentives, market power, and security across our diverse case study contexts, identifying common patterns in each taxonomy category. Our contributions provide valuable analytical tools for experts and non-experts alike to critically assess SvI discourse in today's fast-paced regulatory landscape.
Related papers
- Understanding and Mitigating Risks of Generative AI in Financial Services [22.673239064487667]
We aim to highlight AI content safety considerations specific to the financial services domain and outline an associated AI content risk taxonomy. We evaluate how existing open-source technical guardrail solutions cover this taxonomy by assessing them on data collected via red-teaming activities.
arXiv Detail & Related papers (2025-04-25T16:55:51Z) - The BIG Argument for AI Safety Cases [4.0675753909100445]
The BIG argument adopts a whole-system approach to constructing a safety case for AI systems of varying capability, autonomy and criticality.
It is balanced by addressing safety alongside other critical ethical issues such as privacy and equity.
It is integrated by bringing together the social, ethical and technical aspects of safety assurance in a way that is traceable and accountable.
arXiv Detail & Related papers (2025-03-12T11:33:28Z) - MITRE ATT&CK Applications in Cybersecurity and The Way Forward [18.339713576170396]
The MITRE ATT&CK framework is a widely adopted tool for enhancing cybersecurity, supporting threat intelligence, incident response, attack modeling, and vulnerability prioritization.
This paper synthesizes research on its application across these domains by analyzing 417 peer-reviewed publications.
We identify commonly used adversarial tactics, techniques, and procedures (TTPs) and examine the integration of natural language processing (NLP) and machine learning (ML) with ATT&CK to improve threat detection and response.
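As a rough illustration of how ATT&CK technique IDs get attached to detections in the NLP-assisted pipelines the survey covers, the sketch below tags alert text with technique identifiers via keyword matching. The technique IDs (T1059, T1566, T1110) are real ATT&CK identifiers; the keyword mapping itself is an illustrative assumption, not taken from any surveyed system.

```python
# Minimal sketch: tagging log-derived alerts with ATT&CK technique IDs.
# The keyword-to-technique mapping is illustrative, not from the paper.

ATTACK_KEYWORDS = {
    "T1059": ["powershell", "cmd.exe", "bash -c"],  # Command and Scripting Interpreter
    "T1566": ["phishing", "malicious attachment"],  # Phishing
    "T1110": ["brute force", "password spray"],     # Brute Force
}

def tag_alert(alert_text: str) -> list[str]:
    """Return the ATT&CK technique IDs whose keywords appear in the alert."""
    text = alert_text.lower()
    return sorted(
        tid for tid, words in ATTACK_KEYWORDS.items()
        if any(w in text for w in words)
    )

print(tag_alert("Suspicious PowerShell spawned after phishing email"))
# → ['T1059', 'T1566']
```

Real integrations replace the keyword lists with NLP or ML classifiers, but the output contract is the same: free-text evidence in, technique IDs out.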
arXiv Detail & Related papers (2025-02-15T15:01:04Z) - A Critical Analysis of Foundations, Challenges and Directions for Zero Trust Security in Cloud Environments [0.0]
This review discusses the theoretical frameworks and application prospects of Zero Trust Security (ZTS) in the cloud computing context.
This paper analyzes the core principles of ZTS, including micro-segmentation, least-privilege access, and continuous monitoring.
The main barriers to implementing Zero Trust security are outlined, including performance degradation in large-scale production environments.
arXiv Detail & Related papers (2024-11-09T10:26:02Z) - Enhancing Enterprise Security with Zero Trust Architecture [0.0]
Zero Trust Architecture (ZTA) represents a transformative approach to modern cybersecurity.
ZTA shifts the security paradigm by assuming that no user, device, or system can be trusted by default.
This paper explores the key components of ZTA, such as identity and access management (IAM), micro-segmentation, continuous monitoring, and behavioral analytics.
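The components listed above combine into a single deny-by-default authorization decision, which can be sketched as follows. The field names and the risk threshold are illustrative assumptions, not from the paper.

```python
# Minimal sketch of a Zero Trust policy decision point: every request is
# evaluated against identity, device posture, segmentation, and behavioral
# signals -- nothing is trusted by default.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool  # IAM: verified identity (e.g. MFA passed)
    device_compliant: bool    # device posture check
    segment_allowed: bool     # micro-segmentation: target reachable from here?
    risk_score: float         # behavioral analytics, 0.0 (benign) .. 1.0 (anomalous)

def authorize(req: AccessRequest, risk_threshold: float = 0.7) -> bool:
    """Grant access only if every independent signal passes (deny by default)."""
    return (req.user_authenticated
            and req.device_compliant
            and req.segment_allowed
            and req.risk_score < risk_threshold)

print(authorize(AccessRequest(True, True, True, 0.1)))  # → True
print(authorize(AccessRequest(True, True, True, 0.9)))  # → False
```

The key design point is that the signals are conjunctive: a valid credential alone never suffices, which is what distinguishes ZTA from perimeter-based models.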
arXiv Detail & Related papers (2024-10-23T21:53:16Z) - Jailbreaking and Mitigation of Vulnerabilities in Large Language Models [4.564507064383306]
Large Language Models (LLMs) have transformed artificial intelligence by advancing natural language understanding and generation.
Despite these advancements, LLMs have shown considerable vulnerabilities, particularly to prompt injection and jailbreaking attacks.
This review analyzes the state of research on these vulnerabilities and presents available defense strategies.
arXiv Detail & Related papers (2024-10-20T00:00:56Z) - PanoSent: A Panoptic Sextuple Extraction Benchmark for Multimodal Conversational Aspect-based Sentiment Analysis [74.41260927676747]
This paper bridges the gaps by introducing multimodal conversational Aspect-Based Sentiment Analysis (ABSA).
To benchmark the tasks, we construct PanoSent, a dataset annotated both manually and automatically, featuring high quality, large scale, multimodality, multilingualism, multi-scenarios, and covering both implicit and explicit sentiment elements.
To effectively address the tasks, we devise a novel Chain-of-Sentiment reasoning framework, together with a novel multimodal large language model (namely Sentica) and a paraphrase-based verification mechanism.
arXiv Detail & Related papers (2024-08-18T13:51:01Z) - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context. We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z) - Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z) - Iterative Utility Judgment Framework via LLMs Inspired by Relevance in Philosophy [66.95501113584541]
Utility and topical relevance are critical measures in information retrieval.
We propose an Iterative utiliTy judgmEnt fraMework to promote each step of the cycle of Retrieval-Augmented Generation.
arXiv Detail & Related papers (2024-06-17T07:52:42Z) - Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z) - AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z) - A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z) - Unsealing the secrets of blockchain consensus: A systematic comparison of the formal security of proof-of-work and proof-of-stake [0.40964539027092917]
The choice of consensus mechanism is at the core of many controversial discussions.
PoW-based consensus with the longest chain rule provides the strongest formal security guarantees.
PoS can achieve similar guarantees when addressing its more pronounced tradeoff between safety and liveness.
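The longest chain rule referenced above can be sketched in a few lines: among competing chain tips, a node adopts the one with the most accumulated blocks. Real clients compare accumulated proof-of-work (difficulty), not raw block counts; the count is a simplification for illustration.

```python
# Minimal sketch of the longest-chain fork-choice rule from PoW consensus.
# tips maps a chain-tip id to its chain length; production clients use
# accumulated work rather than block count.

def best_tip(tips: dict[str, int]) -> str:
    """Return the tip id with the greatest chain length."""
    return max(tips, key=tips.get)

# Two forks observed by a node: 'a' has 100 blocks, 'b' has 102.
print(best_tip({"a": 100, "b": 102}))  # → b
```

The formal security guarantees the paper discusses hinge on this rule: as long as honest parties control the majority of work, the chain they extend eventually outpaces any adversarial fork.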
arXiv Detail & Related papers (2024-01-25T21:40:22Z) - A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus has no built-in security, leaving in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
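A pattern common to many MAC-based CAN authentication protocols can be sketched as follows: sender and receiver share a key and a message counter (for freshness), and the MAC is truncated to fit CAN's small payload. The key, counter width, and 4-byte truncation below are illustrative assumptions, not the parameters of any specific surveyed protocol.

```python
# Minimal sketch of truncated-HMAC CAN frame authentication with a
# freshness counter. Parameters are illustrative assumptions.

import hmac, hashlib, struct

KEY = b"shared-16B-key!!"  # pre-shared key (illustrative)

def sign_frame(can_id: int, payload: bytes, counter: int) -> bytes:
    """Truncated HMAC over id || payload || counter (counter gives freshness)."""
    msg = struct.pack(">I", can_id) + payload + struct.pack(">Q", counter)
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:4]  # 4-byte tag

def verify_frame(can_id: int, payload: bytes, counter: int, tag: bytes) -> bool:
    return hmac.compare_digest(sign_frame(can_id, payload, counter), tag)

tag = sign_frame(0x123, b"\x01\x02", counter=7)
print(verify_frame(0x123, b"\x01\x02", 7, tag))  # → True
print(verify_frame(0x123, b"\x01\x02", 8, tag))  # stale counter → False
```

The ease-of-implementation criteria the paper evaluates largely turn on details this sketch glosses over: counter synchronization after frame loss, key distribution, and fitting the tag into the 8-byte classic CAN payload.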
arXiv Detail & Related papers (2024-01-19T14:52:04Z) - APPRAISE: a governance framework for innovation with AI systems [0.0]
The EU Artificial Intelligence Act (AIA) is the first serious legislative attempt to contain the harmful effects of AI systems.
This paper proposes a governance framework for AI innovation.
The framework bridges the gap between strategic variables and responsible value creation.
arXiv Detail & Related papers (2023-09-26T12:20:07Z) - Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Fairness in Recommender Systems: Research Landscape and Future Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z) - Towards Identifying Social Bias in Dialog Systems: Frame, Datasets, and Benchmarks [95.29345070102045]
In this paper, we focus our investigation on social bias detection of dialog safety problems.
We first propose a novel Dial-Bias Frame for analyzing the social bias in conversations pragmatically.
We introduce CDail-Bias dataset that is the first well-annotated Chinese social bias dialog dataset.
arXiv Detail & Related papers (2022-02-16T11:59:29Z) - Big data and big values: When companies need to rethink themselves [0.0]
We propose a new methodological approach that combines text mining, social network and big data analytics.
We collected more than 94,000 tweets related to the core values of the firms listed in Fortune's ranking of the World's Most Admired Companies.
For the Italian scenario, we found three predominant core values orientations (Customers, Employees and Excellence) and three latent ones (Economic-Financial Growth, Citizenship and Social Responsibility).
Our contribution is mostly methodological and extends the research on text mining and on online big data analytics applied in complex business contexts.
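The kind of keyword-based text mining this methodology combines with network and big data analytics can be sketched as follows. The three category names mirror the predominant orientations reported for the Italian scenario; the keyword lists are illustrative assumptions, not the paper's actual dictionaries.

```python
# Minimal sketch: mapping tweets to core-value orientations via keyword
# matching. Keyword lists are illustrative assumptions.

from collections import Counter

VALUE_KEYWORDS = {
    "Customers": ["customer", "client", "service"],
    "Employees": ["employee", "staff", "team"],
    "Excellence": ["quality", "excellence", "innovation"],
}

def value_orientation(tweets: list[str]) -> Counter:
    """Count how many tweets mention each core-value orientation."""
    counts = Counter()
    for t in tweets:
        low = t.lower()
        for value, words in VALUE_KEYWORDS.items():
            if any(w in low for w in words):
                counts[value] += 1
    return counts

tweets = ["Our customers come first", "Proud of our team", "Quality above all"]
print(value_orientation(tweets))
# → Counter({'Customers': 1, 'Employees': 1, 'Excellence': 1})
```

In the full methodology, counts like these would feed the social network and big data analytics layers rather than stand alone.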
arXiv Detail & Related papers (2021-05-25T16:26:38Z) - Discovering Airline-Specific Business Intelligence from Online Passenger Reviews: An Unsupervised Text Analytics Approach [3.2872586139884623]
Airlines can capitalize on abundantly available online customer reviews (OCR).
This paper aims to discover company- and competitor-specific intelligence from OCR using an unsupervised text analytics approach.
A case study involving 99,147 airline reviews of a US-based target carrier and four of its competitors is used to validate the proposed approach.
arXiv Detail & Related papers (2020-12-14T23:09:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.