Security Patchworking in Lebanon: Infrastructuring Across Failing
Infrastructures
- URL: http://arxiv.org/abs/2310.16969v1
- Date: Wed, 25 Oct 2023 20:12:20 GMT
- Title: Security Patchworking in Lebanon: Infrastructuring Across Failing
Infrastructures
- Authors: Jessica McClearn, Rikke Bjerg Jensen, Reem Talhouk
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we bring to light the infrastructuring work carried out by
people in Lebanon to establish and maintain everyday security in response to
multiple simultaneously failing infrastructures. We do so through interviews
with 13 participants from 12 digital and human rights organisations and two
weeks of ethnographically informed fieldwork in Beirut, Lebanon, in July 2022.
Through our analysis we develop the notion of security patchworking that makes
visible the infrastructuring work necessitated to secure basic needs such as
electricity provision, identity authentication and financial resources. Such
practices are rooted in differing mechanisms of protection that often result in
new forms of insecurity. We discuss the implications for CSCW and HCI
researchers and point to security patchworking as a lens to be used when
designing technologies to support infrastructuring, while advocating for
collaborative work across CSCW and security research.
Related papers
- S3C2 SICP Summit 2025-06: Vulnerability Response Summit [51.90004414779634]
Researchers from the NSF-supported Secure Software Supply Chain Center (S3C2) and the Software Innovation Campus Paderborn (SICP) conducted a Vulnerability Response Summit. The goal of the Summit is to enable sharing among industry practitioners with practical experience of, and challenges in, software supply chain security.
arXiv Detail & Related papers (2025-12-02T10:05:41Z) - Large AI Model-Enabled Secure Communications in Low-Altitude Wireless Networks: Concepts, Perspectives and Case Study [92.15255222408636]
Low-altitude wireless networks (LAWNs) have the potential to revolutionize communications by supporting a range of applications. We investigate large artificial intelligence model (LAM)-enabled solutions for secure communications in LAWNs. To demonstrate the practical benefits of LAMs for secure communications in LAWNs, we propose a novel LAM-based optimization framework.
arXiv Detail & Related papers (2025-08-01T01:53:58Z) - A Different Approach to AI Safety: Proceedings from the Columbia Convening on Openness in Artificial Intelligence and AI Safety [12.885990679810831]
Open-weight and open-source foundation models are intensifying the obligation to make AI systems safe. This paper reports outcomes from the Columbia Convening on AI Openness and Safety.
arXiv Detail & Related papers (2025-06-27T12:45:44Z) - Report on NSF Workshop on Science of Safe AI [75.96202715567088]
New advances in machine learning are leading to new opportunities to develop technology-based solutions to societal problems. To fulfill the promise of AI, we must address how to develop AI-based systems that are accurate and performant but also safe and trustworthy. This report is the result of the discussions in the working groups that addressed different aspects of safety at the workshop.
arXiv Detail & Related papers (2025-06-24T18:55:29Z) - Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report [50.268821168513654]
We present Foundation-Sec-8B, a cybersecurity-focused large language model (LLM) built on the Llama 3.1 architecture.
We evaluate it across both established and new cybersecurity benchmarks, showing that it matches Llama 3.1-70B and GPT-4o-mini in certain cybersecurity-specific tasks.
By releasing our model to the public, we aim to accelerate progress and adoption of AI-driven tools in both public and private cybersecurity contexts.
arXiv Detail & Related papers (2025-04-28T08:41:12Z) - AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement [73.0700818105842]
We introduce AISafetyLab, a unified framework and toolkit that integrates representative attack, defense, and evaluation methodologies for AI safety.
AISafetyLab features an intuitive interface that enables developers to seamlessly apply various techniques.
We conduct empirical studies on Vicuna, analyzing different attack and defense strategies to provide valuable insights into their comparative effectiveness.
arXiv Detail & Related papers (2025-02-24T02:11:52Z) - Global Challenge for Safe and Secure LLMs Track 1 [57.08717321907755]
This paper introduces the Global Challenge for Safe and Secure Large Language Models (LLMs), a pioneering initiative organized by AI Singapore (AISG) and the CyberSG R&D Programme Office (CRPO) to foster the development of advanced defense mechanisms against automated jailbreaking attacks.
arXiv Detail & Related papers (2024-11-21T08:20:31Z) - SoK: The Security-Safety Continuum of Multimodal Foundation Models through Information Flow and Game-Theoretic Defenses [58.93030774141753]
Multimodal foundation models (MFMs) integrate diverse data modalities to support complex and wide-ranging tasks. In this paper, we unify the concepts of safety and security in the context of MFMs by identifying critical threats that arise from both model behavior and system-level interactions.
arXiv Detail & Related papers (2024-11-17T23:06:20Z) - Critical Infrastructure Protection: Generative AI, Challenges, and Opportunities [3.447031974719732]
Critical National Infrastructure (CNI) encompasses a nation's essential assets that are fundamental to the operation of society and the economy.
Growing cybersecurity threats targeting these infrastructures can potentially interfere with operations and seriously risk national security and public safety.
We examine the intricate issues raised by cybersecurity risks to vital infrastructure, highlighting these systems' vulnerability to different types of cyberattacks.
arXiv Detail & Related papers (2024-05-08T08:08:50Z) - Securing the Open RAN Infrastructure: Exploring Vulnerabilities in Kubernetes Deployments [60.51751612363882]
We investigate the security implications of software-based Open Radio Access Network (RAN) systems.
We highlight the presence of potential vulnerabilities and misconfigurations in the infrastructure supporting the Near Real-Time RAN Intelligent Controller (RIC) cluster.
arXiv Detail & Related papers (2024-05-03T07:18:45Z) - A Review of Cybersecurity Incidents in the Food and Agriculture Sector [2.0358239640633737]
This manuscript reviews disclosed and documented cybersecurity incidents in the Food & Agriculture (FA) sector.
Thirty cybersecurity incidents were identified, which took place between July 2011 and April 2023.
The need for AI assurance in the FA sector is explained, and the Farmer-Centered AI (FCAI) framework is proposed.
arXiv Detail & Related papers (2024-03-12T19:15:20Z) - Othered, Silenced and Scapegoated: Understanding the Situated Security
of Marginalised Populations in Lebanon [17.10104036777213]
We situate our work in the post-conflict Lebanese context, shaped by sectarian divides, failing governance and economic collapse.
Our research highlights how LGBTQI+ identifying people and refugees are scapegoated for the failings of the Lebanese government.
We show how government-supported incitements of violence aimed at transferring blame from the political leadership to these groups lead to amplified digital security risks.
arXiv Detail & Related papers (2023-06-16T19:36:39Z) - Emerging Technology and Policy Co-Design Considerations for the Safe and
Transparent Use of Small Unmanned Aerial Systems [55.60330679737718]
The rapid technological growth observed in the sUAS sector has left gaps in policies and regulations to provide for a safe and trusted environment in which to operate these devices.
From human factors to autonomy, we recommend a series of steps that can be taken by partners in the academic, commercial, and government sectors to reduce policy gaps introduced in the wake of the growth of the sUAS industry.
arXiv Detail & Related papers (2022-12-06T07:17:46Z) - Security and Safety Aspects of AI in Industry Applications [0.0]
We summarise issues in the domains of safety and security in machine learning that will affect industry sectors in the next five to ten years.
Reports of underlying problems in both safety- and security-related domains, for instance adversarial attacks, have unsettled early adopters.
The problem for real-world applicability lies in being able to assess the risk of applying these technologies.
arXiv Detail & Related papers (2022-07-16T16:41:00Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS)
Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing this data effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - BYOD Security: A Study of Human Dimensions [0.0]
Bring Your Own Device (BYOD) security, along with its supporting frameworks and security mechanisms, is a growing phenomenon in Australian organisations.
The aim of this paper is to discover, through a study conducted using a survey questionnaire instrument, how employees practice and perceive the BYOD security mechanisms deployed by Australian businesses.
arXiv Detail & Related papers (2022-02-23T13:31:54Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.