Threadbox: Sandboxing for Modular Security
- URL: http://arxiv.org/abs/2506.23683v1
- Date: Mon, 30 Jun 2025 10:04:38 GMT
- Title: Threadbox: Sandboxing for Modular Security
- Authors: Maysara Alhindi, Joseph Hallett
- Abstract summary: Threadbox is a sandboxing mechanism that enables modular and independent sandboxes. We present case studies to illustrate the applicability of the idea and discuss its limitations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Operating systems provide many sandboxing mechanisms to limit what resources applications can access; however, using these mechanisms sometimes requires developers to refactor their code to fit the sandboxing model. In this work, we investigate what makes existing sandboxing mechanisms challenging to apply to certain types of applications, and propose Threadbox, a sandboxing mechanism that enables modular and independent sandboxes and can be applied to threads and to specific functions. We present case studies to illustrate the applicability of the idea and discuss its limitations.
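The per-thread granularity the abstract describes is already visible in Linux's seccomp, whose strict mode attaches to the calling thread only. The sketch below is illustrative and not taken from the paper (the struct and function names are invented for the demo; assumes Linux): one worker thread confines itself while the rest of the process keeps full syscall access.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>
#include <unistd.h>

struct demo_result { int worker_sandboxed; int main_unrestricted; };

/* The worker confines itself: strict seccomp applies to the calling
 * thread only, so other threads keep full syscall access. */
static void *sandboxed_worker(void *arg) {
    struct demo_result *r = arg;
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) == 0)
        r->worker_sandboxed = 1;  /* may fail under a pre-existing filter */
    const char msg[] = "worker: write() still permitted\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    /* Exit with the raw syscall: pthread cleanup could issue calls
     * that strict mode forbids and get the process killed. */
    syscall(SYS_exit, 0);
    return NULL; /* not reached */
}

static struct demo_result run_demo(void) {
    struct demo_result r = {0, 0};
    pthread_t t;
    pthread_create(&t, NULL, sandboxed_worker, &r);
    pthread_join(t, NULL);
    /* The main thread is untouched: fopen() still works here, while
     * the same call would have killed the sandboxed worker. */
    FILE *f = fopen("/proc/self/status", "r");
    if (f) { r.main_unrestricted = 1; fclose(f); }
    return r;
}
```

Strict mode is the crudest per-thread policy (only read, write, exit, and sigreturn remain); a Threadbox-style design would instead attach an independent, module-specific policy to each thread or function.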
Related papers
- Quantifying Frontier LLM Capabilities for Container Sandbox Escape [1.6245103041408155]
Large language models (LLMs) increasingly act as autonomous agents, using tools to execute code, read and write files, and access networks. To mitigate these risks, agents are commonly deployed and evaluated in isolated "sandbox" environments. We introduce SANDBOXESCAPEBENCH, an open benchmark that safely measures an LLM's capacity to break out of these sandboxes.
arXiv Detail & Related papers (2026-03-01T22:47:39Z) - SandCell: Sandboxing Rust Beyond Unsafe Code [14.279471205248532]
Rust is a modern systems programming language that ensures memory safety by enforcing ownership and borrowing rules at compile time. Various approaches for isolating unsafe code to protect safe Rust from vulnerabilities have been proposed. This paper presents SandCell for flexible and lightweight isolation in Rust by leveraging existing syntactic boundaries.
arXiv Detail & Related papers (2025-09-28T19:01:51Z) - Playing in the Sandbox: A Study on the Usability of Seccomp [0.8594140167290099]
We report a usability trial with 7 experienced Seccomp developers exploring how they approached sandboxing an application. We highlight many challenges of using Seccomp, the sandboxing designs by the participants, and what developers think would make it easier for them to sandbox applications effectively.
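The kind of artifact such a trial asks developers to produce is a seccomp-BPF filter. As a minimal, hedged sketch (assuming Linux; the function name and the choice of `mkdir` as the blocked call are illustrative, not from the study), the filter below denies one syscall with `EPERM` and allows everything else:

```c
#define _GNU_SOURCE
#include <errno.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>
#include <linux/filter.h>
#include <unistd.h>

/* glibc's mkdir() uses SYS_mkdir on x86-64 but SYS_mkdirat on arm64. */
#ifdef __NR_mkdir
#define DEMO_BLOCKED_NR __NR_mkdir
#else
#define DEMO_BLOCKED_NR __NR_mkdirat
#endif

/* Install a filter that fails mkdir() with EPERM and allows all other
 * syscalls. A production filter should also check seccomp_data.arch. */
static int deny_mkdir(void) {
    struct sock_filter prog[] = {
        /* Load the syscall number. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        /* If it is the blocked call, fall through to ERRNO; else skip. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, DEMO_BLOCKED_NR, 0, 1),
        BPF_STMT(BPF_RET | BPF_K,
                 SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog fprog = {
        .len = (unsigned short)(sizeof prog / sizeof prog[0]),
        .filter = prog,
    };
    /* Required so an unprivileged process may install a filter. */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0)
        return -1;
    return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &fprog);
}
```

Even this tiny example shows why the usability findings matter: the developer must reason about raw syscall numbers, per-architecture differences, and `no_new_privs`, rather than about application-level resources.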
arXiv Detail & Related papers (2025-06-11T23:27:16Z) - Jailbreak Attacks and Defenses against Multimodal Generative Models: A Survey [50.031628043029244]
Multimodal generative models are susceptible to jailbreak attacks, which can bypass built-in safety mechanisms and induce the production of potentially harmful content. We present a detailed taxonomy of attack methods, defense mechanisms, and evaluation frameworks specific to multimodal generative models.
arXiv Detail & Related papers (2024-11-14T07:51:51Z) - Sandboxing Adoption in Open Source Ecosystems [0.8594140167290099]
This study looks at the use of sandboxing mechanisms in four open-source operating systems.
It reveals interesting usage patterns, such as cases where developers simplify their sandbox implementation.
It also highlights challenges that may be hindering the widespread adoption of sandboxing mechanisms.
arXiv Detail & Related papers (2024-05-10T12:52:46Z) - LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models [50.259006481656094]
We present a novel interactive application aimed at understanding the internal mechanisms of large vision-language models.
Our interface is designed to enhance the interpretability of the image patches, which are instrumental in generating an answer.
We present a case study of how our application can aid in understanding failure mechanisms in a popular large multi-modal model: LLaVA.
arXiv Detail & Related papers (2024-04-03T23:57:34Z) - Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation [59.37775534633868]
We present an extremely straightforward approach to transferring pre-trained, task-specific PEFT modules between same-family PLMs.
We also propose a method that allows the transfer of modules between incompatible PLMs without any change in the inference complexity.
arXiv Detail & Related papers (2024-03-27T17:50:00Z) - SoK: An Essential Guide For Using Malware Sandboxes In Security Applications: Challenges, Pitfalls, and Lessons Learned [9.24505310582519]
This paper systematizes 84 representative papers on the use of x86/64 malware sandboxes in the academic literature.
We propose a novel framework to simplify sandbox components and organize the literature to derive practical guidelines for using sandboxes.
arXiv Detail & Related papers (2024-03-24T21:41:41Z) - PiShield: A PyTorch Package for Learning with Requirements [49.03568411956408]
Deep learning models often struggle to meet safety requirements for their outputs.
In this paper, we introduce PiShield, the first package ever allowing for the integration of the requirements into the neural networks' topology.
arXiv Detail & Related papers (2024-02-28T12:24:27Z) - MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks [50.61968901704187]
We introduce a framework for MoT instruction tuning, designed to promote the decomposition of tasks into logical sub-tasks and sub-modules. Our investigations reveal that, through the cultivation and utilization of sub-modules, MoTCoder significantly improves both the modularity and correctness of the generated solutions.
arXiv Detail & Related papers (2023-12-26T08:49:57Z) - Multi-Agent Verification and Control with Probabilistic Model Checking [4.56877715768796]
Probabilistic model checking is a technique for formal automated reasoning about software or hardware systems.
It builds upon ideas and techniques from a diverse range of fields, from logic, automata and graph theory, to optimisation, numerical methods and control.
In recent years, probabilistic model checking has also been extended to integrate ideas from game theory.
arXiv Detail & Related papers (2023-08-05T09:31:32Z) - Machine Learning with Requirements: a Manifesto [114.97965827971132]
We argue that requirements definition and satisfaction can go a long way to make machine learning models even more fitting to the real world.
We show how the requirements specification can be fruitfully integrated into the standard machine learning development pipeline.
arXiv Detail & Related papers (2023-04-07T14:47:13Z) - Modular Deep Learning [120.36599591042908]
Transfer learning has recently become the dominant paradigm of machine learning.
It remains unclear how to develop models that specialise towards multiple tasks without incurring negative interference.
Modular deep learning has emerged as a promising solution to these challenges.
arXiv Detail & Related papers (2023-02-22T18:11:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.