CRISP: Confidentiality, Rollback, and Integrity Storage Protection for Confidential Cloud-Native Computing
- URL: http://arxiv.org/abs/2408.06822v2
- Date: Thu, 15 Aug 2024 08:35:59 GMT
- Title: CRISP: Confidentiality, Rollback, and Integrity Storage Protection for Confidential Cloud-Native Computing
- Authors: Ardhi Putra Pratama Hartono, Andrey Brito, Christof Fetzer
- Abstract summary: Cloud-native applications rely on orchestration and have their services frequently restarted.
During restarts, attackers can revert the state of confidential services to a previous version that may aid their malicious intent.
This paper presents CRISP, a rollback protection mechanism that uses an existing runtime for Intel SGX and transparently prevents rollback.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trusted execution environments (TEEs) protect the integrity and confidentiality of running code and its associated data. Nevertheless, TEEs' integrity protection does not extend to the state saved on disk. Furthermore, modern cloud-native applications heavily rely on orchestration (e.g., through systems such as Kubernetes) and, thus, have their services frequently restarted. During restarts, attackers can revert the state of confidential services to a previous version that may aid their malicious intent. This paper presents CRISP, a rollback protection mechanism that uses an existing runtime for Intel SGX and transparently prevents rollback. Our approach can constrain the attack window to a fixed and short period or give developers the tools to avoid the vulnerability window altogether. Finally, experiments show that applying CRISP in a critical stateful cloud-native application may incur a resource increase but only a minor performance penalty.
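To make the rollback problem and its defense concrete, here is a minimal Python sketch of monotonic-counter rollback protection in the spirit of CRISP. The sealing key, counter class, and sealing format are illustrative assumptions, not CRISP's actual interfaces (CRISP integrates with an existing SGX runtime).

```python
import hashlib
import hmac
import json

# Hypothetical sealing key; a real enclave would derive this inside the TEE.
SEALING_KEY = b"enclave-sealing-key"

class TrustedCounter:
    """Stand-in for a trusted monotonic counter (TEE- or service-backed);
    it can only move forward."""
    def __init__(self):
        self._value = 0

    def increment(self) -> int:
        self._value += 1
        return self._value

    def read(self) -> int:
        return self._value

def seal_state(state: dict, counter: TrustedCounter) -> dict:
    """Bind the persisted state to a fresh counter value before writing it."""
    version = counter.increment()
    blob = json.dumps({"state": state, "version": version}).encode()
    tag = hmac.new(SEALING_KEY, blob, hashlib.sha256).hexdigest()
    return {"blob": blob, "tag": tag}

def unseal_state(sealed: dict, counter: TrustedCounter) -> dict:
    """Reject tampered blobs and any blob older than the trusted counter."""
    expected = hmac.new(SEALING_KEY, sealed["blob"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sealed["tag"]):
        raise ValueError("integrity check failed")
    record = json.loads(sealed["blob"])
    if record["version"] < counter.read():
        raise ValueError("rollback detected: stale state version")
    return record["state"]
```

A restarted service handed a stale sealed blob fails the version check; the gap between persisting state and updating the trusted counter is the attack window that CRISP constrains or lets developers eliminate.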
Related papers
- Authentication and identity management based on zero trust security model in micro-cloud environment
The Zero Trust framework can better track and block external attackers while limiting security breaches resulting from insider attacks in the cloud paradigm.
This paper focuses on authentication mechanisms, calculation of trust score, and generation of policies in order to establish required access control to resources.
arXiv Detail & Related papers (2024-10-29T09:06:13Z)
- Preventing Rowhammer Exploits via Low-Cost Domain-Aware Memory Allocation
Rowhammer is a hardware security vulnerability at the heart of every system with modern DRAM-based memory.
Citadel is a new memory allocator design that prevents Rowhammer-initiated security exploits.
Citadel supports thousands of security domains at a modest 7.4% average memory overhead and no performance loss.
arXiv Detail & Related papers (2024-09-23T18:41:14Z)
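A toy sketch of the domain-aware placement idea behind the entry above: give each security domain its own contiguous band of rows, separated by guard rows wider than the Rowhammer blast radius. The constants and class are hypothetical; Citadel's real allocator manages physical DRAM rows inside the kernel.

```python
from collections import defaultdict

ROWS_PER_BAND = 16   # rows reserved per security domain (assumption)
GUARD_ROWS = 2       # wider than the Rowhammer blast radius (assumption)
ROW_SIZE = 8 * 1024  # toy DRAM row size in bytes (assumption)

class DomainAwareAllocator:
    """Toy model of domain-aware placement: bands are separated by
    unmapped guard rows, so bit flips hammered from one domain cannot
    land in another domain's rows."""

    def __init__(self):
        self._next_row = 0
        self._band_start = {}          # domain -> first row of its band
        self._used = defaultdict(int)  # domain -> bytes used in its band

    def alloc(self, domain: str, size: int) -> tuple[int, int]:
        """Return a (row, offset) pair confined to the domain's band."""
        if domain not in self._band_start:
            self._band_start[domain] = self._next_row
            self._next_row += ROWS_PER_BAND + GUARD_ROWS  # skip guard rows
        used = self._used[domain]
        if used + size > ROWS_PER_BAND * ROW_SIZE:
            raise MemoryError(f"band exhausted for domain {domain!r}")
        self._used[domain] = used + size
        return self._band_start[domain] + used // ROW_SIZE, used % ROW_SIZE
```

In such a scheme, allocations from two different domains can never sit in adjacent rows; the memory overhead quoted above is, roughly speaking, the price of the guard rows and per-domain fragmentation in the real design.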
- Reliable and Efficient Concept Erasure of Text-to-Image Diffusion Models
We introduce Reliable and Efficient Concept Erasure (RECE), a novel approach that modifies the model in 3 seconds without necessitating additional fine-tuning.
To mitigate inappropriate content potentially represented by derived embeddings, RECE aligns them with harmless concepts in cross-attention layers.
The derivation and erasure of new representation embeddings are conducted iteratively to achieve a thorough erasure of inappropriate concepts.
arXiv Detail & Related papers (2024-07-17T08:04:28Z)
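A minimal sketch of the closed-form cross-attention edit used by this line of work (the formulation below follows UCE, on which RECE builds); the shapes and regularizer weight are illustrative assumptions.

```python
import numpy as np

def edit_projection(W: np.ndarray, c_src: np.ndarray, c_dst: np.ndarray,
                    lam: float = 0.1) -> np.ndarray:
    """Closed-form cross-attention edit: choose W' minimizing
    ||W' c_src - W c_dst||^2 + lam * ||W' - W||_F^2, so the source
    concept now produces the harmless target's keys/values."""
    v_dst = W @ c_dst                       # output we want c_src to map to
    numerator = np.outer(v_dst, c_src) + lam * W
    denominator = np.outer(c_src, c_src) + lam * np.eye(len(c_src))
    return numerator @ np.linalg.inv(denominator)
```

RECE's addition is to then derive, again in closed form, fresh embeddings that still regenerate the erased concept, align them with harmless concepts via this kind of edit, and iterate until no such embedding remains.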
- Cabin: Confining Untrusted Programs within Confidential VMs
Confidential computing safeguards sensitive computations from untrusted clouds.
CVMs often come with large and vulnerable operating system kernels, making them susceptible to attacks exploiting kernel weaknesses.
This study proposes Cabin, an isolated execution framework within guest VMs that utilizes the latest AMD SEV-SNP technology.
arXiv Detail & Related papers (2024-07-17T06:23:28Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
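The mask cancellation SecAgg relies on, and the exact aggregate it nevertheless reveals to the server, fit in a few lines. Toy scalar updates and a seeded RNG stand in for real model vectors and pairwise key agreement.

```python
import random

MOD = 2 ** 16  # toy modulus for masked arithmetic

def pairwise_masks(client_ids, seed=0):
    """For each pair (i, j) with i < j, client i adds m_ij and client j
    subtracts it, so all masks vanish in the aggregate."""
    rng = random.Random(seed)  # stands in for pairwise key agreement
    masks = {cid: 0 for cid in client_ids}
    for i, a in enumerate(client_ids):
        for b in client_ids[i + 1:]:
            m = rng.randrange(MOD)
            masks[a] = (masks[a] + m) % MOD
            masks[b] = (masks[b] - m) % MOD
    return masks

clients = ["c1", "c2", "c3"]
updates = {"c1": 10, "c2": 20, "c3": 30}  # toy scalar model updates
masks = pairwise_masks(clients)
uploads = {c: (updates[c] + masks[c]) % MOD for c in clients}

# Individual uploads look uniformly random, yet the server recovers the
# exact aggregate -- and that exact sum is what membership inference uses.
assert sum(uploads.values()) % MOD == sum(updates.values()) % MOD
```

The paper's point is that hiding individual updates is not enough: the exact sum alone can already leak whether a target's data participated, hence the call for mechanisms such as noise injection.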
- Trustworthy confidential virtual machines for the masses
We present Revelio, an approach that allows confidential virtual machine (VM)-based workloads to be designed and deployed in a way that disallows tampering even by the service providers.
We focus on web-facing workloads, protect them leveraging SEV-SNP, and enable end-users to remotely attest them seamlessly each time a new web session is established.
arXiv Detail & Related papers (2024-02-23T11:54:07Z)
- HasTEE+: Confidential Cloud Computing and Analytics with Haskell
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type-safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z)
- A Last-Level Defense for Application Integrity and Confidentiality
We introduce a novel system, LLD, that enforces the integrity and consistency of applications in a transparent and scalable fashion.
Our solution augments TEEs with instantiation control and rollback protection.
Our rollback detection mechanism does not need excessive replication, nor does it sacrifice durability.
arXiv Detail & Related papers (2023-11-10T16:15:44Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
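A minimal sketch of majority voting over window-ablated views, in the spirit of the DRSM entry above; the window size and classifier interface are assumptions, and the paper derives its robustness certificate more carefully.

```python
def window_views(data: bytes, window: int):
    """Yield one view per window position: keep that window, zero the rest."""
    for start in range(0, len(data), window):
        kept = data[start:start + window]
        yield b"\x00" * start + kept + b"\x00" * (len(data) - start - len(kept))

def smoothed_classify(data: bytes, base_classifier, window: int = 512) -> int:
    """Majority vote over ablated views. A contiguous run of k adversarial
    bytes touches at most k // window + 2 views, so a large enough vote
    margin certifies the prediction against such perturbations."""
    votes = [base_classifier(view) for view in window_views(data, window)]
    return max(set(votes), key=votes.count)
```

Because each vote sees only one window, most views are untouched by a localized byte-level perturbation, which is what bounds the adversary's influence.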
- Privacy-Preserving Machine Learning in Untrusted Clouds Made Simple
We present a practical framework to deploy privacy-preserving machine learning applications in untrusted clouds.
We shield unmodified PyTorch ML applications by running them in Intel SGX enclaves with encrypted model parameters and encrypted input data.
Our approach is completely transparent to the machine learning application: the developer and the end-user do not need to modify the application in any way.
arXiv Detail & Related papers (2020-09-09T16:16:06Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
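The Full DOS attack is easy to sketch because the modern Windows loader ignores most of the DOS header. In the paper the payload bytes are optimized against the target classifier; here, as a hedged illustration, they are taken as given.

```python
def full_dos_slots(pe: bytes) -> range:
    """Offsets 2..0x3B sit between the 'MZ' magic and the e_lfanew field
    at 0x3C; the modern Windows loader ignores them, so they can carry an
    adversarial payload without breaking execution."""
    if pe[:2] != b"MZ":
        raise ValueError("not a PE file")
    return range(2, 0x3C)

def inject_full_dos(pe: bytes, payload: bytes) -> bytes:
    """Write the payload into the unused DOS-header bytes (Full DOS attack)."""
    slots = full_dos_slots(pe)
    if len(payload) > len(slots):
        raise ValueError("payload exceeds the 58 unused DOS-header bytes")
    buf = bytearray(pe)
    for off, byte in zip(slots, payload):
        buf[off] = byte
    return bytes(buf)
```

The Extend and Shift attacks follow the same pattern but create room for the payload by enlarging the DOS header or displacing the first section's content, rather than reusing existing slack bytes.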