Attacking with Something That Does Not Exist: 'Proof of Non-Existence' Can Exhaust DNS Resolver CPU
- URL: http://arxiv.org/abs/2403.15233v2
- Date: Mon, 17 Jun 2024 18:57:10 GMT
- Title: Attacking with Something That Does Not Exist: 'Proof of Non-Existence' Can Exhaust DNS Resolver CPU
- Authors: Olivia Gruza, Elias Heftrig, Oliver Jacobsen, Haya Schulmann, Niklas Vogel, Michael Waidner
- Abstract summary: NSEC3 is a proof-of-non-existence mechanism in DNSSEC.
The NSEC3-encloser attack can still create a 72x increase in CPU instruction count, even when the resolver limits hash iteration counts per RFC5155.
We show that with a sufficient volume of DNS packets the attack can increase CPU load and cause packet loss.
- Score: 17.213183581342502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: NSEC3 is a proof of non-existence in DNSSEC, which provides an authenticated assertion that a queried resource does not exist in the target domain. NSEC3 consists of alphabetically sorted hashed names before and after the queried hostname. To make dictionary attacks harder, the hash function can be applied in multiple iterations, which however also increases the load on the DNS resolver during the computation of the SHA-1 hashes in NSEC3 records. Concerns about the load that the computation of NSEC3 records creates on DNS resolvers were already considered in the NSEC3 specifications RFC5155 and RFC9276. In February 2024, the potential of NSEC3 to exhaust DNS resolvers' resources was assigned CVE-2023-50868, confirming that extra iterations of NSEC3 create substantial load. However, there was no published evaluation of the attack, and its impact on resolvers remained unclear. In this work we perform the first evaluation of the NSEC3-encloser attack against DNS resolver implementations and find that the NSEC3-encloser attack can still create a 72x increase in CPU instruction count, despite the victim resolver following the RFC5155 recommendations on limiting hash iteration counts. The impact of the attack varies across the different DNS resolvers, but we show that with a sufficient volume of DNS packets the attack can increase CPU load and cause packet loss. We find that at a rate of 150 malicious NSEC3 records per second, depending on the DNS implementation, the loss rate of benign DNS requests varies between 2.7% and 30%. We provide a detailed description and implementation of the NSEC3-encloser attack. We also develop the first analysis of how each NSEC3 parameter impacts the load inflicted on the victim resolver during the NSEC3-encloser attack.
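The iterated hashing the abstract refers to is defined in RFC 5155, Section 5: the owner name (in lowercase DNS wire format) is hashed once with the salt appended, and the digest is then re-hashed `iterations` additional times. A minimal sketch of that computation, assuming an illustrative wire-format name and salt (not values from the paper):

```python
import hashlib

def nsec3_hash(owner_name_wire: bytes, salt: bytes, iterations: int) -> bytes:
    """Iterated NSEC3 hash per RFC 5155, Section 5.

    SHA-1 is the only hash algorithm defined for NSEC3.  The total
    number of SHA-1 invocations is `iterations + 1`, which is why a
    large iteration count multiplies the resolver's validation cost.
    """
    # IH(salt, x, 0) = H(x || salt)
    digest = hashlib.sha1(owner_name_wire + salt).digest()
    # IH(salt, x, k) = H(IH(salt, x, k-1) || salt)
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest

# Illustrative example: "example.com" in lowercase wire format, no salt.
name = b"\x07example\x03com\x00"
h = nsec3_hash(name, b"", 10)
```

Each extra iteration adds one SHA-1 invocation per NSEC3 record the resolver validates; this per-record cost is what the NSEC3-encloser attack amplifies, and it is why RFC 9276 recommends zero additional iterations and an empty salt.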
Related papers
- MTDNS: Moving Target Defense for Resilient DNS Infrastructure [2.8721132391618256]
DNS (Domain Name System) is one of the most critical components of the Internet.
Researchers have been constantly developing methods to detect and defend against the attacks against DNS.
Most defensive approaches discard packets, which can cause legitimate packets to be dropped.
We propose MTDNS, a resilient MTD-based approach that employs Moving Target Defense techniques.
arXiv Detail & Related papers (2024-10-03T06:47:16Z)
- DNSSEC+: An Enhanced DNS Scheme Motivated by Benefits and Pitfalls of DNSSEC [1.8379423176822356]
We introduce DNSSEC+, a novel DNS scheme designed to mitigate the security and privacy vulnerabilities of the DNS resolution process between resolvers and name servers.
We show that for server-side processing latency, resolution time, and CPU usage, DNSSEC+ is comparable to less-secure schemes but significantly outperforms DNS-over-TLS.
arXiv Detail & Related papers (2024-08-02T01:25:14Z)
- The Harder You Try, The Harder You Fail: The KeyTrap Denial-of-Service Algorithmic Complexity Attacks on DNSSEC [19.568025360483702]
We develop a new class of DNSSEC-based algorithmic complexity attacks on DNS, which we dub KeyTrap attacks.
With just a single DNS packet, the KeyTrap attacks lead to a 2,000,000x spike in CPU instruction count in vulnerable DNS resolvers, stalling some for as long as 16 hours.
Exploiting KeyTrap, an attacker could effectively disable Internet access in any system utilizing a DNSSEC-validating resolver.
arXiv Detail & Related papers (2024-06-05T10:33:04Z)
- Guardians of DNS Integrity: A Remote Method for Identifying DNSSEC Validators Across the Internet [0.9319432628663636]
We propose a novel technique for identifying DNSSEC-validating resolvers.
We find that while most open resolvers are DNSSEC-enabled, less than 18% in IPv4 (38% in IPv6) validate received responses.
arXiv Detail & Related papers (2024-05-30T08:58:18Z)
- TI-DNS: A Trusted and Incentive DNS Resolution Architecture based on Blockchain [8.38094558878305]
Domain Name System (DNS) is vulnerable to some malicious attacks, including DNS cache poisoning.
This paper presents TI-DNS, a blockchain-based DNS resolution architecture designed to detect and correct the forged DNS records.
TI-DNS is easy to adopt, as it only requires modifications to the resolver side of the current DNS infrastructure.
arXiv Detail & Related papers (2023-12-07T08:03:10Z)
- ResolverFuzz: Automated Discovery of DNS Resolver Vulnerabilities with Query-Response Fuzzing [22.15711226930362]
Domain Name System (DNS) resolvers are the central piece of the DNS infrastructure.
Finding resolver vulnerabilities is non-trivial, and the problem is not well addressed by existing tools.
In this paper, we present a new fuzzing system termed ResolverFuzz to address the challenges related to DNS resolvers.
arXiv Detail & Related papers (2023-10-04T23:17:32Z)
- Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization [57.87950229651958]
Quantized neural networks (QNNs) have received increasing attention in resource-constrained scenarios due to their exceptional generalizability.
Previous studies claim that transferability is difficult to achieve across QNNs with different bitwidths.
We propose the quantization aware attack (QAA), which fine-tunes a QNN substitute model with a multiple-bitwidth training objective.
arXiv Detail & Related papers (2023-05-10T03:46:53Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes [81.85509264573948]
In the era of deep learning, a user often leverages a third-party machine learning tool to train a deep neural network (DNN) classifier.
In an information embedding attack, an attacker is the provider of a malicious third-party machine learning tool.
In this work, we aim to design information embedding attacks that are verifiable and robust against popular post-processing methods.
arXiv Detail & Related papers (2020-10-26T17:42:42Z)
- Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function [62.89388227354517]
Graph neural network (GNN), the mainstream method to learn on graph data, is vulnerable to graph evasion attacks.
Existing work has at least one of the following drawbacks: 1) limited to directly attack two-layer GNNs; 2) inefficient; and 3) impractical, as they need to know full or part of GNN model parameters.
We propose an influence-based efficient, direct, and restricted black-box evasion attack to any-layer GNNs.
arXiv Detail & Related papers (2020-09-01T03:24:51Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations of the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.