TurboTLS: TLS connection establishment with 1 less round trip
- URL: http://arxiv.org/abs/2302.05311v2
- Date: Mon, 15 Jul 2024 16:37:09 GMT
- Title: TurboTLS: TLS connection establishment with 1 less round trip
- Authors: Carlos Aguilar-Melchor, Thomas Bailleux, Jason Goertzen, Adrien Guinet, David Joseph, Douglas Stebila
- Abstract summary: We show how to establish TLS connections using one less round trip.
In our approach, which we call TurboTLS, the initial client-to-server and server-to-client flows of the TLS handshake are sent over UDP rather than TCP.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show how to establish TLS connections using one less round trip. In our approach, which we call TurboTLS, the initial client-to-server and server-to-client flows of the TLS handshake are sent over UDP rather than TCP. At the same time, in the same flights, the three-way TCP handshake is carried out. Once the TCP connection is established, the client and server can complete the final flight of the TLS handshake over the TCP connection and continue using it for application data. No changes are made to the contents of the TLS handshake protocol, only its delivery mechanism. We avoid problems with UDP fragmentation by using request-based fragmentation, in which the client sends in advance enough UDP requests to provide sufficient room for the server to fit its response with one response packet per request packet. Clients can detect which servers support this without an additional round trip, if the server advertises its support in a DNS HTTPS resource record. Experiments using our software implementation show substantial latency improvements. On reliable connections, we effectively eliminate a round trip without any noticeable cost. To ensure adequate performance on unreliable connections, we use lightweight packet ordering and buffering; we can have a client wait a very small time to receive a potentially lost packet (e.g., a fraction of the RTT observed for the first fragment) before falling back to TCP without any further delay, since the TCP connection was already in the process of being established. This approach offers substantial performance improvements with low complexity, even in heterogeneous network environments with poorly configured middleboxes.
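The request-based fragmentation idea from the abstract can be sketched in a few lines: the client sends enough UDP request packets in advance that the server can return its (larger) response using one fragment per request. The sketch below is a minimal illustration over plain UDP sockets, not TurboTLS's actual wire format; the 2-byte framing (request count on requests, fragment index on responses), the `FRAG_SIZE` constant, and the assumption that the client can estimate the response size up front are all simplifications for illustration.

```python
import socket
import threading

FRAG_SIZE = 1200  # conservative UDP payload to avoid IP-level fragmentation

def server(sock: socket.socket, response: bytes) -> None:
    """Toy server: collect the client's request packets, then send the
    response as one fragment per request packet, so it never sends more
    packets than the client has "paid for" with requests."""
    sock.settimeout(2.0)
    total, got, addr = None, 0, None
    while total is None or got < total:
        data, addr = sock.recvfrom(2048)
        total = int.from_bytes(data[:2], "big")  # each request carries the count
        got += 1
    frags = [response[i:i + FRAG_SIZE] for i in range(0, len(response), FRAG_SIZE)]
    for i, frag in enumerate(frags[:total]):
        sock.sendto(i.to_bytes(2, "big") + frag, addr)

def client(server_addr, expected_len: int) -> bytes:
    """Send enough request packets that the expected response fits in one
    fragment per request, then reassemble the fragments by index."""
    n = -(-expected_len // FRAG_SIZE)  # ceiling division
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    for _ in range(n):
        sock.sendto(n.to_bytes(2, "big"), server_addr)
    frags = {}
    while len(frags) < n:
        data, _ = sock.recvfrom(FRAG_SIZE + 2)
        frags[int.from_bytes(data[:2], "big")] = data[2:]
    return b"".join(frags[i] for i in range(n))
```

In the real protocol these UDP flights carry the TLS handshake messages and race against the concurrent TCP handshake; the sketch only shows the fragmentation bookkeeping.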
Related papers
- CycleSL: Server-Client Cyclical Update Driven Scalable Split Learning [60.59553507555341]
We introduce CycleSL, a novel aggregation-free split learning framework. Inspired by alternating block coordinate descent, CycleSL treats server-side training as an independent higher-level machine learning task. Our empirical findings highlight the effectiveness of CycleSL in enhancing model performance.
arXiv Detail & Related papers (2025-11-23T21:00:21Z)
- TLoRa: Implementing TLS Over LoRa for Secure HTTP Communication in IoT [13.530498941051677]
TLoRa is an end-to-end architecture for HTTPS communication over LoRa. It enables a seamless and secure communication channel between WiFi-enabled end devices and the Internet over LoRa.
arXiv Detail & Related papers (2025-10-02T19:47:03Z)
- Faster and Better LLMs via Latency-Aware Test-Time Scaling [52.10888685395448]
Test-Time Scaling (TTS) has proven effective in improving the performance of Large Language Models (LLMs) during inference. Existing research has overlooked the efficiency of TTS from a latency-sensitive perspective. We demonstrate that a compute-optimal TTS does not always result in the lowest latency in scenarios where latency is critical.
arXiv Detail & Related papers (2025-05-26T07:51:30Z)
- Task-Oriented Feature Compression for Multimodal Understanding via Device-Edge Co-Inference [49.77734021302196]
We propose a task-oriented feature compression (TOFC) method for multimodal understanding in a device-edge co-inference framework.
To enhance compression efficiency, multiple entropy models are adaptively selected based on the characteristics of the visual features.
Results show that TOFC achieves up to 60% reduction in data transmission overhead and 50% reduction in system latency.
arXiv Detail & Related papers (2025-03-17T08:37:22Z)
- Learning in Strategic Queuing Systems with Small Buffers [3.6480791907166306]
We show that when queues are learning, a small constant factor increase in server capacity, compared to what would be needed if centrally coordinating, suffices to keep the system stable.
This work contributes to the growing literature on the impact of selfish learning in systems with carryover effects between rounds.
arXiv Detail & Related papers (2025-02-13T02:23:23Z) - Tracezip: Efficient Distributed Tracing via Trace Compression [26.353398496686854]
Distributed tracing serves as a fundamental building block in the monitoring and testing of cloud service systems.
Head-based sampling indiscriminately selects requests to trace when they enter the system, which may miss critical events.
In contrast, tail-based sampling first captures all requests and then selectively persists the edge-case traces.
We propose Tracezip to enhance the efficiency of distributed tracing via trace compression.
arXiv Detail & Related papers (2025-02-10T10:13:57Z) - Streaming DiLoCo with overlapping communication: Towards a Distributed Free Lunch [66.84195842685459]
Training of large language models (LLMs) is typically distributed across a large number of accelerators to reduce training time.
Recently, distributed algorithms like DiLoCo have relaxed such co-location constraint.
We show experimentally that we can distribute training of billion-scale parameters and reach similar quality as before.
arXiv Detail & Related papers (2025-01-30T17:23:50Z) - Designing a Secure Device-to-Device File Transfer Mechanism [0.6138671548064355]
In this paper we study available file transfer approaches and their known flaws.
We propose a protocol that uses a relay server to forward files from the client to the server.
arXiv Detail & Related papers (2024-11-21T04:24:37Z) - Exploiting Sequence Number Leakage: TCP Hijacking in NAT-Enabled Wi-Fi Networks [22.72218888270886]
We uncover a new side-channel vulnerability in the widely used NAT port preservation strategy and an insufficient reverse path validation strategy of Wi-Fi routers.
Off-path attackers can infer if there is one victim client in the same network communicating with another host on the Internet using TCP.
We test 67 widely used routers from 30 vendors and discover that 52 of them are affected by this attack.
arXiv Detail & Related papers (2024-04-06T11:59:35Z) - Client Orchestration and Cost-Efficient Joint Optimization for
NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z) - Random Segmentation: New Traffic Obfuscation against Packet-Size-Based Side-Channel Attacks [3.519713290901182]
Despite encryption, the packet size is still visible, enabling observers to infer private information in the Internet of Things (IoT) environment.
Packet padding obfuscates packet-length characteristics with a high data overhead because it relies on adding noise to the data.
This paper proposes a more data-efficient approach that randomizes packet sizes without adding noise.
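The core idea described above can be sketched simply: instead of padding packets with noise bytes, split each payload at random boundaries so that on-the-wire sizes no longer track the original message lengths. This is an illustrative sketch of the general technique, not the paper's exact scheme; the segment-size bounds are assumed parameters.

```python
import random
from typing import Optional

def random_segments(payload: bytes, min_seg: int = 64, max_seg: int = 512,
                    rng: Optional[random.Random] = None) -> list:
    """Split a payload into randomly sized segments so packet sizes leak
    less about message length. Unlike padding, no filler bytes are added:
    the segments concatenate back to exactly the original payload."""
    rng = rng or random.Random()
    segments, pos = [], 0
    while pos < len(payload):
        size = rng.randint(min_seg, max_seg)  # random boundary, no noise bytes
        segments.append(payload[pos:pos + size])
        pos += size
    return segments
```

Because nothing is added, total transmitted bytes equal the payload size (plus per-packet headers), which is the data-efficiency advantage over padding.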
arXiv Detail & Related papers (2023-09-12T03:33:36Z) - Boosting Distributed Machine Learning Training Through Loss-tolerant
Transmission Protocol [11.161913989794257]
Distributed Machine Learning (DML) systems are utilized to enhance the speed of model training in data centers (DCs) and edge nodes.
PS communication architecture faces severe long-tail latency caused by many-to-one "incast" traffic patterns, negatively impacting training throughput.
The Loss-tolerant Transmission Protocol allows partial loss of gradients during synchronization to avoid unneeded retransmission.
Early Close adjusts the loss-tolerant threshold based on network conditions.
arXiv Detail & Related papers (2023-05-07T14:01:52Z) - BAFFLE: A Baseline of Backpropagation-Free Federated Learning [71.09425114547055]
Federated learning (FL) is a general principle for decentralized clients to train a server model collectively without sharing local data.
We develop backpropagation-free federated learning, dubbed BAFFLE, in which backpropagation is replaced by multiple forward processes to estimate gradients.
BAFFLE is 1) memory-efficient and easily fits uploading bandwidth; 2) compatible with inference-only hardware optimization and model quantization or pruning; and 3) well-suited to trusted execution environments.
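The idea of replacing backpropagation with multiple forward passes can be illustrated with a zeroth-order (SPSA-style) gradient estimator: average directional finite differences along random perturbations. This is a generic sketch of the forward-only estimation idea, not BAFFLE's exact scheme; the function names and hyperparameters are assumptions.

```python
import numpy as np

def forward_grad_estimate(loss, theta, sigma=1e-4, k=1000, rng=None):
    """Estimate the gradient of `loss` at `theta` using only forward
    evaluations: average of directional finite differences along k random
    Gaussian perturbations. No backpropagation graph is needed, which is
    why such estimators suit inference-only hardware and TEEs."""
    rng = rng or np.random.default_rng(0)
    base = loss(theta)                      # one baseline forward pass
    grad = np.zeros_like(theta)
    for _ in range(k):
        u = rng.standard_normal(theta.shape)  # random direction
        grad += (loss(theta + sigma * u) - base) / sigma * u
    return grad / k
```

The estimate is unbiased up to O(sigma) terms but noisy; the number of forward passes k trades compute for gradient variance.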
arXiv Detail & Related papers (2023-01-28T13:34:36Z) - Fast Federated Edge Learning with Overlapped Communication and
Computation and Channel-Aware Fair Client Scheduling [2.294014185517203]
We consider federated edge learning (FEEL) over wireless fading channels taking into account the downlink and uplink channel latencies.
We propose two alternative schemes with fairness considerations, termed age-aware MRTP (A-MRTP) and opportunistically fair MRTP (OF-MRTP).
It is shown through numerical simulations that OF-MRTP provides significant reduction in latency without sacrificing test accuracy.
arXiv Detail & Related papers (2021-09-14T14:16:01Z) - Quantum Private Information Retrieval for Quantum Messages [71.78056556634196]
Quantum private information retrieval (QPIR) for quantum messages is the protocol in which a user retrieves one of the multiple quantum states from one or multiple servers without revealing which state is retrieved.
We consider QPIR in two different settings: the blind setting, in which the servers contain one copy of the message states, and the visible setting, in which the servers contain the description of the message states.
arXiv Detail & Related papers (2021-01-22T10:28:32Z) - A Deep Learning Approach for Low-Latency Packet Loss Concealment of
Audio Signals in Networked Music Performance Applications [66.56753488329096]
Networked Music Performance (NMP) is envisioned as a potential game changer among Internet applications.
This article describes a technique for predicting lost packet content in real-time using a deep learning approach.
arXiv Detail & Related papers (2020-07-14T15:51:52Z) - Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) features ubiquitous properties such as reduction of communication overhead and preserving data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowing wireless channel state information and statistical characteristics of clients.
arXiv Detail & Related papers (2020-07-05T12:32:32Z) - Dynamic Parameter Allocation in Parameter Servers [74.250687861348]
We propose to integrate dynamic parameter allocation into parameter servers and describe an efficient implementation of such a parameter server, called Lapse.
We found that Lapse provides near-linear scaling and can be orders of magnitude faster than existing parameter servers.
arXiv Detail & Related papers (2020-02-03T11:37:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.