Task-Aware Network Coding Over Butterfly Network
- URL: http://arxiv.org/abs/2201.11917v1
- Date: Fri, 28 Jan 2022 03:35:51 GMT
- Title: Task-Aware Network Coding Over Butterfly Network
- Authors: Jiangnan Cheng, Sandeep Chinchali, Ao Tang
- Abstract summary: We analyze a new task-driven network coding problem, where distributed receivers pass transmitted data through machine learning tasks.
We formulate a task-aware network coding problem over a butterfly network in real-coordinate space, where lossy analog compression can be applied.
We introduce ML algorithms to solve the problem in the general case, and our evaluation demonstrates the effectiveness of task-aware network coding.
- Score: 3.5366052026723547
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Network coding allows distributed information sources such as sensors to
efficiently compress and transmit data to distributed receivers across a
bandwidth-limited network. Classical network coding is largely task-agnostic --
the coding schemes mainly aim to faithfully reconstruct data at the receivers,
regardless of what ultimate task the received data is used for. In this paper,
we analyze a new task-driven network coding problem, where distributed
receivers pass transmitted data through machine learning (ML) tasks, which
provides an opportunity to improve efficiency by transmitting salient
task-relevant data representations. Specifically, we formulate a task-aware
network coding problem over a butterfly network in real-coordinate space, where
lossy analog compression through principal component analysis (PCA) can be
applied. A lower bound for the total loss function for the formulated problem
is given, and necessary and sufficient conditions for achieving this lower
bound are also provided. We introduce ML algorithms to solve the problem in the
general case, and our evaluation demonstrates the effectiveness of task-aware
network coding.
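The lossy analog compression step can be illustrated with a minimal PCA sketch (illustrative only, not the paper's butterfly-network formulation): a source keeps only the top-k principal components of its d-dimensional samples before transmission over a rate-limited edge, and the receiver reconstructs from those k real coefficients. All names and dimensions below are made up for illustration; numpy is assumed available.

```python
import numpy as np

# Sketch of lossy analog compression via PCA (illustrative only).
rng = np.random.default_rng(0)
d, k, n = 8, 2, 500

# Synthetic sensor data that is approximately low-rank: the
# task-relevant structure lives in a k-dimensional subspace
# plus small observation noise.
basis = rng.standard_normal((k, d))
X = rng.standard_normal((n, k)) @ basis + 0.01 * rng.standard_normal((n, d))

# PCA via SVD of the centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k]                      # k x d encoder (principal directions)

Z = Xc @ P.T                    # transmit k real coefficients per sample
X_hat = Z @ P + X.mean(axis=0)  # receiver-side reconstruction

mse = np.mean((X - X_hat) ** 2)
print(f"compression {d}->{k} dims, reconstruction MSE = {mse:.5f}")
```

Because the data is near low-rank, almost nothing task-relevant is lost despite the 4x bandwidth reduction; the paper's contribution is choosing such projections jointly across the butterfly network's sources and coded edges.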
Related papers
- Enabling robust sensor network design with data processing and
optimization making use of local beehive image and video files [0.0]
We offer a revolutionary paradigm that uses cutting-edge edge computing techniques to optimize data transmission and storage.
Our approach encompasses data compression for images and videos, coupled with a data aggregation technique for numerical data.
A key aspect of our approach is its ability to operate in resource-constrained environments.
arXiv Detail & Related papers (2024-02-26T15:27:47Z) - netFound: Foundation Model for Network Security [12.062547301932966]
We develop netFound, a foundational model for network security.
Our experiments demonstrate netFound's superiority over existing state-of-the-art ML-based solutions.
arXiv Detail & Related papers (2023-10-25T22:04:57Z) - Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
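The size-agnostic property behind this claim can be sketched with a plain 1-D convolution (assumption: the paper's CNN is fully convolutional, so no fixed-size layers constrain the input length). The kernel and signals below are made up for illustration.

```python
import numpy as np

# The same kernel applied on a short "training-sized" window also
# slides over an arbitrarily long signal: the output length simply
# tracks the input length.
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

kernel = np.array([0.25, 0.5, 0.25])    # e.g. a smoothing filter

short = np.arange(10, dtype=float)      # training-sized window
long = np.arange(10_000, dtype=float)   # much larger evaluation signal

out_short = conv1d(short, kernel)
out_long = conv1d(long, kernel)
print(out_short.shape, out_long.shape)  # (8,) (9998,)
```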
arXiv Detail & Related papers (2023-06-14T01:24:42Z) - Optimal transfer protocol by incremental layer defrosting [66.76153955485584]
Transfer learning is a powerful tool enabling model training with limited amounts of data.
The simplest transfer learning protocol is based on "freezing" the feature-extractor layers of a network pre-trained on a data-rich source task.
We show that this protocol is often sub-optimal and the largest performance gain may be achieved when smaller portions of the pre-trained network are kept frozen.
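The protocol difference can be sketched in a few lines of plain Python (layer names and the helper are illustrative, not the paper's API): the classical recipe freezes the entire feature extractor, while the "defrosting" finding is that freezing a smaller prefix can transfer better.

```python
# Mark the first n_frozen layers as non-trainable; the rest
# remain free to fine-tune on the target task.
layers = ["conv1", "conv2", "conv3", "conv4", "head"]

def freeze_prefix(layers, n_frozen):
    """Return {layer: trainable?} with the first n_frozen layers frozen."""
    return {name: i >= n_frozen for i, name in enumerate(layers)}

# Classical protocol: freeze the whole feature extractor.
full_freeze = freeze_prefix(layers, 4)
# The paper's finding: smaller frozen portions may transfer better.
partial_freeze = freeze_prefix(layers, 2)

print(full_freeze)
print(partial_freeze)
```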
arXiv Detail & Related papers (2023-03-02T17:32:11Z) - A Proper Orthogonal Decomposition approach for parameters reduction of
Single Shot Detector networks [0.0]
We propose a dimensionality reduction framework based on Proper Orthogonal Decomposition, a classical model order reduction technique.
We have applied this framework to the SSD300 architecture on the PASCAL VOC dataset, demonstrating a reduction of the network dimension and a remarkable speedup in fine-tuning the network in a transfer learning context.
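The core of a POD-style parameter reduction can be sketched as truncated SVD of a single layer's weight matrix (a simplification of the paper's pipeline; dimensions below are made up, and numpy is assumed available):

```python
import numpy as np

# Replace a d_out x d_in weight matrix W by a rank-r factorization
# U_r @ V_r, cutting parameters when r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r = 256, 512, 16

W = rng.standard_normal((d_out, d_in))
U, S, Vt = np.linalg.svd(W, full_matrices=False)
U_r = U[:, :r] * S[:r]          # d_out x r (directions scaled by energy)
V_r = Vt[:r]                    # r x d_in

params_full = d_out * d_in
params_pod = r * (d_out + d_in)
print(f"parameters: {params_full} -> {params_pod}")
```

The dense layer then becomes two thin layers applied in sequence, which is what enables the faster fine-tuning reported above.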
arXiv Detail & Related papers (2022-07-27T14:43:14Z) - Semi-supervised Network Embedding with Differentiable Deep Quantisation [81.49184987430333]
We develop d-SNEQ, a differentiable quantisation method for network embedding.
d-SNEQ incorporates a rank loss to equip the learned quantisation codes with rich high-order information.
It is able to substantially compress the size of trained embeddings, thus reducing storage footprint and accelerating retrieval speed.
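The storage saving from quantised codes can be illustrated with plain scalar quantisation (a much simpler stand-in for d-SNEQ's learned, rank-aware codes; the data and scheme below are made up, and numpy is assumed available):

```python
import numpy as np

# Map float32 embedding vectors to int8 codes plus one scale factor,
# shrinking storage ~4x while keeping vectors approximately decodable.
rng = np.random.default_rng(0)
E = rng.standard_normal((10_000, 64)).astype(np.float32)  # embeddings

scale = np.abs(E).max() / 127.0
codes = np.round(E / scale).astype(np.int8)   # stored form
E_hat = codes.astype(np.float32) * scale      # decoded on retrieval

ratio = E.nbytes / codes.nbytes
err = np.abs(E - E_hat).max()
print(f"storage ratio {ratio:.0f}x, max decode error {err:.4f}")
```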
arXiv Detail & Related papers (2021-08-20T11:53:05Z) - Verifying Low-dimensional Input Neural Networks via Input Quantization [12.42030531015912]
This paper revisits the original problem of verifying ACAS Xu networks.
We propose to prepend an input quantization layer to the network.
Our technique can deliver exact verification results immune to floating-point error.
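The input-quantization idea can be sketched in pure Python (the network and property below are toy stand-ins, not ACAS Xu): snapping each low-dimensional input to a finite grid means a property can be verified by exhaustively evaluating the network on every grid point.

```python
# Prepend a layer that snaps inputs to a uniform grid, then verify
# a property by enumeration over the (finite) set of grid points.
def quantize(x, lo, hi, levels):
    """Snap x in [lo, hi] to the nearest of `levels` uniform grid points."""
    step = (hi - lo) / (levels - 1)
    idx = round((x - lo) / step)
    return lo + idx * step

def net(x):                      # toy stand-in for the verified network
    return 2.0 * x + 1.0

lo, hi, levels = -1.0, 1.0, 5    # grid: -1.0, -0.5, 0.0, 0.5, 1.0
grid = [lo + i * (hi - lo) / (levels - 1) for i in range(levels)]

# Exhaustive check of a property over ALL possible quantized inputs.
assert all(net(quantize(g, lo, hi, levels)) <= 3.0 for g in grid)
print("property verified on", levels, "grid points")
```

Because every reachable input is actually evaluated, the result is exact and cannot be undermined by floating-point over-approximation.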
arXiv Detail & Related papers (2021-08-18T03:42:05Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
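The gap between the three-bit and one-bit regimes can be seen directly by quantizing a sinusoid (a toy uniform quantizer, not SignalNet's front end; numpy is assumed available): at 3 bits the waveform shape largely survives, while at 1 bit only the sign does.

```python
import numpy as np

# A b-bit uniform mid-rise quantizer on [-1, 1].
def quantize(x, bits):
    levels = 2 ** bits
    x = np.clip(x, -1.0, 1.0 - 1e-9)
    step = 2.0 / levels
    return (np.floor((x + 1.0) / step) + 0.5) * step - 1.0

t = np.linspace(0, 1, 64, endpoint=False)
x = np.sin(2 * np.pi * 3 * t)              # a 3-cycle sinusoid

x3 = quantize(x, 3)                        # 8 levels
x1 = quantize(x, 1)                        # 2 levels: sign only

mse3 = np.mean((x - x3) ** 2)
mse1 = np.mean((x - x1) ** 2)
print("3-bit MSE:", mse3, " 1-bit MSE:", mse1)
```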
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Binarized Aggregated Network with Quantization: Flexible Deep Learning
Deployment for CSI Feedback in Massive MIMO System [22.068682756598914]
A novel network named aggregated channel reconstruction network (ACRNet) is designed to boost the feedback performance.
The elastic feedback scheme is proposed to flexibly adapt the network to meet different resource limitations.
Experiments show that the proposed ACRNet outperforms numerous previous state-of-the-art networks.
arXiv Detail & Related papers (2021-05-01T22:50:25Z) - Towards Accurate Quantization and Pruning via Data-free Knowledge
Transfer [61.85316480370141]
We study data-free quantization and pruning by transferring knowledge from trained large networks to compact networks.
Our data-free compact networks achieve competitive accuracy to networks trained and fine-tuned with training data.
arXiv Detail & Related papers (2020-10-14T18:02:55Z) - Resolution Adaptive Networks for Efficient Inference [53.04907454606711]
We propose a novel Resolution Adaptive Network (RANet), which is inspired by the intuition that low-resolution representations are sufficient for classifying "easy" inputs.
In RANet, the input images are first routed to a lightweight sub-network that efficiently extracts low-resolution representations.
High-resolution paths in the network maintain the capability to recognize the "hard" samples.
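The routing logic can be sketched with stand-in classifiers (the functions and confidence values below are made up, not RANet's API): a cheap low-resolution path answers first, and the expensive high-resolution path is invoked only when its confidence falls below a threshold.

```python
# Confidence-gated early exit between a cheap and an expensive path.
def low_res_classifier(x):
    # Toy stand-in: high confidence only for "easy" inputs.
    return ("cat", 0.95) if x == "easy" else ("cat", 0.40)

def high_res_classifier(x):
    return ("dog", 0.99)

def adaptive_predict(x, threshold=0.9):
    label, conf = low_res_classifier(x)
    if conf >= threshold:
        return label, "low-res path"
    return high_res_classifier(x)[0], "high-res path"

print(adaptive_predict("easy"))   # ('cat', 'low-res path')
print(adaptive_predict("hard"))   # ('dog', 'high-res path')
```

Easy inputs thus pay only the cheap path's cost, which is where the average-case inference savings come from.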
arXiv Detail & Related papers (2020-03-16T16:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.