NANCY: Neural Adaptive Network Coding methodologY for video distribution
over wireless networks
- URL: http://arxiv.org/abs/2008.09559v1
- Date: Fri, 21 Aug 2020 15:55:32 GMT
- Title: NANCY: Neural Adaptive Network Coding methodologY for video distribution
over wireless networks
- Authors: Paresh Saxena, Mandan Naresh, Manik Gupta, Anirudh Achanta, Sastri
Kota and Smrati Gupta
- Abstract summary: NANCY is a system that generates adaptive bit rates (ABR) for video and adaptive network coding rates (ANCR).
NANCY trains a neural network model with rewards formulated as quality of experience (QoE) metrics.
Our results show that NANCY provides 29.91% and 60.34% higher average QoE than Pensieve and robustMPC, respectively.
- Score: 1.636104578028594
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents NANCY, a system that generates adaptive bit rates (ABR)
for video and adaptive network coding rates (ANCR) using reinforcement learning
(RL) for video distribution over wireless networks. NANCY trains a neural
network model with rewards formulated as quality of experience (QoE) metrics.
It performs joint optimization in order to select: (i) adaptive bit rates for
future video chunks to counter variations in available bandwidth and (ii)
adaptive network coding rates to encode the video chunk slices to counter
packet losses in wireless networks. We present the design and implementation of
NANCY, and evaluate its performance compared to state-of-the-art video rate
adaptation algorithms including Pensieve and robustMPC. Our results show that
NANCY provides 29.91% and 60.34% higher average QoE than Pensieve and
robustMPC, respectively.
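The abstract describes training an RL agent whose reward is a QoE metric. As a rough illustration, a minimal sketch of a linear per-chunk QoE reward of the kind commonly used in ABR work (e.g. Pensieve-style) is shown below; the function name and the penalty weights are illustrative assumptions, not NANCY's actual formulation, which also incorporates the network coding rate.

```python
def qoe_reward(bitrate, prev_bitrate, rebuffer_time,
               rebuffer_penalty=4.3, smoothness_penalty=1.0):
    """Per-chunk linear QoE reward (illustrative, not NANCY's exact form).

    bitrate, prev_bitrate: selected bitrates in Mbps for this and the
    previous chunk; rebuffer_time: stall duration in seconds; the penalty
    weights here are placeholder values, not taken from the paper.
    """
    return (bitrate
            - rebuffer_penalty * rebuffer_time                 # stall penalty
            - smoothness_penalty * abs(bitrate - prev_bitrate))  # quality switch penalty

# Example: a 3.0 Mbps chunk after a 1.5 Mbps chunk with 0.2 s of rebuffering
r = qoe_reward(3.0, 1.5, 0.2)  # → 0.64
```

An RL policy trained against such a reward learns to trade off higher bitrates against the risk of rebuffering and abrupt quality switches; NANCY extends this idea by jointly selecting a network coding rate to counter packet losses.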
Related papers
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth
Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- DeepWiVe: Deep-Learning-Aided Wireless Video Transmission [0.0]
We present DeepWiVe, the first-ever end-to-end joint source-channel coding (JSCC) video transmission scheme.
We use deep neural networks (DNNs) to map video signals to channel symbols, combining video compression, channel coding, and modulation steps into a single neural transform.
Our results show that DeepWiVe can overcome the cliff-effect, which is prevalent in conventional separation-based digital communication schemes.
arXiv Detail & Related papers (2021-11-25T11:34:24Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
Distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We make efforts to formulate the essential mathematical functions to describe the R-D behavior of NIC using deep network and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- Improved CNN-based Learning of Interpolation Filters for Low-Complexity Inter Prediction in Video Coding [5.46121027847413]
This paper introduces a novel explainable neural network-based inter-prediction scheme.
A novel training framework enables each network branch to resemble a specific fractional shift.
When implemented in the context of the Versatile Video Coding (VVC) test model, 0.77%, 1.27% and 2.25% BD-rate savings can be achieved.
arXiv Detail & Related papers (2021-06-16T16:48:01Z)
- ANT: Learning Accurate Network Throughput for Better Adaptive Video Streaming [20.544139447901113]
Adaptive Bit Rate (ABR) decision plays a crucial role for ensuring satisfactory Quality of Experience (QoE) in video streaming applications.
This paper proposes to learn the ANT (a.k.a., Accurate Network Throughput) model to characterize the full spectrum of network throughput dynamics in the past.
Experiment results show that our approach significantly improves user QoE by 65.5% and 31.3% compared with the state-of-the-art Pensieve and Oboe, respectively.
arXiv Detail & Related papers (2021-04-26T12:15:53Z)
- End-to-end learnable EEG channel selection with deep neural networks [72.21556656008156]
We propose a framework to embed the EEG channel selection in the neural network itself.
We deal with the discrete nature of this new optimization problem by employing continuous relaxations of the discrete channel selection parameters.
This generic approach is evaluated on two different EEG tasks.
arXiv Detail & Related papers (2021-02-11T13:44:07Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that networks trained by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- A Deep-Unfolded Reference-Based RPCA Network For Video Foreground-Background Separation [86.35434065681925]
This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA).
Unlike existing designs, our approach focuses on modeling the temporal correlation between the sparse representations of consecutive video frames.
Experimentation using the moving MNIST dataset shows that the proposed network outperforms a recently proposed state-of-the-art RPCA network in the task of video foreground-background separation.
arXiv Detail & Related papers (2020-10-02T11:40:09Z)
- Efficient Adaptation of Neural Network Filter for Video Compression [10.769305738505071]
We present an efficient finetuning methodology for neural-network filters.
The fine-tuning is performed at encoder side to adapt the neural network to the specific content that is being encoded.
The proposed method adapts much faster than conventional finetuning approaches.
arXiv Detail & Related papers (2020-07-28T14:24:28Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.