Resource-aware Deep Learning for Wireless Fingerprinting Localization
- URL: http://arxiv.org/abs/2211.01759v1
- Date: Wed, 12 Oct 2022 12:39:29 GMT
- Title: Resource-aware Deep Learning for Wireless Fingerprinting Localization
- Authors: Gregor Cerar, Blaž Bertalanič, Carolina Fortuna
- Abstract summary: We discuss the latest results and trends in wireless localization and look at paths towards achieving more sustainable AI.
Considering only mobile users, estimated to exceed 7.4 billion by the end of 2025, and assuming that the networks serving these users will need to perform one localization per user per hour on average, the machine learning models used for the calculation would need to perform $65 \times 10^{12}$ predictions per year.
- Score: 0.7133136338850781
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Location based services, already popular with end users, are now inevitably
becoming part of new wireless infrastructures and emerging business processes.
The increasingly popular Deep Learning (DL) artificial intelligence methods
perform very well in wireless fingerprinting localization based on extensive
indoor radio measurement data. However, with the increasing complexity these
methods become computationally very intensive and energy hungry, both for their
training and subsequent operation. Considering only mobile users, estimated to
exceed 7.4 billion by the end of 2025, and assuming that the networks serving
these users will need to perform only one localization per user per hour on
average, the machine learning models used for the calculation would need to
perform $65 \times 10^{12}$ predictions per year. Add to this equation tens of
billions of other connected devices and applications that rely heavily on more
frequent location updates, and it becomes apparent that localization will
contribute significantly to carbon emissions unless more energy-efficient
models are developed and used. In this Chapter, we discuss the latest results
and trends in wireless localization and look at paths towards achieving more
sustainable AI. We then elaborate on a methodology for computing DL model
complexity, energy consumption and carbon footprint and show on a concrete
example how to develop a more resource-aware model for fingerprinting. We
finally compare relevant works in terms of complexity and training CO$_2$
footprint.
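As a rough check on the workload figure above, the short sketch below reproduces the predictions-per-year arithmetic and extends it into an illustrative energy and carbon estimate; the per-inference energy and grid carbon intensity are assumed placeholder values, not numbers reported in the paper.

```python
# Back-of-envelope check of the abstract's workload arithmetic, extended with
# an illustrative energy/carbon estimate. The energy-per-inference and grid
# carbon-intensity values are placeholders, not figures from the paper.

MOBILE_USERS = 7.4e9           # mobile users projected by end of 2025
LOCALIZATIONS_PER_HOUR = 1     # one localization per user per hour (abstract's assumption)
HOURS_PER_YEAR = 24 * 365

predictions_per_year = MOBILE_USERS * LOCALIZATIONS_PER_HOUR * HOURS_PER_YEAR
print(f"predictions/year: {predictions_per_year:.2e}")   # ~6.5e13, i.e. ~65 x 10^12

JOULES_PER_INFERENCE = 0.05    # assumed energy per DL inference (placeholder)
KG_CO2_PER_KWH = 0.4           # assumed grid carbon intensity (placeholder)

energy_kwh = predictions_per_year * JOULES_PER_INFERENCE / 3.6e6   # joules -> kWh
co2_tonnes = energy_kwh * KG_CO2_PER_KWH / 1000.0
print(f"~{energy_kwh:.2e} kWh/year, ~{co2_tonnes:.0f} t CO2-eq/year")
```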
Related papers
- DRL-based Dolph-Tschebyscheff Beamforming in Downlink Transmission for Mobile Users [52.9870460238443]
We propose a deep reinforcement learning-based blind beamforming technique using a learnable Dolph-Tschebyscheff antenna array.
Our simulation results show that the proposed method can support data rates very close to the best possible values.
arXiv Detail & Related papers (2025-02-03T11:50:43Z)
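For background on the classical taper this entry builds on, here is a minimal sketch of a Dolph-Tschebyscheff (Chebyshev) amplitude taper for a uniform linear array using SciPy; the element count and sidelobe level are arbitrary, and this is the conventional taper, not the paper's learnable, DRL-driven variant.

```python
import numpy as np
from scipy.signal.windows import chebwin

# Conventional Dolph-Tschebyscheff taper: equiripple sidelobes at a chosen level.
N = 16               # number of antenna elements (arbitrary)
SIDELOBE_DB = 30     # sidelobe attenuation target in dB (arbitrary)

weights = chebwin(N, at=SIDELOBE_DB)
weights /= weights.max()

# Normalized array factor over angle, half-wavelength element spacing.
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
steering = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(theta)))
af = np.abs(weights @ steering)
pattern_db = 20 * np.log10(af / af.max() + 1e-12)
print("element weights:", np.round(weights, 3))
```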
- ssProp: Energy-Efficient Training for Convolutional Neural Networks with Scheduled Sparse Back Propagation [4.77407121905745]
Back-propagation (BP) is a major source of computational expense during training deep learning models.
We propose a general, energy-efficient convolution module that can be seamlessly integrated into any deep learning architecture.
arXiv Detail & Related papers (2024-08-22T17:22:59Z)
- Power Hungry Processing: Watts Driving the Cost of AI Deployment? [74.19749699665216]
Generative, multi-purpose AI systems promise a unified approach to building machine learning (ML) models into technology.
This ambition of "generality" comes at a steep cost to the environment, given the amount of energy these systems require and the amount of carbon that they emit.
We measure deployment cost as the amount of energy and carbon required to perform 1,000 inferences on a representative benchmark dataset using these models.
We conclude with a discussion of the current trend of deploying multi-purpose generative ML systems, and caution that their utility should be weighed more intentionally against the increased costs in energy and emissions.
arXiv Detail & Related papers (2023-11-28T15:09:36Z)
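To make the "energy and carbon per 1,000 inferences" metric concrete, here is a minimal sketch of how such a measurement could be instrumented with the codecarbon package; the tiny placeholder model and synthetic inputs are assumptions for illustration and do not reproduce that paper's benchmark setup.

```python
import torch
from codecarbon import EmissionsTracker

# Placeholder model and synthetic inputs; the cited study benchmarks real
# multi-purpose generative models on representative datasets.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
).eval()
inputs = torch.randn(1000, 512)          # 1,000 inputs, one inference each

tracker = EmissionsTracker()
tracker.start()
with torch.no_grad():
    for x in inputs:                     # run inferences one at a time, as deployed
        _ = model(x.unsqueeze(0))
emissions_kg = tracker.stop()            # estimated kg CO2-eq for the measured block
print(f"~{emissions_kg:.2e} kg CO2-eq per 1,000 inferences")
```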
- Dynamic Early Exiting Predictive Coding Neural Networks [3.542013483233133]
With the demand for smaller and more accurate devices, Deep Learning models have become too heavy to deploy.
We propose a shallow bidirectional network based on predictive coding theory and dynamic early exiting for halting further computations.
We achieve comparable accuracy to VGG-16 in image classification on CIFAR-10 with fewer parameters and less computational complexity.
arXiv Detail & Related papers (2023-09-05T08:00:01Z)
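The sketch below shows the general mechanism of dynamic early exiting, where intermediate classifiers halt computation once a confidence threshold is met; it is a generic illustration with placeholder stages and an arbitrary threshold, not the bidirectional predictive-coding architecture proposed in that paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_predict(x, stages, exit_heads, threshold=0.9):
    """Run stages in order and stop at the first exit head that is confident enough.
    `stages`/`exit_heads` are callables standing in for trained blocks/classifiers;
    the 0.9 confidence threshold is an arbitrary assumption."""
    h = x
    for stage, head in zip(stages, exit_heads):
        h = stage(h)
        probs = softmax(head(h))
        if probs.max() >= threshold:       # confident enough: skip remaining stages
            return int(probs.argmax()), float(probs.max())
    return int(probs.argmax()), float(probs.max())   # final exit

# Toy usage with random (untrained) linear stages.
rng = np.random.default_rng(0)
stages = [(lambda h, W=rng.standard_normal((8, 8)): np.tanh(W @ h)) for _ in range(3)]
heads = [(lambda h, W=rng.standard_normal((5, 8)): W @ h) for _ in range(3)]
label, confidence = early_exit_predict(rng.standard_normal(8), stages, heads)
print(label, round(confidence, 3))
```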
- Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus is computationally efficient deep learning, which strives to achieve satisfactory performance while minimizing the computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z)
- Green Federated Learning [7.003870178055125]
Federated Learning (FL) is a machine learning technique for training a centralized model using data of decentralized entities.
FL may leverage as many as hundreds of millions of globally distributed end-user devices with diverse energy sources.
We propose the concept of Green FL, which involves optimizing FL parameters and making design choices to minimize carbon emissions.
arXiv Detail & Related papers (2023-03-26T02:23:38Z)
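As a reminder of the training loop whose energy and carbon Green FL aims to reduce, here is a minimal federated-averaging (FedAvg) round in NumPy; the linear model and local-update rule are placeholders, while the size-weighted server average is the standard FedAvg aggregation.

```python
import numpy as np

def fedavg_round(global_w, client_data, local_steps=5, lr=0.1):
    """One FedAvg round for a linear regression model: each client updates the
    model on its own data (which never leaves the device), then the server
    averages the client models weighted by local dataset size."""
    client_models, sizes = [], []
    for X, y in client_data:                       # on-device local training
        w = global_w.copy()
        for _ in range(local_steps):
            grad = X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        client_models.append(w)
        sizes.append(len(y))
    return np.average(client_models, axis=0, weights=sizes)   # server aggregation

# Toy usage: three clients with private synthetic data.
def make_client(n, rng, true_w):
    X = rng.standard_normal((n, 2))
    return X, X @ true_w + 0.01 * rng.standard_normal(n)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [make_client(n, rng, true_w) for n in (20, 50, 30)]
w = np.zeros(2)
for _ in range(10):                                # 10 communication rounds
    w = fedavg_round(w, clients)
print(np.round(w, 2))                              # approaches [2.0, -1.0]
```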
- Towards Sustainable Deep Learning for Wireless Fingerprinting Localization [0.541530201129053]
Location based services are becoming part of new wireless infrastructures and emerging business processes.
Deep Learning (DL) artificial intelligence methods perform very well in wireless fingerprinting localization based on extensive indoor radio measurement data.
With the increasing complexity these methods become computationally very intensive and energy hungry.
We present a new DL-based architecture for indoor localization that is more energy-efficient than related state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-22T15:13:44Z)
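To tie this entry back to the complexity methodology discussed in the present Chapter, the sketch below counts parameters and multiply-accumulate operations (MACs) per inference for a small fully connected fingerprinting model; the layer widths are arbitrary examples, not the architecture of either paper.

```python
# Parameters and MACs per forward pass of a fully connected network that maps
# an RSSI fingerprint vector to a 2-D position. Layer widths are illustrative.
LAYERS = [(256, 128), (128, 64), (64, 2)]     # (fan_in, fan_out) per dense layer

params = sum(i * o + o for i, o in LAYERS)    # weights + biases
macs = sum(i * o for i, o in LAYERS)          # one multiply-accumulate per weight

print(f"parameters: {params:,}   MACs/inference: {macs:,}")
# Multiplying MACs/inference by an assumed energy-per-MAC and by the yearly
# inference count gives the kind of energy estimate discussed in the Chapter.
```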
- Collaborative Learning over Wireless Networks: An Introductory Overview [84.09366153693361]
We will mainly focus on collaborative training across wireless devices.
Many distributed optimization algorithms have been developed over the last decades.
They provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local.
arXiv Detail & Related papers (2021-12-07T20:15:39Z)
- Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z)
- Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2020-11-16T08:24:34Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
- Efficient Training of Deep Convolutional Neural Networks by Augmentation in Embedding Space [24.847651341371684]
In applications where data are scarce, transfer learning and data augmentation techniques are commonly used to improve the generalization of deep learning models.
Fine-tuning a transferred model with data augmentation in the raw input space is computationally expensive, because the full network must be run for every augmented input.
We propose a method that replaces the augmentation in the raw input space with an approximate one that acts purely in the embedding space.
arXiv Detail & Related papers (2020-02-12T03:26:33Z)
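A minimal sketch of the general idea behind augmenting in embedding space: the expensive feature extractor runs once per raw sample, and cheap perturbations are applied to the cached embeddings that only the small trainable head ever sees. The frozen linear extractor and Gaussian-noise augmentation are assumptions for illustration, not the approximate augmentation derived in that paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((64, 256))     # stands in for a frozen, pretrained extractor

def embed(x):
    """Expensive part of the network; with embedding-space augmentation it runs
    only once per raw sample instead of once per augmented copy."""
    return np.tanh(W_frozen @ x)

def augment_embeddings(z, n_aug=4, sigma=0.1):
    """Cheap augmentation applied directly to a cached embedding (placeholder:
    isotropic Gaussian noise)."""
    return z + sigma * rng.standard_normal((n_aug, z.shape[0]))

x = rng.standard_normal(256)                  # one raw input
z = embed(x)                                  # computed once and cached
z_augmented = augment_embeddings(z)           # copies fed to the small trainable head
print(z_augmented.shape)                      # (4, 64)
```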
This list is automatically generated from the titles and abstracts of the papers on this site.