MapLUR: Exploring a new Paradigm for Estimating Air Pollution using Deep
Learning on Map Images
- URL: http://arxiv.org/abs/2002.07493v1
- Date: Tue, 18 Feb 2020 11:21:55 GMT
- Authors: Michael Steininger, Konstantin Kobs, Albin Zehe, Florian
Lautenschlager, Martin Becker, Andreas Hotho
- Abstract summary: Land-use regression models are important for the assessment of air pollution concentrations in areas without measurement stations.
We propose the Data-driven, Open, Global (DOG) paradigm that entails models based on purely data-driven approaches using only openly and globally available data.
- Score: 4.7791671364702575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Land-use regression (LUR) models are important for the assessment of air
pollution concentrations in areas without measurement stations. While many such
models exist, they often use manually constructed features based on restricted,
locally available data. Thus, they are typically hard to reproduce and
challenging to adapt to areas beyond those they have been developed for. In
this paper, we advocate a paradigm shift for LUR models: We propose the
Data-driven, Open, Global (DOG) paradigm that entails models based on purely
data-driven approaches using only openly and globally available data. Progress
within this paradigm will alleviate the need for experts to adapt models to the
local characteristics of the available data sources and thus facilitate the
generalizability of air pollution models to new areas on a global scale. In
order to illustrate the feasibility of the DOG paradigm for LUR, we introduce a
deep learning model called MapLUR. It is based on a convolutional neural
network architecture and is trained exclusively on globally and openly
available map data without requiring manual feature engineering. We compare our
model to state-of-the-art baselines like linear regression, random forests and
multi-layer perceptrons using a large data set of modeled $\text{NO}_2$
concentrations in Central London. Our results show that MapLUR significantly
outperforms these approaches even though they are provided with manually
tailored features. Furthermore, we illustrate that the automatic feature
extraction inherent to models based on the DOG paradigm can learn features that
are readily interpretable and closely resemble those commonly used in
traditional LUR approaches.
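To make the comparison concrete, the sketch below fits the kind of traditional LUR baseline the paper benchmarks against: an ordinary-least-squares regression on hand-crafted land-use features. All feature names, coefficients, and data here are synthetic and purely illustrative; they are not taken from the paper or its London dataset.

```python
import numpy as np

# Hypothetical land-use regression (LUR) baseline: OLS on hand-crafted
# features near each measurement site. Features and coefficients are
# invented for illustration only.
rng = np.random.default_rng(0)
n_sites = 100

# Columns: major-road length (km), building footprint fraction,
# distance to nearest park (km) -- all synthetic.
X = rng.uniform(0.0, 1.0, size=(n_sites, 3))
true_w = np.array([30.0, 15.0, -5.0])                  # illustrative effect sizes
y = 20.0 + X @ true_w + rng.normal(0.0, 1.0, n_sites)  # synthetic NO2 (ug/m^3)

# Fit OLS with an intercept via least squares.
A = np.hstack([np.ones((n_sites, 1)), X])
w_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ w_hat
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

A DOG-style model like MapLUR replaces the hand-crafted matrix `X` with raw map-tile images fed to a CNN, so the feature-engineering step above disappears entirely.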
Related papers
- Learning from the Giants: A Practical Approach to Underwater Depth and Surface Normals Estimation [3.0516727053033392]
This paper presents a novel deep learning model for Monocular Depth and Surface Normals Estimation (MDSNE).
It is specifically tailored for underwater environments, using a hybrid architecture that integrates CNNs with Transformers.
Our model reduces parameters by 90% and training costs by 80%, allowing real-time 3D perception on resource-constrained devices.
arXiv Detail & Related papers (2024-10-02T22:41:12Z)
- FedDistill: Global Model Distillation for Local Model De-Biasing in Non-IID Federated Learning [10.641875933652647]
Federated Learning (FL) is a novel approach that allows for collaborative machine learning.
FL faces challenges due to non-uniformly distributed (non-iid) data across clients.
This paper introduces FedDistill, a framework enhancing the knowledge transfer from the global model to local models.
arXiv Detail & Related papers (2024-04-14T10:23:30Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- FedSoup: Improving Generalization and Personalization in Federated Learning via Selective Model Interpolation [32.36334319329364]
Cross-silo federated learning (FL) enables the development of machine learning models on datasets distributed across data centers.
Recent research has found that current FL algorithms face a trade-off between local and global performance when confronted with distribution shifts.
We propose a novel federated model soup method to optimize the trade-off between local and global performance.
arXiv Detail & Related papers (2023-07-20T00:07:29Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We make empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z)
- Multi-Branch Deep Radial Basis Function Networks for Facial Emotion Recognition [80.35852245488043]
We propose a CNN based architecture enhanced with multiple branches formed by radial basis function (RBF) units.
RBF units capture local patterns shared by similar instances using an intermediate representation.
We show that it is the incorporation of local information that makes the proposed model competitive.
arXiv Detail & Related papers (2021-09-07T21:05:56Z)
- Data-Free Knowledge Distillation for Heterogeneous Federated Learning [31.364314540525218]
Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global server iteratively averages the model parameters of local users without accessing their data.
Knowledge Distillation has recently emerged to tackle this issue, by refining the server model using aggregated knowledge from heterogeneous users.
We propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner.
arXiv Detail & Related papers (2021-05-20T22:30:45Z)
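Several of the related papers above build on Federated Averaging (FedAvg), which FedFTG, for example, is described as enhancing: the server averages client model parameters weighted by local dataset size. A minimal, generic sketch with synthetic client weights (not any specific paper's method):

```python
import numpy as np

# FedAvg aggregation step: the server combines client parameter vectors,
# weighting each client by the size of its local dataset.
client_params = [np.array([1.0, 2.0]),
                 np.array([3.0, 4.0]),
                 np.array([5.0, 6.0])]   # synthetic client model parameters
client_sizes = np.array([10, 30, 60])    # synthetic local dataset sizes

weights = client_sizes / client_sizes.sum()          # 0.1, 0.3, 0.6
global_params = sum(w * p for w, p in zip(weights, client_params))
print(global_params)  # -> [4. 5.]
```

The non-IID challenge discussed in these papers arises because this simple average can pull the global model away from what any individual client's data distribution would favor.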
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including any of its content) and is not responsible for any consequences of its use.