Conv-NILM-Net, a causal and multi-appliance model for energy source
separation
- URL: http://arxiv.org/abs/2208.02173v1
- Date: Wed, 3 Aug 2022 15:59:32 GMT
- Title: Conv-NILM-Net, a causal and multi-appliance model for energy source
separation
- Authors: Mohamed Alami C. and Jérémie Decock and Rim Kaddah and Jesse Read
- Abstract summary: Non-Intrusive Load Monitoring seeks to save energy by estimating individual appliance power usage from a single aggregate measurement.
Deep neural networks have become increasingly popular in attempting to solve NILM problems.
We propose Conv-NILM-net, a fully convolutional framework for end-to-end NILM.
- Score: 1.1355370218310157
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-Intrusive Load Monitoring (NILM) seeks to save energy by estimating
individual appliance power usage from a single aggregate measurement. Deep
neural networks have become increasingly popular in attempting to solve NILM
problems. However, most existing models perform Load Identification rather than online Source Separation. Among source separation models, most adopt a single-task learning approach in which a separate neural network is trained for each appliance. This strategy is computationally expensive and ignores both that multiple appliances can be active simultaneously and the dependencies between them. The remaining models are not causal, which is important for real-time application. Inspired by Conv-TasNet, a model for speech separation, we propose Conv-NILM-net, a fully convolutional framework for end-to-end NILM. Conv-NILM-net is a causal model for multi-appliance source separation. Our model is tested on two real datasets, REDD and UK-DALE, and clearly outperforms the state of the art while remaining significantly smaller than the competing models.
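To make the causal, fully convolutional idea concrete, below is a minimal PyTorch sketch of a dilated temporal convolution separator that maps the aggregate signal to all appliance signals at once. Class names, layer sizes, and hyperparameters are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    """One causal dilated 1-D conv block (Conv-TasNet style, simplified).

    Left-pads the input so each output sample depends only on current
    and past samples -- the causality needed for online NILM.
    """
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left padding only
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.PReLU()
        self.norm = nn.GroupNorm(1, channels)

    def forward(self, x):                        # x: (batch, channels, time)
        y = nn.functional.pad(x, (self.pad, 0))  # pad the past, never the future
        y = self.norm(self.act(self.conv(y)))
        return x + y                              # residual connection

class TinySeparator(nn.Module):
    """Aggregate power -> per-appliance power, all appliances in one pass."""
    def __init__(self, n_appliances: int, channels: int = 64, n_blocks: int = 6):
        super().__init__()
        self.encoder = nn.Conv1d(1, channels, kernel_size=1)
        self.tcn = nn.Sequential(
            *[CausalConvBlock(channels, dilation=2 ** i) for i in range(n_blocks)]
        )
        self.decoder = nn.Conv1d(channels, n_appliances, kernel_size=1)

    def forward(self, aggregate):                 # (batch, 1, time)
        return self.decoder(self.tcn(self.encoder(aggregate)))

model = TinySeparator(n_appliances=5)
agg = torch.randn(8, 1, 2048)                    # a batch of aggregate readings
per_appliance = model(agg)                       # (8, 5, 2048)
```

The left-only padding is what makes the stack causal: the exponentially growing dilations widen the receptive field over past samples without ever looking ahead, which is why such a model can run online.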
Related papers
- MSDC: Exploiting Multi-State Power Consumption in Non-intrusive Load
Monitoring based on A Dual-CNN Model [18.86649389838833]
Non-intrusive load monitoring (NILM) aims to decompose the aggregated electrical usage signal into appliance-specific power consumption.
We design a new neural NILM model, Multi-State Dual CNN (MSDC).
MSDC explicitly extracts information about an appliance's multiple states and state transitions, which in turn regulates the prediction of the appliance signals.
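The summary suggests a two-branch design. The sketch below is one hypothetical reading of it: a state branch classifies the appliance's discrete state per time step, and its posterior gates a per-state power regression. Layer sizes and the gating scheme are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DualBranchNILM(nn.Module):
    """Hypothetical dual-CNN: a state branch and a power branch.

    The state branch predicts a distribution over discrete appliance
    states; it modulates the power branch's regression, mirroring the
    idea of letting state information regulate the predicted signal.
    """
    def __init__(self, n_states: int, hidden: int = 32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(1, hidden, 5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 5, padding=2), nn.ReLU(),
        )
        self.state_head = nn.Conv1d(hidden, n_states, 1)  # per-step state logits
        self.power_head = nn.Conv1d(hidden, n_states, 1)  # per-state power estimate

    def forward(self, aggregate):                  # (batch, 1, time)
        h = self.backbone(aggregate)
        state_prob = self.state_head(h).softmax(dim=1)    # (batch, states, time)
        per_state_power = self.power_head(h)              # (batch, states, time)
        # expected power = sum over states of P(state) * power(state)
        power = (state_prob * per_state_power).sum(dim=1, keepdim=True)
        return power, state_prob
```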
arXiv Detail & Related papers (2023-02-11T01:56:54Z)
- SWARM Parallelism: Training Large Models Can Be Surprisingly
Communication-Efficient [69.61083127540776]
Deep learning applications benefit from using large models with billions of parameters.
Training these models is notoriously expensive due to the need for specialized HPC clusters.
We consider alternative setups for training large models: using cheap "preemptible" instances or pooling existing resources from multiple regions.
arXiv Detail & Related papers (2023-01-27T18:55:19Z) - Energy Disaggregation & Appliance Identification in a Smart Home: Transfer Learning enables Edge Computing [2.921708254378147]
Non-intrusive load monitoring (NILM) or energy disaggregation aims to extract the load profiles of individual consumer electronic appliances.
This work proposes a novel deep-learning and edge computing approach to solve the NILM problem.
arXiv Detail & Related papers (2023-01-08T10:59:44Z) - MutualNet: Adaptive ConvNet via Mutual Learning from Different Model
Configurations [51.85020143716815]
We propose MutualNet to train a single network that can run under a diverse set of resource constraints.
Our method trains a cohort of model configurations with various network widths and input resolutions.
MutualNet is a general training methodology that can be applied to various network structures.
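As a rough illustration, the step below runs the same network at several input resolutions and distills the full-resolution prediction into the reduced ones. This is a simplification: real MutualNet also varies network width via slimmable layers, which is omitted here. The function name and resolution list are assumptions, and the model is assumed to accept variable input sizes (e.g., via global pooling).

```python
import torch
import torch.nn.functional as F

def mutual_learning_step(model, images, labels, optimizer,
                         resolutions=(224, 192, 160, 128)):
    """One simplified training step in the spirit of MutualNet.

    The full-resolution prediction supervises the reduced-resolution
    passes via soft-label distillation; width variation is omitted.
    """
    optimizer.zero_grad()

    # Full-resolution pass: ordinary cross-entropy against the labels.
    full_logits = model(images)
    loss = F.cross_entropy(full_logits, labels)
    soft_targets = full_logits.detach().softmax(dim=1)

    # Reduced-resolution passes: distill from the full-resolution output.
    for r in resolutions[1:]:
        small = F.interpolate(images, size=(r, r), mode='bilinear',
                              align_corners=False)
        logits = model(small)
        loss = loss + F.kl_div(logits.log_softmax(dim=1), soft_targets,
                               reduction='batchmean')

    loss.backward()
    optimizer.step()
    return loss.item()
```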
arXiv Detail & Related papers (2021-05-14T22:30:13Z)
- A Federated Learning Framework for Non-Intrusive Load Monitoring [0.1657441317977376]
Non-intrusive load monitoring (NILM) aims at decomposing the total reading of the household power consumption into appliance-wise ones.
Data cooperation among utilities and DNOs, who own the NILM data, has become increasingly significant.
A framework is set up to improve the performance of NILM with federated learning (FL).
arXiv Detail & Related papers (2021-04-04T14:24:50Z)
- Ensemble Distillation for Robust Model Fusion in Federated Learning [72.61259487233214]
Federated Learning (FL) is a machine learning setting where many devices collaboratively train a machine learning model.
In most current training schemes, the central model is refined by averaging the parameters of the server model and the updated parameters from the client side.
We propose ensemble distillation for model fusion, i.e., training the central classifier on unlabeled data using the outputs of the clients' models.
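A minimal sketch of that fusion step, assuming classification clients: the server model is trained on unlabeled batches against the averaged soft predictions of the client models. Function names and the uniform ensemble average are illustrative; details such as initialization from parameter averaging, temperature, and early stopping are omitted.

```python
import torch
import torch.nn.functional as F

def fuse_by_distillation(server_model, client_models, unlabeled_loader,
                         optimizer, epochs=1):
    """Refine the server model on unlabeled data using the averaged
    soft predictions of the client models as distillation targets."""
    for m in client_models:
        m.eval()
    for _ in range(epochs):
        for x in unlabeled_loader:               # unlabeled input batches only
            with torch.no_grad():
                # Ensemble target: mean of the clients' softmax outputs.
                probs = torch.stack(
                    [m(x).softmax(dim=1) for m in client_models]).mean(dim=0)
            optimizer.zero_grad()
            loss = F.kl_div(server_model(x).log_softmax(dim=1), probs,
                            reduction='batchmean')
            loss.backward()
            optimizer.step()
```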
arXiv Detail & Related papers (2020-06-12T14:49:47Z)
- UVeQFed: Universal Vector Quantization for Federated Learning [179.06583469293386]
Federated learning (FL) is an emerging approach to train learning models without requiring users to share their possibly private labeled data.
In FL, each user trains its copy of the learning model locally. The server then collects the individual updates and aggregates them into a global model.
We show that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only minimal distortion.
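UVeQFed's quantizer is lattice-based; the scalar sketch below only illustrates the underlying dither-then-quantize idea on a model update, with a shared seed letting the server regenerate and subtract the dither. Step size, seed, and function names are assumptions.

```python
import torch

def dithered_quantize(update: torch.Tensor, step: float, seed: int):
    """Simplified subtractive-dithered scalar quantizer for a model update."""
    g = torch.Generator().manual_seed(seed)
    dither = (torch.rand(update.shape, generator=g) - 0.5) * step
    return torch.round((update + dither) / step)      # integers sent uplink

def dithered_dequantize(q: torch.Tensor, step: float, seed: int):
    g = torch.Generator().manual_seed(seed)
    dither = (torch.rand(q.shape, generator=g) - 0.5) * step
    return q * step - dither                          # server-side recovery

w = torch.randn(1000)                 # a client's model update
q = dithered_quantize(w, step=0.05, seed=42)
w_hat = dithered_dequantize(q, step=0.05, seed=42)
print((w - w_hat).abs().max())        # error bounded by half the step size
```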
arXiv Detail & Related papers (2020-06-05T07:10:22Z)
- A Unified Object Motion and Affinity Model for Online Multi-Object
Tracking [127.5229859255719]
We propose a novel MOT framework, named UMA, that unifies the object motion and affinity models into a single network.
UMA integrates single object tracking and metric learning into a unified triplet network by means of multi-task learning.
We equip our model with a task-specific attention module, which is used to boost task-aware feature learning.
arXiv Detail & Related papers (2020-03-25T09:36:43Z)
- Multi-channel U-Net for Music Source Separation [3.814858728853163]
Conditioned U-Net (C-U-Net) uses a control mechanism to train a single model for multi-source separation.
We propose a multi-channel U-Net (M-U-Net) trained using a weighted multi-task loss.
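A weighted multi-task loss for multi-source separation can be as simple as the sketch below: one reconstruction loss per source, combined with per-source weights. The fixed weights here are only a placeholder; the weighting scheme itself is the paper's contribution.

```python
import torch

def weighted_multitask_loss(estimates, targets, weights):
    """Weighted sum of per-source reconstruction losses.

    estimates/targets: (batch, n_sources, time); weights: (n_sources,).
    """
    per_source = ((estimates - targets) ** 2).mean(dim=(0, 2))  # (n_sources,)
    return (per_source * weights).sum()

est = torch.randn(4, 4, 16000)          # e.g., vocals/drums/bass/other stems
tgt = torch.randn(4, 4, 16000)
w = torch.tensor([1.0, 0.8, 0.8, 0.6])  # illustrative fixed weights
loss = weighted_multitask_loss(est, tgt, w)
```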
arXiv Detail & Related papers (2020-03-23T17:42:35Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)