Monotonic Neural Network: combining Deep Learning with Domain Knowledge
for Chiller Plants Energy Optimization
- URL: http://arxiv.org/abs/2106.06143v1
- Date: Fri, 11 Jun 2021 03:01:26 GMT
- Title: Monotonic Neural Network: combining Deep Learning with Domain Knowledge
for Chiller Plants Energy Optimization
- Authors: Fanhe Ma, Faen Zhang, Shenglan Ben, Shuxin Qin, Pengcheng Zhou,
Changsheng Zhou and Fengyi Xu
- Abstract summary: It is difficult to collect the enormous amounts of data needed for deep network training in real-world physical systems.
This paper incorporates domain knowledge into the structure and loss design of a deep network to build a nonlinear model.
Specifically, the energy consumption estimation of most chillers can be physically viewed as an input-output monotonic problem.
- Score: 1.4508333270892002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we are interested in building a domain-knowledge-based
deep learning framework to solve chiller plant energy optimization problems.
Unlike the hotspot applications of deep learning (e.g. image classification and
NLP), real-world physical systems make it difficult to collect the enormous
amounts of data needed for deep network training. Most existing methods reduce
the complex system to a linear model to facilitate training on small samples.
To tackle the small-sample problem, this paper incorporates domain knowledge
into the structure and loss design of the deep network to build a nonlinear
model with a lower-redundancy function space. Specifically, the energy
consumption estimation of most chillers can be physically viewed as an
input-output monotonic problem. Thus, we design a neural network with monotonic
constraints to mimic the physical behavior of the system. We verify the
proposed method on the cooling system of a data center; experimental results
show the superiority of our framework in energy optimization compared to
existing approaches.
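To make the monotonic-constraint idea concrete, here is a minimal sketch of how such a network could be set up. This is not the authors' released code; the use of PyTorch, the softplus reparametrization of the weights, the split between monotonic and unconstrained inputs, and the penalty term are all illustrative assumptions about one common way to realize structural and loss-side monotonicity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMLP(nn.Module):
    """Energy-use regressor constrained to be non-decreasing in the monotonic inputs."""
    def __init__(self, n_mono: int, n_free: int, hidden: int = 32):
        super().__init__()
        # Raw parameters for the monotonic branch; mapped to positive values in forward().
        self.w_mono = nn.Parameter(0.1 * torch.randn(hidden, n_mono))
        self.free = nn.Linear(n_free, hidden)  # unconstrained features (no monotonicity claim)
        self.w_out = nn.Parameter(0.1 * torch.randn(1, hidden))
        self.b_out = nn.Parameter(torch.zeros(1))

    def forward(self, x_mono: torch.Tensor, x_free: torch.Tensor) -> torch.Tensor:
        # Positive weights plus a non-decreasing activation keep the output monotone in x_mono.
        h = F.relu(F.linear(x_mono, F.softplus(self.w_mono)) + self.free(x_free))
        return F.linear(h, F.softplus(self.w_out), self.b_out)

def monotonicity_penalty(model, x_mono, x_free, eps: float = 1e-2) -> torch.Tensor:
    """Loss-side constraint: penalize any drop in output when a monotonic input increases."""
    return F.relu(model(x_mono, x_free) - model(x_mono + eps, x_free)).mean()
```

A training loop would then minimize something like
`F.mse_loss(model(x_mono, x_free), y) + lam * monotonicity_penalty(model, x_mono, x_free)`,
so the structural constraint and the loss term reinforce each other, in the spirit of the
structure-and-loss design described in the abstract.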
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Neural Incremental Data Assimilation [8.817223931520381]
We introduce a deep learning approach where the physical system is modeled as a sequence of coarse-to-fine Gaussian prior distributions parametrized by a neural network.
This allows us to define an assimilation operator, which is trained in an end-to-end fashion to minimize the reconstruction error.
We illustrate our approach on chaotic dynamical physical systems with sparse observations, and compare it to traditional variational data assimilation methods.
arXiv Detail & Related papers (2024-06-21T11:42:55Z)
- The Simpler The Better: An Entropy-Based Importance Metric To Reduce Neural Networks' Depth [5.869633234882029]
We propose an efficiency strategy that leverages prior knowledge transferred by large models.
We propose a simple but effective method relying on an Entropy-bASed Importance mEtRic (EASIER) to reduce the depth of over-parametrized deep neural networks.
arXiv Detail & Related papers (2024-04-27T08:28:25Z)
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Energy Consumption of Neural Networks on NVIDIA Edge Boards: an Empirical Model [6.809944967863927]
Recently, there has been a trend of shifting the execution of deep learning inference tasks toward the edge of the network, closer to the user, to reduce latency and preserve data privacy.
In this work, we profile the energy consumption of inference tasks on some modern edge nodes.
We then distill a simple, practical model that estimates the energy consumption of a given inference task on the considered boards.
arXiv Detail & Related papers (2022-10-04T14:12:59Z)
- On-Device Domain Generalization [93.79736882489982]
Domain generalization is critical to on-device machine learning applications.
We find that knowledge distillation is a strong candidate for solving the problem.
We propose a simple idea called out-of-distribution knowledge distillation (OKD), which aims to teach the student how the teacher handles (synthetic) out-of-distribution data.
arXiv Detail & Related papers (2022-09-15T17:59:31Z)
- On Energy-Based Models with Overparametrized Shallow Neural Networks [44.74000986284978]
Energy-based models (EBMs) are a powerful framework for generative modeling.
In this work we focus on shallow neural networks.
We show that models trained in the so-called "active" regime provide a statistical advantage over their associated "lazy" or kernel regime.
arXiv Detail & Related papers (2021-04-15T15:34:58Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z)
- Compact Neural Representation Using Attentive Network Pruning [1.0152838128195465]
We describe a Top-Down attention mechanism that is added to a Bottom-Up feedforward network to select important connections and subsequently prune redundant ones at all parametric layers.
Our method not only introduces a novel hierarchical selection mechanism as the basis of pruning but also remains competitive with previous baseline methods in the experimental evaluation.
arXiv Detail & Related papers (2020-05-10T03:20:01Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality or accuracy of this information and is not responsible for any consequences of its use.