Explainable Graph Theory-Based Identification of Meter-Transformer Mapping
- URL: http://arxiv.org/abs/2205.09874v1
- Date: Thu, 19 May 2022 21:47:07 GMT
- Title: Explainable Graph Theory-Based Identification of Meter-Transformer Mapping
- Authors: Bilal Saleem, Yang Weng
- Abstract summary: Distributed energy resources are better for the environment but may cause transformer overload in distribution grids.
The challenge lies in recovering meter-transformer (M.T.) mapping for two common scenarios.
- Score: 6.18054021053899
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Distributed energy resources are better for the environment but may cause
transformer overload in distribution grids, calling for recovering
meter-transformer mapping to provide situational awareness, i.e., the
transformer loading. The challenge lies in recovering the meter-transformer (M.T.)
mapping in two common scenarios, i.e., a large distance between a meter and its
parent transformer, or high similarity of a meter's consumption pattern to that of a
non-parent transformer's meters. Past methods either assume a variety of data, as in
the transmission grid, or ignore these two common scenarios.
Therefore, we propose to handle these scenarios via spectral embedding, using the
property that meter consumptions differ across transformers and that the noise in
the data is limited, so that all of the k smallest eigenvalues of the voltage-based
Laplacian matrix are smaller than the (k+1)-th smallest eigenvalue of the ideal
Laplacian matrix. We also provide a guarantee based on this understanding.
Furthermore, we partially relax this assumption by
utilizing location information to aid voltage information for areas
geographically far away but with similar voltages. Numerical simulations on the
IEEE test systems and real feeders from our partner utility show that the
proposed method correctly identifies M.T. mapping.
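To make the spectral-embedding idea concrete, here is a minimal sketch assuming only a matrix of smart-meter voltage time series: meters under the same transformer tend to have similar voltage profiles, so a similarity graph is built, its Laplacian is eigendecomposed, and the meters are grouped in the space spanned by the k smallest eigenvectors. The correlation-based similarity, the clipping, and the k-means step are illustrative choices, not the authors' exact algorithm, and the location-information refinement mentioned in the abstract is omitted.

```python
# Illustrative sketch (not the paper's exact algorithm): cluster smart meters
# into transformer groups by spectral embedding of a voltage-similarity graph.
# Assumes `voltages` is a (num_meters, num_timesteps) array of meter voltage
# magnitudes; meters under the same transformer tend to have similar profiles.
import numpy as np
from scipy.cluster.vq import kmeans2

def meter_transformer_clusters(voltages: np.ndarray, k: int) -> np.ndarray:
    """Return a cluster label (candidate transformer id) per meter."""
    # Similarity graph: correlation between meter voltage profiles, clipped at 0.
    corr = np.corrcoef(voltages)
    weights = np.clip(corr, 0.0, None)
    np.fill_diagonal(weights, 0.0)

    # Unnormalized graph Laplacian L = D - W.
    degree = np.diag(weights.sum(axis=1))
    laplacian = degree - weights

    # Spectral embedding: eigenvectors of the k smallest eigenvalues.
    # The abstract's guarantee rests on these k eigenvalues staying below the
    # (k+1)-th smallest eigenvalue of the ideal (noise-free) Laplacian.
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    embedding = eigvecs[:, :k]

    # Group meters in the embedded space; each group is one transformer's meters.
    _, labels = kmeans2(embedding, k, minit="++", seed=0)
    return labels
```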
Related papers
- Measure-to-measure interpolation using Transformers [6.13239149235581]
Transformers are deep neural network architectures that underpin the recent successes of large language models.
A Transformer acts as a measure-to-measure map implemented as specific interacting particle system on the unit sphere.
We provide an explicit choice of parameters that allows a single Transformer to match $N$ arbitrary input measures to $N$ arbitrary target measures.
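As a rough illustration of the interacting-particle view (a sketch under simplified assumptions, not the paper's construction): tokens are points on the unit sphere, and each self-attention step pulls every point toward an attention-weighted average of the others before re-projecting onto the sphere. The inverse temperature `beta` and step size below are arbitrary.

```python
# Illustrative sketch of the "interacting particles on the unit sphere" view of
# self-attention: each token is a point on the sphere, and one attention step
# moves every particle toward an attention-weighted average of the others.
# The update rule and parameters are a generic simplification, not the paper's.
import numpy as np

def attention_step(particles: np.ndarray, beta: float = 1.0, step: float = 0.1) -> np.ndarray:
    """One interacting-particle (self-attention) update, re-projected to the sphere."""
    scores = beta * particles @ particles.T           # pairwise inner products
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # softmax attention weights
    moved = particles + step * (weights @ particles)  # drift toward weighted mean
    return moved / np.linalg.norm(moved, axis=1, keepdims=True)

# Example: 8 particles in R^3 drawn at random, then normalized to the sphere.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)
for _ in range(50):
    x = attention_step(x)   # particles gradually cluster on the sphere
```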
arXiv Detail & Related papers (2024-11-07T09:18:39Z)
- MART: MultiscAle Relational Transformer Networks for Multi-agent Trajectory Prediction [5.8919870666241945]
We present a Multiscale Relational Transformer (MART) network for multi-agent trajectory prediction.
MART is a hypergraph transformer architecture that considers individual and group behaviors within the transformer machinery.
In addition, we propose an Adaptive Group Estimator (AGE) designed to infer complex group relations in real-world environments.
arXiv Detail & Related papers (2024-07-31T14:31:49Z)
- CT-MVSNet: Efficient Multi-View Stereo with Cross-scale Transformer [8.962657021133925]
The cross-scale transformer (CT) processes feature representations at different stages without additional computation.
We introduce an adaptive matching-aware transformer (AMT) that employs different interactive attention combinations at multiple scales.
We also present a dual-feature guided aggregation (DFGA) that embeds the coarse global semantic information into the finer cost volume construction.
arXiv Detail & Related papers (2023-12-14T01:33:18Z)
- Mitigating Bias in Visual Transformers via Targeted Alignment [8.674650784377196]
We study the fairness of transformers applied to computer vision and benchmark several bias mitigation approaches from prior work.
We propose TADeT, a targeted alignment strategy for debiasing transformers that aims to discover and remove bias primarily from query matrix features.
arXiv Detail & Related papers (2023-02-08T22:11:14Z)
- The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers [59.87030906486969]
This paper studies the curious phenomenon for machine learning models with Transformer architectures that their activation maps are sparse.
We show that sparsity is a prevalent phenomenon that occurs for both natural language processing and vision tasks.
We discuss how sparsity immediately implies a way to significantly reduce the FLOP count and improve efficiency for Transformers.
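A toy sketch of how activation sparsity can be measured in a Transformer-style MLP block. The block here is randomly initialized, purely for illustration; the paper's observation is that trained Transformers are far sparser than the roughly 50% a random ReLU layer gives.

```python
# Illustrative sketch: measure the fraction of zero entries in the post-ReLU
# activations of a Transformer-style MLP block. Sizes are placeholders.
import torch
import torch.nn as nn

hidden, ffn = 256, 1024
mlp = nn.Sequential(nn.Linear(hidden, ffn), nn.ReLU(), nn.Linear(ffn, hidden))

tokens = torch.randn(32, hidden)              # a batch of token representations
with torch.no_grad():
    post_relu = mlp[1](mlp[0](tokens))        # activations after the ReLU
sparsity = (post_relu == 0).float().mean()    # fraction of inactive neurons
print(f"activation sparsity: {sparsity.item():.2%}")
```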
arXiv Detail & Related papers (2022-10-12T15:25:19Z)
- Cost Aggregation with 4D Convolutional Swin Transformer for Few-Shot Segmentation [58.4650849317274]
Volumetric Aggregation with Transformers (VAT) is a cost aggregation network for few-shot segmentation.
VAT attains state-of-the-art performance for semantic correspondence as well, where cost aggregation also plays a central role.
arXiv Detail & Related papers (2022-07-22T04:10:30Z)
- SepTr: Separable Transformer for Audio Spectrogram Processing [74.41172054754928]
We propose a new vision transformer architecture called Separable Transformer (SepTr).
SepTr employs two transformer blocks in a sequential manner, the first attending to tokens within the same frequency bin, and the second attending to tokens within the same time interval.
We conduct experiments on three benchmark data sets, showing that our architecture outperforms conventional vision transformers and other state-of-the-art methods.
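A minimal sketch of the separable-attention idea described above, using two standard encoder layers over a grid of spectrogram tokens; shapes and layer sizes are placeholders, and this is not the authors' implementation.

```python
# Illustrative sketch of separable (axial) attention over a spectrogram, in the
# spirit of SepTr: first attend within each frequency bin (across time), then
# within each time interval (across frequency). Sizes are arbitrary.
import torch
import torch.nn as nn

dim, heads = 64, 4
time_attn = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
freq_attn = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

# Spectrogram tokens: (batch, time, freq, dim)
x = torch.randn(2, 100, 40, dim)
b, t, f, d = x.shape

# 1) Attend across time within the same frequency bin.
x = x.transpose(1, 2).reshape(b * f, t, d)
x = time_attn(x).reshape(b, f, t, d).transpose(1, 2)   # back to (b, t, f, d)

# 2) Attend across frequency within the same time interval.
x = freq_attn(x.reshape(b * t, f, d)).reshape(b, t, f, d)
```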
arXiv Detail & Related papers (2022-03-17T19:48:43Z)
- XAI for Transformers: Better Explanations through Conservative Propagation [60.67748036747221]
We show that the gradient in a Transformer reflects the function only locally, and thus fails to reliably identify the contribution of input features to the prediction.
Our proposal can be seen as a proper extension of the well-established LRP method to Transformers.
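To illustrate the issue the paper targets (a toy check, not the paper's propagation rules): for a tiny, randomly initialized attention model, gradient-times-input attributions generally do not sum to the model output, which is the conservation property LRP-style methods are designed to preserve.

```python
# Illustrative check of conservation: compare the sum of gradient x input
# attributions against the output of a small self-attention model. This only
# motivates the problem; it does not implement the paper's LRP rules.
import torch
import torch.nn as nn

torch.manual_seed(0)
attn = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)
readout = nn.Linear(8, 1)

tokens = torch.randn(1, 5, 8, requires_grad=True)     # one sequence of 5 tokens
out, _ = attn(tokens, tokens, tokens)
score = readout(out.mean(dim=1)).sum()                # scalar prediction
score.backward()

attribution = (tokens * tokens.grad).sum()            # gradient x input relevance
print(f"sum of attributions: {attribution.item():.3f}  vs  output: {score.item():.3f}")
# The two rarely match: naive gradients are not conservative in Transformers.
```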
arXiv Detail & Related papers (2022-02-15T10:47:11Z)
- TransVG: End-to-End Visual Grounding with Transformers [102.11922622103613]
We present a transformer-based framework for visual grounding, namely TransVG, to address the task of grounding a language query to an image.
We show that the complex fusion modules can be replaced by a simple stack of transformer encoder layers with higher performance.
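A minimal sketch of that idea (dimensions, token counts, and the regression head are placeholders, not TransVG's actual configuration): visual tokens, language tokens, and a regression token are simply concatenated and passed through a stack of standard transformer encoder layers, with the box predicted from the regression token's output.

```python
# Illustrative sketch: cross-modal fusion as a plain stack of transformer
# encoder layers over concatenated visual and language tokens.
import torch
import torch.nn as nn

dim = 256
layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
fusion = nn.TransformerEncoder(layer, num_layers=6)
box_head = nn.Linear(dim, 4)                      # predicts a bounding box

visual_tokens = torch.randn(1, 400, dim)          # e.g. flattened CNN feature map
text_tokens = torch.randn(1, 20, dim)             # embedded language query
reg_token = torch.zeros(1, 1, dim)                # a learnable token in practice

fused = fusion(torch.cat([reg_token, visual_tokens, text_tokens], dim=1))
box = box_head(fused[:, 0])                       # grounding box from the reg token
```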
arXiv Detail & Related papers (2021-04-17T13:35:24Z)
- Spatiotemporal Transformer for Video-based Person Re-identification [102.58619642363958]
We show that, despite the strong learning ability, the vanilla Transformer suffers from an increased risk of over-fitting.
We propose a novel pipeline where the model is pre-trained on a set of synthesized video data and then transferred to the downstream domains.
The derived algorithm achieves significant accuracy gain on three popular video-based person re-identification benchmarks.
arXiv Detail & Related papers (2021-03-30T16:19:27Z)