Test Time Training for AC Power Flow Surrogates via Physics and Operational Constraint Refinement
- URL: http://arxiv.org/abs/2511.22343v1
- Date: Thu, 27 Nov 2025 11:27:54 GMT
- Title: Test Time Training for AC Power Flow Surrogates via Physics and Operational Constraint Refinement
- Authors: Panteleimon Dogoulis, Mohammad Iman Alizadeh, Sylvain Kubler, Maxime Cordy
- Abstract summary: Power Flow calculation based on machine learning (ML) techniques offers significant computational advantages over traditional numerical methods. This paper introduces a physics-informed test-time training framework that enhances the accuracy and feasibility of ML-based PF surrogates.
- Score: 11.02886935871606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Power Flow (PF) calculation based on machine learning (ML) techniques offers significant computational advantages over traditional numerical methods but often struggles to maintain full physical consistency. This paper introduces a physics-informed test-time training (PI-TTT) framework that enhances the accuracy and feasibility of ML-based PF surrogates by enforcing AC power flow equalities and operational constraints directly at inference time. The proposed method performs a lightweight self-supervised refinement of the surrogate outputs through a few gradient-based updates, enabling local adaptation to unseen operating conditions without requiring labeled data. Extensive experiments on the IEEE 14-, 118-, and 300-bus systems and the PEGASE 1354-bus network show that PI-TTT reduces power flow residuals and operational constraint violations by one to two orders of magnitude compared with purely ML-based models, while preserving their computational advantage. The results demonstrate that PI-TTT provides fast, accurate, and physically reliable predictions, representing a promising direction for scalable and physics-consistent learning in power system analysis.
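The abstract's self-supervised refinement step can be illustrated with a minimal sketch. Everything below is hypothetical and not from the paper: a toy 2-bus system (slack bus plus one PQ bus), illustrative line admittances, a flat-start guess standing in for the surrogate prediction, and a few gradient steps on the squared AC power-flow residual computed by finite differences.

```python
# Hypothetical PI-TTT-style refinement on a toy 2-bus system.
# Bus 0 is the slack bus (V0, th0 fixed); bus 1 is a PQ bus whose
# voltage magnitude V1 and angle th1 are refined at "test time".
import numpy as np

G, B = 1.0, -5.0              # illustrative line admittance y = G + jB (p.u.)
P_spec, Q_spec = -0.5, -0.2   # specified injections at the PQ bus (load)
V0, th0 = 1.0, 0.0            # slack bus voltage, held fixed

def loss(x):
    """Squared AC power-flow mismatch at bus 1 for x = [V1, th1]."""
    V1, th1 = x
    dth = th1 - th0
    # Polar-form bus injection equations for a single line
    P1 = G * V1**2 + V1 * V0 * (-G * np.cos(dth) - B * np.sin(dth))
    Q1 = -B * V1**2 + V1 * V0 * (-G * np.sin(dth) + B * np.cos(dth))
    return (P_spec - P1)**2 + (Q_spec - Q1)**2

def refine(x, lr=0.02, steps=100, eps=1e-6):
    """A few self-supervised gradient updates; no labels are needed
    because the physics residual itself is the training signal."""
    x = np.asarray(x, dtype=float)
    for _ in range(steps):
        grad = np.array([(loss(x + eps * e) - loss(x - eps * e)) / (2 * eps)
                         for e in np.eye(2)])
        x = x - lr * grad
    return x

x0 = np.array([1.0, 0.0])     # flat start, standing in for a surrogate output
x_refined = refine(x0)
```

In the paper's setting the same idea is applied to a full network at once and the surrogate's weights (rather than its raw outputs) can be adapted, but the principle is the same: the power-flow residual is differentiable, so it can be driven down at inference time without labeled data.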
Related papers
- ETS: Energy-Guided Test-Time Scaling for Training-Free RL Alignment [20.498600810211293]
We propose a training-free inference method to sample directly from the optimal Reinforcement Learning policy. Our algorithm, Energy-Guided Test-Time Scaling (ETS), estimates the key energy term via online Monte Carlo, with a provable convergence rate. ETS substantially reduces inference latency while provably preserving sampling quality.
arXiv Detail & Related papers (2026-01-29T10:06:52Z) - A Lightweight Transfer Learning-Based State-of-Health Monitoring with Application to Lithium-ion Batteries in Autonomous Air Vehicles [53.158733310637295]
Transfer learning is a promising technique for leveraging knowledge from data-rich source working conditions. Traditional TL-based state-of-health (SOH) monitoring is infeasible when applied in portable mobile devices. This paper proposes a lightweight TL-based SOH monitoring approach with constructive transfer incremental learning (CITL).
arXiv Detail & Related papers (2025-12-09T11:54:09Z) - Feature-Based Semantics-Aware Scheduling for Energy-Harvesting Federated Learning [18.65400996570946]
Federated Learning (FL) on resource-constrained edge devices faces a critical challenge: the computational energy required for training Deep Neural Networks (DNNs) often dominates communication costs. We propose a lightweight client scheduling framework using the Version Age of Information (VAoI), a semantics-aware metric that quantifies update timeliness and significance. Our framework establishes semantics-aware scheduling as a practical and vital solution for EHFL in realistic scenarios where training costs dominate transmission costs.
arXiv Detail & Related papers (2025-12-01T18:40:26Z) - EconProver: Towards More Economical Test-Time Scaling for Automated Theorem Proving [64.15371139980802]
Large Language Models (LLMs) have recently advanced the field of Automated Theorem Proving (ATP). We show that different test-time scaling strategies for ATP models introduce significant computational overhead for inference. We propose two complementary methods that can be integrated into a unified EconRL pipeline for amplified benefits.
arXiv Detail & Related papers (2025-09-16T03:00:13Z) - RoSTE: An Efficient Quantization-Aware Supervised Fine-Tuning Approach for Large Language Models [53.571195477043496]
We propose an algorithm named Rotated Straight-Through-Estimator (RoSTE). RoSTE combines quantization-aware supervised fine-tuning (QA-SFT) with an adaptive rotation strategy to reduce activation outliers. Our findings reveal that the prediction error is directly proportional to the quantization error of the converged weights, which can be effectively managed through an optimized rotation configuration.
arXiv Detail & Related papers (2025-02-13T06:44:33Z) - Refining Salience-Aware Sparse Fine-Tuning Strategies for Language Models [14.68920095399595]
Sparsity-based PEFT (SPEFT) introduces trainable sparse adaptations to the weight matrices in the model. We conduct the first systematic evaluation of salience metrics for SPEFT, inspired by zero-cost NAS proxies. We compare static and dynamic masking strategies, finding that static masking, which predetermines non-zero entries before training, delivers efficiency without sacrificing performance.
arXiv Detail & Related papers (2024-12-18T04:14:35Z) - Federated Learning framework for LoRaWAN-enabled IIoT communication: A case study [41.831392507864415]
Anomaly detection plays a crucial role in preventive maintenance and spotting irregularities in industrial components.
Traditional Machine Learning faces challenges in deploying anomaly detection models in resource-constrained environments like LoRaWAN.
Federated Learning (FL) solves this problem by enabling distributed model training, addressing privacy concerns, and minimizing data transmission.
arXiv Detail & Related papers (2024-10-15T13:48:04Z) - Active Test-Time Adaptation: Theoretical Analyses and An Algorithm [51.84691955495693]
Test-time adaptation (TTA) addresses distribution shifts for streaming test data in unsupervised settings.
We propose the novel problem setting of active test-time adaptation (ATTA) that integrates active learning within the fully TTA setting.
arXiv Detail & Related papers (2024-04-07T22:31:34Z) - LTAU-FF: Loss Trajectory Analysis for Uncertainty in Atomistic Force Fields [5.396675151318325]
Model ensembles are effective tools for estimating prediction uncertainty in deep learning atomistic force fields.
However, their widespread adoption is hindered by high computational costs and overconfident error estimates.
We address these challenges by leveraging distributions of per-sample errors obtained during training and employing a distance-based similarity search in the model latent space.
Our method, which we call LTAU, efficiently estimates the full probability distribution function (PDF) of errors for any test point using the logged training errors.
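The distance-based estimation idea described above can be sketched as follows. This is a hypothetical toy, not the paper's implementation: the latent descriptors, logged error trajectories, and neighbour count are all synthetic and illustrative.

```python
# Hypothetical LTAU-style sketch: log per-sample errors during training,
# then estimate a test point's error distribution by pooling the logged
# errors of its nearest neighbours in the model's latent space.
import numpy as np

rng = np.random.default_rng(0)
latent_train = rng.normal(size=(200, 8))           # latent descriptors of 200 training samples
error_log = np.abs(rng.normal(size=(200, 20)))     # logged per-epoch errors (20 epochs each)

def error_pdf_samples(z, k=10):
    """Empirical error distribution for test descriptor z:
    pool the logged error trajectories of the k nearest training points."""
    dists = np.linalg.norm(latent_train - z, axis=1)
    neighbours = np.argsort(dists)[:k]
    return error_log[neighbours].ravel()           # k * 20 pooled error samples

samples = error_pdf_samples(rng.normal(size=8))    # e.g. take its mean or quantiles as UQ
```

The appeal relative to ensembles is that no extra models are trained: the error samples are a byproduct of training, and the test-time cost is a single nearest-neighbour search.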
arXiv Detail & Related papers (2024-02-01T18:50:42Z) - Weak Supervision Performance Evaluation via Partial Identification [46.73061437177238]
Programmatic Weak Supervision (PWS) enables supervised model training without direct access to ground truth labels.
We present a novel method to address this challenge by framing model evaluation as a partial identification problem.
Our approach derives reliable bounds on key metrics without requiring labeled data, overcoming core limitations in current weak supervision evaluation techniques.
arXiv Detail & Related papers (2023-12-07T07:15:11Z) - Approximated Prompt Tuning for Vision-Language Pre-trained Models [54.326232586461614]
In vision-language pre-trained models, prompt tuning often requires a large number of learnable tokens to bridge the gap between the pre-training and downstream tasks.
We propose a novel Approximated Prompt Tuning (APT) approach towards efficient VL transfer learning.
arXiv Detail & Related papers (2023-06-27T05:43:47Z) - Physics Informed Neural Networks for Phase Locked Loop Transient Stability Assessment [0.0]
Using power-electronic controllers, such as Phase Locked Loops (PLLs), to keep grid-tied renewable resources in synchronism with the grid can cause fast transient behavior during grid faults, leading to instability.
This paper proposes a Neural Network algorithm that accurately predicts the transient dynamics of a controller under fault with less labeled training data.
The algorithm's performance is compared against a reduced-order model (ROM) and an electromagnetic transient (EMT) simulation in PSCAD for the CIGRE benchmark model C4.49, demonstrating its ability to accurately approximate trajectories and regions of attraction (ROAs) of a controller under varying grid impedance.
arXiv Detail & Related papers (2023-03-21T18:09:20Z) - Towards Long-Term Time-Series Forecasting: Feature, Pattern, and Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted to deliver high prediction capacity, although the self-attention mechanism is computationally expensive.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing methods for LTTF in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z) - FINETUNA: Fine-tuning Accelerated Molecular Simulations [5.543169726358164]
We present an online active learning framework for accelerating the simulation of atomic systems efficiently and accurately.
Experiments on 30 benchmark adsorbate-catalyst systems show that our method of transfer learning, which incorporates prior information from pre-trained models, accelerates simulations by reducing the number of DFT calculations by 91%.
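The online active-learning loop summarized above can be sketched in miniature. This is a hypothetical stand-in, not the FINETUNA code: a 1-D function plays the role of the expensive DFT calculation, a nearest-neighbour predictor plays the role of the ML force field, and distance to the training set is a crude uncertainty proxy; all names and thresholds are illustrative.

```python
# Hypothetical active-learning sketch: a cheap surrogate answers most
# queries; the expensive reference (standing in for DFT) is called only
# when the surrogate is uncertain, and its result augments the data.
import math

def expensive_reference(x):
    """Stands in for a DFT single-point calculation."""
    return math.sin(x)

data = [(0.0, expensive_reference(0.0))]       # seed training set
calls = 1                                      # count of expensive calls

def surrogate(x):
    """1-NN predictor; distance to the nearest sample is the uncertainty proxy."""
    xt, yt = min(data, key=lambda p: abs(p[0] - x))
    return yt, abs(xt - x)

THRESH = 0.32                                  # illustrative uncertainty threshold
queries = [i * 0.05 for i in range(100)]       # points along a toy trajectory
for x in queries:
    y, unc = surrogate(x)
    if unc > THRESH:                           # too uncertain: fall back to reference
        y = expensive_reference(x)
        data.append((x, y))                    # "fine-tune" by augmenting the data
        calls += 1
```

With the values above, only a small fraction of the 100 queries trigger the expensive call, which is the mechanism behind the reported reduction in DFT calculations.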
arXiv Detail & Related papers (2022-05-02T21:36:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.