Explainable Artificial Intelligence for Exhaust Gas Temperature of
Turbofan Engines
- URL: http://arxiv.org/abs/2203.13108v2
- Date: Fri, 25 Mar 2022 08:38:36 GMT
- Title: Explainable Artificial Intelligence for Exhaust Gas Temperature of
Turbofan Engines
- Authors: Marios Kefalas, Juan de Santiago Rojo Jr., Asteris Apostolidis, Dirk
van den Herik, Bas van Stein, Thomas Bäck
- Abstract summary: symbolic regression is an interpretable alternative to the "black box" models.
In this work, we apply SR on real-life exhaust gas temperature (EGT) data, collected at high frequencies through the entire flight.
Results exhibit promising model accuracy as well as explainability, returning an absolute difference of 3°C compared to the ground truth.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven modeling is an imperative tool in various industrial
applications, including many applications in the sectors of aeronautics and
commercial aviation. These models are in charge of providing key insights, such
as which parameters are important on a specific measured outcome or which
parameter values we should expect to observe given a set of input parameters.
At the same time, however, these models rely heavily on assumptions (e.g.,
stationarity) or are "black box" (e.g., deep neural networks), meaning that
they lack interpretability of their internal working and can be viewed only in
terms of their inputs and outputs. An interpretable alternative to the "black
box" models, and one with considerably fewer assumptions, is symbolic regression (SR).
SR searches for the optimal model structure while simultaneously optimizing the
model's parameters, without relying on an a priori model structure. In this
work, we apply SR on real-life exhaust gas temperature (EGT) data, collected at
high frequencies through the entire flight, in order to uncover meaningful
algebraic relationships between the EGT and other measurable engine parameters.
The experimental results exhibit promising model accuracy as well as
explainability, returning an absolute difference of 3°C compared to the
ground truth and demonstrating consistency from an engineering perspective.
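The abstract's core idea, searching over candidate model *structures* while simultaneously fitting each structure's parameters, can be sketched in miniature. This is a hedged illustration only: the data are synthetic stand-ins for engine parameters (not the paper's EGT recordings), the candidate structures are hand-enumerated rather than evolved, and the parameter fit is a crude random hill climb.

```python
# Minimal symbolic-regression-style sketch: enumerate candidate algebraic
# structures, fit each structure's free parameters, keep the best.
# Synthetic data; not the paper's method or dataset.
import random

random.seed(0)

# Synthetic "engine" data: an EGT-like target built from two input parameters.
X = [(random.uniform(0.5, 2.0), random.uniform(0.5, 2.0)) for _ in range(200)]
y = [3.0 * a + 2.0 * a * b for a, b in X]

# Candidate structures: each is a family of expressions with free parameters.
structures = [
    ("c0*x0 + c1*x1",    lambda c, a, b: c[0] * a + c[1] * b),
    ("c0*x0 + c1*x0*x1", lambda c, a, b: c[0] * a + c[1] * a * b),
    ("c0*x0**2 + c1*x1", lambda c, a, b: c[0] * a * a + c[1] * b),
]

def mse(f, c):
    return sum((f(c, a, b) - t) ** 2 for (a, b), t in zip(X, y)) / len(X)

best = None
for name, f in structures:
    # Crude parameter fit: hill climbing with Gaussian jitter.
    c = [random.uniform(-5, 5), random.uniform(-5, 5)]
    err = mse(f, c)
    for _ in range(5000):
        cand = [ci + random.gauss(0, 0.1) for ci in c]
        e = mse(f, cand)
        if e < err:
            c, err = cand, e
    if best is None or err < best[2]:
        best = (name, c, err)

print("best structure:", best[0],
      "params:", [round(v, 2) for v in best[1]],
      "mse:", round(best[2], 4))
```

In practice SR systems search the structure space with genetic programming rather than enumerating it, but the two-level search (structure outside, parameters inside) is the same.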
Related papers
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections [59.839926875976225]
We propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections.
In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters.
arXiv Detail & Related papers (2024-05-30T17:26:02Z)
- Deep learning modelling of manufacturing and build variations on multi-stage axial compressors aerodynamics [0.0]
This paper demonstrates the development and application of a deep learning framework for predictions of the flow field and aerodynamic performance of multi-stage axial compressors.
A physics-based dimensionality reduction unlocks the potential for flow-field predictions.
The proposed architecture is proven to achieve an accuracy comparable to that of the CFD benchmark, in real-time, for an industrially relevant application.
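The surrogate-modelling pattern summarized above, compressing high-dimensional flow fields into a few coefficients and predicting those coefficients from design parameters, can be sketched with a proper-orthogonal-decomposition (POD/PCA) basis. This is a toy with synthetic data and a plain linear regressor; the paper's reduction is physics-based and its network far more capable.

```python
# POD-style surrogate sketch: SVD basis for flow-field snapshots, then a
# least-squares map from design parameters to basis coefficients.
# All dimensions and data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_snap, n_grid, n_param = 60, 500, 3
params = rng.uniform(-1, 1, (n_snap, n_param))       # e.g. build variations
modes_true = rng.normal(size=(n_param, n_grid))      # hidden flow structure
fields = params @ modes_true + 0.01 * rng.normal(size=(n_snap, n_grid))

# POD basis from a snapshot SVD; keep the leading r modes.
r = 3
U, S, Vt = np.linalg.svd(fields, full_matrices=False)
basis = Vt[:r]                                       # (r, n_grid)
coeffs = fields @ basis.T                            # (n_snap, r)

# Least-squares map from design parameters to POD coefficients.
W, *_ = np.linalg.lstsq(params, coeffs, rcond=None)

# "Real-time" surrogate: parameters -> coefficients -> reconstructed field.
p_new = rng.uniform(-1, 1, (1, n_param))
field_pred = (p_new @ W) @ basis
field_true = p_new @ modes_true
rel_err = np.linalg.norm(field_pred - field_true) / np.linalg.norm(field_true)
print("relative field error:", round(float(rel_err), 4))
```

The cost of a prediction is two small matrix products, which is what makes real-time evaluation plausible once the expensive CFD snapshots have been paid for offline.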
arXiv Detail & Related papers (2023-10-06T14:11:21Z)
- Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT [6.029590006321152]
We present Sensi-BERT, a sensitivity-driven, efficient fine-tuning of BERT models for downstream tasks.
Our experiments show the efficacy of Sensi-BERT across different downstream tasks including MNLI, QQP, QNLI, SST-2 and SQuAD.
arXiv Detail & Related papers (2023-07-14T17:24:15Z)
- Knowledge-embedded meta-learning model for lift coefficient prediction of airfoils [25.546237636065182]
A knowledge-embedded meta-learning model is developed to obtain the lift coefficients of an arbitrary supercritical airfoil under various angles of attack.
Compared to the ordinary neural network, our proposed model can exhibit better generalization capability with competitive prediction accuracy.
Results show that the proposed model tends to assess the influence of airfoil geometry on the physical characteristics.
arXiv Detail & Related papers (2023-03-06T02:47:31Z)
- Analyzing Transformers in Embedding Space [59.434807802802105]
We present a theoretical analysis where all parameters of a trained Transformer are interpreted by projecting them into the embedding space.
We show that parameters of both pretrained and fine-tuned models can be interpreted in embedding space.
Our findings open the door to interpretation methods that, at least in part, abstract away from model specifics and operate in the embedding space only.
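The interpretation move described above can be illustrated in a few lines: a parameter vector that lives in the model's hidden space is read as scores over tokens by projecting it through the embedding matrix. Everything here is made up for illustration (a four-word vocabulary, a random embedding matrix, a hand-constructed "parameter" vector); it is not the paper's procedure, only the projection idea.

```python
# Toy "parameters in embedding space" sketch: project a hidden-space vector
# through the embedding matrix E to read it as token scores.
import numpy as np

rng = np.random.default_rng(1)
vocab = ["engine", "temperature", "cat", "piano"]
d = 16
E = rng.normal(size=(len(vocab), d))      # token embedding matrix (V, d)

# Pretend this is a row of an FFN value matrix; it is constructed to point
# toward the "temperature" embedding so the projection is easy to check.
value_vec = E[1] + 0.1 * rng.normal(size=d)

scores = E @ value_vec                    # read the vector in token space
top_token = vocab[int(np.argmax(scores))]
print("parameter vector most aligned with token:", top_token)
```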
arXiv Detail & Related papers (2022-09-06T14:36:57Z)
- Approximate Bayesian Computation for Physical Inverse Modeling [0.32771631221674324]
We propose a new method for automating the model parameter extraction process resulting in an accurate model fitting.
It is shown that the extracted parameters can be accurately predicted from the mobility curves using gradient boosted trees.
This work also provides a comparative analysis of the proposed framework with fine-tuned neural networks wherein the proposed framework is shown to perform better.
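The rejection-sampling flavor of approximate Bayesian computation behind this kind of parameter extraction can be sketched briefly: draw parameters from a prior, simulate the measurement, and keep the parameters whose simulated curves land close to the observed one. The exponential "device curve" below is a synthetic stand-in, not the paper's mobility model.

```python
# ABC rejection sketch for inverse modeling: accept prior draws whose
# simulated curves are within eps of the observation. Synthetic forward model.
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 30)

def simulate(theta):
    return np.exp(-theta * grid)

theta_true = 1.7
observed = simulate(theta_true) + 0.01 * rng.normal(size=grid.size)

# Draw from a uniform prior; accept parameters with close-matching curves.
prior = rng.uniform(0.5, 3.0, 20000)
dists = np.array([np.linalg.norm(simulate(t) - observed) for t in prior])
accepted = prior[dists < 0.1]

theta_hat = float(accepted.mean())        # approximate posterior mean
print("accepted:", accepted.size, "estimate:", round(theta_hat, 3))
```

Shrinking eps tightens the approximate posterior at the cost of a lower acceptance rate, which is the basic trade-off of ABC.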
arXiv Detail & Related papers (2021-11-26T02:23:05Z)
- Interpreting Machine Learning Models for Room Temperature Prediction in Non-domestic Buildings [0.0]
This work presents an interpretable machine learning model aimed at predicting room temperature in non-domestic buildings.
We demonstrate experimentally that the proposed model can accurately forecast room temperatures eight hours ahead in real-time.
arXiv Detail & Related papers (2021-11-23T11:16:35Z)
- Prediction of liquid fuel properties using machine learning models with Gaussian processes and probabilistic conditional generative learning [56.67751936864119]
The present work aims to construct cheap-to-compute machine learning (ML) models to act as closure equations for predicting the physical properties of alternative fuels.
Those models can be trained using the database from MD simulations and/or experimental measurements in a data-fusion-fidelity approach.
The results show that ML models can accurately predict the fuel properties over a wide range of pressure and temperature conditions.
arXiv Detail & Related papers (2021-10-18T14:43:50Z)
- MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks due to large parameter capacity, but also lead to huge computation cost.
We explore accelerating large-model inference via conditional computation based on the sparse-activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
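The mechanism this summary relies on can be shown in a toy: when post-ReLU activations are sparse, hidden neurons partitioned into "experts" that happen to be entirely inactive contribute exactly zero, so skipping them leaves the FFN output unchanged. The sizes, the contiguous partition, and the oracle routing rule below are illustrative only, not the paper's construction.

```python
# Toy MoEfication sketch: partition FFN neurons into experts and evaluate
# only experts containing active (nonzero post-ReLU) neurons.
import numpy as np

rng = np.random.default_rng(0)
d, h, n_exp = 32, 64, 8                  # model dim, FFN hidden dim, experts
W1 = rng.normal(size=(d, h)) / np.sqrt(d)
b1 = -2.0 * np.ones(h)                   # strong negative bias -> sparse ReLU
W2 = rng.normal(size=(h, d)) / np.sqrt(h)

x = rng.normal(size=d)
hidden = np.maximum(W1.T @ x + b1, 0.0)  # sparse post-ReLU activations
dense_out = W2.T @ hidden                # full dense FFN output

# Split neurons into contiguous experts; evaluate only the active ones.
experts = np.array_split(np.arange(h), n_exp)
moe_out = np.zeros(d)
used = 0
for idx in experts:
    if hidden[idx].any():                # "route" only to active experts
        moe_out += W2[idx].T @ hidden[idx]
        used += 1

print(f"experts evaluated: {used}/{n_exp}, "
      f"max abs diff vs dense: {np.abs(moe_out - dense_out).max():.2e}")
```

A real system cannot peek at the activations before choosing experts; a learned router predicts which experts will be active from the input, trading a little accuracy for the compute saved.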
arXiv Detail & Related papers (2021-10-05T02:14:38Z)
- On the Sparsity of Neural Machine Translation Models [65.49762428553345]
We investigate whether redundant parameters can be reused to achieve better performance.
Experiments and analyses are systematically conducted on different datasets and NMT architectures.
arXiv Detail & Related papers (2020-10-06T11:47:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.