Explaining automated gender classification of human gait
- URL: http://arxiv.org/abs/2211.17015v1
- Date: Sun, 16 Oct 2022 13:57:09 GMT
- Title: Explaining automated gender classification of human gait
- Authors: Fabian Horst, Djordje Slijepcevic, Matthias Zeppelzauer, Anna-Maria
Raberger, Sebastian Lapuschkin, Wojciech Samek, Wolfgang I. Schöllhorn,
Christian Breiteneder and Brian Horsak
- Abstract summary: State-of-the-art machine learning (ML) models are highly effective in classifying gait analysis data; however, they do not provide explanations for their predictions.
This "black-box" characteristic makes it impossible to understand on which input patterns ML models base their predictions.
The present study investigates whether Explainable Artificial Intelligence methods can be useful to enhance the explainability of ML predictions in gait classification.
- Score: 10.968267030101211
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: State-of-the-art machine learning (ML) models are highly effective in
classifying gait analysis data; however, they do not provide explanations for
their predictions. This "black-box" characteristic makes it impossible to
understand on which input patterns ML models base their predictions. The
present study investigates whether Explainable Artificial Intelligence methods,
i.e., Layer-wise Relevance Propagation (LRP), can be useful to enhance the
explainability of ML predictions in gait classification. The research question
was: Which input patterns are most relevant for an automated gender
classification model and do they correspond to characteristics identified in
the literature? We utilized a subset of the GAITREC dataset containing five
bilateral ground reaction force (GRF) recordings per person during barefoot
walking of 62 healthy participants: 34 females and 28 males. Each input signal
(right and left side) was min-max normalized before concatenation and fed into
a multi-layer Convolutional Neural Network (CNN). The classification accuracy
was obtained over a stratified ten-fold cross-validation. To identify
gender-specific patterns, the input relevance scores were derived using LRP.
The mean classification accuracy of the CNN (83.3%) was clearly superior to the
zero-rule baseline of 54.8%.
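As a rough illustration of the pipeline described above, the following sketch (not the authors' code) wires together the stated steps: per-side min-max normalization of the GRF signals, concatenation, a small multi-layer 1D CNN, stratified ten-fold cross-validation against a zero-rule baseline, and gradient-times-input as a simple stand-in for the LRP relevance scores. The signal dimensions, network architecture, and training settings are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the pipeline described in the abstract.
# Signal dimensions, architecture and training settings are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold

def min_max_normalize(x, eps=1e-8):
    """Scale each signal to [0, 1] per trial and GRF component."""
    x_min = x.min(axis=-1, keepdims=True)
    x_max = x.max(axis=-1, keepdims=True)
    return (x - x_min) / (x_max - x_min + eps)

class GaitCNN(nn.Module):
    """Small multi-layer 1D CNN; the architecture used in the paper may differ."""
    def __init__(self, in_channels, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

# Illustrative stand-in data: 62 participants x 5 trials = 310 trials, with
# 3 assumed GRF components per side and 100 time points per signal.
rng = np.random.default_rng(0)
n_trials, n_components, n_timepoints = 310, 3, 100
X_left = rng.normal(size=(n_trials, n_components, n_timepoints))
X_right = rng.normal(size=(n_trials, n_components, n_timepoints))
y = rng.integers(0, 2, size=n_trials)  # 0 = female, 1 = male (one label per trial)

# Min-max normalize each side, then concatenate along the time axis.
X = np.concatenate([min_max_normalize(X_right), min_max_normalize(X_left)], axis=-1)

# Zero-rule baseline: always predict the majority class.
zero_rule = np.mean(y == np.bincount(y).argmax())
print(f"zero-rule baseline accuracy: {zero_rule:.3f}")

# Stratified ten-fold cross-validation of the CNN.
accs = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    model = GaitCNN(in_channels=X.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_tr = torch.tensor(X[train_idx], dtype=torch.float32)
    y_tr = torch.tensor(y[train_idx], dtype=torch.long)
    for _ in range(50):  # short full-batch training loop, for illustration only
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x_tr), y_tr)
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        preds = model(torch.tensor(X[test_idx], dtype=torch.float32)).argmax(dim=1).numpy()
    accs.append(float(np.mean(preds == y[test_idx])))
print(f"mean cross-validation accuracy: {np.mean(accs):.3f}")

# Gradient x input as a simple stand-in for LRP relevance scores (the paper uses
# Layer-wise Relevance Propagation; libraries such as Captum or Zennit provide
# proper LRP rules). Computed here for the last fold's test trials only.
x_te = torch.tensor(X[test_idx], dtype=torch.float32, requires_grad=True)
logits = model(x_te)
logits[torch.arange(len(x_te)), logits.argmax(dim=1)].sum().backward()
relevance = (x_te.grad * x_te).detach().numpy()  # shape: (n_test, components, time)
```

On the real GAITREC subset the paper reports 83.3% mean accuracy against the 54.8% zero-rule baseline; with the random stand-in data above, both figures naturally hover around chance.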
Related papers
- GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations [1.0000511213628438]
We create a gender-controlled text dataset, GECO, in which otherwise identical sentences appear in male and female forms.
This gives rise to ground-truth 'world explanations' for gender classification tasks.
We also provide GECOBench, a rigorous quantitative evaluation framework benchmarking popular XAI methods.
arXiv Detail & Related papers (2024-06-17T13:44:37Z) - Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z) - Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals [91.59906995214209]
We propose a new evaluation method, the Counterfactual Attentiveness Test (CAT).
CAT uses counterfactuals by replacing part of the input with its counterpart from a different example, expecting an attentive model to change its prediction (a minimal sketch of this check appears after this list).
We show that GPT-3 becomes less attentive with an increased number of demonstrations, while its accuracy on the test data improves.
arXiv Detail & Related papers (2023-11-16T06:27:35Z) - Right for the Wrong Reason: Can Interpretable ML Techniques Detect
Spurious Correlations? [2.7558542803110244]
We propose a rigorous evaluation strategy to assess an explanation technique's ability to correctly identify spurious correlations.
We find that the post-hoc technique SHAP and the inherently interpretable Attri-Net provide the best performance.
arXiv Detail & Related papers (2023-07-23T14:43:17Z) - Quantifying Human Bias and Knowledge to guide ML models during Training [0.0]
We introduce an experimental approach to dealing with skewed datasets by including humans in the training process.
We ask humans to rank the importance of features of the dataset, and through rank aggregation, determine the initial weight bias for the model.
We show that collective human bias can allow ML models to learn insights about the true population instead of the biased sample.
arXiv Detail & Related papers (2022-11-19T20:49:07Z) - Explaining machine learning models for age classification in human gait
analysis [10.570744839131775]
The research question was: Which input features are used by ML models to classify age-related differences in walking patterns?
We utilized a subset of the AIST Gait Database 2019 containing five bilateral ground reaction force (GRF) recordings per person during barefoot walking of healthy participants.
The mean classification accuracy of 60.1% was clearly higher than the zero-rule baseline of 37.3%.
The confusion matrix shows that the CNN distinguished younger and older adults well, but had difficulty modeling the middle-aged adults.
arXiv Detail & Related papers (2022-10-16T13:53:51Z) - An Efficient End-to-End Deep Neural Network for Interstitial Lung
Disease Recognition and Classification [0.5424799109837065]
This paper introduces an end-to-end deep convolutional neural network (CNN) for classifying ILD patterns.
The proposed model comprises four convolutional layers with different kernel sizes and Rectified Linear Unit (ReLU) activation function.
A dataset consisting of 21,328 image patches from 128 CT scans with five classes is used to train and assess the proposed model.
arXiv Detail & Related papers (2022-04-21T06:36:10Z) - Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features that guides the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z) - Interpreting Graph Neural Networks for NLP With Differentiable Edge
Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
arXiv Detail & Related papers (2020-10-01T17:51:19Z) - How do Decisions Emerge across Layers in Neural Models? Interpretation
with Differentiable Masking [70.92463223410225]
DiffMask learns to mask out subsets of the input while maintaining differentiability.
The decision to include or disregard an input token is made with a simple model based on intermediate hidden layers.
This lets us not only plot attribution heatmaps but also analyze how decisions are formed across network layers.
arXiv Detail & Related papers (2020-04-30T17:36:14Z)
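As referenced in the Counterfactual Attentiveness Test (CAT) entry above, the following is a minimal, hedged sketch of the swap that entry describes, not the authors' implementation: one part of the input is replaced with the corresponding part from a different example, and an attentive model is expected to change its prediction. The two-part input structure and the `predict` callable are illustrative assumptions.

```python
# Hedged sketch of the counterfactual swap described in the CAT entry above.
# `predict` (a callable mapping two input parts to a label) and the two-part
# input structure are illustrative assumptions, not the authors' API.

def attentiveness_score(predict, examples):
    """Fraction of examples whose prediction changes when the second input part
    is replaced with its counterpart from a different example."""
    changed = 0
    for i, (part_a, part_b) in enumerate(examples):
        original = predict(part_a, part_b)
        swapped_part_b = examples[(i + 1) % len(examples)][1]  # counterpart from another example
        counterfactual = predict(part_a, swapped_part_b)
        changed += int(counterfactual != original)
    return changed / len(examples)

# Toy usage: a rule-based "model" whose output depends on the second part.
examples = [("the sky is blue", "it is daytime"), ("the sky is dark", "it is night")]
toy_predict = lambda a, b: "day" in b
print(attentiveness_score(toy_predict, examples))  # 1.0: predictions follow part_b
```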
This list is automatically generated from the titles and abstracts of the papers in this site.