Towards Causal Physical Error Discovery in Video Analytics Systems
- URL: http://arxiv.org/abs/2405.17686v1
- Date: Mon, 27 May 2024 22:40:33 GMT
- Title: Towards Causal Physical Error Discovery in Video Analytics Systems
- Authors: Jinjin Zhao, Ted Shaowang, Stavros Sintos, Sanjay Krishnan
- Abstract summary: Video analytics systems based on deep learning models are often opaque and brittle.
Current model explanation systems are very good at giving literal explanations of behavior in terms of pixel contributions.
This paper introduces the idea that a simple form of causal reasoning, called a regression discontinuity design, can be used to associate changes in multiple key performance indicators with physical, real-world phenomena.
- Score: 5.889781890680243
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video analytics systems based on deep learning models are often opaque and brittle, and require explanation systems to help users debug. Current model explanation systems are very good at giving literal explanations of behavior in terms of pixel contributions but cannot integrate information about the physical or systems processes that might influence a prediction. This paper introduces the idea that a simple form of causal reasoning, called a regression discontinuity design, can be used to associate changes in multiple key performance indicators with physical, real-world phenomena, giving users a more actionable set of video analytics explanations. We overview the system architecture and describe a vision of the impact that such a system might have.
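To make the regression discontinuity idea concrete, here is a minimal sketch (not the paper's implementation) of a sharp regression discontinuity design fit on a single synthetic key performance indicator, such as per-frame detection counts, around a hypothetical physical event (a camera repositioning at frame 500); the event frame, KPI values, and effect size are illustrative assumptions, not values from the paper.

```python
# Minimal RDD sketch on a synthetic video-analytics KPI (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic KPI: per-frame detection counts drift slowly, then drop
# after a hypothetical physical event (e.g. camera repositioned) at frame 500.
frames = np.arange(1000)
event_frame = 500
kpi = 12 + 0.002 * frames - 4.0 * (frames >= event_frame) + rng.normal(0, 1.0, frames.size)

df = pd.DataFrame({
    "kpi": kpi,
    "running": frames - event_frame,              # running variable centered at the cutoff
    "post": (frames >= event_frame).astype(int),  # indicator for frames after the event
})

# Sharp RDD: allow separate linear trends on each side of the cutoff;
# the coefficient on `post` estimates the size of the discontinuity in the KPI.
model = smf.ols("kpi ~ post + running + post:running", data=df).fit()
print("estimated jump:", model.params["post"], "p-value:", model.pvalues["post"])
```

A large, statistically significant jump at the cutoff would flag the event as a candidate physical cause of the KPI shift, which is the kind of actionable explanation the abstract argues for; running the same test across multiple KPIs is what would tie the discontinuities back to a shared real-world phenomenon.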
Related papers
- VACT: A Video Automatic Causal Testing System and a Benchmark [55.53300306960048]
VACT is an **automated** framework for modeling, evaluating, and measuring the causal understanding of VGMs in real-world scenarios.
We introduce multi-level causal evaluation metrics to provide a detailed analysis of the causal performance of VGMs.
arXiv Detail & Related papers (2025-03-08T10:54:42Z)
- Using Causal Threads to Explain Changes in a Dynamic System [0.0]
We explore developing rich semantic models of systems.
Specifically, we consider structured causal explanations about state changes in those systems.
We construct a model of the causal threads for geological changes proposed by the Snowball Earth theory.
arXiv Detail & Related papers (2023-11-19T14:32:06Z)
- Causalainer: Causal Explainer for Automatic Video Summarization [77.36225634727221]
In many application scenarios, improper video summarization can have a large impact.
Modeling explainability is a key concern.
A Causal Explainer, dubbed Causalainer, is proposed to address this issue.
arXiv Detail & Related papers (2023-04-30T11:42:06Z)
- Perception Visualization: Seeing Through the Eyes of a DNN [5.9557391359320375]
We develop a new form of explanation that is radically different in nature from current explanation methods, such as Grad-CAM.
Perception visualization provides a visual representation of what the DNN perceives in the input image by depicting what visual patterns the latent representation corresponds to.
Results of our user study demonstrate that humans can better understand and predict the system's decisions when perception visualizations are available.
arXiv Detail & Related papers (2022-04-21T07:18:55Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns these basis functions using a supervised learning approach.
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Causal Discovery in Physical Systems from Videos [123.79211190669821]
Causal discovery is at the core of human cognition.
We consider the task of causal discovery from videos in an end-to-end fashion without supervision on the ground-truth graph structure.
arXiv Detail & Related papers (2020-07-01T17:29:57Z)
- Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition [24.10997778856368]
This paper explores how explanation veracity affects user performance and agreement in intelligent systems.
We compare variations in explanation veracity for a video review and querying task.
Results suggest that low veracity explanations significantly decrease user performance and agreement.
arXiv Detail & Related papers (2020-05-05T17:06:46Z)