On Thin Ice: Towards Explainable Conservation Monitoring via Attribution and Perturbations
- URL: http://arxiv.org/abs/2510.21689v1
- Date: Fri, 24 Oct 2025 17:46:24 GMT
- Title: On Thin Ice: Towards Explainable Conservation Monitoring via Attribution and Perturbations
- Authors: Jiayi Zhou, Günel Aghakishiyeva, Saagar Arya, Julian Dale, James David Poling, Holly R. Houliston, Jamie N. Womble, Gregory D. Larsen, David W. Johnston, Brinnae Bent
- Abstract summary: We train a Faster R-CNN to detect harbor seals using aerial imagery from Glacier Bay National Park. We assess explanations along three axes relevant to field use. We translate these findings into actionable next steps for model development.
- Score: 3.4574594310498266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer vision can accelerate ecological research and conservation monitoring, yet adoption in ecology lags in part because of a lack of trust in black-box neural-network-based models. We seek to address this challenge by applying post-hoc explanations to provide evidence for predictions and document limitations that are important to field deployment. Using aerial imagery from Glacier Bay National Park, we train a Faster R-CNN to detect pinnipeds (harbor seals) and generate explanations via gradient-based class activation mapping (HiResCAM, LayerCAM), local interpretable model-agnostic explanations (LIME), and perturbation-based explanations. We assess explanations along three axes relevant to field use: (i) localization fidelity: whether high-attribution regions coincide with the animal rather than background context; (ii) faithfulness: whether deletion/insertion tests produce changes in detector confidence; and (iii) diagnostic utility: whether explanations reveal systematic failure modes. Explanations concentrate on seal torsos and contours rather than surrounding ice/rock, and removal of the seals reduces detection confidence, providing model-based evidence for true positives. The analysis also uncovers recurrent error sources, including confusion of seals with black ice and rocks. We translate these findings into actionable next steps for model development, including more targeted data curation and augmentation. By pairing object detection with post-hoc explainability, we can move beyond "black-box" predictions toward auditable, decision-supporting tools for conservation monitoring.
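The deletion test described in the abstract can be sketched in a few lines: mask the pixels attributed to a detection and measure how much the detector's confidence drops. The sketch below is illustrative, not the authors' implementation; `deletion_test`, `toy_score`, and the synthetic image are assumptions standing in for the Faster R-CNN confidence for a matched box.

```python
import numpy as np

def deletion_test(image, box, score_fn, fill_value=0.0):
    """Faithfulness check: mask the attributed region, measure confidence drop.

    A faithful explanation should see confidence fall sharply when the
    pixels credited with the detection are removed; a large positive
    drop is evidence the model truly relied on the animal, not context.
    """
    x0, y0, x1, y1 = box
    perturbed = image.copy()
    perturbed[y0:y1, x0:x1] = fill_value  # replace region with a baseline value
    return score_fn(image) - score_fn(perturbed)

# Toy stand-in for detector confidence: mean brightness inside a fixed window.
# (A real pipeline would query the detector's score for the matched box.)
def toy_score(img):
    return float(img[2:6, 2:6].mean())

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0  # bright "seal" on dark background

seal_drop = deletion_test(img, (2, 2, 6, 6), toy_score)        # masks the seal
background_drop = deletion_test(img, (0, 0, 2, 2), toy_score)  # masks background
```

Masking the seal region produces a large confidence drop while masking background produces none, which is exactly the asymmetry the paper's faithfulness axis looks for; the insertion variant runs the same comparison starting from the fully masked image.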
Related papers
- Reason-IAD: Knowledge-Guided Dynamic Latent Reasoning for Explainable Industrial Anomaly Detection [85.29900916231655]
Reason-IAD is a knowledge-guided dynamic latent reasoning framework for explainable industrial anomaly detection. Experiments demonstrate that Reason-IAD consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2026-02-10T14:54:17Z) - Think Locally, Explain Globally: Graph-Guided LLM Investigations via Local Reasoning and Belief Propagation [5.191980417814362]
LLM agents excel when environments are mostly static and the needed information fits in a model's context window. ReAct-style agents are especially brittle in this regime. We propose EoG, a framework in which an LLM performs bounded local evidence mining and labeling (cause vs. symptom) while a deterministic controller manages state and belief propagation to compute a minimal explanatory frontier.
arXiv Detail & Related papers (2026-01-25T17:27:19Z) - Fantastic Reasoning Behaviors and Where to Find Them: Unsupervised Discovery of the Reasoning Process [66.38541693477181]
We propose an unsupervised framework for discovering reasoning vectors, which we define as directions in the activation space that encode distinct reasoning behaviors. By segmenting chain-of-thought traces into sentence-level "steps", we uncover disentangled features corresponding to interpretable behaviors such as reflection and backtracking. We demonstrate the ability to control response confidence by identifying confidence-related vectors in the SAE decoder space.
arXiv Detail & Related papers (2025-12-30T05:09:11Z) - PROVEX: Enhancing SOC Analyst Trust with Explainable Provenance-Based IDS [1.9336815376402718]
This paper presents a comprehensive XAI framework designed to bridge the trust gap in Security Operations Centers (SOCs) by making graph-based detection transparent. We implement this framework on top of KAIROS, a state-of-the-art temporal graph-based IDS, though our design is applicable to any temporal graph-based detector with minimal adaptation.
arXiv Detail & Related papers (2025-12-20T03:45:21Z) - Active Inference for an Intelligent Agent in Autonomous Reconnaissance Missions [0.764671395172401]
We develop an active inference route-planning method for autonomous control of intelligent agents. The aim is to reconnoiter a geographical area to maintain a common operational picture.
arXiv Detail & Related papers (2025-10-20T11:35:46Z) - Towards Inference-time Scaling for Continuous Space Reasoning [55.40260529506702]
Inference-time scaling has proven effective for text-based reasoning in large language models. This paper investigates whether such established techniques can be successfully adapted to reasoning in the continuous space. We demonstrate the feasibility of generating diverse reasoning paths through dropout-based sampling.
arXiv Detail & Related papers (2025-10-14T05:53:41Z) - Photorealistic Inpainting for Perturbation-based Explanations in Ecological Monitoring [3.4574594310498266]
We present an inpainting-guided explanation technique that produces perturbation-based, mask-localized edits that preserve scene context. We demonstrate the approach on a YOLOv9 detector fine-tuned for harbor seal detection in Glacier Bay drone imagery. The resulting explanations localize diagnostic structures, avoid deletion artifacts common to traditional perturbations, and yield domain-relevant insights.
arXiv Detail & Related papers (2025-10-01T01:18:27Z) - Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z) - Generative Edge Detection with Stable Diffusion [52.870631376660924]
Edge detection is typically viewed as a pixel-level classification problem mainly addressed by discriminative methods.
We propose a novel approach, named Generative Edge Detector (GED), by fully utilizing the potential of the pre-trained stable diffusion model.
We conduct extensive experiments on multiple datasets and achieve competitive performance.
arXiv Detail & Related papers (2024-10-04T01:52:23Z) - Uncovering the Missing Pattern: Unified Framework Towards Trajectory Imputation and Prediction [60.60223171143206]
Trajectory prediction is a crucial undertaking in understanding entity movement or human behavior from observed sequences.
Current methods often assume that the observed sequences are complete while ignoring the potential for missing values.
This paper presents a unified framework, the Graph-based Conditional Variational Recurrent Neural Network (GC-VRNN), which can perform trajectory imputation and prediction simultaneously.
arXiv Detail & Related papers (2023-03-28T14:27:27Z) - Explanation Method for Anomaly Detection on Mixed Numerical and Categorical Spaces [0.9543943371833464]
We present EADMNC (Explainable Anomaly Detection on Mixed Numerical and Categorical spaces).
It adds explainability to the predictions obtained with the original model.
We report experimental results on extensive real-world data, particularly in the domain of network intrusion detection.
arXiv Detail & Related papers (2022-09-09T08:20:13Z) - CAMERAS: Enhanced Resolution And Sanity Preserving Class Activation Mapping for Image Saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z) - CRAUM-Net: Contextual Recursive Attention with Uncertainty Modeling for Salient Object Detection [0.0]
We present a novel framework that integrates multi-scale context aggregation, advanced attention mechanisms, and an uncertainty-aware module for improved SOD performance. Our Adaptive Cross-Scale Context Module effectively fuses features from multiple levels, leveraging Recursive Channel Spatial Attention and Convolutional Block Attention. To train our network robustly, we employ a combination of boundary-sensitive and topology-preserving loss functions, including Boundary IoU, Focal Tversky, and Topological Saliency losses.
arXiv Detail & Related papers (2020-06-04T18:33:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.