A Review of Statistical and Machine Learning Approaches for Coral Bleaching Assessment
- URL: http://arxiv.org/abs/2511.12234v1
- Date: Sat, 15 Nov 2025 14:22:56 GMT
- Title: A Review of Statistical and Machine Learning Approaches for Coral Bleaching Assessment
- Authors: Soham Sarkar, Arnab Hazra
- Abstract summary: More than half of the world's coral reefs have either bleached or died over the past three decades. Data-driven strategies are crucial for effective reef management.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Coral bleaching is a major concern for marine ecosystems; more than half of the world's coral reefs have either bleached or died over the past three decades. Increasing sea surface temperatures, along with various spatiotemporal environmental factors, are considered the primary reasons behind coral bleaching. The statistical and machine learning communities have focused on multiple aspects of the environment in detail. However, the literature on various stochastic modeling approaches for assessing coral bleaching is extremely scarce. Data-driven strategies are crucial for effective reef management, and this review article provides an overview of existing statistical and machine learning methods for assessing coral bleaching. Statistical frameworks, including simple regression models, generalized linear models, generalized additive models, Bayesian regression models, spatiotemporal models, and resilience indicators, such as Fisher's Information and Variance Index, are commonly used to explore how different environmental stressors influence coral bleaching. On the other hand, machine learning methods, including random forests, decision trees, support vector machines, and spatial operators, are more popular for detecting nonlinear relationships, analyzing high-dimensional data, and allowing integration of heterogeneous data from diverse sources. In addition to summarizing these models, we also discuss potential data-driven future research directions, with a focus on constructing statistical and machine learning models in specific contexts related to coral bleaching.
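Among the statistical frameworks the abstract mentions, a generalized linear model with a logit link is the simplest way to relate an environmental stressor to a binary bleaching outcome. The sketch below is illustrative only, not a method from the reviewed paper: it fits a logistic regression of bleaching status on a synthetic sea-surface-temperature covariate using plain gradient ascent, with all data and variable names (`sst`, `bleach`) hypothetical.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(x, y, lr=0.05, epochs=2000):
    """Fit p(bleach) = sigmoid(b0 + b1*x) by gradient ascent on the mean log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            e = yi - sigmoid(b0 + b1 * xi)  # residual on the probability scale
            g0 += e
            g1 += e * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

random.seed(0)
# Synthetic sea-surface-temperature anomalies (deg C) and simulated bleaching outcomes;
# the "true" intercept/slope (-1.0, 1.5) are arbitrary choices for the simulation.
sst = [random.uniform(-2.0, 3.0) for _ in range(400)]
bleach = [1 if random.random() < sigmoid(-1.0 + 1.5 * t) else 0 for t in sst]

b0, b1 = fit_logistic(sst, bleach)
# A positive fitted slope recovers the simulated pattern: warmer water, higher bleaching odds.
```

In practice one would use an established GLM implementation (e.g. R's `glm` or Python's `statsmodels`) rather than hand-rolled gradient ascent; the point is only to make the model family concrete.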
Related papers
- Investigating the Impact of Histopathological Foundation Models on Regressive Prediction of Homologous Recombination Deficiency [52.50039435394964]
We systematically evaluate foundation models for regression-based tasks. We extract patch-level features from whole slide images (WSI) using five state-of-the-art foundation models. Models are trained to predict continuous HRD scores based on these extracted features across breast, endometrial, and lung cancer cohorts.
arXiv Detail & Related papers (2026-01-29T14:06:50Z) - Deep Learning Models for Coral Bleaching Classification in Multi-Condition Underwater Image Datasets [0.0]
Coral reefs support numerous marine organisms and are an important source of coastal protection from storms and floods. This study presents a novel machine-learning-based coral bleaching classification system based on a diverse global dataset. We benchmarked and compared three state-of-the-art models: Residual Neural Network (ResNet), Vision Transformer (ViT), and Convolutional Neural Network (CNN).
arXiv Detail & Related papers (2025-10-24T06:13:15Z) - Symbolically Regressing Fish Biomass Spectral Data: A Linear Genetic Programming Method with Tunable Primitives [5.163542749660303]
This paper models fish biomass spectral data as a symbolic regression problem and solves it by a linear genetic programming method. In the symbolic regression problem, linear genetic programming automatically synthesizes regression models based on the given primitives and training data. Our empirical results over ten fish biomass targets show that the proposed method improves the overall performance of fish biomass composition prediction.
arXiv Detail & Related papers (2025-05-28T02:27:49Z) - The Coralscapes Dataset: Semantic Scene Understanding in Coral Reefs [12.535323016915122]
We release the first general-purpose dense semantic segmentation dataset for coral reefs, covering 2075 images, 39 benthic classes, and 174k segmentation masks annotated by experts. We benchmark a wide range of semantic segmentation models, and find that transfer learning from Coralscapes to existing smaller datasets consistently leads to state-of-the-art performance. Coralscapes will catalyze research on efficient, scalable, and standardized coral reef surveying methods based on computer vision, and holds the potential to streamline the development of underwater ecological robotics.
arXiv Detail & Related papers (2025-03-25T18:33:59Z) - Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias [47.79659355705916]
Model-induced distribution shifts (MIDS) occur as previous model outputs pollute new model training sets over generations of models.
We introduce a framework that allows us to track multiple MIDS over many generations, finding that they can lead to loss in performance, fairness, and minoritized group representation.
Despite these negative consequences, we identify how models might be used for positive, intentional, interventions in their data ecosystems.
arXiv Detail & Related papers (2024-03-12T17:48:08Z) - Deep learning for multi-label classification of coral conditions in the Indo-Pacific via underwater photogrammetry [24.00646413446011]
This study created a dataset representing common coral conditions and associated stressors in the Indo-Pacific.
It assessed existing classification algorithms and proposed a new multi-label method for automatically detecting coral conditions and extracting ecological information.
The proposed method accurately classified coral conditions as healthy, compromised, dead, and rubble.
arXiv Detail & Related papers (2024-03-09T14:42:16Z) - Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z) - Model Development for Detecting Coral Reef Damage via Image Classification [3.254879465902239]
This study utilizes a specialized dataset consisting of 923 images collected from Flickr using the Flickr API.
The method employed in this research involves the use of machine learning models, particularly convolutional neural networks (CNN)
It was found that a from-scratch ResNet model can outperform pretrained models in terms of precision and accuracy.
arXiv Detail & Related papers (2023-08-08T15:30:08Z) - Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available.
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
arXiv Detail & Related papers (2023-07-12T01:11:52Z) - Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.