Machine Learning in management of precautionary closures caused by
lipophilic biotoxins
- URL: http://arxiv.org/abs/2402.09266v1
- Date: Wed, 14 Feb 2024 15:51:58 GMT
- Title: Machine Learning in management of precautionary closures caused by
lipophilic biotoxins
- Authors: Andres Molares-Ulloa, Enrique Fernandez-Blanco, Alejandro Pazos and
Daniel Rivero
- Abstract summary: Mussel farming is one of the most important aquaculture industries.
The main risk to mussel farming is harmful algal blooms (HABs), which pose a risk to human consumption.
This work proposes a predictive model capable of supporting the application of precautionary closures.
- Score: 43.51581973358462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mussel farming is one of the most important aquaculture industries. The main
risk to mussel farming is harmful algal blooms (HABs), which pose a risk to
human consumption. In Galicia, Spain's main producer of cultivated mussels,
the opening and closing of production areas are controlled by a monitoring
program. In addition to the closures resulting from the presence of toxicity
exceeding the legal threshold, in the absence of a confirmatory sampling and
the existence of risk factors, precautionary closures may be applied. These
decisions are made by experts without the support or formalisation of the
experience on which they are based. Therefore, this work proposes a predictive
model capable of supporting the application of precautionary closures.
The kNN algorithm provided the best results, achieving sensitivity, accuracy
and kappa index values of 97.34%, 91.83% and 0.75, respectively. This allows
the creation of a system capable of helping in complex situations where
forecast errors are more common.
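The paper does not publish its feature set or code; purely as an illustration of the kNN approach it reports (all feature names and numbers below are invented), a distance-vote closure classifier might be sketched as:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples
    (Euclidean distance). Labels: 1 = precautionary closure, 0 = area open."""
    dists = sorted((math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical features: [toxin concentration, water temperature, upwelling index]
train_X = [[180.0, 15.2, -300], [210.0, 14.8, -450],
           [40.0, 16.1, 200], [55.0, 17.0, 350]]
train_y = [1, 1, 0, 0]  # historical expert decisions (invented)

print(knn_predict(train_X, train_y, [190.0, 15.0, -400], k=3))  # prints 1
```

In practice the features would need scaling to comparable ranges before the Euclidean distance is meaningful, and k would be tuned against the sensitivity/kappa metrics the paper reports.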
Related papers
- Conformal Thinking: Risk Control for Reasoning on a Compute Budget [60.65072883773352]
Reasoning Large Language Models (LLMs) enable test-time scaling, with dataset-level accuracy improving as the token budget increases.
We re-frame the budget setting problem as risk control, limiting the error rate while minimizing compute.
Our framework introduces an upper threshold that stops reasoning when the model is confident and a novel lower threshold that preemptively stops unsolvable instances.
arXiv Detail & Related papers (2026-02-03T18:17:22Z)
- Mitigating Safety Tax via Distribution-Grounded Refinement in Large Reasoning Models [63.368505631152594]
Safety alignment incurs a safety tax that perturbs a large reasoning model's (LRM) general reasoning ability.
Existing datasets used for safety alignment of an LRM are usually constructed by distilling safety reasoning traces and answers from an external LRM or human labeler.
We propose a safety alignment dataset construction method, dubbed DGR. DGR transforms and refines an existing out-of-distribution safety reasoning dataset to align it with the target LRM's internal distribution.
arXiv Detail & Related papers (2026-02-02T14:18:48Z)
- Statistical Estimation of Adversarial Risk in Large Language Models under Best-of-N Sampling [50.872910438715486]
Large Language Models (LLMs) are typically evaluated for safety under single-shot or low-budget adversarial prompting.
We propose a scaling-aware Best-of-N estimation of risk, SABER, for modeling jailbreak vulnerability under Best-of-N sampling.
arXiv Detail & Related papers (2026-01-30T06:54:35Z)
- The Eminence in Shadow: Exploiting Feature Boundary Ambiguity for Robust Backdoor Attacks [51.468144272905135]
Deep neural networks (DNNs) underpin critical applications yet remain vulnerable to backdoor attacks.
We provide a theoretical analysis targeting backdoor attacks, focusing on how sparse decision boundaries enable disproportionate model manipulation.
We propose Eminence, an explainable and robust black-box backdoor framework with provable theoretical guarantees and inherent stealth properties.
arXiv Detail & Related papers (2025-12-11T08:09:07Z)
- SafeRBench: A Comprehensive Benchmark for Safety Assessment in Large Reasoning Models [60.8821834954637]
We present SafeRBench, the first benchmark that assesses LRM safety end-to-end.
We pioneer the incorporation of risk categories and levels into input design.
We introduce a micro-thought chunking mechanism to segment long reasoning traces into semantically coherent units.
arXiv Detail & Related papers (2025-11-19T06:46:33Z)
- Explainable Probabilistic Machine Learning for Predicting Drilling Fluid Loss of Circulation in Marun Oil Field [0.5217870815854703]
This study presents a probabilistic machine learning framework based on Gaussian Process Regression (GPR) for predicting drilling fluid loss in complex formations.
The GPR model captures nonlinear dependencies among drilling parameters while quantifying predictive uncertainty, offering enhanced reliability for high-risk decision-making.
arXiv Detail & Related papers (2025-11-10T01:34:02Z)
- Beta Distribution Learning for Reliable Roadway Crash Risk Assessment [21.371420424228077]
Roadway traffic accidents represent a global health crisis, responsible for over a million deaths annually and costing many countries up to 3% of their GDP.
Traditional traffic safety studies often examine risk factors in isolation, overlooking the spatial complexity and contextual interactions inherent in the built environment.
We introduce a novel deep learning framework that leverages satellite imagery as a comprehensive spatial input.
This approach enables the model to capture the nuanced spatial patterns and embedded environmental risk factors that contribute to fatal crash risks.
arXiv Detail & Related papers (2025-11-07T00:08:55Z)
- Informed Learning for Estimating Drought Stress at Fine-Scale Resolution Enables Accurate Yield Prediction [10.780371055923304]
Water is essential for agricultural productivity. Assessing water shortages and reduced yield potential is a critical factor in decision-making.
Crop simulation models, which align with physical processes, offer intrinsic explainability but often perform poorly.
Machine learning models for crop yield modeling are powerful and scalable, yet they commonly operate as black boxes and lack adherence to the physical principles of crop growth.
arXiv Detail & Related papers (2025-10-21T13:58:04Z)
- Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention [65.47632669243657]
A dishonest institution can exploit abstention mechanisms to discriminate or unjustly deny services under the guise of uncertainty.
We demonstrate the practicality of this threat by introducing an uncertainty-inducing attack called Mirage.
We propose Confidential Guardian, a framework that analyzes calibration metrics on a reference dataset to detect artificially suppressed confidence.
arXiv Detail & Related papers (2025-05-29T19:47:50Z)
- Statistical Learning for Heterogeneous Treatment Effects: Pretraining, Prognosis, and Prediction [40.96453902709292]
We propose pretraining strategies that leverage a phenomenon in real-world applications.
In medicine, components of the same biological signaling pathways frequently influence both baseline risk and treatment response.
We use this structure to incorporate "side information" and develop models that can exploit synergies between risk prediction and causal effect estimation.
arXiv Detail & Related papers (2025-05-01T05:12:14Z)
- Uncertainty Guarantees on Automated Precision Weeding using Conformal Prediction [0.5172964916120902]
The article showcases conformal prediction in action on the task of precision weeding through deep learning-based image classification.
After a detailed presentation of the conformal prediction methodology, the article evaluates this pipeline on two real-world scenarios.
The results show that we are able to provide formal, i.e. certifiable, guarantees on spraying at least 90% of the weeds.
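The coverage guarantee described above comes from split conformal prediction. As a minimal sketch (the calibration scores and class probabilities below are invented, not taken from the paper):

```python
import math

def conformal_threshold(cal_scores, alpha=0.1):
    """Split conformal prediction: given nonconformity scores from a held-out
    calibration set, return the threshold yielding prediction sets with
    coverage >= 1 - alpha (e.g. "at least 90% of weeds sprayed")."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # conservative finite-sample quantile
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(class_probs, q):
    """Keep every class whose nonconformity score 1 - p stays below q."""
    return {c for c, p in class_probs.items() if 1 - p <= q}

# Hypothetical calibration scores (1 - probability assigned to the true class)
cal_scores = [0.02, 0.05, 0.08, 0.11, 0.15, 0.20, 0.30, 0.42, 0.55, 0.70]
q = conformal_threshold(cal_scores, alpha=0.1)

# An image the classifier is unsure about: the set keeps both labels,
# so the sprayer errs on the side of treating it as a weed.
print(prediction_set({"weed": 0.55, "crop": 0.45}, q))
```

The guarantee is marginal over calibration and test data drawn exchangeably; ambiguous inputs simply yield larger prediction sets rather than silent errors.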
arXiv Detail & Related papers (2025-01-13T10:30:10Z)
- Confidence Aware Learning for Reliable Face Anti-spoofing [52.23271636362843]
We propose a Confidence Aware Face Anti-spoofing model, which is aware of its capability boundary.
We estimate its confidence during the prediction of each sample.
Experiments show that the proposed CA-FAS can effectively recognize samples with low prediction confidence.
arXiv Detail & Related papers (2024-11-02T14:29:02Z)
- Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering [55.15192437680943]
Generative models lack rigorous statistical guarantees for their outputs.
We propose a sequential conformal prediction method producing prediction sets that satisfy a rigorous statistical guarantee.
This guarantee states that with high probability, the prediction sets contain at least one admissible (or valid) example.
arXiv Detail & Related papers (2024-10-02T15:26:52Z)
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
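The "true criticality" definition above lends itself to a Monte Carlo estimate. A rough sketch on an invented toy environment (the chain world, policy and parameters are hypothetical, not from the paper):

```python
import random

def episode_return(policy, env_step, start_state, horizon,
                   deviate_at=None, n_random=0, actions=(0, 1)):
    """Roll out one episode; optionally replace the policy with random
    actions for n_random consecutive steps starting at step deviate_at."""
    state, total = start_state, 0.0
    for t in range(horizon):
        if deviate_at is not None and deviate_at <= t < deviate_at + n_random:
            action = random.choice(actions)
        else:
            action = policy(state)
        state, reward = env_step(state, action)
        total += reward
    return total

def true_criticality(policy, env_step, start_state, horizon, t, n, trials=2000):
    """Expected drop in return when the agent takes n random actions from step t."""
    base = sum(episode_return(policy, env_step, start_state, horizon)
               for _ in range(trials)) / trials
    dev = sum(episode_return(policy, env_step, start_state, horizon,
                             deviate_at=t, n_random=n)
              for _ in range(trials)) / trials
    return base - dev

# Toy chain world: action 1 moves right toward state 5; being at 5 pays reward 1.
def env_step(state, action):
    nxt = min(state + action, 5)
    return nxt, 1.0 if nxt == 5 else 0.0

policy = lambda s: 1  # always move right
print(true_criticality(policy, env_step, start_state=0, horizon=8, t=0, n=3))
```

Early deviations delay reaching the rewarding state, so the estimated drop is positive; the proxy-criticality metric in the paper exists precisely because this direct estimate is expensive.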
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
- Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework [77.45983464131977]
We focus on how likely it is that a RAG model's prediction is incorrect, resulting in uncontrollable risks in real-world applications.
Our research identifies two critical latent factors affecting RAG's confidence in its predictions.
We develop a counterfactual prompting framework that induces the models to alter these factors and analyzes the effect on their answers.
arXiv Detail & Related papers (2024-09-24T14:52:14Z)
- Explainable machine learning for predicting shellfish toxicity in the Adriatic Sea using long-term monitoring data of HABs [0.0]
We train and evaluate machine learning models to accurately predict diarrhetic shellfish poisoning events.
The random forest model provided the best prediction of positive toxicity results based on the F1 score.
Key species (Dinophysis fortii and D. caudata) and environmental factors (salinity, river discharge and precipitation) were the best predictors of DSP outbreaks.
arXiv Detail & Related papers (2024-05-07T14:55:42Z)
- ABCD: Trust enhanced Attention based Convolutional Autoencoder for Risk Assessment [0.0]
Anomaly detection in industrial systems is crucial for preventing equipment failures, ensuring risk identification, and maintaining overall system efficiency.
Traditional monitoring methods often rely on fixed thresholds and empirical rules, which may not be sensitive enough to detect subtle changes in system health and predict impending failures.
This paper proposes an Attention-based convolutional autoencoder (ABCD) for risk detection and maps the derived risk values to maintenance planning.
ABCD learns the normal behavior of conductivity from historical data of a real-world industrial cooling system and reconstructs the input data, identifying anomalies that deviate from the expected patterns.
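Reconstruction-error thresholding, the core idea behind this style of anomaly detection, can be sketched without a learned network. The toy below stands in a mean-based "reconstruction" for the autoencoder, and all conductivity readings are invented:

```python
import math

def fit_baseline(window):
    """'Train' on normal conductivity history: record its mean and spread.
    (Stands in for the autoencoder's learned model of normal behavior.)"""
    mu = sum(window) / len(window)
    var = sum((x - mu) ** 2 for x in window) / len(window)
    return mu, math.sqrt(var)

def reconstruction_error(x, mu):
    """For a real autoencoder this is ||x - decode(encode(x))||; here the
    'reconstruction' of any reading is simply the normal-regime mean."""
    return abs(x - mu)

def risk_scores(readings, mu, sigma, k=3.0):
    """Flag readings whose error exceeds k standard deviations as anomalies."""
    return [(x, reconstruction_error(x, mu) > k * sigma) for x in readings]

normal_history = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.95, 5.05]  # invented data
mu, sigma = fit_baseline(normal_history)
print(risk_scores([5.0, 5.1, 9.7], mu, sigma))
```

A real autoencoder earns its keep when "normal" is a complex multivariate pattern rather than a stable scalar level, but the detection rule, large reconstruction error implies anomaly, is the same.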
arXiv Detail & Related papers (2024-04-24T20:15:57Z)
- Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
arXiv Detail & Related papers (2024-03-28T17:28:06Z)
- Hybrid Machine Learning techniques in the management of harmful algal
blooms impact [0.7864304771129751]
Mollusc farming can be affected by harmful algal blooms (HABs).
HABs are episodes of high concentrations of algae that are potentially toxic for human consumption.
To avoid the risk to human consumption, harvesting is prohibited when toxicity is detected.
arXiv Detail & Related papers (2024-02-14T15:59:22Z)
- Task-Driven Causal Feature Distillation: Towards Trustworthy Risk
Prediction [19.475933293993076]
We propose a Task-Driven Causal Feature Distillation model (TDCFD) to transform original feature values into causal feature attributions.
After the causal feature distillation, a deep neural network is applied to produce trustworthy prediction results.
We evaluate the performance of our TDCFD method on several synthetic and real datasets.
arXiv Detail & Related papers (2023-12-20T08:16:53Z)
- SMARLA: A Safety Monitoring Approach for Deep Reinforcement Learning Agents [7.33319373357049]
This paper introduces SMARLA, a black-box safety monitoring approach specifically designed for Deep Reinforcement Learning (DRL) agents.
SMARLA utilizes machine learning to predict safety violations by observing the agent's behavior during execution.
Empirical results reveal that SMARLA is accurate at predicting safety violations, with a low false positive rate, and can predict violations at an early stage, approximately halfway through the execution of the agent, before violations occur.
arXiv Detail & Related papers (2023-08-03T21:08:51Z)
- PAC$^m$-Bayes: Narrowing the Empirical Risk Gap in the Misspecified
Bayesian Regime [75.19403612525811]
This work develops a multi-sample loss which can close the gap by spanning a trade-off between the two risks.
Empirical study demonstrates improvement to the predictive distribution.
arXiv Detail & Related papers (2020-10-19T16:08:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.