Is AI currently capable of identifying wild oysters? A comparison of human annotators against the AI model, ODYSSEE
- URL: http://arxiv.org/abs/2505.03108v1
- Date: Tue, 06 May 2025 02:01:27 GMT
- Title: Is AI currently capable of identifying wild oysters? A comparison of human annotators against the AI model, ODYSSEE
- Authors: Brendan Campbell, Alan Williams, Kleio Baxevani, Alyssa Campbell, Rushabh Dhoke, Rileigh E. Hudock, Xiaomin Lin, Vivek Mange, Bernhard Neuberger, Arjun Suresh, Alhim Vera, Arthur Trembanis, Herbert G. Tanner, Edward Hale
- Abstract summary: The ODYSSEE model was developed to identify live oysters in video or images of oyster reefs taken in the field, in order to assess abundance. The model's validity in identifying live oysters on a reef was compared against expert and non-expert annotators. Although the model can make inferences significantly faster than expert and non-expert annotators, it overpredicted the number of live oysters.
- Score: 3.5969970396484934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Oysters are ecologically and commercially important species that require frequent monitoring to track population demographics (e.g. abundance, growth, mortality). Current methods of monitoring oyster reefs often require destructive sampling and extensive manual effort, making them suboptimal for small-scale or sensitive environments. A recent alternative, the ODYSSEE model, uses deep learning to identify live oysters in video or images of oyster reefs taken in the field in order to assess abundance. We compared the model's validity in identifying live oysters on a reef against expert and non-expert annotators, and we identified potential sources of prediction error. Although the model makes inferences significantly faster than expert and non-expert annotators (39.6 s vs. $2.34 \pm 0.61$ h and $4.50 \pm 1.46$ h, respectively), it overpredicted the number of live oysters, achieving lower accuracy (63\%) than experts (74\%) and non-experts (75\%) alike. Image quality was an important factor in determining the accuracy of both the model and the annotators: better-quality images improved human accuracy but worsened model accuracy. Although ODYSSEE was not sufficiently accurate, we anticipate that future training on higher-quality images, additional live imagery, and additional annotation training classes will greatly improve the model's predictive power. Future research should address methods that improve the detection of living vs. dead oysters.
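To make the reported comparison concrete, the following is a minimal sketch of the per-annotator accuracy computation described above: the fraction of oysters whose live/dead label matches a reference annotation. The labels and helper function are illustrative assumptions, not the authors' evaluation code.

```python
# Minimal sketch (not the authors' evaluation code): accuracy of live/dead
# oyster labels against a reference annotation, the kind of per-annotator
# comparison reported in the abstract. All labels below are made up.

def label_accuracy(predicted, reference):
    """Fraction of oysters whose predicted class matches the reference."""
    assert len(predicted) == len(reference) and reference
    return sum(p == r for p, r in zip(predicted, reference)) / len(reference)

# Hypothetical labels for the same set of oysters in one reef image.
reference     = ["live", "live", "dead", "live", "dead", "live", "dead", "live"]
model_labels  = ["live"] * len(reference)  # a model that over-predicts "live"
expert_labels = ["live", "live", "dead", "live", "live", "live", "dead", "live"]

print(f"model accuracy:  {label_accuracy(model_labels, reference):.2f}")
print(f"expert accuracy: {label_accuracy(expert_labels, reference):.2f}")
```

In the study itself, accuracy was additionally broken down by image quality, which moved model and human accuracy in opposite directions.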
Related papers
- Automated Detection of Salvin's Albatrosses: Improving Deep Learning Tools for Aerial Wildlife Surveys [4.936287307711449]
Unmanned Aerial Vehicles (UAVs) provide a cost-effective means of capturing high-resolution imagery. We assess the performance of a general-purpose avian detection model, BirdDetector, in estimating the breeding population of Salvin's albatross (Thalassarche salvini) on the Bounty Islands, New Zealand.
arXiv Detail & Related papers (2025-05-15T22:42:44Z)
- Detecting Discrepancies Between AI-Generated and Natural Images Using Uncertainty [91.64626435585643]
We propose a novel approach for detecting AI-generated images by leveraging predictive uncertainty to mitigate misuse and associated risks. The motivation arises from the fundamental assumption regarding the distributional discrepancy between natural and AI-generated images. We propose to leverage large-scale pre-trained models to calculate the uncertainty as the score for detecting AI-generated images.
arXiv Detail & Related papers (2024-12-08T11:32:25Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing enabled by generative models pose serious risks. In this paper, we investigate how detection performance varies across model backbones, types, and datasets. We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- Improved detection of discarded fish species through BoxAL active learning [0.2544632696242629]
In this study, we present an active learning technique, named BoxAL, which includes estimation of epistemic certainty of the Faster R-CNN object-detection model.
The method allows selecting the most uncertain training images from an unlabeled pool, which are then used to train the object-detection model (a minimal sketch of this selection step appears after this list).
Our study additionally showed that the sampled new data is more valuable for training than the remaining unlabeled data.
arXiv Detail & Related papers (2024-10-07T10:01:30Z)
- ODYSSEE: Oyster Detection Yielded by Sensor Systems on Edge Electronics [14.935296890629795]
Oysters are a vital keystone species in coastal ecosystems, providing significant economic, environmental, and cultural benefits. Current monitoring strategies often rely on destructive methods. We propose a novel pipeline using stable diffusion to augment a collected real dataset with realistic synthetic data.
arXiv Detail & Related papers (2024-09-11T04:31:09Z)
- ReAugment: Model Zoo-Guided RL for Few-Shot Time Series Augmentation and Forecasting [74.00765474305288]
We present a pilot study on using reinforcement learning (RL) for time series data augmentation. Our method, ReAugment, tackles three critical questions: which parts of the training set should be augmented, how the augmentation should be performed, and what advantages RL brings to the process.
arXiv Detail & Related papers (2024-09-10T07:34:19Z)
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- OysterNet: Enhanced Oyster Detection Using Simulation [12.282807381883542]
Oysters play a pivotal role in the bay living ecosystem and are considered the living filters for the ocean.
We present a novel method to mathematically model oysters and render images of oysters in simulation to boost the detection performance with minimal real data.
arXiv Detail & Related papers (2022-09-16T21:35:45Z)
- Generative models-based data labeling for deep networks regression: application to seed maturity estimation from UAV multispectral images [3.6868861317674524]
Monitoring seed maturity is an increasing challenge in agriculture due to climate change and more restrictive practices.
Traditional methods are based on limited sampling in the field and analysis in the laboratory.
We propose a method for estimating parsley seed maturity using multispectral UAV imagery, with a new approach for automatic data labeling.
arXiv Detail & Related papers (2022-08-09T09:06:51Z)
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To take the power of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- Efficient remedies for outlier detection with variational autoencoders [8.80692072928023]
Likelihoods computed by deep generative models are a candidate metric for outlier detection with unlabeled data.
We show that a theoretically-grounded correction readily ameliorates a key bias with VAE likelihood estimates.
We also show that the variance of likelihoods computed over an ensemble of VAEs enables robust outlier detection.
arXiv Detail & Related papers (2021-08-19T16:00:58Z)
- Deep Low-Shot Learning for Biological Image Classification and Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z)
- Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction [49.254162397086006]
We study explanations based on visual saliency in an image-based age prediction task.
We find that presenting model predictions improves human accuracy.
However, explanations of various kinds fail to significantly alter human accuracy or trust in the model.
arXiv Detail & Related papers (2020-07-23T20:39:40Z)
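As referenced in the BoxAL entry above, the following is a minimal sketch of uncertainty-based sample selection for active learning. It is not the BoxAL implementation: the uncertainty score here is a random stand-in, whereas BoxAL derives its score from the epistemic certainty of a Faster R-CNN detector, and the file names and function are hypothetical.

```python
# Minimal sketch (assumptions, not the BoxAL code): pick the most uncertain
# images from an unlabeled pool so they can be annotated and added to training.
import random
from typing import Callable, List

def select_for_labeling(unlabeled: List[str],
                        uncertainty: Callable[[str], float],
                        budget: int) -> List[str]:
    """Return the `budget` images with the highest uncertainty score."""
    ranked = sorted(unlabeled, key=uncertainty, reverse=True)
    return ranked[:budget]

# Hypothetical unlabeled pool and a random stand-in for a model-based score.
pool = [f"frame_{i:04d}.jpg" for i in range(100)]
random.seed(0)
scores = {name: random.random() for name in pool}

to_annotate = select_for_labeling(pool, scores.get, budget=10)
print(to_annotate)
```

In a real loop, the selected images would be annotated, added to the training set, and the detector retrained before the next round of selection.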