Enhanced Fish Freshness Classification with Incremental Handcrafted Feature Fusion
- URL: http://arxiv.org/abs/2510.17145v1
- Date: Mon, 20 Oct 2025 04:36:34 GMT
- Title: Enhanced Fish Freshness Classification with Incremental Handcrafted Feature Fusion
- Authors: Phi-Hung Hoang, Nam-Thuan Trinh, Van-Manh Tran, Thi-Thu-Hong Phan
- Abstract summary: We propose a handcrafted feature-based approach to assess fish freshness. Our method captures global chromatic variations from full images and localized degradations from ROI segments. Experiments on the Freshness of the Fish Eyes dataset demonstrate the approach's effectiveness.
- Score: 0.05599792629509228
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate assessment of fish freshness remains a major challenge in the food industry, with direct consequences for product quality, market value, and consumer health. Conventional sensory evaluation is inherently subjective, inconsistent, and difficult to standardize across contexts, often limited by subtle, species-dependent spoilage cues. To address these limitations, we propose a handcrafted feature-based approach that systematically extracts and incrementally fuses complementary descriptors, including color statistics, histograms across multiple color spaces, and texture features such as Local Binary Patterns (LBP) and Gray-Level Co-occurrence Matrices (GLCM), from fish eye images. Our method captures global chromatic variations from full images and localized degradations from ROI segments, fusing each independently to evaluate their effectiveness in assessing freshness. Experiments on the Freshness of the Fish Eyes (FFE) dataset demonstrate the approach's effectiveness: in a standard train-test setting, a LightGBM classifier achieved 77.56% accuracy, an improvement of 14.35 percentage points over the previous deep learning baseline of 63.21%. With augmented data, an Artificial Neural Network (ANN) reached 97.16% accuracy, surpassing the prior best of 77.3% by 19.86 percentage points. These results demonstrate that carefully engineered, handcrafted features, when strategically processed, yield a robust, interpretable, and reliable solution for automated fish freshness assessment, providing valuable insights for practical applications in food quality monitoring.
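As a rough illustration of the incremental fusion idea described in the abstract, the sketch below extracts toy color statistics, a coarse histogram, and a basic 3x3 LBP code, then concatenates them into one feature vector. The helper names and parameters are illustrative only; the actual pipeline uses multiple color spaces, full-resolution histograms, multi-scale LBP, and GLCM descriptors.

```python
# Minimal sketch of incremental handcrafted-feature fusion (assumed helpers,
# not the paper's implementation).

def color_stats(channel):
    """Mean and population std of one channel: simple chromatic descriptors."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    return [mean, var ** 0.5]

def histogram(channel, bins=4, vmax=256):
    """Coarse intensity histogram, normalized to sum to 1."""
    counts = [0] * bins
    for v in channel:
        counts[min(v * bins // vmax, bins - 1)] += 1
    n = len(channel)
    return [c / n for c in counts]

def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern code of the center pixel."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, v in enumerate(neighbors) if v >= c)

def fuse(*feature_groups):
    """Incremental fusion here is simple concatenation of descriptor groups."""
    fused = []
    for g in feature_groups:
        fused.extend(g)
    return fused

# Toy grayscale "eye ROI" patch and its flattened channel.
patch = [[10, 200, 30], [40, 50, 60], [70, 80, 90]]
channel = [v for row in patch for v in row]

features = fuse(color_stats(channel), histogram(channel), [lbp_code(patch)])
```

The fused vector would then be fed to a classifier such as LightGBM or an ANN, as in the experiments above.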
Related papers
- Deep Feature Optimization for Enhanced Fish Freshness Assessment [0.05599792629509228]
Assessing fish freshness is vital for ensuring food safety and minimizing economic losses in the seafood industry. Recent advances in deep learning have automated visual freshness prediction, but challenges related to accuracy and feature transparency persist. This study introduces a unified three-stage framework that refines and leverages deep visual representations for reliable fish freshness assessment.
arXiv Detail & Related papers (2025-10-28T09:02:10Z) - Innovative Deep Learning Architecture for Enhanced Altered Fingerprint Recognition [0.0]
We present DeepAFRNet, a deep learning recognition model that matches and recognizes distorted fingerprint samples. The approach uses a VGG16 backbone to extract high-dimensional features and cosine similarity to compare embeddings. With strict thresholds, DeepAFRNet achieves accuracies of 96.7%, 98.76%, and 99.54% for the three levels.
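The embedding comparison described in this summary reduces to cosine similarity against a decision threshold. A minimal sketch follows; the 0.9 threshold and function names are assumed placeholders, not values from the paper.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_match(emb_a, emb_b, threshold=0.9):
    """Declare a fingerprint match when similarity clears a strict threshold."""
    return cosine_similarity(emb_a, emb_b) >= threshold
```

In practice the embeddings would come from the VGG16 backbone; here any equal-length numeric vectors work.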
arXiv Detail & Related papers (2025-09-24T20:12:37Z) - AQUA20: A Benchmark Dataset for Underwater Species Classification under Challenging Conditions [1.2289361708127877]
This paper introduces AQUA20, a comprehensive benchmark dataset comprising 8,171 underwater images across 20 marine species. Thirteen state-of-the-art deep learning models were evaluated to benchmark their performance in classifying marine species under challenging conditions. Results show ConvNeXt achieving the best performance, with a Top-3 accuracy of 98.82% and a Top-1 accuracy of 90.69%, as well as the highest overall F1-score of 88.92% with a moderately large parameter count.
arXiv Detail & Related papers (2025-06-20T19:54:35Z) - Contrastive Visual Data Augmentation [119.51630737874855]
Large multimodal models (LMMs) often struggle to recognize novel concepts, as they rely on pre-trained knowledge and have limited ability to capture subtle visual details. We propose a Contrastive visual Data Augmentation (CoDA) strategy to help LMMs better align nuanced visual features with language. CoDA extracts key contrastive textual and visual features of target concepts against the known concepts they are misrecognized as, and then uses multimodal generative models to produce targeted synthetic data.
arXiv Detail & Related papers (2025-02-24T23:05:31Z) - NVS-SQA: Exploring Self-Supervised Quality Representation Learning for Neurally Synthesized Scenes without References [55.35182166250742]
We propose NVS-SQA, a quality assessment method that learns no-reference quality representations through self-supervision. Traditional self-supervised learning predominantly relies on the "same instance, similar representation" assumption and extensive datasets. We employ photorealistic cues and quality scores as learning objectives, along with a specialized contrastive pair preparation process, to improve the effectiveness and efficiency of learning.
arXiv Detail & Related papers (2025-01-11T09:12:43Z) - FISHing in Uncertainty: Synthetic Contrastive Learning for Genetic Aberration Detection [1.3373458503586262]
Existing FISH image classification methods face challenges due to signal variability and intrinsic uncertainty.
We introduce a novel approach that leverages synthetic images to eliminate the requirement for manual annotations.
We demonstrate the superior generalization capabilities and uncertainty calibration of our method, which is trained on synthetic data.
arXiv Detail & Related papers (2024-11-01T20:50:48Z) - SLYKLatent: A Learning Framework for Gaze Estimation Using Deep Facial Feature Learning [0.0]
We present SLYKLatent, a novel approach for enhancing gaze estimation by addressing appearance instability challenges in datasets.
SLYKLatent utilizes Self-Supervised Learning for initial training with facial expression datasets, followed by refinement with a patch-based tri-branch network.
Our evaluation on benchmark datasets achieves a 10.9% improvement on Gaze360, surpasses the top MPIIFaceGaze results by 3.8%, and leads on a subset of ETH-XGaze by 11.6%.
arXiv Detail & Related papers (2024-02-02T16:47:18Z) - Fruit Quality Assessment with Densely Connected Convolutional Neural Network [0.0]
We have exploited the concept of Densely Connected Convolutional Neural Networks (DenseNets) for fruit quality assessment.
The proposed pipeline achieved a remarkable accuracy of 99.67%.
The robustness of the model was further tested for fruit classification and quality assessment tasks where the model produced a similar performance.
arXiv Detail & Related papers (2022-12-08T13:11:47Z) - Learning Diversified Feature Representations for Facial Expression Recognition in the Wild [97.14064057840089]
We propose a mechanism to diversify the features extracted by CNN layers of state-of-the-art facial expression recognition architectures.
Experimental results on three well-known facial expression recognition in-the-wild datasets, AffectNet, FER+, and RAF-DB, show the effectiveness of our method.
arXiv Detail & Related papers (2022-10-17T19:25:28Z) - Improving Visual Grounding by Encouraging Consistent Gradient-based Explanations [58.442103936918805]
We show that Attention Mask Consistency (AMC) produces superior visual grounding results compared to previous methods.
AMC is effective, easy to implement, and is general as it can be adopted by any vision-language model.
arXiv Detail & Related papers (2022-06-30T17:55:12Z) - To be Critical: Self-Calibrated Weakly Supervised Learning for Salient Object Detection [95.21700830273221]
Weakly-supervised salient object detection (WSOD) aims to develop saliency models using image-level annotations.
We propose a self-calibrated training strategy by explicitly establishing a mutual calibration loop between pseudo labels and network predictions.
We demonstrate that even a much smaller dataset with well-matched annotations can help models achieve better performance and generalizability.
arXiv Detail & Related papers (2021-09-04T02:45:22Z) - Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector.
Our method is able to pseudo-label the very confident predictions, suppressing a potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.