Plastic Waste Classification Using Deep Learning: Insights from the WaDaBa Dataset
- URL: http://arxiv.org/abs/2412.20232v1
- Date: Sat, 28 Dec 2024 18:00:52 GMT
- Title: Plastic Waste Classification Using Deep Learning: Insights from the WaDaBa Dataset
- Authors: Suman Kunwar, Banji Raphael Owabumoye, Abayomi Simeon Alade
- Abstract summary: This study focuses on convolutional neural networks (CNNs) and object detection models such as YOLO (You Only Look Once).
The study shows that YOLO-11m achieved the highest accuracy (98.03%) with an mAP50 of 0.990, while YOLO-11n performed similarly and achieved the highest mAP50 (0.992).
Lightweight models such as YOLO-10n trained faster but with lower accuracy, whereas MobileNet V2 showed impressive performance (97.12% accuracy) but fell short in object detection.
- Abstract: With the increasing use of plastic, the challenges of managing plastic waste have intensified, emphasizing the need for effective solutions for classification and recycling. This study explores the potential of deep learning, focusing on convolutional neural networks (CNNs) and object detection models such as YOLO (You Only Look Once), to tackle this issue using the WaDaBa dataset. The study shows that YOLO-11m achieved the highest accuracy (98.03%) with an mAP50 of 0.990, while YOLO-11n performed similarly and achieved the highest mAP50 (0.992). Lightweight models such as YOLO-10n trained faster but with lower accuracy, whereas MobileNet V2 showed impressive performance (97.12% accuracy) but fell short in object detection. Our study highlights the potential of deep learning models in transforming how we classify plastic waste, with YOLO models proving to be the most effective. By balancing accuracy and computational efficiency, these models can help create scalable, impactful solutions in waste management and recycling.
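As a rough illustration of the comparison described in the abstract, the sketch below fine-tunes and validates a YOLO-11 detector with the Ultralytics Python API and reads off the mAP50 metric reported above. The dataset configuration file (wadaba.yaml), class annotations, and hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): fine-tune a YOLO-11 model on a
# WaDaBa-style detection dataset and report the validation metrics that
# the study compares across models.
from ultralytics import YOLO

model = YOLO("yolo11m.pt")      # pretrained YOLO-11m checkpoint
model.train(
    data="wadaba.yaml",         # hypothetical dataset config pointing at WaDaBa images/labels
    epochs=100,                 # assumed training budget
    imgsz=640,                  # assumed input resolution
    batch=16,
)

metrics = model.val()           # validate on the held-out split
print(f"mAP50:    {metrics.box.map50:.3f}")
print(f"mAP50-95: {metrics.box.map:.3f}")
```

Swapping "yolo11m.pt" for "yolo11n.pt" or "yolov10n.pt" reproduces the kind of accuracy-versus-efficiency comparison summarized in the abstract.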
Related papers
- Evaluating the Evolution of YOLO (You Only Look Once) Models: A Comprehensive Benchmark Study of YOLO11 and Its Predecessors [0.0]
This study presents a benchmark analysis of various YOLO (You Only Look Once) algorithms, from YOLOv3 to the newest addition, YOLO11.
It evaluates their performance on three diverse datasets: Traffic Signs (with varying object sizes), African Wildlife (with diverse aspect ratios and at least one instance of the object per image), and Ships and Vessels (with small-sized objects of a single class).
arXiv Detail & Related papers (2024-10-31T20:45:00Z) - Optimizing YOLO Architectures for Optimal Road Damage Detection and Classification: A Comparative Study from YOLOv7 to YOLOv10 [0.0]
This paper presents a comprehensive workflow for road damage detection using deep learning models.
To accommodate hardware limitations, large images are cropped, and lightweight models are utilized.
The proposed approach employs multiple model architectures, including a custom YOLOv7 model with Coordinate Attention layers and a Tiny YOLOv7 model.
arXiv Detail & Related papers (2024-10-10T22:55:12Z) - EFA-YOLO: An Efficient Feature Attention Model for Fire and Flame Detection [3.334973867478745]
We propose two key modules: EAConv (Efficient Attention Convolution) and EADown (Efficient Attention Downsampling).
Based on these two modules, we design an efficient and lightweight flame detection model, EFA-YOLO (Efficient Feature Attention YOLO)
EFA-YOLO exhibits a significant enhancement in detection accuracy (mAP) and inference speed, with the model parameter count reduced by 94.6% and inference speed improved by 88 times.
arXiv Detail & Related papers (2024-09-19T10:20:07Z) - YOLOv10: Real-Time End-to-End Object Detection [68.28699631793967]
YOLOs have emerged as the predominant paradigm in the field of real-time object detection.
The reliance on non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs; a minimal NMS sketch appears after this list.
We introduce the holistic efficiency-accuracy driven model design strategy for YOLOs.
arXiv Detail & Related papers (2024-05-23T11:44:29Z) - Learning from History: Task-agnostic Model Contrastive Learning for Image Restoration [79.04007257606862]
This paper introduces an innovative method termed 'learning from history', which dynamically generates negative samples from the target model itself.
Our approach, named Model Contrastive Learning for Image Restoration (MCLIR), rejuvenates latency models as negative models, making it compatible with diverse image restoration tasks.
arXiv Detail & Related papers (2023-09-12T07:50:54Z) - YOLO-MS: Rethinking Multi-Scale Representation Learning for Real-time Object Detection [63.36722419180875]
We provide an efficient and performant object detector, termed YOLO-MS.
We train our YOLO-MS on the MS COCO dataset from scratch without relying on any other large-scale datasets.
Our work can also serve as a plug-and-play module for other YOLO models.
arXiv Detail & Related papers (2023-08-10T10:12:27Z) - PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning [54.409634256153154]
In Reinforcement Learning (RL), enhancing sample efficiency is crucial.
In principle, off-policy RL algorithms can improve sample efficiency by allowing multiple updates per environment interaction.
Our study investigates the underlying causes of this phenomenon by dividing plasticity into two aspects.
arXiv Detail & Related papers (2023-06-19T06:14:51Z) - An Adversarial Active Sampling-based Data Augmentation Framework for Manufacturable Chip Design [55.62660894625669]
Lithography modeling is a crucial problem in chip design to ensure a chip design mask is manufacturable.
Recent developments in machine learning have provided alternative solutions in replacing the time-consuming lithography simulations with deep neural networks.
We propose a litho-aware data augmentation framework to resolve the dilemma of limited data and improve the machine learning model performance.
arXiv Detail & Related papers (2022-10-27T20:53:39Z) - A lightweight and accurate YOLO-like network for small target detection in Aerial Imagery [94.78943497436492]
We present YOLO-S, a simple, fast and efficient network for small target detection.
YOLO-S exploits a small feature extractor based on Darknet20, as well as skip connection, via both bypass and concatenation.
YOLO-S has an 87% smaller parameter size and nearly half the FLOPs of YOLOv3, making deployment practical for low-power industrial applications.
arXiv Detail & Related papers (2022-04-05T16:29:49Z) - Evaluation of YOLO Models with Sliced Inference for Small Object Detection [0.0]
This work aims to benchmark the YOLOv5 and YOLOX models for small object detection.
Combining sliced fine-tuning and sliced inference produced substantial improvements for all models.
arXiv Detail & Related papers (2022-03-09T15:24:30Z)
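For context on the NMS post-processing step that the YOLOv10 entry above aims to remove, here is a minimal, class-agnostic sketch of standard greedy non-maximum suppression. It illustrates the textbook algorithm only and is not code from any of the papers listed.

```python
# Generic greedy non-maximum suppression (class-agnostic), for illustration only.
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list[int]:
    """boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,). Returns indices of kept boxes."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]        # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Overlap of the kept box with the remaining candidates.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Discard candidates whose IoU with the kept box exceeds the threshold.
        order = order[1:][iou <= iou_thresh]
    return keep
```

NMS-free designs such as YOLOv10 avoid this step at inference time, which is what enables fully end-to-end deployment.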
This list is automatically generated from the titles and abstracts of the papers on this site.