Utilizing unsupervised learning to improve sward content prediction and
herbage mass estimation
- URL: http://arxiv.org/abs/2204.09343v1
- Date: Wed, 20 Apr 2022 09:28:11 GMT
- Title: Utilizing unsupervised learning to improve sward content prediction and
herbage mass estimation
- Authors: Paul Albert, Mohamed Saadeldin, Badri Narayanan, Brian Mac Namee,
Deirdre Hennessy, Aisling H. O'Connor, Noel E. O'Connor and Kevin McGuinness
- Abstract summary: In this work, we enhance the deep learning solution by reducing the need for ground-truthed (GT) images when training the neural network.
We demonstrate how unsupervised contrastive learning can be used in the sward composition prediction problem.
- Score: 15.297992694028807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sward species composition estimation is a tedious process. Herbage must be
collected in the field, manually separated into components, dried, and weighed
to estimate species composition. Deep learning approaches using neural networks
have been used in previous work to propose faster and more cost-efficient
alternatives to this process by estimating the biomass information from a
picture of an area of pasture alone. Deep learning approaches have, however,
struggled to generalize to distant geographical locations, necessitating
further data collection to retrain and perform optimally in different climates.
In this work, we enhance the deep learning solution by reducing the need for
ground-truthed (GT) images when training the neural network. We demonstrate how
unsupervised contrastive learning can be used in the sward composition
prediction problem and compare with the state-of-the-art on the publicly
available GrassClover dataset collected in Denmark, as well as a more recent
dataset from Ireland where we tackle herbage mass and height estimation.
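The abstract does not describe the training pipeline in detail. As a rough illustration of how unsupervised contrastive pre-training can reduce the need for ground-truthed images, the sketch below shows a SimCLR-style two-stage setup: label-free contrastive pre-training on unlabelled pasture images, followed by supervised fine-tuning on a smaller GT set for composition and herbage mass regression. The backbone, augmentations, temperature, head sizes, and data loaders are assumptions made for illustration only, not the authors' implementation.

```python
# Illustrative sketch only (assumed components, not the paper's exact setup):
# stage 1 pre-trains an encoder with a SimCLR-style contrastive loss on
# unlabelled pasture images; stage 2 fine-tunes it on the smaller set of
# ground-truthed (GT) images for sward composition / herbage mass regression.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class ContrastiveModel(nn.Module):
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)  # assumed backbone
        self.feat_dim = backbone.fc.in_features               # 512 for ResNet-18
        backbone.fc = nn.Identity()                           # expose raw features
        self.encoder = backbone
        self.projector = nn.Sequential(                       # head used only for pre-training
            nn.Linear(self.feat_dim, self.feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(self.feat_dim, proj_dim),
        )

    def forward(self, x):
        return self.projector(self.encoder(x))


def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views of the same unlabelled batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, D), unit norm
    sim = z @ z.t() / temperature                             # cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    # Row i's positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


model = ContrastiveModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1 -- label-free pre-training (loader assumed to yield two random
# augmentations of each unlabelled pasture image):
# for view1, view2 in unlabelled_loader:
#     loss = nt_xent_loss(model(view1), model(view2))
#     optimizer.zero_grad(); loss.backward(); optimizer.step()

# Stage 2 -- supervised fine-tuning on GT images: swap the projection head
# for a small regression head (here 5 assumed outputs, e.g. grass/clover/weed
# fractions plus herbage mass and height) and train with a standard MSE loss.
regressor = nn.Sequential(model.encoder, nn.Linear(model.feat_dim, 5))
# for images, targets in labelled_loader:
#     loss = F.mse_loss(regressor(images), targets)
#     ...
```

In this split, only stage 2 consumes GT labels, which is where the claimed reduction in the need for ground-truthed images would come from; the bulk of the representation is learnt from unlabelled imagery.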
Related papers
- Enhancing Bronchoscopy Depth Estimation through Synthetic-to-Real Domain Adaptation [2.795503750654676]
We propose a transfer learning framework that leverages synthetic data with depth labels for training and adapts domain knowledge for accurate depth estimation in real bronchoscope data.
Our network demonstrates improved depth prediction on real footage using domain adaptation compared to training solely on synthetic data, validating our approach.
arXiv Detail & Related papers (2024-11-07T03:48:35Z)
- KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training [2.8804804517897935]
We propose a method for hiding the least-important samples during the training of deep neural networks.
We adaptively find samples to exclude in a given epoch based on their contribution to the overall learning process.
Our method can reduce total training time by up to 22% while impacting accuracy by only 0.4% compared to the baseline.
arXiv Detail & Related papers (2023-10-16T06:19:29Z)
- Semantic Segmentation of Vegetation in Remote Sensing Imagery Using Deep Learning [77.34726150561087]
We propose an approach for creating a multi-modal and large-temporal dataset comprised of publicly available Remote Sensing data.
We use Convolutional Neural Network (CNN) models that are capable of separating different classes of vegetation.
arXiv Detail & Related papers (2022-09-28T18:51:59Z)
- Embedding Earth: Self-supervised contrastive pre-training for dense land cover classification [61.44538721707377]
We present Embedding Earth, a self-supervised contrastive pre-training method for leveraging the large availability of satellite imagery.
We observe significant improvements of up to 25% absolute mIoU when pre-training with our proposed method.
We find that learnt features can generalize between disparate regions, opening up the possibility of using the proposed pre-training scheme.
arXiv Detail & Related papers (2022-03-11T16:14:14Z)
- Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us to leverage such datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z)
- Self-supervised Audiovisual Representation Learning for Remote Sensing Data [96.23611272637943]
We propose a self-supervised approach for pre-training deep neural networks in remote sensing.
This is done in a completely label-free manner by exploiting the correspondence between geo-tagged audio recordings and remote sensing imagery.
We show that our approach outperforms existing pre-training strategies for remote sensing imagery.
arXiv Detail & Related papers (2021-08-02T07:50:50Z)
- Learning Topology from Synthetic Data for Unsupervised Depth Completion [66.26787962258346]
We present a method for inferring dense depth maps from images and sparse depth measurements.
We learn the association of sparse point clouds with dense natural shapes, and use the image as evidence to validate the predicted depth map.
arXiv Detail & Related papers (2021-06-06T00:21:12Z)
- Unsupervised Scale-consistent Depth Learning from Video [131.3074342883371]
We propose a monocular depth estimator SC-Depth, which requires only unlabelled videos for training.
Thanks to the capability of scale-consistent prediction, we show that our monocular-trained deep networks are readily integrated into the ORB-SLAM2 system.
The proposed hybrid Pseudo-RGBD SLAM shows compelling results in KITTI, and it generalizes well to the KAIST dataset without additional training.
arXiv Detail & Related papers (2021-05-25T02:17:56Z)
- Deep learning with self-supervision and uncertainty regularization to count fish in underwater images [28.261323753321328]
Effective conservation actions require effective population monitoring.
Monitoring populations through image sampling has made data collection cheaper, wide-reaching and less intrusive.
Counting animals from such data is challenging, particularly when densely packed in noisy images.
Deep learning is the state-of-the-art method for many computer vision tasks, but it has yet to be properly explored for counting animals.
arXiv Detail & Related papers (2021-04-30T13:02:19Z)
- Dataset Condensation with Gradient Matching [36.14340188365505]
We propose a training set synthesis technique for data-efficient learning, called Dataset Condensation, that learns to condense a large dataset into a small set of informative synthetic samples for training deep neural networks from scratch.
We rigorously evaluate its performance in several computer vision benchmarks and demonstrate that it significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2020-06-10T16:30:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.