Modelling Animal Biodiversity Using Acoustic Monitoring and Deep
Learning
- URL: http://arxiv.org/abs/2103.07276v1
- Date: Fri, 12 Mar 2021 13:50:31 GMT
- Title: Modelling Animal Biodiversity Using Acoustic Monitoring and Deep Learning
- Authors: C. Chalmers, P. Fergus, S. Wich and S. N. Longmore
- Abstract summary: This paper outlines an approach for achieving this using state-of-the-art machine learning to automatically extract features from time-series audio signals.
The acquired bird songs are processed using the mel-frequency cepstrum (MFC) to extract features, which are later classified using a multilayer perceptron (MLP).
Our proposed method achieved promising results with 0.74 sensitivity, 0.92 specificity and an accuracy of 0.74.
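The reported figures follow the standard confusion-matrix definitions. A minimal sketch of those definitions, using hypothetical counts for illustration (not the paper's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # true-positive rate (recall)
    specificity = tn / (tn + fp)                  # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall fraction correct
    return sensitivity, specificity, accuracy

# Hypothetical counts chosen for illustration only:
sens, spec, acc = binary_metrics(tp=74, fp=8, tn=92, fn=26)
print(sens, spec, acc)  # 0.74 0.92 0.83
```

In a multi-species setting such as this paper's, sensitivity and specificity are typically computed per class (one-vs-rest) and averaged, which is why overall accuracy need not lie at either per-class rate.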
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For centuries researchers have used sound to monitor and study wildlife.
Traditionally, conservationists have identified species by ear; however, it is
now common to deploy audio recording technology to monitor animal and ecosystem
sounds. Animals use sound for communication, mating, navigation and territorial
defence. Animal sounds provide valuable information and help conservationists
to quantify biodiversity. Acoustic monitoring has grown in popularity due to
the availability of diverse sensor types which include camera traps, portable
acoustic sensors, passive acoustic sensors, and even smartphones. Passive
acoustic sensors are easy to deploy and can be left running for long durations
to provide insights on habitat and the sounds made by animals and illegal
activity. While this technology brings enormous benefits, the amount of data
that is generated makes processing a time-consuming process for
conservationists. Consequently, there is interest among conservationists to
automatically process acoustic data to help speed up biodiversity assessments.
Processing these large data sources and extracting relevant sounds from
background noise introduces significant challenges. In this paper we outline an
approach for achieving this, using state-of-the-art machine learning to
automatically extract features from time-series audio signals and deep learning
models to classify different bird species based on the sounds they make. The
acquired bird songs are processed using the mel-frequency cepstrum (MFC) to
extract features, which are later classified using a multilayer perceptron
(MLP). Our proposed method achieved promising results with 0.74 sensitivity,
0.92 specificity and an accuracy of 0.74.
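The paper does not reproduce its pipeline here, but the core idea of mel-frequency cepstral feature extraction can be sketched for a single audio frame. The sketch below assumes a triangular mel filterbank and a DCT-II over log-mel energies; all parameter values (sample rate, FFT size, filter counts) are illustrative defaults, not the authors' settings:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfc_features(frame, sr=22050, n_fft=512, n_mels=20, n_ceps=13):
    """Mel-frequency cepstral coefficients for one windowed audio frame."""
    power = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n_fft)) ** 2
    # Triangular filters spaced evenly on the mel scale
    pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        lo, mid, hi = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
        fbank[i - 1, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)
    log_mel = np.log(fbank @ power + 1e-10)
    # DCT-II decorrelates the log-mel energies into cepstral coefficients
    k = np.arange(n_ceps)[:, None]
    n = np.arange(n_mels)[None, :]
    dct = np.cos(np.pi * k * (2 * n + 1) / (2 * n_mels))
    return dct @ log_mel

# Example: one ~23 ms frame of a 1 kHz tone
t = np.arange(512) / 22050.0
coeffs = mfc_features(np.sin(2 * np.pi * 1000.0 * t))
print(coeffs.shape)  # (13,)
```

Frame-wise coefficient vectors like this are what an MLP classifier would consume; in practice, tested implementations such as librosa's MFCC routines and scikit-learn's `MLPClassifier` would normally be used instead of hand-rolled code.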
Related papers
- Generalization in birdsong classification: impact of transfer learning methods and dataset characteristics [2.6740633963478095]
We explore the effectiveness of transfer learning in large-scale bird sound classification.
Our experiments demonstrate that both fine-tuning and knowledge distillation yield strong performance.
We advocate for more comprehensive labeling practices within the animal sound community.
arXiv Detail & Related papers (2024-09-21T11:33:12Z)
- ActiveRIR: Active Audio-Visual Exploration for Acoustic Environment Modeling [57.1025908604556]
An environment acoustic model represents how sound is transformed by the physical characteristics of an indoor environment.
We propose active acoustic sampling, a new task for efficiently building an environment acoustic model of an unmapped environment.
We introduce ActiveRIR, a reinforcement learning policy that leverages information from audio-visual sensor streams to guide agent navigation and determine optimal acoustic data sampling positions.
arXiv Detail & Related papers (2024-04-24T21:30:01Z)
- Exploring Meta Information for Audio-based Zero-shot Bird Classification [113.17261694996051]
This study investigates how meta-information can improve zero-shot audio classification.
We use bird species as an example case study due to the availability of rich and diverse meta-data.
arXiv Detail & Related papers (2023-09-15T13:50:16Z)
- Active Bird2Vec: Towards End-to-End Bird Sound Monitoring with Transformers [2.404305970432934]
We propose a shift towards end-to-end learning in bird sound monitoring by combining self-supervised learning (SSL) and deep active learning (DAL).
We aim to bypass traditional spectrogram conversions, enabling direct raw audio processing.
arXiv Detail & Related papers (2023-08-14T13:06:10Z)
- Transferable Models for Bioacoustics with Human Language Supervision [0.0]
BioLingual is a new model for bioacoustics based on contrastive language-audio pretraining.
It can identify over a thousand species' calls across taxa, complete bioacoustic tasks zero-shot, and retrieve animal vocalization recordings from natural text queries.
arXiv Detail & Related papers (2023-08-09T14:22:18Z)
- Few-shot Long-Tailed Bird Audio Recognition [3.8073142980733]
We propose a sound detection and classification pipeline to analyze soundscape recordings.
Our solution achieved 18th place of 807 teams at the BirdCLEF 2022 Challenge hosted on Kaggle.
arXiv Detail & Related papers (2022-06-22T04:14:25Z)
- Fish sounds: towards the evaluation of marine acoustic biodiversity through data-driven audio source separation [1.9116784879310027]
The marine ecosystem is changing at an alarming rate, exhibiting biodiversity loss and the migration of tropical species to temperate basins.
One of the most popular and effective methods for monitoring marine biodiversity is passive acoustic monitoring (PAM).
In this work, we show that the same techniques can be successfully used to automatically extract fish vocalizations in PAM recordings.
arXiv Detail & Related papers (2022-01-13T14:57:34Z)
- Seeing biodiversity: perspectives in machine learning for wildlife conservation [49.15793025634011]
We argue that machine learning can meet this analytic challenge to enhance our understanding, monitoring capacity, and conservation of wildlife species.
In essence, by combining new machine learning approaches with ecological domain knowledge, animal ecologists can capitalize on the abundance of data generated by modern sensor technologies.
arXiv Detail & Related papers (2021-10-25T13:40:36Z)
- Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales [97.41394631426678]
Recent research showed the promise of machine learning tools for analyzing acoustic communication in nonhuman species.
We outline the key elements required for the collection and processing of massive bioacoustic data of sperm whales.
The technological capabilities developed are likely to yield cross-applications and advancements in broader communities investigating non-human communication and animal behavioral research.
arXiv Detail & Related papers (2021-04-17T18:39:22Z)
- Discriminative Singular Spectrum Classifier with Applications on Bioacoustic Signal Recognition [67.4171845020675]
We present a bioacoustic signal classifier equipped with a discriminative mechanism to extract useful features for analysis and classification efficiently.
Unlike current bioacoustic recognition methods, which are task-oriented, the proposed model relies on transforming the input signals into vector subspaces.
The validity of the proposed method is verified using three challenging bioacoustic datasets containing anuran, bee, and mosquito species.
arXiv Detail & Related papers (2021-03-18T11:01:21Z) - Automatic image-based identification and biomass estimation of
invertebrates [70.08255822611812]
Time-consuming sorting and identification of taxa pose strong limitations on how many insect samples can be processed.
We propose to replace the standard manual approach of human expert-based sorting and identification with an automatic image-based technology.
We use state-of-the-art Resnet-50 and InceptionV3 CNNs for the classification task.
arXiv Detail & Related papers (2020-02-05T21:38:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.