Direct Localization in Underwater Acoustics via Convolutional Neural
Networks: A Data-Driven Approach
- URL: http://arxiv.org/abs/2207.10222v1
- Date: Wed, 20 Jul 2022 22:40:11 GMT
- Title: Direct Localization in Underwater Acoustics via Convolutional Neural
Networks: A Data-Driven Approach
- Authors: Amir Weiss, Toros Arikan and Gregory W. Wornell
- Abstract summary: Direct localization (DLOC) methods generally outperform their indirect two-step counterparts.
Underwater acoustic DLOC methods require prior knowledge of the environment.
We propose what is, to the best of our knowledge, the first data-driven DLOC method.
- Score: 31.399611901926583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Direct localization (DLOC) methods, which use the observed data to localize a
source at an unknown position in a one-step procedure, generally outperform
their indirect two-step counterparts (e.g., using time-difference of arrivals).
However, underwater acoustic DLOC methods require prior knowledge of the
environment, and are computationally costly, hence slow. We propose what is, to
the best of our knowledge, the first data-driven DLOC method. Inspired by
classical and contemporary optimal model-based DLOC solutions, and leveraging
the capabilities of convolutional neural networks (CNNs), we devise a holistic
CNN-based solution. Our method includes a specifically-tailored input
structure, architecture, loss function, and a progressive training procedure,
which are of independent interest in the broader context of machine learning.
We demonstrate that our method outperforms attractive alternatives, and
asymptotically matches the performance of an oracle optimal model-based
solution.
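The abstract describes the approach only at a high level, so the following is a minimal, illustrative sketch of a CNN-based direct localization model: a network mapping a stack of pairwise sensor cross-correlations (an assumed input structure, not necessarily the paper's tailored one) to an (x, y) source-position estimate, trained with a plain position-regression loss. The class name, layer sizes, and MSE objective are assumptions for illustration, not the authors' exact architecture, loss function, or progressive training procedure.

```python
# Illustrative sketch only: a small CNN that regresses a 2-D source position
# from a (num_pairs x corr_len) array of pairwise cross-correlations.
# Input layout, layer sizes, and loss are assumptions, not the paper's design.
import torch
import torch.nn as nn

class DlocCNN(nn.Module):  # hypothetical name
    def __init__(self, num_pairs: int, corr_len: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 2),          # (x, y) position estimate
        )

    def forward(self, x):              # x: (batch, 1, num_pairs, corr_len)
        return self.head(self.features(x))

model = DlocCNN(num_pairs=28, corr_len=256)          # e.g., 8 sensors -> 28 pairs
obs = torch.randn(8, 1, 28, 256)                     # hypothetical observations
pos_true = torch.rand(8, 2)                          # hypothetical true positions
loss = nn.functional.mse_loss(model(obs), pos_true)  # simple regression loss
loss.backward()
```

In contrast, an indirect two-step method would first estimate time-differences of arrival from the same cross-correlations and only then solve for the position; the direct approach sketched above regresses the position in a single step.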
Related papers
- Recursive Gaussian Process State Space Model [4.572915072234487]
We propose a new online GPSSM method with adaptive capabilities for both operating domains and GP hyperparameters.
An online selection algorithm for inducing points is developed based on informative criteria to achieve lightweight learning.
Comprehensive evaluations on both synthetic and real-world datasets demonstrate the superior accuracy, computational efficiency, and adaptability of our method.
arXiv Detail & Related papers (2024-11-22T02:22:59Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- On the Effective Usage of Priors in RSS-based Localization [56.68864078417909]
In this paper, we study the localization problem in dense urban settings.
We propose LocUNet, a Received Signal Strength (RSS) fingerprint and convolutional neural network-based localization algorithm.
We first recognize LocUNet's ability to learn the underlying prior distribution of the receiver (Rx) position, or of Rx and transmitter (Tx) association preferences, from the training data, and attribute its high performance to this.
arXiv Detail & Related papers (2022-11-28T00:31:02Z)
- Exploiting Temporal Structures of Cyclostationary Signals for Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator; a minimal sketch of such a model appears after this list.
arXiv Detail & Related papers (2022-08-22T14:04:56Z)
- Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration [83.96729205383501]
We introduce prompt-based learning to achieve fast adaptation for language embeddings.
Our model can adapt to diverse vision-language navigation tasks, including VLN and REVERIE.
arXiv Detail & Related papers (2022-03-08T11:01:24Z)
- A Near-Optimal Algorithm for Debiasing Trained Machine Learning Models [21.56208997475512]
We present a scalable post-processing algorithm for debiasing trained models, including deep neural networks (DNNs).
We prove it is near-optimal by bounding its excess Bayes risk.
We empirically validate its advantages on standard benchmark datasets.
arXiv Detail & Related papers (2021-06-06T09:45:37Z)
- Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures a certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z)
- Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures a certain "fairness" across different data samples.
arXiv Detail & Related papers (2020-11-16T08:24:34Z)
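For the cyclostationary single-channel source separation entry above, here is a minimal sketch of a 1-D U-Net-style encoder-decoder that maps a mixture waveform to an estimate of one of its components. The waveform-domain input, layer sizes, and MSE objective are illustrative assumptions, not that paper's exact model or training setup.

```python
# Minimal 1-D U-Net sketch for single-channel source separation (illustrative).
import torch
import torch.nn as nn

class TinyUNet1d(nn.Module):  # hypothetical name and sizes
    def __init__(self, ch: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(1, ch, 15, padding=7), nn.ReLU())
        self.down = nn.Conv1d(ch, 2 * ch, 15, stride=2, padding=7)
        self.mid = nn.Sequential(nn.Conv1d(2 * ch, 2 * ch, 15, padding=7), nn.ReLU())
        self.up = nn.ConvTranspose1d(2 * ch, ch, 16, stride=2, padding=7)
        self.dec = nn.Sequential(nn.Conv1d(2 * ch, ch, 15, padding=7), nn.ReLU())
        self.out = nn.Conv1d(ch, 1, 1)

    def forward(self, x):                       # x: (batch, 1, time), time even
        e = self.enc(x)                          # (batch, ch, time)
        m = self.mid(self.down(e))               # (batch, 2*ch, time/2)
        u = self.up(m)                           # (batch, ch, time)
        d = self.dec(torch.cat([u, e], dim=1))   # skip connection from encoder
        return self.out(d)                       # estimated source component

mix = torch.randn(4, 1, 1024)                    # hypothetical two-signal mixture
target = torch.randn(4, 1, 1024)                 # hypothetical component of interest
loss = nn.functional.mse_loss(TinyUNet1d()(mix), target)  # MSE training objective
```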
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.