A deep scalable neural architecture for soil properties estimation from
spectral information
- URL: http://arxiv.org/abs/2210.17314v1
- Date: Wed, 26 Oct 2022 16:50:06 GMT
- Title: A deep scalable neural architecture for soil properties estimation from
spectral information
- Authors: Flavio Piccoli, Micol Rossini, Roberto Colombo, Raimondo Schettini,
Paolo Napoletano
- Abstract summary: We propose an adaptive deep neural architecture for the prediction of multiple soil characteristics from the analysis of hyperspectral signatures. Results, compared with other state-of-the-art methods, confirm the effectiveness of the proposed solution.
- Score: 20.981200039553144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we propose an adaptive deep neural architecture for the
prediction of multiple soil characteristics from the analysis of hyperspectral
signatures. The proposed method overcomes the limitations of previous methods
in the state of the art: (i) it predicts multiple soil variables at once;
(ii) it allows backtracing of the spectral bands that most contribute to the
estimation of a given variable; (iii) it is based on a flexible neural
architecture capable of automatically adapting to the spectral library under
analysis. The proposed architecture is evaluated on LUCAS, a large laboratory
dataset, and on a dataset obtained by simulating the PRISMA hyperspectral
sensor. Results, compared with other state-of-the-art methods, confirm the
effectiveness of the proposed solution.
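As a minimal sketch (not the authors' code), the two core ideas of the abstract can be illustrated with a shared multi-output regressor over spectral bands plus a gradient-based saliency to backtrace which bands drive a given variable. The network sizes, soil variables, and synthetic spectrum below are all assumptions for illustration:

```python
# Hypothetical sketch: one network maps a hyperspectral signature to
# several soil variables at once; a gradient (saliency) w.r.t. the input
# ranks the bands that most contribute to one chosen variable.
import numpy as np

rng = np.random.default_rng(0)

n_bands, n_vars, hidden = 200, 3, 32   # assumed sizes, e.g. clay/sand/organic carbon
W1 = rng.normal(0, 0.1, (n_bands, hidden))
W2 = rng.normal(0, 0.1, (hidden, n_vars))

def predict(x):
    """Shared hidden layer, one output head per soil variable."""
    h = np.tanh(x @ W1)
    return h @ W2

def band_attribution(x, var_idx):
    """Gradient of one output w.r.t. each input band (simple saliency)."""
    h = np.tanh(x @ W1)
    dh = 1.0 - h**2                     # tanh derivative at the hidden layer
    # d y[var_idx] / d x = W1 @ (dh * W2[:, var_idx])
    return W1 @ (dh * W2[:, var_idx])

x = rng.normal(0, 1, n_bands)           # one synthetic spectrum
y = predict(x)                          # all soil variables in one pass
sal = band_attribution(x, var_idx=0)    # per-band contribution to variable 0
```

This only shows the multi-output/attribution pattern; the paper's actual architecture additionally adapts its topology to the spectral library under analysis.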
Related papers
- Evaluating and Explaining Earthquake-Induced Liquefaction Potential through Multi-Modal Transformers [0.0]
This study presents an explainable parallel transformer architecture for soil liquefaction prediction.
The architecture processes data from 165 case histories across 11 major earthquakes.
The model achieves 93.75% prediction accuracy on cross-regional validation sets.
arXiv Detail & Related papers (2025-02-11T09:18:07Z)
- Investigating generalization capabilities of neural networks by means of loss landscapes and Hessian analysis [0.0]
This paper studies the generalization capabilities of neural networks (NNs) using the new PyTorch library Loss Landscape Analysis (LLA).
LLA facilitates visualization and analysis of loss landscapes along with the properties of NN Hessian.
arXiv Detail & Related papers (2024-12-13T14:02:41Z)
- Fast and Reliable Probabilistic Reflectometry Inversion with Prior-Amortized Neural Posterior Estimation [73.81105275628751]
Finding all structures compatible with reflectometry data is computationally prohibitive for standard algorithms.
We address this lack of reliability with a probabilistic deep learning method that identifies all realistic structures in seconds.
Our method, Prior-Amortized Neural Posterior Estimation (PANPE), combines simulation-based inference with novel adaptive priors.
arXiv Detail & Related papers (2024-07-26T10:29:16Z)
- Learning high-dimensional causal effect [0.0]
In this work, we propose a method to generate a synthetic causal dataset that is high-dimensional.
The synthetic data simulates a causal effect using the MNIST dataset with Bernoulli treatment values.
We experiment on this dataset using Dragonnet architecture and modified architectures.
arXiv Detail & Related papers (2023-03-01T20:57:48Z)
- A Local Optima Network Analysis of the Feedforward Neural Architecture Space [0.0]
Local optima network (LON) analysis is a derivative of the fitness landscape of candidate solutions.
LONs may provide a viable paradigm by which to analyse and optimise neural architectures.
arXiv Detail & Related papers (2022-06-02T08:09:17Z)
- Pre-training via Denoising for Molecular Property Prediction [53.409242538744444]
We describe a pre-training technique that utilizes large datasets of 3D molecular structures at equilibrium.
Inspired by recent advances in noise regularization, our pre-training objective is based on denoising.
arXiv Detail & Related papers (2022-05-31T22:28:34Z)
- Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z)
- Self-Learning for Received Signal Strength Map Reconstruction with Neural Architecture Search [63.39818029362661]
We present a model based on Neural Architecture Search (NAS) and self-learning for received signal strength (RSS) map reconstruction.
The approach first finds an optimal NN architecture and simultaneously trains the deduced model on ground-truth measurements of a given RSS map.
Experimental results show that the signal predictions of this second model outperform non-learning-based state-of-the-art techniques and NN models with no architecture search.
arXiv Detail & Related papers (2021-05-17T12:19:22Z)
- A SAR speckle filter based on Residual Convolutional Neural Networks [68.8204255655161]
This work presents a novel method for filtering speckle noise from Sentinel-1 data by applying Deep Learning (DL) algorithms based on Convolutional Neural Networks (CNNs).
The obtained results, compared with the state of the art, show a clear improvement in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM).
arXiv Detail & Related papers (2021-04-19T14:43:07Z)
- Stochastic analysis of heterogeneous porous material with modified neural architecture search (NAS) based physics-informed neural networks using transfer learning [0.0]
A modified neural architecture search (NAS) based physics-informed deep learning model is presented.
A three dimensional flow model is built to provide a benchmark to the simulation of groundwater flow in highly heterogeneous aquifers.
arXiv Detail & Related papers (2020-10-03T19:57:54Z)
- Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.