An Efficient Machine Learning Framework for Forest Height Estimation from Multi-Polarimetric Multi-Baseline SAR data
- URL: http://arxiv.org/abs/2507.20798v1
- Date: Mon, 28 Jul 2025 13:07:23 GMT
- Title: An Efficient Machine Learning Framework for Forest Height Estimation from Multi-Polarimetric Multi-Baseline SAR data
- Authors: Francesca Razzano, Wenyu Yang, Sergio Vitale, Giampaolo Ferraioli, Silvia Liberata Ullo, Gilda Schirinzi
- Abstract summary: This paper introduces FGump, a forest height estimation framework based on gradient boosting that uses multi-channel SAR processing with LiDAR profiles as Ground Truth (GT). It strikes a strong balance between accuracy and computational efficiency, using a limited set of hand-designed features and avoiding heavy preprocessing (e.g., calibration and/or quantization). Experimental results confirm that FGump outperforms State-of-the-Art (SOTA) AI-based and classical methods, achieving higher accuracy and significantly lower training and inference times.
- Score: 2.395410408500006
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Accurate forest height estimation is crucial for climate change monitoring and carbon cycle assessment. Synthetic Aperture Radar (SAR), particularly in multi-channel configurations, has long supported 3D forest structure reconstruction through model-based techniques. More recently, data-driven approaches using Machine Learning (ML) and Deep Learning (DL) have opened new opportunities for forest parameter retrieval. This paper introduces FGump, a forest height estimation framework based on gradient boosting that uses multi-channel SAR processing with LiDAR profiles as Ground Truth (GT). Unlike typical ML and DL approaches that require large datasets and complex architectures, FGump strikes a strong balance between accuracy and computational efficiency, using a limited set of hand-designed features and avoiding heavy preprocessing (e.g., calibration and/or quantization). Evaluated under both classification and regression paradigms, the proposed framework demonstrates that the regression formulation enables fine-grained, continuous estimations and avoids quantization artifacts, yielding more precise measurements without rounding. Experimental results confirm that FGump outperforms State-of-the-Art (SOTA) AI-based and classical methods, achieving higher accuracy and significantly lower training and inference times.
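The paper's own pipeline is not reproduced here, but its core idea (gradient boosting a regressor over a small set of hand-designed per-pixel features, with LiDAR heights as regression targets) can be sketched in plain Python. The two features and the synthetic "LiDAR" heights below are hypothetical stand-ins, not the authors' data or feature set; the stump-based booster is a minimal textbook implementation, not FGump itself.

```python
import random

def fit_stump(X, y):
    """Exhaustively find the single-feature threshold split minimizing squared error."""
    best = None
    n = len(y)
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [y[i] for i in range(n) if X[i][j] <= t]
            right = [y[i] for i in range(n) if X[i][j] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = sum((v - lm) ** 2 for v in left) + sum((v - rm) ** 2 for v in right)
            if best is None or err < best[0]:
                best = (err, (j, t, lm, rm))
    return best[1]

def boost(X, y, rounds=60, lr=0.1):
    """Gradient boosting for squared loss: each stump is fit to the current residuals."""
    base = sum(y) / len(y)
    stumps, pred = [], [base] * len(y)
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        j, t, lm, rm = fit_stump(X, resid)
        stumps.append((j, t, lm, rm))
        pred = [p + lr * (lm if x[j] <= t else rm) for p, x in zip(pred, X)]
    return base, lr, stumps

def predict(model, x):
    base, lr, stumps = model
    return base + sum(lr * (lm if x[j] <= t else rm) for j, t, lm, rm in stumps)

# Hypothetical per-pixel features (e.g. a coherence magnitude and a phase-derived
# height proxy) and synthetic "LiDAR" heights as ground truth.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(80)]
y = [5 * x0 + 35 * x1 + random.gauss(0, 1) for x0, x1 in X]

model = boost(X, y)
mae = sum(abs(predict(model, x) - yi) for x, yi in zip(X, y)) / len(y)
baseline = sum(abs(sum(y) / len(y) - yi) for yi in y) / len(y)
```

The appeal in this setting is that a boosted ensemble of shallow trees trains in seconds on a handful of features, whereas a DL alternative needs large labeled datasets and far longer training.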
Related papers
- Supervised Machine Learning Methods with Uncertainty Quantification for Exoplanet Atmospheric Retrievals from Transmission Spectroscopy [1.6874375111244329]
We present a systematic study of several existing machine learning regression techniques. We compare their performance for retrieving exoplanet atmospheric parameters from transmission spectra. The best-performing combination of ML model and preprocessing scheme is validated on the case study of JWST observations of WASP-39b.
arXiv Detail & Related papers (2025-08-07T02:28:21Z) - A Unified Graph-based Framework for Scalable 3D Tree Reconstruction and Non-Destructive Biomass Estimation from Point Clouds [8.821870725779071]
Estimating forest above-ground biomass (AGB) is crucial for assessing carbon storage and supporting sustainable forest management. The Quantitative Structural Model (QSM) offers a non-destructive approach to AGB estimation through 3D tree structural reconstruction. This study presents a novel unified framework that enables end-to-end processing of large-scale point clouds.
arXiv Detail & Related papers (2025-06-18T15:55:47Z) - TreeLoRA: Efficient Continual Learning via Layer-Wise LoRAs Guided by a Hierarchical Gradient-Similarity Tree [52.44403214958304]
In this paper, we introduce TreeLoRA, a novel approach that constructs layer-wise adapters by leveraging hierarchical gradient similarity. To reduce the computational burden of task similarity estimation, we employ bandit techniques to develop an algorithm based on lower confidence bounds. Experiments on both vision transformers (ViTs) and large language models (LLMs) demonstrate the effectiveness and efficiency of our approach.
arXiv Detail & Related papers (2025-06-12T05:25:35Z) - Quantile Regression with Large Language Models for Price Prediction [15.277244542405345]
Large Language Models (LLMs) have shown promise in structured prediction tasks, including regression. We propose a novel quantile regression approach that enables LLMs to produce full predictive distributions. A Mistral-7B model fine-tuned with quantile heads significantly outperforms traditional approaches for both point and distributional estimations.
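The objective behind quantile heads of this kind is the pinball (quantile) loss. The LLM-specific part is out of scope here, but the loss itself is easy to illustrate: minimizing pinball loss over constant predictions recovers an empirical quantile. A minimal sketch on synthetic data, with no LLM involved:

```python
def pinball_loss(ys, q, tau):
    """Average pinball (quantile) loss of a constant prediction q at level tau."""
    return sum(tau * (y - q) if y >= q else (1 - tau) * (q - y) for y in ys) / len(ys)

def fit_constant_quantile(ys, tau):
    """The pinball-loss minimizer over constants is an empirical tau-quantile;
    searching the data points themselves suffices for this piecewise-linear loss."""
    return min(sorted(ys), key=lambda c: pinball_loss(ys, c, tau))

ys = list(range(1, 101))
median = fit_constant_quantile(ys, 0.5)  # an empirical median
p90 = fit_constant_quantile(ys, 0.9)     # an empirical 90th percentile
```

Training one head per tau against this loss is what lets a single model emit a full predictive distribution rather than a point estimate.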
arXiv Detail & Related papers (2025-06-07T04:19:28Z) - RoSTE: An Efficient Quantization-Aware Supervised Fine-Tuning Approach for Large Language Models [53.571195477043496]
We propose an algorithm named Rotated Straight-Through-Estimator (RoSTE). RoSTE combines quantization-aware supervised fine-tuning (QA-SFT) with an adaptive rotation strategy to reduce activation outliers. Our findings reveal that the prediction error is directly proportional to the quantization error of the converged weights, which can be effectively managed through an optimized rotation configuration.
arXiv Detail & Related papers (2025-02-13T06:44:33Z) - ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel PETL method that reuses the hypercomplex parameterized space constructed by the Kronecker product to Aggregate Low Rank Experts. Thanks to its artful design, ALoRE adds negligible extra parameters and can be effortlessly merged into the frozen backbone.
arXiv Detail & Related papers (2024-12-11T12:31:30Z) - Scalable Bayesian Tensor Ring Factorization for Multiway Data Analysis [24.04852523970509]
We propose a novel BTR model that incorporates a nonparametric Multiplicative Gamma Process (MGP) prior. To handle discrete data, we introduce the Pólya-Gamma augmentation for closed-form updates. We develop an efficient Gibbs sampler for consistent posterior simulation, which reduces the computational complexity of the previous VI algorithm by two orders of magnitude.
arXiv Detail & Related papers (2024-12-04T13:55:14Z) - Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs). Our proposed method first trains SOMs on unlabeled data; a minimal number of available labeled data points are then assigned to key best matching units (BMUs). Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
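The BMU-assignment step above is simple to sketch. In the following, the prototype vectors are hard-coded stand-ins for an already-trained SOM (the training on unlabeled data is omitted), and the labels and query points are invented; the fallback to the nearest labeled unit along the grid is a simplified rendition of the topological projection.

```python
def bmu_index(prototypes, x):
    """Index of the best matching unit: the prototype closest to x."""
    return min(range(len(prototypes)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(prototypes[i], x)))

# Hypothetical prototypes of a 1-D SOM assumed already trained on unlabeled data.
prototypes = [[0.0], [0.25], [0.5], [0.75], [1.0]]

# A minimal number of labeled points is projected onto the map:
# each label is attached to its best matching unit.
unit_labels = {}
for x, y in [([0.05], 10.0), ([0.95], 50.0)]:
    unit_labels[bmu_index(prototypes, x)] = y

def predict(x):
    """Return the label of the query's BMU, falling back to the
    nearest labeled unit along the map's grid topology."""
    u = bmu_index(prototypes, x)
    nearest = min(unit_labels, key=lambda k: abs(k - u))
    return unit_labels[nearest]
```

Because only the projection step consumes labels, the approach stays usable when labeled data are scarce.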
arXiv Detail & Related papers (2024-01-12T22:51:48Z) - Forest Parameter Prediction by Multiobjective Deep Learning of Regression Models Trained with Pseudo-Target Imputation [6.853936752111048]
In prediction of forest parameters with data from remote sensing, regression models have traditionally been trained on a small sample of ground reference data.
This paper proposes to impute this sample of true prediction targets with data from an existing RS-based prediction map that we consider as pseudo-targets.
We use prediction maps constructed from airborne laser scanning (ALS) data to provide accurate pseudo-targets and free data from Sentinel-1's C-band synthetic aperture radar (SAR) as regressors.
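A toy rendition of the pseudo-target idea: augment a tiny ground-reference sample with many noisier pseudo-targets read off an existing prediction map, then fit one regression on the union. The linear model, the assumed relation, and all numbers below are illustrative stand-ins for the paper's ALS/Sentinel-1 setting, not its actual data or (deep) model.

```python
import random

def ols_line(pairs):
    """Closed-form simple linear regression; returns (slope, intercept)."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

random.seed(1)
f = lambda x: 40.0 * x  # assumed true relation: forest parameter vs. SAR regressor

ground = [(x, f(x)) for x in (0.2, 0.8)]  # small ground-reference sample
pseudo = []                               # targets imputed from an existing map
for _ in range(200):
    x = random.random()
    pseudo.append((x, f(x) + random.gauss(0, 2.0)))  # map values carry noise

slope, intercept = ols_line(ground + pseudo)
```

The pseudo-targets are individually less reliable than field measurements, but their volume stabilizes the fit where ground reference data alone would be too sparse.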
arXiv Detail & Related papers (2023-06-19T18:10:47Z) - To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis [50.31589712761807]
Large language models (LLMs) are notoriously token-hungry during pre-training, and high-quality text data on the web is approaching its scaling limit for LLMs.
We investigate the consequences of repeating pre-training data, revealing that the model is susceptible to overfitting.
We then examine the key factors contributing to multi-epoch degradation, finding that significant factors include dataset size, model parameters, and training objectives.
arXiv Detail & Related papers (2023-05-22T17:02:15Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions are placed on the parameters through plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - An autoencoder wavelet based deep neural network with attention
mechanism for multistep prediction of plant growth [4.077787659104315]
This paper presents a novel approach for predicting plant growth in agriculture, focusing on the prediction of plant Stem Diameter Variations (SDV). Wavelet decomposition is applied to the original data to facilitate model fitting and reduce noise. An encoder-decoder framework is developed using Long Short-Term Memory (LSTM) and used for feature extraction from the data.
A recurrent neural network including LSTM and an attention mechanism is proposed for modelling long-term dependencies in the time series data.
arXiv Detail & Related papers (2020-12-07T20:30:39Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.