Hydra: Accurate Multi-Modal Leaf Wetness Sensing with mm-Wave and Camera Fusion
- URL: http://arxiv.org/abs/2508.02409v1
- Date: Mon, 04 Aug 2025 13:33:06 GMT
- Title: Hydra: Accurate Multi-Modal Leaf Wetness Sensing with mm-Wave and Camera Fusion
- Authors: Yimeng Liu, Maolin Gan, Huaili Zeng, Li Liu, Younsuk Dong, Zhichao Cao
- Abstract summary: Leaf Wetness Duration (LWD) is crucial in the development of plant diseases. Prior research proposes diverse approaches, but they fail to measure real natural leaves directly. This paper presents Hydra, an innovative approach that integrates millimeter-wave (mm-Wave) radar with camera technology to detect leaf wetness.
- Score: 6.047529821743112
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Leaf Wetness Duration (LWD), the time that water remains on leaf surfaces, is crucial in the development of plant diseases. Existing LWD detection lacks standardized measurement techniques, and variations across different plant characteristics limit its effectiveness. Prior research proposes diverse approaches, but they fail to measure real natural leaves directly and lack resilience under varying environmental conditions. These shortcomings reduce precision and robustness, revealing a notable gap between practical applicability and effectiveness in real-world agricultural settings. This paper presents Hydra, an innovative approach that integrates millimeter-wave (mm-Wave) radar with camera technology to detect leaf wetness by determining whether water is present on the leaf; LWD is then obtained by measuring how long this wet state persists. First, we design a Convolutional Neural Network (CNN) that selectively fuses multiple mm-Wave depth images with an RGB image to generate multiple feature images. Then, we develop a transformer-based encoder that captures the inherent connections among these feature images to produce a feature map, which is fed to a classifier for detection. Moreover, we augment the dataset during training to improve the model's generalization. Implemented using a frequency-modulated continuous-wave (FMCW) radar operating in the 76 to 81 GHz band, Hydra is meticulously evaluated on plants, demonstrating the potential to classify leaf wetness with up to 96% accuracy across varying scenarios. Deployed on a farm under challenging conditions, including rain, dawn, and poorly lit nights, Hydra still achieves around 90% accuracy.
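To make the described pipeline concrete, the sketch below illustrates the fusion-then-attention idea in PyTorch: a small CNN fuses each mm-Wave depth image with the RGB image into a feature token, a transformer encoder models the relations among those tokens, and a linear head classifies the leaf as wet or dry. All module names (FusionCNN, HydraLikeClassifier), layer sizes, and the number of depth images are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch of a Hydra-like fusion pipeline (assumed shapes and layer sizes).
import torch
import torch.nn as nn

N_DEPTH = 4      # assumed number of mm-Wave depth images fused per RGB frame
EMBED_DIM = 128  # assumed feature dimension

class FusionCNN(nn.Module):
    """Fuses one mm-Wave depth image with the RGB image into one feature token."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, EMBED_DIM, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims -> one token per pair
        )

    def forward(self, rgb, depth):
        # rgb: (B, 3, H, W), depth: (B, 1, H, W)
        return self.net(torch.cat([rgb, depth], dim=1)).flatten(1)  # (B, EMBED_DIM)

class HydraLikeClassifier(nn.Module):
    """CNN fusion -> transformer encoder over feature tokens -> wet/dry classifier."""
    def __init__(self):
        super().__init__()
        self.fusion = FusionCNN()
        layer = nn.TransformerEncoderLayer(d_model=EMBED_DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(EMBED_DIM, 2)  # classes: dry / wet

    def forward(self, rgb, depths):
        # rgb: (B, 3, H, W); depths: (B, N_DEPTH, 1, H, W)
        tokens = torch.stack(
            [self.fusion(rgb, depths[:, i]) for i in range(depths.shape[1])], dim=1
        )                                   # (B, N_DEPTH, EMBED_DIM)
        feature_map = self.encoder(tokens)  # relations among fused feature images
        return self.head(feature_map.mean(dim=1))  # (B, 2) logits

if __name__ == "__main__":
    model = HydraLikeClassifier()
    rgb = torch.randn(2, 3, 64, 64)
    depths = torch.randn(2, N_DEPTH, 1, 64, 64)
    print(model(rgb, depths).shape)  # torch.Size([2, 2])
```

Pooling each fused RGB/depth pair down to a single token keeps the transformer input small for illustration; the paper's actual feature images are likely richer spatial maps.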
Related papers
- Hydra-Bench: A Benchmark for Multi-Modal Leaf Wetness Sensing [5.54739216930577]
We introduce a new multi-modal dataset specifically designed for evaluating and advancing machine learning algorithms in leaf wetness detection. Our dataset comprises synchronized mmWave raw data, Synthetic Aperture Radar (SAR) images, and RGB images collected over six months from five diverse plant species.
arXiv Detail & Related papers (2025-07-30T13:47:56Z) - TacoDepth: Towards Efficient Radar-Camera Depth Estimation with One-stage Fusion [54.46664104437454]
We propose TacoDepth, an efficient and accurate Radar-Camera depth estimation model with one-stage fusion. Specifically, the graph-based Radar structure extractor and the pyramid-based Radar fusion module are designed. Compared with the previous state-of-the-art approach, TacoDepth improves depth accuracy and processing speed by 12.8% and 91.8%, respectively.
arXiv Detail & Related papers (2025-04-16T05:25:04Z) - Multi-photon enhanced resolution for Superconducting Nanowire Single-Photon Detector-based Time-of-Flight lidar systems [0.0]
We report a lidar system based on waveguide-integrated SNSPDs that excels in temporal accuracy, which translates into high range resolution. For single-shot measurements, we find resolution in the millimeter regime, resulting from the jitter of the time-of-flight signal of 21 ps for low photon numbers. For multi-shot measurements, we find sub-millimeter range accuracy of 0.75 mm and reveal additional surface information of scanned objects.
arXiv Detail & Related papers (2025-03-19T15:47:16Z) - A Multi-Sensor Fusion Approach for Rapid Orthoimage Generation in Large-Scale UAV Mapping [3.321306647655686]
A multi-sensor UAV system, integrating the Global Positioning System (GPS), Inertial Measurement Unit (IMU), 4D millimeter-wave radar, and camera, can provide an effective solution to this problem. A prior-pose-optimized feature matching method is introduced to enhance matching speed and accuracy. Experiments show that our approach achieves accurate feature matching and orthoimage generation in a short time.
arXiv Detail & Related papers (2025-03-03T05:55:30Z) - WAVES: Benchmarking the Robustness of Image Watermarks [67.955140223443]
WAVES (Watermark Analysis Via Enhanced Stress-testing) is a benchmark for assessing image watermark robustness.
We integrate detection and identification tasks and establish a standardized evaluation protocol comprising a diverse range of stress tests.
We envision WAVES as a toolkit for the future development of robust watermarks.
arXiv Detail & Related papers (2024-01-16T18:58:36Z) - Mapping Walnut Water Stress with High Resolution Multispectral UAV Imagery and Machine Learning [0.0]
This study presents a machine learning approach using Random Forest (RF) models to map stem water potential (SWP).
From 2017 to 2018, five flights of a UAV equipped with a seven-band multispectral camera were conducted over a commercial walnut orchard.
The RF regression model, utilizing vegetation indices derived from orthomosaicked UAV imagery and weather data, effectively estimated ground-measured SWPs.
The RF classification model predicted water stress levels in walnut trees with 85% accuracy, surpassing the 80% accuracy of the reduced classification model.
arXiv Detail & Related papers (2023-12-30T02:58:45Z) - Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and underwater images have evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z) - Diffusion Probabilistic Model Made Slim [128.2227518929644]
We introduce a customized design for slim diffusion probabilistic models (DPM) for lightweight image synthesis.
We achieve an 8-18x reduction in computational complexity compared to latent diffusion models on a series of conditional and unconditional image generation tasks.
arXiv Detail & Related papers (2022-11-27T16:27:28Z) - Anomaly Detection in IR Images of PV Modules using Supervised Contrastive Learning [4.409996772486956]
We train a ResNet-34 convolutional neural network with a supervised contrastive loss to detect anomalies in infrared images.
Our method converges quickly and reliably detects unknown types of anomalies, making it well suited for practice.
Our work serves the community with a more realistic view of PV module fault detection using unsupervised domain adaptation.
arXiv Detail & Related papers (2021-12-06T10:42:28Z) - M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z) - Recurrent Exposure Generation for Low-Light Face Detection [113.25331155337759]
We propose a novel Recurrent Exposure Generation (REG) module and a Multi-Exposure Detection (MED) module.
REG progressively and efficiently produces intermediate images corresponding to various exposure settings.
Such pseudo-exposures are then fused by MED to detect faces across different lighting conditions.
arXiv Detail & Related papers (2020-07-21T17:30:51Z) - Conditional Variational Image Deraining [158.76814157115223]
We propose a Conditional Variational Image Deraining (CVID) network for better deraining performance.
We propose a spatial density estimation (SDE) module to estimate a rain density map for each image.
Experiments on synthesized and real-world datasets show that the proposed CVID network achieves much better performance than previous deterministic methods on image deraining.
arXiv Detail & Related papers (2020-04-23T11:51:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.