RU-Net for Automatic Characterization of TRISO Fuel Cross Sections
- URL: http://arxiv.org/abs/2509.12244v1
- Date: Wed, 10 Sep 2025 23:04:28 GMT
- Title: RU-Net for Automatic Characterization of TRISO Fuel Cross Sections
- Authors: Lu Cai, Fei Xu, Min Xian, Yalei Tang, Shoukun Sun, John Stempien
- Abstract summary: We use convolutional neural networks (CNNs) to segment cross-sectional images of microscopic TRISO layers. CNNs are a class of machine-learning algorithms specifically designed for processing structured grid data. Preliminary results show that the model based on RU-Net performs best in terms of Intersection over Union.
- Score: 4.456149258746652
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: During irradiation, phenomena such as kernel swelling and buffer densification may impact the performance of tristructural isotropic (TRISO) particle fuel. Post-irradiation microscopy is often used to identify these irradiation-induced morphological changes. However, each fuel compact generally contains thousands of TRISO particles, so manually collecting statistical information on these phenomena is cumbersome and subjective. To reduce the subjectivity inherent in that process and to accelerate data analysis, we used convolutional neural networks (CNNs) to automatically segment cross-sectional images of microscopic TRISO layers. CNNs are a class of machine-learning algorithms specifically designed for processing structured grid data. They have gained popularity in recent years due to their remarkable performance in various computer vision tasks, including image classification, object detection, and image segmentation. In this research, we generated a large irradiated TRISO layer dataset with more than 2,000 microscopic images of cross-sectional TRISO particles and the corresponding annotated images. Based on these annotated images, we used different CNNs to automatically segment different TRISO layers. These CNNs include RU-Net (developed in this study), as well as three existing architectures: U-Net, Residual Network (ResNet), and Attention U-Net. The preliminary results show that the model based on RU-Net performs best in terms of Intersection over Union (IoU). Using CNN models, we can expedite the analysis of TRISO particle cross sections, significantly reducing the manual labor involved and improving the objectivity of the segmentation results.
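The abstract evaluates the segmentation models in terms of Intersection over Union (IoU). For reference, below is a minimal sketch of how IoU is typically computed for binary masks, and averaged over classes for multi-layer segmentation; the function names and NumPy-based formulation are illustrative, not taken from the paper.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two binary masks; defined as 1.0 when both masks are empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, target).sum() / union)

def mean_iou(pred_labels: np.ndarray, target_labels: np.ndarray, num_classes: int) -> float:
    """Mean IoU over integer class maps (e.g., one class per TRISO layer)."""
    scores = [iou(pred_labels == c, target_labels == c) for c in range(num_classes)]
    return float(np.mean(scores))
```

A per-class mean (rather than a single foreground IoU) keeps thin layers such as the buffer from being dominated by larger regions in the score.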
Related papers
- Image Segmentation: Inducing graph-based learning [4.499833362998488]
This study explores the potential of graph neural networks (GNNs) to enhance semantic segmentation across diverse image modalities. GNNs explicitly model relationships between image regions by constructing and operating on a graph representation of the image features. Our analysis demonstrates the versatility of GNNs in addressing diverse segmentation challenges and highlights their potential to improve segmentation accuracy in various applications.
arXiv Detail & Related papers (2025-01-07T13:09:44Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments expanding the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Enhancing Rock Image Segmentation in Digital Rock Physics: A Fusion of Generative AI and State-of-the-Art Neural Networks [5.089732183029123]
In digital rock physics, analysing microstructures from CT and SEM scans is crucial for estimating properties like porosity and pore connectivity.
Traditional segmentation methods like thresholding and CNNs often fall short in accurately detailing rock microstructures and are prone to noise.
U-Net improved segmentation accuracy but required many expert-annotated samples, a laborious and error-prone process due to complex pore shapes.
Our study employed an advanced generative AI model, the diffusion model, to overcome these limitations.
TransU-Net sets a new standard in digital rock physics, paving the way for future geoscience and engineering breakthroughs.
arXiv Detail & Related papers (2023-11-10T14:24:50Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Classification of diffraction patterns using a convolutional neural network in single particle imaging experiments performed at X-ray free-electron lasers [53.65540150901678]
Single particle imaging (SPI) at X-ray free electron lasers (XFELs) is particularly well suited to determine the 3D structure of particles in their native environment.
For a successful reconstruction, diffraction patterns originating from a single hit must be isolated from a large number of acquired patterns.
We propose to formulate this task as an image classification problem and solve it using convolutional neural network (CNN) architectures.
arXiv Detail & Related papers (2021-12-16T17:03:14Z)
- Segmentation of Roads in Satellite Images using specially modified U-Net CNNs [0.0]
The aim of this paper is to build an image classifier for satellite images of urban scenes that identifies the portions of the images in which a road is located.
Unlike conventional computer vision algorithms, convolutional neural networks (CNNs) provide accurate and reliable results on this task.
arXiv Detail & Related papers (2021-09-29T19:08:32Z)
- Learning Hybrid Representations for Automatic 3D Vessel Centerline Extraction [57.74609918453932]
Automatic blood vessel extraction from 3D medical images is crucial for vascular disease diagnoses.
Existing methods may suffer from discontinuities of extracted vessels when segmenting such thin tubular structures from 3D images.
We argue that preserving the continuity of extracted vessels requires taking the global geometry into account.
We propose a hybrid representation learning approach to address this challenge.
arXiv Detail & Related papers (2020-12-14T05:22:49Z)
- Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z)
- The use of Convolutional Neural Networks for signal-background classification in Particle Physics experiments [0.4301924025274017]
We present an extensive convolutional neural architecture search, achieving high accuracy for signal/background discrimination for a HEP classification use-case.
We demonstrate among other things that we can achieve the same accuracy as complex ResNet architectures with CNNs with less parameters.
arXiv Detail & Related papers (2020-02-13T19:54:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.