Descriptive Modeling of Textiles using FE Simulations and Deep Learning
- URL: http://arxiv.org/abs/2106.13982v1
- Date: Sat, 26 Jun 2021 09:32:24 GMT
- Title: Descriptive Modeling of Textiles using FE Simulations and Deep Learning
- Authors: Arturo Mendoza, Roger Trullo, Yanneck Wielhorski
- Abstract summary: We propose a novel and fully automated method for extracting the yarn geometrical features in woven composites.
The proposed approach employs two deep neural network architectures (U-Net and Mask R-CNN).
Experimental results show that our method is accurate and robust for performing yarn instance segmentation on CT images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we propose a novel and fully automated method for extracting the
yarn geometrical features in woven composites so that a direct parametrization
of the textile reinforcement is achieved (e.g., FE mesh). Thus, our aim is not
only to perform yarn segmentation from tomographic images but rather to provide
a complete descriptive modeling of the fabric. As such, this direct approach
improves on previous methods that use voxel-wise masks as intermediate
representations followed by re-meshing operations (yarn envelope estimation).
The proposed approach employs two deep neural network architectures (U-Net and
Mask R-CNN). First, we train the U-Net to generate synthetic CT images from the
corresponding FE simulations. This allows us to generate large quantities of
annotated data without requiring costly manual annotations. These data are then
used to train the Mask R-CNN, which is focused on predicting contour points
around each of the yarns in the image. Experimental results show that our
method is accurate and robust for performing yarn instance segmentation on CT
images; this is further validated by quantitative and qualitative analyses.
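As a concrete illustration of the two-stage pipeline, the sketch below is a minimal, hypothetical PyTorch rendering (not the authors' implementation): a small U-Net maps rasterized FE simulation slices to synthetic CT-like images, and a torchvision Mask R-CNN is then trained on the resulting synthetic image/annotation pairs. The tensor shapes, the L1 reconstruction loss, and the single "yarn" class are illustrative assumptions.
```python
# Minimal sketch of the two-stage pipeline (illustrative, not the authors' code).
import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

# --- Stage 1: U-Net as an image-to-image translator -------------------------
# Input: a rasterized FE simulation slice (yarn geometry rendered on a 2D grid).
# Output: a synthetic CT-like grayscale slice. Because the FE input already
# encodes the yarn geometry, each synthetic image comes with free annotations.
class TinyUNet(nn.Module):
    """Heavily simplified U-Net: one downsampling and one upsampling stage."""
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, out_ch, 3, padding=1))

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        return self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection

unet = TinyUNet()
fe_slice = torch.rand(4, 1, 256, 256)   # rasterized FE simulation slices (assumed shape)
real_ct = torch.rand(4, 1, 256, 256)    # matching real CT slices used as training target
synthetic_ct = unet(fe_slice)
stage1_loss = nn.functional.l1_loss(synthetic_ct, real_ct)  # e.g. an L1 reconstruction loss

# --- Stage 2: Mask R-CNN for yarn instance segmentation ---------------------
# Trained on (synthetic CT, per-yarn box/mask) pairs produced in stage 1.
model = maskrcnn_resnet50_fpn(num_classes=2)  # background + "yarn"
model.train()
images = [synthetic_ct[i].detach().expand(3, -1, -1) for i in range(4)]  # 3-channel input
targets = [{
    "boxes": torch.tensor([[30.0, 40.0, 120.0, 90.0]]),    # illustrative yarn bounding box
    "labels": torch.tensor([1]),                           # class 1 = "yarn"
    "masks": torch.zeros(1, 256, 256, dtype=torch.uint8),  # illustrative per-yarn mask
} for _ in range(4)]
losses = model(images, targets)          # dict of RPN / box / mask losses
total_loss = sum(losses.values())
```
In this scheme, stage 1 supplies annotated training data essentially for free; at inference, the per-yarn masks predicted by stage 2 can be reduced to the contour points that feed the descriptive textile model.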
Related papers
- Synthetic dual image generation for reduction of labeling efforts in semantic segmentation of micrographs with a customized metric function [0.0]
Training semantic segmentation models for material analysis requires micrographs and their corresponding masks.
We demonstrate a workflow for the improvement of semantic segmentation models through the generation of synthetic microstructural images in conjunction with masks.
The approach could be generalized to various types of image data and serves as a user-friendly solution for training models with a small number of real images.
arXiv Detail & Related papers (2024-08-01T16:54:11Z) - Fine-Grained Multi-View Hand Reconstruction Using Inverse Rendering [11.228453237603834]
We present a novel fine-grained multi-view hand mesh reconstruction method that leverages inverse rendering to restore hand poses and intricate details.
We also introduce a novel Hand Albedo and Mesh (HAM) optimization module to refine both the hand mesh and textures.
Our proposed approach outperforms the state-of-the-art methods on both reconstruction accuracy and rendering quality.
arXiv Detail & Related papers (2024-07-08T07:28:24Z) - Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z) - Learning Self-Prior for Mesh Inpainting Using Self-Supervised Graph Convolutional Networks [4.424836140281846]
We present a self-prior-based mesh inpainting framework that requires only an incomplete mesh as input.
Our method maintains the polygonal mesh format throughout the inpainting process.
We demonstrate that our method outperforms traditional dataset-independent approaches.
arXiv Detail & Related papers (2023-05-01T02:51:38Z) - Neural inverse procedural modeling of knitting yarns from images [6.114281140793954]
We show that the complexity of yarn structures can be better handled by ensembles of networks that each focus on individual characteristics.
We demonstrate that combining a carefully designed parametric, procedural yarn model with the respective network ensembles and loss functions allows robust parameter inference.
arXiv Detail & Related papers (2023-03-01T00:56:39Z) - A Model-data-driven Network Embedding Multidimensional Features for
Tomographic SAR Imaging [5.489791364472879]
We propose a new model-data-driven network to achieve tomoSAR imaging based on multi-dimensional features.
We add two 2D processing modules, both convolutional encoder-decoder structures, to enhance multi-dimensional features of the imaging scene effectively.
Compared with the conventional CS-based FISTA method and the DL-based gamma-Net method, our proposed method achieves better completeness while maintaining decent imaging accuracy.
arXiv Detail & Related papers (2022-11-28T02:01:43Z) - DeepDC: Deep Distance Correlation as a Perceptual Image Quality
Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z) - Adaptive Convolutional Dictionary Network for CT Metal Artifact
Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z) - Semantic keypoint-based pose estimation from single RGB frames [64.80395521735463]
We present an approach to estimating the continuous 6-DoF pose of an object from a single RGB image.
The approach combines semantic keypoints predicted by a convolutional network (convnet) with a deformable shape model.
We show that our approach can accurately recover the 6-DoF object pose for both instance- and class-based scenarios.
arXiv Detail & Related papers (2022-04-12T15:03:51Z) - NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.