Cyclegan Network for Sheet Metal Welding Drawing Translation
- URL: http://arxiv.org/abs/2209.14106v1
- Date: Wed, 28 Sep 2022 13:55:36 GMT
- Title: Cyclegan Network for Sheet Metal Welding Drawing Translation
- Authors: Zhiwei Song, Hui Yao, Dan Tian, Gaohui Zhan
- Abstract summary: This paper proposes an automatic translation method for welded structural engineering drawings based on Cycle-Consistent Generative Adversarial Networks (CycleGAN).
A CycleGAN model with unpaired transfer learning is used to learn the feature mapping of real welding engineering drawings.
After training with our model, the PSNR, SSIM and MSE of welding engineering drawings reach about 44.89 dB, 99.58% and 2.11, respectively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In intelligent manufacturing, the quality of machine-translated engineering drawings directly affects manufacturing accuracy. Currently, most drawings are translated manually, which greatly reduces production efficiency. This paper proposes an automatic translation method for welded structural engineering drawings based on Cycle-Consistent Generative Adversarial Networks (CycleGAN). A CycleGAN model with unpaired transfer learning is used to learn the feature mapping of real welding engineering drawings and thereby realize automatic translation of engineering drawings. U-Net and PatchGAN serve as the main networks for the generator and discriminator, respectively. After removing the identity mapping function, a high-dimensional sparse network is proposed to replace the traditional dense network in the CycleGAN generator to improve noise robustness, and additional residual-block hidden layers are added to increase the resolution of the generated drawings. The improved and fine-tuned network models are validated experimentally by computing the gap between real and generated data. The method meets the welding engineering precision standard and addresses the low drawing-recognition efficiency of the welding manufacturing process. The results show that, after training with our model, the PSNR, SSIM and MSE of welding engineering drawings reach about 44.89 dB, 99.58% and 2.11, respectively, which is superior to traditional networks in both training speed and accuracy.
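To make the training setup described above concrete, the following is a minimal PyTorch-style sketch of a CycleGAN objective with the identity-mapping term removed, as the abstract proposes. The tiny convolutional modules stand in for the paper's U-Net generator and PatchGAN discriminator, and the loss weight lambda_cyc is an illustrative value; none of this is the authors' implementation.

```python
# Hedged sketch: CycleGAN generator objective without the identity-mapping loss.
# TinyGenerator / TinyPatchDiscriminator are stand-ins for U-Net and PatchGAN.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder for the U-Net generator (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyPatchDiscriminator(nn.Module):
    """Placeholder for the PatchGAN discriminator: outputs a grid of real/fake scores."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_AB, G_BA = TinyGenerator(), TinyGenerator()              # A -> B and B -> A mappings
D_A, D_B = TinyPatchDiscriminator(), TinyPatchDiscriminator()
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()             # LSGAN-style adversarial + L1 cycle loss
lambda_cyc = 10.0                                          # cycle-consistency weight (assumed)

def generator_loss(real_A, real_B):
    """Adversarial + cycle-consistency terms only; no identity-mapping term."""
    fake_B, fake_A = G_AB(real_A), G_BA(real_B)
    pred_B, pred_A = D_B(fake_B), D_A(fake_A)
    # Fool the discriminators (target label 1 for generated samples).
    loss_gan = adv_loss(pred_B, torch.ones_like(pred_B)) + \
               adv_loss(pred_A, torch.ones_like(pred_A))
    # Cycle consistency: A -> B -> A and B -> A -> B should reconstruct the input.
    loss_cyc = cyc_loss(G_BA(fake_B), real_A) + cyc_loss(G_AB(fake_A), real_B)
    return loss_gan + lambda_cyc * loss_cyc

# Example: one forward pass on random single-channel 64x64 "drawings".
loss = generator_loss(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```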
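The reported gap between real and generated drawings (PSNR, SSIM, MSE) can be measured with standard image-quality metrics; below is a minimal sketch using scikit-image. The file paths and the grayscale preprocessing are assumptions, not details given in the abstract.

```python
# Hedged sketch: computing PSNR, SSIM and MSE between a real and a generated drawing.
from skimage import io
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

# Placeholder file names; images are loaded as grayscale floats in [0, 1].
real = io.imread("real_drawing.png", as_gray=True)
generated = io.imread("generated_drawing.png", as_gray=True)

mse = mean_squared_error(real, generated)
psnr = peak_signal_noise_ratio(real, generated, data_range=1.0)   # reported in dB
ssim = structural_similarity(real, generated, data_range=1.0)     # value in [0, 1]

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim * 100:.2f}%, MSE: {mse:.4f}")
```

Note that the absolute MSE depends on whether intensities are kept on a 0-255 scale or normalized to [0, 1]; the sketch uses the latter, and the authors' preprocessing is not specified in the abstract.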
Related papers
- Efficient generative adversarial networks using linear additive-attention Transformers
We present LadaGAN, an efficient generative adversarial network that is built upon a novel Transformer block named Ladaformer.
LadaGAN consistently outperforms existing convolutional and Transformer GANs on benchmark datasets at different resolutions.
arXiv Detail & Related papers (2024-01-17T21:08:41Z)
- A Comprehensive End-to-End Computer Vision Framework for Restoration and Recognition of Low-Quality Engineering Drawings
This paper focuses on restoring and recognizing low-quality engineering drawings.
An end-to-end framework is proposed to improve the quality of the drawings and identify the graphical symbols on them.
Experiments on real-world electrical diagrams show that the proposed framework achieves an accuracy of 98.98% and a recall of 99.33%.
arXiv Detail & Related papers (2023-12-21T07:22:25Z)
- A Generative Approach for Production-Aware Industrial Network Traffic Modeling
We investigate the network traffic data generated from a laser cutting machine deployed in a Trumpf factory in Germany.
We analyze the traffic statistics, capture the dependencies between the internal states of the machine, and model the network traffic as a production state dependent process.
We compare the performance of various generative models, including the variational autoencoder (VAE), conditional variational autoencoder (CVAE), and generative adversarial network (GAN).
arXiv Detail & Related papers (2022-11-11T09:46:58Z)
- An Adversarial Active Sampling-based Data Augmentation Framework for Manufacturable Chip Design
Lithography modeling is a crucial problem in chip design to ensure a chip design mask is manufacturable.
Recent developments in machine learning have provided alternative solutions in replacing the time-consuming lithography simulations with deep neural networks.
We propose a litho-aware data augmentation framework to resolve the dilemma of limited data and improve the machine learning model performance.
arXiv Detail & Related papers (2022-10-27T20:53:39Z)
- Segmentation method of U-net sheet metal engineering drawing based on CBAM attention mechanism
This paper proposes a U-net-based method for the segmentation and extraction of specific units in welding engineering drawings.
With VGG16 as the backbone network, experiments verify that our model achieves an IoU of 84.72%, an mAP of 86.84%, and an accuracy of 99.42% on the welding engineering drawing segmentation task.
arXiv Detail & Related papers (2022-09-28T13:49:45Z)
- LHNN: Lattice Hypergraph Neural Network for VLSI Congestion Prediction
The lattice hypergraph (LH-graph) is a novel graph formulation for circuits.
LHNN constantly achieves more than 35% improvements compared with U-nets and Pix2Pix on the F1 score.
arXiv Detail & Related papers (2022-03-24T03:31:18Z)
- Transformer-based SAR Image Despeckling
We introduce a transformer-based network for SAR image despeckling.
The proposed despeckling network comprises a transformer-based encoder, which allows the network to learn global dependencies between different image regions.
Experiments show that the proposed method achieves significant improvements over traditional and convolutional neural network-based despeckling methods.
arXiv Detail & Related papers (2022-01-23T20:09:01Z)
- Self-Compression in Bayesian Neural Networks
We propose a new insight into network compression through the Bayesian framework.
We show that Bayesian neural networks automatically discover redundancy in model parameters, thus enabling self-compression.
Our experimental results show that the network architecture can be successfully compressed by deleting parameters identified by the network itself.
arXiv Detail & Related papers (2021-11-10T21:19:40Z)
- Efficient pre-training objectives for Transformers
We study several efficient pre-training objectives for Transformer-based models.
We prove that eliminating the MASK token and considering the whole output during loss computation are essential choices for improving performance.
arXiv Detail & Related papers (2021-04-20T00:09:37Z)
- Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation
Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computation and storage costs impede deployment on mobile devices.
We introduce a novel GAN compression method, termed DMAD, by proposing a Differentiable Mask and a co-Attention Distillation.
Experiments show DMAD can reduce the Multiply Accumulate Operations (MACs) of CycleGAN by 13x and that of Pix2Pix by 4x while retaining a comparable performance against the full model.
arXiv Detail & Related papers (2020-11-17T02:39:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.