Pixel-Wise Symbol Spotting via Progressive Points Location for Parsing CAD Images
- URL: http://arxiv.org/abs/2404.10985v1
- Date: Wed, 17 Apr 2024 01:35:52 GMT
- Title: Pixel-Wise Symbol Spotting via Progressive Points Location for Parsing CAD Images
- Authors: Junbiao Pang, Zailin Dong, Jiaxin Deng, Mengyuan Zhu, Yunwei Zhang,
- Abstract summary: We propose to label and spot symbols from CAD images that are converted from CAD drawings.
The advantage of spotting symbols from CAD images lies in the low skill requirement for labelers and the low annotation cost.
Based on the detected keypoints, we propose a symbol grouping method to redraw the rectangular symbols in CAD images.
- Score: 1.5736099356327244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Parsing Computer-Aided Design (CAD) drawings is a fundamental step for CAD revision, semantic-based management, and the generation of 3D prototypes in both the architecture and engineering industries. Labeling symbols in a CAD drawing is a notoriously challenging task from a practical point of view. In this work, we propose to label and spot symbols from CAD images that are converted from CAD drawings. The advantage of spotting symbols from CAD images lies in the low skill requirement for labelers and the low annotation cost. However, spotting symbols pixel-wise in CAD images remains difficult. We propose a pixel-wise point location method based on Progressive Gaussian Kernels (PGK) to balance training efficiency against location accuracy. In addition, we introduce a local offset into the heatmap-based point location method. Based on the detected keypoints, we propose a symbol grouping method that redraws the rectangular symbols in CAD images. We have released a dataset containing CAD images of equipment rooms from telecommunication-industry CAD drawings. Extensive experiments on this real-world dataset show that the proposed method has good generalization ability.
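The abstract names two ingredients for point location, a heatmap target built from Gaussian kernels whose width is progressively narrowed (PGK) and a local offset that restores the precision lost to downsampling, followed by a grouping step that redraws rectangular symbols from the detected keypoints. The NumPy sketch below is a minimal illustration of these ideas only; the function names, the stride of 4, the linear sigma schedule, and the greedy corner pairing are assumptions made here for illustration and are not the authors' released implementation.

```python
import numpy as np

def progressive_sigma(epoch, total_epochs, sigma_max=4.0, sigma_min=1.0):
    # Shrink the Gaussian kernel width as training progresses: wide, easy
    # targets early on, narrow and precise targets towards the end.
    t = min(max(epoch / max(total_epochs - 1, 1), 0.0), 1.0)
    return sigma_max + t * (sigma_min - sigma_max)

def render_targets(points, image_hw, stride=4, sigma=2.0):
    # Build a low-resolution heatmap (one Gaussian peak per keypoint) plus a
    # 2-channel local-offset map holding the sub-stride residual of each point.
    h, w = image_hw[0] // stride, image_hw[1] // stride
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros((h, w), dtype=np.float32)
    offsets = np.zeros((2, h, w), dtype=np.float32)
    for px, py in points:                        # keypoints in full-resolution pixels
        gx, gy = px / stride, py / stride
        cx, cy = int(gx), int(gy)
        g = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)         # keep the strongest peak per cell
        offsets[0, cy, cx] = gx - cx             # local offset recovers the precision
        offsets[1, cy, cx] = gy - cy             # lost to the stride-wise downsampling
    return heatmap, offsets

def decode_top_point(heatmap, offsets, stride=4):
    # Recover one full-resolution location: heatmap argmax plus its local offset.
    cy, cx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return (cx + offsets[0, cy, cx]) * stride, (cy + offsets[1, cy, cx]) * stride

def group_rectangles(top_left, bottom_right, max_dist=500.0):
    # Toy grouping step: pair each detected top-left corner with the nearest
    # compatible bottom-right corner to redraw an axis-aligned rectangle symbol.
    rects, used = [], set()
    for tx, ty in top_left:
        best, best_d = None, max_dist
        for j, (bx, by) in enumerate(bottom_right):
            if j in used or bx <= tx or by <= ty:    # must lie below and to the right
                continue
            d = ((bx - tx) ** 2 + (by - ty) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            rects.append((tx, ty, *bottom_right[best]))
    return rects
```

Under this reading, one would call render_targets each epoch with sigma = progressive_sigma(epoch, total_epochs), train the network to regress the heatmap and offset maps, and at inference pass the predicted maps to decode_top_point before pairing corner keypoints with group_rectangles.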
Related papers
- PICASSO: A Feed-Forward Framework for Parametric Inference of CAD Sketches via Rendering Self-Supervision [12.644368401427135]
Given a drawing of a CAD sketch, the proposed framework turns it into parametric primitives that can be imported into CAD software.
PICASSO enables the learning of parametric CAD sketches from either precise or hand-drawn sketch images.
arXiv Detail & Related papers (2024-07-18T11:02:52Z)
- 3D Neural Edge Reconstruction [61.10201396044153]
We introduce EMAP, a new method for learning 3D edge representations with a focus on both lines and curves.
Our method implicitly encodes 3D edge distance and direction in Unsigned Distance Functions (UDF) from multi-view edge maps.
On top of this neural representation, we propose an edge extraction algorithm that robustly abstracts 3D edges from the inferred edge points and their directions.
arXiv Detail & Related papers (2024-05-29T17:23:51Z)
- PS-CAD: Local Geometry Guidance via Prompting and Selection for CAD Reconstruction [86.726941702182]
We introduce geometric guidance into the reconstruction network PS-CAD.
First, we provide, as a point cloud, the geometry of the surfaces where the current reconstruction differs from the complete model.
Second, we use geometric analysis to extract a set of planar prompts, that correspond to candidate surfaces.
arXiv Detail & Related papers (2024-05-24T03:43:55Z)
- Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation [55.73399465968594]
This paper proposes a novel generation paradigm Sketch3D to generate realistic 3D assets with shape aligned with the input sketch and color matching the textual description.
Three strategies are designed to optimize 3D Gaussians, i.e., structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss and sketch similarity optimization with a CLIP-based geometric similarity loss.
arXiv Detail & Related papers (2024-04-02T11:03:24Z)
- Symbol as Points: Panoptic Symbol Spotting via Point-based Representation [18.61469313164712]
This work studies the problem of panoptic symbol spotting in computer-aided design (CAD) drawings.
We take a different approach, which treats graphic primitives as a set of 2D points that are locally connected.
Specifically, we utilize a point transformer to extract the primitive features and append a mask2former-like spotting head to predict the final output.
arXiv Detail & Related papers (2024-01-19T08:44:52Z)
- GAT-CADNet: Graph Attention Network for Panoptic Symbol Spotting in CAD Drawings [0.0]
Spotting graphical symbols from computer-aided design (CAD) drawings is essential to many industrial applications.
By treating each CAD drawing as a graph, we propose a novel graph attention network GAT-CADNet.
The proposed GAT-CADNet is intuitive yet effective and manages to solve the panoptic symbol spotting problem in one consolidated network.
arXiv Detail & Related papers (2022-01-03T13:08:28Z)
- Patch2CAD: Patchwise Embedding Learning for In-the-Wild Shape Retrieval from a Single Image [58.953160501596805]
We propose a novel approach towards constructing a joint embedding space between 2D images and 3D CAD models in a patch-wise fashion.
Our approach is more robust than the state of the art in real-world scenarios without any exact CAD matches.
arXiv Detail & Related papers (2021-08-20T20:58:52Z)
- FloorPlanCAD: A Large-Scale CAD Drawing Dataset for Panoptic Symbol Spotting [38.987494792258694]
We present FloorPlanCAD, a large-scale real-world CAD drawing dataset containing over 10,000 floor plans.
We propose a novel method by combining Graph Convolutional Networks (GCNs) with Convolutional Neural Networks (CNNs).
The proposed CNN-GCN method achieved state-of-the-art (SOTA) performance on the task of semantic symbol spotting.
arXiv Detail & Related papers (2021-05-15T06:01:11Z)
- Mask2CAD: 3D Shape Prediction by Learning to Segment and Retrieve [54.054575408582565]
We propose to leverage existing large-scale datasets of 3D models to understand the underlying 3D structure of objects seen in an image.
We present Mask2CAD, which jointly detects objects in real-world images and, for each detected object, optimizes for the most similar CAD model and its pose.
This produces a clean, lightweight representation of the objects in an image.
arXiv Detail & Related papers (2020-07-26T00:08:37Z)
- Semantically Tied Paired Cycle Consistency for Any-Shot Sketch-based Image Retrieval [55.29233996427243]
Low-shot sketch-based image retrieval is an emerging task in computer vision.
In this paper, we address any-shot, i.e. zero-shot and few-shot, sketch-based image retrieval (SBIR) tasks.
To solve these tasks, we propose a semantically aligned cycle-consistent generative adversarial network (SEM-PCYC).
Our results demonstrate a significant boost in any-shot performance over the state-of-the-art on the extended version of the Sketchy, TU-Berlin and QuickDraw datasets.
arXiv Detail & Related papers (2020-06-20T22:43:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.