Freehand Sketch Generation from Mechanical Components
- URL: http://arxiv.org/abs/2408.05966v2
- Date: Wed, 21 Aug 2024 10:28:18 GMT
- Title: Freehand Sketch Generation from Mechanical Components
- Authors: Zhichao Liao, Di Huang, Heming Fang, Yue Ma, Fengyuan Piao, Xinghui Li, Long Zeng, Pingfa Feng
- Abstract summary: MSFormer is the first framework to produce humanoid freehand sketches tailored for mechanical components.
The first stage employs Open CASCADE technology to obtain multi-view contour sketches from mechanical components.
The second stage translates contour sketches into freehand sketches with a transformer-based generator.
- Score: 16.761960706420066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Drawing freehand sketches of mechanical components on multimedia devices for AI-based engineering modeling has become a new trend. However, its development is impeded because existing works cannot produce suitable sketches for data-driven research: they either generate sketches lacking a freehand style or rely on generative models not originally designed for this task, resulting in poor effectiveness. To address this issue, we design a two-stage generative framework, called MSFormer, that mimics the human sketching behavior pattern and is the first to produce humanoid freehand sketches tailored for mechanical components. The first stage employs Open CASCADE technology to obtain multi-view contour sketches from mechanical components, filtering out perturbing signals for the ensuing generation process. Meanwhile, we design a view selector that simulates the viewpoint selection of human sketching to pick out information-rich sketches. The second stage translates contour sketches into freehand sketches with a transformer-based generator. To retain essential modeling features as much as possible and rationalize stroke distribution, we introduce a novel edge-constraint stroke initialization. Furthermore, we utilize a CLIP vision encoder and a new loss function incorporating the Hausdorff distance to enhance the generalizability and robustness of the model. Extensive experiments demonstrate that our approach achieves state-of-the-art performance for generating freehand sketches in the mechanical domain. Project page: https://mcfreeskegen.github.io
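The abstract mentions a loss term built on the Hausdorff distance but does not spell out its exact form; the snippet below is only a minimal PyTorch-style sketch of how a symmetric Hausdorff penalty between sampled stroke points is commonly written (the names `hausdorff_loss`, `pred_pts`, `target_pts`, and the weight `lambda_hd` are illustrative assumptions, not the authors' code).

```python
import torch

def hausdorff_loss(pred_pts: torch.Tensor, target_pts: torch.Tensor) -> torch.Tensor:
    """Symmetric Hausdorff penalty between two 2D point sets.

    pred_pts:   (N, 2) points sampled from the generated freehand strokes.
    target_pts: (M, 2) points sampled from the reference contour sketch.
    """
    d = torch.cdist(pred_pts, target_pts)    # (N, M) pairwise Euclidean distances
    forward = d.min(dim=1).values.max()      # sup over pred points of nearest-target distance
    backward = d.min(dim=0).values.max()     # sup over target points of nearest-pred distance
    return 0.5 * (forward + backward)

# Typical usage alongside a primary reconstruction objective (lambda_hd is a tunable weight):
# total_loss = recon_loss + lambda_hd * hausdorff_loss(pred_pts, target_pts)
```

Because the max over points is sensitive to outliers, implementations sometimes substitute the mean of nearest-neighbor distances (a Chamfer-style relaxation); which variant MSFormer uses is not stated in the abstract.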
Related papers
- Customize StyleGAN with One Hand Sketch [0.0]
We propose a framework to control StyleGAN imagery with a single user sketch.
We learn a conditional distribution in the latent space of a pre-trained StyleGAN model via energy-based learning.
Our model can generate multi-modal images semantically aligned with the input sketch.
arXiv Detail & Related papers (2023-10-29T09:32:33Z) - SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z) - ExtrudeNet: Unsupervised Inverse Sketch-and-Extrude for Shape Parsing [46.778258706603005]
This paper studies the problem of learning a shape, given in the form of a point cloud, by inverse sketch-and-extrude.
We present ExtrudeNet, an unsupervised end-to-end network for discovering sketch and extrude from point clouds.
arXiv Detail & Related papers (2022-09-30T17:58:11Z) - I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested end to end, making it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z) - Vitruvion: A Generative Model of Parametric CAD Sketches [22.65229769427499]
We present an approach to generative modeling of parametric CAD sketches.
Our model, trained on real-world designs from the SketchGraphs dataset, autoregressively synthesizes sketches as sequences of primitives.
We condition the model on various contexts, including partial sketches (primers) and images of hand-drawn sketches.
arXiv Detail & Related papers (2021-09-29T01:02:30Z) - Engineering Sketch Generation for Computer-Aided Design [10.732102570751392]
We propose two generative models, CurveGen and TurtleGen, for engineering sketch generation.
Both models generate curve primitives without the need for a sketch constraint solver.
In a perceptual evaluation with human subjects, we find that both CurveGen and TurtleGen produce more realistic engineering sketches than the current state of the art.
arXiv Detail & Related papers (2021-04-19T20:38:36Z) - BézierSketch: A generative model for scalable vector sketches [132.5223191478268]
We present BézierSketch, a novel generative model for fully vector sketches that are automatically scalable and high-resolution.
We first introduce a novel inverse graphics approach to stroke embedding that trains an encoder to embed each stroke to its best-fit Bézier curve.
This enables us to treat sketches as short sequences of parameterized strokes and thus train a recurrent sketch generator with greater capacity for longer sketches.
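The stroke embedding described above is a learned encoder trained with an inverse-graphics objective; purely as intuition for what "best-fit Bézier curve" means, below is a minimal classical least-squares fit of a cubic Bézier to an ordered stroke (the function name, chord-length parameterization, and NumPy usage are illustrative assumptions, not the paper's method).

```python
import numpy as np

def fit_cubic_bezier(points: np.ndarray) -> np.ndarray:
    """Least-squares fit of a cubic Bezier curve to an ordered 2D stroke.

    points: (N, 2) array of stroke coordinates, N >= 4, not all identical.
    Returns the four control points as a (4, 2) array.
    """
    # Parameterize samples by normalized chord length in [0, 1].
    chord = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = chord / chord[-1]
    # Cubic Bernstein basis: B(t) = sum_k basis_k(t) * P_k.
    basis = np.stack([(1 - t) ** 3,
                      3 * (1 - t) ** 2 * t,
                      3 * (1 - t) * t ** 2,
                      t ** 3], axis=1)               # shape (N, 4)
    # Solve basis @ control_points ~= points in the least-squares sense.
    control_points, *_ = np.linalg.lstsq(basis, points, rcond=None)
    return control_points                             # shape (4, 2)

# Example: fit_cubic_bezier(np.array([[0, 0], [1, 2], [3, 3], [5, 2], [6, 0]], dtype=float))
```

The paper's learned encoder amortizes this per-stroke fitting into a single network forward pass.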
arXiv Detail & Related papers (2020-07-04T21:30:52Z) - Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z) - Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.