Geometric Understanding of Sketches
- URL: http://arxiv.org/abs/2204.06675v1
- Date: Wed, 13 Apr 2022 23:55:51 GMT
- Title: Geometric Understanding of Sketches
- Authors: Raghav Brahmadesam Venkataramaiyer
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sketching is used as a ubiquitous tool of expression by novices and experts
alike. In this thesis I explore two methods that help a system provide a
geometric machine-understanding of sketches, and in turn help a user accomplish
a downstream task.
The first work deals with the interpretation of a 2D line drawing as a graph
structure, and illustrates its effectiveness through its physical
reconstruction by a robot. We set up a two-step pipeline to solve the problem.
First, we estimate the vertices of the graph with sub-pixel accuracy. We
achieve this using deep convolutional neural networks trained in a supervised
setting for pixel-level estimation, followed by connected-component analysis
for clustering. We then follow this with a feedback-loop-based edge-estimation
method. To complement the graph interpretation, we perform data interchange
into a robot-legible ASCII format, and thus teach a robot to replicate a line
drawing.
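The clustering step above can be sketched as follows. This is an illustrative stand-in, not the thesis's actual implementation: it assumes the network outputs a per-pixel junction heatmap (the toy array below is hand-made), thresholds it, groups foreground pixels by connected-component analysis, and takes each component's intensity-weighted centroid as a sub-pixel vertex estimate.

```python
import numpy as np
from collections import deque

def cluster_vertices(heatmap, threshold=0.5):
    """Group above-threshold pixels into 4-connected components and
    return each component's intensity-weighted (sub-pixel) centroid."""
    mask = heatmap > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    vertices = []
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            # BFS flood fill over one connected component.
            comp, queue = [], deque([(sy, sx)])
            seen[sy, sx] = True
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            # Intensity-weighted centroid gives sub-pixel coordinates.
            ws = np.array([heatmap[y, x] for y, x in comp])
            ys = np.array([y for y, _ in comp], dtype=float)
            xs = np.array([x for _, x in comp], dtype=float)
            vertices.append((float((ys * ws).sum() / ws.sum()),
                             float((xs * ws).sum() / ws.sum())))
    return vertices

# Toy heatmap with two blobs standing in for CNN output.
hm = np.zeros((8, 8))
hm[1:3, 1:3] = [[0.9, 0.6], [0.6, 0.9]]
hm[5, 6] = 0.8
print(cluster_vertices(hm))  # two vertices, near (1.5, 1.5) and (5.0, 6.0)
```

Note how the weighted centroid lands between pixel centers, which is what makes sub-pixel accuracy possible even from a discrete heatmap.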
In the second work, we test the 3D-geometric understanding of a sketch-based
system without explicit access to 3D-geometry information. The objective is to
complete a contour-like sketch of a 3D object with illumination and texture
information. We propose a data-driven approach that learns a conditional
distribution, modelled as deep convolutional neural networks trained in an
adversarial setting, and we validate it with a human in the loop. The method is
further supported by synthetic data generation using constructive solid
geometry following a standard graphics pipeline. To validate the efficacy of
our method, we design a user interface plugged into a popular sketch-based
workflow and set up a simple task-based exercise for an artist. Thereafter, we
also discover that form exploration is an additional utility of our
application.
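The constructive-solid-geometry step can be illustrated with signed distance functions, where the CSG booleans reduce to pointwise min/max. This is a generic sketch of the technique, not the thesis's actual data-generation pipeline; the shapes and query point below are arbitrary.

```python
import numpy as np

# Signed distance functions: negative inside the solid, positive outside.
def sphere(p, center, r):
    return np.linalg.norm(p - center, axis=-1) - r

def box(p, center, half):
    q = np.abs(p - center) - half
    return (np.linalg.norm(np.maximum(q, 0.0), axis=-1)
            + np.minimum(q.max(axis=-1), 0.0))

# CSG booleans on SDFs are pointwise min/max operations.
union        = lambda a, b: np.minimum(a, b)
intersection = lambda a, b: np.maximum(a, b)
difference   = lambda a, b: np.maximum(a, -b)

# Composite shape: a unit box with a sphere carved out of one corner.
p = np.array([0.0, 0.0, 0.0])  # query point at the origin
d = difference(box(p, np.zeros(3), np.ones(3)),
               sphere(p, np.array([1.0, 1.0, 1.0]), 0.5))
print(d)  # → -1.0 (the origin lies inside the carved solid)
```

Evaluating such composites over a grid of query points, then rendering them through a standard graphics pipeline, is one way to mass-produce (sketch, shaded image) training pairs.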
Related papers
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolution neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z) - SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth Sampling [75.957103837167]
Reconstructing a 3D shape based on a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape.
Existing works try to employ a global feature extracted from the sketch to directly predict the 3D coordinates, but they usually lose fine details, producing results that are not faithful to the input sketch.
arXiv Detail & Related papers (2022-08-14T16:37:51Z) - I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested end-to-end, making it easy to deploy in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z) - Building-GAN: Graph-Conditioned Architectural Volumetric Design Generation [10.024367148266721]
This paper focuses on volumetric design generation conditioned on an input program graph.
Instead of outputting dense 3D voxels, we propose a new 3D representation named voxel graph that is both compact and expressive for building geometries.
Our generator is a cross-modal graph neural network that uses a pointer mechanism to connect the input program graph and the output voxel graph, and the whole pipeline is trained using the adversarial framework.
arXiv Detail & Related papers (2021-04-27T16:49:34Z) - TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z) - SHAD3S: A model to Sketch, Shade and Shadow [20.209172586699175]
Hatching is a common method used by artists to accentuate the third dimension of a sketch, and to illuminate the scene.
Our system SHAD3S attempts to compete with a human at hatching generic three-dimensional (3D) shapes, and also tries to assist her in a form exploration exercise.
arXiv Detail & Related papers (2020-11-13T09:25:46Z) - Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z) - Interactive Annotation of 3D Object Geometry using 2D Scribbles [84.51514043814066]
In this paper, we propose an interactive framework for annotating 3D object geometry from point cloud data and RGB imagery.
Our framework targets naive users without artistic or graphics expertise.
arXiv Detail & Related papers (2020-08-24T21:51:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.