SSIG: A Visually-Guided Graph Edit Distance for Floor Plan Similarity
- URL: http://arxiv.org/abs/2309.04357v1
- Date: Fri, 8 Sep 2023 14:28:28 GMT
- Title: SSIG: A Visually-Guided Graph Edit Distance for Floor Plan Similarity
- Authors: Casper van Engelenburg, Seyran Khademi, Jan van Gemert
- Abstract summary: We propose a simple yet effective metric that measures structural similarity between visual instances of architectural floor plans.
In this paper, an effective evaluation metric for judging the structural similarity of floor plans, coined SSIG, is proposed based on both image and graph distances.
- Score: 11.09257948735229
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose a simple yet effective metric that measures structural similarity
between visual instances of architectural floor plans, without the need for
learning. Qualitatively, our experiments show that the retrieval results are
similar to deeply learned methods. Effectively comparing instances of floor
plan data is paramount to the success of machine understanding of floor plan
data, including the assessment of floor plan generative models and floor plan
recommendation systems. Comparing visual floor plan images goes beyond a sole
pixel-wise visual examination and is crucially about similarities and
differences in the shapes and relations between subdivisions that compose the
layout. Currently, deep metric learning approaches are used to learn a
pair-wise vector representation space that closely mimics the structural
similarity, in which the models are trained on similarity labels that are
obtained by Intersection-over-Union (IoU). To compensate for the lack of
structural awareness in IoU, graph-based approaches such as Graph Matching
Networks (GMNs) are used, which require pairwise inference for comparing data
instances, making GMNs less practical for retrieval applications. In this
paper, an effective evaluation metric for judging the structural similarity of
floor plans, coined SSIG (Structural Similarity by IoU and GED), is proposed
based on both image and graph distances. In addition, an efficient algorithm is
developed that uses SSIG to rank a large-scale floor plan database. Code will
be openly available.
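The abstract describes SSIG (Structural Similarity by IoU and GED) as combining an image-space Intersection-over-Union with a graph edit distance over floor plan graphs. The exact formula is not stated in the abstract, so the sketch below is only an illustrative combination under assumed conventions: the GED normalization and the `alpha` mixing weight are my assumptions, not the paper's definition. It uses `numpy` for pixel masks and `networkx` for small room-adjacency graphs:

```python
import numpy as np
import networkx as nx

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Pixel-wise Intersection-over-Union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

def normalized_ged(g_a: nx.Graph, g_b: nx.Graph) -> float:
    """Graph edit distance scaled to [0, 1] by the larger graph's size.

    Assumption: unit edit costs; exact GED is only tractable for the
    small graphs typical of floor plan room-adjacency structures.
    """
    ged = nx.graph_edit_distance(g_a, g_b)
    denom = max(g_a.number_of_nodes() + g_a.number_of_edges(),
                g_b.number_of_nodes() + g_b.number_of_edges())
    return ged / denom if denom else 0.0

def ssig_like(mask_a, mask_b, g_a, g_b, alpha: float = 0.5) -> float:
    """Hypothetical SSIG-style score: blend image and graph similarity.

    alpha weights IoU against (1 - normalized GED); the paper defines
    the actual combination.
    """
    return alpha * iou(mask_a, mask_b) + (1 - alpha) * (1 - normalized_ged(g_a, g_b))
```

Identical floor plans (same mask, same adjacency graph) score 1.0 under this sketch; structural differences in either the pixel layout or the room graph pull the score down.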
Related papers
- A simple way to learn metrics between attributed graphs [11.207372645301094]
We propose a new Simple Graph Metric Learning - SGML - model with few trainable parameters.
This model allows us to build an appropriate distance from a database of labeled (attributed) graphs to improve the performance of simple classification algorithms.
arXiv Detail & Related papers (2022-09-26T14:32:38Z)
- Attributable Visual Similarity Learning [90.69718495533144]
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images.
Motivated by the human semantic similarity cognition, we propose a generalized similarity learning paradigm to represent the similarity between two images with a graph.
Experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods.
arXiv Detail & Related papers (2022-03-28T17:35:31Z)
- ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster-level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z)
- Towards Similarity-Aware Time-Series Classification [51.2400839966489]
We study time-series classification (TSC), a fundamental task of time-series data mining.
We propose Similarity-Aware Time-Series Classification (SimTSC), a framework that models similarity information with graph neural networks (GNNs).
arXiv Detail & Related papers (2022-01-05T02:14:57Z)
- Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the view-similarity between graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-economical, obtains stable results, and scales well with the data size.
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- A Domain-Oblivious Approach for Learning Concise Representations of Filtered Topological Spaces [7.717214217542406]
We propose a persistence diagram hashing framework that learns a binary code representation of persistence diagrams.
This framework is built upon a generative adversarial network (GAN) with a diagram distance loss function to steer the learning process.
Our proposed method is directly applicable to various datasets without the need to retrain the model.
arXiv Detail & Related papers (2021-05-25T20:44:28Z)
- Graph-Based Generative Representation Learning of Semantically and Behaviorally Augmented Floorplans [12.488287536032747]
We present a floorplan embedding technique that uses an attributed graph to represent the geometric information as well as design semantics and behavioral features of the inhabitants as node and edge attributes.
A Long Short-Term Memory (LSTM) Variational Autoencoder (VAE) architecture is proposed and trained to embed attributed graphs as vectors in a continuous space.
A user study is conducted to evaluate the coupling of similar floorplans retrieved from the embedding space with respect to a given input.
arXiv Detail & Related papers (2020-12-08T20:51:56Z)
- Hallucinative Topological Memory for Zero-Shot Visual Planning [86.20780756832502]
In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline.
Most previous works on VP approached the problem by planning in a learned latent space, resulting in low-quality visual plans.
Here, we propose a simple VP method that plans directly in image space and displays competitive performance.
arXiv Detail & Related papers (2020-02-27T18:54:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.