Learning Efficient Representations for Image-Based Patent Retrieval
- URL: http://arxiv.org/abs/2308.13749v1
- Date: Sat, 26 Aug 2023 03:19:14 GMT
- Title: Learning Efficient Representations for Image-Based Patent Retrieval
- Authors: Hongsong Wang and Yuqi Zhang
- Abstract summary: We present a simple and lightweight model for content-based patent retrieval.
Our approach significantly outperforms competing methods on a large-scale benchmark.
When carefully scaled up, our model reaches a remarkably high mAP of 93.5%.
- Score: 16.323708969088557
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Patent retrieval has been attracting tremendous interest from researchers in
intellectual property and information retrieval communities in the past
decades. However, most existing approaches rely on the textual and metadata
information of patents, and content-based retrieval over patent drawings is
rarely investigated. Based on the traits of patent drawing images, we present a
simple and lightweight model for this task. Without bells and whistles, this
approach significantly outperforms competing methods on a large-scale
benchmark and improves the state of the art by a notable 33.5% in mean
average precision (mAP). Further experiments reveal that this model can
be carefully scaled up to achieve a remarkably high mAP of 93.5%. Our
method ranks first in the ECCV 2022 Patent Diagram Image Retrieval Challenge.
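The abstract describes a classic embedding-and-rank pipeline: encode every drawing, sort the gallery by similarity to the query, and score the ranking with mean average precision. Below is a minimal Python sketch of that evaluation loop; the cosine similarity and the random toy embeddings are illustrative assumptions, not the authors' actual model, whose embeddings would come from their lightweight CNN.

```python
import numpy as np

def average_precision(ranked_relevance: np.ndarray) -> float:
    """AP for one query: mean of precision@k at each relevant rank k."""
    hits = np.flatnonzero(ranked_relevance)
    if hits.size == 0:
        return 0.0
    precisions = (np.arange(hits.size) + 1) / (hits + 1)
    return float(precisions.mean())

def retrieve_and_score(query_emb, gallery_emb, query_labels, gallery_labels):
    """Rank the gallery by cosine similarity and report mAP."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T                      # (num_queries, num_gallery)
    order = np.argsort(-sims, axis=1)   # best match first
    aps = []
    for i, ranking in enumerate(order):
        relevant = (gallery_labels[ranking] == query_labels[i]).astype(int)
        aps.append(average_precision(relevant))
    return float(np.mean(aps))

# Toy usage with random "embeddings"; in practice these would be
# features extracted from the patent drawings by a CNN.
rng = np.random.default_rng(0)
q, g = rng.normal(size=(5, 128)), rng.normal(size=(100, 128))
ql, gl = rng.integers(0, 10, 5), rng.integers(0, 10, 100)
print(f"mAP = {retrieve_and_score(q, g, ql, gl):.3f}")
```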
Related papers
- Large Language Model Informed Patent Image Retrieval [0.0]
We propose a language-informed, distribution-aware multimodal approach to patent image feature learning.
Our proposed method achieves state-of-the-art or comparable performance in image-based patent retrieval with mAP +53.3%, Recall@10 +41.8%, and MRR@10 +51.9%.
arXiv Detail & Related papers (2024-04-30T08:45:16Z)
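One plausible reading of the "language-informed" patent image features described above is to blend a CLIP image embedding with the embedding of an LLM-generated description of the same drawing. The sketch below illustrates that idea only; the model name, the `describe()` stub, and the 50/50 fusion weight are all assumptions, not the paper's actual design.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def describe(image: Image.Image) -> str:
    # Stand-in for an LLM captioner (e.g. a vision-language model
    # prompted to describe the patent drawing).
    return "a line drawing of a mechanical device from a patent"

@torch.no_grad()
def fused_embedding(image: Image.Image) -> torch.Tensor:
    img_inputs = processor(images=image, return_tensors="pt")
    txt_inputs = processor(text=[describe(image)],
                           return_tensors="pt", padding=True)
    img_feat = model.get_image_features(**img_inputs)
    txt_feat = model.get_text_features(**txt_inputs)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    # Assumed equal-weight fusion of the two modalities.
    return 0.5 * img_feat + 0.5 * txt_feat
```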
- AIGCOIQA2024: Perceptual Quality Assessment of AI Generated Omnidirectional Images [70.42666704072964]
We establish a large-scale AI-generated omnidirectional image IQA database named AIGCOIQA2024.
A subjective IQA experiment is conducted to assess human visual preferences from three perspectives.
We conduct a benchmark experiment to evaluate the performance of state-of-the-art IQA models on our database.
arXiv Detail & Related papers (2024-04-01T10:08:23Z)
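IQA benchmarks like the one above are commonly scored by how well model predictions correlate with human mean opinion scores (MOS). A sketch of that standard protocol follows, assuming it applies here; the numbers are toy data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos = np.array([3.2, 4.1, 1.8, 2.9, 4.6])   # human mean opinion scores
pred = np.array([3.0, 4.3, 2.1, 2.5, 4.4])  # IQA model predictions

srcc, _ = spearmanr(pred, mos)  # monotonic (rank) agreement
plcc, _ = pearsonr(pred, mos)   # linear agreement
print(f"SRCC={srcc:.3f}  PLCC={plcc:.3f}")
```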
- PaECTER: Patent-level Representation Learning using Citation-informed Transformers [0.16785092703248325]
PaECTER is a publicly available, open-source document-level encoder specific for patents.
We fine-tune BERT for Patents with examiner-added citation information to generate numerical representations for patent documents.
PaECTER performs better in similarity tasks than current state-of-the-art models used in the patent domain.
arXiv Detail & Related papers (2024-02-29T18:09:03Z)
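Since PaECTER is released publicly, a document-level patent encoder like it can be used for similarity search through the sentence-transformers API, sketched below. The checkpoint id is an assumption; check the paper's repository for the actual release.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed checkpoint name for the released PaECTER encoder.
model = SentenceTransformer("mpi-inno-comp/paecter")

patents = [
    "A lithium-ion battery electrode comprising a silicon composite.",
    "An anode material for rechargeable batteries based on silicon.",
    "A method for brewing coffee under controlled pressure.",
]
emb = model.encode(patents, convert_to_tensor=True,
                   normalize_embeddings=True)
# The two battery patents should score higher than the coffee patent.
print(util.cos_sim(emb[0], emb[1:]))
```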
- Raising the Bar of AI-generated Image Detection with CLIP [50.345365081177555]
The aim of this work is to explore the potential of pre-trained vision-language models (VLMs) for universal detection of AI-generated images.
We develop a lightweight detection strategy based on CLIP features and study its performance in a wide variety of challenging scenarios.
arXiv Detail & Related papers (2023-11-30T21:11:20Z)
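A "lightweight detection strategy based on CLIP features" is commonly instantiated as a linear probe over frozen embeddings. The sketch below shows that pattern with random placeholder features standing in for CLIP embeddings; whether the paper uses exactly a linear classifier is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Placeholder for CLIP image embeddings of real (0) vs generated (1) images.
X = rng.normal(size=(200, 512))
y = rng.integers(0, 2, size=200)

# Linear probe: the backbone stays frozen, only this classifier is trained.
probe = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", probe.score(X[150:], y[150:]))
```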
- Classification of Visualization Types and Perspectives in Patents [9.123089032348311]
We adopt state-of-the-art deep learning methods for the classification of visualization types and perspectives in patent images.
We derive a set of hierarchical classes from a dataset that provides weakly-labeled data for image perspectives.
arXiv Detail & Related papers (2023-07-19T21:45:07Z)
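Classifying patent images into visualization types and perspectives is a standard transfer-learning setup: a pretrained backbone with a new classification head. The ResNet-50 choice and the class count below are illustrative assumptions; the summary above does not state the paper's exact architectures.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # e.g. block diagram, flow chart, graph, ... (assumed set)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new head

# Freeze the backbone and train only the head for a lightweight baseline.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc.")
```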
- Towards Artistic Image Aesthetics Assessment: a Large-scale Dataset and a New Method [64.40494830113286]
We first introduce a large-scale AIAA dataset: Boldbrush Artistic Image dataset (BAID), which consists of 60,337 artistic images covering various art forms.
We then propose a new method, SAAN, which can effectively extract and utilize style-specific and generic aesthetic information to evaluate artistic images.
Experiments demonstrate that our proposed approach outperforms existing IAA methods on the proposed BAID dataset.
arXiv Detail & Related papers (2023-03-27T12:59:15Z)
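SAAN combines style-specific and generic aesthetic information. The toy module below only illustrates that fusion idea by concatenating the two feature types before a score head; the real SAAN architecture is more involved, and the dimensions here are arbitrary.

```python
import torch
import torch.nn as nn

class TwoBranchAesthetic(nn.Module):
    """Toy fusion of style-specific and generic aesthetic features."""
    def __init__(self, style_dim: int = 128, generic_dim: int = 128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(style_dim + generic_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # scalar aesthetic score
        )

    def forward(self, style_feat, generic_feat):
        return self.head(torch.cat([style_feat, generic_feat], dim=-1))

scores = TwoBranchAesthetic()(torch.randn(4, 128), torch.randn(4, 128))
print(scores.shape)  # torch.Size([4, 1])
```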
- IRGen: Generative Modeling for Image Retrieval [82.62022344988993]
In this paper, we present a novel methodology, reframing image retrieval as a variant of generative modeling.
We develop our model, dubbed IRGen, to address the technical challenge of converting an image into a concise sequence of semantic units.
Our model achieves state-of-the-art performance on three widely-used image retrieval benchmarks and two million-scale datasets.
arXiv Detail & Related papers (2023-03-17T17:07:36Z)
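IRGen's key step is converting an image into a short sequence of discrete semantic tokens that a generative model can then predict. One common recipe for producing such tokens is residual k-means quantization of the image embedding, sketched below; IRGen's actual tokenizer may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_residual_codebooks(embeddings, levels=4, k=256):
    """Fit one k-means codebook per level on the residuals."""
    codebooks, residual = [], embeddings.copy()
    for _ in range(levels):
        km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(residual)
        codebooks.append(km)
        residual = residual - km.cluster_centers_[km.predict(residual)]
    return codebooks

def to_semantic_ids(embedding, codebooks):
    """Encode one embedding as a short token sequence."""
    ids, residual = [], embedding[None, :]
    for km in codebooks:
        idx = int(km.predict(residual)[0])
        ids.append(idx)
        residual = residual - km.cluster_centers_[idx]
    return ids

emb = np.random.default_rng(0).normal(size=(10_000, 64)).astype(np.float32)
books = fit_residual_codebooks(emb, levels=4, k=16)
print(to_semantic_ids(emb[0], books))  # e.g. [3, 11, 7, 0]
```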
- Estimating the Performance of Entity Resolution Algorithms: Lessons Learned Through PatentsView.org [3.8494315501944736]
This paper introduces a novel evaluation methodology for entity resolution algorithms.
It is motivated by PatentsView.org, a U.S. Patent and Trademark Office (USPTO) patent data exploration tool.
arXiv Detail & Related papers (2022-10-03T21:06:35Z)
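Entity resolution output is conventionally scored with pairwise precision and recall against a ground-truth clustering. The sketch below computes the plain metrics on toy data; the paper above is about *estimating* such metrics from samples, which this deliberately does not reproduce.

```python
from itertools import combinations

def matched_pairs(clusters):
    """All unordered record pairs placed in the same cluster."""
    return {pair for c in clusters for pair in combinations(sorted(c), 2)}

predicted = [{"a", "b", "c"}, {"d"}, {"e", "f"}]
truth = [{"a", "b"}, {"c"}, {"d"}, {"e", "f"}]

pred_pairs, true_pairs = matched_pairs(predicted), matched_pairs(truth)
precision = len(pred_pairs & true_pairs) / len(pred_pairs)
recall = len(pred_pairs & true_pairs) / len(true_pairs)
print(f"pairwise precision={precision:.2f}  recall={recall:.2f}")
```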
- A Survey on Sentence Embedding Models Performance for Patent Analysis [0.0]
We propose a standard library and dataset for assessing the accuracy of embedding models based on the PatentSBERTa approach.
Results show that PatentSBERTa, BERT for Patents, and TF-IDF weighted word embeddings achieve the best accuracy for computing sentence embeddings at the subclass level.
arXiv Detail & Related papers (2022-04-28T12:04:42Z)
- To be Critical: Self-Calibrated Weakly Supervised Learning for Salient Object Detection [95.21700830273221]
Weakly-supervised salient object detection (WSOD) aims to develop saliency models using image-level annotations.
We propose a self-calibrated training strategy by explicitly establishing a mutual calibration loop between pseudo labels and network predictions.
We show that even a much smaller dataset with well-matched annotations can help models achieve better performance and generalizability.
arXiv Detail & Related papers (2021-09-04T02:45:22Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
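As a drastically simplified sketch of the part-based self-supervised idea above, one can treat two random crops ("parts") of the same unlabeled image as a positive pair and train with a standard contrastive (InfoNCE-style) loss. This is a generic illustration, not the paper's actual scheme.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Contrastive loss: matching crops attract, other images repel."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))   # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: z1[i] and z2[i] would be encoder outputs for two crops of
# image i; here random vectors stand in for the encoder.
z1, z2 = torch.randn(8, 64), torch.randn(8, 64)
print(info_nce(z1, z2))
```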
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences arising from its use.