A Survey on Sentence Embedding Models Performance for Patent Analysis
- URL: http://arxiv.org/abs/2206.02690v3
- Date: Fri, 5 Aug 2022 14:38:44 GMT
- Title: A Survey on Sentence Embedding Models Performance for Patent Analysis
- Authors: Hamid Bekamiri, Daniel S. Hain, Roman Jurowetzki
- Abstract summary: We propose a standard library and dataset for assessing the accuracy of embedding models based on the PatentSBERTa approach.
Results show that PatentSBERTa, BERT-for-Patents, and TF-IDF Weighted Word Embeddings achieve the best accuracy for computing sentence embeddings at the subclass level.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Patent data is an important source of knowledge for innovation research,
and the technological similarity between pairs of patents is a key enabling
indicator for patent analysis. Recently, researchers have been using patent
vector space models based on different NLP embedding models to calculate the
technological similarity between pairs of patents, supporting innovation
analysis, patent landscaping, technology mapping, and patent quality
evaluation. More often than not, text embedding is a vital precursor to patent
analysis tasks. A pertinent question then arises: how should we measure and
evaluate the accuracy of these embeddings? To the best of our knowledge, there
is no comprehensive survey that clearly delineates embedding models'
performance for calculating patent similarity indicators. Therefore, in this
study, we provide an overview of the accuracy of these algorithms based on
patent classification performance and propose a standard library and dataset
for assessing the accuracy of embedding models based on the PatentSBERTa
approach. In a detailed discussion, we report the performance of the top three
algorithms at the section, class, and subclass levels. The results, based on
the first claim of each patent, show that PatentSBERTa, BERT-for-Patents, and
TF-IDF Weighted Word Embeddings achieve the best accuracy for computing
sentence embeddings at the subclass level. These initial results also show that
model performance varies across classes, so researchers in patent analysis can
use this study to choose the most suitable model for the specific section of
patent data they work with.
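One of the top-performing approaches named in the abstract, TF-IDF Weighted Word Embeddings, can be sketched in a few lines: each claim's sentence vector is the TF-IDF-weighted average of its word vectors, and pairwise patent similarity is then the cosine between sentence vectors. The toy word vectors and claims below are illustrative stand-ins, not the paper's data or exact formulation:

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Per-document TF-IDF weight for each token (smoothed IDF)."""
    n = len(docs)
    df = Counter(tok for doc in docs for tok in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({tok: (tf[tok] / len(doc)) * math.log((1 + n) / (1 + df[tok]))
                        for tok in tf})
    return weights

def sentence_embedding(doc, weights, word_vectors, dim):
    """TF-IDF weighted average of word vectors -> one sentence vector."""
    vec, total = [0.0] * dim, 0.0
    for tok in doc:
        if tok in word_vectors:
            w = weights.get(tok, 0.0)
            total += w
            for i, x in enumerate(word_vectors[tok]):
                vec[i] += w * x
    return [x / total for x in vec] if total else vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy 2-d word vectors standing in for real pretrained embeddings.
word_vectors = {"battery": [1.0, 0.0], "cell": [0.9, 0.1],
                "antenna": [0.0, 1.0], "signal": [0.1, 0.9]}
claims = [["battery", "cell", "battery"], ["antenna", "signal"], ["battery", "cell"]]
weights = tfidf_weights(claims)
embs = [sentence_embedding(doc, w, word_vectors, 2) for doc, w in zip(claims, weights)]
# Claims about the same technology should score higher than unrelated ones:
print(cosine(embs[0], embs[2]) > cosine(embs[0], embs[1]))  # -> True
```

In practice the word vectors would come from a pretrained model and the sentence vectors from the first claim of each patent, as evaluated in the survey.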
Related papers
- PatentEdits: Framing Patent Novelty as Textual Entailment
We introduce the PatentEdits dataset, which contains 105K examples of successful revisions.
We design algorithms to label edits sentence by sentence, then establish how well these edits can be predicted with large language models.
We demonstrate that evaluating textual entailment between cited references and draft sentences is especially effective in predicting which inventive claims remained unchanged or are novel in relation to prior art.
arXiv Detail & Related papers (2024-11-20T17:23:40Z)
- ClaimBrush: A Novel Framework for Automated Patent Claim Refinement Based on Large Language Models
ClaimBrush is a novel framework for automated patent claim refinement that includes a dataset and a rewriting model.
We constructed a dataset for training and evaluating patent claim rewriting models by collecting a large number of actual patent claim rewriting cases.
Our proposed rewriting model outperformed baselines and zero-shot learning with state-of-the-art large language models.
arXiv Detail & Related papers (2024-10-08T00:20:54Z)
- Structural Representation Learning and Disentanglement for Evidential Chinese Patent Approval Prediction
This paper presents the pioneering effort on this task using a retrieval-based classification approach.
We propose a novel framework called DiSPat, which focuses on structural representation learning and disentanglement.
Our framework surpasses state-of-the-art baselines on patent approval prediction, while also exhibiting enhanced evidentiality.
arXiv Detail & Related papers (2024-08-23T05:44:16Z)
- A comparative analysis of embedding models for patent similarity
This paper makes two contributions to the field of text-based patent similarity.
It compares the performance of different kinds of patent-specific pretrained embedding models.
arXiv Detail & Related papers (2024-03-25T11:20:23Z)
- PaECTER: Patent-level Representation Learning using Citation-informed Transformers
PaECTER is a publicly available, open-source document-level encoder specific for patents.
We fine-tune BERT for Patents with examiner-added citation information to generate numerical representations for patent documents.
PaECTER performs better in similarity tasks than current state-of-the-art models used in the patent domain.
arXiv Detail & Related papers (2024-02-29T18:09:03Z)
- Unveiling Black-boxes: Explainable Deep Learning Models for Patent Classification
State-of-the-art methods for multi-label patent classification rely on opaque deep neural networks (DNNs).
We propose a novel deep explainable patent classification framework by introducing layer-wise relevance propagation (LRP).
Based on the relevance scores, we then generate explanations by visualizing the words most relevant to the predicted patent class.
arXiv Detail & Related papers (2023-10-31T14:11:37Z)
- The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute
We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours.
We pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length.
This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput.
arXiv Detail & Related papers (2023-09-20T10:31:17Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation
We propose a novel framework for Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, with better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Hybrid Model for Patent Classification using Augmented SBERT and KNN
This study provides a hybrid approach to patent claim classification with Sentence-BERT (SBERT) and K-Nearest Neighbours (KNN).
The proposed framework predicts the class and subclass of an input patent by finding its top-k most semantically similar patents.
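The retrieval-then-vote step described here can be sketched without the SBERT encoder itself: given precomputed claim embeddings, rank labelled patents by cosine similarity to the query and take a majority vote over the top k. The toy embeddings and CPC subclass labels below are illustrative placeholders, not the paper's model or data:

```python
import math
from collections import Counter

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def knn_predict(query_emb, labeled_embs, k=3):
    """Label a query claim by majority vote over its k most similar labelled claims."""
    ranked = sorted(labeled_embs, key=lambda item: cosine(query_emb, item[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy 2-d claim embeddings (stand-ins for SBERT vectors) labelled with CPC subclasses.
labeled = [([1.0, 0.1], "H01M"), ([0.9, 0.2], "H01M"),
           ([0.1, 1.0], "H01Q"), ([0.2, 0.9], "H01Q")]
print(knn_predict([0.95, 0.15], labeled, k=3))  # -> H01M
```

With real SBERT embeddings the ranking step is typically done with a vectorized similarity search rather than a Python sort, but the voting logic is the same.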
arXiv Detail & Related papers (2021-03-22T15:23:19Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We achieve new state-of-the-art results in both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- A Diagnostic Study of Explainability Techniques for Text Classification
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.