BERT based freedom to operate patent analysis
- URL: http://arxiv.org/abs/2105.00817v1
- Date: Mon, 12 Apr 2021 18:30:46 GMT
- Title: BERT based freedom to operate patent analysis
- Authors: Michael Freunek and André Bodmer
- Abstract summary: We present a method to apply BERT to freedom-to-operate patent analysis and patent searches.
BERT is fine-tuned by training it to map patent descriptions to the corresponding independent claims.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present a method to apply BERT to freedom-to-operate patent
analysis and patent searches. According to the method, BERT is fine-tuned by
training it to map patent descriptions to the corresponding independent claims. Each
description represents an invention that is protected by the corresponding claims. A
BERT model trained in this way should be able to identify or rank freedom-to-operate-relevant
patents based on a short description of an invention or product. We tested the
method by training BERT on the patent class G06T1/00 and applying the trained
model to five inventions classified in G06T1/60, described via their DOCDB abstracts.
The DOCDB abstracts are available on ESPACENET of the European Patent Office.
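As a rough illustration of this pipeline, the sketch below fine-tunes BERT on (description, independent claim) pairs and then scores candidate claims against a short product description. It assumes the HuggingFace transformers library; the pair construction, negative sampling, and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
# Sketch only: fine-tune BERT to match patent descriptions to independent
# claims, then score a new product description against candidate claims.
# Pairing scheme and negatives are assumptions, not the authors' exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # label 1 = claim matches description

def encode(descriptions, claims, labels=None):
    batch = tokenizer(descriptions, claims, truncation=True,
                      padding=True, max_length=512, return_tensors="pt")
    if labels is not None:
        batch["labels"] = torch.tensor(labels)
    return batch

# Toy training step: positive = a description with its own independent claim,
# negative = the same description with a claim from a different patent.
batch = encode(
    ["Description of invention A ...", "Description of invention A ..."],
    ["Independent claim of patent A ...", "Independent claim of patent B ..."],
    labels=[1, 0])
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch).loss
loss.backward()
optimizer.step()

# FTO search: rank candidate claims for a short description of a new product.
model.eval()
with torch.no_grad():
    logits = model(**encode(
        ["Short description of the new product ..."] * 2,
        ["Candidate claim 1 ...", "Candidate claim 2 ..."])).logits
print(logits.softmax(dim=-1)[:, 1])  # higher = more FTO-relevant
```

In practice one would train over many patents from the target class (here G06T1/00) and rank the full candidate pool by the positive-class probability.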
Related papers
- PatentEdits: Framing Patent Novelty as Textual Entailment
We introduce the PatentEdits dataset, which contains 105K examples of successful revisions.
We design algorithms to label edits sentence by sentence, then establish how well these edits can be predicted with large language models.
We demonstrate that evaluating textual entailment between cited references and draft sentences is especially effective in predicting which inventive claims remained unchanged or are novel in relation to prior art.
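A hedged sketch of the entailment step: an off-the-shelf NLI model (roberta-large-mnli, our stand-in, not necessarily the paper's model) scores whether a cited-reference passage entails a draft claim sentence; entailed sentences are likely anticipated by the prior art, non-entailed ones are candidates for novelty.

```python
# Stand-in NLI model: prior-art passage as premise, draft claim as hypothesis.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")
premise = "The cited reference discloses a cache storing decoded image tiles."
hypothesis = "A graphics memory caches decoded image tiles."
print(nli({"text": premise, "text_pair": hypothesis}))
# e.g. {'label': 'ENTAILMENT', ...} -> the claim is likely anticipated
```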
arXiv Detail & Related papers (2024-11-20T17:23:40Z)
- Structural Representation Learning and Disentanglement for Evidential Chinese Patent Approval Prediction
This paper presents a pioneering effort on evidential patent approval prediction, using a retrieval-based classification approach.
We propose a novel framework called DiSPat, which focuses on structural representation learning and disentanglement.
Our framework surpasses state-of-the-art baselines on patent approval prediction, while also exhibiting enhanced evidentiality.
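DiSPat's architecture is more involved, but the underlying retrieval-based classification idea can be illustrated generically: retrieve the most similar prior applications and predict approval from their outcomes. Everything below (toy data, TF-IDF retrieval, nearest-neighbour vote) is our illustrative assumption, not the DiSPat model.

```python
# Generic retrieval-based classification, not DiSPat itself: TF-IDF retrieval
# plus a nearest-neighbour vote over prior approval outcomes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

texts = ["claims about an image cache ...",        # toy prior applications
         "claims about a battery anode ...",
         "claims about image compression ..."]
approved = [1, 0, 1]                                # toy outcomes

clf = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
clf.fit(texts, approved)
print(clf.predict(["claims about an image cache in a GPU ..."]))  # -> [1]
```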
arXiv Detail & Related papers (2024-08-23T05:44:16Z)
- ClaimCompare: A Data Pipeline for Evaluation of Novelty Destroying Patent Pairs
We introduce a novel data pipeline, ClaimCompare, designed to generate labeled patent claim datasets suitable for training IR and ML models.
To the best of our knowledge, ClaimCompare is the first pipeline that can generate multiple novelty destroying patent datasets.
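As a hedged sketch of what such a pipeline produces, the snippet below turns citation records into labeled (claim, prior-art claim) pairs; the field names and the use of X-type citations as novelty-destroying labels are our assumptions, not ClaimCompare's actual schema.

```python
# Hedged sketch: turn citation records into labeled claim pairs. Field names
# and the X-citation labeling rule are assumptions about such a pipeline.
def build_pairs(applications):
    """Each application: {'claim': str, 'cited_claims': {id: str},
    'novelty_destroying_ids': set of ids cited as X documents}."""
    pairs = []
    for app in applications:
        for ref_id, ref_claim in app["cited_claims"].items():
            label = 1 if ref_id in app["novelty_destroying_ids"] else 0
            pairs.append((app["claim"], ref_claim, label))
    return pairs

sample = [{"claim": "A device comprising ...",
           "cited_claims": {"EP1": "A device ...", "EP2": "A method ..."},
           "novelty_destroying_ids": {"EP1"}}]
print(build_pairs(sample))  # [(claim, EP1 claim, 1), (claim, EP2 claim, 0)]
```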
arXiv Detail & Related papers (2024-07-16T21:38:45Z)
- Natural Language Processing in Patents: A Survey
Patents, encapsulating crucial technical and legal information, present a rich domain for natural language processing (NLP) applications.
As NLP technologies evolve, large language models (LLMs) have demonstrated outstanding capabilities in general text processing and generation tasks.
This paper aims to equip NLP researchers with the essential knowledge to navigate this complex domain efficiently.
arXiv Detail & Related papers (2024-03-06T23:17:16Z)
- PaECTER: Patent-level Representation Learning using Citation-informed Transformers
PaECTER is a publicly available, open-source document-level encoder specific for patents.
We fine-tune BERT for Patents with examiner-added citation information to generate numerical representations for patent documents.
PaECTER performs better in similarity tasks than current state-of-the-art models used in the patent domain.
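A minimal usage sketch for a citation-informed patent encoder of this kind, assuming the sentence-transformers library; the model id below is an assumption, so substitute the officially released PaECTER checkpoint.

```python
# Assumed model id; substitute the official PaECTER checkpoint.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("mpi-inno-comp/paecter")
emb = model.encode(["Title and abstract of patent A ...",
                    "Title and abstract of patent B ..."])
print(util.cos_sim(emb[0], emb[1]))  # document-level patent similarity
```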
arXiv Detail & Related papers (2024-02-29T18:09:03Z)
- Towards an Enforceable GDPR Specification
Privacy by Design (PbD) is prescribed by modern privacy regulations such as the EU's GDPR.
One emerging technique to realize PbD is runtime enforcement (RE).
We present a set of requirements and an iterative methodology for creating formal specifications of legal provisions.
arXiv Detail & Related papers (2024-02-27T09:38:51Z)
- Unveiling Black-boxes: Explainable Deep Learning Models for Patent Classification
State-of-the-art methods for multi-label patent classification rely on opaque deep neural networks (DNNs).
We propose a novel deep explainable patent classification framework that introduces layer-wise relevance propagation (LRP).
Considering the relevance score, we then generate explanations by visualizing relevant words for the predicted patent class.
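The paper's LRP computation is model-specific; as a simpler stand-in, the sketch below uses gradient-times-input saliency, which likewise assigns each input token a relevance score for the predicted class. The untrained classification head on bert-base-uncased is for illustration only.

```python
# Gradient-x-input saliency as a stand-in for LRP: which tokens drive the
# predicted class? The head is untrained here, so scores are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

inputs = tok("A method for compressing image data in GPU memory",
             return_tensors="pt")
embeds = model.bert.embeddings.word_embeddings(inputs["input_ids"])
embeds.retain_grad()                       # keep the gradient of a non-leaf
logits = model(inputs_embeds=embeds,
               attention_mask=inputs["attention_mask"]).logits
logits[0, logits.argmax()].backward()      # backprop from the top class

relevance = (embeds.grad * embeds).sum(-1).squeeze()  # per-token relevance
for token, r in zip(tok.convert_ids_to_tokens(inputs["input_ids"][0]),
                    relevance):
    print(f"{token:>12s} {r.item():+.4f}")
```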
arXiv Detail & Related papers (2023-10-31T14:11:37Z)
- BiBERT: Accurate Fully Binarized BERT
BiBERT is an accurate, fully binarized BERT designed to eliminate performance bottlenecks.
Our method yields impressive savings of 56.3x in FLOPs and 31.2x in model size.
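A toy illustration of the core operation, 1-bit weight binarization with a scaling factor; BiBERT's full method also binarizes activations and adds training aids that this sketch omits.

```python
# Per-tensor 1-bit binarization: w -> alpha * sign(w), with alpha = mean|w|,
# the scale that minimizes the L2 reconstruction error.
import torch

def binarize(w: torch.Tensor) -> torch.Tensor:
    alpha = w.abs().mean()
    return alpha * torch.sign(w)  # weights in {-alpha, +alpha}

print(binarize(torch.randn(4, 4)))
```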
arXiv Detail & Related papers (2022-03-12T09:46:13Z)
- BERT based patent novelty search by training claims to their own description
We introduce a new scoring scheme, relevance scoring or novelty scoring, to process the output of BERT in a meaningful way.
We tested the method on patent applications by training BERT on the first claims of patents and corresponding descriptions.
BERT's output was processed according to the relevance score, and the results were compared with the X documents cited in the search reports.
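The evaluation step can be sketched independently of the model: rank candidate patents by their relevance score and check the overlap with the X documents cited in the search report. The function and data below are illustrative assumptions, not the paper's code.

```python
# Illustrative only: rank candidates by relevance score and check overlap
# with the X documents from the search report.
def novelty_ranking(scores, x_documents, k=5):
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return top, [doc for doc in top if doc in x_documents]

scores = {"EP111": 0.91, "EP222": 0.40, "EP333": 0.87}
print(novelty_ranking(scores, {"EP333"}, k=2))
# (['EP111', 'EP333'], ['EP333'])
```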
arXiv Detail & Related papers (2021-03-01T16:54:50Z)
- LEGAL-BERT: The Muppets straight out of Law School
We explore approaches for applying BERT models to downstream legal tasks, evaluating on multiple datasets.
Our findings indicate that the previous guidelines for pre-training and fine-tuning, often blindly followed, do not always generalize well in the legal domain.
We release LEGAL-BERT, a family of BERT models intended to assist legal NLP research, computational law, and legal technology applications.
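The released checkpoints are on the HuggingFace hub; below is a minimal fill-mask probe, assuming the nlpaueb/legal-bert-base-uncased model id from the authors' release.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="nlpaueb/legal-bert-base-uncased")
for pred in fill("The licensee shall [MASK] the licensor against all claims."):
    print(pred["token_str"], round(pred["score"], 3))
```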
arXiv Detail & Related papers (2020-10-06T09:06:07Z)
- TernaryBERT: Distillation-aware Ultra-low Bit BERT
We propose TernaryBERT, which ternarizes the weights in a fine-tuned BERT model.
Experiments on the GLUE benchmark and SQuAD show that our proposed TernaryBERT outperforms the other BERT quantization methods.
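A toy sketch of threshold-based ternarization in the TWN style; TernaryBERT's distillation-aware training is omitted here, so this illustrates the quantizer only.

```python
# Threshold-and-scale ternarization: small weights -> 0, the rest -> +/-alpha.
import torch

def ternarize(w: torch.Tensor) -> torch.Tensor:
    delta = 0.7 * w.abs().mean()                  # pruning threshold
    mask = (w.abs() > delta).float()
    alpha = (w.abs() * mask).sum() / mask.sum()   # scale from kept weights
    return alpha * torch.sign(w) * mask           # weights in {-alpha, 0, +alpha}

print(ternarize(torch.randn(4, 4)))
```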
arXiv Detail & Related papers (2020-09-27T10:17:28Z)