Bringing order into the realm of Transformer-based language models for
artificial intelligence and law
- URL: http://arxiv.org/abs/2308.05502v2
- Date: Sat, 3 Feb 2024 09:54:51 GMT
- Title: Bringing order into the realm of Transformer-based language models for
artificial intelligence and law
- Authors: Candida M. Greco, Andrea Tagarelli
- Abstract summary: Transformer-based language models (TLMs) have been widely recognized as a cutting-edge technology.
This article provides the first systematic overview of TLM-based methods for AI-driven problems and tasks in the legal sphere.
- Score: 3.2074558838636262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformer-based language models (TLMs) have been widely recognized as a
cutting-edge technology for the successful development of deep-learning-based
solutions to problems and applications that require natural language processing
and understanding. As in other textual domains, TLMs have pushed the
state of the art of AI approaches for many tasks of interest in the legal
domain. Although the first Transformer model was proposed only about six years ago,
this technology has progressed at an unprecedented rate, with BERT and related
models serving as a major reference, including in the legal domain. This article
provides the first systematic overview of TLM-based methods for AI-driven problems
and tasks in the legal sphere. A major goal is to highlight research advances in
this field so as to understand, on the one hand, how Transformers have contributed
to the success of AI in supporting legal processes and, on the other hand, what the
current limitations and opportunities for further research development are.
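To make the kind of TLM-based method surveyed here concrete, the following is a minimal sketch (not taken from the article) of how a BERT-family model is commonly applied to a legal text classification task with the Hugging Face transformers library. The checkpoint choice, binary label set, and example sentence are illustrative assumptions; a legal-domain BERT variant would typically be substituted for the generic checkpoint.

```python
# Minimal sketch: applying a BERT-family TLM to a legal classification task.
# Checkpoint, labels, and example text are illustrative assumptions, not from the article.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

checkpoint = "bert-base-uncased"  # placeholder; a legal-domain BERT variant could be swapped in
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=2,  # hypothetical binary label set, e.g. claim accepted vs. rejected
)

text = "The appellant contends that the contract was void for lack of consideration."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index (classification head is untrained here)
```

In practice the classification head would be fine-tuned on a labeled legal corpus before the prediction step above is meaningful.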
Related papers
- Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives [10.16399860867284]
The emergence of Generative Artificial Intelligence (AI) and Large Language Models (LLMs) has marked a new era of Natural Language Processing (NLP).
This paper explores the current state of these cutting-edge technologies, demonstrating their remarkable advancements and wide-ranging applications.
arXiv Detail & Related papers (2024-07-20T18:48:35Z)
- Software Engineering Methods For AI-Driven Deductive Legal Reasoning [2.95701410483693]
We show how principled software engineering techniques can enhance AI-driven legal reasoning over complex statutes.
We also show how such techniques can unlock new applications in automated meta-reasoning.
arXiv Detail & Related papers (2024-04-15T15:33:29Z)
- A Survey on Large Language Models from Concept to Implementation [4.219910716090213]
Recent advancements in Large Language Models (LLMs) have broadened the scope of natural language processing (NLP) applications.
This paper investigates the multifaceted applications of these models, with an emphasis on the GPT series.
This exploration focuses on the transformative impact of artificial intelligence (AI)-driven tools on traditional tasks like coding and problem-solving.
arXiv Detail & Related papers (2024-03-27T19:35:41Z)
- Introduction to Transformers: an NLP Perspective [59.0241868728732]
We introduce basic concepts of Transformers and present key techniques that underlie the recent advances of these models.
This includes a description of the standard Transformer architecture, a series of model refinements, and common applications.
arXiv Detail & Related papers (2023-11-29T13:51:04Z)
- Large Language Models in Law: A Survey [34.785207813971134]
The application of legal large language models (LLMs) is still in its nascent stage.
We provide an overview of AI technologies in the legal field and showcase the recent research in LLMs.
We explore the limitations of legal LLMs with respect to data, algorithms, and judicial practice.
arXiv Detail & Related papers (2023-11-26T00:48:12Z)
- Combatting Human Trafficking in the Cyberspace: A Natural Language Processing-Based Methodology to Analyze the Language in Online Advertisements [55.2480439325792]
This project tackles the pressing issue of human trafficking in online C2C marketplaces through advanced Natural Language Processing (NLP) techniques.
We introduce a novel methodology for generating pseudo-labeled datasets with minimal supervision, serving as a rich resource for training state-of-the-art NLP models.
A key contribution is the implementation of an interpretability framework using Integrated Gradients, providing explainable insights crucial for law enforcement.
arXiv Detail & Related papers (2023-11-22T02:45:01Z)
- A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks [60.38369406877899]
The Transformer is a deep neural network that employs a self-attention mechanism to comprehend the contextual relationships within sequential data.
Transformer models excel in handling long dependencies between input sequence elements and enable parallel processing (a minimal self-attention sketch is given after this list).
Our survey identifies the top five application domains for Transformer-based models.
arXiv Detail & Related papers (2023-06-11T23:13:51Z)
- Selected Trends in Artificial Intelligence for Space Applications [69.3474006357492]
This chapter focuses on differentiable intelligence and on-board machine learning.
We discuss a few selected projects originating from the European Space Agency's (ESA) Advanced Concepts Team (ACT).
arXiv Detail & Related papers (2022-12-10T07:49:50Z)
- A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models [21.124096884958337]
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG).
We present a systematic critical review on the common tasks, main approaches, and evaluation methods in this area.
We discuss the challenges that the field is facing, and put forward various promising future directions.
arXiv Detail & Related papers (2022-01-14T08:32:20Z)
- Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
- Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
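As referenced in the entry on "A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks" above, the following minimal sketch illustrates scaled dot-product self-attention, the mechanism summarized there. The tensor shapes, projection matrices, and dimensions are illustrative assumptions and are not drawn from any of the listed papers.

```python
# Minimal sketch of scaled dot-product self-attention (single head, no masking).
# Dimensions and weights are illustrative assumptions.
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """x: (batch, seq_len, d_model); w_q, w_k, w_v: (d_model, d_model) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v                       # project tokens into query/key/value spaces
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # pairwise token affinities
    weights = torch.softmax(scores, dim=-1)                   # each token attends over all positions
    return weights @ v                                        # context-aware token representations

d_model, seq_len = 64, 10                          # illustrative sizes
x = torch.randn(1, seq_len, d_model)               # a toy batch with one sequence
w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)                                   # torch.Size([1, 10, 64])
```

Because every token attends to every other token in a single matrix product, the sketch also shows why such models handle long-range dependencies and parallelize well, as the survey entries above note.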
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.