An energy-based comparative analysis of common approaches to text
classification in the Legal domain
- URL: http://arxiv.org/abs/2311.01256v2
- Date: Mon, 5 Feb 2024 11:13:59 GMT
- Title: An energy-based comparative analysis of common approaches to text
classification in the Legal domain
- Authors: Sinan Gultekin and Achille Globo and Andrea Zugarini and Marco
Ernandes and Leonardo Rigutini
- Abstract summary: Large Language Models (LLMs) are extensively adopted to address NLP problems in academia and industry.
In this work, we present a detailed comparison of LLM-based and traditional approaches (e.g. SVM) on the LexGLUE benchmark.
The results indicate that the simplest algorithms very often achieve performance very close to that of large LLMs.
- Score: 0.856335408411906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most Machine Learning research evaluates the best solutions in terms of
performance. However, in the race for the best-performing model, many important
aspects are often overlooked, when in fact they should be carefully considered.
The performance gaps between different approaches are sometimes negligible,
whereas factors such as production cost, energy consumption, and carbon footprint
must be taken into account. Large Language Models (LLMs) are extensively adopted
to address NLP problems in academia and industry. In this work, we present a
detailed quantitative comparison of LLM-based and traditional approaches (e.g.
SVM) on the LexGLUE benchmark, which takes into account both performance
(standard indices) and alternative metrics such as timing, power consumption,
and cost: in a word, the carbon footprint. In our analysis, we consider the
prototyping phase (model selection through training-validation-test iterations)
and the in-production phase separately, since they follow different
implementation procedures and require different resources. The results indicate
that the simplest algorithms very often achieve performance very close to that
of large LLMs, but with very low power consumption and much lower resource
demands. These results suggest that companies should include additional
evaluation criteria when choosing Machine Learning (ML) solutions.
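Below is a minimal sketch of the kind of comparison the abstract describes: a classical TF-IDF + linear-SVM baseline evaluated not only on F1 scores but also on wall-clock time and a rough energy/carbon estimate, with the prototyping (training) and in-production (inference) phases timed separately. The Hugging Face lex_glue/scotus split, the assumed CPU power draw, and the grid carbon intensity are illustrative assumptions, not the paper's actual setup or measured figures.

```python
# Sketch: SVM baseline on a LexGLUE-style task with time/energy bookkeeping.
import time

from datasets import load_dataset                       # assumes the HF "lex_glue" dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

AVG_POWER_WATTS = 65.0      # assumed average CPU power draw (illustrative)
GRID_KGCO2_PER_KWH = 0.39   # assumed grid carbon intensity (illustrative)

def energy_kwh(seconds: float, watts: float = AVG_POWER_WATTS) -> float:
    """Convert elapsed wall-clock time into an energy estimate (kWh)."""
    return watts * seconds / 3_600_000.0

# Single-label LexGLUE task (SCOTUS); other LexGLUE tasks would need multi-label handling.
ds = load_dataset("lex_glue", "scotus")
train_texts, train_labels = ds["train"]["text"], ds["train"]["label"]
test_texts, test_labels = ds["test"]["text"], ds["test"]["label"]

# --- "prototyping" phase: fit the baseline and time it ---
t0 = time.perf_counter()
vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train_texts)
clf = LinearSVC(C=1.0)
clf.fit(X_train, train_labels)
train_seconds = time.perf_counter() - t0

# --- "in-production" phase: inference cost on the test split ---
t0 = time.perf_counter()
preds = clf.predict(vectorizer.transform(test_texts))
infer_seconds = time.perf_counter() - t0

print(f"micro-F1: {f1_score(test_labels, preds, average='micro'):.3f}")
print(f"macro-F1: {f1_score(test_labels, preds, average='macro'):.3f}")
print(f"training:  {train_seconds:.1f}s, ~{energy_kwh(train_seconds):.4f} kWh, "
      f"~{energy_kwh(train_seconds) * GRID_KGCO2_PER_KWH * 1000:.2f} g CO2eq")
print(f"inference: {infer_seconds:.1f}s, ~{energy_kwh(infer_seconds):.4f} kWh")
```

Running the same harness around an LLM fine-tuning and inference loop would yield directly comparable time, energy, and carbon figures for the two families of approaches.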
Related papers
- A Preliminary Study of Multilingual Code Language Models for Code Generation Task Using Translated Benchmarks [0.0]
We evaluate the performance of Poly-Coder, a pioneering open-source, multilingual CLM built for code generation.
Our results suggest that the outcomes observed in these translated benchmarks align well with evaluation metrics used during the training phase.
These initial insights highlight the need for more comprehensive empirical studies.
arXiv Detail & Related papers (2024-11-23T06:40:47Z) - Evaluating Cost-Accuracy Trade-offs in Multimodal Search Relevance Judgements [1.6637373649145606]
Large Language Models (LLMs) have demonstrated potential as effective search relevance evaluators.
There is a lack of comprehensive guidance on which models consistently perform optimally across various contexts or within specific use cases.
Our analysis investigates the trade-offs between cost and accuracy, highlighting that model performance varies significantly depending on the context.
arXiv Detail & Related papers (2024-10-25T21:29:04Z) - EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a stateless reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
arXiv Detail & Related papers (2024-10-08T17:54:03Z) - On Speeding Up Language Model Evaluation [48.51924035873411]
Development of prompt-based methods with Large Language Models (LLMs) requires making numerous decisions.
We propose a novel method to address this challenge.
We show that it can identify the top-performing method using only 5-15% of the typically needed resources.
arXiv Detail & Related papers (2024-07-08T17:48:42Z) - MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs [55.20845457594977]
Large language models (LLMs) have shown increasing capability in problem-solving and decision-making.
We present MR-Ben, a process-based benchmark that demands meta-reasoning skill.
Our meta-reasoning paradigm is especially suited for system-2 slow thinking.
arXiv Detail & Related papers (2024-06-20T03:50:23Z) - LLMs for Relational Reasoning: How Far are We? [8.840750655261251]
Large language models (LLMs) have revolutionized many areas by achieving state-of-the-art performance on downstream tasks.
Recent efforts have demonstrated that LLMs are poor at solving sequential decision-making problems.
arXiv Detail & Related papers (2024-01-17T08:22:52Z) - Efficient Benchmarking of Language Models [22.696230279151166]
We present the problem of Efficient Benchmarking, namely, intelligently reducing the costs of LM evaluation without compromising reliability.
Using the HELM benchmark as a test case, we investigate how different benchmark design choices affect the computation-reliability trade-off.
We propose an evaluation algorithm that, when applied to the HELM benchmark, leads to dramatic cost savings with minimal loss of benchmark reliability.
arXiv Detail & Related papers (2023-08-22T17:59:30Z) - Cheaply Evaluating Inference Efficiency Metrics for Autoregressive
Transformer APIs [66.30706841821123]
Large language models (LLMs) power many state-of-the-art systems in natural language processing.
LLMs are extremely computationally expensive, even at inference time.
We propose a new metric for comparing inference efficiency across models.
arXiv Detail & Related papers (2023-05-03T21:51:42Z) - The Benchmark Lottery [114.43978017484893]
"A benchmark lottery" describes the overall fragility of the machine learning benchmarking process.
We show that the relative performance of algorithms may be altered significantly simply by choosing different benchmark tasks.
arXiv Detail & Related papers (2021-07-14T21:08:30Z) - HULK: An Energy Efficiency Benchmark Platform for Responsible Natural
Language Processing [76.38975568873765]
We introduce HULK, a multi-task energy efficiency benchmarking platform for responsible natural language processing.
We compare pretrained models' energy efficiency from the perspectives of time and cost.
arXiv Detail & Related papers (2020-02-14T01:04:19Z)