EXPATS: A Toolkit for Explainable Automated Text Scoring
- URL: http://arxiv.org/abs/2104.03364v1
- Date: Wed, 7 Apr 2021 19:29:06 GMT
- Title: EXPATS: A Toolkit for Explainable Automated Text Scoring
- Authors: Hitoshi Manabe, Masato Hagiwara
- Abstract summary: We present EXPATS, an open-source framework to allow users to develop and experiment with different ATS models quickly.
The toolkit also provides seamless integration with the Language Interpretability Tool (LIT) so that one can interpret and visualize models and their predictions.
- Score: 2.299617836036273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated text scoring (ATS) tasks, such as automated essay scoring and
readability assessment, are important educational applications of natural
language processing. Due to their interpretability of models and predictions,
traditional machine learning (ML) algorithms based on handcrafted features are
still in wide use for ATS tasks. Practitioners often need to experiment with a
variety of models (including deep and traditional ML ones), features, and
training objectives (regression and classification), while modern deep
learning frameworks such as PyTorch require deep ML expertise to use effectively.
In this paper, we present EXPATS, an open-source framework to allow its users
to develop and experiment with different ATS models quickly by offering
flexible components, an easy-to-use configuration system, and a command-line
interface. The toolkit also provides seamless integration with the Language
Interpretability Tool (LIT) so that one can interpret and visualize models and
their predictions. We also describe two case studies where we build ATS models
quickly with minimal engineering efforts. The toolkit is available at
\url{https://github.com/octanove/expats}.
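The abstract's central claim is that handcrafted-feature models stay popular for ATS because every prediction can be traced back to individual feature contributions. The sketch below illustrates that idea in plain Python: a linear scorer over a few classic readability features, returning both a score and its per-feature breakdown. The features, weights, and function names are invented for illustration only and are not part of the EXPATS API.

```python
# Minimal sketch of an interpretable handcrafted-feature text scorer.
# Feature names and weights are hypothetical, not EXPATS internals.
import re

def extract_features(text: str) -> dict:
    """Compute a few classic readability features from raw text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

# Stand-in weights for a trained linear model (e.g. ridge regression).
WEIGHTS = {"avg_sentence_len": 0.15, "avg_word_len": 0.8, "type_token_ratio": 2.0}
BIAS = 1.0

def score_with_explanation(text: str):
    """Return (score, per-feature contributions); the contributions
    sum (plus BIAS) exactly to the score, which is what makes the
    prediction directly interpretable."""
    feats = extract_features(text)
    contributions = {k: WEIGHTS[k] * v for k, v in feats.items()}
    return BIAS + sum(contributions.values()), contributions

score, why = score_with_explanation("The cat sat. The cat ran far away today.")
```

Deep models lack this exact additive decomposition, which is why toolkits like EXPATS pair them with an external interpretability layer (LIT) instead.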
Related papers
- A Survey of Small Language Models [104.80308007044634]
Small Language Models (SLMs) have become increasingly important due to their efficiency and performance to perform various language tasks with minimal computational resources.
We present a comprehensive survey on SLMs, focusing on their architectures, training techniques, and model compression techniques.
arXiv Detail & Related papers (2024-10-25T23:52:28Z)
- CMULAB: An Open-Source Framework for Training and Deployment of Natural Language Processing Models [59.91221728187576]
This paper introduces the CMU Linguistic Annotation Backend, an open-source framework that simplifies model deployment and continuous human-in-the-loop fine-tuning of NLP models.
CMULAB enables users to leverage the power of multilingual models to quickly adapt and extend existing tools for speech recognition, OCR, translation, and syntactic analysis to new languages.
arXiv Detail & Related papers (2024-04-03T02:21:46Z)
- Towards Automatic Translation of Machine Learning Visual Insights to Analytical Assertions [23.535630175567146]
We present our vision for developing an automated tool capable of translating visual properties observed in Machine Learning (ML) visualisations into Python assertions.
The tool aims to streamline the process of manually verifying these visualisations in the ML development cycle, which is critical as real-world data and assumptions often change post-deployment.
arXiv Detail & Related papers (2024-01-15T14:11:59Z)
- Simultaneous Machine Translation with Large Language Models [51.470478122113356]
We investigate the possibility of applying Large Language Models to SimulMT tasks.
We conducted experiments using the Llama2-7b-chat model on nine different languages from the MuST-C dataset.
The results show that the LLM outperforms dedicated MT models in terms of BLEU and LAAL metrics.
arXiv Detail & Related papers (2023-09-13T04:06:47Z)
- SINC: Self-Supervised In-Context Learning for Vision-Language Tasks [64.44336003123102]
We propose a framework to enable in-context learning in large language models.
A meta-model can learn on self-supervised prompts consisting of tailored demonstrations.
Experiments show that SINC outperforms gradient-based methods in various vision-language tasks.
arXiv Detail & Related papers (2023-07-15T08:33:08Z)
- Pre-Training to Learn in Context [138.0745138788142]
The ability of in-context learning is not fully exploited because language models are not explicitly trained to learn in context.
We propose PICL (Pre-training for In-Context Learning), a framework to enhance the language models' in-context learning ability.
Our experiments show that PICL is more effective and task-generalizable than a range of baselines, outperforming larger language models with nearly 4x as many parameters.
arXiv Detail & Related papers (2023-05-16T03:38:06Z)
- Toolformer: Language Models Can Teach Themselves to Use Tools [62.04867424598204]
Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale.
We show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds.
We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction.
arXiv Detail & Related papers (2023-02-09T16:49:57Z)
- Tools and Practices for Responsible AI Engineering [0.5249805590164901]
We present two new software libraries that address critical needs for responsible AI engineering.
hydra-zen dramatically simplifies the process of making complex AI applications, and their behaviors, reproducible.
The rAI-toolbox is designed to enable methods for evaluating and enhancing the robustness of AI models.
arXiv Detail & Related papers (2022-01-14T19:47:46Z)
- AdapterHub Playground: Simple and Flexible Few-Shot Learning with Adapters [34.86139827292556]
Open-access dissemination of pretrained language models has led to a democratization of state-of-the-art natural language processing (NLP) research.
This also allows people outside of NLP to use such models and adapt them to specific use-cases.
In this work, we aim to overcome this gap by providing a tool that allows researchers to leverage pretrained models without writing a single line of code.
arXiv Detail & Related papers (2021-08-18T11:56:01Z)