AutoNLU: An On-demand Cloud-based Natural Language Understanding System
for Enterprises
- URL: http://arxiv.org/abs/2011.13470v1
- Date: Thu, 26 Nov 2020 20:51:57 GMT
- Authors: Nham Le, Tuan Lai, Trung Bui and Doo Soon Kim
- Abstract summary: We build a practical NLU model for handling various image-editing requests in Photoshop.
We build powerful keyphrase extraction models that achieve state-of-the-art results on two public benchmarks.
In both cases, end users only need to write a small amount of code to convert their datasets into a common format used by AutoNLU.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the renaissance of deep learning, neural networks have achieved
promising results on many natural language understanding (NLU) tasks. Even
though the source code of many neural network models is publicly available,
there is still a large gap between open-sourced models and solving real-world
problems in enterprises. To fill this gap, we introduce AutoNLU, an
on-demand cloud-based system with an easy-to-use interface that covers all
common use-cases and steps in developing an NLU model. AutoNLU has supported
many product teams within Adobe with different use-cases and datasets, quickly
delivering them working models. To demonstrate the effectiveness of AutoNLU, we
present two case studies. i) We build a practical NLU model for handling
various image-editing requests in Photoshop. ii) We build powerful keyphrase
extraction models that achieve state-of-the-art results on two public
benchmarks. In both cases, end users only need to write a small amount of code
to convert their datasets into a common format used by AutoNLU.
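The abstract notes that end users only write a small amount of code to convert their datasets into AutoNLU's common format. The actual schema is not described in this summary, so the sketch below is purely illustrative: it converts a simple CSV of (utterance, intent) rows into a hypothetical JSON structure whose field names (`text`, `intent`, `slots`, `examples`) are assumptions, not AutoNLU's real format.

```python
import csv
import io
import json

def convert_to_common_format(csv_text: str) -> str:
    """Convert CSV rows of (utterance, intent) into a JSON structure.

    NOTE: the field names used here are hypothetical placeholders;
    the real AutoNLU schema is not public in this summary.
    """
    examples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        examples.append({
            "text": row["utterance"],
            "intent": row["intent"],
            "slots": [],  # slot annotations would be added here if present
        })
    return json.dumps({"examples": examples}, indent=2)

# Example: two image-editing requests, as in the Photoshop case study
raw = (
    "utterance,intent\n"
    "crop the image,crop\n"
    "increase the brightness,adjust_brightness\n"
)
print(convert_to_common_format(raw))
```

The point of such a converter is that it is the only code a product team needs to write; the rest of the pipeline (training, evaluation, deployment) is handled by the system.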
Related papers
- NVLM: Open Frontier-Class Multimodal LLMs [64.00053046838225]
We introduce NVLM 1.0, a family of frontier-class multimodal large language models (LLMs) that achieve state-of-the-art results on vision-language tasks.
We propose a novel architecture that enhances both training efficiency and multimodal reasoning capabilities.
We develop production-grade multimodality for the NVLM-1.0 models, enabling them to excel in vision-language tasks.
arXiv Detail & Related papers (2024-09-17T17:59:06Z)
- Prompt2Model: Generating Deployable Models from Natural Language Instructions [74.19816829003729]
Large language models (LLMs) enable system builders to create competent NLP systems through prompting.
In other ways, LLMs are a step backward from traditional special-purpose NLP models.
We propose Prompt2Model, a general-purpose method that takes a natural language task description like the prompts provided to LLMs.
arXiv Detail & Related papers (2023-08-23T17:28:21Z)
- SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding [103.34092301324425]
Large language models (LLMs) have shown impressive ability for open-domain NLP tasks.
We present SeqGPT, a bilingual (i.e., English and Chinese) open-source autoregressive model specially enhanced for open-domain natural language understanding.
arXiv Detail & Related papers (2023-08-21T07:31:19Z)
- LLMatic: Neural Architecture Search via Large Language Models and Quality Diversity Optimization [4.951599300340954]
Large Language Models (LLMs) have emerged as powerful tools capable of accomplishing a broad spectrum of tasks.
We propose using the coding abilities of LLMs to introduce meaningful variations to code defining neural networks.
By merging the code-generating abilities of LLMs with the diversity and robustness of Quality-Diversity (QD) solutions, we introduce LLMatic, a Neural Architecture Search (NAS) algorithm.
arXiv Detail & Related papers (2023-06-01T19:33:21Z)
- OTOV2: Automatic, Generic, User-Friendly [39.828644638174225]
We propose the second generation of Only-Train-Once (OTOv2), which first automatically trains and compresses a general DNN only once from scratch.
OTOv2 is automatic and pluggable into various deep learning applications, and requires minimal engineering effort from users.
Numerically, we demonstrate the generality and autonomy of OTOv2 on a variety of model architectures such as VGG, ResNet, CARN, ConvNeXt, DenseNet and StackedUnets.
arXiv Detail & Related papers (2023-03-13T05:13:47Z)
- NLU++: A Multi-Label, Slot-Rich, Generalisable Dataset for Natural Language Understanding in Task-Oriented Dialogue [53.54788957697192]
NLU++ is a novel dataset for natural language understanding (NLU) in task-oriented dialogue (ToD) systems.
NLU++ is divided into two domains (BANKING and HOTELS) and brings several crucial improvements over current commonly used NLU datasets.
arXiv Detail & Related papers (2022-04-27T16:00:23Z)
- Towards More Robust Natural Language Understanding [0.0]
Natural Language Understanding (NLU) is a branch of Natural Language Processing (NLP).
Recent years have witnessed notable progress across various NLU tasks with deep learning techniques.
It is worth noting that the human ability to understand natural language is flexible and robust.
arXiv Detail & Related papers (2021-12-01T17:27:19Z)
- CLUES: Few-Shot Learning Evaluation in Natural Language Understanding [81.63968985419982]
We introduce CLUES, a benchmark for evaluating the few-shot learning capabilities of NLU models.
We demonstrate that while recent models reach human performance when they have access to large amounts of labeled data, there is a huge gap in performance in the few-shot setting for most tasks.
arXiv Detail & Related papers (2021-11-04T00:43:15Z)
- Auto-Split: A General Framework of Collaborative Edge-Cloud AI [49.750972428032355]
This paper describes the techniques and engineering practice behind Auto-Split, an edge-cloud collaborative prototype of Huawei Cloud.
To the best of our knowledge, there is no existing industry product that provides the capability of Deep Neural Network (DNN) splitting.
arXiv Detail & Related papers (2021-08-30T08:03:29Z)