Truth-Conditional Captioning of Time Series Data
- URL: http://arxiv.org/abs/2110.01839v1
- Date: Tue, 5 Oct 2021 06:28:37 GMT
- Title: Truth-Conditional Captioning of Time Series Data
- Authors: Harsh Jhamtani and Taylor Berg-Kirkpatrick
- Abstract summary: We explore the task of automatically generating natural language descriptions of salient patterns in a time series.
A model for this task should be able to extract high-level patterns such as the presence of a peak or a dip.
We propose a computational model with a truth-conditional architecture which first runs small learned programs on the input time series.
We find that the proposed model is able to generate high-precision captions even though we consider a small and simple space of module types.
- Score: 34.65925116012727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore the task of automatically generating natural
language descriptions of salient patterns in a time series, such as stock
prices of a company over a week. A model for this task should be able to
extract high-level patterns such as presence of a peak or a dip. While typical
contemporary neural models with attention mechanisms can generate fluent output
descriptions for this task, they often generate factually incorrect
descriptions. We propose a computational model with a truth-conditional
architecture which first runs small learned programs on the input time series,
then identifies the programs/patterns which hold true for the given input, and
finally conditions on only the chosen valid program (rather than the input time
series) to generate the output text description. A program in our model is
constructed from modules, which are small neural networks that are designed to
capture numerical patterns and temporal information. The modules are shared
across multiple programs, enabling compositionality as well as efficient
learning of module parameters. The modules, as well as the composition of the
modules, are unobserved in data, and we learn them in an end-to-end fashion
with the only training signal coming from the accompanying natural language
text descriptions. We find that the proposed model is able to generate
high-precision captions even though we consider a small and simple space of
module types.
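The abstract describes the pipeline only at a high level. Below is a minimal PyTorch sketch of that truth-conditional flow, offered purely for illustration: the class names (PatternModule, TruthConditionalCaptioner), the single-module programs, and the linear stand-in for a text decoder are assumptions, not the authors' implementation, which composes shared modules into programs and learns them end-to-end from the captions alone.

```python
# Illustrative sketch of a truth-conditional captioner (assumed names/shapes);
# not the authors' released code.
import torch
import torch.nn as nn

class PatternModule(nn.Module):
    """Small network that scores how strongly a numerical pattern
    (e.g. a peak or a dip) holds on the input series."""
    def __init__(self, series_len: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(series_len, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, series):                 # series: (batch, T)
        return self.net(series).squeeze(-1)    # unnormalized truth score, (batch,)

class TruthConditionalCaptioner(nn.Module):
    """Runs every candidate program on the series, infers which one holds true,
    and conditions the text decoder only on that program's identity,
    not on the raw series."""
    def __init__(self, programs, decoder, program_dim: int = 64):
        super().__init__()
        # Each "program" here is a single module for brevity; the paper
        # composes shared modules into programs.
        self.programs = nn.ModuleList(programs)
        self.program_emb = nn.Embedding(len(programs), program_dim)
        self.decoder = decoder                 # any module mapping program_dim -> text logits

    def forward(self, series):                 # series: (batch, T)
        scores = torch.stack([p(series) for p in self.programs], dim=-1)
        probs = torch.softmax(scores, dim=-1)  # relaxed "which program is true"
        chosen = probs.argmax(dim=-1)          # hard choice at inference time;
                                               # training would marginalize over programs
        return self.decoder(self.program_emb(chosen)), probs

# Usage with toy shapes and a linear stand-in for a real caption decoder.
T, vocab = 12, 100
model = TruthConditionalCaptioner(
    programs=[PatternModule(T) for _ in range(6)],   # e.g. peak / dip / rise / ...
    decoder=nn.Linear(64, vocab),
)
logits, program_probs = model(torch.randn(4, T))     # (4, vocab), (4, 6)
```

Because the decoder sees only the chosen program and never the raw series, it cannot describe a pattern that the selected program did not verify, which is the intuition behind the high-precision captions reported in the abstract.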
Related papers
- Learning to Plan for Language Modeling from Unlabeled Data [23.042650737356496]
We train a module for planning the future writing process via a self-supervised learning objective.
Given the textual context, this planning module learns to predict future abstract writing actions, which correspond to centroids in a clustered text embedding space.
arXiv Detail & Related papers (2024-03-31T09:04:01Z) - Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z) - Learning Label Modular Prompts for Text Classification in the Wild [56.66187728534808]
We propose text classification in-the-wild, which introduces different non-stationary training/testing stages.
Decomposing a complex task into modular components can enable robust generalisation in such non-stationary environments.
We propose MODULARPROMPT, a label-modular prompt tuning framework for text classification tasks.
arXiv Detail & Related papers (2022-11-30T16:26:38Z) - Language Model Cascades [72.18809575261498]
Repeated interactions at test-time with a single model, or the composition of multiple models together, further expands capabilities.
Cases with control flow and dynamic structure require techniques from probabilistic programming.
We formalize several existing techniques from this perspective, including scratchpads / chain of thought, verifiers, STaR, selection-inference, and tool use.
arXiv Detail & Related papers (2022-07-21T07:35:18Z) - Leveraging Locality in Abstractive Text Summarization [44.67905693077539]
We investigate whether models with a restricted context can achieve competitive performance compared with memory-efficient attention models.
Our model is applied to individual pages, which contain parts of inputs grouped by the principle of locality.
arXiv Detail & Related papers (2022-05-25T03:59:24Z) - Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods
in Natural Language Processing [78.8500633981247]
This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning."
Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly (see the sketch after this list).
arXiv Detail & Related papers (2021-07-28T18:09:46Z) - Exploring Software Naturalness through Neural Language Models [56.1315223210742]
The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing.
We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
arXiv Detail & Related papers (2020-06-22T21:56:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.