AttentionFlow: Visualising Influence in Networks of Time Series
- URL: http://arxiv.org/abs/2102.01974v1
- Date: Wed, 3 Feb 2021 09:44:46 GMT
- Title: AttentionFlow: Visualising Influence in Networks of Time Series
- Authors: Minjeong Shin, Alasdair Tran, Siqi Wu, Alexander Mathews, Rong Wang,
Georgiana Lyall, Lexing Xie
- Abstract summary: We present AttentionFlow, a new system to visualise networks of time series and the dynamic influence they have on one another.
We show that attention spikes in songs can be explained by external events such as major awards, or changes in the network such as the release of a new song.
AttentionFlow can be generalised to visualise networks of time series on physical infrastructures such as road networks, or natural phenomena such as weather and geological measurements.
- Score: 80.61555138658578
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The collective attention on online items such as web pages, search terms, and
videos reflects trends that are of social, cultural, and economic interest.
Moreover, attention trends of different items exhibit mutual influence via
mechanisms such as hyperlinks or recommendations. Many visualisation tools
exist for time series, network evolution, or network influence; however, few
systems connect all three. In this work, we present AttentionFlow, a new system
to visualise networks of time series and the dynamic influence they have on one
another. Centred around an ego node, our system simultaneously presents the
time series on each node using two visual encodings: a tree ring for an
overview and a line chart for details. AttentionFlow supports interactions such
as overlaying time series of influence and filtering neighbours by time or
flux. We demonstrate AttentionFlow using two real-world datasets, VevoMusic and
WikiTraffic. We show that attention spikes in songs can be explained by
external events such as major awards, or changes in the network such as the
release of a new song. Separate case studies also demonstrate how an artist's
influence changes over their career, and that correlated Wikipedia traffic is
driven by cultural interests. More broadly, AttentionFlow can be generalised to
visualise networks of time series on physical infrastructures such as road
networks, or natural phenomena such as weather and geological measurements.
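The abstract's tree-ring encoding can be illustrated with a minimal sketch: bin a node's daily attention series into concentric rings, one per period, with ring thickness proportional to that period's share of total attention. This is an illustrative assumption about the glyph geometry, not the authors' implementation; the function and variable names are hypothetical.

```python
def tree_ring_radii(series, days_per_ring=365, min_radius=1.0):
    """Return cumulative outer radii for each ring of an overview glyph.

    Each ring covers one period of `days_per_ring` samples; its thickness
    is that period's fraction of the series' total attention.
    """
    totals = [
        sum(series[i:i + days_per_ring])
        for i in range(0, len(series), days_per_ring)
    ]
    grand_total = sum(totals) or 1.0  # guard against an all-zero series
    radii, r = [], min_radius
    for t in totals:
        r += t / grand_total  # thickness proportional to attention share
        radii.append(r)
    return radii

# Example: three years of flat, spiking, then declining attention.
views = [1.0] * 365 + [3.0] * 365 + [0.5] * 365
print(tree_ring_radii(views))  # three increasing radii, ending at 2.0
```

A renderer would then draw annuli between consecutive radii; the spike year produces the visibly thickest ring, which is the at-a-glance overview the tree ring provides alongside the detailed line chart.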
Related papers
- Convolution-enhanced Evolving Attention Networks [41.684265133316096]
The Evolving Attention-enhanced Dilated Convolutional (EA-DC-) Transformer significantly outperforms state-of-the-art models.
This is the first work that explicitly models the layer-wise evolution of attention maps.
arXiv Detail & Related papers (2022-12-16T08:14:04Z)
- Between News and History: Identifying Networked Topics of Collective Attention on Wikipedia [0.0]
We develop a temporal community detection approach towards topic detection.
We apply this method to a dataset of one year of current events on Wikipedia.
We are able to resolve the topics that more strongly reflect unfolding current events vs more established knowledge.
arXiv Detail & Related papers (2022-11-14T18:36:21Z)
- Evidential Temporal-aware Graph-based Social Event Detection via Dempster-Shafer Theory [76.4580340399321]
We propose ETGNN, a novel Evidential Temporal-aware Graph Neural Network.
We construct view-specific graphs whose nodes are the texts and whose edges are determined by several types of shared elements.
Considering the view-specific uncertainty, the representations of all views are converted into mass functions through evidential deep learning (EDL) neural networks.
arXiv Detail & Related papers (2022-05-24T16:22:40Z)
- WalkingTime: Dynamic Graph Embedding Using Temporal-Topological Flows [3.8073142980733]
We propose a novel embedding algorithm, WalkingTime, based on a fundamentally different handling of time.
We hold flows comprised of temporally and topologically local interactions as our primitives, without any discretization or alignment of time-related attributes being necessary.
arXiv Detail & Related papers (2021-11-22T00:04:02Z)
- Efficient Modelling Across Time of Human Actions and Interactions [92.39082696657874]
We argue that current fixed-sized temporal kernels in 3D convolutional neural networks (CNNs) can be improved to better deal with temporal variations in the input.
We study how we can better handle the similarities between classes of actions, by enhancing their feature differences over different layers of the architecture.
The proposed approaches are evaluated on several benchmark action recognition datasets and show competitive results.
arXiv Detail & Related papers (2021-10-05T15:39:11Z)
- Radflow: A Recurrent, Aggregated, and Decomposable Model for Networks of Time Series [77.47313102926017]
Radflow is a novel model for networks of time series that influence each other.
It embodies three key ideas: a recurrent neural network to obtain node embeddings that depend on time, the aggregation of the flow of influence from neighboring nodes with multi-head attention, and the multi-layer decomposition of time series.
We show that Radflow can learn different trends and seasonal patterns, that it is robust to missing nodes and edges, and that correlated temporal patterns among network neighbors reflect influence strength.
arXiv Detail & Related papers (2021-02-15T00:57:28Z)
- Coarse Temporal Attention Network (CTA-Net) for Driver's Activity Recognition [14.07119502083967]
Recognising a driver's activities is difficult since they are executed by the same subject with similar body-part movements, resulting in only subtle changes.
Our model is named Coarse Temporal Attention Network (CTA-Net), in which coarse temporal branches are introduced in a trainable glimpse.
The model then uses an innovative attention mechanism to generate high-level action specific contextual information for activity recognition.
arXiv Detail & Related papers (2021-01-17T10:15:37Z)
- AssembleNet++: Assembling Modality Representations via Attention Connections [83.50084190050093]
We create a family of powerful video models which are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network.
A new network component named peer-attention is introduced, which dynamically learns the attention weights using another block or input modality.
arXiv Detail & Related papers (2020-08-18T17:54:08Z)
- See More, Know More: Unsupervised Video Object Segmentation with Co-Attention Siamese Networks [184.4379622593225]
We introduce a novel network, called CO-attention Siamese Network (COSNet), to address the unsupervised video object segmentation task.
We emphasize the importance of inherent correlation among video frames and incorporate a global co-attention mechanism.
We propose a unified and end-to-end trainable framework where different co-attention variants can be derived for mining the rich context within videos.
arXiv Detail & Related papers (2020-01-19T11:10:39Z)
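The Radflow entry above aggregates the flow of influence from neighbouring nodes with multi-head attention. A minimal NumPy sketch of that idea, with illustrative shapes and names rather than the paper's exact architecture: the ego node's embedding is split into heads, each head attends over the corresponding slices of the neighbour embeddings, and the weighted mixtures are concatenated back into an influence vector.

```python
import numpy as np

def multi_head_neighbour_aggregation(ego, neighbours, n_heads=4):
    """ego: (d,) embedding; neighbours: (n, d); returns a (d,) influence vector."""
    d = ego.shape[0]
    assert d % n_heads == 0, "embedding size must divide evenly into heads"
    dh = d // n_heads
    out = np.zeros(d)
    for h in range(n_heads):
        q = ego[h * dh:(h + 1) * dh]            # query slice for this head
        k = neighbours[:, h * dh:(h + 1) * dh]  # key/value slices per neighbour
        scores = k @ q / np.sqrt(dh)            # scaled dot-product scores
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                # softmax over neighbours
        out[h * dh:(h + 1) * dh] = weights @ k  # weighted neighbour mixture
    return out

rng = np.random.default_rng(0)
ego = rng.normal(size=8)
neigh = rng.normal(size=(5, 8))
print(multi_head_neighbour_aggregation(ego, neigh).shape)  # (8,)
```

In a full model this vector would be combined with the node's own recurrent embedding before forecasting; learned projection matrices per head are omitted here for brevity.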
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.