Exploring Descriptions of Movement Through Geovisual Analytics
- URL: http://arxiv.org/abs/2204.09588v1
- Date: Tue, 1 Mar 2022 18:23:02 GMT
- Title: Exploring Descriptions of Movement Through Geovisual Analytics
- Authors: Scott Pezanowski, Prasenjit Mitra, Alan M. MacEachren
- Abstract summary: We present GeoMovement, a system that is based on combining machine learning and rule-based extraction of movement-related information with state-of-the-art visualization techniques.
Along with the depiction of movement, our tool can extract and present a lack of movement.
- Score: 2.813813570843999
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sensemaking using automatically extracted information from text is a
challenging problem. In this paper, we address a specific type of information
extraction, namely extracting information related to descriptions of movement.
Aggregating and understanding information related to descriptions of movement
and lack of movement specified in text can lead to an improved understanding
and sensemaking of movement phenomena of various types, e.g., migration of
people and animals, impediments to travel due to COVID-19, etc. We present
GeoMovement, a system that is based on combining machine learning and
rule-based extraction of movement-related information with state-of-the-art
visualization techniques. Along with the depiction of movement, our tool can
extract and present a lack of movement. Very little prior work exists on
automatically extracting descriptions of movement, especially the negation of
movement. Apart from addressing these gaps, GeoMovement also provides a novel
integrated framework for combining these extraction modules with visualization.
We include two systematic case studies of GeoMovement that show how humans can
derive meaningful geographic movement information. GeoMovement can complement
precise movement data, e.g., obtained using sensors, or be used by itself when
precise data is unavailable.
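The abstract's core idea, combining rule-based extraction of movement language (including negated movement) with learned components, can be illustrated with a minimal sketch. The verb list, negation cues, and function below are illustrative assumptions, not GeoMovement's actual implementation.

```python
import re

# Illustrative movement verbs and negation cues (assumed, not from the paper).
MOVEMENT_VERBS = {"migrate", "migrated", "travel", "traveled", "move", "moved", "flee", "fled"}
NEGATION_CUES = {"not", "no", "never", "cannot", "unable"}

def extract_movement(sentence: str) -> dict:
    """Classify a sentence as describing movement, lack of movement, or neither."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    has_movement = any(t in MOVEMENT_VERBS for t in tokens)
    negated = has_movement and any(t in NEGATION_CUES for t in tokens)
    if not has_movement:
        label = "no_movement_description"
    elif negated:
        label = "lack_of_movement"  # e.g., travel impeded by COVID-19
    else:
        label = "movement"
    return {"sentence": sentence, "label": label}

print(extract_movement("Refugees migrated across the border."))
print(extract_movement("Residents were unable to travel due to COVID-19."))
```

A production system would replace the keyword lists with trained models and geoparse the place names, but the two output categories (movement vs. lack of movement) mirror what the abstract describes.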
Related papers
- Diving Deep into the Motion Representation of Video-Text Models
GPT-4-generated descriptions capture fine-grained motion details of activities.
We evaluate several video-text models on the task of retrieval of motion descriptions.
arXiv Detail & Related papers (2024-06-07T16:46:10Z)
- Generating Human Interaction Motions in Scenes with Text Control
We present TeSMo, a method for text-controlled scene-aware motion generation based on denoising diffusion models.
Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model.
To facilitate training, we embed annotated navigation and interaction motions within scenes.
arXiv Detail & Related papers (2024-04-16T16:04:38Z)
- Motion Generation from Fine-grained Textual Descriptions
We build a large-scale language-motion dataset specializing in fine-grained textual descriptions, FineHumanML3D.
We design a new text2motion model, FineMotionDiffuse, making full use of fine-grained textual information.
Our evaluation shows that FineMotionDiffuse trained on FineHumanML3D improves FID by a large margin of 0.38, compared with competitive baselines.
arXiv Detail & Related papers (2024-03-20T11:38:30Z)
- LivePhoto: Real Image Animation with Text-guided Motion Control
This work presents a practical system, named LivePhoto, which allows users to animate an image of their interest with text descriptions.
We first establish a strong baseline that helps a well-learned text-to-image generator (i.e., Stable Diffusion) take an image as a further input.
We then equip the improved generator with a motion module for temporal modeling and propose a carefully designed training pipeline to better link texts and motions.
arXiv Detail & Related papers (2023-12-05T17:59:52Z)
- SemanticBoost: Elevating Motion Generation with Augmented Textual Cues
Our framework comprises a Semantic Enhancement module and a Context-Attuned Motion Denoiser (CAMD)
The CAMD approach provides an all-encompassing solution for generating high-quality, semantically consistent motion sequences.
Our experimental results demonstrate that SemanticBoost, as a diffusion-based method, outperforms auto-regressive-based techniques.
arXiv Detail & Related papers (2023-10-31T09:58:11Z)
- Semi-Weakly Supervised Object Kinematic Motion Prediction
Given a 3D object, kinematic motion prediction aims to identify the mobile parts as well as the corresponding motion parameters.
We propose a graph neural network to learn the map between hierarchical part-level segmentation and mobile parts parameters.
The network predictions yield a large-scale set of 3D objects with pseudo-labeled mobility information.
arXiv Detail & Related papers (2023-03-31T02:37:36Z)
- The Right Spin: Learning Object Motion from Rotation-Compensated Flow Fields
How humans perceive moving objects is a longstanding research question in computer vision.
One approach to the problem is to teach a deep network to model all of these effects.
We present a novel probabilistic model to estimate the camera's rotation given the motion field.
arXiv Detail & Related papers (2022-02-28T22:05:09Z)
- Recognition of Implicit Geographic Movement in Text
Analyzing the geographic movement of humans, animals, and other phenomena is a growing field of research.
We created a corpus of sentences labeled as describing geographic movement or not.
We developed an iterative process employing hand labeling, crowd voting for confirmation, and machine learning to predict more labels.
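The label-expansion loop described here (seed hand labels, confirmation, model predictions feeding the next round) resembles self-training. The toy sketch below uses a trivial word-overlap "model" and invented sentences purely for illustration; it is not the paper's pipeline, which used crowd voting and real classifiers.

```python
# Toy self-training loop: seed labels -> train -> add confident predictions -> repeat.

def train(labeled):
    """'Train' by collecting words unique to positive vs. negative sentences."""
    pos, neg = set(), set()
    for sentence, label in labeled:
        (pos if label else neg).update(sentence.lower().split())
    return pos - neg, neg - pos

def predict(model, sentence):
    """Return (label, confident) based on which word set overlaps more."""
    pos, neg = model
    words = set(sentence.lower().split())
    p, n = len(words & pos), len(words & neg)
    if p == n:  # tie: abstain rather than guess
        return None, False
    return p > n, True

seed = [("herds moved north", True), ("the city stayed quiet", False)]
unlabeled = ["wolves moved south", "markets stayed open", "birds moved away"]

labeled = list(seed)
for _ in range(3):  # a few rounds of label expansion
    model = train(labeled)
    done = {s for s, _ in labeled}
    newly = [(s, lab) for s in unlabeled if s not in done
             for lab, ok in [predict(model, s)] if ok]
    if not newly:
        break
    labeled.extend(newly)

print(len(labeled), "labeled sentences")
```

In the paper's setting, the "confident" step was crowd voting rather than a heuristic threshold; the loop structure is the point of the sketch.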
arXiv Detail & Related papers (2022-01-30T12:22:55Z)
- Differentiating Geographic Movement Described in Text Documents
We show that interpreting geographic movement described in text is challenging because of general spatial terms and linguistic constructions that leave the thing(s) moving unclear.
We identify multiple important characteristics of movement descriptions that humans use to differentiate one movement description from another.
Our findings contribute towards an improved understanding of the important characteristics of the underused information about geographic movement that is in the form of text descriptions.
arXiv Detail & Related papers (2022-01-12T11:49:13Z)
- DS-Net: Dynamic Spatiotemporal Network for Video Salient Object Detection
We propose a novel dynamic spatiotemporal network (DS-Net) for more effective fusion of spatial and temporal information.
We show that the proposed method outperforms state-of-the-art algorithms.
arXiv Detail & Related papers (2020-12-09T06:42:30Z)
- Affective Movement Generation using Laban Effort and Shape and Hidden Markov Models
This paper presents an approach for automatic affective movement generation that makes use of two movement abstractions: 1) Laban movement analysis (LMA), and 2) hidden Markov modeling.
The LMA provides a systematic tool for an abstract representation of the kinematic and expressive characteristics of movements.
An HMM abstraction of the identified movements is obtained and used with the desired motion path to generate a novel movement that conveys the target emotion.
The efficacy of the proposed approach in generating movements with recognizable target emotions is assessed using a validated automatic recognition model and a user study.
arXiv Detail & Related papers (2020-06-10T21:24:26Z)
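The HMM abstraction in the entry above can be caricatured as sampling movement qualities from a small hidden Markov chain. The states, emission vocabulary (loosely echoing Laban Effort terms), and probabilities below are invented for illustration and bear no relation to the paper's trained models.

```python
import random

random.seed(0)

# Hypothetical two-state HMM over movement qualities (illustrative numbers only).
TRANS = {"sustained": {"sustained": 0.8, "sudden": 0.2},
         "sudden":    {"sustained": 0.4, "sudden": 0.6}}
EMIT = {"sustained": {"glide": 0.7, "float": 0.3},
        "sudden":    {"punch": 0.6, "dab": 0.4}}

def weighted_choice(dist):
    """Sample a key from a {item: probability} dict."""
    r, acc = random.random(), 0.0
    for item, p in dist.items():
        acc += p
        if r < acc:
            return item
    return item  # fallback for floating-point round-off

def generate(length, start="sustained"):
    """Sample a sequence of movement qualities from the toy HMM."""
    state, seq = start, []
    for _ in range(length):
        seq.append(weighted_choice(EMIT[state]))
        state = weighted_choice(TRANS[state])
    return seq

print(generate(5))
```

The paper conditions generation on a desired motion path and target emotion; this sketch only shows the Markov-chain sampling at the core of such a generator.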
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.