(2.5+1)D Spatio-Temporal Scene Graphs for Video Question Answering
- URL: http://arxiv.org/abs/2202.09277v1
- Date: Fri, 18 Feb 2022 15:58:54 GMT
- Title: (2.5+1)D Spatio-Temporal Scene Graphs for Video Question Answering
- Authors: Anoop Cherian and Chiori Hori and Tim K. Marks and Jonathan Le Roux
- Abstract summary: Videos are essentially sequences of 2D "views" of events happening in a 3D space.
We propose a (2.5+1)D scene graph representation to better capture the spatio-temporal information flows inside videos.
- Score: 54.436179346454516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatio-temporal scene-graph approaches to video-based reasoning tasks such as
video question-answering (QA) typically construct such graphs for every video
frame. Such approaches often ignore the fact that videos are essentially
sequences of 2D "views" of events happening in a 3D space, and that the
semantics of the 3D scene can thus be carried over from frame to frame.
Leveraging this insight, we propose a (2.5+1)D scene graph representation to
better capture the spatio-temporal information flows inside the videos.
Specifically, we first create a 2.5D (pseudo-3D) scene graph by transforming
every 2D frame to have an inferred 3D structure using an off-the-shelf 2D-to-3D
transformation module, following which we register the video frames into a
shared (2.5+1)D spatio-temporal space and ground each 2D scene graph within it.
Such a (2.5+1)D graph is then segregated into a static sub-graph and a dynamic
sub-graph, corresponding to whether the objects within them usually move in the
world. The nodes in the dynamic graph are enriched with motion features
capturing their interactions with other graph nodes. Next, for the video QA
task, we present a novel transformer-based reasoning pipeline that embeds the
(2.5+1)D graph into a spatio-temporal hierarchical latent space, where the
sub-graphs and their interactions are captured at varied granularity. To
demonstrate the effectiveness of our approach, we present experiments on the
NExT-QA and AVSD-QA datasets. Our results show that our proposed (2.5+1)D
representation leads to faster training and inference, while our hierarchical
model showcases superior performance on the video QA task versus the state of
the art.
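
The abstract outlines how the (2.5+1)D graph is built: 2D detections are lifted to pseudo-3D with an off-the-shelf 2D-to-3D module, registered into a shared space, and segregated into static and dynamic sub-graphs, with motion features attached to the dynamic nodes. The sketch below illustrates that pipeline under stated assumptions; the names (Node, lift_to_2p5d, DYNAMIC_CLASSES, build_graph), the depth-based lifting, and the displacement-based motion feature are hypothetical stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of the (2.5+1)D scene-graph construction; names,
# the depth-based lifting, and the motion feature are assumptions.
from dataclasses import dataclass
from typing import Optional
import numpy as np

# Assumed set of object categories that "usually move in the world",
# used to segregate nodes into static and dynamic sub-graphs.
DYNAMIC_CLASSES = {"person", "car", "dog", "ball"}

@dataclass
class Node:
    label: str                           # object category from the 2D scene graph
    frame_idx: int                       # frame index the detection came from
    xyz: np.ndarray                      # pseudo-3D position after 2D-to-3D lifting
    motion: Optional[np.ndarray] = None  # motion feature (dynamic nodes only)

def lift_to_2p5d(box_2d, depth_map):
    """Lift a 2D box centre to a pseudo-3D point using a monocular depth map."""
    x1, y1, x2, y2 = box_2d
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    z = float(depth_map[int(cy), int(cx)])  # depth from an off-the-shelf module
    return np.array([cx * z, cy * z, z])    # simple pinhole-style back-projection

def build_graph(frames):
    """frames: list of dicts with 'boxes', 'labels', 'depth' for each video frame."""
    static_nodes, dynamic_nodes = [], []
    for t, frame in enumerate(frames):
        for box, label in zip(frame["boxes"], frame["labels"]):
            node = Node(label, t, lift_to_2p5d(box, frame["depth"]))
            (dynamic_nodes if label in DYNAMIC_CLASSES else static_nodes).append(node)
    # Enrich dynamic nodes with a crude motion feature: displacement of the
    # same-labelled object across consecutive frames (a stand-in for the
    # richer interaction features described in the abstract).
    last_seen = {}
    for node in sorted(dynamic_nodes, key=lambda n: n.frame_idx):
        prev = last_seen.get(node.label)
        node.motion = node.xyz - prev.xyz if prev is not None else np.zeros(3)
        last_seen[node.label] = node
    return static_nodes, dynamic_nodes
```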
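
For the reasoning stage, the abstract describes a transformer pipeline that embeds the (2.5+1)D graph into a spatio-temporal hierarchical latent space where the sub-graphs and their interactions are captured at varied granularity. A minimal PyTorch sketch of one such hierarchy follows; the module name HierarchicalGraphQA, the layer counts, and the feature dimensions are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' code) of a hierarchical transformer that
# encodes the static and dynamic sub-graphs separately, then fuses them with
# the question at a coarser level before predicting an answer.
import torch
import torch.nn as nn

class HierarchicalGraphQA(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_answers=1000):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.static_enc = nn.TransformerEncoder(layer(), num_layers=2)   # static sub-graph
        self.dynamic_enc = nn.TransformerEncoder(layer(), num_layers=2)  # dynamic sub-graph
        self.joint_enc = nn.TransformerEncoder(layer(), num_layers=2)    # sub-graph/question fusion
        self.classifier = nn.Linear(d_model, n_answers)

    def forward(self, static_feats, dynamic_feats, question_feats):
        # static_feats: (B, Ns, d); dynamic_feats: (B, Nd, d); question_feats: (B, Nq, d)
        s = self.static_enc(static_feats)     # intra static-graph interactions
        d = self.dynamic_enc(dynamic_feats)   # intra dynamic-graph interactions
        joint = torch.cat([s, d, question_feats], dim=1)
        h = self.joint_enc(joint)             # coarser level: cross sub-graph + question
        return self.classifier(h.mean(dim=1)) # pooled answer logits
```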