Classifying All Interacting Pairs in a Single Shot
- URL: http://arxiv.org/abs/2001.04360v1
- Date: Mon, 13 Jan 2020 15:51:45 GMT
- Title: Classifying All Interacting Pairs in a Single Shot
- Authors: Sanaa Chafik and Astrid Orcesi and Romaric Audigier and Bertrand
Luvison
- Abstract summary: We introduce a novel human interaction detection approach, based on CALIPSO, a classifier of human-object interactions.
It estimates interactions simultaneously for all human-object pairs, regardless of their number and class.
It leads to a constant complexity and computation time independent of the number of subjects, objects or interactions in the image.
- Score: 29.0200561485714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a novel human interaction detection approach,
based on CALIPSO (Classifying ALl Interacting Pairs in a Single shOt), a
classifier of human-object interactions. This new single-shot interaction
classifier estimates interactions simultaneously for all human-object pairs,
regardless of their number and class. State-of-the-art approaches adopt a
multi-shot strategy based on a pairwise estimate of interactions for a set of
human-object candidate pairs, which leads to a complexity depending, at least,
on the number of interactions or, at most, on the number of candidate pairs. In
contrast, the proposed method estimates interactions over the whole image:
it simultaneously estimates all interactions between all human subjects
and object targets in a single forward pass over the image.
Consequently, it leads to a constant complexity and computation time
independent of the number of subjects, objects or interactions in the image. In
detail, interaction classification is achieved on a dense grid of anchors
thanks to a joint multi-task network that learns three complementary tasks
simultaneously: (i) prediction of the types of interaction, (ii) estimation of
the presence of a target and (iii) learning of an embedding that maps
interacting subjects and targets to the same representation using a metric
learning strategy. In addition, we introduce an object-centric passive-voice
verb estimation which significantly improves results. Evaluations on the two
well-known Human-Object Interaction image datasets, V-COCO and HICO-DET,
demonstrate the competitiveness of the proposed method (2nd place) compared to
the state-of-the-art while having constant computation time regardless of the
number of objects and interactions in the image.
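As a rough illustration of the dense-anchor, multi-task design described in the abstract, the sketch below runs three linear heads over a feature grid: interaction-type logits, a target-presence score, and a normalized pairing embedding. All shapes, weights, and the cosine-similarity pairing rule are hypothetical stand-ins for illustration, not the paper's actual architecture or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes (not taken from the paper).
H, W, C = 8, 8, 32          # dense anchor grid over a feature map
NUM_VERBS, EMB_DIM = 5, 16

features = rng.standard_normal((H, W, C))

# Three complementary heads, reduced to plain linear maps for illustration.
W_verb = rng.standard_normal((C, NUM_VERBS))   # (i) interaction-type logits
W_pres = rng.standard_normal((C, 1))           # (ii) target-presence score
W_emb  = rng.standard_normal((C, EMB_DIM))     # (iii) pairing embedding

verb_logits = features @ W_verb                           # (H, W, NUM_VERBS)
presence    = 1.0 / (1.0 + np.exp(-(features @ W_pres)))  # (H, W, 1), in (0, 1)
embedding   = features @ W_emb
embedding  /= np.linalg.norm(embedding, axis=-1, keepdims=True)

# Pairing: metric learning pulls the embeddings of interacting subject and
# target anchors together, so a subject anchor is matched to the target
# anchor whose embedding is most similar to its own.
subj, tgt = embedding[2, 3], embedding[5, 6]
similarity = float(subj @ tgt)                 # cosine similarity in [-1, 1]
```

Because every anchor is scored in the same forward pass, the cost of this kind of head is fixed by the grid size, which mirrors the constant-complexity claim above.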
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models on both objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- LEMON: Learning 3D Human-Object Interaction Relation from 2D Images [56.6123961391372]
Learning 3D human-object interaction relation is pivotal to embodied AI and interaction modeling.
Most existing methods approach the goal by learning to predict isolated interaction elements.
We present LEMON, a unified model that mines interaction intentions of the counterparts and employs curvatures to guide the extraction of geometric correlations.
arXiv Detail & Related papers (2023-12-14T14:10:57Z)
- Disentangled Interaction Representation for One-Stage Human-Object Interaction Detection [70.96299509159981]
Human-Object Interaction (HOI) detection is a core task for human-centric image understanding.
Recent one-stage methods adopt a transformer decoder to collect image-wide cues that are useful for interaction prediction.
Traditional two-stage methods benefit significantly from their ability to compose interaction features in a disentangled and explainable manner.
arXiv Detail & Related papers (2023-12-04T08:02:59Z)
- Detecting Human-to-Human-or-Object (H2O) Interactions with DIABOLO [29.0200561485714]
We propose a new interaction dataset, Human-to-Human-or-Object (H2O), that covers both types of human interactions.
In addition, we introduce a novel taxonomy of verbs, intended to be closer to a description of human body attitude in relation to the surrounding targets of interaction.
We propose DIABOLO, an efficient subject-centric single-shot method to detect all interactions in one forward pass.
arXiv Detail & Related papers (2022-01-07T11:00:11Z)
- HOTR: End-to-End Human-Object Interaction Detection with Transformers [26.664864824357164]
We present a novel framework, referred to as HOTR, which directly predicts a set of ⟨human, object, interaction⟩ triplets from an image.
Our proposed algorithm achieves the state-of-the-art performance in two HOI detection benchmarks with an inference time under 1 ms after object detection.
arXiv Detail & Related papers (2021-04-28T10:10:29Z)
- DRG: Dual Relation Graph for Human-Object Interaction Detection [65.50707710054141]
We tackle the challenging problem of human-object interaction (HOI) detection.
Existing methods either recognize the interaction of each human-object pair in isolation or perform joint inference based on complex appearance-based features.
In this paper, we leverage an abstract spatial-semantic representation to describe each human-object pair and aggregate the contextual information of the scene via a dual relation graph.
arXiv Detail & Related papers (2020-08-26T17:59:40Z)
- Learning Human-Object Interaction Detection using Interaction Points [140.0200950601552]
We propose a novel fully-convolutional approach that directly detects the interactions between human-object pairs.
Our network predicts interaction points, which directly localize and classify the interaction.
Experiments are performed on two popular benchmarks: V-COCO and HICO-DET.
arXiv Detail & Related papers (2020-03-31T08:42:06Z)
- Cascaded Human-Object Interaction Recognition [175.60439054047043]
We introduce a cascade architecture for a multi-stage, coarse-to-fine HOI understanding.
At each stage, an instance localization network progressively refines HOI proposals and feeds them into an interaction recognition network.
With our carefully-designed human-centric relation features, these two modules work collaboratively towards effective interaction understanding.
arXiv Detail & Related papers (2020-03-09T17:05:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.