Addressing Issues with Working Memory in Video Object Segmentation
- URL: http://arxiv.org/abs/2410.22451v1
- Date: Tue, 29 Oct 2024 18:34:41 GMT
- Title: Addressing Issues with Working Memory in Video Object Segmentation
- Authors: Clayton Bromley, Alexander Moore, Amar Saini, Douglas Poland, Carmen Carrano
- Abstract summary: Video object segmentation (VOS) models compare incoming unannotated images to a history of image-mask relations.
Current state-of-the-art models perform very well on clean video data.
Their reliance on a working memory of previous frames leaves room for error.
A simple algorithmic change is proposed that can be applied to any existing working memory-based VOS model.
- Score: 37.755852787082254
- Abstract: Contemporary state-of-the-art video object segmentation (VOS) models compare incoming unannotated images to a history of image-mask relations via affinity or cross-attention to predict object masks. We refer to the internal memory state of the initial image-mask pair and past image-masks as a working memory buffer. While current state-of-the-art models perform very well on clean video data, their reliance on a working memory of previous frames leaves room for error. Affinity-based algorithms include the inductive bias that there is temporal continuity between consecutive frames. To account for inconsistent camera views of the desired object, working memory models need an algorithmic modification that regulates memory updates and avoids writing irrelevant frames into working memory. A simple algorithmic change is proposed that can be applied to any existing working memory-based VOS model to improve performance on inconsistent views, such as sudden camera cuts, frame interjections, and extreme context changes. Models with this addition show significant improvement on video data containing such frame interjections over the same models without it. Our contribution is a simple decision function that determines whether working memory should be updated, based on the detection of sudden, extreme changes and the assumption that the object is no longer in frame. By implementing algorithmic changes such as this, we can increase the real-world applicability of current VOS models.
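The abstract describes the contribution only as a decision function that gates memory writes on detecting sudden, extreme changes; the concrete change metric and threshold are not specified. The sketch below assumes a mean pixel-difference score; the function name, the 0.3 threshold, and the `memory.write` call in the usage note are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def should_update_memory(prev_frame: np.ndarray,
                         curr_frame: np.ndarray,
                         threshold: float = 0.3) -> bool:
    """Gate a working-memory write on inter-frame change."""
    # Normalize to [0, 1] so the threshold is independent of bit depth.
    a = prev_frame.astype(np.float32) / 255.0
    b = curr_frame.astype(np.float32) / 255.0
    # Mean absolute pixel difference as a crude change score.
    change = float(np.mean(np.abs(a - b)))
    # A sudden, extreme change suggests a camera cut or interjected
    # frame, i.e. the object may no longer be in view: skip the write.
    return change < threshold

# Hypothetical usage inside a generic memory-based VOS loop:
# if should_update_memory(prev_frame, frame):
#     memory.write(frame, predicted_mask)
# prev_frame = frame
```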
Related papers
- AMES: Asymmetric and Memory-Efficient Similarity Estimation for Instance-level Retrieval [14.009257997448634]
This work investigates the problem of instance-level image retrieval re-ranking with the constraint of memory efficiency.
The proposed model uses a transformer-based architecture designed to estimate image-to-image similarity.
Results on standard benchmarks demonstrate the superiority of our approach over both hand-crafted and learned models.
arXiv Detail & Related papers (2024-08-06T16:29:51Z)
- Improving Image Recognition by Retrieving from Web-Scale Image-Text Data [68.63453336523318]
We introduce an attention-based memory module, which learns the importance of each retrieved example from the memory.
Compared to existing approaches, our method removes the influence of the irrelevant retrieved examples, and retains those that are beneficial to the input query.
We show that it achieves state-of-the-art accuracies on the ImageNet-LT, Places-LT, and WebVision datasets.
arXiv Detail & Related papers (2023-04-11T12:12:05Z)
- Per-Clip Video Object Segmentation [110.08925274049409]
Recently, memory-based approaches have shown promising results on semi-supervised video object segmentation.
We treat video object segmentation as clip-wise mask propagation.
We propose a new method tailored to per-clip inference.
arXiv Detail & Related papers (2022-08-03T09:02:29Z)
- Learning Quality-aware Dynamic Memory for Video Object Segmentation [32.06309833058726]
We propose a Quality-aware Dynamic Memory Network (QDMN) to evaluate the segmentation quality of each frame.
Our QDMN achieves new state-of-the-art performance on both DAVIS and YouTube-VOS benchmarks.
arXiv Detail & Related papers (2022-07-16T12:18:04Z)
- Recurrent Dynamic Embedding for Video Object Segmentation [54.52527157232795]
We propose a Recurrent Dynamic Embedding (RDE) to build a memory bank of constant size.
We propose an unbiased guidance loss during the training stage, which makes the Spatio-temporal Aggregation Module (SAM) more robust in long videos.
We also design a novel self-correction strategy so that the network can repair the embeddings of masks with different qualities in the memory bank.
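The summary does not say how new frames are folded into the constant-size memory bank; as a rough sketch of what a constant-size recurrent update looks like, here is an exponential-moving-average stand-in. The function name and `decay` parameter are assumptions, not RDE's learned update.

```python
import numpy as np

def update_memory_bank(memory: np.ndarray,
                       frame_embedding: np.ndarray,
                       decay: float = 0.9) -> np.ndarray:
    """Fold a new frame embedding into a constant-size memory bank.

    RDE learns this update; the exponential moving average below is
    only a stand-in showing the interface: the bank's shape stays
    fixed no matter how many frames arrive.
    """
    assert memory.shape == frame_embedding.shape
    return decay * memory + (1.0 - decay) * frame_embedding
```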
arXiv Detail & Related papers (2022-05-08T02:24:43Z)
- Solving Continuous Control with Episodic Memory [1.9493449206135294]
Episodic memory lets reinforcement learning algorithms remember and exploit promising experience from the past to improve agent performance.
Our study aims to answer the question: can episodic memory be used to improve an agent's performance in continuous control?
arXiv Detail & Related papers (2021-06-16T14:51:39Z)
- Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation [68.45737688496654]
We establish correspondences directly between frames without re-encoding the mask features for every object.
With the correspondences, every node in the current query frame is inferred by aggregating features from the past in an associative fashion.
We validated that every memory node now has a chance to contribute, and experimentally showed that such diversified voting is beneficial to both memory efficiency and inference accuracy.
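The readout described above can be sketched as softmax cross-attention from query-frame positions to memory entries, so that every memory node can contribute to the aggregation. A minimal NumPy sketch follows, assuming a plain dot-product affinity (the cited paper itself argues for an L2-based similarity); all names here are illustrative.

```python
import numpy as np

def affinity_readout(query_keys: np.ndarray,    # (Nq, C) query-frame keys
                     memory_keys: np.ndarray,   # (Nm, C) memory keys
                     memory_values: np.ndarray  # (Nm, D) memory values
                     ) -> np.ndarray:
    """Aggregate memory values for each query position by a softmax
    over key affinities, so every memory node can cast a vote."""
    affinity = query_keys @ memory_keys.T              # (Nq, Nm)
    affinity -= affinity.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(affinity)
    weights /= weights.sum(axis=1, keepdims=True)      # softmax over memory
    return weights @ memory_values                     # (Nq, D) read-out
```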
arXiv Detail & Related papers (2021-06-09T16:50:57Z)
- Video Object Segmentation with Episodic Graph Memory Networks [198.74780033475724]
A graph memory network is developed to address the novel idea of "learning to update the segmentation model".
We exploit an episodic memory network, organized as a fully connected graph, to store frames as nodes and capture cross-frame correlations by edges.
The proposed graph memory network yields a neat yet principled framework, which generalizes well to both one-shot and zero-shot video object segmentation tasks.
arXiv Detail & Related papers (2020-07-14T13:19:19Z)