HIPPO-Video: Simulating Watch Histories with Large Language Models for Personalized Video Highlighting
- URL: http://arxiv.org/abs/2507.16873v1
- Date: Tue, 22 Jul 2025 08:24:33 GMT
- Title: HIPPO-Video: Simulating Watch Histories with Large Language Models for Personalized Video Highlighting
- Authors: Jeongeun Lee, Youngjae Yu, Dongha Lee
- Abstract summary: We introduce HIPPO-Video, a novel dataset for personalized video highlighting. The dataset includes 2,040 (watch history, saliency score) pairs, covering 20,400 videos across 170 semantic categories. To validate our dataset, we propose HiPHer, a method that leverages these personalized watch histories to predict preference-conditioned segment-wise saliency scores.
- Score: 27.92094212778288
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The exponential growth of video content has made personalized video highlighting an essential task, as user preferences are highly variable and complex. Existing video datasets, however, often lack personalization, relying on isolated videos or simple text queries that fail to capture the intricacies of user behavior. In this work, we introduce HIPPO-Video, a novel dataset for personalized video highlighting, created using an LLM-based user simulator to generate realistic watch histories reflecting diverse user preferences. The dataset includes 2,040 (watch history, saliency score) pairs, covering 20,400 videos across 170 semantic categories. To validate our dataset, we propose HiPHer, a method that leverages these personalized watch histories to predict preference-conditioned segment-wise saliency scores. Through extensive experiments, we demonstrate that our method outperforms existing generic and query-based approaches, showcasing its potential for highly user-centric video highlighting in real-world scenarios.
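The abstract names the core data structure, a (watch history, saliency score) pair, and the core interface, a scorer that conditions segment-wise saliency on a user's watch history. The sketch below illustrates that interface in PyTorch; the class names, feature dimensions, and mean-pooled preference encoder are assumptions made for illustration, not the released HiPHer implementation.

```python
# A minimal, hypothetical sketch of the data shape and scoring interface the
# abstract describes. All names and dimensions are illustrative assumptions.
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class WatchHistoryExample:
    """One (watch history, saliency score) pair, as sketched in the abstract."""
    history_features: torch.Tensor   # (num_watched_videos, feat_dim) per-video features
    target_segments: torch.Tensor    # (num_segments, feat_dim) target-video segment features
    saliency_scores: torch.Tensor    # (num_segments,) ground-truth saliency in [0, 1]


class PreferenceConditionedScorer(nn.Module):
    """Scores target-video segments conditioned on a user-preference vector."""

    def __init__(self, feat_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        # Assumed preference encoder: mean-pool the watch history, then project.
        self.pref_proj = nn.Linear(feat_dim, hidden_dim)
        self.seg_proj = nn.Linear(feat_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, history: torch.Tensor, segments: torch.Tensor) -> torch.Tensor:
        pref = self.pref_proj(history.mean(dim=0))           # (hidden_dim,)
        seg = self.seg_proj(segments)                        # (num_segments, hidden_dim)
        fused = torch.tanh(seg + pref)                       # broadcast preference over segments
        return torch.sigmoid(self.head(fused)).squeeze(-1)  # (num_segments,) saliency scores


# Usage with random stand-in features:
example = WatchHistoryExample(
    history_features=torch.randn(10, 512),  # 10 previously watched videos
    target_segments=torch.randn(24, 512),   # 24 segments of the target video
    saliency_scores=torch.rand(24),
)
scorer = PreferenceConditionedScorer()
pred = scorer(example.history_features, example.target_segments)
loss = nn.functional.mse_loss(pred, example.saliency_scores)
```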
Related papers
- Short Video Segment-level User Dynamic Interests Modeling in Personalized Recommendation [23.082810471266235]
The growth of short-form video has necessitated effective recommender systems that match users with content tailored to their evolving preferences. Current video recommendation models primarily treat each video as a whole, overlooking the dynamic nature of user preferences toward specific video segments. We propose an innovative model that integrates a hybrid representation module, a multi-modal user-video encoder, and a segment interest decoder.
arXiv Detail & Related papers (2025-04-05T17:45:32Z)
- Multi-subject Open-set Personalization in Video Generation [110.02124633005516]
We present Video Alchemist, a video model with built-in multi-subject, open-set personalization capabilities. Our model is built on a new Diffusion Transformer module that fuses each conditional reference image and its corresponding subject-level text prompt. Our method significantly outperforms existing personalization methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2025-01-10T18:59:54Z)
- Personalized Video Summarization by Multimodal Video Understanding [2.1372652192505703]
We present a pipeline called Video Summarization with Language (VSL) for user-preferred video summarization.
VSL is built on pre-trained visual language models (VLMs), avoiding the need to train a video summarization system on a large dataset.
We show that our method is more adaptable across different datasets compared to supervised query-based video summarization models.
arXiv Detail & Related papers (2024-11-05T22:14:35Z)
- LLMs + Persona-Plug = Personalized LLMs [41.60364110693824]
Personalization plays a critical role in numerous language tasks and applications, since users with the same requirements may prefer diverse outputs based on their individual interests.
This has led to the development of various personalized approaches aimed at adapting large language models (LLMs) to generate customized outputs aligned with user preferences.
We propose Persona-Plug, a novel personalized LLM model. It constructs a user-specific embedding for each individual by modeling all of their historical contexts through a lightweight plug-in user embedder module.
arXiv Detail & Related papers (2024-09-18T11:54:45Z)
- CinePile: A Long Video Question Answering Dataset and Benchmark [55.30860239555001]
We present a novel dataset and benchmark, CinePile, specifically designed for authentic long-form video understanding.
Our comprehensive dataset comprises 305,000 multiple-choice questions (MCQs), covering various visual and multimodal aspects.
We fine-tuned open-source Video-LLMs on the training split and evaluated both open-source and proprietary video-centric LLMs on the test split of our dataset.
arXiv Detail & Related papers (2024-05-14T17:59:02Z)
- Scaling Up Video Summarization Pretraining with Large Language Models [73.74662411006426]
We introduce an automated and scalable pipeline for generating a large-scale video summarization dataset.
We analyze the limitations of existing approaches and propose a new video summarization model that effectively addresses them.
Our work also presents a new benchmark dataset containing 1,200 long videos, each with high-quality summaries annotated by professionals.
arXiv Detail & Related papers (2024-04-04T11:59:06Z)
- EvalCrafter: Benchmarking and Evaluating Large Video Generation Models [70.19437817951673]
We argue that it is hard to judge large conditional generative models using simple metrics, since these models are often trained on very large datasets and have multi-aspect abilities.
Our approach involves generating a diverse and comprehensive list of 700 prompts for text-to-video generation.
Then, we evaluate the state-of-the-art video generative models on our carefully designed benchmark, in terms of visual qualities, content qualities, motion qualities, and text-video alignment with 17 well-selected objective metrics.
arXiv Detail & Related papers (2023-10-17T17:50:46Z)
- Show Me What I Like: Detecting User-Specific Video Highlights Using Content-Based Multi-Head Attention [52.84233165201391]
We propose a method to detect individualized highlights for users on given target videos based on their preferred highlight clips marked on previous videos they have watched.
Our method explicitly leverages the contents of both the preferred clips and the target videos using pre-trained features for the objects and the human activities.
arXiv Detail & Related papers (2022-07-18T02:32:48Z)
- CLIP-It! Language-Guided Video Summarization [96.69415453447166]
This work introduces CLIP-It, a single framework for addressing both generic and query-focused video summarization.
We propose a language-guided multimodal transformer that learns to score frames in a video based on their importance relative to one another.
Our model can be extended to the unsupervised setting by training without ground-truth supervision.
arXiv Detail & Related papers (2021-07-01T17:59:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.