What does it mean for two videos to be similar? Videos may appear similar when judged by the actions they depict, yet entirely different if evaluated based on the locations where they were filmed. While humans naturally compare videos by taking different aspects into account, this ability has not been thoroughly studied and presents a challenge for models that often depend on broad global similarity scores. Large Multimodal Models (LMMs) with video understanding capabilities open new opportunities for leveraging natural language in comparative video tasks. We introduce Concept-based Video Similarity estimation (ConViS), a novel task that compares pairs of videos by computing interpretable similarity scores across a predefined set of key semantic concepts. ConViS allows for human-like reasoning about video similarity and enables new applications such as concept-conditioned video retrieval. To support this task, we also introduce ConViS-Bench, a new benchmark comprising carefully annotated video pairs spanning multiple domains. Each pair comes with concept-level similarity scores and textual descriptions of both differences and similarities. Additionally, we benchmark several state-of-the-art models on ConViS, providing insights into their alignment with human judgments. Our results reveal significant performance differences on ConViS, indicating that some concepts present greater challenges for estimating video similarity. We believe that ConViS-Bench will serve as a valuable resource for advancing research in language-driven video understanding.
NeurIPS 2025 / Datasets & Benchmarks Track
We introduce Concept-based Video Similarity estimation (ConViS), a task that quantifies video similarity along specific semantic concepts (e.g., location), and ConViS-Bench, a dataset of video pairs annotated with concept-level similarity scores (1-to-5) and free-form descriptions of similarities and differences. This bridges the gap between prior work that estimates only a single global similarity score and prior work that describes differences purely in natural language.
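To make the task concrete, here is a minimal sketch of how a video LMM could be queried for one concept-conditioned score. The prompt wording, the placeholder concept list, and the helper functions are illustrative assumptions, not the benchmark's official protocol:

```python
import re

# Hypothetical concept list; ConViS-Bench defines five general-purpose
# concepts, and "location" is the one example named in the text above.
CONCEPTS = ["location"]

def build_prompt(concept: str) -> str:
    """Format a concept-conditioned similarity query for a video LMM.

    The wording is illustrative, not the benchmark's official prompt.
    """
    return (
        f"Rate how similar the two videos are with respect to '{concept}' "
        "on a 1-to-5 scale, where 1 means completely different and 5 means "
        "nearly identical. Answer with a single integer."
    )

def parse_score(answer: str) -> int | None:
    """Extract the first 1-to-5 integer from the model's free-form answer."""
    match = re.search(r"[1-5]", answer)
    return int(match.group()) if match else None
```

The parsed per-concept scores can then be compared directly against the human 1-to-5 annotations to measure a model's alignment with human judgments.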
ConViS-Bench contains 610 video pairs, each annotated with human-judged similarity scores across five general-purpose concepts. In addition to these quantitative scores, each pair is accompanied by free-form descriptions highlighting shared and differing elements, offering qualitative insight into human reasoning. The benchmark covers the broadest range of domains to date (16 in total) and features longer videos on average than existing benchmarks.
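For illustration, a single benchmark entry might be organized as below; the field names and types are our own guesses based on the description above, not the released annotation schema:

```python
from dataclasses import dataclass, field

@dataclass
class VideoPairAnnotation:
    """One hypothetical ConViS-Bench entry: a video pair with human
    1-to-5 scores for each of the five concepts plus free-form text.
    Field names are illustrative, not the released schema."""
    video_a: str                      # path or identifier of the first video
    video_b: str                      # path or identifier of the second video
    domain: str                       # one of the 16 domains
    concept_scores: dict[str, int] = field(default_factory=dict)  # concept -> 1..5
    similarities: str = ""            # free-form description of shared elements
    differences: str = ""             # free-form description of differing elements
```

Under this sketch, concept-conditioned retrieval amounts to ranking candidate pairs by `concept_scores[c]` for a chosen concept `c`.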
If you find our work useful in your research, please consider citing:
@inproceedings{liberatori2025convisbench,
  title={ConViS-Bench: Estimating Video Similarity Through Semantic Concepts},
  author={Benedetta Liberatori and Alessandro Conti and Lorenzo Vaquero and Yiming Wang and Elisa Ricci and Paolo Rota},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2025},
  url={https://openreview.net/forum?id=NoIWLerNKH}
}