Open-domain Visual Entity Recognition

Towards Recognizing Millions of Wikipedia Entities

Hexiang Hu1
Yi Luan1
Yang Chen1,2*
Urvashi Khandelwal 1
Mandar Joshi1
Kenton Lee1
Kristina Toutanova1
Ming-Wei Chang1
1Google DeepMind
2Georgia Tech
(*: Work done while the author was an intern at Google)

Large-scale multi-modal pre-training models such as CLIP and PaLI exhibit strong generalization across a variety of visual domains and tasks. However, existing image classification benchmarks often evaluate recognition within a specific domain (e.g., outdoor images) or for a specific task (e.g., classifying plant species), which falls short of evaluating whether pre-trained foundation models are universal visual recognizers. To address this, we formally present the task of Open-domain Visual Entity recognitioN (OVEN), where a model needs to link an image to a Wikipedia entity with respect to a text query. We construct OVEN-Wiki by re-purposing 14 existing datasets, with all labels grounded in a single label space: Wikipedia entities. OVEN challenges models to select among six million possible Wikipedia entities, making it the general visual recognition benchmark with the largest number of labels to date. Our study of state-of-the-art pre-trained models reveals substantial headroom in generalizing to this massive label space. We show that a PaLI-based auto-regressive visual recognition model performs surprisingly well, even on Wikipedia entities that were never seen during fine-tuning. We also find that existing pre-trained models exhibit complementary strengths: while PaLI-based models achieve higher overall performance, CLIP-based models are better at recognizing tail entities.
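To make the task formulation concrete, below is a minimal, runnable sketch of the CLIP-style "recognition as retrieval" approach mentioned above. Everything here is a hypothetical stand-in: the encoders (encode_text, encode_image_with_query) are random projections rather than the actual CLIP or PaLI models, and the three-entity knowledge base, image id, and query are illustrative placeholders for the real six-million-entity OVEN-Wiki label space.

```python
import hashlib
import numpy as np

EMBED_DIM = 64

def _seed(text: str) -> int:
    # Deterministic seed from a string, so the toy encoders are reproducible.
    return int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")

def encode_text(text: str) -> np.ndarray:
    """Hypothetical text encoder: maps a string to a unit-norm embedding."""
    v = np.random.default_rng(_seed(text)).standard_normal(EMBED_DIM)
    return v / np.linalg.norm(v)

def encode_image_with_query(image_id: str, query: str) -> np.ndarray:
    """Hypothetical multimodal encoder for an (image, text query) pair."""
    v = encode_text(image_id) + encode_text(query)
    return v / np.linalg.norm(v)

# Toy entity "knowledge base": three names standing in for the full
# six-million-entity Wikipedia label space used in OVEN.
entities = ["Golden Gate Bridge", "Tower Bridge", "Brooklyn Bridge"]

# CLIP-style retrieval: pre-compute an embedding for every entity name,
# then rank all entities by similarity against the joint (image, query)
# embedding; the top-scoring entity is the prediction.
entity_matrix = np.stack([encode_text(e) for e in entities])  # (N, D)
query_vec = encode_image_with_query("img_0042.jpg", "What is this bridge?")
scores = entity_matrix @ query_vec  # cosine similarity (all unit-norm)
print("Top entity:", entities[int(np.argmax(scores))])
```

A PaLI-style alternative would instead generate the entity name token by token with an auto-regressive decoder, avoiding a pre-computed entity index but requiring the generated string to be mapped back onto a valid Wikipedia entity.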

OVEN models recognize visual entities on Wikipedia from images in the wild

Special Thanks

We thank Boqing Gong and Soravit Changpinyo for their in-depth reviews of an early version of this paper and for their valuable comments and suggestions. We thank Xi Chen for providing different variants of the PaLI pre-trained checkpoints. We thank Huiwen Chang and the Muse team for providing their website template. We also thank Radu Soricut, Anelia Angelova, Alan Ritter, Chao-Yuan Wu, and Jiacheng Chen for discussions and feedback on the project.