Variational saliency maps for dynamic image sequences

Aniello Raffaele Patrone, Christian Valuch, Ulrich Ansorge, Otmar Scherzer

Publications: Contribution to book › Contribution to conference proceedings

Abstract

Saliency maps are an important tool for modeling and predicting human eye movements, but their applications to dynamic image sequences, such as videos, are still limited. In this work we propose a variational approach for determining saliency maps in dynamic image sequences. The main idea is to merge information from static saliency maps computed on every single frame of a video (Itti & Koch, 2001) with the optical flow (Horn & Schunck, 1981; Weickert & Schnörr, 2011) representing motion between successive video frames. Including motion information in saliency maps is not novel, but our variational approach offers a new solution to the problem, one that is also conceptually compatible with successive stages of visual processing in the human brain. We present the basic concept and compare our modeling results to classical methods of saliency and optical flow computation. In addition, we present preliminary eye-tracking results from an experiment in which 24 participants viewed 80 real-world dynamic scenes. Our data suggest that our algorithm allows feasible and computationally cheap modeling of human attention in videos.
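The abstract does not state the energy functional itself, but a minimal sketch of the kind of variational formulation it describes could look as follows. Here S(x,t) denotes the dynamic saliency map to be computed, S_0(x,t) the per-frame static saliency (Itti & Koch), and v(x,t) the optical flow field (Horn & Schunck); the specific terms and the weights \alpha, \beta are illustrative assumptions, not the authors' actual model:

E(S) = \int_{\Omega \times [0,T]} \Big[ (S - S_0)^2 + \alpha \, |\partial_t S + v \cdot \nabla S|^2 + \beta \, |\nabla S|^2 \Big] \, dx \, dt, \qquad \alpha, \beta > 0.

In such a sketch, the first term keeps the result close to the static saliency of each frame, the second favors saliency that is transported along the estimated motion (a brightness-constancy-type constraint applied to the saliency map rather than to image intensities), and the third regularizes the map spatially.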
Original language: English
Title: ECEM 2015 Abstracts of the 18th European Conference on Eye Movements
Pages: 224
Publication status: Published - 2015

ÖFOS 2012

  • 101028 Mathematical modelling
