Abstract
Social perception relies on different sensory channels, including vision and audition, both of which are especially important for judgements of appearance. Therefore, to understand multimodal integration in person perception, it is important to study face and voice in synchronized form. We introduce the Vienna Talking Faces (ViTaFa) database, a high-quality audiovisual database focused on multimodal research of social perception. ViTaFa includes different stimulus modalities: audiovisual dynamic, visual dynamic, visual static, and auditory dynamic. Stimuli were recorded and edited under highly standardized conditions and were collected from 40 real individuals, and the sample matches typical student samples in psychological research (young individuals aged 18 to 45). Stimuli include sequences of various types of spoken content from each person, including German sentences, words, reading passages, vowels, and language-unrelated pseudo-words. Recordings were made with different emotional expressions (neutral, happy, angry, sad, and flirtatious). ViTaFa is freely accessible for academic non-profit research after signing a confidentiality agreement form via https://osf.io/9jtzx/ and stands out from other databases due to its multimodal format, high quality, and comprehensive quantification of stimulus features and human judgements related to attractiveness. Additionally, over 200 human raters validated the emotional expressions of the stimuli. In summary, ViTaFa provides a valuable resource for investigating audiovisual signals in social perception.
| Original language | English |
| --- | --- |
| Pages (from-to) | 2923-2940 |
| Number of pages | 18 |
| Journal | Behavior Research Methods |
| Volume | 56 |
| Issue number | 4 |
| Early online date | 10 Nov 2023 |
| DOIs | |
| Publication status | Published - Apr 2024 |
Austrian Fields of Science 2012
- 501011 Cognitive psychology
- 501006 Experimental psychology
- 501026 Psychology of perception
Keywords
- Attractiveness
- Audiovisual integration
- Face
- Social perception
- Voice