Marianna Bolognesi (University of Oxford)
In this talk I will first explain how visual metaphors are defined, identified 'in the wild', and analysed within the current theoretical frameworks proposed by cognitive linguists and media scholars. I will then introduce VisMet 1.0 (http://www.vismet.org/VisMet/index.php), a corpus of visual metaphors developed at the University of Amsterdam. The currently released version of VisMet comprises 350 images classified by genre (advertising, political cartoons, artworks). All images have been annotated by independent coders and can be browsed by users according to various theoretical criteria: content conceptualization, content expression, content realization, and linguistic expression. These criteria will be described and exemplified during the talk.
In addition, I will present a new functionality of this corpus, currently under development: a large-scale dataset of crowdsourced tags. The tags were collected online through a crowdsourcing task in which participants were asked to provide meaningful keywords for the images in the corpus. During this task, we manipulated exposure time: participants could see each image only for a limited duration (1, 5, 15, or 20 seconds).
The collected tags were first normalized by standard procedures, and then analysed in relation to the type of information they conveyed, based on the psycholinguistic model of visual metaphor processing proposed by Šorm and Steen (2013). We observed that while perceptual features of the images (shapes, colors, identification of concrete objects) appeared in the tags even at short exposure times, features denoting metaphor interpretation (such as abstract concepts that were not graphically depicted) tended to appear only at longer exposure times. These results will be discussed in the talk.