Abstracts

The polysemy problem: Understanding multiple meanings
Jean Aitchison, University of Oxford

In the 19th and early 20th centuries, words with seemingly changed meanings were viewed with disapproval and were sometimes described as ‘weakened’ or ‘bleached’. Nowadays we view this phenomenon differently: words do not weaken, they expand. The old meanings remain, but are joined, and often outnumbered, by newer, more widely used meanings. In short, we talk about ‘polysemy’ (multiple meanings) or ‘layering’. Layering is not weakening but expansion, as words become polysemous. In fact, the majority of common words in our dictionaries have more than one meaning, and often several. But this leaves a number of questions, which will be discussed. First, do different parts of speech formed from the same root, such as /devastate/ and /devastation/, behave in a parallel way when they layer, or differently? Second, do some kinds of words become polysemous more quickly than others? Third, how do words become polysemous? Fourth, why do words become polysemous? Fifth, how do speakers understand one another when words can have multiple meanings? This talk will attempt to answer these questions by exploring words for catastrophic events. The data will be drawn from the British National Corpus, a database of written and spoken language, and also from newspapers, which tend to be ‘tuned in’ to current usage and which often report dramatic events labelled calamities, catastrophes, disasters or tragedies, even when the events described are relatively trivial ones that have been dramatised in order to attract and retain readers. The paper will look at the range of events which are labelled as catastrophic, and will also discuss the language used when a genuine disaster or tragedy is reported.
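
The claim that most common words carry several senses can be checked roughly against a machine-readable dictionary. A minimal Python sketch, assuming NLTK with the WordNet data installed; the word list is illustrative, not drawn from the talk:

    # Count WordNet senses for some 'catastrophe' words (illustrative only).
    from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

    for word in ["disaster", "tragedy", "catastrophe", "calamity"]:
        senses = wn.synsets(word, pos=wn.NOUN)
        print(f"{word}: {len(senses)} noun sense(s)")
        for s in senses:
            print(f"  {s.name()}: {s.definition()}")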

The processing of emotional information
Kai Alter, Newcastle University, UK

Successful social interaction relies on the ability to react to communication signals. In particular, emotional vocalizations regulate social relationships on different levels, i.e. between adults as well as between adults and children. Furthermore, there is no doubt about the importance of optimal communication of emotion between healthy adults and patients. In this presentation, I will highlight aspects of the brain networks involved in communicating emotional information. I will demonstrate how the choice of materials, as well as emotional valence, may have an impact on the functional characteristics of the brain regions implicated in the processing of emotional information. I will focus on three different aspects:

1. Voice and emotion: The overlap between voice-sensitive brain areas and ‘classical’ emotional pathways such as the right middle temporal gyrus, amygdala, insula, and mediodorsal thalami (Campanella & Belin 2007; Wittfoth et al. 2009)

2. Valence: The impact of emotional valence on brain areas involved
2.1 in discrimination, i.e. happiness vs. anger, with activations in the right dorsal anterior cingulate cortex, the right superior temporal gyrus and sulcus, and bilateral orbito-frontal cortices (Ethofer et al. 2009)
2.2 in the activation of additional motor areas for positive utterances (Warren et al. 2006)

3. The amount of lexicality during the processing of emotional information encoded in interjections has a strong influence on identification (Dietrich et al. 2006; Metcalfe et al. 2009). Moreover, I want to argue in favour of a distinction between interjections with low and high lexical content, which may activate different brain areas (Dietrich et al. 2008).

References
Campanella, S., & Belin, P. (2007). Integrating face and voice in person perception. Trends Cogn Sci, 11(12), 535-543.
Dietrich, S., Szameitat, D., Ackermann, H., & Alter, K. (2006). How to disentangle lexical and prosodic information? Psychoacoustic studies on the processing of vocal interjections. Progress in Brain Research, 156, 295-302.
Dietrich, S., Hertrich, I., Alter, K., Ischebeck, A., & Ackermann, H. (2008). Understanding the emotional expression of verbal interjections: a functional MRI study. Neuroreport 19, 1751-1755.
Ethofer, T., Kreifelts, B., Wiethoff, S., Wolf, J., Grodd, W., Vuilleumier, P., & Wildgruber, D. (2009). Differential influences of emotion, task, and novelty on brain regions underlying the processing of speech melody. J. Cogn. Neurosci., 21(7), 1255-1268.
Metcalfe, C., Grube, M., Gabriel, D., Dietrich, S., Ackermann, H., Cook, V., Hanson, A., & Alter, K. (2009). The processing of emotional utterances: contributions of prosodic and lexical information. In: Alter, K., Horne, M., Lindgren, M., Roll, M. & von Koss Torkildson, J. (eds.), Brain Talk. Discourse with and in the brain. Papers from the 1st Birgit Rausing Language Program Conference in Linguistics, Lund, June 2008. Lund: Media Tryck, 139-149.
Warren, J.E., Sauter, D.A., Eisner, F., Dresner, A., Wise, R.J.S., Rosen, S., & Scott, S.K. (2006). Positive emotions preferentially engage an auditory-motor ‘mirror’ system. J. Neurosci., 26, 13067-13075.
Wittfoth, M., Schröder, C., Schardt, D.M., Dengler, R., Heinze, H.J., & Kotz, S.A. (2009). On Emotional Conflict: Interference Resolution of Happy and Angry Prosody Reveals Valence-Specific Effects. Cereb. Cortex. Jun 8. [Epub ahead of print]

When language understanding meets the motor system
Véronique Boulenger
Laboratoire Dynamique du Langage CNRS UMR 5596, Lyon, France


Theories of embodied cognition consider language understanding to be closely linked to sensory and motor processes. In this talk, I will present recent evidence from kinematic and electrophysiological studies showing that processing of words referring to bodily actions, even when they are subliminally presented, recruits the same motor regions that are involved in movement preparation and execution. I will also discuss the functional role of the motor system in action word retrieval in light of a neuropsychological study that revealed modulation of masked repetition priming effects for action verbs in Parkinson’s patients as a function of dopaminergic treatment. Finally, neuroimaging data showing somatotopic activation along the motor strip during reading of action words embedded in idiomatic sentences (e.g. /He grasped the idea/) will be presented. Altogether, these findings provide strong arguments that semantic mechanisms are grounded in the action-perception systems of the brain. In particular, they support the existence of common neural substrates for action word retrieval (even at an abstract level) and motor action, and suggest that cortical motor regions contribute to the processing of lexico-semantic information about action words.


A neural network approach to compositionality
Michael Fortescue, University of Copenhagen

This paper addresses the question of compositionality in terms of the neural network model developed in Fortescue (2009). Relevant aspects of the model include the distinction between sensory, micro-functional and macro-functional affordances, where the second kind is restricted to features directly relevant to the lexico-grammar (and the broad ‘derivational’ relationships binding it together), and the third kind consists of frames or scenarios providing the wider context for the lexical items concerned. Both nominal compounding and verbal decomposition will be addressed. As regards the former, an example of the mutual adjusting of the meanings of the components of nominal compounds (as handled by Pustejovsky in terms of ‘qualia unification’) will be presented. As regards verbal decomposition, the focus will be on verbs of motion and manipulation; here the ‘logic’ of simple ‘atomic’ verbs (directly reflecting basic ‘image schemas’) is passed on to more complex ‘molecular’ ones that may be built up in a way that is reminiscent of Levinson’s ‘dual level’ theory and of Langacker’s notion of ‘partial compositionality’. The approach takes a middle path between universal compositionality (in the manner of Wierzbicka) and the holistic eschewing of any compositionality (in the manner of Fodor). This leads to consideration of a single more complex verb involving not only sundry connotational associations but also a specific stylistic dimension that draws upon a number of macro-functional ‘scenarios’. Its relationship to other complex words of overlapping semantic content will be considered. The semantics of such complex words, it is claimed, may be built up (and learnt) compositionally from words and meanings already learnt, but is deployed holistically in actual usage. It will be suggested that the bi-directional interplay between ‘basic words’ (stored in the left hemisphere) and contextual scenarios (anchored principally in the right hemisphere) is crucial to this elaborational process. Scenarios, it is claimed, are related through their own kind of ‘derivational’ compositionality and logic, of a type analogous to, but more coarse-grained than, that of lexical items.
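
The ‘qualia unification’ step mentioned above can be pictured as merging structured lexical entries. A toy Python sketch; the qualia fields follow Pustejovsky’s four roles, but the entries and the adjustment rule are invented for illustration and are not the model in the paper:

    # Toy Pustejovsky-style qualia structures: the head noun's telic role
    # supplies a default reading for a nominal compound (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Qualia:
        formal: str        # what kind of thing it is
        constitutive: str  # what it is made of
        telic: str         # its purpose or function
        agentive: str      # how it comes into being

    LEXICON = {
        "bread": Qualia("food", "flour and water", "eating", "baking"),
        "knife": Qualia("tool", "blade and handle", "cutting", "manufacture"),
    }

    def compound(modifier: str, head: str) -> str:
        # Sketch of 'qualia unification': the modifier fills the head's
        # telic role, mutually adjusting the compound's default meaning.
        q = LEXICON[head]
        return f"{modifier} {head}: a {q.formal} for {q.telic} {modifier}"

    print(compound("bread", "knife"))  # bread knife: a tool for cutting bread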

References
Fortescue, Michael. 2009. A neural network model of lexical organisation. London/New York: Continuum Books.
Langacker, Ronald. 2000. Grammar and Conceptualization. Berlin: Mouton de Gruyter.
Levinson, Stephen C. 2003. Space in Language and Cognition. Cambridge: Cambridge University Press.
Pustejovsky, James. 1995. The Generative Lexicon. Cambridge, Mass.: The MIT Press.


Dual Coding Theory and the Mental Lexicon
Allan Paivio, University of Western Ontario

Described in different terminology, the mental lexicon is part of the structural and processing foundations of dual coding theory (DCT). The DCT approach differs radically from the standard approach to the mental lexicon in linguistics and psychology. The differences are related to a long-standing dispute concerning the nature of the mental representations that mediate perception, comprehension, and performance in linguistic and nonlinguistic tasks. The issue contrasts what have been described as common coding and multiple coding views of mental representations. The common coding view is that a single, abstract form of representation underlies language and other cognitive skills. The standard approach to the mental lexicon is in that category. The multiple coding view is that mental representations are modality specific and multimodal. The DCT view of the mental lexicon is in the latter camp.
The assumption of a single abstract mental lexicon is apparent in Chomskyan and Chomsky-inspired generative grammars, as well as in cognitive linguistic theories that include the concept. I document that assertion and discuss the logical and empirical limitations of single-code theories as explanations of language performance. I then describe the multimodal DCT alternative. The DCT representational units were initially referred to simply as verbal and nonverbal (or imaginal) representations that vary in sensory-motor modality. Subsequently, in the interests of descriptive economy, the verbal and nonverbal units were called logogens and imagens, respectively. Logogen was adapted from John Morton's use of the term, stretched to become a logogen "family" that includes auditory, visual, haptic, and motor logogens to accommodate behavioral and neuropsychological evidence. Nonverbal DCT imagens are also multimodal and are involved in explanations of language and other cognitive phenomena, cooperating with logogens through inter-unit connections between and within the verbal and nonverbal systems. I present behavioral and neuropsychological evidence for the DCT interpretation of the mental lexicon, arguing that DCT predicts and explains language-related phenomena that are problematic for the abstract coding alternatives.
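
The architecture described here, modality-specific logogens and imagens linked by connections within and between the verbal and nonverbal systems, can be pictured as a small graph. A speculative Python sketch; the unit names and links are invented for illustration and are not Paivio's formal model:

    # Toy dual-coding network: verbal units (logogens) and nonverbal units
    # (imagens) with links within and between systems (illustrative only).
    from collections import defaultdict

    class DualCodingNet:
        def __init__(self):
            self.links = defaultdict(set)

        def connect(self, a: str, b: str) -> None:
            self.links[a].add(b)
            self.links[b].add(a)

        def spread(self, unit: str) -> set:
            """One step of spreading activation from a unit."""
            return self.links[unit]

    net = DualCodingNet()
    net.connect("logogen:visual:'dog'", "logogen:auditory:/dog/")   # within verbal system
    net.connect("logogen:visual:'dog'", "imagen:visual:dog-shape")  # between systems
    net.connect("imagen:visual:dog-shape", "imagen:auditory:bark")  # within nonverbal system
    print(net.spread("logogen:visual:'dog'"))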


Dopaminergic modulation of lexical selection
Gabriele Scheler, Stanford University

A great number of human behavioral experiments, partly reviewed in the talk, converge on the idea that increased dopaminergic tone leads to increased cognitive fluency, associative recall and semantic awareness, decreased context-dependence, and increased semantic interference in lexical perception.

We analyse the process of lexical perception and selection as driven by prefrontal cortical ensembles.

Prefrontal ensembles ultimately connect to widespread neuronal activation networks for lexical meaning. We discuss, however, only the role of the prefrontal ensembles and their modulation by dopamine. In our model, lexical perception corresponds to distributed neuronal activation within the structured prefrontal network, and lexical selection corresponds to competition between activated ensembles. Prefrontal ensembles are modeled on cortical microcolumns with similar connectivity. Individual neurons are constructed as FitzHugh-Nagumo-style models with additional parametrization corresponding to dopamine modulation. By modifying the network with dopamine-related effects, we attempt to achieve reduced competition and a higher chance of parallel, noisy or hybrid ensemble activation. The most significant part of the model is the concerted modification of single-neuron activation properties, network-wide synaptic connectivity, and synapse-specific coincidence detection, all of which are related to dopamine. Generating consistent explanations of high-level cognitive function from these separate processes is difficult, but may pave the way towards further realistic models with potential significance for other cognitive tasks as well.
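
As a concrete picture of the single-neuron component, here is a minimal FitzHugh-Nagumo simulation in Python in which a dopamine-like parameter scales input gain. The parameter values and the form of the modulation are assumptions for illustration, not the parametrization used in the talk:

    # FitzHugh-Nagumo neuron with a dopamine-like input gain (Euler method).
    import numpy as np

    def simulate(I_ext, dopamine=1.0, a=0.7, b=0.8, tau=12.5,
                 dt=0.01, steps=20000):
        """Integrate dv/dt = v - v^3/3 - w + g(DA)*I, dw/dt = (v + a - b*w)/tau."""
        v, w = -1.0, -0.5
        trace = np.empty(steps)
        gain = dopamine  # assumed: dopamine scales input gain (excitability)
        for t in range(steps):
            dv = v - v**3 / 3.0 - w + gain * I_ext
            dw = (v + a - b * w) / tau
            v += dt * dv
            w += dt * dw
            trace[t] = v
        return trace

    # Higher 'dopaminergic tone' lowers the effective firing threshold here:
    low = simulate(I_ext=0.3, dopamine=1.0)
    high = simulate(I_ext=0.3, dopamine=1.6)
    print("spikes (low DA):", int(np.sum((low[1:] > 1.0) & (low[:-1] <= 1.0))))
    print("spikes (high DA):", int(np.sum((high[1:] > 1.0) & (high[:-1] <= 1.0))))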



Automaticity and attentional control in neural language processing
Yury Shtyrov, Medical Research Council, Cambridge

A long-standing debate in the science of language is whether our capacity to process language draws on attentional resources, or whether some stages or types of this processing (e.g. lexical or syntactic access) may be automatic. I will present a series of experiments in which this issue was addressed by modulating the level of attention to the auditory input while recording event-related brain activity elicited by spoken linguistic stimuli [1-3].

The overall results of these studies show that the language function does possess a certain degree of automaticity, which seems to apply to different types of information (including lexical access). It can be explained, at least in part, by the robustness of strongly connected linguistic memory circuits in the brain, which can activate fully even when attentional resources are low. At the same time, this automaticity is limited to the very first stages of linguistic processing (<200 msec from the point in time when the relevant information is available in the auditory input, e.g. the word recognition point). Later processing steps are, in turn, more affected by attention modulation. These later steps, which possibly reflect a more in-depth, secondary processing or re-analysis and repair of incoming speech, therefore appear dependent on the amount of resources allocated to language. Full processing of spoken language may thus not be possible without allocating attentional resources to it; this allocation may itself be triggered by the early automatic stages in the first place.

The results will be further discussed in the framework of distributed neural circuits which function as memory traces for language elements in the human brain.

References
1. Garagnani, M., Shtyrov, Y., & Pulvermüller, F. (2009). Effects of attention on what is known and what is not: MEG evidence for functionally discrete memory circuits. Front Hum Neurosci, 3, 10.
2. Pulvermüller, F., Shtyrov, Y., Hasting, A.S., & Carlyon, R.P. (2008). Syntax as a reflex: neurophysiological evidence for early automaticity of grammatical processing. Brain Lang, 104, 244-253.
3. Shtyrov, Y., Kujala, T., & Pulvermüller, F. (2009). Interactions between language and attention systems: early automatic lexical processing? J Cogn Neurosci.