In conversation, speakers often produce hand movements synchronised with their speech, and an ordinary observer easily notices that at least some of these movements are related to the meaning of the speech they accompany. These movements are traditionally called co-speech gestures, and their role in spoken communication has given rise to numerous scientific discussions. One question that has attracted considerable interest and debate within gesture research is why we gesture while speaking, and the answers offered have produced several approaches explaining the function and use of co-speech gestures. According to proponents of the cognitive approach, co-speech gestures facilitate lexical access to the mental lexicon during speech production and are therefore tied exclusively to the speaker; their influence on the transfer of semantic information is minimal and insufficient to improve communicative value (Morrel-Samuels and Krauss, 1992). According to the communicative approach, co-speech gestures are intended for the listener because they facilitate comprehension of the linguistic message, provide an additional source of information for easier decoding of the target content, and convey information in a way similar to speech (Kelly, Özyürek and Maris, 2010a). Finally, according to the dual approach, co-speech gestures have both a communicative and a cognitive role, depending on the type of information they convey (Alibali, Heath and Myers, 2001). To empirically test the role of co-speech iconic gestures in language reception, we conducted two experiments using the priming paradigm. By counterbalancing the two modalities of the target stimulus (audio and audio-visual) and the language of the stimulus (Croatian and English), participants had to decide as quickly and accurately as possible whether the action they saw in the prime matched the audio recording they heard as part of the target stimulus. Co-speech iconic gestures were congruent with the verbal part of the target stimulus in only a part of the tasks.
The results of our study showed that the type of modality affects the speed and accuracy of comprehension of the linguistic message. More specifically, participants processed high-imageability concrete verbs faster when these were presented auditorily than audio-visually, both in their native language and in their first foreign language. The presence of congruent co-speech iconic gestures did not contribute to faster reception of information, whereas incongruent co-speech iconic gestures slowed down language reception when the information was presented multimodally. It can therefore be concluded that the obtained results support the thesis of the dual role of co-speech iconic gestures.
Abstract (English)
When communicating, speakers often produce hand movements which are synchronised with their speech. A layperson observing such communication would easily notice that at least a part of these movements is related to the meaning of the speech they accompany. These movements are usually defined as co-speech gestures, one of the most common types of gestures, which occur spontaneously in multimodal utterances. Such spontaneous gestures carry meaning related to speech at the semantic, pragmatic and discourse levels (Kita, van Gijn and van der Hulst, 1998). In the psycholinguistic literature, co-speech gestures are typically defined in terms of their complex and integral relationship with language. David McNeill, a pioneering researcher in gesture studies, argues that “[...] gesture is not a behavioral fossil, not an ‘attachment’ to language or an ‘enhancement’, but is an indispensable part of our ongoing current system of language and was selected with speech in the evolution of this system.” (McNeill, 2005: 233). Why co-speech gestures occur and what role they play in spoken languages are some of the questions that have stimulated academic interest in gesture studies and encouraged numerous scientific discussions, which have resulted in several different approaches and theories explaining gestural function and use. The cognitive approach assumes that co-speech gestures facilitate lexical access to the mental lexicon during language production, which is why they benefit only the speaker. Their influence on the reception of semantic information is minimal and insufficient to improve communicative value (Morrel-Samuels and Krauss, 1992). The advocates of the communicative approach, on the other hand, argue that co-speech gestures benefit the listener since they facilitate language reception, providing an additional source of information for easier decoding and conveying the message in a speech-like way (Kelly, Özyürek and Maris, 2010a).
Finally, the dual approach suggests that gestures have both a cognitive and a communicative function, depending on the type of information they convey (Alibali, Heath and Myers, 2001). To empirically test the role of co-speech iconic gestures in language reception, we conducted two experiments using the priming paradigm. Both experiments were designed and conducted under controlled experimental conditions using the E-Prime 3.0 software package. In the first experiment, we analysed the effect of the modality of the stimulus (audio and audio-visual) and the language of the stimulus (Croatian and English) on reaction time and response accuracy. Each task started with a fixation cross (+) shown for 500 ms, after which the prime and target stimuli were presented sequentially. The participants were first presented with an action prime (a video clip of a real, everyday action) for 2000 ms, followed by an audio-visual speech-gesture target for 2000 ms. Our aim was to compare the reaction times of correctly recognised stimuli shown with a congruent co-speech iconic gesture, in the native language and the first foreign language, and without a congruent gesture. Consequently, the target stimuli were shown in two languages (Croatian and English) and in two conditions: a) the audio-visual condition, in which the actor produced the sentence describing the action shown in the prime and at the same time performed the semantically congruent co-speech gesture; and b) the audio condition, in which the participant was exposed only to an auditory recording semantically congruent with the action shown in the prime. By pressing the YES or NO button on their keyboards, the participants answered whether the spoken utterance presented in the target stimulus described the action presented in the prime.
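The trial procedure described above (fixation cross for 500 ms, action prime for 2000 ms, speech-gesture target for 2000 ms, then a YES/NO response) can be sketched as a simple data structure. This is a minimal illustrative sketch in Python, not the authors' E-Prime 3.0 implementation; all names and file labels are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    prime_video: str   # clip of a real, everyday action (hypothetical label)
    target_clip: str   # speech (+ gesture) recording (hypothetical label)
    language: str      # "HR" or "EN"
    modality: str      # "audio" or "audiovisual"
    congruent: bool    # does the target describe the primed action?

def trial_timeline(trial: Trial):
    """Return one trial's event sequence as (event, duration_ms) pairs,
    following the durations reported in the procedure above."""
    return [
        ("fixation", 500),                      # fixation cross (+)
        ("prime:" + trial.prime_video, 2000),   # action prime
        ("target:" + trial.target_clip, 2000),  # speech-gesture target
        ("response", None),                     # wait for YES/NO keypress
    ]
```

Representing the trial as data makes it easy to generate the counterbalanced language-by-modality design as a flat list of `Trial` objects.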
The results of the first experiment show that the language of the stimulus has no effect on language reception of high-imageability concrete words, as no significant difference in reaction times or accuracy was noted between the two languages. This result is explained by the fact that the participants were highly proficient foreign language speakers, which negated the facilitatory effect of gestural modality. On the other hand, we established a statistically significant effect of the type of modality on reaction time, as faster reaction times were recorded for stimuli in the audio-only condition in both languages. This conclusion is in line with the results reported by Krauss, Morrel-Samuels and Colasante (1991), McNeil, Alibali and Evans (2000) and Kelly, Özyürek and Maris (2010a). Several factors are suggested as possible reasons why faster processing was noted with audio stimuli: conditional semantic redundancy of the gesture type in question, low complexity of the task, and the effect of common ground. In the second experiment, we analysed the speed and accuracy of language reception for bimodal stimuli using the same priming paradigm. The participants were first presented with an action prime (a video clip of a real, everyday action) followed by an audio-visual speech-gesture target. The target stimuli were shown in two experimental conditions: a) speech + a semantically congruent co-speech iconic gesture, and b) speech + a semantically incongruent co-speech iconic gesture. By pressing the YES or NO button on their keyboards, the participants answered whether the spoken utterance presented in the target stimulus described the real action presented in the prime stimulus. The results of the second experiment indicate that incongruent co-speech iconic gestures had a negative effect on the reaction time and accuracy rate of multimodal language processing.
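The congruent-versus-incongruent comparison above amounts to grouping correct-response reaction times and accuracy by gesture congruence. A minimal sketch of such a per-condition summary, with hypothetical toy data standing in for the experiment's measured values:

```python
from statistics import mean

def condition_summary(trials):
    """Summarise mean RT of correct responses and accuracy per congruence
    condition. Each trial is a dict with keys 'congruent' (bool),
    'rt_ms' (float) and 'correct' (bool)."""
    summary = {}
    for cong in (True, False):
        group = [t for t in trials if t["congruent"] == cong]
        correct = [t for t in group if t["correct"]]
        summary[cong] = {
            # mean reaction time over correct responses only
            "mean_rt_ms": mean(t["rt_ms"] for t in correct) if correct else None,
            # proportion of correct responses in this condition
            "accuracy": len(correct) / len(group) if group else None,
        }
    return summary
```

In the reported pattern, the incongruent condition would show both a higher mean RT and a lower accuracy than the congruent one; establishing significance would of course require inferential statistics, not just these descriptive summaries.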
Slower reaction times and a higher proportion of incorrect answers were recorded when the semantic congruence between the verbal and gestural modalities was compromised, and the same negative effect was recorded in both the native and the foreign language. We believe that such results speak in favour of McNeill's claim that the verbal and gestural modalities are integrated and form a complete image of the person's mental process (McNeill, 1992). Finally, the results yielded by this doctoral dissertation provide empirical support for the hypothesis that co-speech iconic gestures have a dual function in communication. Their dynamic nature, demonstrated in this research and in many scientific papers presented in this dissertation, provides strong support for Hostetter's meta-analytic finding that the role of gestures is neither strictly communicative nor cognitive, but rather depends on a wide range of factors which determine their function (Hostetter, 2011). It may be concluded that a more detailed understanding of the role of gestures in communication, and further investigation of their speech-related properties, might increase the potential for their use and implementation in many domains of linguistic research.