We have created a novel experimental paradigm for mapping the temporal dynamics of audiovisual integration in speech. Specifically, we employed a phoneme identification task in which McGurk stimuli were overlaid with a spatiotemporally correlated visual masker that revealed critical visual cues on some trials but not on others. As a result, McGurk fusion was observed only on trials for which critical visual cues were available. Behavioral patterns in phoneme identification (fusion or no fusion) were reverse correlated with masker patterns over many trials, yielding a classification timecourse of the visual cues that contributed significantly to fusion (a toy version of this analysis is sketched in code at the end of this post). This method offers several advantages over techniques used previously to study the temporal dynamics of audiovisual integration in speech. First, unlike temporal gating (M. A. Cathiard et al., 1996; Jesse & Massaro, 2010; K. G. Munhall & Tohkura, 1998; Smeele, 1994), in which only the first portion of the visual or auditory stimulus is presented to the participant (up to some predetermined "gate" location), masking allows presentation of the entire stimulus on every trial. Second, unlike manipulations of audiovisual synchrony (Conrey & Pisoni, 2006; Grant & Greenberg, 2001; K. G. Munhall et al., 1996; V. van Wassenhove et al., 2007), masking does not require the natural timing of the stimulus to be altered. As in the current study, one can still choose to manipulate stimulus timing in order to examine changes in audiovisual temporal dynamics relative to the unaltered stimulus. Finally, while techniques have been developed to estimate natural audiovisual timing based on physical measurements of speech stimuli (Chandrasekaran et al., 2009; Schwartz & Savariaux, 2014), our paradigm provides behavioral verification of such measures based on actual human perception. To the best of our knowledge, this is the first application of a "bubbles-like" masking procedure (Fiset et al., 2009; Thurman et al., 2010; Thurman & Grossman, 2011; Vinette et al., 2004) to a problem of multisensory integration.

In the present experiment, we performed the classification analysis with three McGurk stimuli presented at different audiovisual SOAs: natural timing (SYNC), 50-ms visual lead (VLead50), and 100-ms visual lead (VLead100). Three major findings summarize the results. First, the SYNC, VLead50, and VLead100 McGurk stimuli were rated nearly identically in a phoneme identification task with no visual masker. Specifically, each stimulus elicited a high degree of fusion, suggesting that all of the stimuli were perceived similarly. Second, the primary visual cue contributing to fusion (the peak of the classification timecourses, Figs. 5-6) was identical across the McGurk stimuli (i.e., the position of the peak was not affected by the temporal offset between the auditory and visual signals). Third, despite this fact, there were significant differences in the contribution of a secondary visual cue across the McGurk stimuli. Namely, an early visual cue (that is, one related to lip movements that preceded the onset of the consonant-related auditory signal) contributed significantly to fusion for the SYNC stimulus, but not for the VLead50 or VLead100 stimuli.
The latter finding is noteworthy because it reveals that (a) temporally-leading visual speech information can significantly influence estimates of auditory signal identity, and (b).
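To make the reverse-correlation procedure concrete, here is a minimal sketch, in Python, of how a classification timecourse could be computed from per-trial masker patterns and binary fusion responses. This is not the authors' code: the trial count, frame count, simulated observer, and permutation-based z-scoring are all illustrative assumptions standing in for the paper's actual stimuli and statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins (not the authors' data): 1,000 trials of a
# 45-frame video; maskers[t, f] = True if frame f was revealed on trial t.
n_trials, n_frames = 1000, 45
maskers = rng.random((n_trials, n_frames)) < 0.5

# Toy observer: fusion becomes more likely when a "critical" window
# (frames 20-25, an arbitrary choice here) is revealed.
critical = maskers[:, 20:26].mean(axis=1)
fusion = rng.random(n_trials) < (0.2 + 0.6 * critical)

# Reverse correlation: mean masker on fusion trials minus mean masker
# on no-fusion trials, frame by frame.
timecourse = maskers[fusion].mean(axis=0) - maskers[~fusion].mean(axis=0)

# Permutation test: shuffle the fusion labels to build a null
# distribution, then convert the timecourse to z-scores.
n_perm = 2000
null = np.empty((n_perm, n_frames))
for i in range(n_perm):
    shuffled = rng.permutation(fusion)
    null[i] = maskers[shuffled].mean(axis=0) - maskers[~shuffled].mean(axis=0)
z = (timecourse - null.mean(axis=0)) / null.std(axis=0)

peak_frame = int(np.argmax(z))
print(f"peak visual cue at frame {peak_frame}, z = {z[peak_frame]:.2f}")
```

In this scheme, the frame at which the z-scored timecourse peaks plays the role of the primary visual cue described above, and computing separate timecourses for SYNC, VLead50, and VLead100 trials would expose condition differences such as the secondary-cue effect.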
