UCL IRIS
Institutional Research Information Service
Publication Detail
Training enhances the ability of listeners to exploit visual information for auditory scene analysis
Abstract
The ability to use temporal relationships between cross-modal cues facilitates perception and behavior. Previously, we observed that temporally correlated changes in the size of a visual stimulus and the intensity of an auditory stimulus influenced the ability of listeners to perform an auditory selective attention task (Maddox et al., 2015). In this task, participants detected timbral changes in a target sound while ignoring those in a simultaneously presented masker. When the visual stimulus was temporally coherent with the target sound, performance was significantly better than when it was temporally coherent with the masker sound, despite the visual stimulus conveying no task-relevant information. Here, we trained observers to detect audiovisual temporal coherence and asked whether this improved their ability to benefit from visual cues during the auditory selective attention task. We observed that these listeners improved performance in the auditory selective attention task and changed the way in which they benefited from a visual stimulus: after training, performance was better when the visual stimulus was temporally coherent with either the target or the masker stream, relative to the condition in which the visual stimulus was coherent with neither auditory stream. A second group, which trained to discriminate modulation-rate differences between temporally coherent audiovisual streams, improved task performance but did not change the way in which they used visual information. A control group did not change their performance between pre-test and post-test. These results provide insights into how cross-modal experience may optimize multisensory integration.
Publication data is maintained in RPS. Visit https://rps.ucl.ac.uk
UCL Researchers
Author: The Ear Institute