Cross-modal transfer in discrimination tasks
Cross-modal inference plays a key role in many situations in which the relational properties of different stimuli must be matched. The ability to recognise by touch things that we have previously perceived only visually is part of what is termed cross-modal inference. To do this, we need to compare signals received through different sensory channels and bring them into conformity with each other. The ability to solve problems based on cross-modal inference has been intensively studied as an essential part of human development. For example, in the experiments of Krekling and co-authors (1989), the ability to solve tactual oddity problems, and the transfer of oddity learning between the visual and tactual modalities, was studied in 3- to 8-year-old children. Oddity tasks consisting of one odd and two identical objects were made from stimuli that were easily discriminated both visually and tactually. The results showed that tactual oddity learning increased gradually with age. Growth in tactual performance begins later than in visual performance, suggesting that children are more adept at encoding visual stimulus invariances or relational properties than tactual ones. Bidirectional cross-modal transfer of oddity learning was found, supporting the suggestion that such transfer occurs when the training and transfer oddity tasks share a common vehicle dimension. The cross-modal effect also shows that oddity learning is independent of a specific modality-labelled perceptual context. These results are consistent with the view that the development of oddity learning depends on a single rather than a dual process, and that the oddity relation may be treated as an amodal stimulus feature.
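The point that the oddity relation is amodal can be made concrete in a short sketch: the rule "pick the item that differs from the other two" refers only to a relation among stimuli, not to any particular modality. The function names and stimulus labels below are illustrative assumptions, not part of the cited studies.

```python
import random

def make_oddity_trial(common, odd):
    """Build a three-item oddity trial: two identical 'common' stimuli
    and one 'odd' stimulus, in random order. Returns (items, odd_index)."""
    items = [common, common, odd]
    random.shuffle(items)
    return items, items.index(odd)

def solve_oddity(items):
    """Apply the oddity rule: choose the item matched only by itself.
    The rule mentions only the relation 'different from the rest', so the
    same code solves a 'visual' trial and a 'tactual' trial alike."""
    for i, item in enumerate(items):
        if sum(other == item for other in items) == 1:  # only itself matches
            return i

# The same amodal rule handles stimuli coded in either modality.
visual_trial, answer = make_oddity_trial("circle", "triangle")
assert solve_oddity(visual_trial) == answer
tactile_trial, answer = make_oddity_trial("smooth", "rough")
assert solve_oddity(tactile_trial) == answer
```

Nothing in `solve_oddity` is modality-specific, which is one way to read the finding that oddity learning transferred bidirectionally between vision and touch.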

Ladygina-Kohts (1923) was the first to demonstrate that a non-human animal can possess the ability of cross-modal inference. In her experiments with the chimpanzee Iony, she placed into a bag several flat and solid figures, such as a prism, a cylinder, a flat circle, a flat square, and a flat triangle. When presented with one of these stimuli visually, the chimp accurately selected the same object in the dark of the bag by touch. That is, the chimp recognised a figure that it had previously perceived visually, relying now only on its tactile properties. These results were later confirmed in experiments on chimpanzees, which proved able to recognise in photographs not only things they had seen before but also things they had only perceived by touch (Davenport, Rogers, 1968).

In a study by Hashiya and Kojima (2001), a chimpanzee solved an auditory–visual intermodal matching-to-sample (AVMTS) task in which, following the presentation of a sample sound, the subject had to select from two photographs the one that corresponded to the sample. Through a series of experiments, the authors describe the features of the chimpanzee's AVMTS performance in comparison with results obtained in a visual intramodal matching task, in which a visual stimulus alone served as the sample. The results show that the acquisition of AVMTS was facilitated by alternating auditory presentation with audio-visual presentation (i.e., the sample sound together with a visual presentation of the object producing that sound). Once AVMTS performance was established for a limited number of stimulus sets, the subject showed rapid transfer of the performance to novel sets. However, the subject showed a steep decay of matching performance as a function of the delay interval between the sample and the choice alternatives when the sound alone, but not when the visual stimulus alone, served as the sample. This may suggest a cognitive limitation of the chimpanzee in auditory-related tasks. In dolphins, a similar cross-modal transfer between vision and echolocation has been demonstrated using a matching-to-sample procedure (Pack, Herman, 1995).
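The trial structure of the AVMTS task can be sketched as follows. This is a minimal illustration of the paradigm only; the sound-to-object pairings and function names are hypothetical, not taken from Hashiya and Kojima's stimulus set.

```python
import random

# Hypothetical sound -> object pairings, for illustration only.
SOUND_TO_OBJECT = {
    "bark": "dog_photo",
    "bell": "bell_photo",
    "car_horn": "car_photo",
}

def avmts_trial(sample_sound, rng=random):
    """One auditory-visual matching-to-sample trial: a sample sound is
    presented, then two photographs are offered; the correct choice is
    the photo of the object that produces the sample sound.
    Returns (choices, correct_index)."""
    correct = SOUND_TO_OBJECT[sample_sound]
    distractor = rng.choice(
        [obj for snd, obj in SOUND_TO_OBJECT.items() if snd != sample_sound]
    )
    choices = [correct, distractor]
    rng.shuffle(choices)
    return choices, choices.index(correct)

choices, correct_idx = avmts_trial("bark")
# A response is scored correct if the subject selects choices[correct_idx].
```

Inserting a pause between presenting the sample sound and offering `choices` would model the delay condition under which the chimpanzee's auditory-sample performance decayed.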

