Criteria of numerical competence for comparative studies

Basic number-related skills, that is, knowledge of quantities and their relations, constitute one of the highest properties of cognition. A fundamental question in cognitive science is whether the sense of numbers is unique to humans or whether we share this capacity with other species. Of course, people who lead lives almost completely devoid of numbers remain in the ranks of human beings. In his book devoted to the human "mathematical brain", the neuropsychologist Butterworth (1999) describes people who have little or no sense of numbers. The clinical terms are acalculia, for people who have lost their sense of numbers after a stroke, and dyscalculia, for people who were born without it. One concrete case described in this book concerns a woman who was blind to numbers greater than four. She could readily perform addition and subtraction, and she could name numbers in sequence, so long as all the digits involved were less than or equal to four. In general, the existence of acalculia is evidence that the human brain may be biologically "wired" for mathematics.

Cognitive ethologists use the term numerosity for a property of a stimulus that is defined by the number of discriminable elements it contains. It seems that our brain, like those of some other species, is equipped from birth with a number sense. This does not mean that animals have an abstract conception of number. At the same time, being able to perceive numbers is helpful in many natural situations, for example, in tracking predators or selecting the best foraging grounds. How far does animals' numerical competence reach?

Perhaps no field of cognitive science is based on comparison between animal and human abilities to such a great extent as the study of animals' numerical competence. However, we still lack an adequate "language" for comparative analysis. Practically all criteria for comparing number-related skills in animals and humans are derived from developmental psychology. The main difficulty in comparing numerical abilities in humans and other species is that our numerical competence is closely connected with our abilities for language and for symbolic representation. It is likely that some species can judge the number of things and sounds, and maybe smells. A good example comes from the field experiments of McComb et al. (1994) with lions in the Serengeti National Park in Tanzania. The lioness leader identifies roaring that comes from individuals who are not members of the pride; she can also represent the defenders as known individuals. She is able to count the distinct roarers as well as the number of her sisters, and she compares the two numbers within a limit of four.

Recent approaches to studying numerical competence in animals are mainly based on criteria suggested by Gelman and Gallistel (1978) for children and then adapted for animal studies by Davis and Pérusse (1988). These authors distinguish several types of numerical competence and suggest that a different cognitive or perceptual process underlies each type. They divide numerical competence into the categories of relative numerousness judgments, subitizing, estimation, and counting.

Relative numerousness judgments involve the simplest decision processes, since no knowledge of an absolute number is required. Instead, an animal compares "more" versus "less". Animal researchers often use the term "numerosity discrimination" instead of "relative numerousness judgments".

Subitizing is a form of pattern recognition that is used to rapidly assess small quantities of simultaneously presented items. The term was coined by Kaufman et al. (1949), who observed that when adult humans were asked to report the number of items in an array there was a discontinuity in their responses: if the number of items was 6 or fewer, the subjects performed the task very quickly, but beyond this number their response time increased dramatically with the quantity of items. They suggested the term "subitizing" to emphasise the difference between human perception of small and large quantities. Subitizing is a sophisticated process that other researchers have described as obtaining "magnitudes through an accumulator mechanism" (Meck and Church, 1983) or as "prototype matching" (Thomas et al., 1999).

Estimation refers to the ability to assign a numerical label to a large array of items, with poor precision. When we judge at a glance that there are about 50 ducks on a lake, we are "estimating". Animals' judgments about large arrays have not yet been systematically studied; nevertheless, some exciting results will be reviewed in this issue.

Counting is the ability to discriminate the absolute number of items in a set by a process of enumeration. This involves tagging each item in the set and applying a series of ordered labels as the items are "counted off". To count the number of peanuts in a packet of mixed nuts, for instance, we might put each peanut to one side, at the same time labelling them "1", "2", "3", and so on. The numerical label we apply to the last peanut we find is the absolute or cardinal number of peanuts in the packet. Davis and Pérusse (1988) regard counting as a more sophisticated process than those involved in relative numerousness judgments or subitizing (and presumably estimation too). They also discuss the concept, or sense, of number as an attribute of counting. This term implies an ability to transfer numerical discriminations across sensory modalities (e.g. 5 sound pulses are equivalent to 5 light flashes) or across modes of presentation (e.g. 4 red squares shown simultaneously on a computer screen are equivalent to 4 red squares presented one after the other).
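The enumeration procedure just described can be sketched in a few lines of Python (a toy illustration only; the packet contents and variable names are invented for the example):

```python
# Counting as enumeration: each peanut found in a packet of mixed nuts
# is set aside and given the next ordered label; the label applied to
# the last peanut is the absolute (cardinal) number of peanuts.
packet = ["peanut", "almond", "peanut", "cashew", "peanut"]

label = 0
for nut in packet:
    if nut == "peanut":
        label += 1        # tag this peanut with the next label: 1, 2, 3, ...

print(label)              # the last label applied is the cardinal number: 3
```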

Gelman and Gallistel (Gelman, Gallistel, 1978; Gallistel, Gelman, 1992) list five criteria that formally define the process of counting and that have been widely accepted in comparative studies. They are:

1. The one-to-one principle. Each item in a set (or event in a sequence) is given a unique tag, code or label so that there is a one-to-one correspondence between items and tags.

2. The stable-order principle (ordinality). The tags or labels must always be applied in the same order (e.g. 1, 2, 3, 4 and not 3, 2, 1, 4). This principle underlies the idea of ordinality: the label "3" stands for a numerosity greater than the quantity called "2" and less than the amount called "4".

3. The cardinal principle (cardinality). The label that is applied to the final item represents the absolute quantity of the set. In children, it seems likely that the cardinal principle presupposes the one-to-one principle and the stable-order principle, and therefore should develop after the child has had some experience in selecting distinct tags and applying those tags to a set.

4. The abstraction principle (property indifference). This principle reflects the realisation that what is counted does not matter. In experiments with children, a child should realise that counting can be applied to heterogeneous items, such as toys of different kinds, colours, or shapes, and should demonstrate the ability to count even actions or sounds. There are indications that many two- and three-year-olds can count mixed sets of objects.

5. The order-irrelevance principle. The order in which the items themselves are tagged does not matter: counting may start from any item, and as long as every item receives exactly one tag, the cardinal result is the same.

It is important to note that Gallistel and Gelman (1992) do not consider counting to be a process dependent on language, and so it can be present within the behavioural repertoire of non-human animals. They consider the symbols needed to meet any of the criteria described above to be non-linguistic mental symbols ("numerons"), or internal tags that the mind makes use of to enumerate a set of objects.
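The idea of numerons can be illustrated with a small sketch (the tag symbols and function name here are invented for the example; any stably ordered list of arbitrary symbols would serve):

```python
# "Numerons": non-linguistic internal tags used to enumerate a set.
# The tags themselves are arbitrary symbols; what matters is that they
# are applied one-to-one and always in the same fixed order.
NUMERONS = ["*", "#", "@", "%", "&"]   # a finite, stably ordered tag list

def enumerate_set(items):
    """Tag items with numerons in fixed order; the last tag gives cardinality."""
    last_tag = None
    for _item, tag in zip(items, NUMERONS):  # one-to-one and stable-order principles
        last_tag = tag
    return last_tag                          # cardinal principle

# Abstraction principle: heterogeneous items are counted the same way.
print(enumerate_set(["toy", "sound", "flash"]))   # -> "@"
```

Note that the tag list is finite, so sets larger than it cannot be enumerated exactly; this loosely parallels the limit of about four reported for some animals.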

Piaget suggested that infants are born with no understanding of numerosity. His early experiments (Piaget, 1942) described infants' lack of numerosity as a poor perception of quantity conservation. Piaget argued that our very idea of number is constructed out of previously developed logical abilities. One of these is transitive reasoning (see Chapter 16): if A is bigger than B, and C is smaller than B, then A must be bigger than C; a child who can work this out is able to put quantities in order. In Piaget's account, these logical abilities do not develop until at least four years of age and do not function in their most abstract form until the teens. Later experiments, however, have shown that infants possess some numerical competence at an early age, and this enables researchers to expand the correspondence between the abilities of humans and non-human animals.
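The transitive step can be made concrete with a toy sketch (the facts and function are invented for illustration and assume an acyclic set of pairwise comparisons):

```python
# Transitive reasoning: from the pairwise facts A > B and B > C, the
# relation A > C can be inferred without ever comparing A and C directly.
FACTS = {("A", "B"), ("B", "C")}   # (x, y) means "x is bigger than y"

def bigger(x, y):
    """True if x > y follows from FACTS, directly or transitively."""
    if (x, y) in FACTS:
        return True
    # transitive step: x > z and z > y together imply x > y
    return any((x, z) in FACTS and bigger(z, y) for z in "ABC")

print(bigger("A", "C"))   # True: inferred via B
```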

 

