The Now-or-Never Processing Bottleneck
In: Creating Language, pp. 93-133
13 results
In: Creating Language, pp. 137-168
In: Creating Language, pp. 3-17
In: Creating Language, pp. 197-225
In: Creating Language, pp. 19-65
In: Creating Language, pp. 227-247
In: Creating Language, pp. 67-91
In: Creating Language, pp. 169-195
In: Strüngmann Forum Reports
In: Human biology: the international journal of population genetics and anthropology; the official publication of the American Association of Anthropological Genetics, Vol. 83, Issue 2, pp. 247-259
ISSN: 1534-6617
In: Developmental science, Vol. 12, Issue 3, pp. 388-395
ISSN: 1467-7687
Abstract: When learning language, young children are faced with many seemingly formidable challenges, including discovering words embedded in a continuous stream of sounds and determining what role these words play in syntactic constructions. We suggest that knowledge of phoneme distributions may play a crucial part in helping children segment words and determine their lexical category, and we propose an integrated model of how children might go from unsegmented speech to lexical categories. We corroborated this theoretical model using a two‐stage computational analysis of a large corpus of English child‐directed speech. First, we used transition probabilities between phonemes to find words in unsegmented speech. Second, we used distributional information about word edges – the beginning and ending phonemes of words – to predict whether the segmented words from the first stage were nouns, verbs, or something else. The results indicate that discovering lexical units and their associated syntactic category in child‐directed speech is possible by attending to the statistics of single phoneme transitions and word‐initial and final phonemes. Thus, we suggest that a core computational principle in language acquisition is that the same source of information is used to learn about different aspects of linguistic structure.
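The first stage described in this abstract (placing word boundaries where the transition probability between adjacent phonemes dips) can be illustrated with a minimal sketch. The toy corpus, the three "words" (ba, dog, ki), and the 0.8 threshold below are invented for demonstration and are not the paper's data or parameters; the second stage (word-edge classification) is omitted here.

```python
from collections import defaultdict

# Toy corpus: each utterance is an unsegmented phoneme string built from
# three hypothetical words (ba, dog, ki) in varying orders, so that
# within-word transition probabilities are high (1.0) and between-word
# probabilities are low (about 0.5). These are illustrative assumptions.
UTTERANCES = ["badogki", "kibadog", "dogkiba", "bakidog", "dogbaki", "kidogba"]

def transition_probabilities(utterances):
    """Estimate P(next phoneme | current phoneme) from adjacent pairs."""
    pair_counts = defaultdict(int)
    context_counts = defaultdict(int)
    for utt in utterances:
        for a, b in zip(utt, utt[1:]):
            pair_counts[(a, b)] += 1
            context_counts[a] += 1
    return {pair: n / context_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(utterance, tps, threshold=0.8):
    """Insert a word boundary wherever the transition probability
    between adjacent phonemes falls below the threshold."""
    words, current = [], utterance[0]
    for a, b in zip(utterance, utterance[1:]):
        if tps.get((a, b), 0.0) < threshold:
            words.append(current)
            current = ""
        current += b
    words.append(current)
    return words

tps = transition_probabilities(UTTERANCES)
print(segment("badogki", tps))  # high-TP runs survive as words: ['ba', 'dog', 'ki']
```

Within-word transitions (b→a, d→o, o→g, k→i) occur in only one context and so have probability 1.0, while word-final phonemes are followed by varying word-initial phonemes, pulling their transition probabilities down to roughly 0.5 and triggering a boundary.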
In: Developmental science, Vol. 22, Issue 6
ISSN: 1467-7687
Abstract: Statistical learning (SL), sensitivity to probabilistic regularities in sensory input, has been widely implicated in cognitive and perceptual development. Little is known, however, about the underlying mechanisms of SL and whether they undergo developmental change. One way to approach these questions is to compare SL across perceptual modalities. While a decade of research has compared auditory and visual SL in adults, we present the first direct comparison of visual and auditory SL in infants (8–10 months). Learning was evidenced in both perceptual modalities but with opposite directions of preference: Infants in the auditory condition displayed a novelty preference, while infants in the visual condition showed a familiarity preference. Interpreting these results within the Hunter and Ames model (1988), where familiarity preferences reflect a weaker stage of encoding than novelty preferences, we conclude that there is weaker learning in the visual modality than the auditory modality at this age. In addition, we found evidence of different developmental trajectories across modalities: Auditory SL increased while visual SL did not change over this age range. These results suggest that SL is not an abstract, amodal ability; for the types of stimuli and statistics tested, auditory SL precedes the development of visual SL, consistent with recent work comparing SL across modalities in older children.