This study explored the comprehension processes involved in reading braille text. It found that some of these processes were similar to those involved in reading printed text. However, braille readers showed no sensitivity to sublexical variables or to most of the variables assumed to be related to integration processes.
Bilingualism is a powerful experiential factor, and its effects have been proposed to extend beyond the linguistic domain by boosting the development of executive functioning skills. Crucially, recent findings suggest that this effect can be detected in bilingual infants before their first birthday, indicating that it emerges as a result of early bilingual exposure and the experience of negotiating two linguistic systems in infants' environment. However, these conclusions rest on only two studies from the last decade (Comishen, Bialystok, & Adler, 2019; Kovács & Mehler, 2009), so to date there is a lack of evidence regarding their replicability and generalizability. In addition, previous research does not shed light on which precise aspects of bilingual experience, and what extent of bilingual exposure, underlie the emergence of this early bilingual advantage. The present study addressed these two questions by assessing attentional control abilities in 7‐month‐old bilingual infants in comparison to same‐age monolinguals and in relation to their individual bilingual exposure patterns. Findings revealed no significant differences between monolingual and bilingual infants on the measure of attentional control, and no relation between individual performance and degree of bilingual exposure. However, bilinguals allocated attention to the visual rewards in this task differently than monolinguals did. Thus, this study indicates that bilingualism modulates attentional processes early on, possibly as a result of bilinguals' experience of encoding dual‐language information from a complex linguistic input, but that it does not lead to significant advantages in attentional control in the first year of life.
This study investigates whether the orthographic consistency and transparency of languages affect the development of reading strategies and reading sub‐skills (i.e. phonemic awareness and visual attention span) in bilingual children. We evaluated 21 French (opaque)‐Basque (transparent) bilingual children and 21 Spanish (transparent)‐Basque (transparent) bilingual children at Grade 2, and 16 additional children per group at Grade 5. All children were assessed in their common language (i.e. Basque) on tasks measuring word and pseudoword reading, phonemic awareness, and visual attention span. The Spanish‐speaking groups showed better Basque pseudoword reading and better phonemic awareness than their French‐speaking peers, but only in the most difficult conditions of the tasks. On the visual attention span task, however, the French‐Basque bilinguals showed the most efficient visual processing strategies. Therefore, learning to read in an additional language differentially affected Basque literacy skills, depending on whether the additional orthography was opaque (e.g. French) or transparent (e.g. Spanish). Moreover, we showed that the most noteworthy effects of Spanish and French orthographic transparency on Basque performance were related to the size of the phonological and visual grain used to perform the tasks.
A commonly shared assumption in the field of visual‐word recognition is that retinotopic representations are rapidly converted into abstract representations. Here we examine the role of visual form vs. abstract representations during the early stages of word processing – as measured by masked priming – in young children (3rd and 6th Graders) and adult readers. To maximize the chances of detecting an effect of visual form, we employed a language with a very intricate orthography, Arabic. If visual form plays a role in the early stages of processing, greater benefit would be expected from related primes that have the same visual form (in terms of the ligation pattern between a word's letters) as the target word (e.g. [ktzb–ktAb]; note that the three initial letters are connected in both prime and target) than from those that do not ([ktxb–ktAb]). Results showed that the magnitude of the priming effect relative to an unrelated condition was remarkably similar for both types of prime. Thus, despite the visual complexity of Arabic orthography, there is fast access to abstract letter representations not only in adult readers but also in developing readers.
Speaking involves the coordination of multiple neuromotor systems, including respiration, phonation and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that regions beyond the primary motor cortex (M1), including other cortical and sub-cortical regions, help orchestrate the neuromotor control needed for speaking. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips and velum (i.e., alveolars versus bilabials, and nasals versus orals), and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, the cerebellum and the basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region in phonation, we found that a dorsal M1 region, linked to respiratory control, showed significant differences for voiced compared to whispered speech despite matched lung volume observations. This region was also functionally connected to tongue and lip M1 seed regions, underscoring its importance in the coordination of speech. Our study confirms and extends current knowledge of the neural mechanisms underlying neuromotor speech control, which holds promise for studying the neural dysfunctions involved in motor-speech disorders non-invasively. ; This work was supported by the Spanish Ministry of Economy and Competitiveness through the Juan de la Cierva Fellowship (FJCI-2015-26814) and the Ramon y Cajal Fellowship (RYC-2017-21845), the Spanish State Research Agency through the BCBL "Severo Ochoa" excellence accreditation (SEV-2015-490), the Basque Government (BERC 2018-2021), and the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant (No 799554).
In this work we ask whether, at birth, the human brain responds uniquely to speech, or whether similar activation also occurs to a non‐speech surrogate 'language'. We compare neural activation in newborn infants to the language heard in utero (English), to an unfamiliar language (Spanish), and to a whistled surrogate language (Silbo Gomero) that, while used by humans to communicate, is not speech. Anterior temporal areas of the neonate cortex are activated in response to both familiar and unfamiliar spoken language, but these classic language areas are not activated by the whistled surrogate form. These results suggest that by the time human infants emerge from the womb, the neural preparation for language is specialized to speech.
This paper investigates whether the semantic and phonological levels in speech production are specific to spoken languages or universal across modalities. We examined semantic and phonological effects during Catalan Sign Language (LSC: Llengua de Signes Catalana) production using an adaptation of the picture–word interference task: native and non-native signers were asked to sign picture names while ignoring signs produced in the background. The results showed semantic interference effects for semantically related distractor signs, and phonological facilitation effects when target signs and distractor signs shared either Handshape or Movement, but phonological interference effects when target and distractor shared Location. The results suggest that the general distinction between semantic and phonological levels holds across modalities. However, differences between sign language and spoken production become evident in the mechanisms underlying phonological encoding, as shown by the different roles that Location, Handshape, and Movement play during phonological encoding in sign language. ; This research was partially supported by Grants SEJ2004-07680-C02-02/PSIC and SEJ2006-09238/PSIC from the Spanish Government. We thank all the deaf volunteers and the deaf associations CERECUSOR and CASAL DE SORDS DE BARCELONA (with special thanks to Javi Vidal, who signed the stimuli for the experiment, and to Santiago Frigola and Delfina Aliaga). We also thank Bencie Woll, Margaret Gillon Dowens, Ansgar Hantsch and the UCSD journal club for their helpful comments on the manuscript.
The present research aims to assess literacy acquisition in children becoming bilingual via second-language immersion in school. We adopt a cognitive components approach, assessing text‐level reading comprehension, a complex literacy skill, as well as its underlying cognitive and linguistic components, in 144 children aged 7 to 14 (72 immersion bilinguals, 72 controls). Principal component analysis revealed a nuanced pattern of results: although emergent bilinguals lagged behind their monolingual counterparts on measures of linguistic processing, they showed enhanced performance on a memory and reasoning component. For reading comprehension, no between‐group differences were evident, suggesting that selective benefits compensate for costs at the level of underlying cognitive components. Overall, the results indicate that literacy skills may be modulated by emerging bilingualism even when no between‐group differences are evident at the level of the complex skill, and that the detection of such differences may depend on the focus and selectivity of the task battery used.