Are you talking to me? Neural activations in 6-month-old infants in response to being addressed during natural interactions
Human interactions are guided by continuous communication among the parties involved, in which verbal communication plays a primary role. However, speech does not necessarily reveal to whom it is addressed, especially for young infants who are unable to decode its semantic content. To overcome this difficulty, adults often explicitly mark their communication as infant-directed. In the present study, we investigated whether ostensive signals, which disambiguate the infant as the addressee of a communicative act, modulate the brain responses of 6-month-old infants to speech and gestures in an ecologically valid setting. In Experiment 1, we tested whether the gaze direction of the speaker modulates cortical responses to infant-directed speech. To provide a naturalistic environment, two infants and their parents participated at the same time. In Experiment 2, we tested whether a similar modulation of the cortical response would be obtained by varying the intonation of speech (infant-directed versus adult-directed) during one-on-one, face-to-face communication. The results of both experiments indicated that only the combination of ostensive signals (infant-directed speech and direct gaze) led to enhanced brain activation. This effect was reflected in responses localized in regions known to be involved in processing the auditory and visual aspects of social communication. This study also demonstrates the potential of fNIRS as a tool for studying neural responses in naturalistic scenarios and for measuring brain function simultaneously in multiple participants.
Concept-based word learning in human infants
It is debated whether infants initially learn object labels by mapping them onto similarity-defining perceptual features or onto concepts of object kinds. We addressed this question by attempting to teach infants words for behaviorally defined action roles. In a series of experiments, we found that 14-month-olds could rapidly learn a label for the role the chaser plays in a chasing scenario, even when the different instances of chasers did not share perceptual features. Furthermore, when given a choice, infants preferred to interpret a novel label as expressing the actor's role within the observed interaction rather than as being associated with the actor's appearance. These results demonstrate that infants can learn labels for concepts identified by abstract behavioral characteristics as easily as, or even more easily than, for those identified by perceptual features. Thus, already at early stages of word learning, infants expect novel words to express concepts.