Fourteen-month-old infants track the language comprehension of communicative partners
Infants employ sophisticated mechanisms to acquire their first language, including some that rely on taking the perspective of adults as speakers or listeners. When do infants first show awareness of what other people understand? We tested 14-month-old infants in two experiments measuring event-related potentials. In Experiment 1, we established that infants produce the N400 effect, a brain signature of semantic violations, in a live object-naming paradigm in the presence of an adult observer. In Experiment 2, we induced false beliefs about the labelled objects in the adult observer to test whether infants keep track of the other person’s comprehension. The results revealed that infants reacted to the semantic incongruity heard by the other person as if they had encountered it themselves: they exhibited an N400-like response, even though the labels were congruous from their own perspective. This finding demonstrates that infants track the linguistic understanding of their social partners.
Communicative mind-reading in preverbal infants
Pragmatic theories of communication assume that humans evolved a species-unique inferential capacity to express and recognize intentions via communicative actions. We show that 13-month-old preverbal infants can interpret the turn-taking exchange of variable tone sequences between unfamiliar agents as indicative of communicative transfer of goal-relevant information from a knowledgeable agent to a naïve agent pursuing the goal. However, infants drew no such inference of information transfer when (a) the agents exchanged fully predictable, identical signal sequences, which cannot convey new information, or (b) no goal-relevant contextual change was observed that would motivate its communicative transmission. These results demonstrate that young infants can recognize communicative interactions between third-party agents and possess an evolved capacity for communicative mind-reading that enables them to infer, even without language, what contextually relevant information has been transmitted between the agents.
Seeing behind the surface: Communicative demonstration boosts category disambiguation in 12-month-olds
In their first years, infants acquire an enormous amount of information about the objects in their environment. Although it is often unclear which of an object’s many characteristics should be prioritized in encoding, different types of object representations facilitate different types of generalizations. We tested the hypotheses that one-year-old infants distinctively represent familiar objects as exemplars of their kind, and that ostensive communication plays a role in determining kind membership for ambiguous objects. In the training phase of our experiment, infants watched movies in which an agent sorted objects from two categories (cups and plates) into two locations (left or right). Afterwards, different groups of infants saw either an ostensive or a non-ostensive demonstration by the agent revealing that a new object that looked like a plate could be transformed into a cup. A third group of infants received no demonstration regarding the new object. At test, infants were presented with the ambiguous object in its plate form, and we measured generalization by coding anticipatory looks to the plate or the cup side. Whereas infants looked equally often towards the two sides when the demonstration was non-ostensive, and more often to the plate side when there was no demonstration, they performed more anticipatory eye movements to the cup side when the demonstration was ostensive. Thus, the ostensive demonstration likely highlighted the hidden dispositional properties of the target object as kind-relevant, guiding infants’ categorization of the foldable cup as a cup despite its plate-like appearance. These results suggest that infants encode familiar objects as exemplars of their kind and that ostensive communication can play a crucial role in disambiguating the kind an object belongs to, even when this requires disregarding salient surface features.
Are all beliefs equal? Implicit belief attributions recruiting core brain regions of Theory of Mind
Humans possess efficient mechanisms for behaving adaptively in social contexts. They ascribe goals and beliefs to others and use these attributions to predict behaviour. Researchers have argued for two separate mental-state attribution systems: an implicit, automatic one involved in online interactions, and an explicit one used mainly in offline deliberation. However, the mechanisms underlying these systems and the types of beliefs represented in the implicit system remain unclear. Using neuroimaging methods, we show that the right temporo-parietal junction and the medial prefrontal cortex, brain regions consistently found to be involved in explicit mental-state reasoning, are also recruited by spontaneous belief tracking. Whereas the medial prefrontal cortex was more active when both the participant and another agent believed an object to be at a specific location, the right temporo-parietal junction was selectively activated while tracking another agent’s false beliefs about the presence, but not the absence, of objects. Although humans can explicitly attribute to a conspecific any belief they themselves can entertain, implicit belief tracking seems to be restricted to beliefs with specific contents. This content selectivity may be a crucial functional characteristic and signature property of implicit belief attribution.