Low integration of speech sounds with mouth movements likely contributes to the language acquisition difficulties that frequently characterize young autistic children. However, the existing empirical evidence either relies on complex verbal instructions or merely focuses on preferential gaze toward in-sync videos. The former method is clearly unsuited to young, minimally verbal, or nonverbal autistic children, while the latter has several biases that make the data difficult to interpret. We designed a Reinforced Preferential Gaze paradigm that makes it possible to test multimodal integration in young, nonverbal autistic children and overcomes several of the methodological challenges faced by previous studies. We show that autistic children have difficulty temporally binding the speech signal with the corresponding articulatory gestures. A condition with structurally similar nonsocial video stimuli suggests that atypical multimodal integration in autism is not limited to speech stimuli. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Language comprehenders can use syntactic cues to generate online predictions about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here, we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired, proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities.
These findings suggest that (a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users and (b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic). (PsycInfo Database Record (c) 2021 APA, all rights reserved).

What are the mental processes that allow us to understand the meaning of words? A large body of evidence suggests that when we process speech, we engage in perceptual simulation, whereby sensorimotor states are activated as a source of semantic information. But does the same process take place when words are expressed with the hands and perceived through the eyes? To date, it is not known whether perceptual simulation is also observed in sign languages, the manual-visual languages of deaf communities. Continuous flash suppression is a method that addresses this question by measuring the effect of language on detection sensitivity to images that are suppressed from awareness. In spoken languages, it has been reported that listening to a word (e.g., "bottle") activates visual features of an object (e.g., the shape of a bottle), and this in turn facilitates image detection. An interesting but untested question is whether the same process takes place when deaf signers see signs. We found that processing signs boosted the detection of congruent images, making otherwise invisible pictures visible. A boost of visual processing was observed only for signers but not for hearing nonsigners, suggesting that the penetration of the visual system through signs requires a fully fledged manual language. Iconicity did not modulate the effect of signs on detection, neither in signers nor in hearing nonsigners. This suggests that visual simulation during language processing occurs regardless of language modality (sign vs.
speech) or iconicity, pointing to a foundational role of simulation for language comprehension. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Billions of people around the world believe in vengeful gods who punish immoral behavior. These punitive religious beliefs may foster prosociality and contribute to large-scale cooperation, but little is known about how these beliefs emerge and why people adopt them in the first place. We present a cultural-psychological model suggesting that cultural tightness, the strictness of cultural norms and normative punishment, helps to catalyze punitive religious beliefs by increasing people's motivation to punish norm violators. Our model also suggests that tightness mediates the impact of ecological threat on punitive belief, explaining why punitive religious beliefs are most common in regions with high levels of ecological threat. Five multimethod studies support these predictions. Studies 1-3 focus on the effect of cultural tightness on punitive religious beliefs. Historical increases in cultural tightness precede and predict historical increases in punitive beliefs (Study 1), and both manipulating people's support for tightness (Study 2) and placing people in a simulated tight society (Study 3) increase punitive religious beliefs via the personal motivation to punish norm violators. Studies 4-5 focus on whether cultural tightness mediates the link between ecological threat and punitive religious beliefs. Cultural tightness helps explain why U.S. states with high ecological threat (e.g., natural hazards, scarcity) have the highest levels of punitive religious beliefs (Study 4) and why experimental manipulations of threat increase punitive religious beliefs (Study 5). Past research has shown how religion shapes culture, but our studies show how culture can shape religion. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Humans are highly social.
We spend most of our time interacting with the social world, and we spend most of our thoughts thinking about the social world. Are we social beings by default, or is our sociality a response to the social world? On the one hand, fundamental social needs may drive social behavior. According to this account, social thoughts fulfill social needs when the environment is insufficiently social. On the other hand, spontaneous thoughts may process incoming information. According to this account, social thoughts reflect the social information in the environment. To arbitrate between these possibilities, we assessed the content of spontaneous thought during mind wandering in three social contexts: solitude (Study 1), social presence (Study 2), and social interaction (Study 3). Additionally, in Study 1, we used functional neuroimaging to measure neural activity while participants considered social and nonsocial targets. Results consistently showed that spontaneous thought reflects the sociality of the world around us: solitude decreased spontaneous social thought and decreased neural activity in the mentalizing network when thinking about a close friend.