This semantic interference effect was associated with increased activation of the inferior frontal gyrus. When the two scenes were semantically congruent, the dissimilarity of their physical properties impaired the categorization of the central scene. This effect was associated with increased activation in occipito-temporal areas. In line with the hypothesis that predictive mechanisms are involved in visual recognition, the results suggest that the semantic and physical properties of information from peripheral vision are automatically used to generate predictions that guide the processing of the signal in central vision.

Previous work suggests that perception of an object automatically facilitates actions related to grasping and manipulating it. Recently, the notion of automaticity has been challenged by behavioral studies suggesting that dangerous objects elicit aversive affordances that interfere with the encoding of an object's motor properties; however, related EEG studies have provided little support for these claims. We sought EEG evidence for the operation of an inhibitory mechanism that interferes with the motor encoding of dangerous objects, and we investigated whether such a mechanism is modulated by the perceived distance of an object and by the goal of a given task. EEG was recorded from 24 participants who passively perceived dangerous and neutral objects in their peripersonal, boundary, or extrapersonal space and performed either a reachability judgment task or a categorization task. Our results showed that dangerous and reachable objects drew greater attention, as reflected in the visual P1 potential. Crucially, a frontal N2 potential, associated with motor inhibition, was larger for dangerous objects only when participants performed the reachability judgment task. Furthermore, a larger parietal P3b potential for dangerous objects indicated greater difficulty in linking a dangerous object to the appropriate response, especially when it was located in the participants' extrapersonal space. Taken together, our results show that perception of dangerous objects elicits aversive affordances in a task-dependent way and provide evidence for a neural mechanism that codes the affordances of dangerous objects not automatically, but on the basis of contextual information.

Rapid visual perception is often viewed as a bottom-up process, and category-preferred neural regions are often characterized as automatic, default processing mechanisms for visual inputs of their categorical preference. To explore the sensitivity of such regions to top-down information, we examined three scene-preferring brain regions: the occipital place area (OPA), the parahippocampal place area (PPA), and the retrosplenial complex (RSC). We tested whether the processing of outdoor scenes is influenced by the functional context in which they are seen. Context was manipulated by presenting real-world landscape images as if being viewed through a window or within a picture frame; these manipulations do not affect scene content but do affect one's functional knowledge regarding the scene. This manipulation influenced neural scene processing as measured by fMRI: the OPA and the PPA exhibited greater activity when participants viewed images as if through a window than within a picture frame, whereas the RSC did not show this difference. In a separate behavioral experiment, functional context affected scene memory in predictable directions (boundary extension).
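A minimal sketch of the region-by-context contrast just reported is a paired t-test on per-participant mean ROI responses for the two viewing contexts. The function name, array shapes, and simulated effect sizes below are hypothetical illustrations, not the authors' actual fMRI pipeline.

```python
import numpy as np
from scipy import stats

def context_contrast(roi_betas):
    """roi_betas: dict mapping ROI name -> (n_subjects, 2) array of mean
    beta estimates; column 0 = window condition, column 1 = frame condition."""
    results = {}
    for roi, betas in roi_betas.items():
        t, p = stats.ttest_rel(betas[:, 0], betas[:, 1])  # window vs. frame
        results[roi] = (t, p)
    return results

# Simulated data in the spirit of the result: OPA/PPA show a window > frame
# effect, RSC does not (all numbers are made up for illustration).
rng = np.random.default_rng(1)
n = 20
betas = {
    "OPA": np.column_stack([rng.normal(1.2, 0.3, n), rng.normal(1.0, 0.3, n)]),
    "PPA": np.column_stack([rng.normal(1.1, 0.3, n), rng.normal(0.9, 0.3, n)]),
    "RSC": np.column_stack([rng.normal(1.0, 0.3, n), rng.normal(1.0, 0.3, n)]),
}
for roi, (t, p) in context_contrast(betas).items():
    print(f"{roi}: t({n - 1}) = {t:.2f}, p = {p:.3f}")
```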
Our interpretation is that the window context denotes three-dimensionality, rendering the perceptual experience of viewing the landscapes more realistic, whereas the frame context denotes a two-dimensional image. As such, the more spatially biased scene representations in the OPA and the PPA are influenced by differences in top-down, perceptual expectations generated from context. In contrast, the more semantically biased scene representations in the RSC are likely to be less affected by top-down signals that carry information about the physical layout of a scene.

Almost all models of visual working memory (the cognitive system that holds visual information in an active state) assume that it has a fixed capacity. Some models propose a limit of three to four objects, whereas others propose a fixed pool of resources for each basic visual feature. Recent findings, however, suggest that memory performance is improved for real-world objects. What supports these increases in capacity? Here, we tested whether the meaningfulness of a stimulus alone influences working memory capacity while controlling for visual complexity and directly assessing the active component of working memory using EEG. Participants remembered ambiguous stimuli that could be perceived either as faces or as meaningless shapes. Participants showed higher performance and increased neural delay activity when the memory display consisted of more meaningful stimuli. Critically, by asking participants whether they perceived the stimuli as faces or not, we also show that these increases in visual working memory capacity and recruitment of additional neural resources are due to the subjective perception of the stimulus and thus cannot be driven by its physical properties. Broadly, this suggests that the capacity for active storage in visual working memory is not fixed; rather, more meaningful stimuli recruit additional working memory resources, allowing them to be better remembered.

Rhythmic neural activity synchronizes with certain rhythmic behaviors, such as breathing, sniffing, saccades, and speech. The extent to which neural oscillations synchronize with higher-level and more complex behaviors is largely unknown. Here, we investigated electrophysiological synchronization with keyboard typing, an omnipresent behavior that an enormous number of people engage in daily. Keyboard typing is rhythmic, with frequency characteristics comparable to those of the neural oscillatory dynamics associated with cognitive control, notably midfrontal theta (4-7 Hz) oscillations. We tested the hypothesis that synchronization occurs between typing and midfrontal theta and breaks down when errors are committed. Thirty healthy participants typed words and sentences on a keyboard without visual feedback while EEG was recorded. Typing rhythmicity was investigated through inter-keystroke interval analyses and a kernel density estimation method. We used a multivariate spatial filtering technique to investigate frequency-specific synchronization between typing and neuronal oscillations.
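A minimal sketch of the inter-keystroke interval (IKI) and kernel density estimation steps just described, assuming keystroke timestamps in seconds. The function name, cleaning thresholds, and simulated data are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.stats import gaussian_kde

def typing_rhythm_frequency(keystroke_times):
    """Estimate the dominant typing rate (Hz) from keystroke timestamps (s)."""
    ikis = np.diff(np.sort(keystroke_times))      # inter-keystroke intervals
    ikis = ikis[(ikis > 0.02) & (ikis < 1.0)]     # drop key rollovers and pauses (assumed thresholds)
    kde = gaussian_kde(ikis)                      # smooth the IKI distribution
    grid = np.linspace(0.02, 1.0, 1000)
    peak_iki = grid[np.argmax(kde(grid))]         # most common interval
    return 1.0 / peak_iki                         # convert to keystrokes per second

# Example: a typist pressing roughly 6 keys per second falls in the theta band (4-7 Hz).
rng = np.random.default_rng(0)
times = np.cumsum(rng.normal(1 / 6, 0.02, size=200))
freq = typing_rhythm_frequency(times)
print(f"dominant typing frequency: {freq:.1f} Hz, theta-band: {4 <= freq <= 7}")
```

A full analysis would then relate these keystroke times to narrowband EEG activity via the multivariate spatial filter mentioned above; that step is omitted here.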