Perception

See also: Smell, Hearing, Vision

A small study has tested Donald Hebb's famous hypothesis that visual imagery results from the reactivation of the neural activity associated with viewing images, and that re-enacting the original eye-movement patterns supports both the imagery and the neural reactivation.

In the study, 16 young adults (aged 20-28) were shown a set of 14 distinct images for a few seconds each. They were asked to remember as many details of each picture as possible so they could visualize it later. They were then cued to mentally visualize the images within an empty rectangular box shown on the screen.

Brain imaging and eye-tracking technology revealed that the same pattern of eye movements and brain activation occurred when the image was learned and when it was recalled. During recall, however, the patterns were compressed (which is consistent with our experience of remembering, where memories take a much shorter time than the original experiences).

Our understanding of memory is that it’s constructive — when we remember, we reconstruct the memory from separate bits of information in our database. This finding suggests that eye movements might be like a blueprint to help the brain piece together the bits in the right way.

https://www.eurekalert.org/pub_releases/2018-02/bcfg-cga021318.php

I've reported before on the idea that the drop in working memory capacity commonly seen in old age is related to the equally typical increase in distractibility. Studies of brain activity have also indicated that lower WMC is correlated with greater storage of distractor information. So those with higher WMC, it's thought, are better at filtering out distraction and focusing only on the pertinent information. Older adults may show a reduced WMC, therefore, because their ability to ignore distraction and irrelevancies has declined.

Why does that happen?

A new, large-scale study using a smartphone game suggests that the root cause is a change in the way we hold items in working memory.

The study involved 29,631 people aged 18-69, who played a smartphone game in which they had to remember the positions of an increasing number of red circles. Yellow circles, which had to be ignored, could also appear — either at the same time as the red circles, or after them. Data from this game revealed both WMC (how many red circle locations the individual could remember), and distractibility (how many red circle locations they could remember in the face of irrelevant yellow circles).

Now this game isn't simply a way of measuring WMC. It enables us to make an interesting distinction based on the timing of the distraction. If the yellow circles appear at the same time as the red ones, they provide distraction while you are trying to encode the information. If they appear afterward, the distraction occurs while you are trying to maintain the information in working memory.

Now it would seem commonsensical that distraction at the time of encoding must be the main problem, but the fascinating finding of this study is that it was distraction during the delay (while the information is being maintained in working memory) that was the greater problem. And it was this distraction that became more and more marked with increasing age.

The study is a follow-up to a smaller 2014 study that included two experiments: a lab experiment involving 21 young adults, and data from the same smartphone game involving only the younger cohort (18-29 years; 3247 participants).

This study demonstrated that distraction during encoding and distraction during the delay were independent contributory factors to WMC, suggesting that separate mechanisms are involved in filtering out distraction at encoding and during maintenance.

Interestingly, analysis of the data from the smartphone game did indicate some correlation between the two in that context. One reason may be that participants in the smartphone game were exposed to higher load trials (the lab study kept WM load constant); another might be that they were in more distracting environments.

While in general researchers have till now assumed that the two processes are not distinct, it has been theorized that distractor filtering at encoding may involve a 'selective gating mechanism', while filtering during WM maintenance may involve a shutting down of perception. The former has been linked to a gating mechanism in the striatum in the basal ganglia, while the latter has been linked to an increase in alpha waves in the frontal cortex, specifically, the left middle frontal gyrus. The dorsolateral prefrontal cortex may also be involved in distractor filtering at encoding.

To return to the more recent study:

  • there was a significant decrease in WMC with increasing age in all conditions (no distraction; encoding distraction; delay distraction)
  • for older adults, the decrease in WMC was greatest in the delay distraction condition
  • when 'distraction cost' was calculated as ((ND score − ED or DD score) / ND score) × 100 (see the worked example after this list), there was a significant correlation between delay distraction cost and age, but not between encoding distraction cost and age
  • for older adults, performance in the encoding distraction condition was better predicted by performance in the no distraction condition than it was among the younger groups
  • this correlation was significantly different between the 30-39 age group and the 40-49 age group, between the 40s and the 50s, and between the 50s and the 60s — showing that this is a progressive change
  • older adults with a higher delay distraction cost (i.e., those more affected by distractors during the delay) also showed a significantly greater correlation between their no-distraction performance and encoding-distraction performance.
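
To make the 'distraction cost' measure concrete, here is a minimal worked example in Python. The formula is the one given above; the scores themselves are hypothetical, not the study's data:

    def distraction_cost(nd_score, distraction_score):
        """((ND - ED or DD) / ND) x 100: the percentage drop in WM score caused by distraction."""
        return (nd_score - distraction_score) / nd_score * 100

    nd, ed, dd = 4.0, 3.4, 2.8   # hypothetical mean scores (red-circle locations remembered)
    print(round(distraction_cost(nd, ed), 1))  # encoding distraction cost: 15.0
    print(round(distraction_cost(nd, dd), 1))  # delay distraction cost: 30.0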

All of this suggests that older adults are focusing more attention during encoding even when there is no distraction, and that they are doing so to compensate for their reduced ability to maintain information in working memory.

This suggests several approaches to improving older adults' ability to cope:

  • use perceptual discrimination training to help improve WMC
  • make working memory training more about learning to ignore certain types of distraction
  • reduce distraction — modify daily tasks to make them more "older adult friendly"
  • (my own speculation) use meditation training to improve frontal alpha rhythms.

You can participate in the game yourself, at http://thegreatbrainexperiment.com/

http://medicalxpress.com/news/2015-05-smartphone-reveals-older.html

McNab, F., Zeidman, P., Rutledge, R. B., Smittenaar, P., Brown, H. R., Adams, R. A., et al. (2015). Age-related changes in working memory and the ability to ignore distraction. Proceedings of the National Academy of Sciences, 112(20), 6515-6518.

McNab, F., & Dolan, R. J. (2014). Dissociating distractor-filtering at encoding and during maintenance. Journal of Experimental Psychology. Human Perception and Performance, 40(3), 960–7. doi:10.1037/a0036013

A new study has found that errors in perceptual decisions occurred only when there was confused sensory input, not because of any ‘noise’ or randomness in the cognitive processing. The finding, if replicated across broader contexts, would change some of our fundamental assumptions about how the brain works.

The study unusually involved both humans and rats — four young adults and 19 rats — who listened to streams of randomly timed clicks coming into both the left ear and the right ear. After listening to a stream, the subjects had to choose the side from which more clicks originated.

The errors made, by both humans and rats, invariably occurred when two clicks overlapped. In other words, and against previous assumptions, the errors did not arise from any ‘noise’ in the brain's processing, but only when there was noise in the sensory input.

The researchers report ruling out alternative sources of confusion, such as “noise associated with holding the stimulus in mind, or memory noise, and noise associated with a bias toward one alternative or the other.”

However, before concluding that the noise which is the major source of variability and errors in more conceptual decision-making likewise stems only from noise in the incoming input (in this case external information), I would like to see the research replicated in a broader range of scenarios. Nevertheless, it’s an intriguing finding, and if indeed, as the researchers say, “the internal mental process was perfectly noiseless. All of the imperfections came from noise in the sensory processes”, then the ramifications are quite extensive.

The findings do add weight to recent evidence that a significant cause of age-related cognitive decline is sensory loss.

http://www.futurity.org/science-technology/dont-blame-your-brain-for-that-bad-decision/

Brunton, B. W., Botvinick, M. M., & Brody, C. D. (2013). Rats and humans can optimally accumulate evidence for decision-making. Science, 340(6128), 95-98.

More evidence that even an 8-week meditation training program can have measurable effects on the brain comes from an imaging study. Moreover, the type of meditation makes a difference to how the brain changes.

The study involved 36 participants from three different 8-week courses: mindful meditation, compassion meditation, and health education (control group). The courses involved only two hours of class time each week, with meditation students encouraged to meditate for an average of 20 minutes a day outside class. There was a great deal of individual variability in the total amount of meditation done by the end of the course (210-1491 minutes for the mindful attention training course; 190-905 minutes for the compassion training course).

Participants’ brains were scanned three weeks before the courses began, and three weeks after the end. During each brain scan, the volunteers viewed 108 images of people in situations that were either emotionally positive, negative or neutral.

In the mindful attention group, the second brain scan showed a decrease in activation in the right amygdala in response to all images, supporting the idea that meditation can improve emotional stability and response to stress. In the compassion meditation group, right amygdala activity also decreased in response to positive or neutral images, but, among those who reported practicing compassion meditation most frequently, right amygdala activity tended to increase in response to negative images. No significant changes were seen in the control group or in the left amygdala of any participant.

The findings support the idea that meditation can be effective in improving emotional control, and that compassion meditation can indeed increase compassionate feelings. Increased amygdala activation was also correlated with decreased depression scores in the compassion meditation group, which suggests that having more compassion towards others may also be beneficial for oneself.

The findings also support the idea that the changes brought about by meditation endure beyond the meditative state, and that the changes can start to occur quite quickly.

These findings are all consistent with other recent research.

One point is worth emphasizing, in the light of the difficulty in developing a training program that improves working memory rather than simply improving the task being practiced. These findings suggest that, unlike most cognitive training programs, meditation training might produce learning that is process-specific rather than stimulus- or task-specific, giving it perhaps a wider generality than most cognitive training.

We know that emotion affects memory. We know that attention affects perception (see, e.g., Visual perception heightened by meditation training; How mindset can improve vision). Now a new study ties it all together. The study shows that emotionally arousing experiences affect how well we see them, and this in turn affects how vividly we later recall them.

The study used images of positively and negatively arousing scenes and neutral scenes, which were overlaid with varying amounts of “visual noise” (like the ‘snow’ we used to see on old televisions). College students were asked to rate the amount of noise on each picture, relative to a specific image they used as a standard. There were 25 pictures in each category, and three levels of noise (less than standard, equal to standard, and more than standard).

Different groups explored different parameters: color; gray-scale; less noise (10%, 15%, 20% as compared to 35%, 45%, 55%); single exposure (each picture was only presented once, at one of the noise levels).
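
As a rough illustration of the noise-overlay technique (a sketch of my own, not the authors' actual stimulus-generation method; the function name and image are hypothetical), degrading an image by a given percentage of visual noise might look like this:

    import numpy as np

    def add_visual_noise(image, noise_level, rng=None):
        """Replace a given proportion of pixels (e.g., 0.45 = 45%) with random values."""
        if rng is None:
            rng = np.random.default_rng()
        noisy = image.copy()
        mask = rng.random(image.shape[:2]) < noise_level
        noisy[mask] = rng.integers(0, 256, size=(int(mask.sum()),) + image.shape[2:], dtype=image.dtype)
        return noisy

    # Hypothetical grayscale image, degraded at three different noise levels.
    image = np.zeros((64, 64), dtype=np.uint8)
    degraded = [add_visual_noise(image, level) for level in (0.35, 0.45, 0.55)]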

Regardless of the actual amount of noise, emotionally arousing pictures were consistently rated as significantly less noisy than neutral pictures, indicating that people were seeing them more clearly. This was true in all conditions.

Eye-tracking analysis ruled out the idea that people directed their attention differently for emotionally arousing images, but did show that more eye fixations were associated both with less noisy images and emotionally arousing ones. In other words, people were viewing emotionally important images as if they were less noisy.

One group of 22 students were given a 45-minute spatial working memory task after seeing the images, and then asked to write down all the details they could remember about the pictures they remembered seeing. The amount of detail they recalled was taken to be an indirect measure of vividness.

A second group of 27 students were called back after a week for a recognition test. They were shown 36 new images mixed in with the original 75 images, and asked to rate them as new, familiar, or recollected. They were also asked to rate the vividness of their recollection.

Although, overall, emotionally arousing pictures were not more likely to be remembered than neutral pictures, both experiments found that pictures originally seen as more vivid (less noise) were remembered more vividly and in more detail.

Brain scans from 31 students revealed that the amygdala was more active when looking at images rated as vivid, and this in turn increased activity in the visual cortex and in the posterior insula (which integrates sensations from the body). This suggests that the increased perceptual vividness is not simply a visual phenomenon, but part of a wider sensory activation.

There was another neural response to perceptual vividness: activity in the dorsolateral prefrontal cortex and the posterior parietal cortex was negatively correlated with vividness. This suggests that emotion is not simply increasing our attentional focus, it is instead changing it by reducing effortful attentional and executive processes in favor of more perceptual ones. This, perhaps, gives emotional memories their different ‘flavor’ compared to more neutral memories.

These findings clearly need more exploration before we know exactly what they mean, but the main finding from the study is that the vividness with which we recall some emotional experiences is rooted in the vividness with which we originally perceived them.

The study highlights how emotion can sharpen our attention, building on previous findings that emotional events are more easily detected when visibility is difficult, or attentional demands are high. It is also not inconsistent with a study I reported on last year, which found some information needs no repetition to be remembered because the amygdala decrees it of importance.

I should add, however, that the perceptual effect is not the whole story — the current study found that, although perceptual vividness is part of the reason for memories that are vividly remembered, emotional importance makes its own, independent, contribution. This contribution may occur after the event.

It’s suggested that individual differences in these reactions to emotionally enhanced vividness may underlie an individual’s vulnerability to post-traumatic stress disorder.

A standard test of how we perceive local vs global features of visual objects uses Navon figures — large letters made up of smaller ones (see below for an example). As in the Stroop test, where ink colors and color words disagree (e.g., the word RED printed in blue ink), the viewer can focus either on the large letter or the smaller ones. When the viewer is faster at seeing the larger letter, they are said to be showing global precedence; when they’re faster at seeing the component letters, they are said to be showing local precedence. Typically, the greater the number of component letters, the easier it is to see the larger letter. This is consistent with the Gestalt principles of proximity and continuity — elements that are close together and form smooth lines tend to be perceptually grouped together and seen as a unit (the greater the number of component letters, the closer together they are, and the smoother the line).

In previous research, older adults have often demonstrated local precedence rather than global, although the results have been inconsistent. One earlier study found that older adults performed poorly when asked to report in which direction (horizontal or vertical) dots formed smooth lines, suggesting an age-related decline in perceptual grouping. The present study therefore investigated whether this decline was behind the decrease in global precedence.

In the study 20 young men (average age 22) and 20 older men (average age 57) were shown Navon figures and asked whether the target letter formed the large letter or the smaller letters (e.g., “Is the big or the small letter an E?”). The number of component letters was systematically varied across five quantities. Under such circumstances it is expected that at a certain level of letter density everyone will switch to global precedence, but if a person is impaired at perceptual grouping, this will occur at a higher level of density.

The young men were, unsurprisingly, markedly faster than the older men in their responses. They were also significantly faster at responding when the target was the global letter, compared to when it was the local letter (i.e. they showed global precedence). The older adults, on the other hand, had equal reaction times to global and local targets. Moreover, they showed no improvement as the letter-density increased (unlike the young men).

It is noteworthy that the older men, while they failed to show global precedence, also failed to show local precedence (remember that results are based on group averages; this suggests that the group was evenly balanced between those showing local precedence and those showing global precedence). Interestingly, previous research has suggested that women are more likely to show local precedence.

The link between perceptual grouping and global precedence is further supported by individual differences — older men who were insensitive to changes in letter-density were almost exclusively the ones that showed persistent local precedence. Indeed, increases in letter-density were sometimes counter-productive for these men, leading to even slower reaction times for global targets. This may be the result of greater distractor interference, to which older adults are more vulnerable, and to which this sub-group of older men may have been especially susceptible.

Example of a Navon figure (a large E made up of small Fs):

FFFFFF
F
FFFFFF
F
FFFFFF
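
For illustration only, here is a minimal Python sketch (my own, not from the study; the letter shapes are hypothetical) showing how a Navon-style figure can be generated by drawing a large letter's outline out of copies of a smaller letter:

    LARGE_LETTERS = {
        "E": ["XXXXXX",
              "X",
              "XXXXXX",
              "X",
              "XXXXXX"],
    }

    def navon_figure(large, small):
        """Render the large letter's shape using copies of the small letter."""
        return "\n".join(row.replace("X", small) for row in LARGE_LETTERS[large])

    print(navon_figure("E", "F"))   # reproduces the figure above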

Previous research has found that practice improves your ability to distinguish visual images that vary along one dimension, and that this learning is specific to the visual images you train on and quite durable. A new study extends the finding to more natural stimuli that vary on multiple dimensions.

In the small study, 9 participants learned to identify faces and 6 participants learned to identify “textures” (noise patterns) over the course of two hour-long sessions of 840 trials (consecutive days). Faces were cropped to show only internal features and only shown briefly, so this was not a particularly easy task. Participants were then tested over a year later (range: 10-18 months; average 13 and 15 months, respectively).

On the test, participants were shown both images from training and new images that closely resembled them. While accuracy rates were high for the original images, they plummeted for the very similar new images, indicating that, despite the length of time since training, participants had retained quite specific memories of the original images rather than just a general impression of them.

Although practice improved performance across nearly all items and for all people, there were significant differences between both participants and individual stimuli. More interestingly, individual differences (in both stimuli and people) were stable across sessions (e.g., if you were third-best on day 1, you were probably third-best on day 2 too, even though you were doing better). In other words, learning didn’t produce any qualitative changes in the representations of different items — practice had nearly the same effect on all; differences were rooted in initial difficulty of discriminating the pattern.

However, while it’s true that individual differences were stable, that doesn’t mean that every person improved their performance the exact same amount with the same amount of practice. Interestingly (and this is just from my eye-ball examination of the graphs), it looks like there was more individual variation among the group looking at noise patterns. This isn’t surprising. We all have a lot of experience discriminating faces; we’re all experts. This isn’t the case with the textures. For these, people had to ‘catch on’ to the features that were useful in discriminating patterns. You would expect more variability between people in how long it takes to work out a strategy, and how good that strategy is. Interestingly, three of the six people in the texture group actually performed better on the test than they had done on the second day of training, over a year ago. For the other three, and all nine of those in the face group, test performance was worse than it had been on the second day of training (but decidedly better than the first day).

The durability and specificity of this perceptual learning, the researchers point out, resembles that found in implicit memory and some types of sensory adaptation. It also indicates that such perceptual learning is not limited, as has been thought, to changes early in the visual pathway, but produces changes in a wider network of cortical neurons, particularly in the inferior temporal cortex.

The second, unrelated, study also bears on this issue of specificity.

We look at a scene and extract the general features — a crowd of people, violently riotous or riotously happy? — or we look at a scene and extract specific features that over time we use to build patterns about what goes with what. The first is called “statistical summary perception”; the second “statistical learning”.

A study designed to disentangle these two processes found that you can only do one or the other; you can’t derive both types of information at the same time. Thus, when people were shown grids of lines slanted to varying degrees, they could either assess whether the lines were generally leaning to the left or right, or they could learn to recognize pairs of lines that had been hidden repeatedly in the grids — but they couldn’t do both.

The fact that each of these tasks interfered with the other suggests that the two processes are fundamentally related.

Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn while volunteers viewed brief flashes of object pairs, half of which were shown interacting with each other, and decided whether the glimpsed objects matched a presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical regardless of whether participants were looking at a color photo or a simple line drawing. That is, researchers could tell, with a fair amount of success, what category of scene the participant was looking at just from the pattern of brain activity in the ventral visual cortex, whichever format the picture was in. When the decoding made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very, very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — that of the reliability of the sensory information. Optimally, an observer would take into consideration the varying reliability of the items, giving the information different weights as a result of that perceived reliability. That weighted information would then be combined according to a specific integration rule. That had been calculated as the optimal process, and the performance of the participants matched that expectation.

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations, while taking into account the reliability of the different pieces of information, and we do this through the integration of information coming from different groups of neurons, each group of which is responding to different bits of information.
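
To give a concrete flavor of what reliability-weighted integration means, here is a deliberately simplified sketch of my own (not the study's actual model or its neural implementation): noisy evidence from several locations is combined, with each location weighted by how reliable its signal is. The numbers and the decision criterion are hypothetical:

    import numpy as np

    def combined_evidence(evidence, reliability):
        """Weight each location's evidence by its reliability (e.g., how clear or
        high-contrast that item was), then sum across locations."""
        return float(np.sum(np.asarray(reliability) * np.asarray(evidence)))

    # Hypothetical numbers: four locations; positive evidence means "looks like the target".
    evidence = [0.2, -0.1, 0.9, 0.1]
    reliability = [0.5, 0.5, 2.0, 0.5]   # the third item was shown at high contrast
    criterion = 0.5                      # arbitrary decision criterion
    print("target present" if combined_evidence(evidence, reliability) > criterion else "target absent")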

Another recent study into visual search has found that, when people are preparing themselves to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 to more specific imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards. Prospective mates; food; liquids.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli that have been previously associated with reward continue to capture attention regardless of their relevance to the task at hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
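
A minimal sketch of this general approach (my illustration under simplifying assumptions, not the authors' actual algorithm, features, or data) would be to fit a regression model from simple image features to the measured memorability scores, then predict scores for images the model has never seen:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    # Hypothetical data: one row of features per image (e.g., color statistics,
    # edge-distribution measures), plus the memorability score measured in the
    # repeat-detection experiment.
    rng = np.random.default_rng(0)
    features = rng.random((1000, 20))
    memorability = rng.random(1000)

    X_train, X_test, y_train, y_test = train_test_split(features, memorability, test_size=0.2, random_state=0)
    model = Ridge(alpha=1.0).fit(X_train, y_train)
    predicted = model.predict(X_test)   # predicted memorability for unseen images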

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

Kim, J. G., Biederman, I., & Juan, C-H. (2011). The benefit of object interactions arises in the lateral occipital cortex independent of attentional modulation from the intraparietal sulcus: A transcranial magnetic stimulation study. The Journal of Neuroscience, 31(22), 8320-8324.

Walther, D. B., Chai, B., Caddigan, E., Beck, D. M., & Fei-Fei, L. (2011). Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences, 108(23), 9661-9666.

Ma, W. J., Navalpakkam, V., Beck, J. M., van den Berg, R., & Pouget, A. (2011). Behavior and neural basis of near-optimal visual search. Nature Neuroscience, 14(6), 783-790.

Peelen, M. V., & Kastner, S. (2011). A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences, 108(29), 12125-12130.

Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367-10371.

Isola, P., Xiao, J., Oliva, A., & Torralba, A. (2011). What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.

 

As I’ve discussed on many occasions, a critical part of attention (and working memory capacity) is being able to ignore distraction. There has been growing evidence that mindfulness meditation training helps develop attentional control. Now a new study helps fill out the picture of why it might do so.

The alpha rhythm is particularly active in neurons that process sensory information. When you expect a touch, sight or sound, the focusing of attention toward the expected stimulus induces a lower alpha wave height in neurons that would handle the expected sensation, making them more receptive to that information. At the same time the height of the alpha wave in neurons that would handle irrelevant or distracting information increases, making those cells less receptive to that information. In other words, alpha rhythm helps screen out distractions.

In this study, six participants who completed an eight-week mindfulness meditation program (MBSR) were found to generate larger alpha waves, and generate them faster, than the six in the control group. Alpha wave activity in the somatosensory cortex was measured while participants directed their attention to either their left hand or foot. This was done on three occasions: before training, at three weeks of the program, and after the program.

The MBSR program involves an initial two-and-a-half-hour training session, followed by daily 45-minute meditation sessions guided by a CD recording. The program focuses on training participants first to pay close attention to body sensations, then to focus on sensations in a specific body area, and then to disengage and shift the focus to another body area.

Apart from helping us understand why mindfulness meditation training seems to improve attention, the findings may also explain why this meditation can help sufferers of chronic pain.

Most memory research has concerned itself with learning over time, but many memories, of course, become fixed in our mind after only one experience. The mechanism by which we acquire knowledge from single events is not well understood, but a new study sheds some light on it.

The study involved participants being presented with images degraded almost beyond recognition. After a few moments, the original image was revealed, generating an “aha!” type moment. Insight is an experience that is frequently remembered well after a single occurrence. Participants repeated the exercise with dozens of different images.

Memory for these images was tested a week later, when participants were again shown the degraded images, and asked to recall details of the actual image.

Around half the images were remembered. But what’s intriguing is that the initial learning experience took place in a brain scanner, and to the researchers’ surprise, one of the highly active areas during the moment of insight was the amygdala. Moreover, high activity in the amygdala predicted that those images would be remembered a week later.

It seems the more we learn about the amygdala, the further its involvement extends. In this case, it’s suggested that the amygdala signals to other parts of the brain that an event is significant. In other words, it gives a value judgment, decreeing whether an event is worthy of being remembered. Presumably the greater the value, the more effort the brain puts into consolidating the information.

It does not appear, from the images used, that those associated with high activity in the amygdala were more ‘emotional’ than the other images.

If our brains are full of clusters of neurons resolutely only responding to specific features (as suggested in my earlier report), how do we bring it all together, and how do we switch from one point of interest to another? A new study using resting state data from 58 healthy adolescents and young adults has found that the intraparietal sulcus, situated at the intersection of visual, somatosensory, and auditory association cortices and known to be a key area for processing attention, contains a miniature map of all the things we can pay attention to (visual, auditory, motor stimuli etc).

Moreover, this map is copied in at least 13 other places in the brain, all of which are connected to the intraparietal sulcus. Each copy appears to do something different with the information. For instance, one map processes eye movements while another processes analytical information. This map of the world may be a fundamental building block for how information is represented in the brain.

There were also distinct clusters within the intraparietal sulcus that showed different levels of connectivity to auditory, visual, somatosensory, and default mode networks, suggesting they are specialized for different sensory modalities.

The findings add to our understanding of how we can shift our attention so precisely, and may eventually help us devise ways of treating disorders where attention processing is off, such as autism, attention deficit disorder, and schizophrenia.

Anderson, J. S., Ferguson, M. A., Lopez-Larson, M., & Yurgelun-Todd, D. (2010). Topographic maps of multisensory attention. Proceedings of the National Academy of Sciences, 107(46), 20110-20114.

Following on from earlier studies that found individual neurons were associated with very specific memories (such as a particular person), new research has shown that we can actually regulate the activity of specific neurons, increasing the firing rate of some while decreasing the rate of others.

The study involved 12 patients implanted with deep electrodes for intractable epilepsy. On the basis of each individual’s interests, four images were selected for each patient. Each of these images was associated with the firing of specific neurons in the medial temporal lobe. The firing of these neurons was hooked up to a computer, allowing the patients to make their particular images appear by thinking of them. When another image appeared on top of the image as a distraction, creating a composite image, patients were asked to focus on their particular image, brightening the target image while the distractor image faded. The patients were successful 70% of the time in brightening their target image. This was primarily associated with increased firing of the specific neurons associated with that image.

I should emphasize that the use of a composite image meant that the participants had to rely on a mental representation rather than the sensory stimuli, at least initially. Moreover, when the feedback given was fake — that is, the patients’ efforts were no longer linked to the behavior of the image on the screen — success rates fell dramatically, demonstrating that their success was due to a conscious, directed action.
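
Purely as an illustration of this kind of closed-loop feedback (a hypothetical sketch of my own; the study's actual implementation is not described here, and the function, rates and gain below are invented), the on-screen mix could be driven by the relative firing rates of the neurons associated with the target and distractor images:

    def update_opacity(target_rate, distractor_rate, opacity, gain=0.1):
        """Shift the target image's opacity toward whichever image's neurons are firing more."""
        total = max(target_rate + distractor_rate, 1e-9)
        drive = (target_rate - distractor_rate) / total
        return min(1.0, max(0.0, opacity + gain * drive))

    # Hypothetical firing rates (spikes/sec): as the patient concentrates on the
    # target image, its neurons fire more and the target gradually brightens.
    opacity = 0.5
    for target_rate, distractor_rate in [(20, 15), (30, 12), (45, 10)]:
        opacity = update_opacity(target_rate, distractor_rate, opacity)
        print(round(opacity, 3))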

Different patients used different strategies to focus their attention. While some simply thought of the picture, others repeated the name of the image out loud or focused their gaze on a particular aspect of the image.

Resolving the competition of multiple internal and external stimuli is a process which involves a number of different levels and regions, but these findings help us understand at least some of the process that is under our conscious control. It would be interesting to know more about the relative effectiveness of the different strategies people used, but this was not the focus of the study. It would also be very interesting to compare effectiveness at this task across age, but of course this procedure is invasive and can only be used in special cases.

The study offers hope for building better brain-machine interfaces.

A study involving 110 toddlers (aged 14-42 months), of whom 37 were diagnosed with an autism spectrum disorder and 22 with a developmental delay, compared their behavior when watching a 1-minute movie that showed moving geometric patterns (a standard screen saver) on one side of a video monitor, and children in high-action activities, such as dancing or doing yoga, on the other.

It was found that only one of the 51 typically-developing toddlers preferred the shapes, but 40% of the ASD toddlers did, as well as 9% of the developmentally delayed toddlers. Moreover, all those who spent over 69% of the time focusing on the moving shapes were those with ASD.

Additionally, those with ASD who preferred the geometric images also showed a particular pattern of saccades (eye movements) when viewing the images — a reduced number of saccades, demonstrated in a fixed stare. It’s suggested that a preference for moving geometric patterns combined with lengthy absorption in such images, might be an early identifier of autism. Such behavior should be taken as a signal to look for other warning signs, such as reduced enjoyment during back-and-forth games like peek-a-boo; an unusual tone of voice; failure to point at or bring objects to show; and failure to respond to their name.

Pierce, K., Conant, D., Hazin, R., Stoner, R., & Desmond, J. (2010). Preference for geometric patterns early in life as a risk factor for autism. Archives of General Psychiatry, advance online publication (archgenpsychiatry.2010.113).

Children with autism often focus intently on a single activity or feature of their environment. A study involving 17 autistic children (6-16 years) and 17 controls has compared brain activity as they watched a silent video of their choice while tones and vibrations were presented, separately and simultaneously.

A simple stimulus takes about 20 milliseconds to arrive in the brain. When information from multiple senses registers at the same time, integration takes about 100 to 200 milliseconds in normally developing children. But those with autism took an average of 310 milliseconds to integrate the noise and vibration when they occurred together. The children with autism also showed weaker signal strength, signified by lower amplitude brainwaves.

The findings are consistent with theories that automatic sensory integration is impaired in autism, and may help explain autism’s characteristic sensitivity to excessive sensory stimulation.

A couple of years ago I reported on a finding that walking in the park, and (most surprisingly) simply looking at photos of natural scenes, could improve memory and concentration (see below). Now a new study helps explain why. The study examined brain activity while 12 male participants (average age 22) looked at images of tranquil beach scenes and non-tranquil motorway scenes. On half the presentations they concurrently listened to the same sound associated with both scenes (waves breaking on a beach and traffic moving on a motorway produce a similar sound, perceived as a constant roar).

Intriguingly, the natural, tranquil scenes produced significantly greater effective connectivity between the auditory cortex and medial prefrontal cortex, and between the auditory cortex and posterior cingulate gyrus, temporoparietal cortex and thalamus. It’s of particular interest that this is an example of visual input affecting connectivity of the auditory cortex, in the presence of identical auditory input (which was the focus of the research). But of course the take-home message for us is that the benefits of natural scenes for memory and attention have been supported.

Previous study:

Many of us who work indoors are familiar with the benefits of a walk in the fresh air, but a new study gives new insight into why, and how, it works. In two experiments, researchers found memory performance and attention spans improved by 20% after people spent an hour interacting with nature. The intriguing finding was that this effect was achieved not only by walking in the botanical gardens (versus walking along the main streets of Ann Arbor), but also by looking at photos of nature (versus looking at photos of urban settings). The findings are consistent with a theory that natural environments are better at restoring attention abilities, because they provide a more coherent pattern of stimulation that requires less effort, as opposed to urban environments, which provide complex and often confusing stimulation that captures attention dramatically and requires directed attention (e.g., to avoid being hit by a car).

Hunter, M. D., Eickhoff, S. B., Pheasant, R. J., Douglas, M. J., Watts, G. R., Farrow, T. F. D., et al. (2010). The state of tranquility: Subjective perception is shaped by contextual modulation of auditory connectivity. NeuroImage, 53(2), 611-618.

Berman, M. G., Jonides, J., & Kaplan, S. (2008). The cognitive benefits of interacting with nature. Psychological Science, 19(12), 1207-1212.

Because male superiority in mental rotation appears to be evident at a very young age, it has been suggested that testosterone may be a factor. To assess whether females exposed to higher levels of prenatal testosterone perform better on mental rotation tasks than females with lower levels of testosterone, researchers compared mental rotation task scores between twins from same-sex and opposite-sex pairs.

It was found that females with a male co-twin scored higher than did females with a female co-twin (there was no difference in scores between males from opposite-sex and same-sex pairs). Of course, this doesn’t prove that the differences are produced in the womb; it may be that girls with a male twin engage in more male-typical activities. However, the association remained after allowing for computer game playing experience.

The study involved 804 twins, average age 22, of whom 351 females were from same-sex pairs and 120 from opposite-sex pairs. There was no significant difference between females from identical same-sex pairs compared to fraternal same-sex pairs.

* Please do note that ‘innate male superiority’ does NOT mean that all men are inevitably better than all women at this very specific task! My words simply reflect the evidence that the tendency of males to be better at mental rotation is found in infants as young as 3 months.

Following a monkey study that found training in spatial memory could raise females to the level of males, and human studies suggesting that video games might help reduce gender differences in spatial processing (see below for these), a new study shows that training in spatial skills can eliminate the gender difference in young children. Spatial ability, along with verbal skills, is one of the two most-cited cognitive differences between the sexes, for the reason that these two appear to be the most robust.

This latest study involved 116 first graders, half of whom were put in a training program that focused on expanding working memory, perceiving spatial information as a whole rather than concentrating on details, and thinking about spatial geometric pictures from different points of view. The other children took part in a substitute training program, as a control group. Initial gender differences in spatial ability disappeared for those who had been in the spatial training group after only eight weekly sessions.

Previously:

A study of 90 adult rhesus monkeys found young-adult males had better spatial memory than females, but peaked early. By old age, male and female monkeys had about the same performance. This finding is consistent with reports suggesting that men show greater age-related cognitive decline relative to women. A second study of 22 rhesus monkeys showed that in young adulthood, simple spatial-memory training did not help males but dramatically helped females, raising their performance to the level of young-adult males and wiping out the gender gap.

Another study showing that expert video gamers have improved mental rotation skills, visual and spatial memory, and multitasking skills has led researchers to conclude that training with video games may serve to reduce gender differences in visual and spatial processing, and some of the cognitive declines that come with aging.

I’ve talked about the importance of labels for memory, so I was interested to see that a recent series of experiments has found that hearing the name of an object improved people’s ability to see it, even when the object was flashed onscreen in conditions and speeds (50 milliseconds) that would render it invisible. The effect was specific to language; a visual preview didn’t help.

Moreover, those who consider their mental imagery particularly vivid scored higher when given the auditory cue (although this association disappeared when the position of the object was uncertain). The researchers suggest that hearing the image labeled evokes an image of the object, strengthening its visual representation and thus making it visible. They also suggested that because words in different languages pick out different things in the environment, learning different languages might shape perception in subtle ways.

While brain training programs can certainly improve your ability to do the task you’re practicing, there has been little evidence that this transfers to other tasks. In particular, the holy grail has been very broad transfer, through improvement in working memory. While there has been some evidence of this in pilot programs for children with ADHD, a new study is the first to show such improvement in older adults using a commercial brain training program.

A study involving 30 healthy adults aged 60 to 89 has demonstrated that ten hours of training on a computer game designed to boost visual perception improved perceptual abilities significantly, and also increased the accuracy of their visual working memory to the level of younger adults. There was a direct link between improved performance and changes in brain activity in the visual association cortex.

The computer game was one of those developed by Posit Science. Memory improvement was measured about one week after the end of training. The improvement did not, however, withstand multi-tasking, which is a particular problem for older adults. The participants, half of whom underwent the training, were college educated. The training challenged players to discriminate between two different shapes of sine waves (S-shaped patterns) moving across the screen. The memory test (which was performed before and after training) involved watching dots move across the screen, followed by a short delay and then a test of memory for the exact direction the dots had moved.

A rat study demonstrates how specialized brain training can reverse many aspects of normal age-related cognitive decline in targeted areas. The month-long study involved daily hour-long sessions of intense auditory training targeted at the primary auditory cortex. The rats were rewarded for picking out the oddball note in a rapid sequence of six notes (five of them of the same pitch). The difference between the oddball note and the others became progressively smaller. After the training, aged rats showed substantial reversal of their previously degraded ability to process sound. Moreover, measures of neuron health in the auditory cortex had returned to nearly youthful levels.

Because Nicaraguan Sign Language is only about 35 years old, and still evolving rapidly, the language used by the younger generation is more complex than that used by the older generation. This enables researchers to compare the effects of language ability on other abilities. A recent study found that younger signers (in their 20s) performed better than older signers (in their 30s) on two spatial cognition tasks that involved finding a hidden object. The findings provide more support for the theory that language shapes how we think and perceive.

Pyers, J. E., Shusterman, A., Senghas, A., Spelke, E. S., & Emmorey, K. (2010). Evidence from an emerging sign language reveals that language supports spatial cognition. Proceedings of the National Academy of Sciences, 107(27), 12116-12120.

There is a pervasive myth that every detail of every experience we've ever had is recorded in memory. It is interesting to note, therefore, that even very familiar objects, such as coins, are rarely remembered in accurate detail.1

We see coins every day, but we don't see them. What we remember about coins are global attributes, such as size and color, not the little details, such as which way the head is pointing, what words are written on it, etc. Such details are apparently noted only if the person's attention is specifically drawn to them.

There are several interesting conclusions that can be drawn from studies that have looked at the normal encoding of familiar objects:

  • you don't automatically get more and more detail each time you see a particular object
  • only a limited amount of information is extracted the first time you see the object
  • the various features aren't equally important
  • normally, global rather than detail features are most likely to be remembered

In the present study, four experiments investigated people's memories for drawings of oak leaves. Two different types of oak leaves were used - "red oak" and "white oak". Subjects were shown two drawings for either 5 or 60 seconds. The differences between the two oak leaves varied, either:

  • globally (red vs white leaf), or
  • in terms of a major feature (the same type of leaf, but varying in that two major lobes are combined in one leaf but not in the other), or
  • in terms of a minor feature (one small lobe eliminated in one but not in the other).

According to the principle of top-down encoding, the time needed to detect a difference between stimuli that differ in only one critical feature will increase as the level of that feature decreases (from a global to a major specific to a lower-grade specific feature).

The results of this study supported the view that top-down encoding occurs, and indicate that, unless attention is explicitly directed to specific features, the likelihood of encoding such features decreases the lower their structural level. One of the experiments tested whether the size of the feature made a difference, and found that it didn't.

References

1. Jones, G.V. (1990). Misremembering a familiar object: When left is not right. Memory & Cognition, 18, 174-182.

Jones, G.V., & Martin, M. (1992). Misremembering a familiar object: Mnemonic illusion, not drawing bias. Memory & Cognition, 20, 211-213.

Nickerson, R.S., & Adams, M.J. (1979). Long-term memory for a common object. Cognitive Psychology, 11, 287-307.

Modigliani, V., Loverock, D.S., & Kirson, S.R. (1998). Encoding features of complex and unfamiliar objects. American Journal of Psychology, 111, 215-239.

Older news items (pre-2010) brought over from the old website

Perception affected by mood

An imaging study has revealed that when people were shown a composite image with a face surrounded by "place" images, such as a house, and asked to identify the gender of the face, those in whom a bad mood had been induced didn’t process the places in the background. However, those in a good mood took in both the focal and background images. These differences in perception were coupled with differences in activity in the parahippocampal place area. Increasing the amount of information is of course not necessarily a good thing, as it may result in more distraction.

[1054] Schmitz, T. W., De Rosa E., & Anderson A. K.
(2009).  Opposing Influences of Affective State Valence on Visual Cortical Encoding.
J. Neurosci. 29(22), 7199 - 7207.

http://www.eurekalert.org/pub_releases/2009-06/uot-pww060309.php

What we perceive is not what we sense

Perceiving a simple touch may depend as much on memory, attention, and expectation as on the stimulus itself. A study involving macaque monkeys has found that the monkeys’ perception of a touch (varied in intensity) was more closely correlated with activity in the medial premotor cortex (MPC), a region of the brain's frontal lobe known to be involved in making decisions about sensory information, than activity in the primary somatosensory cortex (which nevertheless accurately recorded the intensity of the sensation). MPC neurons began to fire before the stimulus even touched the monkeys' fingertips — presumably because the monkey was expecting the stimulus.

[263] de Lafuente, V., & Romo R.
(2005).  Neuronal correlates of subjective sensory experience.
Nat Neurosci. 8(12), 1698 - 1703.

http://www.eurekalert.org/pub_releases/2005-11/hhmi-tsi110405.php

Varied sensory experience important in childhood

A new baby has far more connections between neurons than necessary; from birth to about age 12 the brain trims 50% of these unnecessary connections while at the same time building new ones through learning and sensory stimulation — in other words, tailoring the brain to its environment. A mouse study has found that without enough sensory stimulation, infant mice lose fewer connections — indicating that connections need to be lost in order for appropriate ones to grow. The findings support the idea that parents should try to expose their children to a variety of sensory experiences.

[479] Zuo, Y., Yang G., Kwon E., & Gan W-B.
(2005).  Long-term sensory deprivation prevents dendritic spine loss in primary somatosensory cortex.
Nature. 436(7048), 261 - 265.

http://www.sciencentral.com/articles/view.htm3?article_id=218392607

Brain regions that process reality and illusion identified

Researchers have now identified the regions of the brain involved in processing what’s really going on, and what we think is going on. Macaque monkeys played a virtual reality video game in which the monkeys were tricked into thinking that they were tracing ellipses with their hands, although they actually were moving their hands in a circle. Monitoring of nerve cells revealed that the primary motor cortex represented the actual movement while the signals from cells in a neighboring area, called the ventral premotor cortex, were generating elliptical shapes. Knowing how the brain works to distinguish between action and perception will help efforts to build biomedical devices that can control artificial limbs, some day enabling the disabled to move a prosthetic arm or leg by thinking about it.

[1107] Schwartz, A. B., Moran D. W., & Reina A. G.
(2004).  Differential Representation of Perception and Action in the Frontal Cortex.
Science. 303(5656), 380 - 383.

http://news-info.wustl.edu/tips/page/normal/652.html
http://www.eurekalert.org/pub_releases/2004-02/wuis-rpb020704.php

Memory different depending on whether information received via eyes or ears

Carnegie Mellon scientists using magnetic resonance imaging found quite different brain activity patterns for reading and listening to identical sentences. During reading, the right hemisphere was not as active as expected, suggesting a difference in the nature of comprehension experienced when reading versus listening. When listening, there was greater activation in a part of Broca's area associated with verbal working memory, suggesting that there is more semantic processing and working memory storage in listening comprehension than in reading. This should not be taken as evidence that comprehension is better in one or other of these situations, merely that it is different. "Listening to an audio book leaves a different set of memories than reading does. A newscast heard on the radio is processed differently from the same words read in a newspaper."

[2540] Michael, E. B., Keller T. A., Carpenter P. A., & Just M. A.
(2001).  fMRI investigation of sentence comprehension by eye and by ear: Modality fingerprints on cognitive processes.
Human Brain Mapping. 13(4), 239 - 252.

http://www.eurekalert.org/pub_releases/2001-08/cmu-tma081401.php

The chunking of our lives: the brain "sees" life in segments

We talk about "chunking" all the time in the context of memory. But the process of breaking information down into manageable bits occurs, it seems, right from perception. Magnetic resonance imaging reveals that when people watched movies of common, everyday, goal-directed activities (making the bed, doing the dishes, ironing a shirt), their brains automatically broke these continuous events into smaller segments. The study also identified a network of brain areas that is activated during the perception of boundaries between events. "The fact that changes in brain activity occurred during the passive viewing of movies indicates that this is how we normally perceive continuous events, as a series of segments rather than a dynamic flow of action."

Zacks, J.M., Braver, T.S., Sheridan, M.A., Donaldson, D.I., Snyder, A.Z., Ollinger, J.M., Buckner, R.L. & Raichle, M.E. 2001. Human brain activity time-locked to perceptual event boundaries. Nature Neuroscience, 4(6), 651-5.

http://www.eurekalert.org/pub_releases/2001-07/aaft-bp070201.php

Amygdala may be critical for allowing perception of emotionally significant events despite inattention

We choose what to pay attention to, what to remember. We give more weight to some things than others. Our perceptions and memories of events are influenced by our preconceptions, and by our moods. Researchers at Yale and New York University have recently published research indicating that the part of the brain known as the amygdala is responsible for the influence of emotion on perception. This builds on previous research showing that the amygdala is critically involved in computing the emotional significance of events. The amygdala is connected to those brain regions dealing with sensory experiences, and the theory that these connections allow the amygdala to influence early perceptual processing is supported by this research. Dr. Anderson suggests that “the amygdala appears to be critical for the emotional tuning of perceptual experience, allowing perception of emotionally significant events to occur despite inattention.”

[968] Anderson, A. K., & Phelps E. A.
(2001).  Lesions of the human amygdala impair enhanced perception of emotionally salient events.
Nature. 411(6835), 305 - 309.

http://www.eurekalert.org/pub_releases/2001-05/NYU-Infr-1605101.php
