Visual Memory

Latest Research News

Do older adults forget as much as they think, or is it rather that they ‘misremember’?

A small study adds to evidence that gist memory plays an important role in false memories at any age, but older adults are more susceptible to misremembering because of their greater use of gist memory.

Gist memory is about remembering the broad story, not the details. We use schemas a lot. Schemas are concepts we build over time for events and experiences, in order to relieve the cognitive load. They allow us to respond and process faster. We build schemas for such things as going to the dentist, going to a restaurant, attending a lecture, and so on. Schemas are very useful, reminding us what to expect and what to do in situations we have experienced before. But they are also responsible for errors of perception and memory — we see and remember what we expect to see.

As we get older, we do of course build up more and firmer schemas, making it harder to really see with fresh eyes. This makes it harder for us to notice the details, and easier for us to misremember what we saw.

A small study involving 20 older adults (mean age 75) had participants look at 26 different pictures of common scenes (such as a farmyard or a bathroom) for about 10 seconds each, and asked them to remember as much as they could about the scenes. Later, they were shown 300 pictures of objects that were either in the scene, related to the scene (but not actually in it), or not commonly associated with the scene, and were asked to say whether or not each object had been in the picture. Brain activity was monitored during these tests. Performance was also compared with that from a previous identical study involving 22 young adults (mean age 23).

As expected and as is typical, there was a higher hit rate for schematic items and a higher rate of false memories for schematically related lures (items that belong to the schema but didn’t appear in the picture). True memories activated the typical retrieval network (medial prefrontal cortex, hippocampus/parahippocampal gyrus, inferior parietal lobe, right middle temporal gyrus, and left fusiform gyrus).

Activity in some of these regions (frontal-parietal regions, left hippocampus, right MTG, and left fusiform) distinguished hits from false alarms, supporting the idea that it’s more demanding to retrieve true memories than illusory ones. This contrasts with younger adults who in this and previous research have displayed the opposite pattern. The finding is consistent, however, with the theory that older adults tend to engage frontal resources at an earlier level of difficulty.

Older adults also displayed greater activation in the medial prefrontal cortex for both schematic and non-schematic hits than young adults did.

While true memories activated the typical retrieval network, and there were different patterns of activity for schematic vs non-schematic hits, there was no distinctive pattern of activity for retrieving false memories. However, there was increased activity in the middle frontal gyrus, middle temporal gyrus, and hippocampus/parahippocampal gyrus as a function of the rate of false memories.

Imaging also revealed that, like younger adults, older adults engage the ventromedial prefrontal cortex when retrieving schematic information, and that they do so to a greater extent. Activation patterns also support the role of the medial temporal lobe (MTL), and the posterior hippocampus/parahippocampal gyrus in particular, in distinguishing true memories from false ones. Schematic information is not this region’s concern: there was no consistent difference in its activation for schematic vs non-schematic hits. But older adults showed a shift within the hippocampus, with much of the activity moving to a more posterior region.

Sensory details are also important for distinguishing between true and false memories, but, apart from activity in the left fusiform gyrus, older adults — unlike younger adults — did not show any differential activation in the occipital cortex. This finding is consistent with previous research, and supports the conclusion that older adults don’t experience the recapitulation of sensory details in the same way that younger adults do. This, of course, adds to the difficulty they have in distinguishing true and false memories.

Older adults also showed differential activation of the right MTG, involved in gist processing, for true memories. Again, this is not found in younger adults, and supports the idea that older adults depend more on schematic gist information to assess whether a memory is true.

However, in older adults, increased activation of both the MTL and the MTG is seen as rates of false alarms increase, indicating that both gist and episodic memory contribute to their false memories. This is also in line with previous research, suggesting that memories of specific events and details can (incorrectly) provide support for false memories that are consistent with such events.

Older adults, unlike young adults, failed to show differential activity in the retrieval network for targets and lures (items that fit in with the schema, but were not in fact present in the image).

What does all this mean? Here’s what’s important:

  • older adults tend to use schema information more when trying to remember
  • older adults find it harder to recall specific sensory details that would help confirm a memory’s veracity
  • at all ages, gist processing appears to play a strong role in false memories
  • memory of specific (true) details can be used to endorse related (but false) details.

What can you do about any of this? One approach would be to make an effort to recall specific sensory details of an event rather than relying on the easier generic event that comes to mind first. So, for example, if you’re asked to go to the store to pick up orange juice, tomatoes and muesli, you might end up with more familiar items — a sort of default position, as it were, because you can’t quite remember what you were asked. If you make an effort to remember the occasion of being told — where you were, how the other person looked, what time of day it was, other things you talked about, etc — you might be able to bring the actual items to mind. A lot of the time, we simply don’t make the effort, because we don’t think we can remember.

https://www.eurekalert.org/pub_releases/2018-03/ps-fdg032118.php

Webb, C. E., & Dennis, N. A. (Submitted). Differentiating True and False Schematic Memories in Older Adults. The Journals of Gerontology: Series B.

A small study has tested the eminent Donald Hebb’s hypothesis that visual imagery results from the reactivation of neural activity associated with viewing images, and that the re-enactment of eye-movement patterns helps both imagery and neural reactivation.

In the study, 16 young adults (aged 20-28) were shown a set of 14 distinct images for a few seconds each. They were asked to remember as many details of the picture as possible so they could visualize it later on. They were then cued to mentally visualize the images within an empty rectangular box shown on the screen.

Brain imaging and eye-tracking technology revealed that the same pattern of eye movements and brain activation occurred when the image was learned and when it was recalled. During recall, however, the patterns were compressed (which is consistent with our experience of remembering, where memories take a much shorter time than the original experiences).

Our understanding of memory is that it’s constructive — when we remember, we reconstruct the memory from separate bits of information in our database. This finding suggests that eye movements might be like a blueprint to help the brain piece together the bits in the right way.

https://www.eurekalert.org/pub_releases/2018-02/bcfg-cga021318.php

A British study using data from 475,397 participants has shown that, on average, stronger people performed better across every test of brain functioning used. Tests looked at reaction speed, reasoning, visuospatial memory, prospective memory, and working memory (digit span). The relationship between muscular strength and brain function was consistently strong in both older and younger adults (those under 55 and those over), contradicting previous research showing it only in older adults.

The study also found that maximal handgrip was strongly correlated with both visuospatial memory and reaction time in 1,162 people with schizophrenia (prospective memory also approached statistical significance).

The finding raises the intriguing possibility that weight training could be particularly beneficial for people with mental health conditions, such as schizophrenia, major depression and bipolar disorder.

https://www.eurekalert.org/pub_releases/2018-04/nwsu-rrs041918.php

Full text available online at https://doi.org/10.1093/schbul/sby034

In a series of experiments involving college students, drawing pictures was found to be the best strategy for remembering lists of words.

The basic experiment involved students being given a list of simple, easily drawn words, for each of which they had 40 seconds to either draw the word, or write it out repeatedly. Following a filler task (classifying musical tones), they were given 60 seconds to then recall as many words as possible. Variations of the experiment had students draw the words repeatedly, list physical characteristics, create mental images, view pictures of the objects, or add visual details to the written letters (such as shading or other doodles).

In all variations, there was a positive drawing effect, with participants often recalling more than twice as many drawn words as written ones.

Importantly, the quality of the drawings didn’t seem to matter, nor did the time given, with even a very brief 4 seconds being enough. This challenges the usual explanation for drawing benefits: that it simply reflects the greater time spent with the material.

Participants were rated on their ability to form vivid mental images (measured using the VVIQ), and questioned about their drawing history. Neither of these factors had any reliable effect.

The experimental comparisons challenge various theories about why drawing is beneficial:

  • that it processes the information more deeply (when participants in the written word condition listed semantic characteristics of the word, thus processing it more deeply, the results were no better than simply writing out the word repeatedly, and drawing was still significantly better)
  • that it evokes mental imagery (when some students were told to mentally visualize the object, their recall was intermediate between the write and draw conditions)
  • that it simply reflects the fact that pictures are remembered better (when some students were shown a picture of the target word during the encoding time, their recall performance was not significantly better than that of the students writing the words)

The researchers suggest that it is a combination of factors that work together to produce a greater effect than the sum of each. These factors include mental imagery, elaboration, the motor action, and the creation of a picture. Drawing brings all these factors together to create a stronger and more integrated memory code.

http://www.eurekalert.org/pub_releases/2016-04/uow-ntr042116.php

Wammes, J. D., Meade, M. E., & Fernandes, M. A. (2016). The drawing effect: Evidence for reliable and robust memory benefits in free recall. The Quarterly Journal of Experimental Psychology, 69(9), 1752–1776.

A study involving 18 volunteers who performed a simple orientation discrimination while on a stationary bicycle, has found that low-intensity exercise boosted activation in the visual cortex, compared with activation levels when at rest or during high-intensity exercise.

The changes suggest that the neurons in the visual cortex were most sensitive to the orientation stimuli during the low-intensity exercise condition relative to the other conditions. It’s suggested that this reflects an evolutionary pressure for the visual system to be more sensitive when the individual is actively exploring the environment (as opposed to, say, running away).

http://www.futurity.org/vision-exercise-brains-1400422-2/

Bullock, T., Elliott, J. C., Serences, J. T., & Giesbrecht, B. (2016). Acute Exercise Modulates Feature-selective Responses in Human Cortex. Journal of Cognitive Neuroscience, 29(4), 605–618.

A small study involving 50 younger adults (18-35; average age 24) has found that those with a higher BMI performed significantly worse on a computerised memory test called the “Treasure Hunt Task”.

The task involved moving food items around complex scenes (e.g., a desert with palm trees), hiding them in various locations, and indicating afterward where and when they had hidden them. The test was designed to disentangle object, location, and temporal order memory, and the ability to integrate those separate bits of information.

Those with higher BMI were poorer at all aspects of this task. There was no difference, however, in reaction times, or time taken at encoding. In other words, they weren't slower, or less careful when they were learning. Analysis of the errors made indicated that the problem was not with spatial memory, but rather with the binding of the various elements into one coherent memory.

The results could suggest that overweight people are less able to vividly relive details of past events. This in turn might make it harder for them to keep track of what they'd eaten, perhaps making overeating more likely.

The 50 participants included 27 with BMI below 25, 24 with BMI 25-30 (overweight), and 8 with BMI over 30 (obese). 72% were female, and none were diagnosed diabetics. However, the researchers didn't take into account other health conditions that often co-occur with obesity, such as hypertension and sleep apnea.
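For reference, BMI is weight in kilograms divided by the square of height in metres. A minimal sketch of that calculation and of the grouping used here (the function names are mine, not the study's):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_group(value: float) -> str:
    """Classify a BMI value into the three groups used in the study."""
    if value < 25:
        return "healthy"
    if value <= 30:
        return "overweight"
    return "obese"
```

So a 70 kg person 1.75 m tall has a BMI of about 22.9 and falls in the first group.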

This is a preliminary study only, and further research is needed to validate its findings. However, it's significant in that it adds to growing evidence that the cognitive impairments that accompany obesity are present early in adult life and are not driven by diabetes.

The finding is also consistent with previous research linking obesity with dysfunction of the hippocampus and the frontal lobe.

http://www.eurekalert.org/pub_releases/2016-02/uoc-bol022616.php

https://www.theguardian.com/science/neurophilosophy/2016/mar/03/obesity-linked-to-memory-deficits

Cheke, L. G., Simons, J. S., & Clayton, N. S. (2015). Higher body mass index is associated with episodic memory deficits in young adults. The Quarterly Journal of Experimental Psychology, 1–12.

A small study that fitted 29 young adults (18-31) and 31 older adults (55-82) with a device that recorded steps taken and the vigor and speed with which they were made, has found that those older adults with a higher step rate performed better on memory tasks than those who were more sedentary. There was no such effect seen among the younger adults.

Improved memory was found for both visual and episodic memory, and was strongest with the episodic memory task. This required recalling which name went with a person's face — an everyday task that older adults often have difficulty with.

However, the effect on visual memory had more to do with time spent sedentary than step rate. With the face-name task, both time spent sedentary and step rate were significant factors, and both factors had a greater effect than they had on visual memory.

Depression and hypertension were both adjusted for in the analysis.

There was no significant difference in executive function related to physical activity, although previous studies have found an effect. Less surprisingly, there was also no significant effect on verbal memory.

Both findings might be explained in terms of cognitive demand. The evidence suggests that the effect of physical exercise is only seen when the task is sufficiently cognitively demanding. It's no surprise that verbal memory (which tends to be much less affected by age) didn't meet that challenge, but interestingly, the older adults in this study were also less impaired on executive function than on visual memory. This is unusual, and reminds us that, especially with small studies, you cannot ignore individual differences.

This general principle may also account for the lack of effect among younger adults. It is interesting to speculate whether physical activity effects would be found if the younger adults were given much more challenging tasks (either by increasing their difficulty, or selecting a group who were less capable).

Step Rate was calculated as total steps taken divided by the total minutes spent in light, moderate, and vigorous activity, on the basis that this would provide an independent indicator of physical activity intensity (how briskly one walks). Sedentary Time was the total number of minutes spent sedentary.
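Those two measures can be sketched as follows (hypothetical function names; the device's actual minute-by-minute intensity classification is of course more involved):

```python
def step_rate(total_steps: int, active_minutes: float) -> float:
    """Steps per minute of light, moderate, and vigorous activity:
    an intensity measure (how briskly one walks), independent of volume."""
    return total_steps / active_minutes

def sedentary_minutes(minute_labels: list[str]) -> int:
    """Total minutes the device classified as sedentary."""
    return sum(1 for label in minute_labels if label == "sedentary")
```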

http://www.eurekalert.org/pub_releases/2015-11/bumc-slp112415.php

Hayes, S. M., Alosco, M. L., Hayes, J. P., Cadden, M., Peterson, K. M., Allsup, K., et al. (2015). Physical Activity Is Positively Associated with Episodic Memory in Aging. Journal of the International Neuropsychological Society, 21(Special Issue 10), 780–790.

The number of items a person can hold in short-term memory is strongly correlated with their IQ. But short-term memory has recently been found to vary along another dimension as well: some people remember (‘see’) the items in short-term memory more clearly and precisely than others. This discovery has led to the hypothesis that both of these factors should be considered when measuring working memory capacity. But do both aspects correlate with fluid intelligence?

A new study presented 79 students with screen displays fleetingly showing either four or eight items. After a one-second blank screen, one item was returned and the subject asked whether that object had been in a particular location previously. Their ability to detect large and small changes in the items provided an estimate of how many items the individual could hold in working memory, and how clearly they remembered them. These measures were compared with individuals’ performance on standard measures of fluid intelligence.
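The article doesn't give the estimator used, but capacity in change-detection tasks of this kind is commonly estimated with Cowan's K, which is set size multiplied by the difference between the hit rate and the false-alarm rate. A minimal sketch, on the assumption that a formula of this family was used:

```python
def cowans_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Cowan's K: estimated number of items held in working memory,
    from a change-detection task with the given display set size."""
    return set_size * (hit_rate - false_alarm_rate)
```

So a participant shown 8-item displays, with a 75% hit rate and a 25% false-alarm rate, would be estimated to hold about 4 items.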

Analysis of the data found that these two measures of working memory — number and clarity — are completely independent of each other, and that only the number factor correlated with intelligence.

This is not to say that clarity is unimportant! Only that it is not related to intelligence.

Organophosphate pesticides are the most widely used insecticides in the world; they are also (according to WHO), one of the most hazardous pesticides to vertebrate animals. While the toxic effects of high levels of organophosphates are well established, the effects of long-term low-level exposure are still controversial.

A meta-analysis involving 14 studies and more than 1,600 participants, reveals that the majority of well-designed studies undertaken over the last 20 years have found a significant association between low-level exposure to organophosphates and impaired cognitive function. Impairment was small to moderate, and mainly concerned psychomotor speed, executive function, visuospatial ability, working memory, and visual memory.

Spatial abilities have been shown to be important for achievement in STEM subjects (science, technology, engineering, math), but many people have felt that spatial skills are something you’re either born with or not.

In a comprehensive review of 217 research studies on educational interventions to improve spatial thinking, researchers concluded that you can indeed improve spatial skills, and that such training can transfer to new tasks. Moreover, not only can the right sort of training improve spatial skill in general, and across age and gender, but the effect of training appears to be stable and long-lasting.

One interesting finding (the researchers themselves considered it perhaps the most important) was the diversity of effective training — several different forms of training can improve spatial abilities. This may have something to do with the breadth covered by the label ‘spatial ability’, which includes such skills as:

  • Perceiving objects, paths, or spatial configurations against a background of distracting information;
  • Piecing together objects into more complex configurations, visualizing and mentally transforming objects;
  • Understanding abstract principles, such as horizontal invariance;
  • Visualizing an environment in its entirety from a different position.

The review compared three types of training:

  • Video games (24 studies)
  • Semester-long instructional courses on spatial reasoning (42 studies)
  • Practical training, often in a lab, that involved practicing spatial tasks, strategic instruction, or computerized lessons (138 studies).

The first two are examples of indirect training, while the last involves direct training.

On average, taken across the board, training improved performance by well over half a standard deviation when considered on its own, and still almost one half of a standard deviation when compared to a control group. This is a moderately large effect, and it extended to transfer tasks.
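Effect sizes quoted in standard-deviation units like this are typically Cohen's d. As a rough illustration of how such an effect size is computed from two groups' scores (a generic textbook formula, not the review's actual meta-analytic procedure):

```python
import statistics

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Cohen's d: difference in group means divided by the pooled
    standard deviation, giving an effect size in SD units."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    # statistics.variance gives the sample (n-1) variance
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd
```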

That average also conceals a wide range, most of which is due to differing treatment of control groups. Because the retesting effect is so strong in this domain (if you give any group a spatial test twice, regardless of whether they’ve been training in between the two tests, they’re going to do better on the second test), repeated testing can have a potent effect on the control group. Some ‘filler’ tasks can also inadvertently improve the control group’s performance. All of this will reduce the apparent effect of training. (Not having a control group is even worse, because you don’t know how much of the improvement is due to training and how much to the retesting effect.)

This caution is, of course, more support for the value of practice in developing spatial skills. This is further reinforced by studies that were omitted from the analysis because they would skew the data. Twelve studies found very high effect sizes — more than three times the average size of the remaining studies. All these studies took place in less-developed countries (those ranked above 30 on the Human Development Index at the time of the study) — Malaysia, Turkey, China, India, and Nigeria. HDI ranking was even associated with the benefits of training in a dose-dependent manner — that is, the lower the standard of living, the greater the benefit.

This finding is consistent with other research indicating that lower socioeconomic status is associated with larger responses to training or intervention.

In a similar vein, when the review compared 19 studies that specifically selected participants who scored poorly on spatial tests against the other studies, it found that the effects of training were significantly bigger in the selected studies.

In other words, those with poorer spatial skills will benefit most from training. It may be, indeed, that they are poor performers precisely because they have had little practice at these tasks — a question that has been much debated (particularly in the context of gender differences).

It’s worth noting that there was little difference in performance on tests carried out immediately after training ended, within a week, or within a month, indicating promising stability.

A comparison of different types of training did find that some skills were more resistant to training than others, but all types of spatial skill improved. The differences may be because some sorts of skill are harder to teach, and/or because some skills are already more practiced than others.

Given the demonstrated difficulty in increasing working memory capacity through training, it is intriguing to notice one example the researchers cite: experienced video game players have been shown to perform markedly better on some tasks that rely on spatial working memory, such as a task requiring you to estimate the number of dots shown in a brief presentation. Most of us can instantly recognize (‘subitize’) up to five dots without needing to count them, but video game players can typically subitize some 7 or 8. The extent to which this generalizes to a capacity to hold more elements in working memory is one that needs to be explored. Video game players also apparently have a smaller attentional blink, meaning that they can take in more information.

A more specific practical example of training they give is that of a study in which high school physics students were given training in using two- and three-dimensional representations over two class periods. This training significantly improved students’ ability to read a topographical map.

The researchers suggest that the size of training effect could produce a doubling of the number of people with spatial abilities equal to or greater than that of engineers, and that such training might lower the dropout rate among those majoring in STEM subjects.

Apart from that, I would argue many of us who are ‘spatially-challenged’ could benefit from a little training!

We know that emotion affects memory. We know that attention affects perception (see, e.g., Visual perception heightened by meditation training; How mindset can improve vision). Now a new study ties it all together. The study shows that emotionally arousing experiences affect how well we see them, and this in turn affects how vividly we later recall them.

The study used images of positively and negatively arousing scenes and neutral scenes, which were overlaid with varying amounts of “visual noise” (like the ‘snow’ we used to see on old televisions). College students were asked to rate the amount of noise on each picture, relative to a specific image they used as a standard. There were 25 pictures in each category, and three levels of noise (less than standard, equal to standard, and more than standard).

Different groups explored different parameters: color; gray-scale; less noise (10%, 15%, 20% as compared to 35%, 45%, 55%); single exposure (each picture was only presented once, at one of the noise levels).

Regardless of the actual amount of noise, emotionally arousing pictures were consistently rated as significantly less noisy than neutral pictures, indicating that people were seeing them more clearly. This was true in all conditions.

Eye-tracking analysis ruled out the idea that people directed their attention differently for emotionally arousing images, but did show that more eye fixations were associated both with less noisy images and emotionally arousing ones. In other words, people were viewing emotionally important images as if they were less noisy.

One group of 22 students were given a 45-minute spatial working memory task after seeing the images, and then asked to write down all the details they could remember about the pictures they remembered seeing. The amount of detail they recalled was taken to be an indirect measure of vividness.

A second group of 27 students were called back after a week for a recognition test. They were shown 36 new images mixed in with the original 75 images, and asked to rate them as new, familiar, or recollected. They were also asked to rate the vividness of their recollection.

Although, overall, emotionally arousing pictures were not more likely to be remembered than neutral pictures, both experiments found that pictures originally seen as more vivid (less noise) were remembered more vividly and in more detail.

Brain scans from 31 students revealed that the amygdala was more active when looking at images rated as vivid, and this in turn increased activity in the visual cortex and in the posterior insula (which integrates sensations from the body). This suggests that the increased perceptual vividness is not simply a visual phenomenon, but part of a wider sensory activation.

There was another neural response to perceptual vividness: activity in the dorsolateral prefrontal cortex and the posterior parietal cortex was negatively correlated with vividness. This suggests that emotion is not simply increasing our attentional focus, it is instead changing it by reducing effortful attentional and executive processes in favor of more perceptual ones. This, perhaps, gives emotional memories their different ‘flavor’ compared to more neutral memories.

These findings clearly need more exploration before we know exactly what they mean, but the main finding from the study is that the vividness with which we recall some emotional experiences is rooted in the vividness with which we originally perceived them.

The study highlights how emotion can sharpen our attention, building on previous findings that emotional events are more easily detected when visibility is difficult, or attentional demands are high. It is also not inconsistent with a study I reported on last year, which found some information needs no repetition to be remembered because the amygdala decrees it of importance.

I should add, however, that the perceptual effect is not the whole story — the current study found that, although perceptual vividness is part of the reason for memories that are vividly remembered, emotional importance makes its own, independent, contribution. This contribution may occur after the event.

It’s suggested that individual differences in these reactions to emotionally enhanced vividness may underlie an individual’s vulnerability to post-traumatic stress disorder.

The evidence that adult brains could grow new neurons was a game-changer, and has spawned all manner of products to try and stimulate such neurogenesis, to help fight back against age-related cognitive decline and even dementia. An important study in the evidence for the role of experience and training in growing new neurons was Maguire’s celebrated study of London taxi drivers, back in 2000.

The small study, involving 16 male, right-handed taxi drivers with an average experience of 14.3 years (range 1.5 to 42 years), found that the taxi drivers had significantly more grey matter (neurons) in the posterior hippocampus than matched controls, while the controls showed relatively more grey matter in the anterior hippocampus. Overall, these balanced out, so that the volume of the hippocampus as a whole wasn’t different for the two groups. The volume in the right posterior hippocampus correlated with the amount of experience the driver had (the correlation remained after age was accounted for).

The posterior hippocampus is preferentially involved in spatial navigation. The fact that only the right posterior hippocampus showed an experience-linked increase suggests that the right and left posterior hippocampi are involved in spatial navigation in different ways. The decrease in anterior volume suggests that the need to store increasingly detailed spatial maps brings about a reorganization of the hippocampus.

But (although the experience-related correlation is certainly indicative) it could be that those who manage to become licensed taxi drivers in London are those who have some innate advantage, evidenced in a more developed posterior hippocampus. Only around half of those who go through the strenuous training program succeed in qualifying — London taxi drivers are unique in the world for being required to pass through a lengthy training period and pass stringent exams, demonstrating their knowledge of London’s 25,000 streets and their idiosyncratic layout, plus 20,000 landmarks.

In this new study, Maguire and her colleague made a more direct test of this question. 79 trainee taxi drivers and 31 controls took cognitive tests and had their brains scanned at two time points: at the beginning of training, and 3-4 years later. Of the 79 would-be taxi drivers, only 39 qualified, giving the researchers three groups to compare.

There were no differences in cognitive performance or brain scans between the three groups at time 1 (before training). At time 2, however, when the trainees had either qualified or failed to acquire the Knowledge, those who qualified had significantly more grey matter in the posterior hippocampus than they had had previously. There was no change in those who failed to qualify or in the controls.

Unsurprisingly, both qualified and non-qualified trainees were significantly better at judging the spatial relations between London landmarks than the control group. However, qualified trainees – but not the trainees who failed to qualify – were worse than the other groups at recalling a complex visual figure after 30 minutes (see here for an example of such a figure). This replicates previous findings with London taxi drivers. In other words, their improvement in spatial memory as it pertains to London seems to have come at a cost.

Interestingly, there was no detectable difference in the structure of the anterior hippocampus, suggesting that these changes develop later, in response to changes in the posterior hippocampus. However, the poorer performance on the complex figure test may be an early sign of changes in the anterior hippocampus that are not yet measurable by MRI.

The ‘Knowledge’, as it is known, provides a lovely real-world example of expertise. Unlike most other examples of expertise development (e.g. music, chess), it is largely unaffected by childhood experience (there may be some London taxi drivers who began deliberately working on their knowledge of London streets in childhood, but it is surely not common!); it is developed through a training program over a limited time period common to all participants; and its participants are of average IQ and education (average school-leaving age was around 16.7 years for all groups; average verbal IQ was around or just below 100).

So what underlies this development of the posterior hippocampus? If the qualified and non-qualified trainees were comparable in education and IQ, what determined whether a trainee would ‘build up’ his hippocampus and pass the exams? The obvious answer is hard work / dedication, and this is borne out by the fact that, although the two groups were similar in the length of their training period, those who qualified spent significantly more time training every week (an average of 34.5 hours a week vs 16.7 hours). Those who qualified also attended far more tests (an average of 15.6 vs 2.6).

While neurogenesis is probably involved in this growth within the posterior hippocampus, it is also possible that growth reflects increases in the number of connections, or in the number of glia. Most probably (I think), all are involved.

There are two important points to take away from this study. One is its clear demonstration that training can produce measurable changes in a brain region. The other is the indication that this development may come at the expense of other regions (and functions).

American football has been in the news a lot in recent years, as evidence has accumulated as to the brain damage incurred by professional footballers. But American football is a high-impact sport. Soccer is quite different. And yet the latest research reveals that even something as apparently unexceptional as bouncing a ball off your forehead can cause damage to your brain, if done often enough.

Brain scans on 32 amateur soccer players (average age 31) have revealed that those who estimated heading the ball more than 1,000-1,500 times in the past year had damage to white matter similar to that seen in patients with concussion.

Six brain regions were seen to be affected: one in the frontal lobe and five in the temporo-occipital cortex. These regions are involved in attention, memory, executive functioning and higher-order visual functions. The number of headings (necessarily very rough figures, based presumably on individuals’ estimates of how often they play and how often they head the ball on average during a game) needed to produce measurable decreases in white matter integrity varied by region. In four of the temporo-occipital regions, the threshold was around 1,500; in the fifth it was only 1,000; in the frontal lobe, it was 1,300.

Those with the highest annual heading frequency also performed worse on tests of verbal memory and psychomotor speed (activities that require mind-body coordination, like throwing a ball).

This is only a small study and clearly more research is required, but the findings indicate that we should revise our ideas of what constitutes ‘harm’ to the brain — if repetition is frequent enough, even mild knocks can cause damage. This adds to the evidence I discussed in a recent blog post, that even mild concussions can produce long-lasting trauma to the brain, and that it is important to give your brain time to repair itself.

At the moment we can only speculate on the effect such repetition might have to the vulnerable brains of children.

The researchers suggest that heading should be monitored to prevent players exceeding unsafe exposure thresholds.

Kim, N., Zimmerman, M., Lipton, R., Stewart, W., Gulko, E., Lipton, M. & Branch, C. 2011. Making Soccer Safer for the Brain: DTI-defined Exposure Thresholds for White Matter Injury Due to Soccer Heading. Presented November 30 at the annual meeting of the Radiological Society of North America (RSNA) in Chicago.

Here’s an intriguing approach to the long-standing debate about gender differences in spatial thinking. The study involved 1,279 adults from two cultural groups in India. One of these groups was patrilineal, the other matrilineal. The volunteers were given a wooden puzzle to assemble as quickly as they could.

Within the patrilineal group, men were on average 36% faster than women. Within the matrilineal group, however, there was no difference between the genders.

I have previously reported on studies showing how small amounts of spatial training can close the gap in spatial abilities between the genders. It has been argued that in our culture, males are directed toward spatial activities (construction such as Lego; later, video games) more than females are.

In this case, the puzzle was very simple. However, general education was clearly one factor mediating this gender difference. In the patrilineal group, males had an average of 3.67 more years of education, while in the matrilineal group, men and women had the same amount of education. When education was included in the statistical analysis, a good part of the difference between the groups was accounted for — but not all.

While we can only speculate about the remaining cause, it is interesting to note that, among the patrilineal group, the gender gap was decidedly smaller among those who lived in households not wholly owned by males (in the matrilineal group, men are not allowed to own property, so this comparison cannot be made).

It is also interesting to note that the men in the matrilineal group were faster than the men in the patrilineal group. This is not a function of education differences, because the men in the matrilineal group had slightly less education than those in the patrilineal group.

None of the participants had experience with puzzle solving, and both groups had similar backgrounds, being closely genetically related and living in villages geographically close. Participants came from eight villages: four patrilineal and four matrilineal.

[2519] Hoffman, M., Gneezy U., & List J. A.
(2011).  Nurture affects gender differences in spatial abilities.
Proceedings of the National Academy of Sciences. 108(36), 14786 - 14788.

In the study, two rhesus monkeys were given a standard human test of working memory capacity: an array of colored squares, varying from two to five squares, was shown for 800 msec on a screen. After a delay, varying from 800 to 1000 msec, a second array was presented. This array was identical to the first except for a change in color of one item. The monkey was rewarded if its eyes went directly to this changed square (an infra-red eye-tracking system was used to determine this). During all this, activity from single neurons in the lateral prefrontal cortex and the lateral intraparietal area — areas critical for short-term memory and implicated in human capacity limitations — was recorded.

As with humans, the more squares in the array, the worse the performance (from 85% correct for two squares to 66.5% for five). Their working memory capacity was calculated at 3.88 objects — i.e., essentially the same as that of humans.
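The summary doesn’t give the formula behind that 3.88 figure, but a common capacity estimator for change-detection tasks is Cowan’s K. The sketch below uses illustrative hit and correct-rejection rates, not the study’s.

```python
# Hedged sketch: Cowan's K, a standard working memory capacity estimate for
# change-detection tasks: K = N * (hit rate + correct-rejection rate - 1).
# The rates used here are illustrative, not taken from the paper.
def cowan_k(set_size, hit_rate, correct_rejection_rate):
    return set_size * (hit_rate + correct_rejection_rate - 1)

print(cowan_k(5, 0.80, 0.85))  # → 3.25
```

Plugging in plausible rates for a five-item display gives a figure in the same ballpark as the roughly four-item capacity reported for both the monkeys and humans.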

That in itself is interesting, speaking as it does to the question of how human intelligence differs from other animals. But the real point of the exercise was to watch what is happening at the single neuron level. And here a surprise occurred.

That total capacity of around 4 items was composed of two independent, smaller capacities in the right and left halves of the visual space. What matters is how many objects fall within each hemifield (each half of the visual field). Each hemifield can only handle two objects. Thus, if the left side of the visual space contains three items, and the right side only one, information about the three items from the left side will be degraded. If the left side contains four items and the right side two, those two on the right side will be fine, but information from the four items on the left will be degraded.

Notice that the effect of more items than two in a hemifield is to decrease the total information from all the items in the hemifield — not to simply lose the additional items.

The behavioral evidence correlated with brain activity, with object information in LPFC neurons decreasing with increasing number of items in the same hemifield, but not the opposite hemifield, and the same for the intraparietal neurons (the latter are active during the delay; the former during the presentation).

The findings resolve a long-standing debate: does working memory function like slots, which we fill one by one with items until all are full, or as a pool that fills with information about each object, with some information being lost as the number of items increases? And now we know why there is evidence for both views: both contain truth. Each hemifield might be considered a slot, but each slot is a pool.

Another long-standing question is whether the capacity limit is a failure of perception or memory. These findings indicate that the problem is one of perception. The neural recordings showed information about the objects being lost even as the monkeys were viewing them, not later as they were remembering what they had seen.

All of this is important theoretically, but there are also immediate practical applications. The work suggests that information should be presented in such a way that it’s spread across the visual space — for example, dashboard displays should spread the displays evenly on both sides of the visual field; medical monitors that currently have one column of information should balance it in right and left columns; security personnel should see displays scrolled vertically rather than horizontally; working memory training should present information in a way that trains each hemisphere separately. The researchers are forming collaborations to develop these ideas.

[2335] Buschman, T. J., Siegel M., Roy J. E., & Miller E. K.
(2011).  Neural substrates of cognitive capacity limitations.
Proceedings of the National Academy of Sciences.

Previous research has found that practice improves your ability to distinguish visual images that vary along one dimension, and that this learning is specific to the images you train on and quite durable. A new study extends the finding to more natural stimuli that vary on multiple dimensions.

In the small study, 9 participants learned to identify faces and 6 participants learned to identify “textures” (noise patterns) over two hour-long sessions of 840 trials each, held on consecutive days. Faces were cropped to show only internal features and were only shown briefly, so this was not a particularly easy task. Participants were then tested over a year later (range: 10-18 months; average 13 and 15 months, respectively).

On the test, participants were shown both images from training and new images that closely resembled them. While accuracy rates were high for the original images, they plummeted for the very similar new images, indicating that, despite the length of time since they had seen the original images, participants still retained quite specific memories of them.

Although practice improved performance across nearly all items and for all people, there were significant differences between both participants and individual stimuli. More interestingly, individual differences (in both stimuli and people) were stable across sessions (e.g., if you were third-best on day 1, you were probably third-best on day 2 too, even though you were doing better). In other words, learning didn’t produce any qualitative changes in the representations of different items — practice had nearly the same effect on all; differences were rooted in initial difficulty of discriminating the pattern.

However, while it’s true that individual differences were stable, that doesn’t mean that every person improved their performance the exact same amount with the same amount of practice. Interestingly (and this is just from my eye-ball examination of the graphs), it looks like there was more individual variation among the group looking at noise patterns. This isn’t surprising. We all have a lot of experience discriminating faces; we’re all experts. This isn’t the case with the textures. For these, people had to ‘catch on’ to the features that were useful in discriminating patterns. You would expect more variability between people in how long it takes to work out a strategy, and how good that strategy is. Interestingly, three of the six people in the texture group actually performed better on the test than they had done on the second day of training, over a year ago. For the other three, and all nine of those in the face group, test performance was worse than it had been on the second day of training (but decidedly better than the first day).

The durability and specificity of this perceptual learning, the researchers point out, resembles that found in implicit memory and some types of sensory adaptation. It also indicates that such perceptual learning is not limited, as has been thought, to changes early in the visual pathway, but produces changes in a wider network of cortical neurons, particularly in the inferior temporal cortex.

The second, unrelated, study also bears on this issue of specificity.

We look at a scene and extract the general features — a crowd of people, violently riotous or riotously happy? — or we look at a scene and extract specific features that over time we use to build patterns about what goes with what. The first is called “statistical summary perception”; the second “statistical learning”.

A study designed to disentangle these two processes found that you can only do one or the other; you can’t derive both types of information at the same time. Thus, when people were shown grids of lines slanted to varying degrees, they could either assess whether the lines were generally leaning to the left or right, or they could learn to recognize pairs of lines that had been hidden repeatedly in the grids — but they couldn’t do both.

The fact that each of these tasks interfered with the other suggests that the two processes are fundamentally related.

Here’s a perception study with an intriguing twist. In my recent round-up of perception news I spoke of how images with people in them were more memorable, and of how some images ‘jump out’ at you. This study showed different images to each participant’s left and right eye at the same time, creating a contest between them. The amount of time it takes the participant to report seeing each image indicates the relative priority granted by the brain.

So, 66 college students were shown faces of people, and told something ‘gossipy’ about each one. The gossip could be negative, positive or neutral — for example, the person “threw a chair at a classmate”; “helped an elderly woman with her groceries”; “passed a man on the street.” These faces were then shown to one eye while the other eye saw a picture of a house.

The students had to press one button when they could see a face and another when they saw a house. As a control, some faces were used that the students had never seen. The students took the same length of time to register seeing the unknown faces and those about which they had been told neutral or positive information, but pictures of people about whom they had heard negative information registered around half a second quicker, and were looked at for longer.

A second experiment confirmed the findings, and showed that subjects looked at faces linked to negative gossip for longer than faces linked to descriptions of upsetting personal experiences.

[2283] Anderson, E., Siegel E. H., Bliss-Moreau E., & Barrett L F.
(2011).  The Visual Impact of Gossip.
Science. 332(6036), 1446 - 1448.

Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated ones. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn, while volunteers viewed brief flashes of object pairs (half of which were shown interacting with each other) and decided whether these glimpsed objects matched the presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical regardless of whether participants were looking at a color photo or a simple line drawing. That is, researchers could tell, with a fair amount of success, what category of scene a participant was looking at just from the pattern of brain activity in the ventral visual cortex. When the decoding made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, given its evolutionary importance) something we are very, very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — that of the reliability of the sensory information. Optimally, an observer would take into consideration the varying reliability of the items, giving the information different weights as a result of that perceived reliability. That weighted information would then be combined according to a specific integration rule. That had been calculated as the optimal process, and the performance of the participants matched that expectation.
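The weighting principle at work here is the classic inverse-variance rule from cue-combination models — a general sketch of the idea, not the paper’s actual decoder: less reliable items get proportionally less say in the combined estimate.

```python
# Hedged sketch of reliability-weighted integration: each noisy estimate is
# weighted by its reliability (inverse variance) before being combined.
# This illustrates the general optimal-integration rule, not the study's
# specific neural model.
def combine(estimates, variances):
    """Inverse-variance-weighted combination of noisy estimates."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Two items hint at a target orientation (in degrees); the low-contrast,
# high-variance item is down-weighted relative to the high-contrast one.
print(combine([8.0, 14.0], [1.0, 4.0]))  # → 9.2, pulled toward the
                                         # more reliable 8-degree estimate
```

The appeal of this rule is that, for independent Gaussian noise, it is provably the minimum-variance way to pool the evidence — which is why matching it is described as “near optimal” performance.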

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations, while taking into account the reliability of the different pieces of information, and we do this through the integration of information coming from different groups of neurons, each group of which is responding to different bits of information.

Another recent study into visual search found that, when people were preparing themselves to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 in more specific, imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this region may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards. Prospective mates; food; liquids.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli previously associated with reward continue to capture attention regardless of their relevance to the task at hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
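As a stand-in for the researchers’ algorithm (whose details aren’t given here), a minimal sketch of the general idea: fit a mapping from an image feature to memorability ratings, then use it to score unseen images. The feature values and ratings below are invented for illustration.

```python
# Hedged sketch: a one-feature least-squares stand-in for feature-based
# memorability prediction. The real model used many features; here the
# single (invented) feature is the fraction of the image occupied by people.
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented training data: person-fraction per image, and the memorability
# rating each image received in the repeat-detection experiment.
person_fraction = [0.0, 0.1, 0.3, 0.5, 0.7]
memorability    = [0.40, 0.48, 0.60, 0.72, 0.80]

slope, intercept = fit_line(person_fraction, memorability)

def predict(x):
    return slope * x + intercept

print(round(predict(0.4), 3))  # predicted memorability of an unseen image
```

The real algorithm would generalize this to a high-dimensional regression over many features (color statistics, edge distributions, object labels), but the predict-on-unseen-images workflow is the same.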

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

[2291] Kim, J. G., Biederman I., & Juan C-H.
(2011).  The Benefit of Object Interactions Arises in the Lateral Occipital Cortex Independent of Attentional Modulation from the Intraparietal Sulcus: A Transcranial Magnetic Stimulation Study.
The Journal of Neuroscience. 31(22), 8320 - 8324.

[2303] Walther, D. B., Chai B., Caddigan E., Beck D. M., & Fei-Fei L.
(2011).  Simple line drawings suffice for functional MRI decoding of natural scene categories.
Proceedings of the National Academy of Sciences. 108(23), 9661 - 9666.

[2292] Ma, W J., Navalpakkam V., Beck J. M., van den Berg R., & Pouget A.
(2011).  Behavior and neural basis of near-optimal visual search.
Nat Neurosci. 14(6), 783 - 790.

[2323] Peelen, M. V., & Kastner S.
(2011).  A neural basis for real-world visual search in human occipitotemporal cortex.
Proceedings of the National Academy of Sciences. 108(29), 12125 - 12130.

[2318] Anderson, B. A., Laurent P. A., & Yantis S.
(2011).  Value-driven attentional capture.
Proceedings of the National Academy of Sciences. 108(25), 10367 - 10371.

Isola, P., Xiao, J., Oliva, A. & Torralba, A. 2011. What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.

 

In the first of three experiments, 132 students were found to gesture more often when they had difficulties solving mental rotation problems. In the second experiment, 22 students were encouraged to gesture, while 22 were given no such encouragement, and a further 22 were told to sit on their hands to prevent gesturing. Those encouraged to gesture solved more mental rotation problems.

Interestingly, the amount of gesturing decreased with experience with these spatial problems, and when the gesture group were given new spatial visualization problems in which gesturing was prohibited, their performance was still better than that of the other participants. This suggests that the spatial computation supported by gestures becomes internalized. The third experiment extended the finding, showing that gesturing helps with a wider range of spatial visualization problems.

The researchers suggest that hand gestures may improve spatial visualization by helping a person keep track of an object in the mind as it is rotated to a new position, and by providing additional feedback and visual cues by simulating how an object would move if the hand were holding it.

[2140] Chu, M., & Kita S.
(2011).  The nature of gestures' beneficial role in spatial problem solving..
Journal of Experimental Psychology: General. 140(1), 102 - 116.

Full text of the article is available at http://www.apa.org/pubs/journals/releases/xge-140-1-102.pdf

Two experiments involving a total of 191 volunteers have investigated the parameters of sleep’s effect on learning. In the first experiment, people learned 40 pairs of words; in the second, subjects played a card game matching pictures of animals and objects, and also practiced sequences of finger taps. In both experiments, half the volunteers were told immediately after the tasks that they would be tested in 10 hours. Some of the participants slept during this interval.

As expected, those who slept performed better on all the tests (word recall, visuospatial, and procedural motor memory), but the really interesting bit is that only those who both slept and knew a test was coming showed improved recall. These people showed greater brain activity during deep or “slow-wave” sleep, and for these people only, the greater the activity during slow-wave sleep, the better their recall.

Those who didn’t sleep, however, were unaffected by whether they knew there would be a test or not.

Of course, this doesn’t mean you never remember things you don’t intend or want to remember! There is more than one process involved in encoding and storing our memories. However, it does confirm the importance of intention, and perhaps casts light on some of your learning failures.

[2148] Wilhelm, I., Diekelmann S., Molzow I., Ayoub A., Mölle M., & Born J.
(2011).  Sleep Selectively Enhances Memory Expected to Be of Future Relevance.
The Journal of Neuroscience. 31(5), 1563 - 1569.

Contrary to previous laboratory studies showing that children with autism often demonstrate outstanding visual search skills, new research indicates that in real-life situations, children with autism are unable to search effectively for objects. The study, involving 20 autistic children and 20 normally-developing children (aged 8-14), used a novel test room, with buttons on the floor that the children had to press to find a hidden target among multiple illuminated locations. Critically, 80% of these targets appeared on one side of the room.

Although autistics are generally believed to be more systematic, with greater sensitivity to regularities within a system, such behavior was not observed. Compared to other children, those with autism were slower to pick up on the regularities that would help them choose where to search. The slowness was not due to a lack of interest — all the children seemed to enjoy the game, and were keen to find the hidden targets.

The findings suggest that those with ASD have difficulties in applying the rules of probability to larger environments, particularly when they themselves are part of that environment.

[2055] Pellicano, E., Smith A. D., Cristino F., Hood B. M., Briscoe J., & Gilchrist I. D.
(2011).  Children with autism are neither systematic nor optimal foragers.
Proceedings of the National Academy of Sciences. 108(1), 421 - 426.

When stroke or brain injury damages a part of the brain controlling movement or sensation or language, other parts of the brain can learn to compensate for this damage. It’s been thought that this is a case of one region taking over the lost function. Two new studies show us the story is not so simple, and help us understand the limits of this plasticity.

In the first study, six stroke patients who had lost partial function in their prefrontal cortex, and six controls, were briefly shown a series of pictures to test their ability to remember images for a brief time (visual working memory), while electrodes recorded their EEGs. When the images were presented to the visual field processed by the damaged hemisphere, the intact prefrontal cortex (that is, the one not in the hemisphere directly receiving that visual input) responded within 300 to 600 milliseconds.

Visual working memory involves a network of brain regions, of which the prefrontal cortex is one important element, and the basal ganglia, deep within the brain, are another. In the second study, the researchers extended the experiment to patients with damage not only to the prefrontal cortex, but also to the basal ganglia. Those with basal ganglia damage had problems with visual working memory no matter which part of the visual field was shown the image.

In other words, basal ganglia lesions caused a broader network deficit, while prefrontal cortex lesions resulted in a more limited, and recoverable, deficit. The findings help us understand the different roles these brain regions play in attention, and emphasize how memory and attention are held in networks. They also show us that the plasticity compensating for brain damage is more dynamic and flexible than we realized, with intact regions stepping in on a case-by-case basis, very quickly, but only when the usual region fails.

[2034] Voytek, B., Davis M., Yago E., Barceló F., Vogel E. K., & Knight R. T.
(2010).  Dynamic Neuroplasticity after Human Prefrontal Cortex Damage.
Neuron. 68(3), 401 - 408.

[2033] Voytek, B., & Knight R. T.
(2010).  Prefrontal cortex and basal ganglia contributions to visual working memory.
Proceedings of the National Academy of Sciences. 107(42), 18167 - 18172.

An imaging study of 10 illiterates, 22 people who learned to read as adults and 31 who did so as children, has confirmed that the visual word form area (involved in linking sounds with written symbols) showed more activation in better readers, although everyone had similar levels of activation in that area when listening to spoken sentences. More importantly, it also revealed that this area was much less active among the better readers when they were looking at pictures of faces.

Other changes in activation patterns were also evident (for example, readers showed greater activation in the planum temporale in response to spoken speech), and most of the changes occurred even among those who acquired literacy in adulthood — showing that the brain re-structuring doesn’t depend on a particular time-window.

The finding of competition between face and word processing is consistent with the researchers’ theory that reading may have hijacked a neural network used to help us visually track animals, and raises the intriguing possibility that our face-perception abilities suffer in proportion to our reading skills.

In a study in which 14 volunteers were trained to recognize a faint pattern of bars on a computer screen, with the pattern made progressively fainter over the course of training, the volunteers became able to recognize fainter and fainter patterns over some 24 days of training, and this correlated with stronger EEG signals from their brains as soon as the pattern flashed on the screen. The findings indicate that learning modified the very earliest stage of visual processing.

The findings could help shape training programs for people who must learn to detect subtle patterns quickly, such as doctors reading X-rays or air traffic controllers monitoring radars, and may also help improve training for adults with visual deficits such as lazy eye.

The findings are also noteworthy for showing that learning is not confined to ‘higher-order’ processes, but can occur at even the most basic, unconscious and automatic, level of processing.

A study using new imaging techniques on macaque monkeys explains why we find it so easy to scan many items quickly when we’re focused on one attribute, and how we can be so blind to attributes and objects we’re not focused on.

The study reveals that a region of the visual cortex called V4, which is involved in visual object recognition, shows extensive compartmentalization. There are areas for specific colors; areas for specific orientations, such as horizontal or vertical. Other groups of neurons are thought to process more complex aspects of color and form, such as integrating different contours that are the same color, to achieve overall shape perception.

[1998] Tanigawa, H., Lu H. D., & Roe A. W.
(2010).  Functional organization for color and orientation in macaque V4.
Nat Neurosci. 13(12), 1542 - 1548.

Because people with damage to their hippocampus are sometimes impaired at remembering spatial information even over extremely short periods of time, it has been thought that the hippocampus is crucial for spatial information irrespective of whether the task is a working memory or a long-term memory task. This is in contrast to other types of information. In general, the hippocampus (and related structures in the mediotemporal lobe) is assumed to be involved in long-term memory, not working memory.

However, a new study involving four patients with damage to their mediotemporal lobes, has found that they were perfectly capable of remembering for one second the relative positions of three or fewer objects on a table — but incapable of remembering more. That is, as soon as the limits of working memory were reached, their performance collapsed. It appears, therefore, that there is, indeed, a fundamental distinction between working memory and long-term memory across the board, including the area of spatial information and spatial-object relations.

The findings also underscore how little working memory is really capable of on its own (although absolutely vital for what it does!) — in real life, long-term memory and working memory work in tandem.

An experiment with congenitally deaf cats has revealed how deaf or blind people might acquire other enhanced senses. The deaf cats showed only two specific enhanced visual abilities: visual localization in the peripheral field and visual motion detection. This was associated with the parts of the auditory cortex that would normally be used to pick up peripheral and moving sound (posterior auditory cortex for localization; dorsal auditory cortex for motion detection) being switched to processing this information for vision.

This suggests that only those abilities that have a counterpart in the unused part of the brain (auditory cortex for the deaf; visual cortex for the blind) can be enhanced. The findings also point to the plasticity of the brain. (As a side-note, did you know that apparently cats are the only animal besides humans that can be born deaf?)

The findings (and their broader implications) receive support from an imaging study involving 12 blind and 12 sighted people, who carried out an auditory localization task and a tactile localization task (reporting which finger was being gently stimulated). While the visual cortex was mostly inactive when the sighted people performed these tasks, parts of the visual cortex were strongly activated in the blind. Moreover, the accuracy of the blind participants directly correlated to the strength of the activation in the spatial-processing region of the visual cortex (right middle occipital gyrus). This region was also activated in the sighted for spatial visual tasks.

‘Face-blindness’ — prosopagnosia — is a condition I find fascinating, perhaps because I myself have a touch of it (it’s now recognized that this condition represents one end of a continuum rather than being an either/or proposition). The intriguing thing about this inability to recognize faces is that, in its extreme form, it can nevertheless exist side-by-side with quite normal recognition of other objects.

Prosopagnosia that is not the result of brain damage often runs in families, and a study of three family members with this condition has revealed that in some cases at least, the inability to remember faces has to do with failing to form a mental representation that abstracts the essence of the face, sans context. That is, despite being fully able to read facial expressions, attractiveness and gender from the face (indeed one of the family members is an artist who has no trouble portraying fully detailed faces), they couldn’t cope with changes in lighting conditions and viewing angles.

I’m reminded of the phenomenon of perfect pitch, which is characterized by an inability to generalize across acoustically similar tones, so that an A in a different key is a completely different note. Interestingly, like prosopagnosia, perfect pitch is now thought to be more common than previously believed (recognition of it is of course limited by the fact that some musical expertise is generally needed to reveal it). This inability to abstract or generalize is also a phenomenon of eidetic memory, and I have spoken before of the perils of this.

(Note: A fascinating account of what it is like to be face-blind, from a person with the condition, can be found at: http://www.choisser.com/faceblind/)

A couple of years ago I reported on a finding that walking in the park, and (most surprisingly) simply looking at photos of natural scenes, could improve memory and concentration (see below). Now a new study helps explain why. The study examined brain activity while 12 male participants (average age 22) looked at images of tranquil beach scenes and non-tranquil motorway scenes. On half the presentations they concurrently listened to the same sound associated with both scenes (waves breaking on a beach and traffic moving on a motorway produce a similar sound, perceived as a constant roar).

Intriguingly, the natural, tranquil scenes produced significantly greater effective connectivity between the auditory cortex and medial prefrontal cortex, and between the auditory cortex and posterior cingulate gyrus, temporoparietal cortex and thalamus. It’s of particular interest that this is an example of visual input affecting connectivity of the auditory cortex, in the presence of identical auditory input (which was the focus of the research). But of course the take-home message for us is that the benefits of natural scenes for memory and attention have been supported.

Previous study:

Many of us who work indoors are familiar with the benefits of a walk in the fresh air, but a new study gives new insight into why, and how, it works. In two experiments, researchers found memory performance and attention spans improved by 20% after people spent an hour interacting with nature. The intriguing finding was that this effect was achieved not only by walking in the botanical gardens (versus walking along main streets of Ann Arbor), but also by looking at photos of nature (versus looking at photos of urban settings). The findings are consistent with a theory that natural environments are better at restoring attention abilities, because they provide a more coherent pattern of stimulation that requires less effort, as opposed to urban environments, which provide complex and often confusing stimulation that captures attention dramatically and requires directed attention (e.g., to avoid being hit by a car).

[1867] Hunter, M. D., Eickhoff S. B., Pheasant R. J., Douglas M. J., Watts G. R., Farrow T. F. D., et al.
(2010).  The state of tranquility: Subjective perception is shaped by contextual modulation of auditory connectivity.
NeuroImage. 53(2), 611 - 618.

[279] Berman, M. G., Jonides J., & Kaplan S.
(2008).  The cognitive benefits of interacting with nature.
Psychological Science: A Journal of the American Psychological Society / APS. 19(12), 1207 - 1212.

Because male superiority in mental rotation appears to be evident at a very young age, it has been suggested that testosterone may be a factor. To assess whether females exposed to higher levels of prenatal testosterone perform better on mental rotation tasks than females with lower levels of testosterone, researchers compared mental rotation task scores between twins from same-sex and opposite-sex pairs.

It was found that females with a male co-twin scored higher than did females with a female co-twin (there was no difference in scores between males from opposite-sex and same-sex pairs). Of course, this doesn’t prove that the differences are produced in the womb; it may be that girls with a male twin engage in more male-typical activities. However, the association remained after allowing for computer game playing experience.

The study involved 804 twins, average age 22, of whom 351 females were from same-sex pairs and 120 from opposite-sex pairs. There was no significant difference between females from identical same-sex pairs compared to fraternal same-sex pairs.

* Please do note that ‘innate male superiority’ does NOT mean that all men are inevitably better than all women at this very specific task! My words simply reflect the evidence that the tendency of males to be better at mental rotation is found in infants as young as 3 months.

Following a monkey study that found training in spatial memory could raise females to the level of males, and human studies suggesting that video games might help reduce gender differences in spatial processing (see below for these), a new study shows that training in spatial skills can eliminate the gender difference in young children. Spatial ability, along with verbal skills, is one of the two most-cited cognitive differences between the sexes, for the reason that these two appear to be the most robust.

This latest study involved 116 first graders, half of whom were put in a training program that focused on expanding working memory, perceiving spatial information as a whole rather than concentrating on details, and thinking about spatial geometric pictures from different points of view. The other children took part in a substitute training program, as a control group. Initial gender differences in spatial ability disappeared for those who had been in the spatial training group after only eight weekly sessions.

Previously:

A study of 90 adult rhesus monkeys found young-adult males had better spatial memory than females, but peaked early. By old age, male and female monkeys had about the same performance. This finding is consistent with reports suggesting that men show greater age-related cognitive decline relative to women. A second study of 22 rhesus monkeys showed that in young adulthood, simple spatial-memory training did not help males but dramatically helped females, raising their performance to the level of young-adult males and wiping out the gender gap.

Another study showing that expert video gamers have improved mental rotation skills, visual and spatial memory, and multitasking skills has led researchers to conclude that training with video games may serve to reduce gender differences in visual and spatial processing, and some of the cognitive declines that come with aging.

Reports on cognitive decline with age have, over the years, come out with two general findings: older adults do significantly worse than younger adults; older adults are just as good as younger adults. Part of the problem is that there are two different approaches to studying this, each with their own specific bias. You can keep testing the same group of people as they get older — the problem with this is that they get more and more practiced, which mitigates the effects of age. Or you can test different groups of people, comparing older with younger — but cohort differences (e.g., educational background) may disadvantage the older generations. There is also argument about when it starts. Some studies suggest we start declining in our 20s, others in our 60s.

One of my favorite cognitive aging researchers has now tried to find the true story using data from the Virginia Cognitive Aging Project involving nearly 3800 adults aged 18 to 97 tested on reasoning, spatial visualization, episodic memory, perceptual speed and vocabulary, with 1616 tested at least twice. This gave a nice pool for both cross-sectional and longitudinal comparison (retesting ranged from 1 to 8 years and averaged 2.5 years).

From this data, Salthouse has estimated the size of practice effects and found them to be as large as or larger than the annual cross-sectional differences, although they varied depending on the task and the participant’s age. In general the practice effect was greater for younger adults, possibly because younger people learn better.

Once the practice-related "bonus points" were removed, age trends were flattened, with much less positive changes occurring at younger ages, and slightly less negative changes occurring at older ages. This suggests that change in cognitive ability over an adult lifetime (ignoring the effects of experience) is smaller than we thought.
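The adjustment is, at heart, a subtraction of estimated retest gains from observed longitudinal change. A minimal sketch with invented numbers (not Salthouse's actual estimates) shows how removing the practice bonus can flatten an apparently positive trend at younger ages:

```python
# Hypothetical numbers, for illustration only -- not Salthouse's data.
# Observed longitudinal change (score units per year) mixes true
# maturational change with a practice (retest) bonus from prior testing.
observed_change = {'20s': +0.10, '40s': +0.02, '60s': -0.03, '80s': -0.08}
practice_bonus  = {'20s': +0.12, '40s': +0.05, '60s': +0.03, '80s': +0.02}

# Removing the bonus: what remains is the estimated maturational change.
adjusted = {age: round(observed_change[age] - practice_bonus[age], 2)
            for age in observed_change}
print(adjusted)
```

With these invented values, the apparent improvement in the youngest group disappears once the (larger) practice bonus is removed, while the decline in the oldest group becomes only slightly steeper, mirroring the flattening described above.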

While brain training programs can certainly improve your ability to do the task you’re practicing, there has been little evidence that this transfers to other tasks. In particular, the holy grail has been very broad transfer, through improvement in working memory. While there has been some evidence of this in pilot programs for children with ADHD, a new study is the first to show such improvement in older adults using a commercial brain training program.

A study involving 30 healthy adults aged 60 to 89 has demonstrated that ten hours of training on a computer game designed to boost visual perception improved perceptual abilities significantly, and also increased the accuracy of their visual working memory to the level of younger adults. There was a direct link between improved performance and changes in brain activity in the visual association cortex.

The computer game was one of those developed by Posit Science. Memory improvement was measured about one week after the end of training. The improvement did not, however, withstand multi-tasking, which is a particular problem for older adults. The participants, half of whom underwent the training, were college educated. The training challenged players to discriminate between two different shapes of sine waves (S-shaped patterns) moving across the screen. The memory test (which was performed before and after training) involved watching dots move across the screen, followed by a short delay and then re-testing for the memory of the exact direction the dots had moved.

Rodent studies have demonstrated the existence of specialized neurons involved in spatial memory. These ‘grid cells’ represent where an animal is located within its environment, firing in patterns that show up as geometrically regular, triangular grids when plotted on a map of a navigated surface. Now for the first time, evidence for these cells has been found in humans. Moreover, those with the clearest signs of grid cells performed best in a virtual reality spatial memory task, suggesting that the grid cells help us to remember the locations of objects. These cells, located particularly in the entorhinal cortex, are also critical for autobiographical memory, and are amongst the first to be affected by Alzheimer's disease, perhaps explaining why getting lost is one of the most common early symptoms.

[378] Doeller, C. F., Barry C., & Burgess N.
(2010).  Evidence for grid cells in a human memory network.
Nature. 463(7281), 657 - 661.

Because Nicaraguan Sign Language is only about 35 years old, and still evolving rapidly, the language used by the younger generation is more complex than that used by the older generation. This enables researchers to compare the effects of language ability on other abilities. A recent study found that younger signers (in their 20s) performed better than older signers (in their 30s) on two spatial cognition tasks that involved finding a hidden object. The findings provide more support for the theory that language shapes how we think and perceive.

[1629] Pyers, J. E., Shusterman A., Senghas A., Spelke E. S., & Emmorey K.
(2010).  Evidence from an emerging sign language reveals that language supports spatial cognition.
Proceedings of the National Academy of Sciences. 107(27), 12116 - 12120.

A new study suggests that our memory for visual scenes may depend not on how much attention we’ve paid to them or on what a scene contains, but on when the scene is presented. In the study, participants performed an attention-demanding letter-identification task while also viewing a rapid sequence of full-field photographs of urban and natural scenes. They were then tested on their memory of the scenes. It was found that, even though their attention had been focused on the target letter, only those scenes presented at the same time as a target letter (rather than a distractor letter) were reliably remembered. The results point to a brain mechanism that automatically encodes certain visual features into memory at behaviorally relevant points in time, regardless of the spatial focus of attention.

[321] Lin, J. Y., Pype A. D., Murray S. O., & Boynton G. M.
(2010).  Enhanced Memory for Scenes Presented at Behaviorally Relevant Points in Time.
PLoS Biol. 8(3), e1000337 - e1000337.

Full text available at doi:10.1371/journal.pbio.1000337

Visual working memory, which can only hold three or four objects at a time, is thought to be based on synchronized brain activity across a network of brain regions. Now a new study has allowed us to get a better picture of how exactly that works. Both the maintenance and the contents of working memory were connected to brief synchronizations of neural activity in alpha, beta and gamma brainwaves across frontoparietal regions that underlie executive and attentional functions and visual areas in the occipital lobe. Most interestingly, individual VWM capacity could be predicted by synchrony in a network centered on the intraparietal sulcus.

[458] Palva, M. J., Monto S., Kulashekhar S., & Palva S.
(2010).  Neuronal synchrony reveals working memory networks and predicts individual memory capacity.
Proceedings of the National Academy of Sciences. 107(16), 7580 - 7585.

An intriguing set of experiments showing how you can improve perception by manipulating mindset found significantly improved vision when:

  • an eye chart was arranged in reverse order (the letters getting progressively larger rather than smaller);
  • participants were given eye exercises and told their eyes would improve with practice;
  • participants were told athletes have better vision, and were then asked to perform either jumping jacks or skipping (the latter seen as less athletic);
  • participants flew a flight simulator, compared to pretending to fly a supposedly broken simulator (pilots are believed to have good vision).

[158] Langer, E., Djikic M., Pirson M., Madenci A., & Donohue R.
(2010).  Believing Is Seeing.
Psychological Science. 21(5), 661 - 666.

There is a pervasive myth that every detail of every experience we've ever had is recorded in memory. It is interesting to note, therefore, that even very familiar objects, such as coins, are rarely remembered in accurate detail [1].

We see coins every day, but we don't see them. What we remember about coins are global attributes, such as size and color, not the little details, such as which way the head is pointing, what words are written on it, etc. Such details are apparently noted only if the person's attention is specifically drawn to them.

There are several interesting conclusions that can be drawn from studies that have looked at the normal encoding of familiar objects:

  • you don't automatically get more and more detail each time you see a particular object
  • only a limited amount of information is extracted the first time you see the object
  • the various features aren't equally important
  • normally, global rather than detail features are most likely to be remembered

In the present study, four experiments investigated people's memories for drawings of oak leaves. Two different types of oak leaves were used - "red oak" and "white oak". Subjects were shown two drawings for either 5 or 60 seconds. The differences between the two oak leaves varied, either:

  • globally (red vs white leaf), or
  • in terms of a major feature (the same type of leaf, but varying in that two major lobes are combined in one leaf but not in the other), or
  • in terms of a minor feature (one small lobe eliminated in one but not in the other).

According to the principle of top-down encoding, the time needed to detect a difference between stimuli that differ in only one critical feature will increase as the level of that feature decreases (from a global to a major specific to a lower-grade specific feature).

The results of this study supported the view that top-down encoding occurs, and indicate that, unless attention is explicitly directed to specific features, the likelihood of encoding such features decreases the lower their structural level. One of the experiments tested whether the size of the feature made a difference, and found that it didn't.

References

1. Jones, G.V. 1990. Misremembering a familiar object: When left is not right. Memory & Cognition, 18, 174-182.

Jones, G.V. & Martin, M. 1992. Misremembering a familiar object: Mnemonic illusion, not drawing bias. Memory & Cognition, 20, 211-213.

Nickerson, R.S. & Adams, M.J. 1979. Long-term memory of a common object. Cognitive Psychology, 11, 287-307.

Modigliani, V., Loverock, D.S. & Kirson, S.R. 1998. Encoding features of complex and unfamiliar objects. American Journal of Psychology, 111, 215-239.

Older news items (pre-2010) brought over from the old website

More light shed on distinction between long and short-term memory

The once clear-cut distinction between long- and short-term memory has increasingly come under fire in recent years. A new study involving patients with a specific form of epilepsy called 'temporal lobe epilepsy with bilateral hippocampal sclerosis' has now clarified the distinction. The patients, who all had severely compromised hippocampi, were asked to try to memorize photographic images depicting normal scenes. Their memory was tested and brain activity recorded after five seconds or 60 minutes. As expected, the patients could not remember the images after 60 minutes, but at five seconds they could distinguish images they had seen before from new ones. However, their memory was poor when asked to recall details about the images. Brain activity showed that short-term memory for details required the coordinated activity of a network of visual and temporal brain areas, whereas standard short-term memory drew on a different network, involving frontal and parietal regions, and independent of the hippocampus.

[996] Cashdollar, N., Malecki U., Rugg-Gunn F. J., Duncan J. S., Lavie N., & Duzel E.
(2009).  Hippocampus-dependent and -independent theta-networks of active maintenance.
Proceedings of the National Academy of Sciences. 106(48), 20493 - 20498.

http://www.eurekalert.org/pub_releases/2009-11/ucl-tal110909.php

Individual differences in working memory capacity depend on two factors

A new computer model adds to our understanding of working memory, by showing that working memory can be increased by the action of the prefrontal cortex in reinforcing activity in the parietal cortex (where the information is temporarily stored). The idea is that the prefrontal cortex sends out a brief stimulus to the parietal cortex that generates a reverberating activation in a small subpopulation of neurons, while inhibitory interactions with neurons further away prevents activation of the entire network. This lateral inhibition is also responsible for limiting the mnemonic capacity of the parietal network (i.e. provides the limit on your working memory capacity). The model has received confirmatory evidence from an imaging study involving 25 volunteers. It was found that individual differences in performance on a short-term visual memory task were correlated with the degree to which the dorsolateral prefrontal cortex was activated and its interconnection with the parietal cortex. In other words, your working memory capacity is determined by both storage capacity (in the posterior parietal cortex) and prefrontal top-down control. The findings may help in the development of ways to improve working memory capacity, particularly when working memory is damaged.

[441] Edin, F., Klingberg T., Johansson P., McNab F., Tegner J., & Compte A.
(2009).  Mechanism for top-down control of working memory capacity.
Proceedings of the National Academy of Sciences. 106(16), 6802 - 6807.

http://www.eurekalert.org/pub_releases/2009-04/i-id-aot040109.php

Some short-term memories die suddenly, no fading

We don’t remember everything; the idea of memory as a video faithfully recording every aspect of everything we have ever experienced is a myth. Every day we look at the world and hold much of what we see for no more than a few seconds before discarding it as no longer needed. Until now it was thought that these fleeting visual memories faded away, gradually becoming more imprecise. Now it seems that such memories remain quite accurate as long as they exist (about 4 seconds), and then simply vanish instantly. The study involved testing memory for shapes and colors in 12 adults, and it was found that the memory for a shape or color was either there or not there — the answers either correct or random guesses. The probability of remembering correctly decreased between 4 and 10 seconds.
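This either-correct-or-guessing pattern is naturally captured by a mixture model: each response is either a precise report of the remembered item or a uniform random guess, with the guessing rate (not the precision) growing as the delay lengthens. The sketch below simulates such a model; the parameters (availability slope, report precision) are invented for illustration, not fitted to the study's data:

```python
import random
import statistics

def respond(delay_s, rng):
    """One simulated report of a remembered feature, as an error in
    degrees on a response wheel. 'Sudden death' account: the item is
    either still in memory (reported with fixed precision) or gone
    (a uniform random guess). Parameters are invented for illustration."""
    p_in_memory = max(0.0, 0.9 - 0.08 * delay_s)  # availability falls with delay
    if rng.random() < p_in_memory:
        return rng.gauss(0, 15)        # precise report, constant SD
    return rng.uniform(-180, 180)      # pure guess

rng = random.Random(0)
for delay in (1, 4, 10):
    mean_err = statistics.mean(abs(respond(delay, rng)) for _ in range(5000))
    print(delay, round(mean_err, 1))
```

In this model the mean error grows with delay only because guesses become more frequent; the precision of the reports that do come from memory never changes, which is the 'sudden death' signature as opposed to gradual fading.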

[941] Zhang, W., & Luck S. J.
(2009).  Sudden death and gradual decay in visual working memory.
Psychological Science: A Journal of the American Psychological Society / APS. 20(4), 423 - 428.

http://www.eurekalert.org/pub_releases/2009-04/uoc--ssm042809.php

Where visual short-term memory occurs

Working memory used to be thought of as a separate ‘store’, and now tends to be regarded more as a process, a state of mind. Such a conception suggests that it may occur in the same regions of the brain as long-term memory, but in a pattern of activity that is somehow different from LTM. However, there has been little evidence for that so far. Now a new study has found that information in WM may indeed be stored via sustained, but low, activity in sensory areas. The study involved volunteers being shown an image for one second and instructed to remember either the color or the orientation of the image. After then looking at a blank screen for 10 seconds, they were shown another image and asked whether it was the identical color/orientation as the first image. Brain activity in the primary visual cortex was scanned during the 10 second delay, revealing that areas normally involved in processing color and orientation were active during that time, and that the pattern only contained the targeted information (color or orientation).

[1032] Serences, J. T., Ester E. F., Vogel E. K., & Awh E.
(2009).  Stimulus-Specific Delay Activity in Human Primary Visual Cortex.
Psychological Science. 20(2), 207 - 214.

http://www.eurekalert.org/pub_releases/2009-02/afps-sih022009.php
http://www.eurekalert.org/pub_releases/2009-02/uoo-dsm022009.php

The finding is consistent with that of another study published this month, in which participants were shown two examples of simple striped patterns at different orientations and told to hold either one or the other of the orientations in their mind while being scanned. Orientation is one of the first and most basic pieces of visual information coded and processed by the brain. Using a new decoding technique, researchers were able to predict with 80% accuracy which of the two orientations was being remembered 11 seconds after seeing a stimulus, from the activity patterns in the visual areas. This was true even when the overall level of activity in these visual areas was very weak, no different than looking at a blank screen.

[652] Harrison, S. A., & Tong F.
(2009).  Decoding reveals the contents of visual working memory in early visual areas.
Nature. 458(7238), 632 - 635.

http://www.eurekalert.org/pub_releases/2009-02/vu-edi021709.php
http://www.physorg.com/news154186809.html
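The decoding logic can be sketched in miniature. The toy below is purely illustrative, not the authors' fMRI pipeline: the simulated "voxels", the signal strength, and the nearest-centroid classifier are all assumptions made up for the demonstration. The point it shows is the same, though: a weak but consistent spatial pattern can be decoded even when mean activity carries no information about the remembered orientation.

```python
import random

random.seed(0)
N_VOXELS, N_TRAIN, N_TEST = 50, 100, 100

# Each of the two remembered orientations evokes a distinct spatial pattern.
pattern = {0: [random.gauss(0, 1) for _ in range(N_VOXELS)],
           1: [random.gauss(0, 1) for _ in range(N_VOXELS)]}

def trial(label):
    """One trial's activity: a weak (0.3x) signal buried in unit noise."""
    return [0.3 * p + random.gauss(0, 1) for p in pattern[label]]

# "Train" a decoder: average the training trials for each orientation.
centroids = {}
for label in (0, 1):
    trials = [trial(label) for _ in range(N_TRAIN)]
    centroids[label] = [sum(v) / N_TRAIN for v in zip(*trials)]

def classify(x):
    """Nearest-centroid decoding of which orientation was held in mind."""
    def sq_dist(label):
        return sum((a - b) ** 2 for a, b in zip(x, centroids[label]))
    return min(centroids, key=sq_dist)

correct = 0
for _ in range(N_TEST // 2):
    for label in (0, 1):
        correct += classify(trial(label)) == label
accuracy = correct / N_TEST
print(f"decoding accuracy: {accuracy:.2f}")
```

Note that the per-voxel signal is only a third the size of the noise, so no single voxel is informative on its own; it is the pattern across all fifty that the decoder exploits.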

Even toddlers can ‘chunk’ information for better remembering

We all know it’s easier to remember a long number (say, a phone number) when it’s broken into chunks. Now a study has found that we don’t need to be taught this; it appears to come naturally to us. The study showed that 14-month-old children could track only three hidden objects at once in the absence of any grouping cues, demonstrating the standard limit of working memory. With categorical or spatial cues, however, the children could remember more: for example, four toys when they consisted of two groups of two familiar objects (cats and cars), or six identical orange balls grouped into three pairs.

[196] Feigenson, L., & Halberda J.
(2008).  From the Cover: Conceptual knowledge increases infants' memory capacity.
Proceedings of the National Academy of Sciences. 105(29), 9926 - 9930.

http://www.eurekalert.org/pub_releases/2008-07/jhu-etg071008.php

Full text available at http://www.pnas.org/content/105/29/9926.abstract?sid=c01302b6-cd8e-4072-842c-7c6fcd40706f
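The chunking idea — that grouping cues let more items fit within the same three-slot limit — can be expressed as a toy model. Only the three-slot limit comes from the study; the item names and the rule that a shared category collapses items into one chunk are illustrative assumptions.

```python
WM_SLOT_LIMIT = 3  # the standard working-memory limit reported in the study

def n_chunks(items):
    """Items sharing a category cue collapse into a single chunk."""
    return len({category for category, _ in items})

def can_track(items):
    """Items are trackable only if their chunks fit within the slot limit."""
    return n_chunks(items) <= WM_SLOT_LIMIT

# Four distinct toys with no shared category: four chunks, over the limit.
distinct = [("doll", "a"), ("block", "b"), ("ball", "c"), ("spoon", "d")]
# Four toys in two familiar categories: only two chunks, easily tracked.
grouped = [("cat", "tabby"), ("cat", "siamese"), ("car", "red"), ("car", "blue")]

print(can_track(distinct))  # False: four chunks exceed three slots
print(can_track(grouped))   # True: two category chunks fit
```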

Working memory has a fixed number of 'slots'

A study that showed volunteers a pattern of colored squares for a tenth of a second, and then asked them to recall the color of one of the squares by clicking on a color wheel, has found that working memory acts like a high-resolution camera, retaining three or four features in high detail. Unlike a digital camera, however, it appears that you can’t increase the number of images you can store by lowering the resolution. The resolution appears to be constant for a given individual. However, individuals do differ in the resolution of each feature and the number of features that can be stored.

[278] Zhang, W., & Luck S. J.
(2008).  Discrete fixed-resolution representations in visual working memory.
Nature. 453(7192), 233 - 235.

http://www.physorg.com/news126432902.html
http://www.eurekalert.org/pub_releases/2008-04/uoc--wmh040208.php

And another study of working memory has attempted to overcome the difficulties involved in measuring a person’s working memory capacity (ensuring that no ‘chunking’ of information takes place), and concluded that people do indeed have a fixed number of ‘slots’ in their working memory. In the study, participants were shown an array of two, five or eight small, scattered, different-colored squares, which was then replaced by an array of the same squares without the colors. The participant was then shown a single color in one location and asked to indicate whether the color in that spot had changed from the original array.

[437] Rouder, J. N., Morey R. D., Cowan N., Zwilling C. E., Morey C. C., & Pratte M. S.
(2008).  An assessment of fixed-capacity models of visual working memory.
Proceedings of the National Academy of Sciences. 105(16), 5975 - 5979.

http://www.eurekalert.org/pub_releases/2008-04/uom-mpd042308.php
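Change-detection studies of this kind commonly estimate capacity with Cowan's K formula, K = N × (hit rate − false-alarm rate): if K of the N items are in memory, a change at the probed location is noticed whenever that item is one of the K held, and otherwise the observer guesses. Whether this paper used exactly this variant isn't stated here, and the performance numbers below are hypothetical.

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K: estimated number of items held in working memory.

    Derived from assuming K of N items are stored, so a change is detected
    with probability K/N plus guessing; solving gives K = N * (h - f).
    """
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical hit/false-alarm rates at the study's three set sizes.
for n, h, f in [(2, 0.98, 0.02), (5, 0.80, 0.20), (8, 0.60, 0.22)]:
    print(f"set size {n}: K = {cowan_k(n, h, f):.1f}")
```

With numbers like these, K rises from about 2 to about 3 and then plateaus as set size grows, which is the signature of a fixed number of slots.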

Impressive feats in visual memory

In light of all the recent experiments emphasizing how small our short-term visual memory is, it’s comforting to be reminded that, nevertheless, we have an amazing memory for pictures — in the right circumstances. Those circumstances include looking at images of familiar objects, as opposed to abstract artworks, and being motivated to do well (the best-scoring participant was given a cash prize). In the study, 14 people aged 18 to 40 viewed 2,500 images, one at a time, for a few seconds. Afterwards, they were shown pairs of images and asked to select the exact image they had seen earlier. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Stunningly, participants on average chose the correct image 92%, 88% and 87% of the time in the three pairing conditions, respectively.

[870] Brady, T. F., Konkle T., Alvarez G. A., & Oliva A.
(2008).  Visual long-term memory has a massive storage capacity for object details.
Proceedings of the National Academy of Sciences. 105(38), 14325 - 14329.

Full text available at http://www.pnas.org/content/105/38/14325.abstract

Attention grabbers snatch lion's share of visual memory

It’s long been thought that when we look at a visually "busy" scene, we are only able to store a very limited number of objects in our visual short-term or working memory. For some time, this figure was believed to be four or five objects, but a recent report suggested it could be as low as two. However, a new study reveals that although it might not be large, it’s more flexible than we thought. Rather than being restricted to a limited number of objects, it can be shared out across the whole image, with more memory allocated for objects of interest and less for background detail. What’s of interest might be something we’ve previously decided on (i.e., something we’re searching for), or something that grabs our attention. Eye movements also reveal how brief our visual memory is, and that what our eyes are looking at isn’t necessarily what we’re ‘seeing’: when people were asked to look at objects in a particular sequence, but the final object disappeared before their eyes moved on to it, observers could more accurately recall the location of the object they were about to look at than the one they had just been looking at.

[1398] Bays, P. M., & Husain M.
(2008).  Dynamic shifts of limited working memory resources in human vision.
Science (New York, N.Y.). 321(5890), 851 - 854.

http://www.physorg.com/news137337380.html

More on how short-term memory works

It’s been established that visual working memory is severely limited — that, on average, we can only be aware of about four objects at one time. A new study explored the idea that this capacity might be affected by complexity, that is, that we can think about fewer complex objects than simple objects. It found that complexity did not affect memory capacity. It also found that some people have clearer memories of the objects than other people, and that this is not related to how many items they can remember. That is, a high IQ is associated with the ability to hold more items in working memory, but not with the clarity of those items.

[426] Awh, E., Barton B., & Vogel E. K.
(2007).  Visual working memory represents a fixed number of items regardless of complexity.
Psychological Science: A Journal of the American Psychological Society / APS. 18(7), 622 - 628.

http://www.eurekalert.org/pub_releases/2007-07/uoo-htb071107.php
http://www.physorg.com/news103472118.html

Support for labeling as an aid to memory

A study involving an amnesia-inducing drug has shed light on how we form new memories. Participants in the study viewed words, photographs of faces and landscapes, and abstract pictures one at a time on a computer screen. Twenty minutes later, they were shown the words and images again, one at a time. Half of the images they had seen earlier, and half were new. They were then asked whether they recognized each one. For one session they were given midazolam, a drug used to relieve anxiety during surgical procedures that also causes short-term anterograde amnesia, and for one session they were given a placebo.
It was found that the participants' memory in the placebo condition was best for words and worst for abstract images. Midazolam impaired the recognition of words the most, impaired memory for the photos less, and impaired recognition of abstract pictures hardly at all. The finding reinforces the idea that the ability to recollect depends on the ability to link the stimulus to a context, and that unitization increases the chances of this linking occurring. While the words were very concrete and therefore easy to link to the experimental context, the photographs were of unknown people and unknown places and thus hard to distinctively label. The abstract images were also unfamiliar and not unitized into something that could be described with a single word.

[1216] Reder, L. M., Oates J. M., Thornton E. R., Quinlan J. J., Kaufer A., & Sauer J.
(2006).  Drug-Induced Amnesia Hurts Recognition, but Only for Memories That Can Be Unitized.
Psychological science : a journal of the American Psychological Society / APS. 17(7), 562 - 567.

http://www.sciencedaily.com/releases/2006/07/060719092800.htm

Discovery disproves simple concept of memory as 'storage space'

The idea of memory “capacity” has become more and more eroded over the years, and now a new technique for measuring brainwaves seems to finally knock the idea on the head. Consistent with recent research suggesting that a crucial problem with aging is a growing inability to ignore distracting information, this new study shows that visual working memory depends on your ability to filter out irrelevant information. Individuals have long been characterized as having a “high” working memory capacity or a “low” one — the assumption has been that these people differ in their storage capacity. Now it seems it’s all about a neural mechanism that controls what information gets into awareness. People with high capacity have a much better ability to ignore irrelevant information.

[1091] Vogel, E. K., McCollough A. W., & Machizawa M. G.
(2005).  Neural measures reveal individual differences in controlling access to working memory.
Nature. 438(7067), 500 - 503.

http://www.eurekalert.org/pub_releases/2005-11/uoo-dds111805.php

Language cues help visual learning in children

A study of 4-year-old children has found that language, in the form of specific kinds of sentences spoken aloud, helped them remember mirror image visual patterns. The children were shown cards bearing red and green vertical, horizontal and diagonal patterns that were mirror images of one another. When asked to choose the card that matched the one previously seen, the children tended to mistake the original card for its mirror image, showing how difficult it was for them to remember both color and location. However, if they were told, when viewing the original card, a mnemonic cue such as ‘The red part is on the left’, they performed “reliably better”.

The paper was presented by a graduate student at the 17th annual meeting of the American Psychological Society, held May 26-29 in Los Angeles.

http://www.eurekalert.org/pub_releases/2005-05/jhu-lc051705.php

An advantage of age

A study comparing the ability of young and older adults to indicate which direction a set of bars moved across a computer screen has found that although younger participants were faster when the bars were small or low in contrast, when the bars were large and high in contrast, the older people were faster. The results suggest that the ability of one neuron to inhibit another is reduced as we age (inhibition helps us find objects within clutter, but makes it hard to see the clutter itself). The loss of inhibition as we age has previously been seen in connection with cognition and speech studies, and is reflected in our greater inability to tune out distraction as we age. Now we see the same process in vision.

[1356] Betts, L. R., Taylor C. P., Sekuler A. B., & Bennett P. J.
(2005).  Aging Reduces Center-Surround Antagonism in Visual Motion Processing.
Neuron. 45(3), 361 - 366.

http://psychology.plebius.org/article.htm?article=739
http://www.eurekalert.org/pub_releases/2005-02/mu-opg020305.php

Why working memory capacity is so limited

There’s an old parlor game whereby someone brings into a room a tray covered with a number of different small objects, which they show to the people in the room for one minute, before whisking it away again. The participants are then required to write down as many objects as they can remember. For those who perform badly at this type of thing, some consolation from researchers: it’s not (entirely) your fault. We do actually have a very limited storage capacity for visual short-term memory.
Now visual short-term memory is of course vital for a number of functions, and reflecting this, there is an extensive network of brain structures supporting this type of memory. However, a new imaging study suggests that the limited storage capacity is due mainly to just one of these regions: the posterior parietal cortex. An interesting distinction can be made here between registering information and actually “holding it in mind”. Activity in the posterior parietal cortex strongly correlated with the number of objects the subjects were able to remember, but only if the participants were asked to remember. In contrast, regions of the visual cortex in the occipital lobe responded differently to the number of objects even when participants were not asked to remember what they had seen.

[598] Todd, J. J., & Marois R.
(2004).  Capacity limit of visual short-term memory in human posterior parietal cortex.
Nature. 428(6984), 751 - 754.

http://www.eurekalert.org/pub_releases/2004-04/vu-slo040704.php
http://tinyurl.com/2jzwe (Telegraph article)

Brain signal predicts working memory capacity

Our visual short-term memory may have an extremely limited capacity, but some people do have a greater capacity than others. A new study reveals that an individual's capacity for such visual working memory can be predicted by his or her brainwaves. In the study, participants briefly viewed a picture containing colored squares, followed by a one-second delay, and then a test picture. They pressed buttons to indicate whether the test picture was identical to the one seen earlier or differed from it by one color. The more squares a subject could correctly identify having just seen, the greater his/her visual working memory capacity, and the higher the spike of corresponding brain activity — up to a point. Neural activity of subjects with poorer working memory scores leveled off early, showing little or no increase when the number of squares to remember increased from 2 to 4, while those with high capacity showed large increases. Subjects' capacity averaged 2.8 squares.

[1154] Vogel, E. K., & Machizawa M. G.
(2004).  Neural activity predicts individual differences in visual working memory capacity.
Nature. 428(6984), 748 - 751.

http://www.eurekalert.org/pub_releases/2004-04/niom-bsp041604.php
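The reported pattern — a neural signal that grows with the number of items to remember but saturates at the individual's capacity — amounts to a simple min() relationship. The sketch below is a toy model with made-up amplitudes and capacities, not the study's analysis.

```python
def delay_activity(set_size, capacity, unit_amplitude=1.0):
    """Delay-period signal tracks the items actually held in memory,
    i.e. min(set_size, capacity), scaled by a per-item amplitude."""
    return unit_amplitude * min(set_size, capacity)

# Hypothetical observers: a low-capacity (K=2) and a high-capacity (K=4) subject.
for capacity, label in [(2.0, "low capacity"), (4.0, "high capacity")]:
    amplitudes = [delay_activity(n, capacity) for n in (2, 4)]
    print(label, amplitudes)  # low capacity levels off from 2 to 4 items
```

In this model the low-capacity observer's signal is identical at set sizes 2 and 4, while the high-capacity observer's signal doubles, matching the leveling-off described above.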

Learning without desire or awareness

We have long known that learning can occur without attention. A recent study demonstrates learning that occurs without attention, without awareness and without any task relevance. Subjects were repeatedly presented with a background motion signal so weak that its direction was not visible; the invisible motion was an irrelevant background to the central task that engaged the subject's attention. Despite being below the threshold of visibility and being irrelevant to the central task, the repetitive exposure improved performance specifically for the direction of the exposed motion when tested in a subsequent suprathreshold test. These results suggest that a frequently presented feature sensitizes the visual system merely owing to its frequency, not its relevance or salience.

[594] Watanabe, T., Nanez J. E., & Sasaki Y.
(2001).  Perceptual learning without perception.
Nature. 413(6858), 844 - 848.

http://www.nature.com/nsu/011025/011025-12.html
http://tinyurl.com/ix98

Visual memory better than previously thought

Why is it that you can park your car at a huge mall and find it a few hours later without much problem, or make your way through a store you have never been to before? The answer may lie in our ability to build up visual memories of a scene in a short period of time. A new study counters current thinking that visual memory is generally poor and that people quickly forget the details of what they have seen. It appears that even with very limited visual exposure to a scene, people are able to build up strong visual memories and, in fact, their recall of objects in the scene improved with each exposure. It is suggested these images aren't stored in short-term or long-term memory, but in medium-term memory, which lasts for a few minutes and appears to be specific to visual information as opposed to verbal or semantic information. "Medium-term memory depends on the visual context of the scene, such as the background, furniture and walls, which seems to be key in the ability to keep in mind the location and identity of objects. These disposable accumulated visual memories can be recalled in a few minutes if faced with that scene again, but are discarded in a day or two if the scene is not viewed again so they don't take up valuable memory space."

Melcher, D.
(2001).  Persistence of visual memory for scenes.
Nature. 412(6845), 401.

http://www.eurekalert.org/pub_releases/2001-07/rtsu-rrf072501.php
