News Topic: visual

About these topic collections

I’ve been reporting on memory research for over ten years, and these topic pages are simply collections of all the news items I have written on a particular topic. They do not pretend to be in any way exhaustive! I cover far too many areas within memory to come anywhere close to that. What I aim to do is provide breadth rather than depth. Outside my own area of cognitive psychology, it is difficult to know how much weight to give to any study (I urge you to read my blog post on what constitutes scientific evidence). That (among other reasons) is why my approach in my news reporting is based predominantly on replication and consistency. It's about the aggregate. So here is the aggregate of those reports I have at one point considered of sufficient interest to discuss. If you know of any research you would like to add to the collection, feel free to write about it in a comment (please provide a reference).

Latest news

Increasing evidence shows that perception is nowhere near the simple bottom-up process we once thought. Two recent perception studies add to the evidence.

Previous research has found that practice improves your ability to distinguish visual images that vary along one dimension, and that this learning is specific to the visual images you train on and quite durable. A new study extends the finding to more natural stimuli that vary on multiple dimensions.

In the small study, 9 participants learned to identify faces and 6 participants learned to identify “textures” (noise patterns) over the course of two hour-long sessions of 840 trials each, on consecutive days. Faces were cropped to show only internal features and were shown only briefly, so this was not a particularly easy task. Participants were then tested over a year later (range: 10-18 months; average 13 and 15 months for the face and texture groups, respectively).

On the test, participants were shown both images from training and new images that closely resembled them. Accuracy was high for the original images but plummeted for the very similar new ones, indicating that, despite the length of time since training, participants still retained quite specific memories of the original images.

Although practice improved performance across nearly all items and for all people, there were significant differences between both participants and individual stimuli. More interestingly, individual differences (in both stimuli and people) were stable across sessions (e.g., if you were third-best on day 1, you were probably third-best on day 2 too, even though you were doing better). In other words, learning didn’t produce any qualitative changes in the representations of different items — practice had nearly the same effect on all; differences were rooted in the initial difficulty of discriminating each pattern.

However, while it’s true that individual differences were stable, that doesn’t mean that every person improved their performance the exact same amount with the same amount of practice. Interestingly (and this is just from my eye-ball examination of the graphs), it looks like there was more individual variation among the group looking at noise patterns. This isn’t surprising. We all have a lot of experience discriminating faces; we’re all experts. This isn’t the case with the textures. For these, people had to ‘catch on’ to the features that were useful in discriminating patterns. You would expect more variability between people in how long it takes to work out a strategy, and how good that strategy is. Interestingly, three of the six people in the texture group actually performed better on the test than they had done on the second day of training, over a year ago. For the other three, and all nine of those in the face group, test performance was worse than it had been on the second day of training (but decidedly better than the first day).

The durability and specificity of this perceptual learning, the researchers point out, resembles that found in implicit memory and some types of sensory adaptation. It also indicates that such perceptual learning is not limited, as has been thought, to changes early in the visual pathway, but produces changes in a wider network of cortical neurons, particularly in the inferior temporal cortex.

The second, unrelated, study also bears on this issue of specificity.

We look at a scene and extract the general features — a crowd of people, violently riotous or riotously happy? — or we look at a scene and extract specific features that over time we use to build patterns about what goes with what. The first is called “statistical summary perception”; the second “statistical learning”.

A study designed to disentangle these two processes found that you can only do one or the other; you can’t derive both types of information at the same time. Thus, when people were shown grids of lines slanted to varying degrees, they could either assess whether the lines were generally leaning to the left or right, or they could learn to recognize pairs of lines that had been hidden repeatedly in the grids — but they couldn’t do both.

The fact that each of these tasks interfered with the other suggests that the two processes are fundamentally related.

It seems that prosopagnosia can be, along with perfect pitch and eidetic memory, an example of what happens when your brain can’t abstract the core concept.

‘Face-blindness’ — prosopagnosia — is a condition I find fascinating, perhaps because I myself have a touch of it (it’s now recognized that this condition represents the end of a continuum rather than being an either/or proposition). The intriguing thing about this inability to recognize faces is that, in its extreme form, it can nevertheless exist side-by-side with quite normal recognition of other objects.

Prosopagnosia that is not the result of brain damage often runs in families, and a study of three family members with this condition has revealed that in some cases at least, the inability to remember faces has to do with failing to form a mental representation that abstracts the essence of the face, sans context. That is, despite being fully able to read facial expressions, attractiveness and gender from the face (indeed one of the family members is an artist who has no trouble portraying fully detailed faces), they couldn’t cope with changes in lighting conditions and viewing angles.

I’m reminded of the phenomenon of perfect pitch, which is characterized by an inability to generalize across acoustically similar tones, so an A in a different key is a completely different note. Interestingly, like prosopagnosia, perfect pitch is now thought to be more common than previously believed (recognition of it is of course limited by the fact that some musical expertise is generally needed to reveal it). This inability to abstract or generalize is also a phenomenon of eidetic memory, and I have spoken before of the perils of this.

(Note: A fascinating account of what it is like to be face-blind, from a person with the condition, can be found at: http://www.choisser.com/faceblind/)

Visual short-term memory

Older news items (pre-2010) brought over from the old website

More light shed on distinction between long and short-term memory

The once clear-cut distinction between long- and short-term memory has increasingly come under fire in recent years. A new study involving patients with a specific form of epilepsy called 'temporal lobe epilepsy with bilateral hippocampal sclerosis' has now clarified the distinction. The patients, who all had severely compromised hippocampi, were asked to try to memorize photographic images depicting normal scenes. Their memory was tested and brain activity recorded after five seconds or 60 minutes. As expected, the patients could not remember the images after 60 minutes, but could distinguish seen-before images from new ones at five seconds. However, their memory was poor when asked to recall details about the images. Brain activity showed that short-term memory for details required the coordinated activity of a network of visual and temporal brain areas, whereas standard short-term memory drew on a different network, involving frontal and parietal regions, and independent of the hippocampus.

[996] Cashdollar, N., Malecki U., Rugg-Gunn F. J., Duncan J. S., Lavie N., & Duzel E. (2009).  Hippocampus-dependent and -independent theta-networks of active maintenance. Proceedings of the National Academy of Sciences. 106(48), 20493 - 20498.

http://www.eurekalert.org/pub_releases/2009-11/ucl-tal110909.php

Individual differences in working memory capacity depend on two factors

A new computer model adds to our understanding of working memory, by showing that working memory can be increased by the action of the prefrontal cortex in reinforcing activity in the parietal cortex (where the information is temporarily stored). The idea is that the prefrontal cortex sends out a brief stimulus to the parietal cortex that generates a reverberating activation in a small subpopulation of neurons, while inhibitory interactions with neurons further away prevents activation of the entire network. This lateral inhibition is also responsible for limiting the mnemonic capacity of the parietal network (i.e. provides the limit on your working memory capacity). The model has received confirmatory evidence from an imaging study involving 25 volunteers. It was found that individual differences in performance on a short-term visual memory task were correlated with the degree to which the dorsolateral prefrontal cortex was activated and its interconnection with the parietal cortex. In other words, your working memory capacity is determined by both storage capacity (in the posterior parietal cortex) and prefrontal top-down control. The findings may help in the development of ways to improve working memory capacity, particularly when working memory is damaged.
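
To make the mechanism concrete, here is a minimal toy simulation of this kind of model (my own sketch, not the authors' implementation; all parameter values are invented). Items are held as self-sustaining activity in a 'parietal' network, lateral inhibition between items caps how many can stay active, and a constant 'prefrontal' boost raises that cap.

```python
import numpy as np

def held_items(n_items, boost=0.0, w_self=1.25, w_inhib=0.08,
               steps=2000, dt=0.01):
    """Count how many item representations stay active after a delay."""
    r = np.full(n_items, 0.6)                    # stimulus-driven starting activity
    for _ in range(steps):
        lateral = w_inhib * (r.sum() - r)        # inhibition from the other items
        drive = w_self * r - lateral + boost     # recurrent excitation + top-down boost
        r += dt * (-r + np.maximum(drive, 0.0))  # rectified rate dynamics
        np.clip(r, 0.0, 1.0, out=r)
    return int((r > 0.5).sum())

for load in (4, 5, 6):
    print(f"load {load}: {held_items(load)} held unaided, "
          f"{held_items(load, boost=0.05)} held with prefrontal boost")
```

With these assumed parameters the network holds four items unaided and five with the boost, illustrating the paper's point that capacity reflects both parietal storage dynamics and prefrontal top-down control.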

[441] Edin, F., Klingberg T., Johansson P., McNab F., Tegner J., & Compte A. (2009).  Mechanism for top-down control of working memory capacity. Proceedings of the National Academy of Sciences. 106(16), 6802 - 6807.

http://www.eurekalert.org/pub_releases/2009-04/i-id-aot040109.php

Some short-term memories die suddenly, no fading

We don’t remember everything; the idea of memory as a video faithfully recording every aspect of everything we have ever experienced is a myth. Every day we look at the world and hold much of what we see for no more than a few seconds before discarding it as no longer needed. Until now it was thought that these fleeting visual memories faded away, gradually becoming more imprecise. Now it seems that such memories remain quite accurate as long as they exist (about 4 seconds), and then just vanish instantly. The study involved testing memory for shapes and colors in 12 adults, and it was found that the memory for a shape or color was either there or not there: the answers were either correct or random guesses. The probability of remembering correctly decreased between 4 and 10 seconds.

[941] Zhang, W., & Luck S. J. (2009).  Sudden death and gradual decay in visual working memory. Psychological Science: A Journal of the American Psychological Society / APS. 20(4), 423 - 428.

http://www.eurekalert.org/pub_releases/2009-04/uoc--ssm042809.php

Where visual short-term memory occurs

Working memory used to be thought of as a separate ‘store’, and now tends to be regarded more as a process, a state of mind. Such a conception suggests that it may occur in the same regions of the brain as long-term memory, but in a pattern of activity that is somehow different from LTM. However, there has been little evidence for that so far. Now a new study has found that information in WM may indeed be stored via sustained, but low, activity in sensory areas. The study involved volunteers being shown an image for one second and instructed to remember either the color or the orientation of the image. After then looking at a blank screen for 10 seconds, they were shown another image and asked whether it was the identical color/orientation as the first image. Brain activity in the primary visual cortex was scanned during the 10 second delay, revealing that areas normally involved in processing color and orientation were active during that time, and that the pattern only contained the targeted information (color or orientation).

[1032] Serences, J. T., Ester E. F., Vogel E. K., & Awh E. (2009).  Stimulus-Specific Delay Activity in Human Primary Visual Cortex. Psychological Science. 20(2), 207 - 214.

http://www.eurekalert.org/pub_releases/2009-02/afps-sih022009.php
http://www.eurekalert.org/pub_releases/2009-02/uoo-dsm022009.php

The finding is consistent with that of another study published this month, in which participants were shown two examples of simple striped patterns at different orientations and told to hold either one or the other of the orientations in their mind while being scanned. Orientation is one of the first and most basic pieces of visual information coded and processed by the brain. Using a new decoding technique, researchers were able to predict with 80% accuracy which of the two orientations was being remembered 11 seconds after seeing a stimulus, from the activity patterns in the visual areas. This was true even when the overall level of activity in these visual areas was very weak, no different than looking at a blank screen.

[652] Harrison, S. A., & Tong F. (2009).  Decoding reveals the contents of visual working memory in early visual areas. Nature. 458(7238), 632 - 635.

http://www.eurekalert.org/pub_releases/2009-02/vu-edi021709.php
http://www.physorg.com/news154186809.html

Even toddlers can ‘chunk' information for better remembering

We all know it’s easier to remember a long number (say, a phone number) when it’s broken into chunks. Now a study has found that we don’t need to be taught this; it appears to come naturally to us. The study showed that 14-month-old children could track only three hidden objects at once in the absence of any grouping cues, demonstrating the standard limit of working memory. However, with categorical or spatial cues, the children could remember more: for example, when four toys consisted of two groups of two familiar objects (cats and cars), or when six identical orange balls were grouped in three groups of two.

[196] Feigenson, L., & Halberda J. (2008).  From the Cover: Conceptual knowledge increases infants' memory capacity. Proceedings of the National Academy of Sciences. 105(29), 9926 - 9930.

http://www.eurekalert.org/pub_releases/2008-07/jhu-etg071008.php

Full text available at http://www.pnas.org/content/105/29/9926.abstract?sid=c01302b6-cd8e-4072-842c-7c6fcd40706f

Working memory has a fixed number of 'slots'

A study that showed volunteers a pattern of colored squares for a tenth of a second, and then asked them to recall the color of one of the squares by clicking on a color wheel, has found that working memory acts like a high-resolution camera, retaining three or four features in high detail. Unlike a digital camera, however, it appears that you can’t increase the number of images you can store by lowering the resolution. The resolution appears to be constant for a given individual. However, individuals do differ in the resolution of each feature and the number of features that can be stored.
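
The analysis behind such claims typically models each response as a mixture: with some probability the probed item occupied a slot, so the response clusters around the true color with fixed precision; otherwise the response is a random guess on the wheel. A short Python sketch of that mixture, with invented values for slot count and precision, shows how performance degrades with set size:

```python
import numpy as np

rng = np.random.default_rng(0)

def report_errors(n_trials, set_size, n_slots=3, sd_deg=20):
    """Discrete-slot mixture: the probed item is either in a slot
    (response = true colour + noise of fixed precision) or not
    (response = uniform random guess on the colour wheel)."""
    p_mem = min(1.0, n_slots / set_size)          # chance the item got a slot
    in_memory = rng.random(n_trials) < p_mem
    noise = rng.normal(0.0, np.deg2rad(sd_deg), n_trials)
    guesses = rng.uniform(-np.pi, np.pi, n_trials)
    return np.where(in_memory, noise, guesses)    # error relative to the true colour

for set_size in (2, 4, 8):
    err = report_errors(20000, set_size)
    close = np.mean(np.abs(err) < np.deg2rad(45))
    print(f"set size {set_size}: {close:.0%} of responses near the true colour")
```

In this scheme the precision parameter sets how blurry remembered responses are, while the slot count sets the guessing rate; the study's conclusion is that the first is fixed for a given individual and cannot be traded for extra slots.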

[278] Zhang, W., & Luck S. J. (2008).  Discrete fixed-resolution representations in visual working memory. Nature. 453(7192), 233 - 235.

http://www.physorg.com/news126432902.html
http://www.eurekalert.org/pub_releases/2008-04/uoc--wmh040208.php

And another study of working memory has attempted to overcome the difficulties involved in measuring a person’s working memory capacity (ensuring that no ‘chunking’ of information takes place), and concluded that people do indeed have a fixed number of ‘slots’ in their working memory. In the study, participants were shown two, five or eight small, scattered, different-colored squares in an array, which was then replaced by an array of the same squares without the colors. The participant was then shown a single color in one location and asked to indicate whether the color in that spot had changed from the original array.

[437] Rouder, J. N., Morey R. D., Cowan N., Zwilling C. E., Morey C. C., & Pratte M. S. (2008).  An assessment of fixed-capacity models of visual working memory. Proceedings of the National Academy of Sciences. 105(16), 5975 - 5979.

http://www.eurekalert.org/pub_releases/2008-04/uom-mpd042308.php

Impressive feats in visual memory

In light of all the recent experiments emphasizing how small our short-term visual memory is, it’s comforting to be reminded that, nevertheless, we have an amazing memory for pictures — in the right circumstances. Those circumstances include looking at images of familiar objects, as opposed to abstract artworks, and being motivated to do well (the best-scoring participant was given a cash prize). In the study, 14 people aged 18 to 40 viewed 2,500 images, one at a time, for a few seconds. Afterwards, they were shown pairs of images and asked to select the exact image they had seen earlier. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Stunningly, participants on average chose the correct image 92%, 88% and 87% of the time, in each of the three pairing categories respectively.

[870] Brady, T. F., Konkle T., Alvarez G. A., & Oliva A. (2008).  Visual long-term memory has a massive storage capacity for object details. Proceedings of the National Academy of Sciences. 105(38), 14325 - 14329.

Full text available at http://www.pnas.org/content/105/38/14325.abstract

Attention grabbers snatch lion's share of visual memory

It’s long been thought that when we look at a visually "busy" scene, we are only able to store a very limited number of objects in our visual short-term or working memory. For some time, this figure was believed to be four or five objects, but a recent report suggested it could be as low as two. However, a new study reveals that although it might not be large, it’s more flexible than we thought. Rather than being restricted to a limited number of objects, it can be shared out across the whole image, with more memory allocated for objects of interest and less for background detail. What’s of interest might be something we’ve previously decided on (i.e., we’re searching for), or something that grabs our attention.

Eye movements also reveal how brief our visual memory is, and that what our eyes are looking at isn’t necessarily what we’re ‘seeing’. When people were asked to look at objects in a particular sequence, but the final object disappeared before their eyes moved on to it, it was found that the observers could more accurately recall the location of the object that they were about to look at than the one that they had just been looking at.

[1398] Bays, P. M., & Husain M. (2008).  Dynamic shifts of limited working memory resources in human vision. Science (New York, N.Y.). 321(5890), 851 - 854.

http://www.physorg.com/news137337380.html

More on how short-term memory works

It’s been established that visual working memory is severely limited — that, on average, we can only be aware of about four objects at one time. A new study explored the idea that this capacity might be affected by complexity, that is, that we can think about fewer complex objects than simple objects. It found that complexity did not affect memory capacity. It also found that some people have clearer memories of the objects than other people, and that this is not related to how many items they can remember. That is, a high IQ is associated with the ability to hold more items in working memory, but not with the clarity of those items.

[426] Awh, E., Barton B., & Vogel E. K. (2007).  Visual working memory represents a fixed number of items regardless of complexity. Psychological Science: A Journal of the American Psychological Society / APS. 18(7), 622 - 628.

http://www.eurekalert.org/pub_releases/2007-07/uoo-htb071107.php
http://www.physorg.com/news103472118.html

Support for labeling as an aid to memory

A study involving an amnesia-inducing drug has shed light on how we form new memories. Participants in the study viewed words, photographs of faces and landscapes, and abstract pictures, one at a time on a computer screen. Twenty minutes later, they were shown the words and images again, one at a time; half had been seen earlier and half were new. They were asked whether they recognized each one. For one session they were given midazolam, a drug used to relieve anxiety during surgical procedures that also causes short-term anterograde amnesia, and for one session they were given a placebo.
It was found that the participants' memory while in the placebo condition was best for words and worst for abstract images. Midazolam impaired the recognition of words the most, impaired memory for the photos less, and impaired recognition of abstract pictures hardly at all. The finding reinforces the idea that the ability to recollect depends on the ability to link the stimulus to a context, and that unitization increases the chances of this linking occurring. While the words were very concrete and therefore easy to link to the experimental context, the photographs were of unknown people and unknown places, and thus hard to distinctively label. The abstract images were also unfamiliar and not unitized into something that could be described with a single word.

[1216] Reder, L. M., Oates J. M., Thornton E. R., Quinlan J. J., Kaufer A., & Sauer J. (2006).  Drug-Induced Amnesia Hurts Recognition, but Only for Memories That Can Be Unitized. Psychological science : a journal of the American Psychological Society / APS. 17(7), 562 - 567.

http://www.sciencedaily.com/releases/2006/07/060719092800.htm

Discovery disproves simple concept of memory as 'storage space'

The idea of memory “capacity” has become more and more eroded over the years, and now a new technique for measuring brainwaves seems to finally knock the idea on the head. Consistent with recent research suggesting that a crucial problem with aging is a growing inability to ignore distracting information, this new study shows that visual working memory depends on your ability to filter out irrelevant information. Individuals have long been characterized as having a “high” working memory capacity or a “low” one — the assumption has been that these people differ in their storage capacity. Now it seems it’s all about a neural mechanism that controls what information gets into awareness. People with high capacity have a much better ability to ignore irrelevant information.

[1091] Vogel, E. K., McCollough A. W., & Machizawa M. G. (2005).  Neural measures reveal individual differences in controlling access to working memory. Nature. 438(7067), 500 - 503.

http://www.eurekalert.org/pub_releases/2005-11/uoo-dds111805.php

Language cues help visual learning in children

A study of 4-year-old children has found that language, in the form of specific kinds of sentences spoken aloud, helped them remember mirror image visual patterns. The children were shown cards bearing red and green vertical, horizontal and diagonal patterns that were mirror images of one another. When asked to choose the card that matched the one previously seen, the children tended to mistake the original card for its mirror image, showing how difficult it was for them to remember both color and location. However, if they were told, when viewing the original card, a mnemonic cue such as ‘The red part is on the left’, they performed “reliably better”.

The paper was presented by a graduate student at the 17th annual meeting of the American Psychological Society, held May 26-29 in Los Angeles.

http://www.eurekalert.org/pub_releases/2005-05/jhu-lc051705.php

An advantage of age

A study comparing the ability of young and older adults to indicate which direction a set of bars moved across a computer screen has found that although younger participants were faster when the bars were small or low in contrast, when the bars were large and high in contrast, the older people were faster. The results suggest that the ability of one neuron to inhibit another is reduced as we age (inhibition helps us find objects within clutter, but makes it hard to see the clutter itself). The loss of inhibition as we age has previously been seen in connection with cognition and speech studies, and is reflected in our greater inability to tune out distraction as we age. Now we see the same process in vision.

[1356] Betts, L. R., Taylor C. P., Sekuler A. B., & Bennett P. J. (2005).  Aging Reduces Center-Surround Antagonism in Visual Motion Processing. Neuron. 45(3), 361 - 366.

http://psychology.plebius.org/article.htm?article=739
http://www.eurekalert.org/pub_releases/2005-02/mu-opg020305.php

Why working memory capacity is so limited

There’s an old parlor game whereby someone brings into a room a tray covered with a number of different small objects, which they show to the people in the room for one minute, before whisking it away again. The participants are then required to write down as many objects as they can remember. For those who perform badly at this type of thing, some consolation from researchers: it’s not (entirely) your fault. We do actually have a very limited storage capacity for visual short-term memory.
Now visual short-term memory is of course vital for a number of functions, and reflecting this, there is an extensive network of brain structures supporting this type of memory. However, a new imaging study suggests that the limited storage capacity is due mainly to just one of these regions: the posterior parietal cortex. An interesting distinction can be made here between registering information and actually “holding it in mind”. Activity in the posterior parietal cortex strongly correlated with the number of objects the subjects were able to remember, but only if the participants were asked to remember. In contrast, regions of the visual cortex in the occipital lobe responded differently to the number of objects even when participants were not asked to remember what they had seen.

[598] Todd, J. J., & Marois R. (2004).  Capacity limit of visual short-term memory in human posterior parietal cortex. Nature. 428(6984), 751 - 754.

http://www.eurekalert.org/pub_releases/2004-04/vu-slo040704.php
http://tinyurl.com/2jzwe (Telegraph article)

Brain signal predicts working memory capacity

Our visual short-term memory may have an extremely limited capacity, but some people do have a greater capacity than others. A new study reveals that an individual's capacity for such visual working memory can be predicted by his or her brainwaves. In the study, participants briefly viewed a picture containing colored squares, followed by a one-second delay, and then a test picture. They pressed buttons to indicate whether the test picture was identical to the one seen earlier or differed from it by one color. The more squares a subject could correctly identify having just seen, the greater his or her visual working memory capacity, and the higher the spike of corresponding brain activity, up to a point. Neural activity of subjects with poorer working memory scores leveled off early, showing little or no increase when the number of squares to remember increased from 2 to 4, while those with high capacity showed large increases. On average, subjects could hold about 2.8 squares in memory.

[1154] Vogel, E. K., & Machizawa M. G. (2004).  Neural activity predicts individual differences in visual working memory capacity. Nature. 428(6984), 748 - 751.

http://www.eurekalert.org/pub_releases/2004-04/niom-bsp041604.php

Scene memory

A recent study reveals that when we focus on searching for something, regions across the brain are pulled into the search. The study sheds light on how attention works.

In the experiments, brain activity was recorded as participants searched for people or vehicles in movie clips. Computational models showed how each of roughly 50,000 locations across the cortex responded to each of the 935 categories of objects and actions seen in the movie clips.

When participants searched for humans, relatively more of the cortex was devoted to humans, and when they searched for vehicles, more of the cortex was devoted to vehicles.

Now this might not sound very surprising, but it appears to contradict our whole developing picture of the brain as having specialized areas for specific categories — instead, areas normally involved in recognizing categories such as plants or buildings were being switched to become attuned to humans or vehicles. The changes occurred across the brain, not just in those regions devoted to vision, and in fact, the largest changes were seen in the prefrontal cortex.

What this suggests is that categories are represented in highly organized, continuous maps, a ‘semantic space’, as it were. By increasing the representation of the target category (and related categories) at the expense of other categories, this semantic space is changed. Note that this did not come about in response to the detection of the target; it occurred in response to the direction of attention — the goal setting.

In other words, in the same way that gravity warps the space-time continuum (well, probably not the exact same way!), attention warps your mental continuum.
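
For the technically minded, the 'computational models' mentioned above are voxelwise encoding models: each moment of the movie is coded as a vector of which categories are on screen, and a regularized regression maps those category features onto each cortical location's response. The sketch below shows that core fitting step for a single simulated location using ridge regression; the tiny scale and all numbers are my own assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny simulated stand-in: 10 categories, 300 movie time points, one location
# (the real analysis used 935 categories and roughly 50,000 cortical locations).
n_time, n_categories = 300, 10
stim = rng.integers(0, 2, size=(n_time, n_categories)).astype(float)  # what's on screen
true_tuning = rng.normal(0.0, 1.0, n_categories)    # the location's category tuning
response = stim @ true_tuning + rng.normal(0.0, 0.5, n_time)

lam = 1.0                                           # ridge penalty (assumed value)
A = stim.T @ stim + lam * np.eye(n_categories)
tuning_hat = np.linalg.solve(A, stim.T @ response)  # ridge-regression weights

r = np.corrcoef(tuning_hat, true_tuning)[0, 1]
print(f"recovered vs true tuning: r = {r:.2f}")
```

Fitting tuning weights like these separately for each attention condition is what lets researchers ask whether a location's tuning shifts toward the search target.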

You can play with an interactive online brain viewer which tries to portray this semantic space.

http://www.futurity.org/science-technology/to-find-whats-lost-brain-forms-search-party/

[3417] Çukur, T., Nishimoto S., Huth A. G., & Gallant J. L. (2013).  Attention during natural vision warps semantic representation across the human brain. Nature Neuroscience, advance online publication.

Emotionally arousing images that are remembered more vividly were also seen more vividly. This may be because the amygdala focuses visual attention, rather than more cognitive attention, on the image.

We know that emotion affects memory. We know that attention affects perception (see, e.g., Visual perception heightened by meditation training; How mindset can improve vision). Now a new study ties it all together. The study shows that emotionally arousing experiences affect how well we see them, and this in turn affects how vividly we later recall them.

The study used images of positively and negatively arousing scenes and neutral scenes, which were overlaid with varying amounts of “visual noise” (like the ‘snow’ we used to see on old televisions). College students were asked to rate the amount of noise on each picture, relative to a specific image they used as a standard. There were 25 pictures in each category, and three levels of noise (less than standard, equal to standard, and more than standard).

Different groups explored different parameters: color; gray-scale; less noise (10%, 15%, 20% as compared to 35%, 45%, 55%); single exposure (each picture was only presented once, at one of the noise levels).

Regardless of the actual amount of noise, emotionally arousing pictures were consistently rated as significantly less noisy than neutral pictures, indicating that people were seeing them more clearly. This was true in all conditions.

Eye-tracking analysis ruled out the idea that people directed their attention differently for emotionally arousing images, but did show that more eye fixations were associated both with less noisy images and emotionally arousing ones. In other words, people were viewing emotionally important images as if they were less noisy.

One group of 22 students were given a 45-minute spatial working memory task after seeing the images, and then asked to write down all the details they could remember about the pictures they remembered seeing. The amount of detail they recalled was taken to be an indirect measure of vividness.

A second group of 27 students were called back after a week for a recognition test. They were shown 36 new images mixed in with the original 75 images, and asked to rate them as new, familiar, or recollected. They were also asked to rate the vividness of their recollection.

Although, overall, emotionally arousing pictures were not more likely to be remembered than neutral pictures, both experiments found that pictures originally seen as more vivid (less noise) were remembered more vividly and in more detail.

Brain scans from 31 students revealed that the amygdala was more active when looking at images rated as vivid, and this in turn increased activity in the visual cortex and in the posterior insula (which integrates sensations from the body). This suggests that the increased perceptual vividness is not simply a visual phenomenon, but part of a wider sensory activation.

There was another neural response to perceptual vividness: activity in the dorsolateral prefrontal cortex and the posterior parietal cortex was negatively correlated with vividness. This suggests that emotion is not simply increasing our attentional focus, it is instead changing it by reducing effortful attentional and executive processes in favor of more perceptual ones. This, perhaps, gives emotional memories their different ‘flavor’ compared to more neutral memories.

These findings clearly need more exploration before we know exactly what they mean, but the main point is that the vividness with which we recall some emotional experiences is rooted in the vividness with which we originally perceived them.

The study highlights how emotion can sharpen our attention, building on previous findings that emotional events are more easily detected when visibility is difficult, or attentional demands are high. It is also not inconsistent with a study I reported on last year, which found some information needs no repetition to be remembered because the amygdala decrees it of importance.

I should add, however, that the perceptual effect is not the whole story — the current study found that, although perceptual vividness is part of the reason for memories that are vividly remembered, emotional importance makes its own, independent, contribution. This contribution may occur after the event.

It’s suggested that individual differences in these reactions to emotionally enhanced vividness may underlie an individual’s vulnerability to post-traumatic stress disorder.

A small study provides more support for the idea that viewing nature can refresh your attention and improve short-term memory, and extends it to those with clinical depression.

I’ve talked before about Dr Berman’s research into Attention Restoration Theory, which proposes that people concentrate better after nature walks or even just looking at nature scenes. In his latest study, the findings have been extended to those with clinical depression.

The study involved 20 young adults (average age 26), all of whom had a diagnosis of major depressive disorder. Short-term memory and mood were assessed (using the backwards digit span task and the PANAS), and then participants were asked to think about an unresolved, painful autobiographical experience. They were then randomly assigned to go for a 50-minute walk along a prescribed route in either the Ann Arbor Arboretum (woodland park) or traffic heavy portions of downtown Ann Arbor. After the walk, mood and cognition were again assessed. A week later the participants repeated the entire procedure in the other location.

Participants exhibited a significant (16%) increase in attention and working memory after the nature walk compared to the urban walk. While participants felt more positive after both walks, there was no correlation with memory effects.

The finding is particularly interesting because depression is characterized by high levels of rumination and negative thinking. It seemed quite likely, then, that a solitary walk in the park might make depressed people feel worse, and worsen working memory. It’s intriguing that it didn’t.

It’s also worth emphasizing that, as in earlier studies, this effect of nature on cognition appears to be independent of mood (which is, of course, the basic tenet of Attention Restoration Theory).

Of course, this study is, like the others, small, and involves the same demographic. Hopefully future research will extend the sample groups, to middle-aged and older adults.

A small study has found that ten hours of playing action video games produced significant changes in brainwave activity and improved visual attention for some (but not all) novices.

Following on from research finding that people who regularly play action video games show visual attention related differences in brain activity compared to non-players, a new study has investigated whether such changes could be elicited in 25 volunteers who hadn’t played video games in at least four years. Sixteen of the participants played a first-person shooter game (Medal of Honor: Pacific Assault), while nine played a three-dimensional puzzle game (Ballance). They played the games for a total of 10 hours spread over one- to two-hour sessions.

Selective attention was assessed through an attentional visual field task, carried out prior to and after the training program. Individual learning differences were marked, and because of visible differences in brain activity after training, the action gamers were divided into two groups for analysis — those who performed above the group mean on the second attentional visual field test (7 participants), and those who performed below the mean (9). These latter individuals showed similar brain activity patterns as those in the control (puzzle) group.

In all groups, early-onset brainwaves were little affected by video game playing. This suggests that game-playing has little impact on bottom-up attentional processes, and is in keeping with earlier research showing that players and non-players don’t differ in the extent to which their attention is captured by outside stimuli.

However, later brainwaves, those thought to reflect top-down control of selective attention via increased inhibition of distracters, increased significantly in the group who played the action game and showed above-average improvement on the field test. An increase in another wave suggests that the total amount of attention allocated to the task was also greater in that group (i.e., they were concentrating more on the game than the below-average group and the control group).

The improved ability to select the right targets and ignore other stimuli suggests, too, that these players are also improving their ability to make perceptual decisions.

The next question, of course, is what personal variables underlie the difference between those who benefit more quickly from the games, and those who don’t. And how much more training is necessary for this latter group, and are there some people who won’t achieve these benefits at all, no matter how long they play? Hopefully, future research will be directed to these questions.

[2920] Wu, S., Cheng C. K., Feng J., D'Angelo L., Alain C., & Spence I. (2012).  Playing a First-person Shooter Video Game Induces Neuroplastic Change. Journal of Cognitive Neuroscience. 24(6), 1286 - 1293.

New research suggests that sleeping within a few hours of a disturbing event keeps your emotional response to the event strong.

Previous research has shown that negative objects and events are preferentially consolidated in sleep — if you experience them in the evening, you are more likely to remember them than more neutral objects or events, but if you experience them in the morning, they are not more likely to be remembered than other memories (see collected sleep reports). However, more recent studies have failed to find this. A new study also fails to find such preferential consolidation, but does find that our emotional reaction to traumatic or disturbing events can be greatly reduced if we stay awake afterward.

Being unable to sleep after such events is of course a common response — these findings indicate there’s good reason for it, and we should go along with it rather than fighting it.

The study involved 106 young adults rating pictures on a sad-happy scale and their own responses on an excited-calm scale. Twelve hours later, they were given a recognition test: noting pictures they had seen earlier from a mix of new and old pictures. They also rated all the pictures on the two scales. There were four groups: 41 participants saw the first set late in the day and the second set 12 hours later on the following day (‘sleep group’); 41 saw the first set early and the second set 12 hours later on the same day; 12 participants saw both sets in the evening, with only 45 minutes between the sets; and 12 participants saw both sets in the morning (these last two groups were to rule out circadian effects). Twenty-five of the sleep group had their brain activity monitored while they slept.

The sleep group performed significantly better on the recognition test than the same-day group. Negative pictures were remembered better than neutral ones. However, unlike earlier studies, the sleep group didn’t preferentially remember negative pictures more than the same-day group.

But, interestingly, the sleep group was more likely to maintain the strength of initial negative responses. The same-day group showed a weaker response to negative scenes on the second showing.

It’s been theorized that late-night REM sleep is critical for emotional memory consolidation. However, this study found no significant relationship between the amount of time spent in REM sleep and recognition memory, nor was there any relationship between other sleep stages and memory. There was one significant result: those who had more REM sleep in the third quarter of the night showed the least reduction of emotional response to the negative pictures.

There were no significant circadian effects, but it’s worth noting that even the 45-minute gap between the sets was sufficient to weaken the emotional response to the negative scenes.

While there was a trend toward a gender effect, it didn’t reach statistical significance, and there were no significant interactions between gender and group or emotional value.

The findings suggest that the effects of sleep on memory and emotion may be independent.

The findings also contradict previous studies showing preferential consolidation of emotional memories during sleep, but are consistent with two other recent studies that have also failed to find this. At this stage, all we can say is that there may be certain conditions in which this occurs (or doesn’t occur), but more research is needed to determine what these conditions are. Bear in mind that there is no doubt that sleep helps consolidate memories; we are talking here only about emphasizing negative memories at the expense of emotionally-neutral ones.

Memory begins with perception. Here's a round-up of recent research into visual perception.

Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn, while volunteers viewed brief flashes of object pairs (half of which were interacting with each other) and decided whether these glimpsed objects matched the presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical regardless of whether participants were looking at a color photo or a simple line drawing. Indeed, researchers could tell, with a fair amount of success, what category of scene a participant was looking at just from the pattern of brain activity in the ventral visual cortex. And when the decoding made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very, very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — that of the reliability of the sensory information. Optimally, an observer would take into consideration the varying reliability of the items, giving the information different weights as a result of that perceived reliability. That weighted information would then be combined according to a specific integration rule. That had been calculated as the optimal process, and the performance of the participants matched that expectation.

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations, while taking into account the reliability of the different pieces of information, and we do this through the integration of information coming from different groups of neurons, each group of which is responding to different bits of information.
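
To make 'weighting by reliability' concrete, here is a toy ideal-observer simulation in the spirit of this study (the task structure, orientations and noise ranges are my assumptions, not the paper's). Each item contributes a log-likelihood ratio for being the target, scaled by its reliability (1/σ²), and evidence is combined across locations by averaging the likelihoods. An observer that ignores the per-item reliabilities does measurably worse:

```python
import numpy as np

rng = np.random.default_rng(2)
THETA = 10.0              # target orientation in degrees; distractors are at 0

def make_trial(n_items=4):
    present = rng.random() < 0.5
    sigma = rng.uniform(2.0, 12.0, n_items)  # per-item noise; low contrast = high sigma
    s = np.zeros(n_items)
    if present:
        s[rng.integers(n_items)] = THETA
    return present, s + rng.normal(0.0, sigma), sigma   # noisy measurements

def local_llr(x, sigma):
    # Gaussian log-likelihood ratio that an item is the target, not a distractor
    return (THETA * x - THETA**2 / 2) / sigma**2

def weighted_verdict(x, sigma):
    # optimal: weight each item by its true reliability, marginalise over location
    return np.mean(np.exp(local_llr(x, sigma))) > 1.0

def unweighted_verdict(x, sigma):
    # suboptimal: pretend every item has the average noise level
    return np.mean(np.exp(local_llr(x, np.full_like(sigma, sigma.mean())))) > 1.0

n = 20000
correct = {"reliability-weighted": 0, "unweighted": 0}
for _ in range(n):
    present, x, sigma = make_trial()
    correct["reliability-weighted"] += weighted_verdict(x, sigma) == present
    correct["unweighted"] += unweighted_verdict(x, sigma) == present
for name, c in correct.items():
    print(f"{name} observer: {c / n:.1%} correct")
```

On a typical run the reliability-weighted observer comes out a few percentage points more accurate; that gap is the benefit that optimal weighting buys.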

Another recent study into visual search has found that, when people are preparing themselves to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 to more specific imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards. Prospective mates; food; liquids.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli that have been previously associated with reward continue to capture attention regardless of their relevance to the task in hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
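
In outline, that algorithm is a regression problem: image features in, memorability score out, after which any new image can be scored. The real system used rich global image features and support vector regression; the toy below swaps in a handful of invented binary features and ordinary least squares purely to show the recipe.

```python
import numpy as np

rng = np.random.default_rng(3)

n_images = 500
# Invented binary features per image:
# [contains_person, human_scale_space, close_up_object, natural_landscape]
X = rng.integers(0, 2, size=(n_images, 4)).astype(float)
assumed_w = np.array([0.25, 0.15, 0.10, -0.20])   # invented effect sizes
y = 0.55 + X @ assumed_w + rng.normal(0.0, 0.05, n_images)  # memorability scores

X1 = np.hstack([np.ones((n_images, 1)), X])       # add an intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)        # fit the regression

unseen = np.array([1.0, 1.0, 0.0, 0.0, 0.0])      # new image containing a person
print(f"predicted memorability: {float(unseen @ w):.2f}")
```

In the real pipeline the invented columns would be replaced by measured image properties, such as the color and edge-distribution features mentioned above.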

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

[2291] Kim, J. G., Biederman I., & Juan C. - H. (2011).  The Benefit of Object Interactions Arises in the Lateral Occipital Cortex Independent of Attentional Modulation from the Intraparietal Sulcus: A Transcranial Magnetic Stimulation Study. The Journal of Neuroscience. 31(22), 8320 - 8324.

[2303] Walther, D. B., Chai B., Caddigan E., Beck D. M., & Fei-Fei L. (2011).  Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences. 108(23), 9661 - 9666.

[2292] Ma, W. J., Navalpakkam V., Beck J. M., van den Berg R., & Pouget A. (2011).  Behavior and neural basis of near-optimal visual search. Nat Neurosci. 14(6), 783 - 790.

[2323] Peelen, M. V., & Kastner S. (2011).  A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences. 108(29), 12125 - 12130.

[2318] Anderson, B. A., Laurent P. A., & Yantis S. (2011).  Value-driven attentional capture. Proceedings of the National Academy of Sciences. 108(25), 10367 - 10371.

Isola, P., Xiao, J., Oliva, A., & Torralba, A. (2011). What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.

Autism is popularly associated with intense awareness of systematic regularities, but a new study shows that the skill displayed in computer tasks is not available in real-world tasks.

Contrary to previous laboratory studies showing that children with autism often demonstrate outstanding visual search skills, new research indicates that in real-life situations, children with autism are unable to search effectively for objects. The study, involving 20 autistic children and 20 normally-developing children (aged 8-14), used a novel test room, with buttons on the floor that the children had to press to find a hidden target among multiple illuminated locations. Critically, 80% of these targets appeared on one side of the room.

Although autistics are generally believed to be more systematic, with greater sensitivity to regularities within a system, such behavior was not observed. Compared to other children, those with autism were slower to pick up on the regularities that would help them choose where to search. The slowness was not due to a lack of interest — all the children seemed to enjoy the game, and were keen to find the hidden targets.

The findings suggest that those with ASD have difficulties in applying the rules of probability to larger environments, particularly when they themselves are part of that environment.

[2055] Pellicano, E., Smith A. D., Cristino F., Hood B. M., Briscoe J., & Gilchrist I. D. (2011).  Children with autism are neither systematic nor optimal foragers. Proceedings of the National Academy of Sciences. 108(1), 421 - 426.

A new study suggests that our memory for visual scenes may not depend on how much attention we’ve paid to them or on what a scene contains, but on when the scene is presented.

A new study suggests that our memory for visual scenes may depend not on how much attention we’ve paid to them or what a scene contains, but on when the scene is presented. In the study, participants performed an attention-demanding letter-identification task while also viewing a rapid sequence of full-field photographs of urban and natural scenes. They were then tested on their memory of the scenes. It was found that, even though their attention had been focused on the letter task, only those scenes which were presented at the same time as a target letter (rather than a distractor letter) were reliably remembered. The results point to a brain mechanism that automatically encodes certain visual features into memory at behaviorally relevant points in time, regardless of the spatial focus of attention.

[321] Lin, J. Y., Pype A. D., Murray S. O., & Boynton G. M. (2010).  Enhanced Memory for Scenes Presented at Behaviorally Relevant Points in Time. PLoS Biol. 8(3), e1000337 - e1000337.

Full text available at doi:10.1371/journal.pbio.1000337

Older news items (pre-2010) brought over from the old website

Learning without desire or awareness

We have long known that learning can occur without attention. A recent study demonstrates learning that occurs without attention, without awareness and without any task relevance. Subjects were repeatedly presented with a background motion signal so weak that its direction was not visible; the invisible motion was an irrelevant background to the central task that engaged the subject's attention. Despite being below the threshold of visibility and being irrelevant to the central task, the repetitive exposure improved performance specifically for the direction of the exposed motion when tested in a subsequent suprathreshold test. These results suggest that a frequently presented feature sensitizes the visual system merely owing to its frequency, not its relevance or salience.

[594] Watanabe, T., Nanez J. E., & Sasaki Y. (2001).  Perceptual learning without perception. Nature. 413(6858), 844 - 848.

http://www.nature.com/nsu/011025/011025-12.html
http://tinyurl.com/ix98

Visual memory better than previously thought

Why is it that you can park your car at a huge mall and find it a few hours later without much problem, or make your way through a store you have never been to before? The answer may lie in our ability to build up visual memories of a scene in a short period of time. A new study counters current thinking that visual memory is generally poor and that people quickly forget the details of what they have seen. It appears that even with very limited visual exposure to a scene, people are able to build up strong visual memories and, in fact, their recall of objects in the scene improved with each exposure. It is suggested these images aren't stored in short-term or long-term memory, but in medium-term memory, which lasts for a few minutes and appears to be specific to visual information as opposed to verbal or semantic information. "Medium-term memory depends on the visual context of the scene, such as the background, furniture and walls, which seems to be key in the ability to keep in mind the location and identity of objects. These disposable accumulated visual memories can be recalled in a few minutes if faced with that scene again, but are discarded in a day or two if the scene is not viewed again so they don't take up valuable memory space."

Melcher, D. 2001. Persistence of visual memory for scenes. Nature, 412 (6845), 401.

http://www.eurekalert.org/pub_releases/2001-07/rtsu-rrf072501.php

Color

Object perception rests on groups of neurons that respond to specific attributes.

New imaging techniques used on macaque monkeys explain why we find it so easy to scan many items quickly when we’re focused on one attribute, and how we can be so blind to attributes and objects we’re not focused on.

The study reveals that a region of the visual cortex called V4, which is involved in visual object recognition, shows extensive compartmentalization. There are areas for specific colors; areas for specific orientations, such as horizontal or vertical. Other groups of neurons are thought to process more complex aspects of color and form, such as integrating different contours that are the same color, to achieve overall shape perception.

[1998] Tanigawa, H., Lu H. D., & Roe A. W. (2010).  Functional organization for color and orientation in macaque V4. Nat Neurosci. 13(12), 1542 - 1548.

Older news items (pre-2010) brought over from the old website

Which color boosts brain performance depends on task

Previous research has produced contradictory results as to which color helps memory the most: some have said blue or green; others red. A series of six experiments has found that the answer depends on the task. Red boosted performance on detail-oriented tasks such as memory retrieval and proofreading by as much as 31% compared to blue, while blue environmental cues produced significantly more creativity in such tasks as brainstorming. The effects are thought to be due to learned associations, such that red is associated with danger, mistakes and caution, while blue is associated with calm and openness. The study also found that these effects carry over to consumer packaging and advertising.

[1405] Mehta, R., & Zhu R.(J.) (2009).  Blue or Red? Exploring the Effect of Color on Cognitive Task Performances. Science. 323(5918), 1226 - 1229.

http://www.eurekalert.org/pub_releases/2009-02/uobc-cbb020409.php

Why are uniforms uniform? Because color helps us track objects

Laboratory tests have revealed that humans can pay attention to only 3 objects at a time. Yet there are instances in the real world — for example, in watching a soccer match — when we certainly think we are paying attention to more than 3 objects. Are we wrong? No. A new study shows how we do it — it’s all in the color coding. People can focus on more than three items at a time if those items share a common color. But, logically enough, no more than 3 color sets.

[927] Halberda, J., Sires S. F., & Feigenson L. (2006).  Multiple spatially overlapping sets can be enumerated in parallel. Psychological Science: A Journal of the American Psychological Society / APS. 17(7), 572 - 576.

http://www.eurekalert.org/pub_releases/2006-06/jhu-wau062106.php

Scenes in natural color remembered better than black and white

In a series of experiments, subjects were found to remember photographs of colored natural scenes significantly better than black and white images, regardless of how long they saw the images. Falsely colored natural scenes were remembered no better than scenes in black and white. If shown the images in color but tested on them in black and white (and vice versa), the images were not remembered as well. It may be that color helps by providing an extra 'tag' on the stored memory code.

[341] Wichmann, F. A., Sharpe L. T., & Gegenfurtner K. R. (2002).  The Contributions of Color to Recognition Memory for Natural Scenes. Journal of Experimental Psychology: Learning, Memory, and Cognition. 28(3), 509 - 520.

Spatial memory

Evidence against an evolutionary explanation for male superiority in spatial ability comes from a review of 35 studies covering 11 species: cuttlefish, deer mice, horses, humans, laboratory mice, meadow voles, pine voles, prairie voles, rats, rhesus macaques and talas tuco-tucos (a type of burrowing rodent). In eight species, males demonstrated moderately superior spatial skills to their female counterparts, regardless of the size of their territories or the extent to which males ranged farther than females of the same species.

The findings lend support to an alternative theory: that the tendency for males to be better at spatial navigation may just be a "side effect" of testosterone.

http://phys.org/news/2013-02-males-superior-spatial-ability-evolutionary.html

[3315] Clint, E. K., Sober E., Garland Jr. T., & Rhodes J. S. (2012).  Male Superiority in Spatial Navigation: Adaptation or Side Effect?. The Quarterly Review of Biology. 87(4), 289 - 313.

Full text available at http://www.jstor.org/stable/10.1086/668168

The most popular format of the most common type of diagram in biology textbooks is more difficult to understand than formats that use different orientations.

A study into how well students understand specific diagrams reminds us that, while pictures may be worth 1000 words, even small details can make a significant difference to how informative they are.

The study focused on variously formatted cladograms (also known as phylogenetic trees) that are commonly used in high school and college biology textbooks. Such diagrams are hierarchically branching, and are typically used to show the evolutionary history of taxa.

Nineteen college students (most of whom were women), who were majoring in biology, were shown cladograms in sequential pairs and asked whether the second cladogram (a diagonal one) depicted relationships that were the same or different as those depicted in the first cladogram (a rectangular one). Taxa were represented by single letters, which were either in forward or reverse alphabetical order. Each set (diagonal and rectangular) had four variants: up to the right (UR) with forward letters; UR with reverse letters; down to the right (DR), forward letters; DR, reverse. Six topologies were used, creating 24 cladograms in each set. Eye-tracking showed how the students studied the diagrams.

The order of the letters turned out not to matter, but the way the diagrams were oriented made a significant difference to how well students understood them.

In line with our training in reading (left to right), and regardless of orientation, students scanned the diagrams from left to right. The main line of the cladogram (the “backbone”) also provided a strong visual cue to the direction of scanning (upward or downward). In conjunction with the left-right bias, this meant that UR cladograms were processed from bottom to top, while DR cladograms were processed from top to bottom.

Put like that, the results are less surprising. Diagonal cladograms going up to the right were significantly harder for students to match to the rectangular format (63% correct vs 70% for cladograms going down to the right).

Moreover, this was true even for experts. Of the two biology professors included in the study, one showed the same pattern as the students in terms of accuracy, while the other managed the translations accurately enough, but took significantly longer to interpret the UR diagrams than the DR ones.

Unfortunately, the upward orientation is the more widely used (82% of diagonal cladograms in a survey of 27 high school & college biology textbooks; diagonal cladograms comprised 72% of all diagrams).

The findings suggest that teachers need to teach their students to go against their own natural inclinations, and regardless of orientation, scan the tree in a downward direction. This strategy applies to rectangular cladograms as well as diagonal ones.

It’s worth emphasizing another aspect of these findings: even the best type of diagonal cladogram was only translated at a relatively poor level of accuracy. Previous research has suggested that the diagonal cladogram is significantly harder to understand than the rectangular format. Note that the only difference between them is the orientation.

All this highlights two points:

  • Even apparently minor aspects of a diagram can make a significant difference to how easily it’s understood.
  • Teachers shouldn’t assume that students ‘naturally’ know how to read a diagram.

Novick, L., Stull, A. T., & Catley, K. M. (2012). Reading Phylogenetic Trees: The Effects of Tree Orientation and Text Processing on Comprehension. BioScience, 62(8), 757–764. doi:10.1525/bio.2012.62.8.8

Catley, K., & Novick, L. (2008). Seeing the wood for the trees: An analysis of evolutionary diagrams in biology textbooks. BioScience, 58(10), 976–987. Retrieved from http://www.jstor.org/stable/10.1641/B581011
 

A review has concluded that spatial training produces significant improvement, particularly for poorer performers, and that such training could significantly increase STEM achievement.

Spatial abilities have been shown to be important for achievement in STEM subjects (science, technology, engineering, math), but many people have felt that spatial skills are something you’re either born with or not.

In a comprehensive review of 217 research studies on educational interventions to improve spatial thinking, researchers concluded that you can indeed improve spatial skills, and that such training can transfer to new tasks. Moreover, not only can the right sort of training improve spatial skill in general, and across age and gender, but the effect of training appears to be stable and long-lasting.

One interesting finding (the researchers themselves considered it perhaps the most important finding) was the diversity in effective training — several different forms of training can be effective in improving spatial abilities. This may have something to do with the breadth covered by the label ‘spatial ability’, which includes such skills as:

  • Perceiving objects, paths, or spatial configurations against a background of distracting information;
  • Piecing together objects into more complex configurations, visualizing and mentally transforming objects;
  • Understanding abstract principles, such as horizontal invariance;
  • Visualizing an environment in its entirety from a different position.

The review compared three types of training:

  • Video games (24 studies)
  • Semester-long instructional courses on spatial reasoning (42 studies)
  • Practical training, often in a lab, that involved practicing spatial tasks, strategic instruction, or computerized lessons (138 studies).

The first two are examples of indirect training, while the last involves direct training.

On average, taken across the board, training improved performance by well over half a standard deviation when considered on its own, and still almost one half of a standard deviation when compared to a control group. This is a moderately large effect, and it extended to transfer tasks.

It also conceals a wide range, most of which is due to different treatment of control groups. Because the retesting effect is so strong in this domain (if you give any group a spatial test twice, regardless of whether they’ve been training in between the two tests, they’re going to do better on the second test), repeated testing can have a potent effect on the control group. Some ‘filler’ tasks can also inadvertently improve the control group’s performance. All of this will reduce the apparent effect of training. (Not having a control group is even worse, because you don’t know how much of the improvement is due to training and how much to the retesting effect.)
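
To make this concrete, here is a minimal sketch (with made-up numbers, not the review’s data) of how the choice of comparison changes a standardized effect size such as Cohen’s d:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) effects, in standard-deviation units:
retest_gain = 0.30     # improvement from simply taking the test a second time
training_gain = 0.55   # improvement attributable to the training itself

# Post-test scores, relative to a pre-test mean of 0 and SD of 1.
trained = rng.normal(training_gain + retest_gain, 1.0, 200)
control = rng.normal(retest_gain, 1.0, 200)  # retested but untrained

def cohens_d(a, b):
    """Standardized mean difference, using a pooled standard deviation."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

print(trained.mean())              # ~0.85: the gain 'on its own' (vs pre-test)
print(cohens_d(trained, control))  # ~0.55: the gain against a retested control
```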

This caution is, of course, more support for the value of practice in developing spatial skills. This is further reinforced by studies that were omitted from the analysis because they would skew the data. Twelve studies found very high effect sizes — more than three times the average size of the remaining studies. All these studies took place in less developed countries (those ranked outside the top 30 on the Human Development Index at the time of the study) — Malaysia, Turkey, China, India, and Nigeria. HDI ranking was even associated with the benefits of training in a dose-dependent manner — that is, the lower the standard of living, the greater the benefit.

This finding is consistent with other research indicating that lower socioeconomic status is associated with larger responses to training or intervention.

In a similar vein, when the review compared 19 studies that specifically selected participants who scored poorly on spatial tests against the other studies, they found that the effects of training were significantly bigger among the selected studies.

In other words, those with poorer spatial skills will benefit most from training. It may be, indeed, that they are poor performers precisely because they have had little practice at these tasks — a question that has been much debated (particularly in the context of gender differences).

It’s worth noting that there was little difference in performance on tests carried out immediately after training ended, within a week, or within a month, indicating promising stability.

A comparison of different types of training did find that some skills were more resistant to training than others, but all types of spatial skill improved. The differences may be because some sorts of skill are harder to teach, and/or because some skills are already more practiced than others.

Given the demonstrated difficulty in increasing working memory capacity through training, it is intriguing to notice one example the researchers cite: experienced video game players have been shown to perform markedly better on some tasks that rely on spatial working memory, such as a task requiring you to estimate the number of dots shown in a brief presentation. Most of us can instantly recognize (‘subitize’) up to five dots without needing to count them, but video game players can typically subitize some 7 or 8. The extent to which this generalizes to a capacity to hold more elements in working memory is one that needs to be explored. Video game players also apparently have a smaller attentional blink, meaning that they can take in more information.

A more specific practical example of training they give is that of a study in which high school physics students were given training in using two- and three-dimensional representations over two class periods. This training significantly improved students’ ability to read a topographical map.

The researchers suggest that the size of training effect could produce a doubling of the number of people with spatial abilities equal to or greater than that of engineers, and that such training might lower the dropout rate among those majoring in STEM subjects.

Apart from that, I would argue many of us who are ‘spatially-challenged’ could benefit from a little training!

  • Fifth grade students' understanding of fractions and division predicted high school students' knowledge of algebra and overall math achievement.
  • School entrants’ spatial skills predicted later number sense and estimation skills.
  • Gender differences in math performance may rest in part on differences in retrieval practice.
  • ‘Math’ training for infants may be futile, given new findings that they’re unable to integrate two mechanisms for number estimation.

Grasp of fractions and long division predicts later math success

One possible approach to improving mathematics achievement comes from a recent study finding that fifth graders' understanding of fractions and division predicted high school students' knowledge of algebra and overall math achievement, even after statistically controlling for parents' education and income and for the children's own age, gender, I.Q., reading comprehension, working memory, and knowledge of whole number addition, subtraction and multiplication.

The study compared two nationally representative data sets, one from the U.S. and one from the United Kingdom. The U.S. set included 599 children who were tested in 1997 as 10-12 year-olds and again in 2002 as 15-17-year-olds. The set from the U.K. included 3,677 children who were tested in 1980 as 10-year-olds and in 1986 as 16-year-olds.

You can watch a short video of Siegler discussing the study and its implications at http://youtu.be/7YSj0mmjwBM.

Spatial skills improve children’s number sense

More support for the idea that honing spatial skills leads to better mathematical ability comes from a new children’s study.

The study found that first- and second-graders with the strongest spatial skills at the beginning of the school year showed the most improvement in their number line sense over the course of the year. Similarly, in a second experiment, not only were those children with better spatial skills at 5 ½ better on a number-line test at age 6, but this number line knowledge predicted performance on a math estimation task at age 8.

Hasty answers may make boys better at math

A study following 311 children from first to sixth grade has revealed gender differences in their approach to math problems. The study used single-digit addition problems, and focused on the strategy of directly retrieving the answer from long-term memory.

Accurate retrieval in first grade was associated with working memory capacity and intelligence, and predicted a preference for direct retrieval in second grade. However, at later grades the relation reversed, such that preference in one grade predicted accuracy and speed in the next grade.

Unlike girls, boys consistently preferred to use direct retrieval, favoring speed over accuracy. In the first and second grades, this was seen in boys giving more answers in total, and more wrong answers. Girls, on the other hand, were right more often, but responded less often and more slowly. By sixth grade, however, the boys’ practice was paying off, and they were both answering more problems and getting more correct.

In other words, while ability was a factor in early skilled retrieval, the feedback loop of practice and skill leads to practice eventually being more important than ability — and the relative degrees of practice may underlie some of the gender differences in math performance.

The findings also add weight to the view being increasingly expressed, that mistakes are valuable and educational approaches that try to avoid mistakes (e.g., errorless learning) should be dropped.

Infants can’t compare big and small groups

Our brains process large and small numbers of objects using two different mechanisms, seen in the ability to estimate numbers of items at a glance and the ability to visually track small sets of objects. A new study indicates that at age one, infants can’t yet integrate those two processes. Accordingly, while they can choose the larger of two sets of items when both sets are larger or smaller than four, they can’t distinguish between a large (above four) and small (below four) set.

In the study, infants consistently chose two food items over one and eight items over four, but chose randomly when asked to compare two versus four and two versus eight.

The researchers suggest that educational programs that claim to give children an advantage by teaching them arithmetic at an early age are unlikely to be effective for this reason.

While sports training benefits the spatial skills of both men and women, music training closes the gender gap by only helping women.

I talked recently about how the well-established difference in spatial ability between men and women apparently has a lot to do with confidence. I also mentioned in passing that previous research has shown that training can close the gender gap. A recent study suggests that this training may not have to be specific to spatial skills.

In the German study, 120 students were given a processing speed test and a standard mental rotation test. The students were evenly divided into three groups: musicians, athletes, and education students who didn’t participate in either sports or music.

While the expected gender gap was found among the education students, the gap was smaller among the sports students, and non-existent in the music students.

Among the education students, men got twice as many rotation problems correct as women. Among the sports students, both men and women did better than their peers in education, but since they were both about equally advantaged, a gender gap was still maintained. However, among the musicians, it was only women who benefited, bringing them up to the level of the men.

Thus, for males, athletes did best on mental rotation; for females, musicians did best.

Although it may be that those who went into music or sports had relevant “natural abilities”, the amount of training in sports/music did have a significant effect. Indeed, analysis found that the advantage of sports and music students disappeared when hours of practice and years of practicing were included.

Interestingly, too, there was an effect of processing speed. Although overall the three groups didn’t differ in processing speed, male musicians had slower processing speed than either female musicians or male athletes (these two groups did not differ significantly from each other).

It is intriguing that music training should only benefit females’ spatial abilities. However, I’m reminded that in research showing how a few hours of video game training can help females close the gender gap, females benefited from the training far more than men. The obvious conclusion is that the males already had sufficient experience, and a few more hours were neither here nor there. Perhaps the question should rather be: why does sports practice benefit males’ spatial skills? A question that seems to point to the benefits for processing speed, but then we have to ask why sports didn’t have the same effect on women. One possible answer here is that the women had engaged in sports for a significantly shorter time (an average of 10.6 years vs 17.55), meaning that the males tended to begin their sports training at a much younger age. There was no such difference among the musicians.

(For more on spatial memory, see the aggregated news reports)

Pietsch, S., & Jansen, P. (2012). Different mental rotation performance in students of music, sport and education. Learning and Individual Differences, 22(1), 159-163. Elsevier Inc. doi:10.1016/j.lindif.2011.11.012

A series of experiments has found that confidence fully accounted for women’s poorer performance on a mental rotation task.

One of the few established cognitive differences between men and women lies in spatial ability. But in recent years, this ‘fact’ has been shaken by evidence that training can close the gap between the genders. In this new study, 545 students were given a standard 3D mental rotation task while the researchers manipulated their confidence levels.

In the first experiment, 70 students were asked to rate their confidence in each answer. They could also choose not to answer. Confidence level was significantly correlated with performance both between and within genders.

On the face of it, these findings could be explained, of course, by the ability of people to be reliable predictors of their own performance. However, the researchers claim that regression analysis shows clearly that when the effect of confidence was taken into account, gender differences were eliminated. Moreover, gender significantly predicted confidence.
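
To illustrate the logic (this is a simulated sketch, not the study’s actual data or analysis): if confidence mediates the gender difference, the gender coefficient should shrink toward zero once confidence enters the regression.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 545  # matching the study's sample size, but the data here are simulated

male = rng.integers(0, 2, n)                    # 1 = male, 0 = female
confidence = 0.8 * male + rng.normal(0, 1, n)   # assume gender predicts confidence
score = 0.9 * confidence + rng.normal(0, 1, n)  # assume confidence predicts score

df = pd.DataFrame({"male": male, "confidence": confidence, "score": score})

# Gender alone shows a sizeable 'effect' on mental rotation scores...
print(smf.ols("score ~ male", data=df).fit().params["male"])
# ...which collapses toward zero once confidence enters the model.
print(smf.ols("score ~ male + confidence", data=df).fit().params["male"])
```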

But of course this is still just indicative.

In the next experiment, however, the researchers tried to reduce the effect of confidence. One group of 87 students followed the same procedure as in the first experiment (“omission” group), except they were not asked to give confidence ratings. Another group of 87 students was not permitted to miss out any questions (“commission” group). The idea here was that confidence underlay the choice of whether or not to answer a question, so while the first group should perform similarly to those in the first experiment, the second group should be less affected by their confidence level.

This is indeed what was found: men significantly outperformed women in the first condition, but didn’t in the second condition. In other words, it appears that the mere possibility of not answering makes confidence an important factor.

In the third experiment, 148 students replicated the commission condition of the second experiment with the additional benefit of being allowed unlimited time. Half of the students were required to give confidence ratings.

The advantage of unlimited time improved performance overall. More importantly, the results confirmed those produced earlier: confidence ratings produced significant gender differences; there were no gender differences in the absence of such ratings.

In the final experiment, 153 students were required to complete an intentionally difficult line judgment task, which men and women both carried out at near chance levels. They were then randomly informed that their performance had been either above average (‘high confidence’) or below average (‘low confidence’). Having manipulated their confidence, the students were then given the standard mental rotation task (omission version).

As expected (remember this is the omission procedure, where subjects could miss out answers), significant gender differences were found. But there was also a significant difference between the high and low confidence groups. That is, telling people they had performed well (or badly) on the first task affected how well they did on the second. Importantly, women in the high confidence group performed as well as men in the low confidence group.

A comparison of the brains of London taxi drivers before and after their lengthy training shows clearly that the increase in hippocampal gray matter develops with training, but this may come at the expense of other brain functions.

The evidence that adult brains could grow new neurons was a game-changer, and has spawned all manner of products to try and stimulate such neurogenesis, to help fight back against age-related cognitive decline and even dementia. An important study in the evidence for the role of experience and training in growing new neurons was Maguire’s celebrated study of London taxi drivers, back in 2000.

The small study, involving 16 male, right-handed taxi drivers with an average experience of 14.3 years (range 1.5 to 42 years), found that the taxi drivers had significantly more grey matter (neurons) in the posterior hippocampus than matched controls, while the controls showed relatively more grey matter in the anterior hippocampus. Overall, these balanced out, so that the volume of the hippocampus as a whole wasn’t different for the two groups. The volume in the right posterior hippocampus correlated with the amount of experience the driver had (the correlation remained after age was accounted for).
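
For the statistically minded, here is a small simulated sketch (not Maguire’s data) of what “the correlation remained after age was accounted for” means: a partial correlation, computed from the residuals after regressing each variable on age.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200  # many more 'drivers' than the real study's 16, for a stable picture

age = rng.normal(45, 8, n)
experience = 0.6 * (age - 30) + rng.normal(0, 3, n)  # older drivers: more years on the job
volume = 0.5 * experience + rng.normal(0, 3, n)      # assumed link: experience grows gray matter

def residuals(y, x):
    """What's left of y after removing a linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

raw_r = np.corrcoef(experience, volume)[0, 1]
partial_r = np.corrcoef(residuals(experience, age), residuals(volume, age))[0, 1]
# If partial_r stays high, age alone can't explain the experience-volume link.
print(raw_r, partial_r)
```

Regressing both variables on age and then correlating the residuals is one standard way to compute a partial correlation.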

The posterior hippocampus is preferentially involved in spatial navigation. The fact that only the right posterior hippocampus showed an experience-linked increase suggests that the right and left posterior hippocampi are involved in spatial navigation in different ways. The decrease in anterior volume suggests that the need to store increasingly detailed spatial maps brings about a reorganization of the hippocampus.

But (although the experience-related correlation is certainly indicative) it could be that those who manage to become licensed taxi drivers in London are those who have some innate advantage, evidenced in a more developed posterior hippocampus. Only around half of those who go through the strenuous training program succeed in qualifying — London taxi drivers are unique in the world for being required to pass through a lengthy training period and pass stringent exams, demonstrating their knowledge of London’s 25,000 streets and their idiosyncratic layout, plus 20,000 landmarks.

In this new study, Maguire and her colleague made a more direct test of this question. 79 trainee taxi drivers and 31 controls took cognitive tests and had their brains scanned at two time points: at the beginning of training, and 3-4 years later. Of the 79 would-be taxi drivers, only 39 qualified, giving the researchers three groups to compare.

There were no differences in cognitive performance or brain scans between the three groups at time 1 (before training). At time 2 however, when the trainees had either passed the test or failed to acquire the Knowledge, those trainees that qualified had significantly more gray matter in the posterior hippocampus than they had had previously. There was no change in those who failed to qualify or in the controls.

Unsurprisingly, both qualified and non-qualified trainees were significantly better at judging the spatial relations between London landmarks than the control group. However, qualified trainees – but not the trainees who failed to qualify – were worse than the other groups at recalling a complex visual figure after 30 minutes. Such a finding replicates previous findings with London taxi drivers. In other words, their improvement in spatial memory as it pertains to London seems to have come at a cost.

Interestingly, there was no detectable difference in the structure of the anterior hippocampus, suggesting that these changes develop later, in response to changes in the posterior hippocampus. However, the poorer performance on the complex figure test may be an early sign of changes in the anterior hippocampus that are not yet measurable with MRI.

The ‘Knowledge’, as it is known, provides a lovely real-world example of expertise. Unlike most other examples of expertise development (e.g. music, chess), it is largely unaffected by childhood experience (there may be some London taxi drivers who began deliberately working on their knowledge of London streets in childhood, but it is surely not common!); it is developed through a training program over a limited time period common to all participants; and its participants are of average IQ and education (average school-leaving age was around 16.7 years for all groups; average verbal IQ was around or just below 100).

So what underlies this development of the posterior hippocampus? If the qualified and non-qualified trainees were comparable in education and IQ, what determined whether a trainee would ‘build up’ his hippocampus and pass the exams? The obvious answer is hard work / dedication, and this is borne out by the fact that, although the two groups were similar in the length of their training period, those who qualified spent significantly more time training every week (an average of 34.5 hours a week vs 16.7 hours). Those who qualified also attended far more tests (an average of 15.6 vs 2.6).

While neurogenesis is probably involved in this growth within the posterior hippocampus, it is also possible that growth reflects increases in the number of connections, or in the number of glia. Most probably (I think), all are involved.

There are two important points to take away from this study. One is its clear demonstration that training can produce measurable changes in a brain region. The other is the indication that this development may come at the expense of other regions (and functions).

Two recent studies in embodied cognition show that hand movements and hand position are associated with less abstract thinking.

I always like studies about embodied cognition — that is, about how what we do physically affects how we think. Here are a couple of new ones.

The first study involved two experiments. In the first, 86 American college students were asked questions about gears in relation to each other. For example, “If five gears are arranged in a line, and you move the first gear clockwise, what will the final gear do?” The participants were videotaped as they talked their way through the problem. But here’s the interesting thing: half the students wore Velcro gloves attached to a board, preventing them from moving their hands. The control half were similarly prevented from moving their feet — giving them the same experience of restriction without the limitation on hand movement.

Those who gestured commonly used perceptual-motor strategies (simulation of gear movements) in solving the puzzles. Those who were prevented from gesturing, as well as those who chose not to gesture, used abstract, mathematical strategies much more often.

The second experiment confirmed the results with 111 British adults.

The findings are consistent with the hypothesis that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving.

That can be helpful, but not always. Even when we are solving problems that have to do with motion and space, more abstract strategies may sometimes be more efficient, and thus an inability to use the body may force us to come up with better strategies.

The other study is quite different. In this study, college students searched for a single letter embedded within images of fractals and other complex geometrical patterns. Some did this while holding their hands close to the images; others kept their hands in their laps, far from the images. This may sound a little wacky, but previous research has shown that perception and attention are affected by how close our hands are to an object. Items near our hands tend to take priority.

In the first experiment, eight randomly chosen images were periodically repeated 16 times, while the other 128 images were only shown once. The target letter was a gray “T” or “L”; the images were colorful.

As expected, finding the target letter was faster the more times the image had been presented. Hand position didn’t affect learning.

In the second experiment, a new set of students were shown the same shown-once images, but the eight repeated images each appeared in 16 versions that varied in their color components. In this circumstance, learning was slower when hands were held near the images. That is, people found it harder to recognize the commonalities among identical but differently colored patterns, suggesting they were too focused on the details to see the similarities.

These findings suggest that processing near the hands is biased toward item-specific detail. This is in keeping with earlier suggestions that the improvements in perception and attention near the hands are item-specific. It may indeed be that this increased perceptual focus is at the cost of higher-order function such as memory and learning. This would be consistent with the idea that there are two largely independent visual streams, one of which is mainly concerned with visuospatial operations, and the other of which is primarily for more cognitive operations (such as object identification).

All this may seem somewhat abstruse, but it is worryingly relevant in these days of hand-held technological devices.

The point of both these studies is not that one strategy (whether of hand movements or hand position) is wrong. What you need to take away is the realization that hand movements and hand position can affect the way you approach problems, and the things you perceive. Sometimes you want to take a more physical approach to a problem, or pick out the fine details of a scene or object — in these cases, moving your hands, or holding something in or near your hands, is a good idea. Other times you might want to take a more abstract/generalized approach — in these cases, you might want to step back and keep your body out of it.

A cross-cultural study finds a significant gender difference on a simple puzzle problem for one culture but no gender difference for another. The difference was only partly explained by education.

Here’s an intriguing approach to the long-standing debate about gender differences in spatial thinking. The study involved 1,279 adults from two cultural groups in India. One of these groups was patrilineal, the other matrilineal. The volunteers were given a wooden puzzle to assemble as quickly as they could.

Within the patrilineal group, men were on average 36% faster than women. Within the matrilineal group, however, there was no difference between the genders.

I have previously reported on studies showing how small amounts of spatial training can close the gap in spatial abilities between the genders. It has been argued that in our culture, males are directed toward spatial activities (construction such as Lego; later, video games) more than females are.

In this case, the puzzle was very simple. However, general education was clearly one factor mediating this gender difference. In the patrilineal group, males had an average 3.67 more years of education, while in the matrilineal group, men and women had the same amount of education. When education was included in the statistical analysis, a good part of the difference between the groups was accounted for — but not all.

While we can only speculate about the remaining cause, it is interesting to note that, among the patrilineal group, the gender gap was decidedly smaller among those who lived in households not wholly owned by males (in the matrilineal group, men are not allowed to own property, so this comparison cannot be made).

It is also interesting to note that the men in the matrilineal group were faster than the men in the patrilineal group. This is not a function of education differences, because education in the matrilineal group was slightly less than that of males in the patrilineal group.

None of the participants had experience with puzzle solving, and both groups had similar backgrounds, being closely genetically related and living in villages geographically close. Participants came from eight villages: four patrilineal and four matrilineal.

[2519] Hoffman, M., Gneezy U., & List J. A. (2011).  Nurture affects gender differences in spatial abilities. Proceedings of the National Academy of Sciences. 108(36), 14786 - 14788.

Playing Tetris shortly after a traumatic event reduced flashbacks, but playing a word-based quiz increased the number of flashbacks.

Following a study showing that playing Tetris after traumatic events could reduce memory flashbacks in healthy volunteers, two experiments have found that playing Tetris after viewing traumatic images significantly reduced flashbacks, while playing Pub Quiz Machine 2008 (a word-based quiz game) increased the frequency of flashbacks. In the experiments, volunteers were shown a film that included traumatic images of injury.

In the first experiment, after waiting for 30 minutes, 20 volunteers played Tetris for 10 minutes, 20 played Pub Quiz for 10 minutes and 20 did nothing. In the second experiment, this wait was extended to four hours, with 25 volunteers in each group.

In both experiments, those who played Tetris had significantly fewer flashbacks than the other two groups, and all groups were equally able to recall specific details of the film. Flashbacks were monitored for a week.

It is thought that with traumatic information, perceptual information is emphasized over conceptual information, meaning we are less likely to remember the experience of being in a high-speed road traffic collision as a coherent story, and more likely to remember it by the flash of headlights and the noise of a crash. This perceptual information then pops up repeatedly in the victim's mind in the form of flashbacks to the trauma, causing great emotional distress, because little conceptual meaning has been attached to it. If you experience other events that involve similar information during the time window in which the traumatic memories are being processed, that information will interfere with that processing.

Thus, the spatial tasks of Tetris (which involves moving and rotating shapes) are thought to compete with the images of trauma, while answering general knowledge questions in the Pub Quiz game competes with remembering the contextual meaning of the trauma, so the visual memories are reinforced and the flashbacks are increased.

A twin study suggests prenatal testosterone may be a factor in the innate male superiority in mental rotation*.

Because male superiority in mental rotation appears to be evident at a very young age, it has been suggested that testosterone may be a factor. To assess whether females exposed to higher levels of prenatal testosterone perform better on mental rotation tasks than females with lower levels of testosterone, researchers compared mental rotation task scores between twins from same-sex and opposite-sex pairs.

It was found that females with a male co-twin scored higher than did females with a female co-twin (there was no difference in scores between males from opposite-sex and same-sex pairs). Of course, this doesn’t prove that the differences are produced in the womb; it may be that girls with a male twin engage in more male-typical activities. However, the association remained after allowing for computer game playing experience.

The study involved 804 twins, average age 22, of whom 351 females were from same-sex pairs and 120 from opposite-sex pairs. There was no significant difference between females from identical same-sex pairs compared to fraternal same-sex pairs.

* Please do note that ‘innate male superiority’ does NOT mean that all men are inevitably better than all women at this very specific task! My words simply reflect the evidence that the tendency of males to be better at mental rotation is found in infants as young as 3 months.

Male superiority in mental rotation is the most-cited gender difference in cognitive abilities. A new study shows that the difference can be eliminated in 6-year-olds after a mere 8 weeks.

Following a monkey study that found training in spatial memory could raise females to the level of males, and human studies suggesting that video games might help reduce gender differences in spatial processing (see below for these), a new study shows that training in spatial skills can eliminate the gender difference in young children. Spatial ability, along with verbal skills, is one of the two most-cited cognitive differences between the sexes, for the reason that these two appear to be the most robust.

This latest study involved 116 first graders, half of whom were put in a training program that focused on expanding working memory, perceiving spatial information as a whole rather than concentrating on details, and thinking about spatial geometric pictures from different points of view. The other children took part in a substitute training program, as a control group. Initial gender differences in spatial ability disappeared for those who had been in the spatial training group after only eight weekly sessions.

Previously:

A study of 90 adult rhesus monkeys found young-adult males had better spatial memory than females, but peaked early. By old age, male and female monkeys had about the same performance. This finding is consistent with reports suggesting that men show greater age-related cognitive decline relative to women. A second study of 22 rhesus monkeys showed that in young adulthood, simple spatial-memory training did not help males but dramatically helped females, raising their performance to the level of young-adult males and wiping out the gender gap.

Another study showing that expert video gamers have improved mental rotation skills, visual and spatial memory, and multitasking skills has led researchers to conclude that training with video games may serve to reduce gender differences in visual and spatial processing, and some of the cognitive declines that come with aging.

Specialized neurons involved in spatial memory, previously identified in rodents, have now been found in humans, and appear to help with object location and autobiographical memory as well.

Rodent studies have demonstrated the existence of specialized neurons involved in spatial memory. These ‘grid cells’ represent where an animal is located within its environment, firing in patterns that show up as geometrically regular, triangular grids when plotted on a map of a navigated surface. Now for the first time, evidence for these cells has been found in humans. Moreover, those with the clearest signs of grid cells performed best in a virtual reality spatial memory task, suggesting that the grid cells help us to remember the locations of objects. These cells, located particularly in the entorhinal cortex, are also critical for autobiographical memory, and are amongst the first to be affected by Alzheimer's disease, perhaps explaining why getting lost is one of the most common early symptoms.

[378] Doeller, C. F., Barry C., & Burgess N. (2010).  Evidence for grid cells in a human memory network. Nature. 463(7281), 657 - 661.

Signers reveal that more complex language helps you find a hidden object, providing more support for the theory that language shapes how we think and perceive.

Because Nicaraguan Sign Language is only about 35 years old, and still evolving rapidly, the language used by the younger generation is more complex than that used by the older generation. This enables researchers to compare the effects of language ability on other abilities. A recent study found that younger signers (in their 20s) performed better than older signers (in their 30s) on two spatial cognition tasks that involved finding a hidden object. The findings provide more support for the theory that language shapes how we think and perceive.

[1629] Pyers, J. E., Shusterman A., Senghas A., Spelke E. S., & Emmorey K. (2010).  Evidence from an emerging sign language reveals that language supports spatial cognition. Proceedings of the National Academy of Sciences. 107(27), 12116 - 12120.

Older news items (pre-2010) brought over from the old website

Video games may help visuospatial processing and multitasking

Another study has come out showing that expert video gamers have improved mental rotation skills, visual and spatial memory, and multitasking skills. The researchers conclude that training with video games may serve to reduce gender differences in visual and spatial processing, and some of the cognitive declines that come with aging.

[366] Dye, M. W. G., Green S. C., & Bavelier D. (2009).  Increasing Speed of Processing With Action Video Games. Current Directions in Psychological Science. 18(6), 321 - 326.

http://www.eurekalert.org/pub_releases/2009-12/afps-rsa121709.php

The limited nature of the 'Mozart Effect'

The so-called ‘Mozart effect’ (which is far more limited than commonly reported in the popular press, and which argues that listening to Mozart can temporarily improve spatial abilities, such as mental rotation) has been found in some studies but not in others. Now a study of 50 musicians and 50 non-musicians may explain the inconsistent results. The study found that only non-musicians had their spatial processing skills improved by listening to Mozart — partly because the musicians were better at the mental rotation task to start with. The effect may have to do with non-musicians processing music and spatial information in the right hemisphere, while musicians tend to use both hemispheres. The effect may also be restricted to right-handed non-musicians — all the participants were right-handed, and left-handed people are more likely to process information in both hemispheres. And finally, the effect may be further restricted to some types of spatial task — the present study used the same task as originally used. So, what we can say is that right-handed non-musicians may temporarily improve their mental rotation skills by listening to Mozart.

[301] Aheadi, A., Dixon P., & Glover S. (2010).  A limiting feature of the Mozart effect: listening enhances mental rotation abilities in non-musicians but not musicians. Psychology of Music. 38(1), 107 - 117.

http://www.miller-mccune.com/news/mozart-effect-real-for-some-1394

Meditation technique can temporarily improve visuospatial abilities

And continuing on the subject of visual short-term memory, a study involving experienced practitioners of two styles of meditation, Deity Yoga (DY) and Open Presence (OP), has found that, although meditators performed similarly to nonmeditators on two types of visuospatial tasks (mental rotation and visual memory), when they did the tasks immediately after meditating for 20 minutes (while the nonmeditators rested or did something else), practitioners of the DY style of meditation showed a dramatic improvement compared to OP practitioners and controls. In other words, although the claim that regular meditation practice can increase your short-term memory capacity was not confirmed, it does appear that some forms of meditation can temporarily (and dramatically) improve it. Since the form of meditation that had this effect was one that emphasizes visual imagery, it does support the idea that you can improve your imagery and visual memory skills (even if you do need to ‘warm up’ before the improvement is evident).

[860] Kozhevnikov, M., Louchakova O., Josipovic Z., & Motes M. A. (2009).  The enhancement of visuospatial processing efficiency through Buddhist Deity meditation. Psychological Science: A Journal of the American Psychological Society / APS. 20(5), 645 - 653.

http://www.sciencedaily.com/releases/2009/04/090427131315.htm
http://www.eurekalert.org/pub_releases/2009-04/afps-ssb042709.php

Why it’s so hard to disrupt your routine

New research has added to our understanding of why we find it so hard to break a routine or overcome bad habits. The problem lies in the competition between the striatum and the hippocampus. The striatum is involved with habits and routines; for example, it records cues or landmarks that lead to a familiar destination. It’s the striatum that enables you to drive familiar routes without much conscious awareness. If you’re travelling an unfamiliar route, however, you need the hippocampus, which is much ‘smarter’. The mouse study found that when the striatum was disrupted, the mice had trouble navigating using landmarks, but they were actually better at spatial learning. When the hippocampus was disrupted, the converse was true. This may help us understand, and treat, certain mental illnesses in which patients have destructive, habit-like patterns of behavior or thought. Obsessive-compulsive disorder, Tourette syndrome, and drug addiction all involve abnormal function of the striatum. Cognitive-behavioral therapy may be thought of as trying to learn to use one of these systems to overcome and, ultimately, to re-train the other.

[931] Lee, A. S., Duman R. S., & Pittenger C. (2008).  A double dissociation revealing bidirectional competition between striatum and hippocampus during learning. Proceedings of the National Academy of Sciences. 105(44), 17163 - 17168.

http://www.eurekalert.org/pub_releases/2008-10/yu-ce102008.php

More light shed on how episodic memories are formed

A rat study has revealed more about the workings of the hippocampus. Previous studies have identified “place cells” in the hippocampus – neurons which become more active in response to a particular spatial location. Recording hippocampal activity while rats searched for food in a maze where the starting and ending points were varied has now revealed that, while some cells signaled location alone, others were also sensitive to recent or impending events – i.e., activation depended upon where the rat had just been or where it intended to go. This finding helps us understand how episodic memories are formed – how, for example, a spatial location can trigger a reminder of an intended action at a particular time, but not others.

[1136] Ferbinteanu, J., & Shapiro M. L. (2003).  Prospective and retrospective memory coding in the hippocampus. Neuron. 40(6), 1227 - 1239.

http://www.eurekalert.org/pub_releases/2003-12/msh-ta121503.php

More learned about how spatial navigation works in humans

Researchers monitored signals from individual brain cells as patients played a computer game in which they drove around a virtual town in a taxi, searching for passengers who appeared in random locations and delivering them to their destinations. Previous research has found specific cells in the brains of rodents that respond to “place”, but until now we haven’t known whether humans have such specific cells. This study identifies place cells (primarily found in the hippocampus), as well as “view” cells (responsive to landmarks; found mainly in the parahippocampal region) and “goal” cells (responsive to goals, found throughout the frontal and temporal lobes). Some cells respond to combinations of place, view and goal — for example, cells that responded to viewing an object only when that object was a goal.

[1019] Ekstrom, A. D., Kahana M. J., Caplan J. B., Fields T. A., Isham E. A., Newman E. L., et al. (2003).  Cellular networks underlying human spatial navigation. Nature. 425(6954), 184 - 188.

http://www.eurekalert.org/pub_releases/2003-09/uoc--vgu091003.php

Object & face recognition

Autobiographical memory is an interesting memory domain, given its inextricable association with identity. One particularly fascinating aspect of it is its unevenness - why do we remember so little from the first years of life ('childhood amnesia'), why do we remember some periods of our life so much more vividly than others? There are obvious answers (well, nothing interesting happened in those other times), but the obvious is not always correct. Intriguing, then, to read about a new study that links those memorable periods to self-identity. (Is that part of why little children remember so little? because their self is so undeveloped?)

Katy Waldman at Slate:

… a team of scientists from England’s University of Leeds devised a clever experiment. Noting that developmental psychologists have isolated the second and third decades as times of identity formation, they gathered a group of volunteers and tried to map the emergence of their self-perceptions. Participants were asked to complete 20 “I am” statements (e.g., “I am quick-tempered”; “I am a mother”). Then they were instructed to pick three statements and come up with 10 memories that seemed relevant to each. Finally, the volunteers were told to pinpoint as best they could the ages at which their three personality traits surfaced. If it’s true that we remember more assiduously during bursts of self-making—and that these self-making periods tend to span our late teens and early 20s—a few things should happen, the researchers reasoned. First, participants should frequently date the unfurling of their “I am” statements to young adulthood. Second, the memories they summoned to support each “I am” statement should constellate around the age at which they believed the “I am” statement started to apply.

That was exactly what transpired. A majority of the memories associated with a particular self-image came from the very same year that the self-image developed. It seemed clear that the more salient a past experience was to your identity, the more luminous it grew in your memory. And what turned out to be the median age at which all these traits and self-concepts were acquired? 22.9.

Slate article

A small study involving patients with TBI has found that the best learning strategies are ones that call on the self-schema rather than episodic memory, and the best involves self-imagination.

Some time ago, I reported on a study showing that older adults could improve their memory for a future task (remembering to regularly test their blood sugar) by picturing themselves going through the process. Imagination has been shown to be a useful strategy in improving memory (and also motor skills). A new study extends and confirms previous findings, by testing free recall and comparing self-imagination to more traditional strategies.

The study involved 15 patients with acquired brain injury who had impaired memory and 15 healthy controls. Participants memorized five lists of 24 adjectives that described personality traits, using a different strategy for each list. The five strategies were:

  • think of a word that rhymes with the trait (baseline),
  • think of a definition for the trait (semantic elaboration),
  • think about how the trait describes you (semantic self-referential processing),
  • think of a time when you acted out the trait (episodic self-referential processing), or
  • imagine acting out the trait (self-imagining).

For both groups, self-imagination produced the highest rates of free recall of the list (an average of 9.3 for the memory-impaired, compared to 3.2 using the baseline strategy; 8.1 vs 3.2 for the controls — note that the controls were given all 24 items in one list, while the memory-impaired were given 4 lists of 6 items).

Additionally, those with impaired memory did better using semantic self-referential processing than episodic self-referential processing (7.3 vs 5.7). In contrast, the controls did much the same in both conditions. This adds to the evidence that patients with brain injury often have a particular problem with episodic memory (memory for specific events). Episodic memory is also particularly affected in Alzheimer’s, as well as in normal aging and depression.

It’s also worth noting that all the strategies that involved the self were more effective than the two strategies that didn’t, for both groups (also, semantic elaboration was better than the baseline strategy).

The researchers suggest self-imagination (and semantic self-referential processing) might be of particular benefit for memory-impaired patients, by encouraging them to use information they can more easily access (information about their own personality traits, identity roles, and lifetime periods — what is termed the self-schema), and that future research should explore ways in which self-imagination could be used to support everyday memory tasks, such as learning new skills and remembering recent events.

A small study shows that an intensive program to help young children with autism not only improves cognition and behavior, but can also normalize brain activity for face processing.

The importance of early diagnosis for autism spectrum disorder has been highlighted by a recent study demonstrating the value of an educational program for toddlers with ASD.

The study involved 48 toddlers (18-30 months) diagnosed with autism and age-matched normally developing controls. Those with ASD were randomly assigned to participate in a two-year program called the Early Start Denver Model, or a standard community program.

The ESDM program involved two-hour sessions with trained therapists twice a day, five days a week. Parent training also enabled ESDM strategies to be used during daily activities. The program emphasizes interpersonal exchange, social attention, and shared engagement. It also includes training in face recognition, using individualized booklets of color photos of the faces of four familiar people.

The community program involved evaluation and advice, annual follow-up sessions, programs at Birth-to-Three centers and individual speech-language therapy, occupational therapy, and/or applied behavior analysis treatments.

All of those in the ESDM program were still participating at the end of the two years, compared to 88% of the community program participants.

At the end of the program, children were assessed on various cognitive and behavioral measures, as well as brain activity.

Compared with children who participated in the community program, children who received ESDM showed significant improvements in IQ, language, adaptive behavior, and autism diagnosis. Average verbal IQ for the ESDM group was 95 compared to an average 75 for the community group, and 93 vs 80 for nonverbal IQ. These are dramatically large differences, although it must be noted that individual variability was high.

Moreover, for the ESDM group, brain activity in response to faces was similar to that of normally-developing children, while the community group showed the pattern typical of autism (greater activity in response to objects compared to faces). This was associated with improvements in social behavior.

Again, there were significant individual differences. Specifically, 73% of the ESDM group, 53% of the control group, and 29% of the community group showed a pattern of faster response to faces. (Bear in mind, re the control group, that these children are all still quite young.) It should also be borne in mind that it was difficult to get usable EEG data from many of the children with ASD; these results come from only 60% of the children with ASD.

Nevertheless, the findings are encouraging for parents looking to help their children.

It should also be noted that, although obviously earlier is better, the findings don’t rule out benefits for older children or even adults. Relatively brief targeted training in face recognition has been shown to affect brain activity patterns in adults with ASD.

[3123] Dawson, G., Jones E. J. H., Merkle K., Venema K., Lowy R., Faja S., et al. (2012).  Early Behavioral Intervention Is Associated With Normalized Brain Activity in Young Children With Autism. Journal of the American Academy of Child & Adolescent Psychiatry. 51(11), 1150 - 1159.

Faces of people about whom something negative was known were perceived more quickly than faces of people about whom nothing, or something positive or neutral, was known.

Here’s a perception study with an intriguing twist. In my recent round-up of perception news I spoke of how images with people in them were more memorable, and of how some images ‘jump out’ at you. This study showed a different image to each participant’s left and right eye at the same time, creating a contest between them (a technique known as binocular rivalry). The amount of time it takes the participant to report seeing each image indicates the relative priority granted by the brain.

So, 66 college students were shown faces of people, and told something ‘gossipy’ about each one. The gossip could be negative, positive or neutral — for example, the person “threw a chair at a classmate”; “helped an elderly woman with her groceries”; “passed a man on the street.” These faces were then shown to one eye while the other eye saw a picture of a house.

The students had to press one button when they could see a face and another when they saw a house. As a control, some faces were used that the students had never seen. The students took the same length of time to register seeing the unknown faces and those about which they had been told neutral or positive information, but pictures of people about whom they had heard negative information registered around half a second quicker, and were looked at for longer.
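To make the measure concrete, here is a minimal sketch of how time-to-dominance might be compared across conditions. The numbers are invented for illustration and are not the study’s data:

    import numpy as np

    # Hypothetical times (in seconds) for a face to win the binocular
    # contest and be consciously reported, grouped by the kind of gossip
    # attached to each face. All values are illustrative only.
    dominance_times = {
        "negative": [1.0, 0.9, 1.2, 1.1],
        "positive": [1.5, 1.6, 1.4, 1.5],
        "neutral":  [1.5, 1.4, 1.6, 1.5],
        "novel":    [1.6, 1.5, 1.4, 1.5],
    }

    # A lower mean time indicates higher priority granted by the visual system.
    for condition, times in dominance_times.items():
        print(f"{condition:>8}: mean {np.mean(times):.2f} s")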

A second experiment confirmed the findings, showing that subjects saw faces linked to negative gossip for longer periods than faces linked to upsetting personal experiences.

[2283] Anderson, E., Siegel E. H., Bliss-Moreau E., & Barrett L. F. (2011).  The Visual Impact of Gossip. Science. 332(6036), 1446 - 1448.

New research confirms the role of experience in the other race effect, and shows how easily the problem in discriminating faces belonging to other races might be prevented.

Our common difficulty in recognizing faces that belong to races other than our own (or more specifically, those we have less experience of) is known as the Other Race Effect. Previous research has revealed that six-month-old babies show no signs of this bias, but by nine months, their ability to discriminate faces has narrowed to the races they see around them.

Now, an intriguing study has looked into whether infants can be trained in such a way that they can maintain the ability to process other-race faces. The study involved 32 six-month-old Caucasian infants, who were shown picture books that contained either Chinese (training group) or Caucasian (control group) faces. There were eight different books, each containing either six female faces or six male faces (with names). Parents were asked to present the pictures in the book to their child for 2–3 minutes every day for 1 week, then every other day for the next week, and then less frequently (approximately once every 6 days) following a fixed schedule of exposures during the 3-month period (equating to approximately 70 minutes of exposure overall).

When tested at nine months, there were significant differences between the two groups that indicated that the group who trained on the Chinese faces had maintained their ability to discriminate Chinese faces, while those who had trained on the Caucasian faces had lost it (specifically, they showed no preference for novel or familiar faces, treating them both the same).

It’s worth noting that the babies generalized from the training pictures, all of which showed the faces in the same “passport photo” type pose, to a different orientation (three-quarter pose) during test trials. This finding indicates that infants were actually learning the face, not simply an image.

Evidence that illiterates use a brain region involved in reading for face processing to a greater extent than readers do, suggests that reading may have hijacked the network used for object recognition.

An imaging study of 10 illiterates, 22 people who learned to read as adults and 31 who did so as children, has confirmed that the visual word form area (involved in linking sounds with written symbols) showed more activation in better readers, although everyone had similar levels of activation in that area when listening to spoken sentences. More importantly, it also revealed that this area was much less active among the better readers when they were looking at pictures of faces.

Other changes in activation patterns were also evident (for example, readers showed greater activation in the planum temporale in response to spoken speech), and most of the changes occurred even among those who acquired literacy in adulthood, showing that this brain restructuring doesn’t depend on a particular time-window.

The finding of competition between face and word processing is consistent with the researcher’s theory that reading may have hijacked a neural network used to help us visually track animals, and raises the intriguing possibility that our face-perception abilities suffer in proportion to our reading skills.

It seems that prosopagnosia can be, along with perfect pitch and eidetic memory, an example of what happens when your brain can’t abstract the core concept.

‘Face-blindness’ — prosopagnosia — is a condition I find fascinating, perhaps because I myself have a touch of it (it’s now recognized that this condition represents the end of a continuum rather than being an either/or proposition). The intriguing thing about this inability to recognize faces is that, in its extreme form, it can nevertheless exist side-by-side with quite normal recognition of other objects.

Prosopagnosia that is not the result of brain damage often runs in families, and a study of three family members with this condition has revealed that in some cases at least, the inability to remember faces has to do with failing to form a mental representation that abstracts the essence of the face, sans context. That is, despite being fully able to read facial expressions, attractiveness and gender from the face (indeed one of the family members is an artist who has no trouble portraying fully detailed faces), they couldn’t cope with changes in lighting conditions and viewing angles.

I’m reminded of the phenomenon of perfect pitch, which is characterized by an inability to generalize across acoustically similar tones, so an A in a different key is a completely different note. Interestingly, like prosopagnosia, perfect pitch is now thought to be more common than previously believed (recognition of it is of course limited by the fact that some musical expertise is generally needed to reveal it). This inability to abstract or generalize is also a phenomenon of eidetic memory, and I have spoken before of the perils of this.

(Note: A fascinating account of what it is like to be face-blind, from a person with the condition, can be found at: http://www.choisser.com/faceblind/)

Providing support for a modular concept of the brain, a twin study has found that face recognition is heritable, and that it is inherited separately from IQ.

No surprise to me (I’m hopeless at faces), but a twin study has found that face recognition is heritable, and that it is inherited separately from IQ. The findings provide support for a modular concept of the brain, suggesting that some cognitive abilities, like face recognition, are shaped by specialist genes rather than generalist genes. The study used 102 pairs of identical twins and 71 pairs of fraternal twins aged 7 to 19 from Beijing schools to calculate that 39% of the variance between individuals on a face recognition task is attributable to genetic effects. In an independent sample of 321 students, the researchers found that face recognition ability was not correlated with IQ.

Zhu, Q. et al. 2010. Heritability of the specific cognitive ability of face perception. Current Biology, 20 (2), 137-142.
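For those wondering how a figure like 39% is extracted from twin data: the classic logic (a simplification; the researchers fitted formal variance-component models, and the correlations below are purely illustrative) compares how alike identical twins are with how alike fraternal twins are:

    % Falconer's estimate: identical (MZ) twins share ~100% of their genes,
    % fraternal (DZ) twins ~50%, so doubling the gap between their
    % correlations estimates the genetic share of the variance.
    h^2 \approx 2\,(r_{MZ} - r_{DZ})

So if, say, identical twins correlated at 0.55 on the face task and fraternal twins at 0.35, the heritability estimate would be 2 × (0.55 − 0.35) = 0.40, i.e. about 40% of the variance attributable to genes.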

Why are women better at recognizing faces? Apparently it has to do with using both sides of your brain, and homosexual men tend to do it too.

Why do women tend to be better than men at recognizing faces? Two recent studies give a clue, and also explain inconsistencies in previous research, some of which has found that face recognition mainly happens in the fusiform face area of the right hemisphere, and some that face recognition occurs bilaterally. One study found that, while men tended to process face recognition in the right hemisphere only, women tended to process the information in both hemispheres. Another study found that both women and gay men tended to use both sides of the brain to process faces (making them faster at retrieving faces), while heterosexual men tended to use only the right. It also found that homosexual males have better face recognition memory than heterosexual males and homosexual women, and that women have better face processing than men. Additionally, left-handed heterosexual participants had better face recognition abilities than left-handed homosexuals, and also tended to be better than right-handed heterosexuals. In other words, bilaterality (using both sides of your brain) seems to make you faster and more accurate at recognizing people, and bilaterality is less likely in right-handers and heterosexual males (and perhaps homosexual women). Previous research has shown that homosexual individuals are 39% more likely to be left-handed.

Proverbio AM, Riva F, Martin E, Zani A (2010) Face Coding Is Bilateral in the Female Brain. PLoS ONE 5(6): e11242. doi:10.1371/journal.pone.0011242

[1611] Brewster, P. W. H., Mullin C. R., Dobrin R. A., & Steeves J. K. E. (2010).  Sex differences in face processing are mediated by handedness and sexual orientation. Laterality: Asymmetries of Body, Brain and Cognition.

It’s well established that we are better at recognizing faces of our own racial group, but a new study shows that this ability disappears when we’re mildly intoxicated.

It’s well established that we are better at recognizing faces of our own racial group, but a new study shows that this ability disappears when we’re mildly intoxicated. The study tested about 140 university students of Western European and East Asian descent and found that recognition of different-race faces was unaffected by alcohol, yet both groups showed impaired recognition of own-race faces, bringing it down to about the same level of accuracy as for different-race faces. Those given a placebo drink were unaffected.

Older news items (pre-2010) brought over from the old website

Children recognize other children’s faces better than adults do

It is well known that people find it easier to distinguish between the faces of people from their own race, compared to those from a different race. It is also known that adults recognize the faces of other adults better than the faces of children. This may relate to holistic processing of the face (seeing the face as a whole rather than analyzing it feature by feature): it may be that we more easily recognize faces for which we have strong holistic ‘templates’. A new study has tested whether the same is true for children aged 8 to 13. The study found that children had stronger holistic processing for other children’s faces than adults did. This may reflect an own-age bias, but I’d love to see what happens with teachers, or any other adults who spend much of their time with many children.

[1358] Susilo, T., Crookes K., McKone E., & Turner H. (2009).  The Composite Task Reveals Stronger Holistic Processing in Children than Adults for Child Faces. PLoS ONE. 4(7), e6460 - e6460.

Full text at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0006460
http://dsc.discovery.com/news/2009/08/18/children-faces.html

Alcoholics show abnormal brain activity when processing facial expressions

Excessive chronic drinking is known to be associated with deficits in comprehending emotional information, such as recognizing different facial expressions. Now an imaging study of abstinent long-term alcoholics has found that they show decreased and abnormal activity in the amygdala and hippocampus when looking at facial expressions. They also show increased activity in the lateral prefrontal cortex, perhaps in an attempt to compensate for the failure of the limbic areas. The finding is consistent with other studies showing alcoholics invoking additional and sometimes higher-order brain systems to accomplish a relatively simple task at normal levels. The study compared 15 abstinent long-term alcoholics and 15 healthy, nonalcoholic controls, matched on socioeconomic backgrounds, age, education, and IQ.

[1044] Marinkovic, K., Oscar-Berman M., Urban T., O'Reilly C. E., Howard J. A., Sawyer K., et al. (2009).  Alcoholism and dampened temporal limbic activation to emotional faces. Alcoholism, Clinical and Experimental Research. 33(11), 1880 - 1892.

http://www.eurekalert.org/pub_releases/2009-08/ace-edc080509.php
http://www.eurekalert.org/pub_releases/2009-08/bumc-rfa081109.php

More insight into encoding of identity information

Different pictures of, say, Marilyn Monroe can evoke the same mental image; even hearing or reading her name can evoke the same concept. So how exactly does that work? A study in which pictures, spoken and written names were used has revealed that single neurons in the hippocampus and surrounding areas respond selectively to representations of the same individual regardless of the sensory cue. Moreover, this occurs very quickly, and not only for very familiar people: the same process was observed with the researcher’s image and name, although he had been unknown to the subject a day or two earlier. It also appears that the degree of abstraction reflects the hierarchical structure within the medial temporal lobe.

[1141] Quiroga, Q. R., Kraskov A., Koch C., & Fried I. (2009).  Explicit Encoding of Multimodal Percepts by Single Neurons in the Human Brain. Current Biology. 19(15), 1308 - 1313.

http://www.eurekalert.org/pub_releases/2009-07/uol-ols072009.php

Monkeys and humans use the same mechanism to recognize faces

The remarkable ability of humans to distinguish faces depends on sensitivity to unique configurations of facial features. One of the best demonstrations of this sensitivity comes from our difficulty in detecting changes in the orientation of the eyes and mouth in an inverted face, known as the Thatcher effect. A new study has revealed that this effect is also demonstrated among rhesus macaque monkeys, indicating that our skills in facial recognition date back 30 million years or more.

[1221] Adachi, I., Chou D. P., & Hampton R. R. (2009).  Thatcher Effect in Monkeys Demonstrates Conservation of Face Perception across Primates. Current Biology. 19(15), 1270 - 1273.

http://www.eurekalert.org/pub_releases/2009-06/eu-yri062309.php

Face recognition may vary more than thought

We know that "face-blindness" (prosopagnosia) may afflict as many as 2%, but until now it’s been thought that either a person has ‘normal’ face recognition skills, or they have a recognition disorder. Now for the first time a new group has been identified: those who are "super-recognizers", who have a truly remarkable ability to recognize faces, even those only seen in passing many years earlier. The finding suggests that these two abnormal groups are merely the ends of a spectrum — that face recognition ability varies widely.

[1140] Russell, R., Duchaine B., & Nakayama K. (2009).  Super-recognizers: people with extraordinary face recognition ability. Psychonomic Bulletin & Review. 16(2), 252 - 257.

http://www.eurekalert.org/pub_releases/2009-05/hu-we051909.php

Oxytocin improves human ability to recognize faces but not places

The breastfeeding hormone oxytocin has been found to increase social behaviors like trust. A new study has found that a single dose of an oxytocin nasal spray resulted in improved recognition memory for faces, but not for inanimate objects, suggesting that different mechanisms exist for social and nonsocial memory. Further analysis showed that oxytocin selectively improved the discrimination of new and familiar faces — participants with oxytocin were less likely to mistakenly characterize unfamiliar faces as familiar.

[897] Rimmele, U., Hediger K., Heinrichs M., & Klaver P. (2009).  Oxytocin Makes a Face in Memory Familiar. J. Neurosci.. 29(1), 38 - 42.

http://www.eurekalert.org/pub_releases/2009-01/sfn-hii010509.php

Insight into 'face blindness'

An imaging study has finally managed to see a physical difference in the brains of those with congenital prosopagnosia (face blindness): reduced connectivity in the region that processes faces. Specifically, a reduction in the integrity of the white matter tracts in the ventral occipito-temporal cortex, the extent of which was related to the severity of the impairment.

[1266] Thomas, C., Avidan G., Humphreys K., Jung K. -jin, Gao F., & Behrmann M. (2009).  Reduced structural connectivity in ventral visual cortex in congenital prosopagnosia. Nat Neurosci. 12(1), 29 - 31.

http://www.eurekalert.org/pub_releases/2008-11/cmu-cms112508.php

Visual expertise marked by left-side bias

It’s been established that facial recognition involves both holistic processing (seeing the face as a whole rather than the sum of parts) and a left-side bias. The new study explores whether these effects are specific to face processing, by seeing how Chinese characters, which share many of the same features as faces, are processed by native Chinese and non-Chinese readers. It was found that non-readers tended to look at the Chinese characters more holistically, and that native Chinese readers prefer characters that are made of two left sides. These findings suggest that whether or not we use holistic processing depends on the task performed with the object and its features, and that holistic processing is not used in general visual expertise – but left-side bias is.

[1103] Hsiao, J. H., & Cottrell G. W. (2009).  Not all visual expertise is holistic, but it may be leftist: the case of Chinese character recognition. Psychological Science: A Journal of the American Psychological Society / APS. 20(4), 455 - 463.

http://www.physorg.com/news160145799.html

Object recognition fast and early in processing

We see through our eyes but with our brain. Visual information flows from the retina through a hierarchy of visual areas in the brain until it reaches the temporal lobe, which is ultimately responsible for our visual perceptions, and also sends information back along the line, solidifying perception. This much we know, but how much processing goes on at each stage, and how important feedback is compared to ‘feedforward’, is still under exploration. A new study involving children about to undergo surgery for epilepsy (using invasive electrode techniques) reveals that feedback from the ‘smart’ temporal lobe is less important than we thought: the brain can recognize objects under a variety of conditions very rapidly, at a very early processing stage. It appears that certain areas of the visual cortex selectively respond to specific categories of objects.

[1416] Liu, H., Agam Y., Madsen J. R., & Kreiman G. (2009).  Timing, Timing, Timing: Fast Decoding of Object Information from Intracranial Field Potentials in Human Visual Cortex. Neuron. 62(2), 281 - 290.

http://www.sciencedaily.com/releases/2009/04/090429132231.htm
http://www.physorg.com/news160229380.html
http://www.eurekalert.org/pub_releases/2009-04/chb-aga042709.php

New brain region associated with face recognition

Using a new technique, researchers have found evidence for neurons that are selectively tuned for gender, ethnicity and identity cues in the cingulate gyrus, a brain area not previously associated with face processing.

[463] Ng, M., Ciaramitaro V. M., Anstis S., Boynton G. M., & Fine I. (2006).  Selectivity for the configural cues that identify the gender, ethnicity, and identity of faces in human cortex. Proceedings of the National Academy of Sciences. 103(51), 19552 - 19557.

http://www.sciencedaily.com/releases/2006/12/061212091823.htm

No specialized face area

Another study has come out casting doubt on the idea that there is an area of the brain specialized for faces. The fusiform gyrus has been dubbed the "fusiform face area", but a detailed imaging study has revealed that different patches of neurons respond to different images. However, twice as many of the patches are predisposed to faces versus inanimate objects (cars and abstract sculptures), and patches that respond to faces outnumber those that respond to four-legged animals by 50%. But patches that respond to the same images are not physically contiguous, implying a homogeneous "face area" may not even exist.

[444] Grill-Spector, K., Sayres R., & Ress D. (2007).  High-resolution imaging reveals highly selective nonface clusters in the fusiform face area. Nat Neurosci. 10(1), 133 - 133.

http://www.sciencedaily.com/releases/2006/08/060830005949.htm

Face blindness is a common hereditary disorder

A German study has found 17 cases of the supposedly rare disorder prosopagnosia (face blindness) among 689 subjects recruited from local secondary schools and a medical school. All 14 of the subjects who consented to further family testing had at least one first-degree relative who also had it. Because of the compensation strategies that sufferers learn to utilize at an early age, many do not realize that it is an actual disorder, or that other members of their family have it, which may explain why it has been thought to be so rare. The disorder is one of the few cognitive dysfunctions that has only one symptom and is inherited. It is apparently controlled by a defect in a single gene.

[1393] Kennerknecht, I., Grueter T., Welling B., Wentzek S., Horst J., Edwards S., et al. (2006).  First report of prevalence of non-syndromic hereditary prosopagnosia (HPA). American Journal of Medical Genetics. Part A. 140(15), 1617 - 1622.

http://www.sciencedaily.com/releases/2006/07/060707151549.htm

Nothing special about face recognition

A new study adds to a growing body of evidence that there is nothing special about face recognition. The researchers have found experimental support for their model of how a brain circuit for face recognition could work. The model shows how face recognition can occur simply from selective processing of shapes of facial features. Moreover, the model equally well accounted for the recognition of cars.

[373] Jiang, X., Rosen E., Zeffiro T., VanMeter J., Blanz V., & Riesenhuber M. (2006).  Evaluation of a Shape-Based Model of Human Face Discrimination Using fMRI and Behavioral Techniques. Neuron. 50(1), 159 - 172.

http://www.eurekalert.org/pub_releases/2006-04/cp-eht033106.php

Rare learning disability particularly impacts face recognition

A study of 14 children with Nonverbal Learning Disability (NLD) has found that the children were poor at recognizing faces. NLD has been associated with difficulties in visual spatial processing, but this specific deficit with faces hasn’t been identified before. NLD affects less than 1% of the population and appears to be congenital.

[577] Liddell, G. A., & Rasmussen C. (2005).  Memory Profile of Children with Nonverbal Learning Disability. Learning Disabilities Research & Practice. 20(3), 137 - 141.

http://www.eurekalert.org/pub_releases/2005-08/uoa-sra081005.php

Single cell recognition research finds specific neurons for concepts

An intriguing study surprises cognitive researchers by showing that individual neurons in the medial temporal lobe are able to recognize specific people and objects. It’s long been thought that concepts such as these require a network of cells, and this doesn’t deny that many cells are involved. However, this new study points to the importance of a single brain cell. The study of 8 epileptic subjects found that responses varied between subjects, but within subjects, responses to concepts were remarkably specific. For example, a single neuron in the left posterior hippocampus of one subject responded to all pictures of actress Jennifer Aniston, and also to Lisa Kudrow, her co-star on the TV hit "Friends", but not to pictures of Jennifer Aniston together with actor Brad Pitt, and not, or only very weakly, to other famous and non-famous faces, landmarks, animals or objects. In another patient, pictures of actress Halle Berry activated a neuron in the right anterior hippocampus, as did a caricature of the actress, images of her in the lead role of the film "Catwoman," and a letter sequence spelling her name. The results suggest an invariant, sparse and explicit code, which might be important in the transformation of complex visual percepts into long-term and more abstract memories.

[1372] Quiroga, Q. R., Reddy L., Kreiman G., Koch C., & Fried I. (2005).  Invariant visual representation by single neurons in the human brain. Nature. 435(7045), 1102 - 1107.

http://www.eurekalert.org/pub_releases/2005-06/uoc--scr062005.php

Evidence faces are processed like words

It has been suggested that faces and words are recognized differently: that faces are identified as wholes, whereas words and other objects are identified by parts. However, a recent study devised a new test which finds that people use letters to recognize words and facial features to recognize faces.

[790] Martelli, M., Majaj N. J., & Pelli D. G. (2005).  Are faces processed like words? A diagnostic test for recognition by parts. Journal of Vision. 5(1).

You can read this article online at http://www.journalofvision.org//5/1/6/.

http://www.eurekalert.org/pub_releases/2005-03/afri-ssf030705.php

Face blindness runs in families

A study of those with prosopagnosia (face blindness) and their relatives has revealed a genetic basis to the neurological condition. An earlier questionnaire study by the same researcher (himself prosopagnosic) suggests the impairment may be more common than has been thought. The study involved 576 biology students. Nearly 2% reported face-blindness symptoms.

[2545] Grueter, M., Grueter T., Bell V., Horst J., Laskowski W., Sperling K., et al. (2007).  Hereditary Prosopagnosia: the First Case Series. Cortex. 43(6), 734 - 749.

http://www.newscientist.com/article.ns?id=dn7174

Faces must be seen to be recognized

In an interesting new perspective on face recognition, a series of perception experiments have revealed that identifying a face depends on actually seeing it, as opposed to merely having the image of the face fall on the retina. In other words, attention is necessary.

[725] Moradi, F., Koch C., & Shimojo S. (2005).  Face Adaptation Depends on Seeing the Face. Neuron. 45(1), 169 - 175.

http://www.eurekalert.org/pub_releases/2005-01/cp-fmb122904.php

New insight into the relationship between recognizing faces and recognizing expressions

The quest to create a computer that can recognize faces and interpret facial expressions has given new insight into how the human brain does it. A study using faces photographed with four different facial expressions (happy, angry, screaming, and neutral), with different lighting, and with and without different accessories (like sunglasses), tested how long people took to decide if two faces belonged to the same person. Another group were tested to see how fast they could identify the expressions. It was found that people were quicker to recognize faces and facial expressions that involved little muscle movement, and slower to recognize expressions that involved a lot of movement. This supports the idea that recognition of faces and recognition of facial expressions are linked – it appears, through the part of the brain that helps us understand motion.

[1288] Martínez, A. M. (2003).  Matching expression variant faces. Vision Research. 43(9), 1047 - 1060.

http://www.osu.edu/researchnews/archive/compvisn.htm

How the brain is wired for faces

The question of how special face recognition is — whether it is a process quite distinct from recognition of other objects, or whether we are simply highly practiced at this particular type of recognition — has been a subject of debate for some time. A new imaging study has concluded that the fusiform face area (FFA), a brain region crucially involved in face recognition, extracts configural information about faces rather than processing spatial information on the parts of faces. The study also indicated that the FFA is only involved in face recognition.

Yovel, G. & Kanwisher, N. 2004. Face Perception: Domain Specific, Not Process Specific. Neuron, 44 (5), 889–898.

http://www.eurekalert.org/pub_releases/2004-12/cp-htb112304.php

How the brain recognizes a face

Face recognition involves at least three stages. An imaging study has now localized these stages to particular regions of the brain. It was found that the inferior occipital gyrus was particularly sensitive to slight physical changes in faces. The right fusiform gyrus (RFG) appeared to be involved in making a more general appraisal of the face, comparing it to the brain's database of stored memories to see if it is someone familiar. The third activated region, the anterior temporal cortex (ATC), is believed to store facts about people and is thought to be an essential part of the identifying process.

Rotshtein, P., Henson, R.N.A., Treves, A., Driver, J. & Dolan, R.J. 2005. Morphing Marilyn into Maggie dissociates physical and identity face representations in the brain. Nature Neuroscience, 8, 107-113.

http://news.bbc.co.uk/go/pr/fr/-/2/hi/health/4086319.stm

Memories of crime stories influenced by racial stereotypes

The influence of stereotypes on memory, a well-established phenomenon, has been demonstrated anew in a study concerning people's memory of news photographs. In the study, 163 college students (of whom 147 were White) examined one of four types of news stories, all about a hypothetical Black man. Two of the stories were not about crime, the third dealt with non-violent crime, while the fourth focused on violent crime. All four stories included an identical photograph of the same man. Afterwards, participants reconstructed the photograph by selecting from a series of facial features presented on a computer screen. It was found that selected features didn’t differ from the actual photograph in the non-crime conditions, but for the crime stories, more pronounced African-American features tended to be selected, particularly so for the story concerning violent crime. Participants appeared largely unaware of their associations of violent crime with the physical characteristics of African-Americans.

[675] Oliver, M. B., II R. J. L., Moses N. N., & Dangerfield C. L. (2004).  The Face of Crime: Viewers' Memory of Race-Related Facial Features of Individuals Pictured in the News. The Journal of Communication. 54(1), 88 - 104.

http://www.eurekalert.org/pub_releases/2004-05/ps-rmo050504.php

Special training may help people with autism recognize faces

People with autism tend to activate object-related brain regions when they are viewing unfamiliar faces, rather than a specific face-processing region. They also tend to focus on particular features, such as a mustache or a pair of glasses. However, a new study has found that when people with autism look at a picture of a very familiar face, such as their mother's, their brain activity is similar to that of control subjects – involving the fusiform gyrus, a region in the brain's temporal lobe that is associated with face processing, rather than the inferior temporal gyrus, an area associated with objects. Use of the fusiform gyrus in recognizing faces is a process that starts early with non-autistic people, but does take time to develop (usually complete by age 12). The study indicates that the fusiform gyrus in autistic people does have the potential to function normally, but may need special training to operate properly.

Aylward, E. 2004. Functional MRI studies of face processing in adolescents and adults with autism: Role of experience. Paper presented February 14 at the annual meeting of the American Association for the Advancement of Science in Seattle.

Dawson, G. & Webb, S. 2004. Event related potentials reveal early abnormalities in face processing autism. Paper presented February 14 at the annual meeting of the American Association for the Advancement of Science in Seattle.

http://www.eurekalert.org/pub_releases/2004-02/uow-stm020904.php

How faces become familiar

With faces, familiarity makes a huge difference. Even when pictures are high quality and faces are shown at the same time, we make a surprising number of mistakes when trying to decide if two pictures are of the same person – when the face is unknown to us. On the other hand, even when picture quality is very poor, we’re very good at recognising familiar faces. So how do faces become familiar to us? Recent research led by Vicki Bruce (well-known in this field) showed volunteers video sequences of people, episodes of unfamiliar soap operas, and images of familiar but previously unseen characters from radio's The Archers and voices from The Simpsons. They confirmed previous research suggesting that for unfamiliar faces, memory appears dominated by the 'external' features, but where the face is well-known it is 'internal' features such as the eyes, nose and mouth, that are more important. The shift to internal features occurred rapidly, within minutes. Speed of learning was unaffected by whether the faces were experienced as static or moving images, or with or without accompanying voices, but faces which belonged to well-known, though previously unseen, personal identities were learned more easily.

Bruce, V., Burton, M. et al. 2003. Getting To Know You – How We Learn New Faces. A research report funded by the Economic and Social Research Council (ESRC).

http://www.eurekalert.org/pub_releases/2003-06/esr-hs061603.php
http://www.esrc.ac.uk/esrccontent/news/june03-5.asp

Face recognition may not be a special case

Many researchers have argued that the brain processes faces quite separately from other objects: that faces are a special class. Research has shown many ways in which face recognition does seem to be a special case, but it could be argued that the differences are due not to a separate processing system, but to people’s expertise with faces. We have, after all, plenty of evidence that babies are programmed right from the beginning to pay lots of attention to faces. A new study has endeavored to answer this question by looking at separate and concurrent perception of faces and cars, by people who were “car buffs” and those who were not. If expert processing of these objects depends on a common mechanism (presumed to be related to the perception of objects as wholes), then car perception would be expected to interfere with concurrent face perception. Moreover, such interference should get worse as the subjects became more expert at processing cars. This is indeed what was found. Experts were found to recognize cars holistically, but this recognition interfered with their recognition of familiar faces. Novices, by contrast, processed the cars piece by piece, in a slower process that did not interfere with face recognition. This study follows on from earlier research in which car fanciers and bird watchers were found to identify cars and birds, respectively, using the same area of the brain as is used in face recognition. A subsequent study found that people trained to identify novel, computer-generated objects began to recognize them holistically (as is done in face recognition). This latest study shows not only that experts’ car recognition occurs in the same brain region as face recognition, but that the same neural circuits are involved.

[1318] Gauthier, I., Curran T., Curby K. M., & Collins D. (2003).  Perceptual interference supports a non-modular account of face processing. Nat Neurosci. 6(4), 428 - 432.

http://www.eurekalert.org/pub_releases/2003-03/vu-cfe030503.php
http://www.nytimes.com/2003/03/11/health/11PERC.html

Detection of foreign faces faster than faces of your own race

A recent study tracked the time it takes for the brain to perceive the faces of people of other races as opposed to faces from the same race. The faces were mixed with images of everyday objects, and the subjects were given the distracting task of counting butterflies. The study found that the Caucasian subjects took longer to detect Caucasian faces than Asian faces. The study complements an earlier imaging study that showed that, when people are actively trying to recognize faces, they are better at recognizing members of their own race. [see Why recognizing a face is easier when the race matches our own]

[2544] Caldara, R., Thut G., Servoir P., Michel C. M., Bovet P., & Renault B. (2003).  Face versus non-face object perception and the ‘other-race’ effect: a spatio-temporal event-related potential study. Clinical Neurophysiology. 114(3), 515 - 528.

http://news.bmn.com/news/story?day=030108&story=1

Women better at recognizing female but not male faces

Women’s superiority in face recognition tasks appears to be due to their better recognition of female faces. There was no difference between men and women in the recognition of male faces.

[671] Lewin, C., & Herlitz A. (2002).  Sex differences in face recognition--Women's faces make the difference. Brain and Cognition. 50(1), 121 - 128.

Imaging confirms people knowledge processed differently

Earlier research has demonstrated that semantic knowledge for different classes of inanimate objects (e.g., tools, musical instruments, and houses) is processed in different brain regions. A new imaging study looked at knowledge about people, and found a unique pattern of brain activity was associated with person judgments, supporting the idea that person knowledge is functionally dissociable from other classes of semantic knowledge within the brain.

[766] Mitchell, J. P., Heatherton T. F., & Macrae N. C. (2002).  Distinct neural systems subserve person and object knowledge. Proceedings of the National Academy of Sciences of the United States of America. 99(23), 15238 - 15243.

http://www.pnas.org/cgi/content/abstract/99/23/15238?etoc

Identity memory area localized

An imaging study investigating brain activation when people were asked to answer yes or no to statements about themselves (e.g. 'I forget important things', 'I'm a good friend', 'I have a quick temper'), found consistent activation in the anterior medial prefrontal and posterior cingulate. This is consistent with lesion studies, and suggests that these areas of the cortex are involved in self-reflective thought.

[210] Johnson, S. C., Baxter L. C., Wilder L. S., Pipe J. G., Heiserman J. E., & Prigatano G. P. (2002).  Neural correlates of self-reflection. Brain. 125(8), 1808 - 1814.

http://brain.oupjournals.org/cgi/content/abstract/125/8/1808

Recognizing yourself is different from recognizing other people

Recognition of familiar faces occurs largely in the right side of the brain, but new research suggests that identifying your own face occurs more in the left side of your brain. Evidence for this comes from a split-brain patient (a person whose corpus callosum – the main bridge of nerve fibers between the two hemispheres of the brain - has been severed to minimize the spread of epileptic seizure activity). The finding needs to be confirmed in studies of people with intact brains, but it suggests not only that there is a distinction between recognizing your self and recognizing other people you know well, but also that memories and knowledge about oneself may be stored largely in the left hemisphere.

[1075] Turk, D. J., Heatherton T. F., Kelley W. M., Funnell M. G., Gazzaniga M. S., & Macrae N. C. (2002).  Mike or me? Self-recognition in a split-brain patient. Nat Neurosci. 5(9), 841 - 842.

http://www.nature.com/neurolink/v5/n9/abs/nn907.html
http://www.sciencenews.org/20020824/fob8.asp

Differential effects of encoding strategy on brain activity patterns

Encoding and recognition of unfamiliar faces in young adults were examined using PET imaging to determine whether different encoding strategies would lead to differences in brain activity. It was found that encoding activated a primarily ventral system including bilateral temporal and fusiform regions and left prefrontal cortices, whereas recognition activated a primarily dorsal set of regions including right prefrontal and parietal areas. The type of encoding strategy produced different brain activity patterns. There was no effect of encoding strategy on brain activity during recognition. The left inferior prefrontal cortex was engaged during encoding regardless of strategy.

[566] Bernstein, L. J., Beig S., Siegenthaler A. L., & Grady C. L. (2002).  The effect of encoding strategy on the neural correlates of memory for faces. Neuropsychologia. 40(1), 86 - 98.

http://tinyurl.com/i87v

Babies' experience with faces leads to narrowing of perception

A theory that infants' experience in viewing faces causes their brains (in particular an area of the cerebral cortex known as the fusiform gyrus) to "tune in" to the types of faces they see most often and tune out other types, has been given support from a study showing that 6-month-old babies were significantly better than both adults and 9-month-old babies in distinguishing the faces of monkeys. All groups were able to distinguish human faces from one another.

[526] Pascalis, O., de Haan M., & Nelson C. A. (2002).  Is Face Processing Species-Specific During the First Year of Life?. Science. 296(5571), 1321 - 1323.

http://www.eurekalert.org/pub_releases/2002-05/uom-ssi051302.php
http://news.bbc.co.uk/hi/english/health/newsid_1991000/1991705.stm
http://www.eurekalert.org/pub_releases/2002-05/aaft-bbl050902.php

Different brain regions implicated in the representation of the structure and meaning of pictured objects

Imaging studies continue apace! Having established that the part of the brain known as the fusiform gyrus is important in picture naming, a new study further refines our understanding by studying the cerebral blood flow (CBF) changes in response to a picture naming task that varied on two dimensions: familiarity (or difficulty: hard vs easy) and category (tools vs animals). Results show that although familiarity effects are present in the frontal and left lateral posterior temporal cortex, they are absent from the fusiform gyrus. The authors conclude that the fusiform gyrus processes information relating to an object's structure, rather than its meaning. The blood flows suggest that it is the left posterior middle temporal gyrus that is involved in representing the object's meaning.

[691] Whatmough, C., Chertkow H., Murtha S., & Hanratty K. (2002).  Dissociable brain regions process object meaning and object structure during picture naming. Neuropsychologia. 40(2), 174 - 186.

Debate over how the brain deals with visual information

Neuroscientists can't agree on whether the brain uses specific regions to distinguish specific objects, or patterns of activity from different regions. The debate over how the brain deals with visual information has been re-ignited with apparently contradictory findings from two research groups. One group has pinpointed a distinct region in the brain that responds selectively to images of the human body, while another concludes that the representations of a wide range of image categories are dealt with by overlapping brain regions. (see below)

Specific brain region responds specifically to images of the human body

Cognitive neuroscientists have identified a new area of the human brain that responds specifically when people view images of the human body. They have named this region of the brain the 'extrastriate body area' or 'EBA'. The EBA can be distinguished from other known anatomical subdivisions of the visual cortex. However, the EBA is in a region of the brain called the posterior superior temporal sulcus, where other areas have been implicated in the perception of socially relevant information such as the direction that another person's eyes are gazing, the sound of human voices, or the inferred intentions of animate entities.

Brain scan patterns identify objects being viewed

National Institute of Mental Health (NIMH) scientists have shown that they can tell what kind of object a person is looking at (a face, a house, a shoe, a chair) by the pattern of brain activity it evokes. Earlier NIMH fMRI studies had shown that brain areas that respond maximally to a particular category of object are consistent across different people. This new study finds that the full pattern of responses, not just the areas of maximal activation, is consistent within the same person for a given category of object. Overall, the pattern of fMRI responses predicted the category with 96% accuracy. Accuracy was 100% for faces, houses and scrambled pictures.

[683] Downing, P. E., Jiang Y., Shuman M., & Kanwisher N. (2001).  A Cortical Area Selective for Visual Processing of the Human Body. Science. 293(5539), 2470 - 2473.

[1239] Haxby, J. V., Gobbini I. M., Furey M. L., Ishai A., Schouten J. L., & Pietrini P. (2001).  Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex. Science. 293(5539), 2425 - 2430.

http://www.eurekalert.org/pub_releases/2001-09/niom-bsp092601.php
http://www.sciencemag.org/cgi/content/abstract/293/5539/2425
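Pattern-based prediction of this kind rests on a simple idea: correlate the activity pattern evoked now with a stored reference pattern for each category, and pick the best match. Here is a minimal sketch of that split-half correlation logic, using synthetic data rather than anything from the study:

    import numpy as np

    rng = np.random.default_rng(0)
    categories = ["face", "house", "shoe", "chair"]
    n_voxels = 100

    # Synthetic "voxel patterns": each category has a stable template,
    # measured twice with independent noise (standing in for the two
    # halves of an fMRI dataset).
    templates = {c: rng.normal(size=n_voxels) for c in categories}
    half1 = {c: t + 0.5 * rng.normal(size=n_voxels) for c, t in templates.items()}
    half2 = {c: t + 0.5 * rng.normal(size=n_voxels) for c, t in templates.items()}

    def classify(pattern, references):
        # Assign the pattern to the category whose reference pattern
        # it correlates with most strongly.
        corrs = {c: np.corrcoef(pattern, ref)[0, 1] for c, ref in references.items()}
        return max(corrs, key=corrs.get)

    correct = sum(classify(half2[c], half1) == c for c in categories)
    print(f"{correct}/{len(categories)} categories identified correctly")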

Why recognizing a face is easier when the race matches our own

We have known for a while that recognizing a face is easier when its owner's race matches our own. An imaging study now shows that greater activity in the brain's expert face-discrimination area occurs when the subject is viewing faces that belong to members of the same race as their own.

Golby, A. J., Gabrieli, J. D. E., Chiao, J. Y. & Eberhardt, J. L. 2001. Differential responses in the fusiform region to same-race and other-race faces. Nature Neuroscience, 4, 845-850.

http://www.nature.com/nsu/010802/010802-1.html

Boys' and girls' brains process faces differently

Previous research has suggested a right-hemisphere superiority in face processing, as well as adult male superiority at spatial and non-verbal skills (also associated with the right hemisphere of the brain). This study looked at face recognition and the ability to read facial expressions in young, pre-pubertal boys and girls. Boys and girls were equally good at recognizing faces and identifying expressions, but boys showed significantly greater activity in the right hemisphere, while the girls' brains were more active in the left hemisphere. It is speculated that boys tend to process faces at a global level (right hemisphere), while girls process faces at a more local level (left hemisphere). This may mean that females have an advantage in reading fine details of expression. More importantly, it may be that different treatments might be appropriate for males and females in the case of brain injury.

[2541] Everhart, E. D., Shucard J. L., Quatrin T., & Shucard D. W. (2001).  Sex-related differences in event-related potentials, face recognition, and facial affect processing in prepubertal children. Neuropsychology. 15(3), 329 - 341.

http://www.eurekalert.org/pub_releases/2001-07/aaft-pba062801.php
http://news.bbc.co.uk/hi/english/health/newsid_1425000/1425797.stm

Children's recognition of faces

Children aged 4 to 7 were found to be able to use both configural and featural information to recognize faces. However, even when trained to proficiency on recognizing the target faces, their recognition was impaired when a superfluous hat was added to the face.

[1424] Freire, A., & Lee K. (2001).  Face Recognition in 4- to 7-Year-Olds: Processing of Configural, Featural, and Paraphernalia Information. Journal of Experimental Child Psychology. 80(4), 347 - 371.

Differences in face perception processing between autistic and normal adults

An imaging study compared activation patterns of adults with autism and normal control subjects during a face perception task. While autistic subjects could perform the face perception task, none of the regions supporting face processing in normals were found to be significantly active in the autistic subjects. Instead, in every autistic patient, faces maximally activated aberrant and individual-specific neural sites (e.g. frontal cortex, primary visual cortex, etc.), in contrast to the 100% consistency of maximal activation within the traditional fusiform face area (FFA) for every normal subject. It appears that, compared with normal individuals, autistic individuals 'see' faces utilizing different neural systems, with each patient doing so via a unique neural circuitry.

[704] Pierce, K., Muller R. - A., Ambrose J., Allen G., & Courchesne E. (2001).  Face processing occurs outside the fusiform 'face area' in autism: evidence from functional MRI. Brain. 124(10), 2059 - 2073.

http://brain.oupjournals.org/cgi/content/abstract/124/10/2059

Visual impairment

A new study has found that errors in perceptual decisions occurred only when there was confused sensory input, not because of any ‘noise’ or randomness in the cognitive processing. The finding, if replicated across broader contexts, will change some of our fundamental assumptions about how the brain works.

The study unusually involved both humans and rats — four young adults and 19 rats — who listened to streams of randomly timed clicks coming into both the left ear and the right ear. After listening to a stream, the subjects had to choose the side from which more clicks originated.

The errors made, by both humans and rats, occurred invariably when two clicks overlapped. In other words, and against previous assumptions, the errors did not arise from any ‘noise’ in the brain’s processing, but only when noise occurred in the sensory input.
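The study itself fitted a detailed accumulator model to tease these sources apart; the toy simulation below is my own construction, not theirs, and simply illustrates the logic of the conclusion. Give an otherwise perfect counter a sensory stage that fuses near-simultaneous clicks, and every error it makes traces back to overlapping input rather than to noisy internal processing:

    import numpy as np

    rng = np.random.default_rng(1)

    def trial(rate_l=20, rate_r=30, dur=0.5, fusion=0.002):
        # Randomly timed clicks on each side (Poisson process).
        t_l = rng.uniform(0, dur, rng.poisson(rate_l * dur))
        t_r = rng.uniform(0, dur, rng.poisson(rate_r * dur))
        # Sensory limitation: a left and a right click closer together
        # than the fusion window register as one click, heard on a
        # random side. This is the only source of noise in the model.
        fused = sum(bool(np.any(np.abs(t_r - t) < fusion)) for t in t_l)
        heard_on_left = rng.binomial(fused, 0.5)
        heard_l = len(t_l) - fused + heard_on_left
        heard_r = len(t_r) - heard_on_left
        # The accumulator itself is noiseless: it counts perfectly,
        # but only what the sensory stage passed on.
        return (heard_r > heard_l) == (len(t_r) > len(t_l)), fused

    results = [trial() for _ in range(20000)]
    err_overlap = sum(1 for ok, f in results if not ok and f > 0)
    err_clean = sum(1 for ok, f in results if not ok and f == 0)
    print(f"errors on trials with overlap: {err_overlap}; without: {err_clean}")

Run it and every error falls in the overlap bin: the decision stage never errs on a cleanly transmitted stimulus.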

The researchers supposedly ruled out alternative sources of confusion, such as “noise associated with holding the stimulus in mind, or memory noise, and noise associated with a bias toward one alternative or the other.”

However, before concluding that the noise which is the major source of variability and errors in more conceptual decision-making likewise stems only from noise in the incoming input (in this case external information), I would like to see the research replicated in a broader range of scenarios. Nevertheless, it’s an intriguing finding, and if indeed, as the researchers say, “the internal mental process was perfectly noiseless. All of the imperfections came from noise in the sensory processes”, then the ramifications are quite extensive.

The findings do add weight to recent evidence that a significant cause of age-related cognitive decline is sensory loss.

http://www.futurity.org/science-technology/dont-blame-your-brain-for-that-bad-decision/

[3376] Brunton, B. W., Botvinick M. M., & Brody C. D. (2013).  Rats and Humans Can Optimally Accumulate Evidence for Decision-Making. Science. 340(6128), 95 - 98.

A large, long-running study has found cognitive decline and brain lesions are linked to mild retinal damage in older women.

Damage to the retina (retinopathy) doesn’t produce noticeable symptoms in the early stages, but a new study indicates it may be a symptom of more widespread damage. In the ten-year study, involving 511 older women (average age 69), 7.6% (39) were found to have retinopathy. These women tended to have lower cognitive performance, and brain scans revealed that they had more areas of small vascular damage within the brain — 47% more overall, and 68% more in the parietal lobe specifically. They also had more white matter damage. They did not have any more brain atrophy.

These correlations remained after high blood pressure and diabetes (the two major risk factors for retinopathy) were taken into account. It’s estimated that 40-45% of those with diabetes have retinopathy.

Those with retinopathy performed similarly to those without on a visual acuity test. However, screening for retinopathy is simple, and should be carried out routinely by an optometrist for older adults, and for those with diabetes or hypertension.

The findings suggest that eye screening could identify developing vascular damage in the brain, enabling lifestyle or drug interventions to begin earlier, when they could do most good. The findings also add to the reasons why you shouldn’t ignore pre-hypertensive and pre-diabetic conditions.

Two large studies find that common health complaints and an irregular heartbeat are each associated with an increased risk of developing Alzheimer’s, while a rat study adds to evidence that stress is also a risk factor.

A ten-year study involving 7,239 older adults (65+) has found that each common health complaint increased dementia risk by an average of about 3%, and that these individual risks compounded. Thus, while a healthy older adult had about an 18% chance of developing dementia after 10 years, those with a dozen of these health complaints had, on average, closer to a 40% chance.
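As a rough check on what ‘compounded’ means here (my arithmetic, not the study's model): multiplying twelve ~3% increases together scales risk by about 1.4, while the reported jump from 18% to about 40% corresponds to a factor of about 2.2, so the reported figures reflect more than a simple multiplication of the average per-complaint risk:

# Naive multiplicative compounding of per-complaint risk (illustration only;
# the study's statistical model is not given in this report).
baseline = 0.18               # 10-year dementia risk with no complaints
per_item = 1.03               # ~3% increased risk per complaint
print(round(baseline * per_item ** 12, 2))   # ~0.26 under naive compounding
print(round(0.40 / baseline, 2))             # ~2.2x: the reported factor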

It’s important to note that these complaints were not for serious disorders that have been implicated in Alzheimer’s. The researchers constructed a ‘frailty’ index, involving 19 different health and wellbeing factors: overall health, eyesight, hearing, denture fit, arthritis/rheumatism, eye trouble, ear trouble, stomach trouble, kidney trouble, bladder control, bowel control, feet/ankle trouble, stuffy nose/sneezing, bone fractures, chest problems, cough, skin problems, dental problems, other problems.

Not all complaints are created equal. The most common complaint — arthritis/rheumatism — was only slightly more common among those with dementia. Two of the largest differences were poor eyesight (3% of the non-demented group vs 9% of those with dementia) and poor hearing (3% and 6%).

At the end of the study, 4,324 (60%) were still alive, and of these, 416 (9.6%) had Alzheimer's disease, 191 (4.4%) had another sort of dementia and 677 (15.7%) had other cognitive problems (but note that 1,023 were of uncertain cognitive ability).

While these results need to be confirmed in other research — the study used data from broader health surveys that weren’t specifically designed for this purpose, and many of those who died during the study probably had dementia — they do suggest the importance of maintaining good general health.

Common irregular heartbeat raises risk of dementia

In another study, which ran from 1994 to 2008 and followed 3,045 older adults (mean age 74 at study start), those with atrial fibrillation were found to have a significantly greater risk of developing Alzheimer’s.

At the beginning of the study, 4.3% of the participants had atrial fibrillation (the most common kind of chronically irregular heartbeat); a further 12.2% developed it during the study. Participants were followed for an average of seven years. Over this time, those with atrial fibrillation had a 40-50% higher risk of developing dementia of any type, including probable Alzheimer's disease. Overall, 18.8% of the participants developed some type of dementia during the course of the study.

While atrial fibrillation is associated with other cardiovascular risk factors and disease, this study indicates that atrial fibrillation increases dementia risk over and above that association. Possible mechanisms for this increased risk include:

  • weakening the heart's pumping ability, leading to less oxygen going to the brain;
  • increasing the chance of tiny blood clots going to the brain, causing small, clinically undetected strokes;
  • a combination of these plus other factors that contribute to dementia such as inflammation.

The next step is to see whether any treatments for atrial fibrillation reduce the risk of developing dementia.

Stress may increase risk for Alzheimer's disease

And a rat study has shown that increased release of stress hormones leads to cognitive impairment and that characteristic of Alzheimer’s disease, tau tangles. The rats were subjected to stress for an hour every day for a month, by such means as overcrowding or being placed on a vibrating platform. These rats developed increased hyperphosphorylation of tau protein in the hippocampus and prefrontal cortex, and these changes were associated with memory deficits and impaired behavioral flexibility.

Previous research has shown that stress leads to that other characteristic of Alzheimer’s disease: the formation of beta-amyloid.

A month-long training program has enabled volunteers to instantly recognize very faint patterns.

In a study in which 14 volunteers were trained to recognize a faint pattern of bars on a computer screen whose contrast was progressively reduced, the volunteers became able to recognize fainter and fainter patterns over some 24 days of training, and this improvement correlated with stronger EEG signals from their brains as soon as the pattern flashed on the screen. The findings indicate that learning modified the very earliest stage of visual processing.
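A pattern whose contrast is continuously reduced as performance improves is characteristic of an adaptive staircase procedure. Here is a sketch of one common variant, a 2-down/1-up rule (my assumption, not the paper's stated method):

import random

def staircase(respond, start=0.5, step=0.05, n_trials=200):
    # 2-down/1-up rule: contrast drops after two consecutive correct
    # responses and rises after any error, so the procedure converges
    # near the ~71%-correct threshold.
    contrast, streak, history = start, 0, []
    for _ in range(n_trials):
        correct = respond(contrast)
        history.append((contrast, correct))
        if correct:
            streak += 1
            if streak == 2:
                contrast = max(0.01, contrast - step)   # harder
                streak = 0
        else:
            contrast = min(1.0, contrast + step)        # easier
            streak = 0
    return history

# Demo observer whose accuracy rises with contrast (50% = pure guessing):
history = staircase(lambda c: random.random() < min(1.0, 0.5 + 2.0 * c))
print("final contrast:", round(history[-1][0], 3))

Tracking the contrast level across days of such training is what lets researchers say the volunteers could recognize fainter and fainter patterns.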

The findings could help shape training programs for people who must learn to detect subtle patterns quickly, such as doctors reading X-rays or air traffic controllers monitoring radars, and may also help improve training for adults with visual deficits such as lazy eye.

The findings are also noteworthy for showing that learning is not confined to ‘higher-order’ processes, but can occur at even the most basic, unconscious and automatic, level of processing.

Two recent studies point to how those lacking one sense might acquire enhanced other senses, and what limits this ability.

An experiment with congenitally deaf cats has revealed how deaf or blind people might acquire other enhanced senses. The deaf cats showed only two specific enhanced visual abilities: visual localization in the peripheral field and visual motion detection. This was associated with the parts of the auditory cortex that would normally be used to pick up peripheral and moving sound (posterior auditory cortex for localization; dorsal auditory cortex for motion detection) being switched to processing this information for vision.

This suggests that only those abilities that have a counterpart in the unused part of the brain (auditory cortex for the deaf; visual cortex for the blind) can be enhanced. The findings also point to the plasticity of the brain. (As a side-note, did you know that cats are apparently the only animals besides humans that can be born deaf?)

The findings (and their broader implications) receive support from an imaging study involving 12 blind and 12 sighted people, who carried out an auditory localization task and a tactile localization task (reporting which finger was being gently stimulated). While the visual cortex was mostly inactive when the sighted people performed these tasks, parts of the visual cortex were strongly activated in the blind. Moreover, the accuracy of the blind participants directly correlated to the strength of the activation in the spatial-processing region of the visual cortex (right middle occipital gyrus). This region was also activated in the sighted for spatial visual tasks.

Researchers trained blindfolded people to recognize shapes through coded sounds, demonstrating the abstract nature of perception.

We can see shapes and we can feel them, but we can’t hear a shape. However, in a dramatic demonstration of just how flexible our brain is, researchers have devised a way of coding spatial relations in terms of sound properties such as frequency, and trained blindfolded people to recognize shapes by their sounds. They could then match what they heard to shapes they felt. Furthermore, they were able to generalize from their training to novel shapes.

The findings not only offer new possibilities for helping blind people, but also emphasize that sensory representations simply require systematic coding of some kind. This provides more evidence for the hypothesis that our perception of a coherent object ultimately occurs at an abstract level beyond the sensory input modes in which it is presented.

[1921] Kim, J. - K., & Zatorre R. J. (2010).  Can you hear shapes you touch?. Experimental Brain Research. 202(4), 747 - 754.
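The report doesn't spell out the study's exact code, but one well-known mapping of this kind (a vOICe-style scheme; an assumption, not necessarily the scheme used here) scans an image left to right, turning each column into a chord in which higher rows sound as higher pitches:

import math, struct, wave

def sonify(grid, path="shape.wav", rate=44100, col_dur=0.25):
    # Scan columns left to right; each lit cell contributes a sine tone
    # whose frequency rises with its height in the column.
    rows = len(grid)
    samples = []
    for col in zip(*grid):
        freqs = [220 * 2 ** (2 * r / rows)       # bottom row = lowest pitch
                 for r, on in enumerate(reversed(col)) if on]
        for i in range(int(rate * col_dur)):
            t = i / rate
            v = sum(math.sin(2 * math.pi * f * t) for f in freqs)
            samples.append(v / max(len(freqs), 1))
    with wave.open(path, "w") as w:
        w.setnchannels(1); w.setsampwidth(2); w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", int(s * 0.8 * 32767))
                               for s in samples))

# An 'L' shape: a full-height chord on the first column, then a low tone.
sonify([[1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 1, 1, 1]])

With a little training on such a code, a listener can in principle identify a shape from its sound alone, which is the kind of cross-modal matching the study tested.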

Data from over 600 older adults has revealed that untreated poor vision substantially increases the risk of developing dementia, pointing to the need for older adults to seek treatment for their eye problems.

Data from 625 elderly Americans, followed for an average of 8.5 years, has revealed that those with very good or excellent vision at the beginning of the study had a 63% reduced risk of dementia over the study period. Those with poorer vision who did not visit an ophthalmologist had a 9.5-fold increased risk of Alzheimer disease and a 5-fold increased risk of cognitive impairment without dementia. Among the very old (90+), 78% of those who maintained normal cognition had received at least one previous eye procedure, compared with 51.7% of those with Alzheimer disease. The study raises the possibility that poor vision is not simply a symptom of developing dementia, but a contributing factor — possibly through curtailing the activities that would help prevent it.

[325] Rogers, M. A. M., & Langa K. M. (2010).  Untreated Poor Vision: A Contributing Factor to Late-Life Dementia. Am. J. Epidemiol.. 171(6), 728 - 735.

An intriguing set of experiments has shown how you can improve vision by manipulating mindset.

The experiments, which manipulated participants' beliefs and expectations, found significantly improved vision when:

  • an eye chart was arranged in reverse order (the letters getting progressively larger rather than smaller);
  • participants were given eye exercises and told their eyes would improve with practice;
  • participants were told that athletes have better vision, and then performed jumping jacks rather than skipping (seen as less athletic);
  • participants ‘flew’ a flight simulator, compared with pretending to fly a simulator described as broken (pilots are believed to have good vision).

[158] Langer, E., Djikic M., Pirson M., Madenci A., & Donohue R. (2010).  Believing Is Seeing. Psychological Science. 21(5), 661 - 666.

Older news items (pre-2010) brought over from the old website

Age-related eye disease associated with cognitive impairment

Age-related macular degeneration (AMD) is the leading cause of visual impairment in industrialized nations, and, like Alzheimer's disease, involves the buildup of beta-amyloid peptides in the brain, as well as sharing similar vascular risk factors. A study of over 2,000 older adults (aged 69-97) has revealed an association between early-stage AMD and cognitive impairment, as assessed by the Digit Symbol Substitution Test (a test of attention and processing speed). There was no association with performance on the Modified Mini-Mental State Examination (used to assess dementia).

It’s worth noting that two studies into the association between dietary fat intake and AMD appeared in the same journal. The first, a four-year study involving over 6,700 older adults, found that higher trans-unsaturated fat intake was associated with a higher incidence of AMD, while higher omega-3 fatty acid and higher olive oil intakes were each associated with a lower incidence. The second, a ten-year study involving nearly 2,500 older adults, found that regular consumption of fish, greater intake of omega-3 fatty acids, and low intake of linoleic acid (an omega-6 fatty acid; perhaps a higher intake implies a lower intake of omega-3 oils) were all associated with a lower incidence of AMD. Fish and omega-3 oils have, of course, been similarly associated with lower rates of dementia and age-related cognitive impairment.

[447] Baker, M. L., Wang J. J., Rogers S., Klein R., Kuller L. H., Larsen E. K., et al. (2009).  Early age-related macular degeneration, cognitive function, and dementia: the Cardiovascular Health Study. Archives of Ophthalmology. 127(5), 667 - 673.

[754] Chong, E. W. - T., Robman L. D., Simpson J. A., Hodge A. M., Aung K. Z., Dolphin T. K., et al. (2009).  Fat consumption and its association with age-related macular degeneration. Archives of Ophthalmology. 127(5), 674 - 680.

[413] Tan, J. S. L., Wang J. J., Flood V., & Mitchell P. (2009).  Dietary fatty acids and the 10-year incidence of age-related macular degeneration: the Blue Mountains Eye Study. Archives of Ophthalmology. 127(5), 656 - 665.

http://www.eurekalert.org/pub_releases/2009-05/jaaj-aed050709.php

Age-related vision problems may be associated with cognitive impairment

Age-related macular degeneration (AMD) develops when the macula, the portion of the eye that allows people to see in detail, deteriorates. The Age-Related Eye Disease Study (AREDS) Research Group has investigated the relationship between vision problems and cognitive impairment in 2,946 patients, tested every year for four years. Those with more severe AMD had poorer average scores on cognitive tests, an association that remained even after the researchers accounted for other factors, including age, sex, race, education, smoking, diabetes, use of cholesterol-lowering medications and high blood pressure. Average scores also decreased as vision decreased. It’s possible that there is a biological reason for the association; it is also possible that visual impairment reduces a person’s capacity to develop and maintain relationships and to participate in stimulating activities.

Age-Related Eye Disease Study Research Group. 2006. Cognitive Impairment in the Age-Related Eye Disease Study: AREDS Report No. 16. Archives of Ophthalmology, 124(4), 537-543.

http://www.eurekalert.org/pub_releases/2006-04/jaaj-avp040606.php

The reorganization of the visual cortex in congenitally blind people

Studies indicate that congenitally blind people have better verbal memory than the sighted. A new study helps us understand why. Some 25% of the human brain is devoted to vision, and until now it was assumed that loss of vision rendered these regions useless. It now appears that in those blind from birth, the part of the occipital cortex usually involved in vision is put to other uses. Extensive regions of the occipital cortex, in particular the primary visual cortex, are activated not only during Braille reading, but also during verbal memory tasks, such as recalling a list of abstract words. No such activation was found in a sighted control group. It also appears that the greater the occipital activation, the higher the scores on the verbal memory tests.

[944] Amedi, A., Raz N., Pianka P., Malach R., & Zohary E. (2003).  Early ‘visual’ cortex activation correlates with superior verbal memory performance in the blind. Nat Neurosci. 6(7), 758 - 766.

http://www.eurekalert.org/pub_releases/2003-06/huoj-hur061703.php
