Visual memory

Visual perception - a round-up of recent news

July, 2011

Memory begins with perception. Here's a round-up of recent research into visual perception.

We can’t remember what we don’t perceive, and our memory of things is shaped by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated ones. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn while volunteers viewed brief flashes of object pairs (half of which were shown interacting) and decided whether the glimpsed objects matched a presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while deactivating the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while viewing color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains, and offices), brain activity was nearly identical regardless of whether they were looking at a color photo or a simple line drawing. Just from the pattern of activity in the ventral visual cortex, researchers could tell, with a fair amount of success, which category of scene a participant was viewing. When the decoding made mistakes, the mistakes were similar for photos and drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.
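This sort of decoding is standard multivoxel pattern analysis. The sketch below is a minimal illustration of the idea, not the authors’ actual pipeline: it assumes one voxel-activation vector per trial (stand-in random data here) and treats above-chance cross-validated accuracy of a linear classifier as evidence that the region carries scene-category information.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in data: one vector of voxel activations per trial, labelled with
# the scene category viewed (six categories, as in the study).
rng = np.random.default_rng(0)
n_trials, n_voxels, n_categories = 120, 500, 6
X = rng.normal(size=(n_trials, n_voxels))         # voxel patterns per trial
y = rng.integers(0, n_categories, size=n_trials)  # scene-category labels

# A linear decoder: if category is predictable from the voxel pattern at
# above-chance accuracy, the region carries scene-category information.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_categories:.2f})")
```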

In order to determine which features those were, the researchers progressively removed lines from the drawings. Even with up to 75% of the pixels removed, participants could still identify the scene 60% of the time, as long as the important lines — those showing the broad contours of the scene — were left in. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance for this decoding.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes and suffer failures, but these stem not from incompetence but from the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or harder to detect. The variation in contrast was designed as a model of an important variable in visual search: the reliability of the sensory information. An optimal observer would take the varying reliability of the items into account, weighting each piece of information accordingly, and then combine the weighted information according to a specific integration rule. This had been calculated to be the optimal process, and the participants’ performance matched that prediction.

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we can very quickly integrate information coming from multiple locations while taking into account the reliability of each piece of information, and that we do so by combining the outputs of different groups of neurons, each responding to different bits of information.
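To make this concrete, here is a minimal ideal-observer sketch for the “is a target present?” decision, assuming Gaussian measurement noise whose variance varies per item. The model in Ma et al. is more elaborate, but the reliability weighting (the 1/sigma² factor) and the integration rule (averaging local likelihood ratios) have this general shape.

```python
import numpy as np

def target_present(x, sigma, mu_target, mu_distractor):
    """Decide whether one of N items is a target, weighting by reliability.

    x     : noisy orientation measurement for each item
    sigma : per-item noise s.d. (lower contrast -> larger sigma)
    Assumes a 0.5 prior on target presence and a uniform prior on location.
    """
    # Local log-likelihood ratio (target vs distractor) for each item.
    # Dividing by sigma**2 is the reliability weighting: noisy items count less.
    d = (mu_target - mu_distractor) * (x - (mu_target + mu_distractor) / 2) / sigma**2
    # Integration rule: average the local likelihood ratios across items.
    return np.exp(d).mean() > 1.0

# Example: one 10-degree target among vertical distractors, mixed contrasts.
rng = np.random.default_rng(1)
true_oris = np.array([0.0, 0.0, 10.0, 0.0])   # item 3 is the target
sigma = rng.uniform(2.0, 8.0, size=4)         # randomly varied reliability
x = rng.normal(true_oris, sigma)              # noisy measurements
print(target_present(x, sigma, mu_target=10.0, mu_distractor=0.0))
```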

Another recent study into visual search has found that when people prepare to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex is very similar to that shown when they are actually looking at those objects in the scenes. Moreover, preparatory activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 in more specific, imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this region may be the source of the top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our attention. These are potential threats, so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards: prospective mates, food, drink.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. Finding a red or green circle was always followed by a monetary reward (10 cents for one color, 1 cent for the other). Afterwards, participants were asked to search for particular shapes, with color no longer relevant or rewarded. However, when one of the shapes was occasionally red or green, reaction times slowed, showing that these colors were distracting (even though participants had been told to ignore color if it appeared).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli previously associated with reward continue to capture attention regardless of their relevance to the task at hand. There are implications here for the development of more effective treatments for drug addiction, obesity, and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features of each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict the memorability of images the model hasn't "seen" before.
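A hedged sketch of that kind of pipeline (the features, scores, and choice of support vector regression here are stand-ins, not necessarily the authors’ model): extract a feature vector per image, fit a regressor to the human memorability scores, then score images the model hasn’t seen.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Stand-in data: one feature vector per image (e.g. color histograms,
# edge-distribution statistics) plus a human memorability score in [0, 1].
rng = np.random.default_rng(2)
n_images, n_features = 2000, 64
features = rng.normal(size=(n_images, n_features))
memorability = rng.uniform(0.2, 0.9, size=n_images)

X_train, X_test, y_train, y_test = train_test_split(
    features, memorability, test_size=0.25, random_state=0)

# Fit a regressor on images whose memorability is known...
model = SVR(kernel="rbf").fit(X_train, y_train)
# ...then predict memorability for images the model hasn't "seen" before.
predicted = model.predict(X_test)
```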

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

Reference: 

Kim, J. G., Biederman, I., & Juan, C-H. (2011). The Benefit of Object Interactions Arises in the Lateral Occipital Cortex Independent of Attentional Modulation from the Intraparietal Sulcus: A Transcranial Magnetic Stimulation Study. The Journal of Neuroscience, 31(22), 8320-8324.

Walther, D. B., Chai, B., Caddigan, E., Beck, D. M., & Fei-Fei, L. (2011). Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences, 108(23), 9661-9666.

Ma, W. J., Navalpakkam, V., Beck, J. M., van den Berg, R., & Pouget, A. (2011). Behavior and neural basis of near-optimal visual search. Nature Neuroscience, 14(6), 783-790.

Peelen, M. V., & Kastner, S. (2011). A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences, 108(29), 12125-12130.

Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367-10371.

Isola, P., Xiao, J., Oliva, A., & Torralba, A. (2011). What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.


Gestures provide a helping hand in problem solving

March, 2011

Another study confirms the value of gestures in helping you solve spatial problems, and suggests that gesturing can help you develop better mental visualization.

In the first of three experiments, 132 students were found to gesture more often when they had difficulties solving mental rotation problems. In the second experiment, 22 students were encouraged to gesture, while 22 were given no such encouragement, and a further 22 were told to sit on their hands to prevent gesturing. Those encouraged to gesture solved more mental rotation problems.

Interestingly, the amount of gesturing decreased as participants gained experience with these spatial problems, and when the gesture group was given new spatial visualization problems in which gesturing was prohibited, their performance was still better than that of the other participants. This suggests that the spatial computation supported by gestures becomes internalized. The third experiment extended these benefits of gesturing to a wider range of spatial visualization problems.

The researchers suggest that hand gestures may improve spatial visualization by helping a person keep track of an object in the mind as it is rotated to a new position, and by providing additional feedback and visual cues by simulating how an object would move if the hand were holding it.

Reference: 

Chu, M., & Kita, S. (2011). The nature of gestures' beneficial role in spatial problem solving. Journal of Experimental Psychology: General, 140(1), 102-116.

Full text of the article is available at http://www.apa.org/pubs/journals/releases/xge-140-1-102.pdf


Role of expectation on memory consolidation during sleep

March, 2011

A new study suggests that sleep’s benefits for memory consolidation depend on your expecting to need the information later.

Two experiments involving a total of 191 volunteers have investigated the parameters of sleep’s effect on learning. In the first experiment, people learned 40 pairs of words; in the second, subjects played a card game matching pictures of animals and objects, and also practiced sequences of finger taps. In both experiments, half the volunteers were told immediately after the tasks that they would be tested in 10 hours. Some of the participants slept during this interval.

As expected, those who slept performed better on all of the tests (word recall, visuospatial, and procedural motor memory), but the really interesting finding is that improved recall occurred only in those who both slept and knew a test was coming. These participants showed greater brain activity during deep ("slow-wave") sleep, and for them alone, the greater the activity during slow-wave sleep, the better their recall.

Those who didn’t sleep, however, were unaffected by whether they knew there would be a test or not.

Of course, this doesn’t mean you never remember things you don’t intend or want to remember! There is more than one process going on in the encoding and storing of our memories. But it does confirm the importance of intention, and perhaps casts light on some of your learning failures.

Reference: 

Wilhelm, I., Diekelmann, S., Molzow, I., Ayoub, A., Mölle, M., & Born, J. (2011). Sleep Selectively Enhances Memory Expected to Be of Future Relevance. The Journal of Neuroscience, 31(5), 1563-1569.


Children with autism lack visual skills required for independence

February, 2011

Autism is popularly associated with intense awareness of systematic regularities, but a new study shows that the skill autistic children display in computer tasks does not carry over to real-world search.

Contrary to previous laboratory studies showing that children with autism often demonstrate outstanding visual search skills, new research indicates that in real-life situations, children with autism are unable to search effectively for objects. The study, involving 20 autistic children and 20 normally-developing children (aged 8-14), used a novel test room, with buttons on the floor that the children had to press to find a hidden target among multiple illuminated locations. Critically, 80% of these targets appeared on one side of the room.

Although autistics are generally believed to be more systematic, with greater sensitivity to regularities within a system, such behavior was not observed. Compared to other children, those with autism were slower to pick up on the regularities that would help them choose where to search. The slowness was not due to a lack of interest — all the children seemed to enjoy the game, and were keen to find the hidden targets.

The findings suggest that those with ASD have difficulties in applying the rules of probability to larger environments, particularly when they themselves are part of that environment.
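To see what “applying the rules of probability” means here, consider how an ideal forager would update its belief about which side of the room hides the target after each trial. The beta-Bernoulli sketch below is purely illustrative; it is not the analysis used in the study.

```python
# Beta-Bernoulli updating of the belief that the target is on the left side.
alpha, beta = 1.0, 1.0             # uniform prior over the side bias
trials = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = target appeared on the left

for on_left in trials:
    alpha += on_left
    beta += 1 - on_left
    print(f"estimated P(target on left) = {alpha / (alpha + beta):.2f}")

# With enough trials the estimate approaches the true 80% bias, and an
# optimal forager would begin each search on the richer side.
```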

Reference: 

Pellicano, E., Smith, A. D., Cristino, F., Hood, B. M., Briscoe, J., & Gilchrist, I. D. (2011). Children with autism are neither systematic nor optimal foragers. Proceedings of the National Academy of Sciences, 108(1), 421-426.


Cognitive recovery after brain damage more complex than realized

January, 2011

Two new studies show that recovery after brain damage is not as simple as one region ‘taking over’ for another, and that some deficits are more easily compensated for than others.

When stroke or brain injury damages a part of the brain controlling movement or sensation or language, other parts of the brain can learn to compensate for this damage. It’s been thought that this is a case of one region taking over the lost function. Two new studies show us the story is not so simple, and help us understand the limits of this plasticity.

In the first study, six stroke patients who had lost partial function in their prefrontal cortex, and six controls, were briefly shown a series of pictures to test visual working memory (the ability to hold images in mind for a brief time) while electrodes recorded their EEGs. When images were presented to the side of the visual field processed by the damaged hemisphere, the intact prefrontal cortex (that is, the one not directly receiving that visual input) responded within 300 to 600 milliseconds.

Visual working memory involves a network of brain regions, of which the prefrontal cortex is one important element, and the basal ganglia, deep within the brain, are another. In the second study, the researchers extended the experiment to patients with damage not only to the prefrontal cortex, but also to the basal ganglia. Those with basal ganglia damage had problems with visual working memory no matter which part of the visual field was shown the image.

In other words, basal ganglia lesions caused a broader network deficit, while prefrontal cortex lesions resulted in a more limited, and recoverable, deficit. The findings help us understand the different roles these brain regions play in attention, and emphasize that memory and attention are held in networks. They also show that the plasticity compensating for brain damage is more dynamic and flexible than we realized, with intact regions stepping in on a case-by-case basis, very quickly, but only when the usual region fails.

Reference: 

Voytek, B., Davis, M., Yago, E., Barceló, F., Vogel, E. K., & Knight, R. T. (2010). Dynamic Neuroplasticity after Human Prefrontal Cortex Damage. Neuron, 68(3), 401-408.

Voytek, B., & Knight, R. T. (2010). Prefrontal cortex and basal ganglia contributions to visual working memory. Proceedings of the National Academy of Sciences, 107(42), 18167-18172.


Better reading may mean poorer face recognition

January, 2011

Evidence that illiterates use a brain region involved in reading for face processing to a greater extent than readers do suggests that reading may have hijacked the network used for object recognition.

An imaging study of 10 illiterates, 22 people who learned to read as adults and 31 who did so as children, has confirmed that the visual word form area (involved in linking sounds with written symbols) showed more activation in better readers, although everyone had similar levels of activation in that area when listening to spoken sentences. More importantly, it also revealed that this area was much less active among the better readers when they were looking at pictures of faces.

Other changes in activation patterns were also evident (for example, readers showed greater activation in the planum temporale in response to spoken speech), and most of the changes occurred even among those who acquired literacy in adulthood — showing that the brain restructuring doesn’t depend on a particular developmental time-window.

The finding of competition between face and word processing is consistent with the researchers’ theory that reading may have hijacked a neural network used to help us visually track animals, and raises the intriguing possibility that our face-perception abilities suffer in proportion to our reading skills.


Training improves visual perception

December, 2010

A month-long training program has enabled volunteers to instantly recognize very faint patterns.

In this study, 14 volunteers were trained to recognize a faint pattern of bars on a computer screen; the pattern’s contrast was progressively reduced, and over some 24 days of training the volunteers became able to recognize fainter and fainter patterns. This improvement correlated with stronger EEG signals from their brains as soon as the pattern flashed on the screen. The findings indicate that learning modified the very earliest stage of visual processing.
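Training of this kind is typically implemented as an adaptive staircase, with contrast dropping as performance improves. The 3-down-1-up sketch below (which converges on roughly 79% detection accuracy) is a generic illustration, not necessarily the procedure used in this study.

```python
import random

contrast, step = 0.5, 0.05
correct_streak = 0

def observer_detects(contrast):
    """Stand-in observer whose detection probability grows with contrast."""
    return random.random() < min(1.0, 0.5 + contrast)

for trial in range(200):
    if observer_detects(contrast):
        correct_streak += 1
        if correct_streak == 3:                    # three correct in a row:
            contrast = max(0.01, contrast - step)  # make the pattern fainter
            correct_streak = 0
    else:
        correct_streak = 0
        contrast += step                           # error: make it more visible

print(f"contrast near threshold: {contrast:.2f}")
```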

The findings could help shape training programs for people who must learn to detect subtle patterns quickly, such as doctors reading X-rays or air traffic controllers monitoring radars, and may also help improve training for adults with visual deficits such as lazy eye.

The findings are also noteworthy for showing that learning is not confined to ‘higher-order’ processes, but can occur at even the most basic, unconscious and automatic, level of processing.


Brain area organized by color and orientation

December, 2010

Object perception rests on groups of neurons that respond to specific attributes.

New imaging techniques used on macaque monkeys explain why we find it so easy to scan many items quickly when we’re focused on one attribute, and why we can be so blind to attributes and objects we’re not focused on.

The study reveals that a region of the visual cortex called V4, which is involved in visual object recognition, shows extensive compartmentalization. There are areas for specific colors; areas for specific orientations, such as horizontal or vertical. Other groups of neurons are thought to process more complex aspects of color and form, such as integrating different contours that are the same color, to achieve overall shape perception.

Reference: 

Tanigawa, H., Lu, H. D., & Roe, A. W. (2010). Functional organization for color and orientation in macaque V4. Nature Neuroscience, 13(12), 1542-1548.


Distinguishing between working memory and long-term memory

November, 2010

A study with four brain-damaged people challenges the idea that the hippocampus is the hub of spatial and relational processing for short-term as well as long-term memory.

Because people with damage to their hippocampus are sometimes impaired at remembering spatial information even over extremely short periods of time, it has been thought that the hippocampus is crucial for spatial information irrespective of whether the task is a working memory or a long-term memory task. This is in contrast to other types of information. In general, the hippocampus (and related structures in the mediotemporal lobe) is assumed to be involved in long-term memory, not working memory.

However, a new study involving four patients with damage to their mediotemporal lobes has found that they were perfectly capable of remembering, for one second, the relative positions of three or fewer objects on a table — but incapable of remembering more. That is, as soon as the limits of working memory were exceeded, their performance collapsed. It appears, therefore, that there is indeed a fundamental distinction between working memory and long-term memory across the board, including for spatial information and spatial-object relations.

The findings also underscore how little working memory is really capable of on its own (although absolutely vital for what it does!) — in real life, long-term memory and working memory work in tandem.


How the deaf have better vision; the blind better hearing

November, 2010

Two recent studies point to how those lacking one sense might develop enhanced abilities in their other senses, and what limits this ability.

An experiment with congenitally deaf cats has revealed how deaf or blind people might acquire other enhanced senses. The deaf cats showed only two specific enhanced visual abilities: visual localization in the peripheral field and visual motion detection. This was associated with the parts of the auditory cortex that would normally be used to pick up peripheral and moving sound (posterior auditory cortex for localization; dorsal auditory cortex for motion detection) being switched to processing this information for vision.

This suggests that only those abilities that have a counterpart in the unused part of the brain (auditory cortex for the deaf; visual cortex for the blind) can be enhanced. The findings also point to the plasticity of the brain. (As a side-note, did you know that apparently cats are the only animal besides humans that can be born deaf?)

The findings (and their broader implications) receive support from an imaging study involving 12 blind and 12 sighted people, who carried out an auditory localization task and a tactile localization task (reporting which finger was being gently stimulated). While the visual cortex was mostly inactive when the sighted people performed these tasks, parts of the visual cortex were strongly activated in the blind. Moreover, the accuracy of the blind participants correlated directly with the strength of activation in the spatial-processing region of the visual cortex (right middle occipital gyrus). This region was also activated in the sighted during spatial visual tasks.

