
Visual perception - a round-up of recent news

July, 2011

Memory begins with perception. Here's a round-up of recent research into visual perception.

We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects: we pick out interacting objects more quickly than unrelated ones. A new study has now identified the brain region responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn while volunteers viewed brief flashes of object pairs, half of which were shown interacting, and decided whether the glimpsed objects matched a presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains, and offices), brain activity was nearly identical regardless of whether participants were looking at a color photo or a simple line drawing. That is, just by looking at the pattern of brain activity in the ventral visual cortex, researchers could tell, with a fair amount of success, which category of scene the participant was looking at, whatever the medium. And when the decoding made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.
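For readers curious about what decoding a scene category “from the pattern of brain activity” involves in practice: studies like this typically train a classifier on the multi-voxel activity patterns. The sketch below is a minimal illustration of the approach; the variable names and the choice of a cross-validated linear support vector classifier are my assumptions, not necessarily the paper’s exact pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

def decode_scenes(X, y, medium):
    """Sketch of multi-voxel pattern decoding.

    X: (n_trials, n_voxels) activity patterns from ventral visual cortex
    y: scene-category label per trial (beach, street, forest, ...)
    medium: per-trial NumPy array of 'photo' / 'drawing' tags
    """
    pred = cross_val_predict(LinearSVC(), X, y, cv=5)
    print("decoding accuracy:", (pred == y).mean())

    # The paper's key comparison: do photos and drawings produce
    # similar *confusions* (mistakes), not just similar accuracy?
    for m in ("photo", "drawing"):
        sel = medium == m
        print(m, "confusion matrix:\n", confusion_matrix(y[sel], pred[sel]))
```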

To determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify the scene 60% of the time, as long as the important lines were left in: those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was the most informative.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very, very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes, but these reflect the complexity of the task rather than any incompetence on our part.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — that of the reliability of the sensory information. Optimally, an observer would take into consideration the varying reliability of the items, giving the information different weights as a result of that perceived reliability. That weighted information would then be combined according to a specific integration rule. That had been calculated as the optimal process, and the performance of the participants matched that expectation.

The computer model that simulated (and matched) human performance used groups of simulated neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information from multiple locations while taking into account the reliability of each piece of information, and we do this by combining the outputs of different groups of neurons, each responding to different bits of information.
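To make this concrete, here is a toy simulation of such a reliability-weighted observer. This is a minimal sketch under simple Gaussian assumptions of my own choosing, not the authors’ actual model: each item’s random contrast sets its sensory noise, and the optimal rule averages per-location likelihood ratios, which automatically down-weights unreliable items.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8           # display locations
THETA = 5.0     # target tilt in degrees; distractors are vertical (0 degrees)

def make_display(target_present):
    """One search display: per-item noise (sigma) stands in for random contrast."""
    stim = np.zeros(N)
    if target_present:
        stim[rng.integers(N)] = THETA
    sigma = rng.uniform(1.0, 8.0, N)              # low contrast -> high noise
    return stim + sigma * rng.standard_normal(N), sigma

def optimal_says_present(x, sigma):
    """Bayes-optimal rule: reliability (1/sigma^2) weights each item's evidence."""
    llr = (THETA * x - THETA**2 / 2) / sigma**2   # log-likelihood ratio per item
    L = np.exp(llr).mean()                        # marginalize over target location
    return L > 1.0                                # equal priors: 'present' if L > 1

trials, correct = 20000, 0
for _ in range(trials):
    present = rng.random() < 0.5
    x, sigma = make_display(present)
    correct += optimal_says_present(x, sigma) == present
print(f"accuracy of the optimal observer: {correct / trials:.3f}")
```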

Another recent study into visual search has found that, when people are preparing to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex is very similar to that shown when they are actually looking at the objects in the scenes. Moreover, the precise pattern of preparatory activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 in more specific, imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this region may be the source of the top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze; these signal potential threats, so it’s no wonder we’ve evolved to pay attention to them. We’re also drawn to potential rewards: prospective mates, food, drink.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when one of the shapes occasionally appeared in red or green, reaction times slowed, demonstrating that these colors were still distracting (even though participants had been told to ignore them).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli previously associated with reward continue to capture attention regardless of their relevance to the task in hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity, and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among the hundreds of people who viewed images from a collection of about 10,000, some of which were repeated, and decided whether or not they had seen each image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in it. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
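As a rough illustration of what such a pipeline might look like, the sketch below regresses memorability scores on crude global image features. The features and the ridge regression are stand-ins of my own for illustration; the actual study used richer features and a more sophisticated model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def global_features(img):
    """Crude stand-ins for the study's features: color statistics plus edge density."""
    # img: H x W x 3 array of floats in [0, 1]
    hist = np.concatenate([np.histogram(img[..., c], bins=8, range=(0, 1))[0]
                           for c in range(3)]).astype(float)
    hist /= hist.sum()
    gy, gx = np.gradient(img.mean(axis=2))       # grayscale image gradients
    return np.append(hist, np.hypot(gx, gy).mean())

def fit_memorability_model(images, scores):
    """images: list of image arrays; scores: per-image memorability from the repeat game."""
    X = np.stack([global_features(im) for im in images])
    X_tr, X_te, y_tr, y_te = train_test_split(X, scores, test_size=0.25, random_state=0)
    model = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print("held-out R^2:", model.score(X_te, y_te))
    return model   # model.predict() then scores images it hasn't 'seen' before
```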

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

Reference: 

Kim, J. G., Biederman, I., & Juan, C.-H. (2011). The Benefit of Object Interactions Arises in the Lateral Occipital Cortex Independent of Attentional Modulation from the Intraparietal Sulcus: A Transcranial Magnetic Stimulation Study. The Journal of Neuroscience, 31(22), 8320-8324.

Walther, D. B., Chai, B., Caddigan, E., Beck, D. M., & Fei-Fei, L. (2011). Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences, 108(23), 9661-9666.

Ma, W. J., Navalpakkam, V., Beck, J. M., van den Berg, R., & Pouget, A. (2011). Behavior and neural basis of near-optimal visual search. Nature Neuroscience, 14(6), 783-790.

Peelen, M. V., & Kastner, S. (2011). A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences, 108(29), 12125-12130.

Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367-10371.

Isola, P., Xiao, J., Oliva, A., & Torralba, A. (2011). What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.


Mindfulness meditation may help attention through better control of alpha rhythms

May, 2011

New research suggests that meditation can improve your ability to control alpha brainwaves, thus helping you block out distraction.

As I’ve discussed on many occasions, a critical part of attention (and working memory capacity) is being able to ignore distraction. There has been growing evidence that mindfulness meditation training helps develop attentional control. Now a new study helps fill out the picture of why it might do so.

The alpha rhythm is particularly active in neurons that process sensory information. When you expect a touch, sight or sound, the focusing of attention toward the expected stimulus induces a lower alpha wave height in neurons that would handle the expected sensation, making them more receptive to that information. At the same time the height of the alpha wave in neurons that would handle irrelevant or distracting information increases, making those cells less receptive to that information. In other words, alpha rhythm helps screen out distractions.
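A toy simulation may make the gating idea clearer. The model below is my own illustration, not taken from the study: each sensory channel carries a 10 Hz alpha oscillation, and the channel’s gain on incoming stimuli falls as its alpha amplitude rises, so the attended (low-alpha) channel transmits a stimulus more strongly.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 2, 1 / 250)                     # 2 seconds sampled at 250 Hz

def sensory_channel(alpha_amp, stimulus):
    """Toy gating model: bigger alpha means weaker stimulus transmission."""
    alpha = alpha_amp * np.sin(2 * np.pi * 10 * t)
    gain = 1.0 / (1.0 + alpha_amp)
    return gain * stimulus + alpha + 0.1 * rng.standard_normal(t.size)

stimulus = (t > 1.0).astype(float)               # a touch arriving at t = 1 s

attended = sensory_channel(alpha_amp=0.2, stimulus=stimulus)  # attention lowers alpha
ignored = sensory_channel(alpha_amp=1.0, stimulus=stimulus)   # distractor channel

# The evoked response (post- minus pre-stimulus mean) is larger where alpha is low.
for name, sig in (("attended", attended), ("ignored", ignored)):
    print(name, round(sig[t > 1].mean() - sig[t <= 1].mean(), 2))
```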

In this study, the six participants who completed an eight-week mindfulness meditation program (MBSR) were found to generate larger alpha waves, and to generate them faster, than the six in the control group. Alpha wave activity in the somatosensory cortex was measured while participants directed their attention to either their left hand or foot. This was done on three occasions: before training, at three weeks into the program, and after the program ended.

The MBSR program involves an initial two-and-a-half-hour training session, followed by daily 45-minute meditation sessions guided by a CD recording. The program focuses on training participants first to pay close attention to body sensations, then to focus on sensations in a specific area, and finally to disengage and shift the focus to another body area.

Apart from helping us understand why mindfulness meditation training seems to improve attention, the findings may also explain why this meditation can help sufferers of chronic pain.


New insight into insight, and the role of the amygdala in memory

April, 2011

A new study suggests that one-off learning (that needs no repetition) occurs because the amygdala, center of emotion in the brain, judges the information valuable.

Most memory research has concerned itself with learning over time, but many memories, of course, become fixed in our mind after only one experience. The mechanism by which we acquire knowledge from single events is not well understood, but a new study sheds some light on it.

The study involved participants being presented with images degraded almost beyond recognition. After a few moments, the original image was revealed, generating an “aha!” type moment. Insight is an experience that is frequently remembered well after a single occurrence. Participants repeated the exercise with dozens of different images.

Memory for these images was tested a week later, when participants were again shown the degraded images, and asked to recall details of the actual image.

Around half the images were remembered. But what’s intriguing is that the initial learning experience took place in a brain scanner, and to the researchers’ surprise, one of the highly active areas during the moment of insight was the amygdala. Moreover, high activity in the amygdala predicted that those images would be remembered a week later.

It seems the more we learn about the amygdala, the further its involvement extends. In this case, it’s suggested that the amygdala signals to other parts of the brain that an event is significant. In other words, it gives a value judgment, decreeing whether an event is worthy of being remembered. Presumably the greater the value, the more effort the brain puts into consolidating the information.

Importantly, the images associated with high activity in the amygdala did not appear to be more ‘emotional’ than the other images.


Brain hub helps us switch attention

December, 2010

The intraparietal sulcus appears to be a hub for connecting the different sensory-processing areas as well as higher-order processes, and may be key to attention problems.

If our brains are full of clusters of neurons that respond only to specific features (as suggested in my earlier report), how do we bring it all together, and how do we switch from one point of interest to another? A new study using resting-state data from 58 healthy adolescents and young adults has found that the intraparietal sulcus, situated at the intersection of the visual, somatosensory, and auditory association cortices and known to be a key area for processing attention, contains a miniature map of all the things we can pay attention to (visual, auditory, and motor stimuli, etc.).

Moreover, this map is copied in at least 13 other places in the brain, all of which are connected to the intraparietal sulcus. Each copy appears to do something different with the information. For instance, one map processes eye movements while another processes analytical information. This map of the world may be a fundamental building block for how information is represented in the brain.

There were also distinct clusters within the intraparietal sulcus that showed different levels of connectivity to auditory, visual, somatosensory, and default mode networks, suggesting they are specialized for different sensory modalities.

The findings add to our understanding of how we can shift our attention so precisely, and may eventually help us devise ways of treating disorders where attention processing is off, such as autism, attention deficit disorder, and schizophrenia.

Reference: 

Anderson, J. S., Ferguson, M. A., Lopez-Larson, M., & Yurgelun-Todd, D. (2010). Topographic maps of multisensory attention. Proceedings of the National Academy of Sciences, 107(46), 20110-20114.


How we can control individual neurons

November, 2010

Every moment a multitude of stimuli compete for our attention. Just how this competition is resolved, and how we control it, is not known. But a new study adds to our understanding.

Following on from earlier studies that found individual neurons were associated with very specific memories (such as a particular person), new research has shown that we can actually regulate the activity of specific neurons, increasing the firing rate of some while decreasing the rate of others.

The study involved 12 patients implanted with deep electrodes for intractable epilepsy. On the basis of each individual’s interests, four images were selected for each patient. Each of these images was associated with the firing of specific neurons in the mediotemporal lobe. The firing of these neurons was hooked up to a computer, allowing the patients to make their particular images appear by thinking of them. When another image appeared on top of the image as a distraction, creating a composite image, patients were asked to focus on their particular image, brightening the target image while the distractor image faded. The patients were successful 70% of the time in brightening their target image. This was primarily associated with increased firing of the specific neurons associated with that image.
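The feedback loop described here is, in effect, a simple brain-machine interface. The sketch below is a hypothetical reconstruction of the control logic only; the function names, step size, and termination rule are my inventions, standing in for the real system’s online spike decoding and display code.

```python
def update_blend(target_rate, distractor_rate, alpha, step=0.05):
    """Nudge the composite toward whichever image's neurons are firing more."""
    if target_rate > distractor_rate:
        return min(1.0, alpha + step)    # target image brightens
    if distractor_rate > target_rate:
        return max(0.0, alpha - step)    # distractor wins this cycle
    return alpha

def run_trial(read_rate, show_composite, n_steps=200):
    """read_rate(unit) -> decoded firing rate; show_composite() updates the screen."""
    alpha = 0.5                          # start from a 50/50 blend
    for _ in range(n_steps):
        alpha = update_blend(read_rate("target_unit"),
                             read_rate("distractor_unit"), alpha)
        show_composite(target_opacity=alpha, distractor_opacity=1.0 - alpha)
        if alpha in (0.0, 1.0):          # one image fully dominates: trial ends
            return alpha == 1.0          # True = success (target fully visible)
    return False                         # timed out
```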

I should emphasize that the use of a composite image meant that the participants had to rely on a mental representation rather than the sensory stimuli, at least initially. Moreover, when the feedback given was fake — that is, the patients’ efforts were no longer linked to the behavior of the image on the screen — success rates fell dramatically, demonstrating that their success was due to a conscious, directed action.

Different patients used different strategies to focus their attention. While some simply thought of the picture, others repeated the name of the image out loud or focused their gaze on a particular aspect of the image.

Resolving the competition of multiple internal and external stimuli is a process which involves a number of different levels and regions, but these findings help us understand at least some of the process that is under our conscious control. It would be interesting to know more about the relative effectiveness of the different strategies people used, but this was not the focus of the study. It would also be very interesting to compare effectiveness at this task across age, but of course this procedure is invasive and can only be used in special cases.

The study offers hope for building better brain-machine interfaces.


An early marker of autism

October, 2010

A strong preference for looking at moving shapes rather than active people was evident among toddlers with autism spectrum disorder.

A study involving 110 toddlers (aged 14-42 months), of whom 37 were diagnosed with an autism spectrum disorder and 22 with a developmental delay, has compared their behavior when watching a one-minute movie that showed moving geometric patterns (a standard screen saver) on one side of a video monitor and children in high action, such as dancing or doing yoga, on the other.

It was found that only one of the 51 typically-developing toddlers preferred the shapes, but 40% of the ASD toddlers did, as well as 9% of the developmentally delayed toddlers. Moreover, all those who spent over 69% of the time focusing on the moving shapes were those with ASD.

Additionally, those with ASD who preferred the geometric images also showed a particular pattern of saccades (eye movements) when viewing the images — a reduced number of saccades, demonstrated in a fixed stare. It’s suggested that a preference for moving geometric patterns combined with lengthy absorption in such images, might be an early identifier of autism. Such behavior should be taken as a signal to look for other warning signs, such as reduced enjoyment during back-and-forth games like peek-a-boo; an unusual tone of voice; failure to point at or bring objects to show; and failure to respond to their name.

Reference: 

Pierce, K., Conant, D., Hazin, R., Stoner, R., & Desmond, J. (2010). Preference for Geometric Patterns Early in Life As a Risk Factor for Autism. Archives of General Psychiatry, doi:10.1001/archgenpsychiatry.2010.113.


Sensory integration in autism

October, 2010

A new study provides evidence for the theory that sensory integration is impaired in autism.

Children with autism often focus intently on a single activity or feature of their environment. A study involving 17 autistic children (6-16 years) and 17 controls has compared brain activity as they watched a silent video of their choice while tones and vibrations were presented, separately and simultaneously.

A simple stimulus takes about 20 milliseconds to arrive in the brain. When information from multiple senses registers at the same time, integration takes about 100 to 200 milliseconds in normally developing children. But those with autism took an average of 310 milliseconds to integrate the noise and vibration when they occurred together. The children with autism also showed weaker signal strength, signified by lower amplitude brainwaves.

The findings are consistent with theories that automatic sensory integration is impaired in autism, and may help explain autism’s characteristic sensitivity to excessive sensory stimulation.


Natural scenes have positive impact on brain

October, 2010

Images of nature have been found to improve attention. A new study shows that natural scenes encourage different brain regions to synchronize.

A couple of years ago I reported on a finding that walking in the park, and (most surprisingly) simply looking at photos of natural scenes, could improve memory and concentration (see below). Now a new study helps explain why. The study examined brain activity while 12 male participants (average age 22) looked at images of tranquil beach scenes and non-tranquil motorway scenes. On half the presentations they concurrently listened to the same sound associated with both scenes (waves breaking on a beach and traffic moving on a motorway produce a similar sound, perceived as a constant roar).

Intriguingly, the natural, tranquil scenes produced significantly greater effective connectivity between the auditory cortex and medial prefrontal cortex, and between the auditory cortex and posterior cingulate gyrus, temporoparietal cortex and thalamus. It’s of particular interest that this is an example of visual input affecting connectivity of the auditory cortex, in the presence of identical auditory input (which was the focus of the research). But of course the take-home message for us is that the benefits of natural scenes for memory and attention have been supported.

Previous study:

Many of us who work indoors are familiar with the benefits of a walk in the fresh air, but a new study gives new insight into why, and how, it works. In two experiments, researchers found memory performance and attention spans improved by 20% after people spent an hour interacting with nature. The intriguing finding was that this effect was achieved not only by walking in the botanical gardens (versus walking along the main streets of Ann Arbor), but also by looking at photos of nature (versus looking at photos of urban settings). The findings are consistent with a theory that natural environments are better at restoring attention abilities because they provide a more coherent pattern of stimulation that requires less effort, as opposed to urban environments, which provide complex and often confusing stimulation that captures attention dramatically and requires directed attention (e.g., to avoid being hit by a car).

Reference: 

Hunter, M. D., Eickhoff, S. B., Pheasant, R. J., Douglas, M. J., Watts, G. R., Farrow, T. F. D., et al. (2010). The state of tranquility: Subjective perception is shaped by contextual modulation of auditory connectivity. NeuroImage, 53(2), 611-618.

Berman, M. G., Jonides, J., & Kaplan, S. (2008). The cognitive benefits of interacting with nature. Psychological Science, 19(12), 1207-1212.


Having a male twin improves mental rotation performance in females

October, 2010

A twin study suggests prenatal testosterone may be a factor in the innate male superiority in mental rotation*.

Because male superiority in mental rotation appears to be evident at a very young age, it has been suggested that testosterone may be a factor. To assess whether females exposed to higher levels of prenatal testosterone perform better on mental rotation tasks than females with lower levels of testosterone, researchers compared mental rotation task scores between twins from same-sex and opposite-sex pairs.

It was found that females with a male co-twin scored higher than did females with a female co-twin (there was no difference in scores between males from opposite-sex and same-sex pairs). Of course, this doesn’t prove that the differences are produced in the womb; it may be that girls with a male twin engage in more male-typical activities. However, the association remained after allowing for computer game playing experience.

The study involved 804 twins, average age 22, of whom 351 females were from same-sex pairs and 120 from opposite-sex pairs. There was no significant difference between females from identical same-sex pairs compared to fraternal same-sex pairs.

* Please do note that ‘innate male superiority’ does NOT mean that all men are inevitably better than all women at this very specific task! My words simply reflect the evidence that the tendency of males to be better at mental rotation is found in infants as young as 3 months.


Gender gap in spatial ability can be reduced through training

October, 2010

Male superiority in mental rotation is the most-cited gender difference in cognitive abilities. A new study shows that the difference can be eliminated in 6-year-olds after a mere 8 weeks.

Following a monkey study that found training in spatial memory could raise females to the level of males, and human studies suggesting that video games might help reduce gender differences in spatial processing (see below for these), a new study shows that training in spatial skills can eliminate the gender difference in young children. Spatial ability, along with verbal skills, is one of the two most-cited cognitive differences between the sexes, because these two differences appear to be the most robust.

This latest study involved 116 first graders, half of whom were put in a training program that focused on expanding working memory, perceiving spatial information as a whole rather than concentrating on details, and thinking about spatial geometric pictures from different points of view. The other children took part in a substitute training program, as a control group. Initial gender differences in spatial ability disappeared for those who had been in the spatial training group after only eight weekly sessions.

Previously:

A study of 90 adult rhesus monkeys found young-adult males had better spatial memory than females, but peaked early. By old age, male and female monkeys had about the same performance. This finding is consistent with reports suggesting that men show greater age-related cognitive decline relative to women. A second study of 22 rhesus monkeys showed that in young adulthood, simple spatial-memory training did not help males but dramatically helped females, raising their performance to the level of young-adult males and wiping out the gender gap.

Another study showing that expert video gamers have improved mental rotation skills, visual and spatial memory, and multitasking skills has led researchers to conclude that training with video games may serve to reduce gender differences in visual and spatial processing, and some of the cognitive declines that come with aging.

