
Gist memory may be why false memories are more common in older adults

  • Gist processing appears to play a strong role in false memories.
  • Older adults rely on gist memory more.
  • Older adults find it harder to recall specific sensory details that would help confirm whether a memory is true.

Do older adults forget as much as they think, or is it rather that they ‘misremember’?

A small study adds to evidence that gist memory plays an important role in false memories at any age, but older adults are more susceptible to misremembering because of their greater use of gist memory.

Gist memory is about remembering the broad story, not the details. It relies heavily on schemas: concepts we build over time for events and experiences, to reduce cognitive load. Schemas allow us to process and respond faster. We build them for such things as going to the dentist, going to a restaurant, attending a lecture, and so on. Schemas are very useful, reminding us what to expect and what to do in situations we have experienced before. But they are also responsible for errors of perception and memory — we see and remember what we expect to see.

As we get older, we naturally build up more, and firmer, schemas, making it harder to really see with fresh eyes. This makes it harder for us to notice details, and easier for us to misremember what we saw.

A small study involving 20 older adults (mean age 75) had participants look at 26 different pictures of common scenes (such as a farmyard or a bathroom) for about 10 seconds each, and asked them to remember as much as they could about the scenes. Later, they were shown 300 pictures of objects that were either in the scene, related to the scene (but not actually in it), or not commonly associated with the scene, and were required to say whether or not the objects were in the picture. Brain activity was monitored during these tests. Performance was also compared with that produced in a previous identical study involving 22 young adults (mean age 23).

As expected and as is typical, there was a higher hit rate for schematic items and a higher rate of false memories for schematically related lures (items that belong to the schema but didn’t appear in the picture). True memories activated the typical retrieval network (medial prefrontal cortex, hippocampus/parahippocampal gyrus, inferior parietal lobe, right middle temporal gyrus, and left fusiform gyrus).

Activity in some of these regions (frontal-parietal regions, left hippocampus, right MTG, and left fusiform) distinguished hits from false alarms, supporting the idea that it’s more demanding to retrieve true memories than illusory ones. This contrasts with younger adults, who in this and previous research have displayed the opposite pattern. The finding is consistent, however, with the theory that older adults tend to engage frontal resources at lower levels of task difficulty.

Older adults also displayed greater activation in the medial prefrontal cortex for both schematic and non-schematic hits than young adults did.

While true memories activated the typical retrieval network, and there were different patterns of activity for schematic vs non-schematic hits, there was no distinctive pattern of activity for retrieving false memories. However, there was increased activity in the middle frontal gyrus, middle temporal gyrus, and hippocampus/parahippocampal gyrus as a function of the rate of false memories.

Imaging also revealed that, like younger adults, older adults engage the ventromedial prefrontal cortex when retrieving schematic information, and that they do so to a greater extent. Activation patterns also support the role of the mediotemporal lobe (MTL), and the posterior hippocampus/parahippocampal gyrus in particular, in distinguishing true memories from false. Note that this region is not concerned with schematic information: there was no consistent difference in its activation for schematic vs non-schematic hits. Older adults did, however, show a shift within the hippocampus, with much of the activity moving to a more posterior region.

Sensory details are also important for distinguishing between true and false memories, but, apart from activity in the left fusiform gyrus, older adults — unlike younger adults — did not show any differential activation in the occipital cortex. This finding is consistent with previous research, and supports the conclusion that older adults don’t experience the recapitulation of sensory details in the same way that younger adults do. This, of course, adds to the difficulty they have in distinguishing true and false memories.

Older adults also showed differential activation of the right MTG, involved in gist processing, for true memories. Again, this is not found in younger adults, and supports the idea that older adults depend more on schematic gist information to assess whether a memory is true.

However, in older adults, increased activation of both the MTL and the MTG is seen as rates of false alarms increase, indicating that both gist and episodic memory contribute to their false memories. This is also in line with previous research, suggesting that memories of specific events and details can (incorrectly) provide support for false memories that are consistent with such events.

Older adults, unlike young adults, failed to show differential activity in the retrieval network for targets and lures (items that fit in with the schema, but were not in fact present in the image).

What does all this mean? Here’s what’s important:

  • older adults tend to use schema information more when trying to remember
  • older adults find it harder to recall specific sensory details that would help confirm a memory’s veracity
  • at all ages, gist processing appears to play a strong role in false memories
  • memory of specific (true) details can be used to endorse related (but false) details.

What can you do about any of this? One approach would be to make an effort to recall specific sensory details of an event rather than relying on the easier generic event that comes to mind first. So, for example, if you’re asked to go to the store to pick up orange juice, tomatoes and muesli, you might end up with more familiar items — a sort of default position, as it were, because you can’t quite remember what you were asked. If you make an effort to remember the occasion of being told — where you were, how the other person looked, what time of day it was, other things you talked about, etc — you might be able to bring the actual items to mind. A lot of the time, we simply don’t make the effort, because we don’t think we can remember.

https://www.eurekalert.org/pub_releases/2018-03/ps-fdg032118.php

Reference: 

[4331] Webb, C. E., & Dennis N. A.
(Submitted).  Differentiating True and False Schematic Memories in Older Adults.
The Journals of Gerontology: Series B.


Being overweight linked to poorer memory

  • A study of younger adults adds to evidence that higher BMI is associated with poorer cognition, and points to a specific impairment in memory integration.

A small study involving 50 younger adults (18-35; average age 24) has found that those with a higher BMI performed significantly worse on a computerised memory test called the “Treasure Hunt Task”.

The task involved moving food items around complex scenes (e.g., a desert with palm trees), hiding them in various locations, and indicating afterward where and when they had hidden them. The test was designed to disentangle object, location, and temporal order memory, and the ability to integrate those separate bits of information.

Those with higher BMI were poorer at all aspects of this task. There was no difference, however, in reaction times, or time taken at encoding. In other words, they weren't slower, or less careful when they were learning. Analysis of the errors made indicated that the problem was not with spatial memory, but rather with the binding of the various elements into one coherent memory.

The results could suggest that overweight people are less able to vividly relive details of past events. This in turn might make it harder for them to keep track of what they'd eaten, perhaps making overeating more likely.

The 50 participants included 27 with BMI below 25, 24 with BMI 25-30 (overweight), and 8 with BMI over 30 (obese). 72% were female. None were diagnosed diabetics. However, the researchers didn't take other health conditions which often co-occur with obesity, such as hypertension and sleep apnea, into account.

This is a preliminary study only, and further research is needed to validate its findings. However, it's significant in that it adds to growing evidence that the cognitive impairments that accompany obesity are present early in adult life and are not driven by diabetes.

The finding is also consistent with previous research linking obesity with dysfunction of the hippocampus and the frontal lobe.

http://www.eurekalert.org/pub_releases/2016-02/uoc-bol022616.php

https://www.theguardian.com/science/neurophilosophy/2016/mar/03/obesity-linked-to-memory-deficits

Reference: 

[4183] Cheke, L. G., Simons J. S., & Clayton N. S.
(2015).  Higher body mass index is associated with episodic memory deficits in young adults.
The Quarterly Journal of Experimental Psychology. 1 - 12.


Attention warps memory space

A recent study reveals that when we focus on searching for something, regions across the brain are pulled into the search. The study sheds light on how attention works.

In the experiments, brain activity was recorded as participants searched for people or vehicles in movie clips. Computational models showed how each of the roughly 50,000 locations near the cortex responded to each of the 935 categories of objects and actions seen in the movie clips.

05/2013


How emotion keeps some memories vivid

September, 2012

Emotionally arousing images that are remembered more vividly were seen more vividly. This may be because the amygdala focuses visual attention rather than more cognitive attention on the image.

We know that emotion affects memory. We know that attention affects perception (see, e.g., Visual perception heightened by meditation training; How mindset can improve vision). Now a new study ties it all together. The study shows that emotionally arousing experiences affect how well we see them, and this in turn affects how vividly we later recall them.

The study used images of positively and negatively arousing scenes and neutral scenes, which were overlaid with varying amounts of “visual noise” (like the ‘snow’ we used to see on old televisions). College students were asked to rate the amount of noise on each picture, relative to a specific image they used as a standard. There were 25 pictures in each category, and three levels of noise (less than standard, equal to standard, and more than standard).

Different groups explored different parameters: color; gray-scale; less noise (10%, 15%, 20% as compared to 35%, 45%, 55%); single exposure (each picture was only presented once, at one of the noise levels).

Regardless of the actual amount of noise, emotionally arousing pictures were consistently rated as significantly less noisy than neutral pictures, indicating that people were seeing them more clearly. This was true in all conditions.

Eye-tracking analysis ruled out the idea that people directed their attention differently for emotionally arousing images, but did show that more eye fixations were associated both with less noisy images and emotionally arousing ones. In other words, people were viewing emotionally important images as if they were less noisy.

One group of 22 students were given a 45-minute spatial working memory task after seeing the images, and then asked to write down all the details they could remember about the pictures they remembered seeing. The amount of detail they recalled was taken to be an indirect measure of vividness.

A second group of 27 students were called back after a week for a recognition test. They were shown 36 new images mixed in with the original 75 images, and asked to rate them as new, familiar, or recollected. They were also asked to rate the vividness of their recollection.

Although, overall, emotionally arousing pictures were not more likely to be remembered than neutral pictures, both experiments found that pictures originally seen as more vivid (less noise) were remembered more vividly and in more detail.

Brain scans from 31 students revealed that the amygdala was more active when looking at images rated as vivid, and this in turn increased activity in the visual cortex and in the posterior insula (which integrates sensations from the body). This suggests that the increased perceptual vividness is not simply a visual phenomenon, but part of a wider sensory activation.

There was another neural response to perceptual vividness: activity in the dorsolateral prefrontal cortex and the posterior parietal cortex was negatively correlated with vividness. This suggests that emotion is not simply increasing our attentional focus, it is instead changing it by reducing effortful attentional and executive processes in favor of more perceptual ones. This, perhaps, gives emotional memories their different ‘flavor’ compared to more neutral memories.

These findings clearly need more exploration before we know exactly what they mean, but the main finding from the study is that the vividness with which we recall some emotional experiences is rooted in the vividness with which we originally perceived them.

The study highlights how emotion can sharpen our attention, building on previous findings that emotional events are more easily detected when visibility is difficult, or attentional demands are high. It is also not inconsistent with a study I reported on last year, which found some information needs no repetition to be remembered because the amygdala decrees it of importance.

I should add, however, that the perceptual effect is not the whole story — the current study found that, although perceptual vividness is part of the reason for memories that are vividly remembered, emotional importance makes its own, independent, contribution. This contribution may occur after the event.

It’s suggested that individual differences in these reactions to emotionally enhanced vividness may underlie an individual’s vulnerability to post-traumatic stress disorder.


Nature walks improve cognition in people with depression

June, 2012

A small study provides more support for the idea that viewing nature can refresh your attention and improve short-term memory, and extends it to those with clinical depression.

I’ve talked before about Dr Berman’s research into Attention Restoration Theory, which proposes that people concentrate better after nature walks or even just looking at nature scenes. In his latest study, the findings have been extended to those with clinical depression.

The study involved 20 young adults (average age 26), all of whom had a diagnosis of major depressive disorder. Short-term memory and mood were assessed (using the backwards digit span task and the PANAS), and then participants were asked to think about an unresolved, painful autobiographical experience. They were then randomly assigned to go for a 50-minute walk along a prescribed route in either the Ann Arbor Arboretum (woodland park) or traffic heavy portions of downtown Ann Arbor. After the walk, mood and cognition were again assessed. A week later the participants repeated the entire procedure in the other location.

Participants exhibited a significant (16%) increase in attention and working memory after the nature walk compared to the urban walk. While participants felt more positive after both walks, mood improvement did not correlate with the memory effects.

The finding is particularly interesting because depression is characterized by high levels of rumination and negative thinking. It seemed quite likely, then, that a solitary walk in the park might make depressed people feel worse, and worsen working memory. It’s intriguing that it didn’t.

It’s also worth emphasizing that, as in earlier studies, this effect of nature on cognition appears to be independent of mood (which is, of course, the basic tenet of Attention Restoration Theory).

Of course, this study is, like the others, small, and involves the same demographic. Hopefully future research will extend the sample groups, to middle-aged and older adults.


How action videogames change some people’s brains

May, 2012

A small study has found that ten hours of playing action video games produced significant changes in brainwave activity and improved visual attention for some (but not all) novices.

Following on from research finding that people who regularly play action video games show visual attention related differences in brain activity compared to non-players, a new study has investigated whether such changes could be elicited in 25 volunteers who hadn’t played video games in at least four years. Sixteen of the participants played a first-person shooter game (Medal of Honor: Pacific Assault), while nine played a three-dimensional puzzle game (Ballance). They played the games for a total of 10 hours spread over one- to two-hour sessions.

Selective attention was assessed through an attentional visual field task, carried out prior to and after the training program. Individual learning differences were marked, and because of visible differences in brain activity after training, the action gamers were divided into two groups for analysis — those who performed above the group mean on the second attentional visual field test (7 participants), and those who performed below the mean (9). These latter individuals showed similar brain activity patterns as those in the control (puzzle) group.

In all groups, early-onset brainwaves were little affected by video game playing. This suggests that game-playing has little impact on bottom–up attentional processes, and is in keeping with earlier research showing that players and non-players don’t differ in the extent to which their attention is captured by outside stimuli.

However, later brainwaves — those thought to reflect top–down control of selective attention via increased inhibition of distracters — increased significantly in the group who played the action game and showed above-average improvement on the field test. Another increased wave suggests that the total amount of attention allocated to the task was also greater in that group (i.e., they were concentrating more on the game than the below-average group, and the control group).

The improved ability to select the right targets and ignore other stimuli suggests that these players are also improving their ability to make perceptual decisions.

The next question, of course, is what personal variables underlie the difference between those who benefit more quickly from the games, and those who don’t. And how much more training is necessary for this latter group, and are there some people who won’t achieve these benefits at all, no matter how long they play? Hopefully, future research will be directed to these questions.

Reference: 

[2920] Wu, S., Cheng C K., Feng J., D'Angelo L., Alain C., & Spence I.
(2012).  Playing a First-person Shooter Video Game Induces Neuroplastic Change.
Journal of Cognitive Neuroscience. 24(6), 1286 - 1293.


Sleep preserves your feelings about traumatic events

January, 2012

New research suggests that sleeping within a few hours of a disturbing event keeps your emotional response to the event strong.

Previous research has shown that negative objects and events are preferentially consolidated in sleep — if you experience them in the evening, you are more likely to remember them than more neutral objects or events, but if you experience them in the morning, they are not more likely to be remembered than other memories (see collected sleep reports). However, more recent studies have failed to find this. A new study also fails to find such preferential consolidation, but does find that our emotional reaction to traumatic or disturbing events can be greatly reduced if we stay awake afterward.

Being unable to sleep after such events is of course a common response — these findings indicate there’s good reason for it, and we should go along with it rather than fighting it.

The study involved 106 young adults rating pictures on a sad-happy scale and their own responses on an excited-calm scale. Twelve hours later, they were given a recognition test: noting pictures they had seen earlier from a mix of new and old pictures. They also rated all the pictures on the two scales. There were four groups: 41 participants saw the first set late in the day and the second set 12 hours later on the following day (‘sleep group’); 41 saw the first set early and the second set 12 hours later on the same day; 12 participants saw both sets in the evening, with only 45 minutes between the sets; 12 participants saw both sets in the morning (these last two groups were to rule out circadian effects). 25 of the sleep group had their brain activity monitored while they slept.

The sleep group performed significantly better on the recognition test than the same-day group. Negative pictures were remembered better than neutral ones. However, unlike earlier studies, the sleep group didn’t preferentially remember negative pictures more than the same-day group.

But, interestingly, the sleep group was more likely to maintain the strength of initial negative responses. The same-day group showed a weaker response to negative scenes on the second showing.

It’s been theorized that late-night REM sleep is critical for emotional memory consolidation. However, this study found no significant relationship between the amount of time spent in REM sleep and recognition memory, nor was there any relationship between other sleep stages and memory. There was one significant result: those who had more REM sleep in the third quarter of the night showed the least reduction of emotional response to the negative pictures.

There were no significant circadian effects, but it’s worth noting that even the 45-minute gap between the sets was sufficient to weaken the emotional response to negative scenes.

While there was a trend toward a gender effect, it didn’t reach statistical significance, and there were no significant interactions between gender and group or emotional value.

The findings suggest that the effects of sleep on memory and emotion may be independent.

The findings also contradict previous studies showing preferential consolidation of emotional memories during sleep, but are consistent with two other recent studies that have also failed to find this. At this stage, all we can say is that there may be certain conditions in which this occurs (or doesn’t occur), but more research is needed to determine what these conditions are. Bear in mind that there is no doubt that sleep helps consolidate memories; we are talking here only about emphasizing negative memories at the expense of emotionally-neutral ones.

Reference: 

[2672] Baran, B., Pace-Schott E. F., Ericson C., & Spencer R. M. C.
(2012).  Processing of Emotional Reactivity and Emotional Memory over Sleep.
The Journal of Neuroscience. 32(3), 1035 - 1042.


Visual perception - a round-up of recent news

July, 2011

Memory begins with perception. Here's a round-up of recent research into visual perception.

Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn, while volunteers viewed brief flashes of object pairs (half of which were interacting with each other) and decided whether the glimpsed objects matched a presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while viewing color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical for the two formats. Just by looking at the pattern of activity in the ventral visual cortex, researchers could tell, with a fair amount of success, which category of scene a participant was looking at, regardless of whether the picture was a color photo or a line drawing. When they made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — that of the reliability of the sensory information. Optimally, an observer would take into consideration the varying reliability of the items, giving the information different weights as a result of that perceived reliability. That weighted information would then be combined according to a specific integration rule. That had been calculated as the optimal process, and the performance of the participants matched that expectation.

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations, while taking into account the reliability of the different pieces of information, and we do this through the integration of information coming from different groups of neurons, each group of which is responding to different bits of information.
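The weighted-integration idea can be sketched in a few lines of Python. This is a toy model under stated assumptions — the orientation values, precisions, and the specific log-likelihood form are all illustrative inventions, not the study's actual model — but it shows the core principle: each item's evidence is scaled by the reliability of its measurement before the evidence is summed.

```python
# Toy sketch of reliability-weighted integration in visual search.
# All numbers and the log-likelihood form are illustrative assumptions.
def combined_evidence(measurements, precisions, target=0.0, distractor=20.0):
    """Sum each item's evidence for 'target present', weighting each
    item by the precision (reliability) of its measurement, which in
    the experiment varied with the item's contrast."""
    total = 0.0
    for m, p in zip(measurements, precisions):
        # Per-item log-likelihood ratio (target vs distractor
        # orientation), scaled by that item's precision.
        total += 0.5 * p * ((m - distractor) ** 2 - (m - target) ** 2)
    return total

# A high-contrast (reliable) item near the target orientation dominates;
# a low-contrast item that resembles a distractor barely moves the decision.
evidence = combined_evidence([1.0, 18.0, 2.0], [4.0, 0.1, 2.0])
target_present = evidence > 0
```

The point of the weighting is that a blurry, low-contrast glimpse that happens to look like a distractor should not outvote two clear glimpses of near-target items — which is what an unweighted sum would allow.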

Another recent study into visual search has found that, when people are preparing themselves to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 to more specific imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards. Prospective mates; food; liquids.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli previously associated with reward continue to capture attention regardless of their relevance to the task at hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

Reference: 

Kim, J. G., Biederman, I., & Juan, C.-H. (2011). The Benefit of Object Interactions Arises in the Lateral Occipital Cortex Independent of Attentional Modulation from the Intraparietal Sulcus: A Transcranial Magnetic Stimulation Study. The Journal of Neuroscience, 31(22), 8320-8324.

Walther, D. B., Chai, B., Caddigan, E., Beck, D. M., & Fei-Fei, L. (2011). Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences, 108(23), 9661-9666.

Ma, W. J., Navalpakkam, V., Beck, J. M., van den Berg, R., & Pouget, A. (2011). Behavior and neural basis of near-optimal visual search. Nature Neuroscience, 14(6), 783-790.

Peelen, M. V., & Kastner, S. (2011). A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences, 108(29), 12125-12130.

Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367-10371.

Isola, P., Xiao, J., Oliva, A., & Torralba, A. (2011). What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.

Children with autism lack visual skills required for independence

February, 2011

Autism is popularly associated with intense awareness of systematic regularities, but a new study shows that the skill displayed in computer tasks does not carry over to real-world tasks.

Contrary to previous laboratory studies showing that children with autism often demonstrate outstanding visual search skills, new research indicates that in real-life situations, children with autism are unable to search effectively for objects. The study, involving 20 autistic children and 20 normally-developing children (aged 8-14), used a novel test room, with buttons on the floor that the children had to press to find a hidden target among multiple illuminated locations. Critically, 80% of these targets appeared on one side of the room.

Although people with autism are generally believed to be more systematic, with greater sensitivity to regularities within a system, such behavior was not observed. Compared to the other children, those with autism were slower to pick up on the regularities that would help them choose where to search. This slowness was not due to a lack of interest — all the children seemed to enjoy the game, and were keen to find the hidden targets.

The findings suggest that those with ASD have difficulties in applying the rules of probability to larger environments, particularly when they themselves are part of that environment.

Reference: 

Pellicano, E., Smith, A. D., Cristino, F., Hood, B. M., Briscoe, J., & Gilchrist, I. D. (2011). Children with autism are neither systematic nor optimal foragers. Proceedings of the National Academy of Sciences, 108(1), 421-426.


Memory better if timing is right

March, 2010

A new study suggests that our memory for visual scenes may depend not on how much attention we've paid to a scene or what it contains, but on when the scene is presented.

In the study, participants performed an attention-demanding letter-identification task while also viewing a rapid sequence of full-field photographs of urban and natural scenes. They were then tested on their memory of the scenes. It was found that, even though their attention had been focused on the target letter, only those scenes presented at the same time as a target letter (rather than a distractor letter) were reliably remembered. The results point to a brain mechanism that automatically encodes certain visual features into memory at behaviorally relevant points in time, regardless of the spatial focus of attention.

Reference: 

Lin, J. Y., Pype, A. D., Murray, S. O., & Boynton, G. M. (2010). Enhanced Memory for Scenes Presented at Behaviorally Relevant Points in Time. PLoS Biology, 8(3), e1000337. Full text available at doi:10.1371/journal.pbio.1000337
