context

Sleep helps process traumatic experiences

  • A finding that sleeping after watching a traumatic film reduced emotional distress and intrusive memories is intriguing in light of the theory that PTSD stems from a failure of contextual processing.

A laboratory study has found that sleeping after watching a traumatic video reduced subsequent emotional distress and intrusive memories of the event. The study involved 65 women, each shown a neutral and a traumatic video. As is typical, recurring memories of certain images haunted the participants for a few days afterward (these were recorded in detail in a diary). Some participants slept in the lab for a night after viewing the videos, while the other group remained awake.

Those who slept after the film had fewer and less distressing recurring emotional memories than those who were awake. This effect was particularly evident after several days.

One suggested reason for this benefit is that the memory consolidation processes that occur during sleep help contextualize the memories. This is interesting in view of the recent theory that PTSD is associated with a deficit in contextual processing.

However, I'd note that there is conflicting evidence about the effects of sleep on negative memories (for example, see http://www.memory-key.com/research/news/sleep-preserves-your-feelings-about-traumatic-events).

https://www.eurekalert.org/pub_releases/2016-12/uoz-shp121316.php


Is PTSD a failure of context processing?

  • A new theory suggests a single dysfunction, in the processing of context, could underlie the multiple symptoms and characteristics of PTSD.

An interesting new theory of PTSD suggests that the root of the disorder lies in problems with context processing.

Context processing allows people and animals to recognize that a particular stimulus may require different responses depending on the context in which it is encountered. So, for example, a lion in the zoo evokes a different response than one encountered in your backyard.

Context processing involves the hippocampus, and its connections to the prefrontal cortex and the amygdala. Research has shown that activity in these brain areas is disrupted in those with PTSD.

The idea that a disruption in this circuit can interfere with context processing can explain most of the symptoms and much of the biology of PTSD. Previous models have each focused on a single aspect of the disorder:

  • on abnormal fear learning, which is rooted in the amygdala
  • on exaggerated threat detection, which is rooted in a network involving the amygdala, the anterior cingulate cortex and insula
  • on executive function and emotion regulation, which is mainly rooted in the prefrontal cortex.

The researchers suggest that a deficit in context processing would lead PTSD patients to feel "unmoored" from the world around them, unable to shape their responses to fit their current contexts. Instead, their brains impose an "internalized context", one that always expects danger.

This type of deficit, arising from a combination of genes and life experiences, may create vulnerability to PTSD in the first place.

The researchers are now testing their model.

https://www.eurekalert.org/pub_releases/2016-10/uomh-wrg100716.php


Memory for Facebook posts better than memory for faces or books

February, 2013

Gossipy content and informal language may lie behind people's better recall of Facebook posts compared to memory for human faces or sentences from books.

Online social networking, such as Facebook, is hugely popular. A series of experiments has explored the intriguing question of whether our memories are particularly ‘tuned’ to remember the sort of information shared on such sites.

The first experiment involved 32 college students (27 female), who were asked to study either 100 randomly chosen Facebook posts or 100 sentences randomly chosen from books on Amazon. Each sentence was presented for 3 seconds. After the study period, the students were given a self-paced recognition test in which the 100 studied sentences were mixed with another 100 sentences from the same source. Participants responded to each sentence with a number from 1 to 20 expressing their confidence that they had (or had not) seen it before: ‘1’ indicated complete confidence that they hadn’t seen it, ‘20’ complete confidence that they had.
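For the quantitatively minded, here is a minimal sketch (in Python, with invented trial data; it is not the authors' scoring code) of how such confidence ratings reduce to the percent-correct figures reported below, on the simple assumption that ratings above the scale midpoint count as ‘seen before’ judgments.

```python
# A minimal illustration (invented data, not the study's code):
# score a 1-20 confidence recognition test by treating ratings
# above the scale midpoint as "seen before" judgments.

def score_recognition(trials, midpoint=10.5):
    """trials: list of (confidence, was_studied) pairs.
    A trial counts as correct when high confidence coincides with
    a studied item, or low confidence with an unstudied one."""
    correct = sum(
        1 for confidence, was_studied in trials
        if (confidence > midpoint) == was_studied
    )
    return correct / len(trials)

# Tiny example in place of a real 200-trial test
# (100 studied + 100 unstudied items):
example = [(18, True), (3, False), (15, True), (12, False)]
print(f"Proportion correct: {score_recognition(example):.2f}")  # 0.75
```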

Recognition of Facebook posts was significantly better than recognition of sentences from books (an average of 85% correct vs 76%). The ‘Facebook advantage’ remained even when only posts with more normal surface-level characteristics were analyzed (i.e., all posts containing irregularities of spelling and typography were removed).

In the next experiment, involving 16 students (11 female), Facebook posts (a new set) were compared with neutral faces. Again, memory for Facebook posts was substantially better than that for faces. This is quite remarkable, since humans have a particular expertise for faces and tend to score highly on recognition tests for them.

One advantage the Facebook posts might have is in eliciting social thinking. The researchers attempted to test this by comparing the learning achieved when people were asked to count the words of each sentence or post, against the learning achieved when they were asked to think of someone they knew (real or fictional) who could have composed such a sentence / post. This experiment involved 64 students (41 female).

The deeper encoding encouraged by the latter strategy did improve memory for the texts, but it did so equally for both kinds of text. The fact that it helped book sentences just as much as Facebook posts argues against the idea that the Facebook advantage rests on social elaboration (if the posts were already being spontaneously elaborated in this way, an explicit instruction to socially elaborate should have added little to them).

Another advantage the Facebook posts might have over book sentences is that they were generally complete in themselves, making sense in a way that randomly chosen sentences from books would not. Other possibilities have to do with the gossipy nature of Facebook posts, and the informal language used. To test these theories, 180 students (138 female) were shown text from two CNN Twitter feeds: Breaking News and Entertainment News. Texts included headlines, sentences, and comments.

Texts from Entertainment News were remembered significantly better than those from Breaking News (supporting the gossip advantage). Headlines were remembered significantly better than random sentences (supporting the completeness argument), but comments were remembered best of all (supporting the informality theory) — although the benefit of comments over headlines was much greater for Breaking News than Entertainment News (perhaps reflecting the effort the Entertainment News people put into making catchy headlines?).

It seems, then, that three factors contribute to the greater memorability of Facebook posts: the completeness of ideas, the gossipy content, and the casually generated language.

You’ll have noticed I made a special point of noting the gender imbalance in the participant pools. Given gender differences in language and social interaction, it’s a shame that the participants were so heavily skewed, and I would like this replicated with males before generalizing. However, the evidence for the advantage of more informal language is, at least, less likely to be skewed by gender.

Reference: 

Mickes, L., Darby, R. S., Hwe, V., Bajic, D., Warker, J. A., Harris, C. R., et al. (Submitted). Major memory for microblogs. Memory & Cognition, 1-9.


Each memory experience biases how you approach the next one

September, 2012

A new study provides evidence that our decision whether to encode information as new or to try to retrieve it from long-term memory is affected by how we treated the last piece of information we processed.

Our life-experiences contain a wealth of new and old information. The relative proportions of these change, of course, as we age. But how do we know whether we should be encoding new information or retrieving old information? It’s easy if the information is readily accessible, but what if it’s not? Bear in mind that (especially as we get older) most of the information and experiences we encounter share some similarity with information we already have.

This question is made even more meaningful when you consider that the same brain region, the hippocampus, is involved in both encoding and retrieval, and that these two processes are thought to depend on two quite opposite computations. While encoding is thought to rely on pattern separation (emphasizing the differences between similar inputs), retrieval is thought to depend on pattern completion (filling out a stored pattern from a partial match).

A recent study looked at what happens in the brain when people rapidly switch between encoding new objects and retrieving recently presented ones. Participants were shown 676 pictures of objects and asked to identify each one as being shown for the first time (‘new’), being repeated (‘old’), or as a modified version of something shown earlier (‘similar’). Recognizing the similar items as similar was the question of interest, as these items contain both old and new information and so the brain’s choice between encoding and retrieval is more difficult.

What they found was that participants were more likely to recognize similar items as similar (rather than old) if they had viewed a new item on the preceding trial. In other words, the experience of a new item primed them to notice novelty. Or to put it in another way: context biases the hippocampus toward either pattern completion or pattern separation.
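To make that analysis concrete, here is a small sketch (Python, with made-up trials; the study's actual analysis certainly differed in detail) of the key contingency: how often ‘similar’ items are correctly called similar, split by whether the preceding trial showed a new or an old item.

```python
# Illustrative only (fabricated trials, not the study's data):
# tally correct "similar" responses conditioned on the type of
# item shown on the immediately preceding trial.
from collections import defaultdict

# Each trial: (item_type, response); types are 'new', 'old', 'similar'.
trials = [('new', 'new'), ('similar', 'similar'),
          ('old', 'old'), ('similar', 'old'),
          ('new', 'new'), ('similar', 'similar')]

counts = defaultdict(lambda: [0, 0])  # preceding item type -> [correct, total]
for (prev_item, _), (item, resp) in zip(trials, trials[1:]):
    if item == 'similar':
        counts[prev_item][1] += 1
        counts[prev_item][0] += (resp == 'similar')

for prev_item, (correct, total) in counts.items():
    print(f"after a '{prev_item}' trial: {correct}/{total} called similar")
```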

This was supported by a further experiment, in which participants were shown both the object pictures, and also learned associations between faces and scenes. Critically, each scene was associated with two different faces. In the next learning phase, participants were taught a new scene association for one face from each pair. Each face-scene learning trial was preceded by an object recognition trial (new and old objects were shown and participants had to identify them as old or new) — critically, either a new or old object was consistently placed before a specific face-scene association. In the final test phase, participants were tested on the new face-scene associations they had just learned, as well as the indirect associations they had not been taught (that is, between the face of each pair that had not been presented during the preceding phase, and the scene associated with its partnered face).

What this found was that participants were more likely to correctly infer the indirect (untaught) associations when the related learning trials had been consistently preceded by old objects rather than new ones. Moreover, they made these judgments more quickly.

This was interpreted as indicating that the preceding experience affects how well related information is integrated during encoding.

What all this suggests is that the memory activities you’ve just engaged in bias your brain toward the same sort of activities. So whether you notice changes to a café or instead nostalgically recall a previous meal there may depend on whether you noticed anyone you knew as you walked down the street!

An interesting speculation by the researchers is that such a memory bias (which only lasts a very brief time) might be an adaptive mechanism, reflecting the usefulness of being more sensitive to changes in new environments and less sensitive to irregularities in familiar environments.


Testing

Older news items (pre-2010) brought over from the old website

The importance of retrieval cues

An imaging study has revealed that it is retrieval cues that trigger activity in the hippocampus, rather than, as often argued, the strength of the memory. The study involved participants learning unrelated word pairs (a process which included making up sentences with the words), then being asked whether various familiar words had been previously seen or not — the words being shown first on their own, and then with their paired cue word. Brain activity for words judged familiar on their own was compared with activity for the same items when shown with context cues. Increased hippocampal activity occurred only with cued recall. Moreover, the amount of activity was not associated with familiarity strength, and recollected items were associated with greater activity relative to highly familiar items.
Cohn, M., Moscovitch, M., Lahat, A., & McAndrews, M. P. (2009). Recollection versus strength as the primary determinant of hippocampal engagement at retrieval. Proceedings of the National Academy of Sciences, 106(52), 22451-22455.
http://www.eurekalert.org/pub_releases/2009-12/uot-dik120709.php

Making student self-testing an effective study tool

A series of four experiments with 150 college students using Swahili-English vocabulary words has revealed that repeated retrieval was a very effective learning strategy. However, when subjects were given control over their own learning, they did not attempt retrieval as early or as often as they should to promote the best learning. The findings are thought to reflect a powerful metacognitive illusion that occurs during self-regulated learning — namely, that easy retrieval tends to make students believe they have “learned” it before the material is properly mastered, leading to premature termination of the study practice.
Karpicke, J. D. (2009). Metacognitive control and strategy selection: Deciding to practice retrieval during learning. Journal of Experimental Psychology: General, 138(4), 469-486.
http://www.eurekalert.org/pub_releases/2009-12/pu-sse121009.php

Longer high-stakes tests may result in a sense of mental fatigue, but not in lower test scores

A study involving 239 freshman college students who took three different versions of the SAT Reasoning Test, of progressively longer lengths (3.5, 4.5, 5.5 hours), has revealed that although the students reported higher levels of mental fatigue with longer tests, performance was not affected. In fact, the average performance for both the standard and long tests was significantly higher than for the short test. Moreover, the fatigue experienced was less related to the length of the exam (and to the amount of sleep they’d had) than it was to personality traits. Those with higher levels of achievement motivation and competitiveness felt less fatigue, and those with higher levels of neuroticism and anxiety felt more.
Ackerman, P. L., & Kanfer, R. (2009). Test length and cognitive fatigue: An empirical examination of effects on performance and test-taker reactions. Journal of Experimental Psychology: Applied, 15(2), 163-181.
Full text available at http://www.apa.org/journals/releases/xap152-ackerman-kanfer.pdf
http://www.eurekalert.org/pub_releases/2009-06/apa-lht052809.php

Why we don't always learn from our mistakes

A study of the tip-of-the-tongue (TOT) phenomenon suggests that most errors are repeated because the very act of making a mistake, despite receiving correction, constitutes the learning of that mistake. The study asked students to retrieve words after being given a definition. If that produced a TOT state, they were randomly assigned to spend either 10 or 30 seconds trying to retrieve the answer before finally being shown it. When tested two days later, they tended to reach a TOT state on the same words as before, and were especially likely to do so if they had spent the longer time trying to retrieve them. The longer time in the error state appears to reinforce the incorrect pattern of brain activation that caused the error.
Warriner, A. B., & Humphreys, K. R. (2008). Learning to fail: Reoccurring tip-of-the-tongue states. Quarterly Journal of Experimental Psychology, 61(4), 535-542.
http://www.physorg.com/news126265455.html

Testing strengthens recall whether something's on the test or not

The simple act of taking a test appears to help you remember everything you learned, even the parts that aren't tested. In a series of three experiments, researchers found that undergraduates who were tested after being given 25 minutes to study a long article about the toucan recalled more a day later than those who were instead given further information about the toucan in an extra study session, or those who had neither experience. In the second experiment, students were given two articles to read, one of which was tested and one of which was not. Again, the tested one was remembered significantly better a day later. The third experiment revealed that later recall was better the more time the student had spent answering questions in the first test. This relation was especially pronounced for students with lower performance on the test, and those who were encouraged to guess did significantly better on the second test than students who were discouraged from guessing.
Chan, J. C. K., McDermott, K. B., & Roediger, H. L., III (2006). Retrieval-induced facilitation: Initially nontested material can benefit from prior testing of related material. Journal of Experimental Psychology: General, 135(4), 553-571.
http://www.eurekalert.org/pub_releases/2006-11/apa-tsr110606.php

Repeated test-taking better for retention than repeated studying

A study indicates that testing can be a powerful means for improving learning, not just assessing it. The study compared students who studied a prose passage for about five minutes and then took either one or three immediate free-recall tests, receiving no feedback on the accuracy of answers, with students who received no tests, but were allowed another five minutes to restudy the passage each time their counterparts were involved in a testing session. While the study-only group performed better on the test after the last session, they performed worse when tested 2 days later, and dramatically worse after one week. Note that the study-only group had read the passage about 14 times in total, while the repeated testing group had read the passage only 3.4 times in its one-and-only study session. It also appears that students who rely on repeated study alone often come away with a false sense of confidence about their mastery of the material.
Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.
http://www.eurekalert.org/pub_releases/2006-03/wuis-rtb030606.php


Negative emotion can enhance memory for tested information

September, 2011

Images designed to arouse strong negative emotion can improve your memory for information you’re learning, if presented immediately after you’ve been tested on it.

In a recent study, 40 undergraduate students learned ten lists of ten Swahili-English word pairs, with a test after each list. On these tests, each correct answer was followed either by an image (a neutral one, or one designed to arouse negative emotions) or by a blank screen. They then did a one-minute multiplication test before moving on to the next list.

On the final test of all 100 Swahili-English pairs, participants did best on items that had been followed by the negative pictures.

In a follow-up experiment, students were shown the images two seconds after successful retrieval. The results were the same.

In the final experiment, the section tests were replaced by a restudying period, where each presentation of a pair was followed by an image or blank screen. The effect did not occur, demonstrating that the effect depends on retrieval.

The study focused on negative emotion because earlier research has found no such memory benefit for positive images (including images designed to be sexually arousing).

The findings emphasize the importance of the immediate period after retrieval, suggesting that this is a fruitful time for manipulations that enhance or impair memory. This is consistent with the idea of reconsolidation — that when information is retrieved from memory, it is in a labile state, able to be changed. Thus, by presenting a negative image when the retrieved memory is still in that state, the memory absorbs some of that new context.

Reference: 

Finn, B., & Roediger, H. L. (2011). Enhancing retention through reconsolidation. Psychological Science, 22(6), 781-786.


Visual perception - a round-up of recent news

July, 2011

Memory begins with perception. Here's a round-up of recent research into visual perception.

Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn, while volunteers viewed brief flashes of object pairs (half of which were shown interacting with each other) and decided whether these glimpsed objects matched the presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical regardless of whether participants were looking at a color photo or a simple line drawing. Indeed, researchers could tell, with a fair amount of success, what category of scene the participant was looking at, just by looking at the pattern of brain activity in the ventral visual cortex, whichever format the picture took. When the decoding made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very, very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — that of the reliability of the sensory information. Optimally, an observer would take into consideration the varying reliability of the items, giving the information different weights as a result of that perceived reliability. That weighted information would then be combined according to a specific integration rule. That had been calculated as the optimal process, and the performance of the participants matched that expectation.

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations, while taking into account the reliability of the different pieces of information, and we do this through the integration of information coming from different groups of neurons, each group of which is responding to different bits of information.
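To give a concrete sense of what ‘weighting by reliability’ means computationally, here is a generic sketch of precision-weighted integration, the standard optimal combination rule for independent Gaussian noise. This illustrates the general principle only; it is not the paper's exact model.

```python
# Generic illustration (not the paper's model): combine noisy
# orientation measurements from several display locations using
# inverse-variance (precision) weights, so that unreliable,
# low-contrast items count for less.
import numpy as np

rng = np.random.default_rng(0)
true_orientation = 5.0  # degrees: the target's actual orientation

# Per-location sensory noise (std dev); low contrast -> high noise.
noise_sd = np.array([2.0, 8.0, 4.0, 16.0])
measurements = true_orientation + rng.normal(0.0, noise_sd)

# Precision weighting: the optimal linear combination when the
# noise at each location is independent and Gaussian.
weights = 1.0 / noise_sd**2
weights /= weights.sum()
estimate = np.dot(weights, measurements)

print("measurements:", np.round(measurements, 2))
print("weights:     ", np.round(weights, 3))
print(f"combined estimate: {estimate:.2f} (true value {true_orientation})")
```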

Another recent study into visual search has found that, when people are preparing themselves to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 to more specific imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards. Prospective mates; food; liquids.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli that have previously been associated with reward continue to capture attention regardless of their relevance to the task at hand. This may help in the development of more effective treatments for drug addiction, obesity and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
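The general shape of such a model is easy to sketch. The snippet below uses fabricated features and scores, with a generic ridge regressor standing in for whatever the researchers actually used: fit a mapping from image features to human-derived memorability scores, then check prediction on held-out images.

```python
# A hedged sketch of the general approach (fabricated data; the
# features and regressor are illustrative stand-ins, not the
# study's pipeline): predict per-image memorability from features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_images, n_features = 500, 12  # e.g. color stats, edge-density bins
X = rng.normal(size=(n_images, n_features))
# Fake memorability scores loosely driven by a few features, plus noise.
y = 0.6 * X[:, 0] - 0.4 * X[:, 3] + rng.normal(0.0, 0.5, n_images)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)

# R^2 on held-out images: can memorability be predicted for
# pictures the model hasn't "seen" before?
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
```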

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

Reference: 

Kim, J. G., Biederman, I., & Juan, C.-H. (2011). The benefit of object interactions arises in the lateral occipital cortex independent of attentional modulation from the intraparietal sulcus: A transcranial magnetic stimulation study. The Journal of Neuroscience, 31(22), 8320-8324.

Walther, D. B., Chai, B., Caddigan, E., Beck, D. M., & Fei-Fei, L. (2011). Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences, 108(23), 9661-9666.

Ma, W. J., Navalpakkam, V., Beck, J. M., van den Berg, R., & Pouget, A. (2011). Behavior and neural basis of near-optimal visual search. Nature Neuroscience, 14(6), 783-790.

Peelen, M. V., & Kastner, S. (2011). A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences, 108(29), 12125-12130.

Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367-10371.

Isola, P., Xiao, J., Oliva, A., & Torralba, A. (2011). What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.


More evidence that older adults become less able to ignore distraction

December, 2010

A new study adds to the evidence that our ability to focus on one thing and ignore irrelevant information gets worse with age, and that this may be a crucial factor in age-related cognitive impairment.

A study involving young (average age 22) and older adults (average age 77) showed participants pictures of overlapping faces and places (houses and buildings) and asked them to identify the gender of the person. While the young adults showed activity in the brain region for processing faces (fusiform face area) but not in the brain region for processing places (parahippocampal place area), both regions were active in the older adults. Additionally, on a surprise memory test 10 minutes later, older adults who showed greater activation in the place area were more likely to recognize what face was originally paired with what house.

These findings confirm earlier research showing that older adults become less capable of ignoring irrelevant information, and show that this distracting information doesn’t merely interfere with what you’re trying to attend to, but is encoded in memory along with that information.


Face-blindness an example of inability to generalize

October, 2010

It seems that prosopagnosia can be, along with perfect pitch and eidetic memory, an example of what happens when your brain can’t abstract the core concept.

‘Face-blindness’ — prosopagnosia — is a condition I find fascinating, perhaps because I myself have a touch of it (it’s now recognized that this condition represents the end of a continuum rather than being an either/or proposition). The intriguing thing about this inability to recognize faces is that, in its extreme form, it can nevertheless exist side-by-side with quite normal recognition of other objects.

Prosopagnosia that is not the result of brain damage often runs in families, and a study of three family members with this condition has revealed that in some cases at least, the inability to remember faces has to do with failing to form a mental representation that abstracts the essence of the face, sans context. That is, despite being fully able to read facial expressions, attractiveness and gender from the face (indeed one of the family members is an artist who has no trouble portraying fully detailed faces), they couldn’t cope with changes in lighting conditions and viewing angles.

I’m reminded of the phenomenon of perfect pitch, which is characterized by an inability to generalize across acoustically similar tones, so that an A in a different key is a completely different note. Interestingly, like prosopagnosia, perfect pitch is now thought to be more common than previously believed (recognition of it is of course limited by the fact that some musical expertise is generally needed to reveal it). This inability to abstract or generalize is also a phenomenon of eidetic memory, and I have spoken before of the perils of this.

(Note: A fascinating account of what it is like to be face-blind, from a person with the condition, can be found at: http://www.choisser.com/faceblind/)


People learn better when brain activity is consistent

October, 2010

A new way of analyzing brain activity has revealed that memories are stronger when the pattern of brain activity is more closely matched on each repetition.

An intriguing new study has found that people are more likely to remember specific information if the pattern of activity in their brain is similar each time they study that information. The findings are said to challenge the long-held belief that people retain information more effectively when they study it several times under different contexts, thus giving their brains multiple cues to remember it. However, although I believe this finding adds to our understanding of how to study effectively, I don’t think it challenges the multiple-context evidence.

The finding was possible because of a new approach to studying brain activity, which was used in three experiments involving students at Beijing Normal University. In the first, 24 participants were shown 120 faces, each one presented four times, at variable intervals between the repetitions. One hour later, they were tested on their recognition (using a set of 240 faces) and on how confident they were in each decision. Subsequent voxel-by-voxel analysis of 20 brain regions revealed that, in nine of those regions, the similarity of the activity patterns across the repetitions of a specific face was significantly associated with later recognition of that face.

In the second experiment, 22 participants carried out a semantic judgment task on 180 familiar words (deciding whether they were concrete or abstract). Each word was repeated three times, again at variable intervals. The participants were tested on their recall of the words six hours later, and then tested for recognition. Fifteen brain regions showed a higher level of pattern similarity across repetitions for subsequently recalled items than for forgotten items.

In the third experiment, 22 participants performed a different semantic judgment task (living vs non-living) on 60 words. To prevent further encoding, they were also required to perform a visual orientation judgment task for 8 seconds after each semantic judgment. They were given a recall test 30 minutes after the session. Seven of the brain regions showed a significantly higher level of pattern similarity for recalled items.
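For those curious about the method, here is a minimal sketch of the pattern-similarity measure (Python, synthetic data; the details of the real analysis are my assumptions): for each item, correlate the voxel activity patterns across its repetitions, then compare the average similarity for later-remembered versus later-forgotten items.

```python
# Minimal sketch (synthetic data, assumed analysis details): compute
# each item's average pattern similarity across repetitions, then
# compare remembered vs forgotten items.
import numpy as np

rng = np.random.default_rng(2)
n_items, n_reps, n_voxels = 120, 4, 200

# Synthetic voxel patterns: remembered items are given a more stable
# item-specific pattern across repetitions than forgotten ones.
remembered = rng.random(n_items) < 0.5
stability = np.where(remembered, 1.0, 0.4)
base = rng.normal(size=(n_items, 1, n_voxels))
patterns = (stability[:, None, None] * base
            + rng.normal(size=(n_items, n_reps, n_voxels)))

def mean_pairwise_r(reps):
    """Average Pearson correlation over all pairs of repetitions."""
    r = np.corrcoef(reps)  # (n_reps x n_reps) correlation matrix
    return r[np.triu_indices_from(r, k=1)].mean()

similarity = np.array([mean_pairwise_r(patterns[i]) for i in range(n_items)])
print(f"remembered items, mean similarity: {similarity[remembered].mean():.3f}")
print(f"forgotten items,  mean similarity: {similarity[~remembered].mean():.3f}")
```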

It's interesting to observe how differences in the pattern of activity occurred when studying the same information only minutes apart — a difference that is presumed to be triggered by context (anything from the previous item to environmental stimuli or passing thoughts). Why do I suggest that this finding, which emphasizes the importance of same-context, doesn’t challenge the evidence for multiple-context? I think it’s an issue of scope.

The finding shows us two important things: that context changes constantly, and that a repetition strengthens memory more the more closely its context matches that of the earlier encoding. Nevertheless, this study doesn’t bear on the question of long-term recall. The argument has never been that multiple contexts make a memory trace stronger; it has been that they provide more paths to recall — something that becomes of increasing importance the longer the time between encoding and recall.

