Encoding


We know that the neurotransmitter dopamine is involved in making strong memories. Now a mouse study helps us get more specific — and suggests how we can help ourselves learn.

The study, involving 120 mice, found that mice tasked with remembering where food had been hidden did better if they had been given a novel experience (exploring an unfamiliar floor surface) 30 minutes after being trained to remember the food location.

This memory improvement also occurred when the novel experience was replaced by the selective activation of dopamine-carrying neurons in the locus coeruleus that go to the hippocampus. The locus coeruleus is located in the brain stem and involved in several functions that affect emotion, anxiety levels, sleep patterns, and memory. The dopamine-carrying neurons in the locus coeruleus appear to be especially sensitive to environmental novelty.

In other words, if we’re given attention-grabbing experiences that trigger these LC neurons carrying dopamine to the hippocampus at around the time of learning, our memories will be stronger.

We already know that emotion helps memory, but what this new study tells us is that these dopamine-triggering experiences don't have to be dramatic: witness the mice, who were simply given a new environment to explore. It's suggested that it could be as simple as playing a new video game during a quick break while studying for an exam, or playing tennis right after trying to memorize a big speech.

Remember that we’re designed to respond to novelty, to pay it more attention — and, it seems, that attention is extended to more mundane events that occur closely in time.

Emotionally positive situations boost memory for similar future events

In a similar vein, a human study has found that the benefits of reward extend forward in time.

In the study, volunteers were shown images from two categories (objects and animals), and were financially rewarded for one of these categories. As expected, they remembered images associated with a reward better. In a second session, however, they were shown new images of animals and objects without any reward. Participants still remembered the previously positively-associated category better.

Now, this doesn’t seem in any way surprising, but the interesting thing is that this benefit wasn’t seen immediately, but only after 24 hours — that is, after participants had slept and consolidated the learning.

Previous research has shown similar results when semantically related information has been paired with negative, that is, aversive stimuli.

https://www.eurekalert.org/pub_releases/2016-09/usmc-rim090716.php

http://www.eurekalert.org/pub_releases/2016-06/ibri-eps061516.php

Four studies involving a total of more than 300 younger adults (aged 20-24) have looked at how information is processed on different forms of media. They found that reading on digital platforms such as tablets and laptops may make you more inclined to focus on concrete details rather than interpret the information more abstractly.

As much as possible, the material was presented on the different media in identical format.

In the first study, 76 students were randomly assigned to complete the Behavior Identification Form on either an iPad or a print-out. The Form assesses an individual's current preference for concrete or abstract thinking. Respondents have to choose one of two descriptions for a particular behavior — e.g., for “making a list”, the choice of description is between “getting organized” or “writing things down”. The form presents 25 items.

There was a marked difference between those filling out the form on the iPad vs on a physical print-out, with non-digital users showing a significantly higher preference for abstract descriptions than digital users (mean of 18.56 vs 13.75).
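To make those means concrete: a score on a form like this is simply the count of abstract choices out of 25 items, so 18.56 vs 13.75 are average counts. A minimal sketch of the scoring (only the "making a list" item comes from the article; the second item and everything else here is invented for illustration):

```python
# Construal-level scoring sketch: each item pairs an abstract and a
# concrete description of a behaviour; the score is the number of
# abstract descriptions chosen (0-25 on the real 25-item form).
# Only the "making a list" item is from the article; the rest is hypothetical.

ITEMS = [
    # (behaviour, abstract description, concrete description)
    ("making a list", "getting organized", "writing things down"),
    ("locking a door", "securing the house", "turning a key"),  # hypothetical
]

def construal_score(choices):
    """choices: 'abstract' or 'concrete', one entry per item answered."""
    return sum(1 for choice in choices if choice == "abstract")

print(construal_score(["abstract", "concrete"]))  # -> 1
```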

In the other three studies, the digital format was always a PDF on a laptop. In the first of these, 81 students read a short story by David Sedaris, then answered 24 multiple-choice questions on it, half abstract and half concrete. Digital readers scored significantly lower on the abstract questions (48% vs 66%), and higher on the concrete questions (73% vs 58%).

In the next study, 60 students studied a table of information about four fictitious Japanese car models for two minutes, before being required to select the superior model. While one model was objectively superior in terms of its attribute ratings, the amount of detail means (as previous research has shown) that those employing top-down “gist” processing do better than those using a bottom-up, detail-oriented approach. On this problem, 66% of the non-digital readers correctly chose the superior model, compared to 43% of the digital readers.
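The "objectively superior" model in a task like this is simply the one whose attribute ratings win overall. A toy version (model names, attributes, and ratings are all invented, not the study's actual materials):

```python
# Pick the objectively superior option: the highest total attribute rating.
# Models and ratings below are invented for illustration.

ratings = {
    "Model A": [3, 2, 4, 3],  # e.g. safety, economy, comfort, handling
    "Model B": [4, 4, 4, 3],  # best total (15)
    "Model C": [2, 3, 3, 4],
    "Model D": [3, 3, 2, 2],
}

best = max(ratings, key=lambda model: sum(ratings[model]))
print(best)  # -> Model B
```

A "gist" reader effectively approximates this totalling at a glance; a detail-focused reader gets lost comparing individual cells.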

In the final study, 119 students performed the same task as in the preceding study, but all viewed the table on a laptop. Before viewing the table, however, some were assigned to one of two priming activities: a high-level task aimed at activating more abstract thinking (thinking about why they might pursue a health goal), or a low-level task aimed at activating more concrete thinking (thinking about how to pursue the same goal).

Being primed to think more abstractly did seem to help these digital users, with 48% of this group correctly answering the car judgment problem, compared to only 25% of those given the concrete priming activity, and 30% of the control group.

I note that the performance of the control group was substantially below that of the digital readers in the previous study, although there was no apparent change in methodology. This was not noted or explained in the paper, so I don't know the reason for it; it does lead me not to put too much weight on the idea that priming can help.

However, the findings do support the view that reading on digital devices does encourage a more concrete style of thinking, reinforcing the idea that we are inclined to process information more shallowly when we read it from a screen.

Of course, this is, as the researchers point out, not an indictment. Sometimes, this is the best way to approach certain tasks. But what it does suggest is that we need to consider what sort of processing is desirable, and modify our strategy accordingly. For example, you may find it helpful to print out material that requires a high level of abstract thinking, particularly if your degree of expertise in the subject means that it carries a high cognitive load.

http://www.eurekalert.org/pub_releases/2016-05/dc-dmm050516.php

Kaufman, G., & Flanagan, M. (2016). High-Low Split: Divergent Cognitive Construal Levels Triggered by Digital and Non-digital Platforms. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1–5. doi:10.1145/2858036.2858550 http://dl.acm.org/citation.cfm?doid=2858036.2858550

A small study involving 50 younger adults (18-35; average age 24) has found that those with a higher BMI performed significantly worse on a computerised memory test called the “Treasure Hunt Task”.

The task involved moving food items around complex scenes (e.g., a desert with palm trees), hiding them in various locations, and indicating afterward where and when they had hidden them. The test was designed to disentangle object, location, and temporal order memory, and the ability to integrate those separate bits of information.

Those with higher BMI were poorer at all aspects of this task. There was no difference, however, in reaction times, or time taken at encoding. In other words, they weren't slower, or less careful when they were learning. Analysis of the errors made indicated that the problem was not with spatial memory, but rather with the binding of the various elements into one coherent memory.

The results could suggest that overweight people are less able to vividly relive details of past events. This in turn might make it harder for them to keep track of what they'd eaten, perhaps making overeating more likely.

The 50 participants included 27 with BMI below 25, 24 with BMI 25-30 (overweight), and 8 with BMI over 30 (obese). 72% were female. None were diagnosed diabetics. However, the researchers didn't take into account other health conditions that often co-occur with obesity, such as hypertension and sleep apnea.

This is a preliminary study only, and further research is needed to validate its findings. However, it's significant in that it adds to growing evidence that the cognitive impairments that accompany obesity are present early in adult life and are not driven by diabetes.

The finding is also consistent with previous research linking obesity with dysfunction of the hippocampus and the frontal lobe.

http://www.eurekalert.org/pub_releases/2016-02/uoc-bol022616.php

https://www.theguardian.com/science/neurophilosophy/2016/mar/03/obesity-linked-to-memory-deficits

[4183] Cheke LG, Simons JS, Clayton NS. Higher body mass index is associated with episodic memory deficits in young adults. The Quarterly Journal of Experimental Psychology [Internet]. 2015 :1 - 12. Available from: http://dx.doi.org/10.1080/17470218.2015.1099163

Can you help protect yourself from the memory of traumatic events? A new study suggests that, by concentrating on concrete details as you live through the event, you can reduce the number of intrusive memories later experienced.

The study, aimed particularly at those who deliberately expose themselves to the risk of PTSD (e.g., emergency workers, military personnel, journalists in conflict zones), involved 50 volunteers who rated their mood before watching several films with traumatic scenes. After the first film, they rated their feelings. For the next four films, half the participants were asked to consider abstract questions, such as why such situations happened. The other half were asked to consider concrete questions, such as what they could see and hear and what needed to be done from that point. Afterward, they gave another rating on their mood. Finally, they were asked to watch a final film in the same way as they had practiced, rating feelings of distress and horror as they had for the first film.

The volunteers were then given a diary to record intrusive memories of anything they had seen in the films for the next week.

Both groups, unsurprisingly, saw their mood decline after the films, but those who had been practicing concrete thinking were less affected, and also experienced less intense feelings of distress and horror when watching the final film. Abstract thinkers experienced nearly twice as many intrusive memories in the following week.

The study follows previous findings that emergency workers who adopted an abstract processing approach showed poorer coping, and that those who processed negative events using abstract thinking experienced a longer period of low mood, compared to those using concrete thinking.

Further study is of course needed to confirm this finding in real-life situations, but it does suggest a strategy that people who regularly experience trauma could try. It is particularly intriguing because, on the face of it, it would seem like quite the wrong approach. Distancing yourself from the trauma you're experiencing, trying to see it as something less real, seems the more obvious coping strategy. This study suggests it is exactly the wrong thing to do.

It also seems likely that this tendency to use concrete or abstract processing may reflect a more general trait. Self-reported proneness to intrusive memories in everyday life was significantly correlated with intrusive memories of the films. Perhaps we should all think about the way we view the world, and those of us who tend to take a more abstract approach should try paying more attention to concrete details. This is, after all, something I've been recommending in the context of fighting sensory impairment and age-related cognitive decline!

Abstract thinking certainly has its place, but as I've said before, we need flexibility. Effective cognitive management is about tailoring your style of thinking to the task's demands.

http://www.eurekalert.org/pub_releases/2016-05/uoo-tdc050516.php

A study involving 66 healthy young adults (average age 24) has revealed that different individuals have distinct brain connectivity patterns that are associated with different ways of experiencing and remembering the past.

The participants completed an online questionnaire on how well they remember autobiographical events and facts, then had their brains scanned. The scans revealed that those with richly detailed autobiographical memories had higher medial temporal lobe connectivity to regions at the back of the brain involved in visual perception, whereas those tending to recall the past in a factual manner showed higher medial temporal lobe connectivity to prefrontal regions involved in organization and reasoning.

The finding supports the idea that those with superior autobiographical memory have a greater ability or tendency to reinstate rich images and perceptual details, and that this appears to be a stable personality trait.

The finding also raises interesting questions about age-related cognitive decline. Many people first recognize cognitive decline in their increasing difficulty retrieving the details of events. But this may be something that is far more obvious and significant to people who are used to retrieving richly-detailed memories. Those who rely on a factual approach may be less susceptible.

http://www.eurekalert.org/pub_releases/2015-12/bcfg-wiy121015.php

Full text available at http://www.sciencedirect.com/science/article/pii/S0010945215003834

The question of the brain's capacity usually brings up the remark that the human brain contains about 100 billion neurons. If each one has, say, 1,000 or more connections to other neurons, this produces some 100 trillion connections in which our memory can be held. These connections occur at synapses, which change in strength and size when activated, and these changes are a critical part of the memory code. In fact, synaptic strength is analogous to the 1s and 0s that computers use to encode information.

But here's the thing: unlike the binary code of computers, synapses have more than two sizes available to them. With the not-very-precise tools previously available, researchers had distinguished three sizes: small, medium, and large. They had also calculated that the difference between the smallest and largest was a factor of 60.

Here is where the new work comes in: new techniques have enabled researchers to see that synapses have far more options open to them. Synapses can, it seems, differ in size by as little as 8%, allowing for some 26 distinguishable sizes, which corresponds to storing about 4.7 bits of information at each synapse, as opposed to one or two.
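The arithmetic behind that figure: information per synapse scales as log2 of the number of distinguishable sizes, so 26 sizes give log2(26) ≈ 4.7 bits, versus 1 bit for a binary synapse.

```python
import math

# Bits of information per synapse = log2(number of distinguishable sizes).
# 2 sizes -> 1 bit; the old 3-size estimate -> ~1.6 bits; 26 sizes -> ~4.7 bits.
for n_sizes in (2, 3, 26):
    print(f"{n_sizes} sizes -> {math.log2(n_sizes):.1f} bits")
```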

Despite the precision this 8% implies, hippocampal synapses are notoriously unreliable, with signals typically activating the next neuron only 10-20% of the time. But this seeming unreliability is a feature, not a bug. It means a single spike isn't going to do the job; what's needed is a stable change in synaptic strength, which comes from repeated and averaged inputs. Synapses are constantly adjusting, averaging out their success and failure rates over time.

The researchers calculate that, for the smallest synapses, about 1,500 signaling events (about 20 minutes' worth) are needed to cause a change in size/strength, while for the largest synapses, only a couple of hundred events (1 to 2 minutes' worth) are needed. In other words, every 2 to 20 minutes, your synapses are going up or down to the next size, in response to the signals they're receiving.

Based on this new information, the new estimate is that the brain can hold at least a petabyte of information, about as much as the World Wide Web currently holds. This is ten times more than previously estimated.

At the moment, only hippocampal neurons have been investigated. More work is needed to determine whether the same is true across the brain.

In the meantime, the work has given us a better notion of how memories are encoded in the brain, increased the potential capacity of the human brain, and offers a new way of thinking about information networks that may enable engineers to build better, more energy-efficient, computers.

http://www.eurekalert.org/pub_releases/2016-01/si-mco012016.php

http://www.scientificamerican.com/article/new-estimate-boosts-the-human-brain-s-memory-capacity-10-fold/

Full text at http://elifesciences.org/content/4/e10778v2

We talk about memory for ‘events’, but how does the brain decide what an event is? How does it decide what is part of an event and what isn’t? A new study suggests that our brain uses categories it creates based on temporal relationships between people, objects, and actions — i.e., items that tend to—or tend not to—pop up near one another at specific times.

This explanation is much more in line with the way semantic memory is organized, but challenges the dominant theory that says our brain draws a line between the end of one event and the start of another when things take an unexpected turn.

“Everyone agrees that ‘having a meeting’ or ‘chopping vegetables’ is a coherent chunk of temporal structure, but it’s actually not so obvious why that is if you’ve never had a meeting or chopped vegetables before. You have to have experience with the shared temporal structure of the components of the events in order for the event to hold together in your mind.”

In the study, participants were shown sequences of abstract symbols and patterns which, unbeknownst to the participants, were grouped into three “communities” of five symbols each, with shapes in the same community tending to appear near one another in the sequence.

After watching these sequences for roughly half an hour, participants were asked to segment the sequences into events in a way that felt natural to them. They tended to break the sequences into events that coincided with the communities the researchers had prearranged. Images in the same community also produced similar activity in neuron groups at the border of the brain’s frontal and temporal lobes, a region involved in processing meaning.

All of which is to say that event memory seems to be less different from semantic memory than previously thought. Perhaps this is true of other memory domains too?

http://www.futurity.org/science-technology/how-your-brain-chunks-%E2%80%98moments%E2%80%99-into-%E2%80%98events%E2%80%99/

[3383] Schapiro AC, Rogers TT, Cordova NI, Turk-Browne NB, Botvinick MM. Neural representations of events arise from temporal community structure. Nature Neuroscience [Internet]. 2013 ;16(4):486 - 492. Available from: http://www.nature.com/neuro/journal/v16/n4/abs/nn.3331.html

We know sleep helps consolidate memories. Now a new study sheds light on how your sleeping brain decides what’s worth keeping. The study found that when the information that makes up a memory has a high value—associated with, for example, making more money—the memory is more likely to be rehearsed and consolidated during sleep.

The study involved 60 young adults who learned the unique locations of 72 objects on a screen while hearing characteristic object sounds. Each object was assigned a value indicating the reward that could be earned if it was remembered later. Recall was tested 45 minutes later, followed by a 90-minute break, during which participants either slept or remained awake. In the sleep condition, low-intensity white noise was played to mask any external sounds. In one condition, 18 of the sound cues associated with low-value objects were also repeatedly presented during the sleep period. In the wake condition, participants either watched a movie or performed a difficult working memory task (during which the sound cues were similarly sometimes presented in the background).

For all groups, at the first memory test, recall accuracy was significantly lower for low-value items compared to high-value (there was not, unsurprisingly, any difference between the groups). But let’s get to the important results. After sleep, in the absence of sound reminders, accuracy declined significantly more for low-value objects than for high-value objects. However, when sound cues had been played during sleep (although participants had no awareness of them), low-value objects were not differentially disadvantaged.

Interestingly, the sound reminders benefited not only those low-value objects which were cued, but all the low-value objects. In the wake condition, however, when sound cues had been softly played in the background, only those objects which had been cued benefited from the reminders.

Also interestingly, two participants who heard the cues during stage 2 sleep rather than slow-wave sleep received the least benefit.

What all this suggests is that covert reactivation may be a major factor in determining what gets chosen for consolidation, and wake and sleep reactivation might play distinct roles in this process: the former helping to strengthen individual, salient memories, and the latter strengthening, and also linking together, categorically related memories.

The findings provide more weight to the idea that I have propounded before — that it’s worth consciously reviewing the day's memories that you want to keep, just before going to sleep.

http://www.futurity.org/science-technology/sleeping-on-it-helps-memories-stick/

[3381] Oudiette D, Antony JW, Creery JD, Paller KA. The Role of Memory Reactivation during Wakefulness and Sleep in Determining Which Memories Endure. The Journal of Neuroscience [Internet]. 2013 ;33(15):6672 - 6678. Available from: http://www.jneurosci.org/content/33/15/6672

A new study has found that errors in perceptual decisions occurred only when there was confused sensory input, not because of any ‘noise’ or randomness in the cognitive processing. The finding, if replicated across broader contexts, will change some of our fundamental assumptions about how the brain works.

The study unusually involved both humans and rats — four young adults and 19 rats — who listened to streams of randomly timed clicks coming into both the left ear and the right ear. After listening to a stream, the subjects had to choose the side from which more clicks originated.

The errors made, by both humans and rats, occurred only when two clicks overlapped. In other words, and against previous assumptions, the errors did not arise from any ‘noise’ in the brain's processing, but only from noise in the sensory input.
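A toy simulation can illustrate the logic (all parameters here are invented; this is not the paper's actual model): give an observer perfect counting and memory, but let near-simultaneous left/right clicks fuse into a single ambiguous click. Errors then arise only from that sensory confusion.

```python
import random

random.seed(42)

def run_trial(n_left, n_right, duration=1.0, fuse_window=0.02):
    """A noiseless counter: the only 'noise' is sensory -- left and right
    clicks closer together than fuse_window merge and count for neither side."""
    left = [random.uniform(0, duration) for _ in range(n_left)]
    right = [random.uniform(0, duration) for _ in range(n_right)]
    counted_left = sum(all(abs(t - s) > fuse_window for s in right) for t in left)
    counted_right = sum(all(abs(t - s) > fuse_window for s in left) for t in right)
    return "left" if counted_left > counted_right else "right"

# With fuse_window=0 the observer is perfect; with fusion, occasional errors.
trials = 2000
errors = sum(run_trial(12, 10) != "left" for _ in range(trials))
print(f"error rate with click fusion: {errors / trials:.1%}")
```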

The researchers supposedly ruled out alternative sources of confusion, such as “noise associated with holding the stimulus in mind, or memory noise, and noise associated with a bias toward one alternative or the other.”

However, before concluding that the noise which is the major source of variability and errors in more conceptual decision-making likewise stems only from noise in the incoming input (in this case external information), I would like to see the research replicated in a broader range of scenarios. Nevertheless, it’s an intriguing finding, and if indeed, as the researchers say, “the internal mental process was perfectly noiseless. All of the imperfections came from noise in the sensory processes”, then the ramifications are quite extensive.

The findings do add weight to recent evidence that a significant cause of age-related cognitive decline is sensory loss.

http://www.futurity.org/science-technology/dont-blame-your-brain-for-that-bad-decision/

[3376] Brunton BW, Botvinick MM, Brody CD. Rats and Humans Can Optimally Accumulate Evidence for Decision-Making. Science [Internet]. 2013 ;340(6128):95 - 98. Available from: http://www.sciencemag.org/content/340/6128/95

Recent research has suggested that sleep problems might be a risk factor in developing Alzheimer’s, and in mild cognitive impairment. A new study adds to this gathering evidence by connecting reduced slow-wave sleep in older adults to brain atrophy and poorer learning.

The study involved 18 healthy young adults (mostly in their 20s) and 15 healthy older adults (mostly in their 70s). Participants learned 120 pairs of words and nonsense words, and were tested for recognition before going to bed. Their brain activity was recorded while they slept. Brain activity was also measured in the morning, when they were tested again on the word pairs.

As has been found previously, older adults showed markedly less slow-wave activity (both over the whole brain and specifically in the prefrontal cortex) than the younger adults. Again, as in previous studies, the biggest difference between young and older adults in terms of gray matter volume was found in the medial prefrontal cortex (mPFC). Moreover, significant differences were also found in the insula and posterior cingulate cortex. These regions, like the mPFC, have also been associated with the generation of slow waves.

When mPFC volume was taken into account, age no longer significantly predicted the extent of the decline in slow-wave activity — in other words, the decline in slow-wave activity appears to be due to the brain atrophy in the medial prefrontal cortex. Atrophy in other regions of the brain (precuneus, hippocampus, temporal lobe) was not associated with the decline in slow-wave activity when age was considered.

Older adults did significantly worse on the delayed recognition test than young adults. Performance on the immediate test did not predict performance on the delayed test. Moreover, the highest performers on the immediate test among the older adults performed at the same level as the lowest young adult performers — nevertheless, these older adults did worse the following day.

Slow-wave activity during sleep was significantly associated with performance on the next day’s test. Moreover, when slow-wave activity was taken into account, neither age nor mPFC atrophy significantly predicted test performance.

In other words, age relates to shrinkage of the prefrontal cortex, this shrinkage relates to a decline in slow-wave activity during sleep, and this decline in slow-wave sleep relates to poorer cognitive performance.

The findings confirm the importance of slow-wave sleep for memory consolidation.

All of this suggests that poorer sleep quality contributes significantly to age-related cognitive decline, and that efforts should be made to improve quality of sleep rather than just assuming lighter, more disturbed sleep is ‘natural’ in old age!

A new study adds more support to the idea that the increasing difficulty in learning new information and skills that most of us experience as we age is not down to any difficulty in acquiring new information, but rests on the interference from all the old information.

Memory is about strengthening some connections and weakening others. A vital player in this process of synaptic plasticity is the NMDA receptor in the hippocampus. This glutamate receptor comes in two variants, depending on which subunit it contains (NR2A or NR2B), and the ratio of these changes as the brain develops. Children have a higher proportion of NR2B, which lengthens the time neurons talk to each other, enabling them to make stronger connections and thus optimizing learning. After puberty, the ratio shifts, so there is more NR2A.

Of course, there are many other changes in the aging brain, so it’s been difficult to disentangle the effects of this changing ratio from other changes. This new study genetically modified mice to have more NR2A and less NR2B (reflecting the ratio typical of older humans), thus avoiding the other confounds.

To the researchers’ surprise, the mice were found to be still good at making strong connections (long-term potentiation, or LTP), but instead had an impaired ability to weaken existing connections (long-term depression, or LTD). This produces too much noise (bear in mind that each neuron averages 3,000 potential points of contact, i.e., synapses, and you will see the importance of turning down the noise!).

Interestingly, LTD responses were only abolished within a particular frequency range (3-5 Hz); 1 Hz-induced LTD (and 100 Hz-induced LTP) were unaffected. Moreover, while the mice showed impaired long-term learning, their short-term memory was unaffected. The researchers suggest that these particular LTD responses are critical for ‘post-learning information sculpting’, which they propose is a hitherto unknown step in the consolidation process. This step, they postulate, involves modifying the new information to fit in with existing networks of knowledge.

Previous work by these researchers has found that mice genetically modified to have an excess of NR2B became ‘super-learners’. Until now, the emphasis in learning and memory has always been on long-term potentiation, and the role (if any) of long-term depression has been much less clear. These results point to the importance of both these processes in sculpting learning and memory.

The findings also seem to fit in with the idea that a major cause of age-related cognitive decline is the failure to inhibit unwanted information, and confirm the importance of keeping your mind actively engaged and learning, because this ratio is also affected by experience.

More evidence that even an 8-week meditation training program can have measurable effects on the brain comes from an imaging study. Moreover, the type of meditation makes a difference to how the brain changes.

The study involved 36 participants from three different 8-week courses: mindful meditation, compassion meditation, and health education (control group). The courses involved only two hours of class time each week, with meditation students encouraged to meditate for an average of 20 minutes a day outside class. There was a great deal of individual variability in the total amount of meditation done by the end of the course (210-1491 minutes for the mindful attention training course; 190-905 minutes for the compassion training course).

Participants’ brains were scanned three weeks before the courses began, and three weeks after the end. During each brain scan, the volunteers viewed 108 images of people in situations that were either emotionally positive, negative or neutral.

In the mindful attention group, the second brain scan showed a decrease in activation in the right amygdala in response to all images, supporting the idea that meditation can improve emotional stability and response to stress. In the compassion meditation group, right amygdala activity also decreased in response to positive or neutral images, but, among those who reported practicing compassion meditation most frequently, right amygdala activity tended to increase in response to negative images. No significant changes were seen in the control group or in the left amygdala of any participant.

The findings support the idea that meditation can be effective in improving emotional control, and that compassion meditation can indeed increase compassionate feelings. Increased amygdala activation was also correlated with decreased depression scores in the compassion meditation group, which suggests that having more compassion towards others may also be beneficial for oneself.

The findings also support the idea that the changes brought about by meditation endure beyond the meditative state, and that the changes can start to occur quite quickly.

These findings are all consistent with other recent research.

One point is worth emphasizing, in the light of the difficulty in developing a training program that improves working memory rather than simply improving the task being practiced. These findings suggest that, unlike most cognitive training programs, meditation training might produce learning that is process-specific rather than stimulus- or task-specific, giving it perhaps a wider generality than most cognitive training.

The neurotransmitter dopamine is found throughout the brain and has been implicated in a number of cognitive processes, including memory. It is well-known, of course, that Parkinson's disease is characterized by low levels of dopamine, and is treated by raising dopamine levels.

A new study of older adults has now demonstrated the effect of dopamine on episodic memory. In the study, participants (aged 65-75) were shown black and white photos of indoor scenes and landscapes. The subsequent recognition test presented them with these photos mixed in with new ones, and required them to note which photos they had seen before. Half of the participants were first given Levodopa (‘L-dopa’), and half a placebo.

Recognition tests were given two and six hours after participants had been shown the photos. There was no difference between the groups at the two-hour test, but at the six-hour test, those given L-dopa recognized up to 20% more photos than controls.

The absence of a difference at the two-hour test was expected: dopamine’s proposed role is to help strengthen the memory code for long-term storage, a process thought to occur some 4-6 hours after learning.

Individual differences indicated that the ratio between the amount of Levodopa taken and body weight is key for an optimally effective dose.

The findings therefore suggest that the decline in episodic memory typically seen in older adults is at least partly caused by declining levels of dopamine.

Given that episodic memory is one of the first and greatest types of memory hit by Alzheimer’s, this finding also has implications for Alzheimer’s treatment.

Caffeine improves recognition of positive words

Another recent study also demonstrates, rather more obliquely, the benefits of dopamine. In this study, 200 mg of caffeine (equivalent to 2-3 cups of coffee), taken 30 minutes earlier by healthy young adults, was found to improve recognition of positive words, but had no effect on the processing of emotionally neutral or negative words. Positive words are consistently processed faster and more accurately than negative and neutral words.

Because caffeine is linked to an increase in dopamine transmission (an indirect effect, stemming from caffeine’s inhibitory effect on adenosine receptors), the researchers suggest that this effect of caffeine on positive words demonstrates that the processing advantage enjoyed by positive words is driven by the involvement of the dopaminergic system.

A small Swedish brain imaging study adds to the evidence for the cognitive benefits of learning a new language by investigating the brain changes in students undergoing a highly intensive language course.

The study involved an unusual group: conscripts in the Swedish Armed Forces Interpreter Academy. These young people, selected for their talent for languages, undergo an intensive course to allow them to learn a completely novel language (Egyptian Arabic, Russian or Dari) fluently within ten months. This requires them to acquire new vocabulary at a rate of 300-500 words every week.

Brain scans were taken of 14 right-handed volunteers from this group (6 women; 8 men), and of 17 controls matched for age, years of education, intelligence, and emotional stability. The controls were medical and cognitive science students. The scans were taken before the start of the course/semester, and three months later.

The brain scans revealed that the language students showed significantly greater changes in several specific regions. These regions included three areas in the left hemisphere: the dorsal middle frontal gyrus, the inferior frontal gyrus, and the superior temporal gyrus. These regions all grew significantly. There was also some, more selective and smaller, growth in the middle frontal gyrus and inferior frontal gyrus in the right hemisphere. The hippocampus also grew significantly more for the interpreters compared to the controls, and this effect was greater in the right hippocampus.

Among the interpreters, language proficiency was related to increases in the right hippocampus and left superior temporal gyrus. Increases in the left middle frontal gyrus were related to teacher ratings of effort — those who put in the greatest effort (regardless of result) showed the greatest increase in this area.

In other words, both learning, and the effort put into learning, had different effects on brain development.

The main point, however, is that language learning in particular is having this effect. Bear in mind that the medical and cognitive science students were also presumably putting similar levels of effort into their studies, and yet no such significant brain growth was observed.

Of course, there is no denying that the level of intensity with which the interpreters are acquiring a new language is extremely unusual, and it cannot be ruled out that it is this intensity, rather than the particular subject matter, that is crucial for this brain growth.

Neither can it be ruled out that the differences between the groups are rooted in the individuals selected for the interpreter group. The young people chosen for the intensive training at the interpreter academy were chosen on the basis of their talent for languages. Although brain scans showed no differences between the groups at baseline, we cannot rule out the possibility that such intensive training only benefited them because they possessed this potential for growth.

A final caveat is that the soldiers all underwent basic military training before beginning the course — three months of intense physical exercise. Physical exercise is, of course, usually very beneficial for the brain.

Nevertheless, we must give due weight to the fact that the brain scans of the two groups were comparable at baseline, and the changes discussed occurred specifically during this three-month learning period. Moreover, there is growing evidence that learning a new language is indeed ‘special’, if only because it involves such a complex network of processes and brain regions.

Given that people vary in their ‘talent’ for foreign language learning, and that learning a new language does tend to become harder as we get older, it is worth noting the link between growth of the hippocampus and superior temporal gyrus and language proficiency. The STG is involved in acoustic-phonetic processes, while the hippocampus is presumably vital for the encoding of new words into long-term memory.

Interestingly, previous research with children has suggested that the ability to learn new words is greatly affected by working memory span — specifically, by how much information they can hold in that part of working memory called phonological short-term memory. While this is less important for adults learning another language, it remains important for one particular category of new words: words that have no ready association to known words. Given the languages being studied by these Swedish interpreters, it seems likely that much if not all of their new vocabulary would fall into this category.

I wonder if the link with STG is more significant in this study, because the languages are so different from the students’ native language? I also wonder if, and to what extent, you might be able to improve your phonological short-term memory with this sort of intensive practice.

In this regard, it’s worth noting that a previous study found that language proficiency correlated with growth in the left inferior frontal gyrus in a group of English-speaking exchange students learning German in Switzerland. Is this difference because the training was less intensive? because the students had prior knowledge of German? because German and English are closely related in vocabulary? (I’m picking the last.)

The researchers point out that hippocampal plasticity might also be a critical factor in determining an individual’s facility for learning a new language. Such plasticity does, of course, tend to erode with age — but this can be largely counteracted if you keep your hippocampus limber (as it were).

All these are interesting speculations, but the main point is clear: the findings add to the growing evidence that bilingualism and foreign language learning have particular benefits for the brain, and for protecting against cognitive decline.

We know that stress has a complicated relationship with learning, but in general its effect is negative, and part of that is due to stress producing anxious thoughts that clog up working memory. A new study adds another perspective to that.

The brain scanning study involved 60 young adults, of whom half were put under stress by having a hand immersed in ice-cold water for three minutes under the supervision of a somewhat unfriendly examiner, while the other group immersed their hand in warm water without such supervision (cortisol and blood pressure tests confirmed the stress difference).

About 25 minutes after this (cortisol reaches peak levels around 25 minutes after stress), participants’ brains were scanned while they alternated between a classification task and a visual-motor control task. The classification task required them to look at cards with different symbols and learn to predict which combinations of cards announced rain and which sunshine. Afterward, they were given a short questionnaire to determine their knowledge of the task. The control task was similar but there were no learning demands (they looked at cards on the screen and made a simple perceptual decision).

In order to determine the strategy individuals used to do the classification task, ‘ideal’ performance was modeled for four possible strategies, of which two were ‘simple’ (based on single cues) and two ‘complex’ (based on multiple cues).
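
To make the modeling idea concrete, here’s a minimal, purely illustrative sketch of the two strategy types in Python. The cue weights, trial-generation rule, and strategy definitions are all invented for illustration; they are not the study’s actual model.

```python
import random

random.seed(0)

# Invented probabilities that each of 4 card cues, when present, signals 'rain'
CUE_RAIN_WEIGHT = [0.8, 0.6, 0.4, 0.2]

def single_cue_strategy(cards, cue_index=0):
    """'Simple' strategy: respond on the basis of one cue only."""
    return "rain" if cards[cue_index] else "sun"

def multi_cue_strategy(cards):
    """'Complex' strategy: combine evidence from all cues present."""
    present = [w for w, c in zip(CUE_RAIN_WEIGHT, cards) if c]
    if not present:
        return "sun"
    return "rain" if sum(present) / len(present) > 0.5 else "sun"

def sample_trial():
    """Generate one trial: a random cue combination and a probabilistic outcome."""
    cards = [random.random() < 0.5 for _ in range(4)]
    present = [w for w, c in zip(CUE_RAIN_WEIGHT, cards) if c]
    p_rain = sum(present) / len(present) if present else 0.1
    outcome = "rain" if random.random() < p_rain else "sun"
    return cards, outcome

# Score each strategy over many simulated trials
trials = [sample_trial() for _ in range(2000)]
results = {}
for name, strategy in [("single-cue", single_cue_strategy),
                       ("multi-cue", multi_cue_strategy)]:
    acc = sum(strategy(cards) == outcome for cards, outcome in trials) / len(trials)
    results[name] = acc
    print(f"{name} strategy accuracy: {acc:.2f}")
```

In the study itself, participants’ responses were presumably compared against the ‘ideal’ response pattern of each candidate strategy, in order to infer which strategy each individual was using.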

Here’s the interesting thing: while both groups were successful in learning the task, the two groups learned to do it in different ways. Far more of the non-stressed group activated the hippocampus to pursue a simple and deliberate strategy, focusing on individual symbols rather than combinations of symbols. The stressed group, on the other hand, were far more likely to use the striatum only, in a more complex and subconscious processing of symbol combinations.

The stressed group also remembered significantly fewer details of the classification task.

There was no difference between the groups on the (simple, perceptual) control task.

In other words, it seems that stress interferes with conscious, purposeful learning, causing the brain to fall back on more ‘primitive’ mechanisms that involve procedural learning. Striatum-based procedural learning is less flexible than hippocampus-based declarative learning.

Why should this happen? Well, the non-conscious procedural learning going on in the striatum is much less demanding of cognitive resources, freeing up your working memory to do something important — like worrying about the source of the stress.

Unfortunately, such learning will not become part of your more flexible declarative knowledge base.

The finding may have implications for stress disorders such as depression, addiction, and PTSD. It may also have relevance for a memory phenomenon known as “forgotten baby syndrome”, in which parents forget their babies in the car. This may be related to the use of non-declarative memory, because of the stress they are experiencing.

[3071] Schwabe L, Wolf OT. Stress Modulates the Engagement of Multiple Memory Systems in Classification Learning. The Journal of Neuroscience. 2012;32(32):11042-11049. Available from: http://www.jneurosci.org/content/32/32/11042

Our life-experiences contain a wealth of new and old information. The relative proportions of these change, of course, as we age. But how do we know whether we should be encoding new information or retrieving old information? It’s easy if the information is readily accessible, but what if it’s not? Bear in mind that (especially as we get older) most of the information and experiences we meet share some similarity to information we already have.

This question is made even more meaningful when you consider that it is the same brain region — the hippocampus — that’s involved in both encoding and retrieval, and these two processes depend (it is thought) on two quite opposite processes. While encoding is thought to rely on pattern separation (looking for differences), retrieval is thought to depend on pattern completion.

A recent study looked at what happens in the brain when people rapidly switch between encoding new objects and retrieving recently presented ones. Participants were shown 676 pictures of objects and asked to identify each one as being shown for the first time (‘new’), being repeated (‘old’), or as a modified version of something shown earlier (‘similar’). Recognizing the similar items as similar was the question of interest, as these items contain both old and new information and so the brain’s choice between encoding and retrieval is more difficult.

What they found was that participants were more likely to recognize similar items as similar (rather than old) if they had viewed a new item on the preceding trial. In other words, the experience of a new item primed them to notice novelty. Or to put it in another way: context biases the hippocampus toward either pattern completion or pattern separation.

This was supported by a further experiment, in which participants were shown both the object pictures, and also learned associations between faces and scenes. Critically, each scene was associated with two different faces. In the next learning phase, participants were taught a new scene association for one face from each pair. Each face-scene learning trial was preceded by an object recognition trial (new and old objects were shown and participants had to identify them as old or new) — critically, either a new or old object was consistently placed before a specific face-scene association. In the final test phase, participants were tested on the new face-scene associations they had just learned, as well as the indirect associations they had not been taught (that is, between the face of each pair that had not been presented during the preceding phase, and the scene associated with its partnered face).

What this found was that participants were more likely to pair indirectly related faces if those faces had been consistently preceded by old objects, rather than new ones. Moreover, they did so more quickly when the faces had been preceded by old objects rather than new ones.

This was interpreted as indicating that the preceding experience affects how well related information is integrated during encoding.

What all this suggests is that the memory activities you’ve just engaged in bias your brain toward the same sort of activities — so whether or not you notice changes to a café or instead nostalgically recall a previous meal, may depend on whether you noticed anyone you knew as you walked down the street!

An interesting speculation by the researchers is that such a memory bias (which only lasts a very brief time) might be an adaptive mechanism, reflecting the usefulness of being more sensitive to changes in new environments and less sensitive to irregularities in familiar environments.

I’ve reported before on how London taxi drivers increase the size of their posterior hippocampus by acquiring and practicing ‘the Knowledge’ (but perhaps at the expense of other functions). A new study in similar vein has looked at the effects of piano tuning expertise on the brain.

The study looked at the brains of 19 professional piano tuners (aged 25-78, average age 51.5 years; 3 female; 6 left-handed) and 19 age-matched controls. Piano tuning requires comparison of two notes that are close in pitch, meaning that the tuner has to accurately perceive the particular frequency difference. Exactly how that is achieved, in terms of brain function, has not been investigated until now.

The brain scans showed that piano tuners had increased gray matter in a number of brain regions. In some areas, the difference between tuners and controls was categorical — that is, tuners as a group showed increased gray matter in right hemisphere regions of the frontal operculum, the planum polare, superior frontal gyrus, and posterior cingulate gyrus, and reduced gray matter in the left hippocampus, parahippocampal gyrus, and superior temporal lobe. Differences in these areas didn’t vary systematically between individual tuners.

However, tuners also showed a marked increase in gray matter volume in several areas that was dose-dependent (that is, varied with years of tuning experience) — the anterior hippocampus, parahippocampal gyrus, right middle temporal and superior temporal gyrus, insula, precuneus, and inferior parietal lobe — as well as an increase in white matter in the posterior hippocampus.

These differences were not affected by actual chronological age, or, interestingly, level of musicality. However, they were affected by starting age, as well as years of tuning experience.

What these findings suggest is that achieving expertise in this area requires an initial development of active listening skills that is underpinned by categorical brain changes in the auditory cortex. These superior active listening skills then set the scene for the development of further skills that involve what the researchers call “expert navigation through a complex soundscape”. This process may, it seems, involve the encoding and consolidating of precise sound “templates” — hence the development of the hippocampal network, and hence the dependence on experience.

The hippocampus, apart from its general role in encoding and consolidating, has a special role in spatial navigation (as shown, for example, in the London cab driver studies, and the ‘parahippocampal place area’). The present findings extend that navigation in physical space to the more metaphoric one of relational organization in conceptual space.

The more general message from this study, of course, is confirmation for the role of expertise in developing specific brain regions, and a reminder that this comes at the expense of other regions. So choose your area of expertise wisely!

I have reported previously on research suggesting that rapamycin, a bacterial product first isolated from soil on Easter Island and used to help transplant patients prevent organ rejection, might improve learning and memory. Following on from this research, a new mouse study has extended these findings by adding rapamycin to the diet of healthy mice throughout their life span. Excitingly, it found that cognition was improved in young mice, and abolished normal cognitive decline in older mice.

Anxiety and depressive-like behavior was also reduced, and the mice’s behavior demonstrated that rapamycin was acting like an antidepressant. This effect was found across all ages.

Three "feel-good" neurotransmitters — serotonin, dopamine and norepinephrine — all showed significantly higher levels in the midbrain (but not in the hippocampus). As these neurotransmitters are involved in learning and memory as well as mood, it is suggested that this might be a factor in the improved cognition.

Other recent studies have suggested that rapamycin inhibits a pathway in the brain that interferes with memory formation and facilitates aging.

A new study has found that, when delivered quickly, a modified form of prolonged exposure therapy reduces post-traumatic stress reactions and depression.

The study involved 137 patients being treated in the emergency room of a major trauma center in Atlanta. The patients were chosen from survivors of traumatic events such as rape, car or industrial accidents, and shooting or knife attacks. Participants were randomly assigned to either receive three sessions of therapy beginning in the emergency department (an average of 12 hours after the event), or assessment only. Stress reactions were assessed at 4 and 12 weeks, and depression at baseline and 4 weeks.

Those receiving the therapy reported significantly lower post-traumatic stress at 4 weeks and 12 weeks, and significantly lower depression at 4 weeks. Analysis of subgroups revealed that the therapy was most effective in rape victims. In the cases of transport accidents and physical (non-sexual) assault, the difference between therapy and assessment-only was only barely significant (for transport at 4 weeks) or non-significant. In both subgroups, the effect was decidedly less at 12 weeks than at 4 weeks.

The therapy, carried out by trained therapists, involved participants describing the trauma they had experienced while the therapist recorded the description. The bulk of the hour-long session was taken up with reliving and processing the experience. There were three sessions spaced a week apart. The patients were instructed to listen to their recordings every day, and 85% were compliant. The therapists also explained normal reactions to trauma, helped the patients look at obtrusive thoughts of guilt or responsibility, and taught them a brief breathing or relaxation technique and self care.

While this study doesn’t itself compare the effects of immediate vs delayed therapy, the assumption that delivering the therapy so soon after the trauma is a crucial factor in its success is in line with other research (mainly to do with fear-conditioning in rodent and human laboratory studies). Moreover, while brief cognitive-behavioral therapy has previously been shown to be effective with people diagnosed with acute stress disorder, such therapy is normally begun some 2-4 weeks after trauma, and a study of female assault survivors found that although such therapy did indeed accelerate recovery compared with supportive counseling, after 9 months, PTSD severity was similar in both groups.

Another, severe, limitation of this study is that the therapy involved multiple components. We cannot assume that it was the repeated re-experiencing of the event that was critical.

Moreover, this is only a pilot study, and its findings are instructive rather than decisive. But at the least it does support the idea that immediate therapy is likely to help victims of trauma recover more quickly.

One final, important, note: It should not, of course, be assumed that simply having the victim describe the events — say to police officers — is in itself therapeutic. Done badly, that experience may itself be traumatic.

We know that we remember more 12 hours after learning if we have slept during that 12 hours rather than been awake throughout, but is this because sleep is actively helping us remember, or because being awake makes it harder to remember (because of interference and over-writing from other experiences)? A new study aimed to disentangle these effects.

In the study, 207 students were randomly assigned to study 40 related or unrelated word pairs at 9 a.m. or 9 p.m., returning for testing either 30 minutes, 12 hours or 24 hours later.

As expected, at the 12-hour retest, those who had had a night’s sleep (Evening group) remembered more than those who had spent the 12 hours awake (Morning group). But this result was because memory for unrelated word pairs had deteriorated badly during 12 hours of wakefulness; performance on the related pairs was the same for the two groups. Performance on the related and unrelated pairs was the same for those who slept.

For those tested at 24 hours (participants from both groups having received both a full night of sleep and a full day of wakefulness), those in the Evening group (who had slept before experiencing a full day’s wakefulness) remembered significantly more than the Morning group. Specifically, the Evening group showed a very slight improvement over training, while the Morning group showed a pronounced deterioration.

This time, both groups showed a difference for related versus unrelated pairs: the Evening group showed some deterioration for unrelated pairs and a slightly larger improvement for related pairs; the Morning group showed a very small deterioration for related pairs and a much greater one for unrelated pairs. The difference between recall of related pairs and recall of unrelated pairs was, however, about the same for both groups.

In other words, unrelated pairs are just that much harder to learn than related ones (which we already know) — over time, learning them just before sleep vs learning early in the day doesn’t make any difference to that essential truth. But the former strategy will produce better learning for both types of information.

A comparison of the 12-hour and 24-hour results (this is the bit that will help us disentangle the effects of sleep and wakefulness) reveals that twice as much forgetting of unrelated pairs occurred during wakefulness in the first 12 hours, compared to wakefulness in the second 12 hours (after sleep), and 3.4 times more forgetting of related pairs (although this didn’t reach significance, the amount of forgetting being so much smaller).

In other words, sleep appears to slow the rate of forgetting that will occur when you are next awake; it stabilizes and thus protects the memories. But the amount of forgetting that occurred during sleep was the same for both word types, and the same whether that sleep occurred in the first 12 hours or the second.

Participants in the Morning and Evening groups took a similar number of training trials to reach criterion (60% correct), and there was no difference in the time it took to learn unrelated compared to related word pairs.

It’s worth noting that there was no difference between the two groups, or for the type of word pair, at the 30-minute test either. In other words, your ability to remember something shortly after learning it is not a good guide for whether you have learned it ‘properly’, i.e., as an enduring memory.

The study tells us that different types of information are differentially affected by wakefulness; unrelated information, perhaps, is more easily interfered with. This is encouraging, because semantically related information is far more common than unrelated information! But it may well serve as a reminder that integrating new material — making sure it is well understood and embedded into your existing database — is vital for effective learning.

The findings also confirm earlier evidence that running through any information (or skills) you want to learn just before going to bed is a good idea — and this is especially true if you are trying to learn information that is more arbitrary or less well understood (i.e., the sort of information for which you are likely to use mnemonic strategies, or, horror of horrors, rote repetition).

A study involving 75 perimenopausal women aged 40 to 60 has found that those with memory complaints tended to show impairments in working memory and attention. Complaints were not, however, associated with verbal learning or memory.

Complaints were also associated with depression, anxiety, somatic complaints, and sleep disturbance. But they weren’t linked to hormone levels (although estrogen is an important hormone for learning and memory).

What this suggests to me is that a primary cause of these cognitive impairments may be poor sleep, and anxiety/depression. A few years ago, I reported on a study that found that, although women’s reports of how many hot flashes they had didn’t correlate with memory impairment, an objective measure of the number of flashes they experienced during sleep did. Sleep, as I know from personal experience, is of sufficient importance that my rule-of-thumb is: don’t bother looking for any other causes of attention and memory deficits until you have sorted out your sleep!

Having said that, depressive symptoms showed greater relationship to memory complaints than sleep disturbance.

It’s no big surprise to hear that it is working memory in particular that is affected, because what many women at this time of life complain of is ‘brain fog’ — the feeling that your brain is full of cotton-wool. This doesn’t mean that you can’t learn new information, or remember old information. But it does mean that these tasks will be impeded to the extent that you need to hold on to too many bits of information. So mental arithmetic might be more difficult, or understanding complex sentences, or coping with unexpected disruptions to your routine, or concentrating on a task for a long time.

These sorts of problems are typical of those produced by on-going sleep deprivation, stress, and depression.

One caveat to the findings is that the study participants tended to be of above-average intelligence and education. This would protect them to a certain extent from cognitive decline — those with less cognitive reserve might display wider impairment. Other studies have found verbal memory, and processing speed, impaired during menopause.

Note, too, that a long-running, large population study has found no evidence for a decline in working memory, or processing speed, in women as they pass through perimenopause and menopause.

A new study explains how marijuana impairs working memory. The component THC removes AMPA receptors for the neurotransmitter glutamate in the hippocampus. This means that there are fewer receivers for the information crossing between neurons.

The research is also significant because it adds to the growing evidence for the role of astrocytes in neural transmission of information.

This is shown by the finding that genetically-engineered mice who lack type-1 cannabinoid receptors in their astroglia do not show impaired working memory when exposed to THC, while those who instead lacked the receptors in their neurons do. The activation of the cannabinoid receptor expressed by astroglia sends a signal to the neurons to begin the process that removes AMPA receptors, leading to long-term depression (a type of synaptic plasticity that weakens, rather than strengthens, neural connections).

See the Guardian and Scientific American articles for more detail on the study and the processes involved.

For more on the effects of marijuana on memory

I always like gesture studies. I think I’m probably right in saying that they started with language learning. Way back in 1980 it was shown that acting out action phrases meant they were remembered better than if the phrases had been only heard or read (the “enactment effect”). Enacted items, it turned out, “popped out” effortlessly in free recall tests — in other words, enactment had made the phrases highly accessible. Subsequent research found that this effect occurred for both older and younger adults, and in both immediate and delayed recall tests — suggesting not only that such items are more accessible but that forgetting is slower.

Following these demonstrations, there have been a few studies that have specifically looked at the effect of gestures on learning foreign languages, which have confirmed the benefits of gestures. But there are various confounding factors that are hard to remove when using natural languages, which is why the present researchers have developed an artificial language (“Vimmi”) to use in their research. In their first study, as in most other studies, the words and phrases used related to actions. In a new study, the findings were extended to more abstract vocabulary.

In this study, 20 German-speakers participated in a six-day language class to study Vimmi. The training material included 32 sentences, each containing a subject, verb, adverb, and object. While the subject nouns were concrete agents (e.g., musician, director), the other words were all abstract. Here’s a couple of sample sentences (translated, obviously): (The) designer frequently shapes (the) style. (The) pilot really enjoys (the) view. The length of the words was controlled: nouns all had 3 syllables; verbs and adverbs all had two.

For 16 of the sentences, participants saw the word in Vimmi and heard it. The translation of the word appeared on the screen fractionally later, while at the same time a video appeared in which a woman performed the gesture relating to the word. The audio of the word was replayed, and participants were cued to imitate the gesture as they repeated the word. For the other 16 sentences, a video with a still image of the actress appeared, and the participants were simply cued to repeat the word when the audio was replayed.

While many of the words used gestures similar to their meaning (such as a cutting gesture for the word “cut”), the researchers found that the use of any gesture made a difference as long as it was unique and connected to a specific word. For example, the abstract word “rather” does not have an obvious gesture that would go with it. However, a gesture attached to this word also worked.

Each daily session lasted three hours. From day 2, sessions began with a free recall and a cued recall test. In the free recall test, participants were asked to write as many items as possible in both German and Vimmi. Items had to be perfectly correct to be counted. From day 4, participants were also required to produce new sentences with the words they had learned.

Right from the beginning, free recall of items which had been enacted was superior to those which hadn’t been — in German. However, in Vimmi, significant benefits from enactment occurred only from day 3. The main problem here was not forgetting the items, but correctly spelling them. In the cued recall test (translating from Vimmi to German, or German to Vimmi), again, the superiority of the enactment condition only showed up from day 3.

Perhaps the most interesting result came from the written production test. Here, people reproduced the same number of sentences they had learned on each of the three days of the test, and although enacted words were remembered at a higher rate, that rate didn’t alter, and didn’t reach significance. However, the production of new sentences improved each day, and the benefits of enactment increased each day. These benefits were significant from day 5.

The main question, however, was whether the benefits of enactment depended on word category. As expected, concrete nouns were remembered better than verbs, followed by abstract nouns, and finally adverbs. When all the tests were lumped together, there was a significant benefit of enactment for all types of word. However, the situation became a little more nuanced when the data was analyzed separately.

In free recall, for Vimmi, enactment was only of significant benefit for concrete nouns and verbs. In cued recall, for translating German into Vimmi, the enactment benefit was significant for all except concrete nouns (I’m guessing concrete nouns have enough ‘natural’ power not to need gestures in this situation). For translating Vimmi into German, the benefit was only significant for verbs and abstract nouns. In new sentence production, interestingly, participants used significantly more items of all four categories if they had been enacted. This is perhaps the best evidence that enactment makes items more accessible in memory.

What all this suggests is that acting out new words helps you learn them, but some types of words may benefit more from this strategy than others. But I think we need more research before being sure about such subtleties. The pattern of results makes it clear that we really need longer training, and longer delays, to get a better picture of the most effective way to use this strategy.

For example, it may be that adverbs, although they showed the most inconsistent benefits, are potentially the category that stands to gain the most from this strategy — because they are the hardest type of word to remember. Because any embodiment of such an abstract adverb must be arbitrary — symbolic rather than representational — it naturally is going to be harder to learn (yes, some adverbs could be represented, but the ones used in this study, and the ones I am talking about, are of the “rather”, “really”, “otherwise” ilk). But if you persist in learning the association between concept and gesture, you may derive greater benefit from enactment than you would from easier words, which need less help.

Here’s a practical discussion of all this from a language teacher’s perspective.

I’ve spoken before about the association between hearing loss in old age and dementia risk. Although we don’t currently understand that association, it may be that preventing hearing loss also helps prevent cognitive decline and dementia. I have previously reported on how music training in childhood can help older adults’ ability to hear speech in a noisy environment. A new study adds to this evidence.

The study looked at a specific aspect of understanding speech: auditory brainstem timing. Aging disrupts this timing, degrading the ability to precisely encode sound.

In this study, automatic brain responses to speech sounds were measured in 87 younger and older normal-hearing adults as they watched a captioned video. It was found that older adults who had begun musical training before age 9 and engaged consistently in musical activities through their lives (“musicians”) not only significantly outperformed older adults who had no more than three years of musical training (“non-musicians”), but encoded the sounds as quickly and accurately as the younger non-musicians.

The researchers qualify this finding by saying that it shows only that musical experience selectively affects the timing of sound elements that are important in distinguishing one consonant from another, not necessarily all sound elements. However, it seems probable that it extends more widely, and in any case the ability to understand speech is crucial to social interaction, which may well underlie at least part of the association between hearing loss and dementia.

The burning question for many will be whether the benefits of music training can be accrued later in life. We will have to wait for more research to answer that, but, as music training and enjoyment fit the definition of ‘mentally stimulating activities’, this certainly adds another reason to pursue such a course.

Students come into classrooms filled with inaccurate knowledge they are confident is correct, and overcoming these misconceptions is notoriously difficult. In recent years, research has shown that such false knowledge can be corrected with feedback. The hypercorrection effect, as it has been termed, expresses the finding that when students are more confident of a wrong answer, they are more likely to remember the right answer if corrected.

This is somewhat against intuition and experience, which would suggest that it is harder to correct more confidently held misconceptions.

A new study tells us how to reconcile experimental evidence and belief: false knowledge is more likely to be corrected in the short-term, but also more likely to return once the correction is forgotten.

In the study, 50 undergraduate students were tested on basic science facts. After rating their confidence in each answer, they were told the correct answer. Half the students were then retested almost immediately (after a 6-minute filler task), while the other half were retested a week later.

There were 120 questions in the test. Examples include: What is stored in a camel's hump? How many chromosomes do humans have? What is the driest area on Earth? The average percentage of correct responses on the initial test was 38%, and as expected, performance on the second test was significantly better for the immediate group than for the delayed group (90% vs 71%).

Students who were retested immediately gave the correct answer on 86% of their previous errors, and they were more likely to correct their high-confidence errors than those made with little confidence (the hypercorrection effect). Those retested a week later also showed the hypercorrection effect, albeit at a much lower level: they only corrected 56% of their previous errors. (More precisely, on the immediate test, corrected answers rose from 79% at the lowest confidence level to 92% at the highest. On the delayed test, corrected answers rose from 43% at the lowest confidence level to 70% at the second-highest, then dropped to 64% at the highest.)

In those instances where students had forgotten the correct answer, they were much more likely to reproduce the original error if their confidence had been high. Indeed, on the immediate test, the same error was rarely repeated, regardless of confidence level (the proportion of repeated errors hovered at 3-4% pretty much across the board). On the delayed test, on the other hand, there was a linear increase, with repeated errors steadily increasing from 14% to 23% as confidence level rose (with the same odd exception — at the second highest confidence level, proportion of repeated errors suddenly fell).

Overall, students were more likely to correct their errors if they remembered their error than if they didn’t (72% vs 65%). Unsurprisingly, those in the immediate group were much more likely to remember their initial errors than those in the delayed group (85% vs 61%).

In other words, it’s all about relative strength of the memories. While high-confidence errors are more likely to be corrected if the correct answer is readily accessible, they are also more likely to be repeated once the correct answer becomes less accessible. The trick to replacing false knowledge, then, is to improve the strength of the correct information.

Thus, as recency fades, you need to engage frequency to make the new memory stronger. So the finding points to the special need for multiple repetition, if you are hoping to correct entrenched false knowledge. The success of immediate testing indicates that properly spaced retrieval practice is probably the best way of replacing incorrect knowledge.

Of course, these findings apply well beyond the classroom!

Certainly experiences that arouse emotions are remembered better than ones that have no emotional connection, but whether negative or positive memories are remembered best is a question that has produced equivocal results. While initial experiments suggested positive events were remembered better than negative, more recent studies have concluded the opposite.

The idea that negative events are remembered best is consistent with a theory that negative emotion signals a problem, leading to more detailed processing, while positive emotion relies more heavily on general scripts.

However, a new study challenges those recent studies, on the basis of a more realistic comparison. Rather than focusing on a single public event, to which some people have positive feelings while others have negative feelings (events used have included the OJ Simpson trial, the fall of the Berlin Wall, and a single baseball championship game), the study looked at two baseball championships each won by different teams.

The experiment involved 1,563 baseball fans who followed or attended the 2003 and 2004 American League Championship games between the New York Yankees (2003 winners) and the Boston Red Sox (2004 winners). Of the fans, 1,216 were Red Sox fans, 218 were Yankees fans, and 129 were neutral fans. (Unfortunately the selection process disproportionately collected Red Sox fans.)

Participants were reminded who won the championship before answering questions on each game. Six questions were identical for the two games: the final score for each team, the winning and losing pitchers (multiple choice of five pitchers for each team), the location of the game, and whether the game required extra innings. Participants also reported how vividly they remembered the game, and how frequently they had thought about or seen media concerning the game.

Both Yankee and Red Sox fans remembered more details about their team winning. They also reported more vivid memories for the games their team won. Accuracy and vividness were significantly correlated. Fans also reported greater rehearsal of the game their team won, and again, rehearsal and accuracy were significantly correlated.

Analysis of the data revealed that rehearsal completely mediated the correlation between accuracy and fan type, and partially mediated the correlation between vividness and fan type.

In other words, improved memory for emotion-arousing events has everything to do with how often you think about or are reminded of the event.

PTSD, for example, is the negative memory extreme. And PTSD is characterized by the unavoidable rehearsal of the event over and over again. Each repetition makes memory for the event stronger.

In the studies referred to earlier, media coverage provided similarly unavoidable repetition.

While most people tend to recall more positive than negative events (and this tendency becomes greater with age), individuals who are depressed or anxious show the opposite tendency.

So whether positive or negative events are remembered better depends on you, as well as the event.

When it comes down to it, I'm not sure it's really a helpful question - whether positive or negative events are remembered better. An interesting aspect of public events is that their portrayal often changes over time, but this is just a more extreme example of what happens with private events as well — as we change over time, so does our attitude toward those events. Telling friends about events, and receiving their comments on them, can affect our emotional response to events, as well as having an effect on our memory of those events.

Research into the effects of cannabis on cognition has produced inconsistent results. Much may depend on extent of usage, timing, and perhaps (this is speculation) genetic differences. But marijuana abuse is common among sufferers of schizophrenia, and recent studies have shown that the psychoactive ingredient of marijuana can induce some symptoms of schizophrenia in healthy volunteers.

Now new research helps explain why marijuana is linked to schizophrenia, and why it might have detrimental effects on attention and memory.

In this rat study, a drug that mimics the psychoactive ingredient of marijuana (by activating the cannabinoid receptors) produced significant disruption in brain networks, with brain activity becoming uncoordinated and inaccurate.

In recent years it has become increasingly clear that synchronized brainwaves play a crucial role in information processing — especially that between the hippocampus and prefrontal cortex (see, for example, my reports last month on theta waves improving retrieval and the effect of running on theta and gamma rhythms). Interactions between the hippocampus and prefrontal cortex seem to be involved in working memory functions, and may provide the mechanism for bringing together memory and decision-making during goal-directed behaviors.

Consistent with this, during decision-making on a maze task, hippocampal theta waves and prefrontal gamma waves were impaired, and the theta synchronization between the two was disrupted. These effects correlated with impaired performance on the maze task.

These findings are consistent with earlier findings that drugs that activate the cannabinoid receptors disrupt the theta rhythm in the hippocampus and impair spatial working memory. This experiment extends that result to coordinated brainwaves beyond the hippocampus.

Similar neural activity is observed in schizophrenia patients, as well as in healthy carriers of a genetic risk variant.

The findings add to the evidence that working memory processes involve coordination between the prefrontal cortex and the hippocampus through theta rhythm synchronization. The findings are consistent with the idea that items are encoded and indexed along the phase of the theta wave into episodic representations and transferred from the hippocampus to the neocortex as a theta phase code. By disrupting that code, cannabis makes it more difficult to retain and index the information relevant to the task at hand.
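To make the "theta phase code" idea a little more concrete, here is a toy sketch. This is a generic Lisman-style illustration of phase coding, not the specific model tested in this study, and all the item names are made up:

```python
# Toy sketch of a theta phase code: each remembered item occupies
# one gamma sub-cycle nested inside the slower theta cycle, so an
# item's identity is indexed by the theta phase at which its cell
# assembly fires.
def theta_phase_deg(slot, n_gamma_slots=7):
    """Theta phase (in degrees) assigned to the item occupying
    gamma slot `slot` within one theta cycle."""
    if not 0 <= slot < n_gamma_slots:
        raise ValueError("more items than gamma slots: item cannot be indexed")
    return slot * 360.0 / n_gamma_slots

# Four items from a hypothetical maze trial (illustrative names only)
items = ["start-arm", "left-turn", "odour-cue", "goal"]
code = {item: theta_phase_deg(i) for i, item in enumerate(items)}
# Disrupting theta (as cannabinoid-receptor activation appears to do)
# scrambles these phase assignments, so downstream readers can no
# longer tell the items apart by when they fire.
```

On this picture, it is easy to see why a degraded theta rhythm hurts working memory: the information is still arriving, but the index that keeps the items distinct is gone.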

In the study, two rhesus monkeys were given a standard human test of working memory capacity: an array of colored squares, varying from two to five squares, was shown for 800 msec on a screen. After a delay, varying from 800 to 1000 msec, a second array was presented. This array was identical to the first except for a change in color of one item. The monkey was rewarded if its eyes went directly to this changed square (an infra-red eye-tracking system was used to determine this). During all this, activity from single neurons in the lateral prefrontal cortex (LPFC) and the lateral intraparietal area — areas critical for short-term memory and implicated in human capacity limitations — was recorded.

As with humans, the more squares in the array, the worse the performance (from 85% correct for two squares to 66.5% for five). Their working memory capacity was calculated at 3.88 objects — i.e. essentially the same as that of humans.
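For readers wondering where a figure like 3.88 comes from: capacity in change-detection tasks is usually estimated with something like Cowan's K formula. I'm assuming the standard estimator here (the paper may use a variant), and the numbers below are purely illustrative, not taken from the study:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K: a standard working-memory capacity estimate for
    change-detection tasks.  K = N * (H - F), where N is the number
    of items shown, H the hit rate (changes correctly detected), and
    F the false-alarm rate (changes reported when there were none)."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative numbers only: with five items, an 85% hit rate and an
# 8% false-alarm rate,
k = cowan_k(5, 0.85, 0.08)  # K = 5 * 0.77 = 3.85, i.e. roughly 4 items
```

The point of subtracting the false-alarm rate is to correct for guessing: a monkey (or human) who simply guessed "changed" a lot would otherwise look like it had a larger capacity than it does.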

That in itself is interesting, speaking as it does to the question of how human intelligence differs from other animals. But the real point of the exercise was to watch what is happening at the single neuron level. And here a surprise occurred.

That total capacity of around 4 items was composed of two independent, smaller capacities in the right and left halves of the visual space. What matters is how many objects fall in each half of the visual field. Each hemifield can only handle two objects. Thus, if the left side of the visual space contains three items, and the right side only one, information about the three items from the left side will be degraded. If the left side contains four items and the right side two, those two on the right side will be fine, but information from the four items on the left will be degraded.

Notice that the effect of more items than two in a hemifield is to decrease the total information from all the items in the hemifield — not to simply lose the additional items.

The behavioral evidence correlated with brain activity, with object information in LPFC neurons decreasing with increasing number of items in the same hemifield, but not the opposite hemifield, and the same for the intraparietal neurons (the latter are active during the delay; the former during the presentation).

The findings resolve a long-standing debate: does working memory function like slots, which we fill one by one with items until all are full, or as a pool that fills with information about each object, with some information being lost as the number of items increases? And now we know why there is evidence for both views, because both contain truth. Each hemisphere might be considered a slot, but each slot is a pool.
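That "each slot is a pool" idea can be put into a toy model. All the numbers here are purely illustrative (a precision of 1.0 just means "intact"), not the paper's actual fits:

```python
def precision(n_items, pool_capacity=2.0):
    """Toy pool model for one hemifield: roughly two items' worth of
    information, shared equally among however many items are present.
    Returns the information retained per item (1.0 = intact)."""
    if n_items == 0:
        return 0.0
    return min(1.0, pool_capacity / n_items)

def encode(n_left, n_right):
    """The two hemifields are independent pools: overloading one side
    degrades only that side's items, never the other side's."""
    return precision(n_left), precision(n_right)

left, right = encode(3, 1)   # three items on the left, one on the right
# left items are each degraded (~0.67), while the right item is intact (1.0)
```

Note how the model reproduces the key behavioral finding: adding a third item to one hemifield degrades all the items on that side rather than simply dropping the extra one, while the opposite hemifield is untouched.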

Another long-standing question is whether the capacity limit is a failure of perception or memory. These findings indicate that the problem is one of perception. The neural recordings showed information about the objects being lost even as the monkeys were viewing them, not later as they were remembering what they had seen.

All of this is important theoretically, but there are also immediate practical applications. The work suggests that information should be presented in such a way that it’s spread across the visual space — for example, dashboard displays should spread the displays evenly on both sides of the visual field; medical monitors that currently have one column of information should balance it in right and left columns; security personnel should see displays scrolled vertically rather than horizontally; working memory training should present information in a way that trains each hemisphere separately. The researchers are forming collaborations to develop these ideas.

[2335] Buschman TJ, Siegel M, Roy JE, Miller EK. Neural substrates of cognitive capacity limitations. Proceedings of the National Academy of Sciences [Internet]. 2011. Available from: http://www.pnas.org/content/early/2011/06/13/1104666108.abstract

In a new study, rats learned which lever to press to receive water, where the correct lever depended on which lever they had pressed previously (the levers were retractable; there was a variable delay between the first and second presentation of the levers). Microelectrodes in the rats’ brains provided data that enabled researchers to work out the firing patterns of neurons in CA1 that resulted from particular firing patterns in CA3 (previous research had established that long-term memory involves CA3 outputs being received in CA1).

Normal neural communication between these two subregions of the hippocampus was then chemically inhibited. While the rats still remembered the general rule, and still remembered that pressing the levers would gain them water, they could only remember which lever they had pressed for 5-10 seconds.

An artificial hippocampal system that could reproduce effective firing patterns (established in earlier training) was then implanted in the rats’ brains and long-term memory function was restored. Furthermore, when the ‘memory prosthetic’ was implanted in animals whose hippocampus was functioning normally, their memory improved.

The findings open up amazing possibilities for ameliorating brain damage. There is of course the greatly limiting factor that effective memory traces (spatiotemporal firing patterns) need to be recorded for each activity. This will be particularly problematic for individuals with significant damage. Perhaps one day we will all ‘record’ ourselves as a matter of course, in the same way that we might put by blood or genetic material ‘in case’! Still, it’s an exciting development.

The next step will be to repeat these results in monkeys.

I’ve always felt that better thinking was associated with my brain working ‘in a higher gear’ — literally working at a faster rhythm. So I was particularly intrigued by the findings of a recent mouse study that found that brainwaves associated with learning became stronger as the mice ran faster.

In the study, 12 male mice were implanted with microelectrodes that monitored gamma waves in the hippocampus, then trained to run back and forth on a linear track for a food reward. Gamma waves are thought to help synchronize neural activity in various cognitive functions, including attention, learning, temporal binding, and awareness.

We know that the hippocampus has specialized ‘place cells’ that record where we are and help us navigate. But to navigate the world, to create a map of where things are, we need to also know how fast we are moving. Having the same cells encode both speed and position could be problematic, so researchers set out to find how speed was being encoded. To their surprise and excitement, they found that the strength of the gamma rhythm grew substantially as the mice ran faster.

The results also confirmed recent claims that the gamma rhythm, which oscillates between 30 and 120 times a second, can be divided into slow and fast signals (20-45 Hz vs 45-120 Hz for mice, consistent with the 30-55 Hz vs 45-120 Hz bands found in rats) that originate from separate parts of the brain. The slow gamma waves in the CA1 region of the hippocampus were synchronized with slow gamma waves in CA3, while the fast gamma in CA1 were synchronized with fast gamma waves in the entorhinal cortex.

The two signals became increasingly separated with increasing speed, because the two bands were differentially affected by speed. While the slow waves increased linearly, the fast waves increased logarithmically. This differential effect could have to do with mechanisms in the source regions (CA3 and the medial entorhinal cortex, respectively), or to mechanisms in the different regions in CA1 where the inputs terminate (the waves coming from CA3 and the entorhinal cortex enter CA1 in different places).

In the hippocampus, gamma waves are known to interact with theta waves. Further analysis of the data revealed that the effects of speed on gamma rhythm only occurred within a narrow range of theta phases — but this ‘preferred’ theta phase also changed with running speed, more so for the slow gamma waves than the fast gamma waves (which is not inconsistent with the fact that slow gamma waves are more affected by running speed than fast gamma waves). Thus, while slow and fast gamma rhythms preferred similar phases of theta at low speeds, the two rhythms became increasingly phase-separated with increasing running speed.

What’s all this mean? Previous research has shown that if inputs from CA3 and the entorhinal cortex enter CA1 at the same time, the kind of long-term changes at the synapses that bring about learning are stronger and more likely in CA1. So at low speeds, synchronous inputs from CA3 and the entorhinal cortex at similar theta phases make them more effective at activating CA1 and inducing learning. But the faster you move, the more quickly you need to process information. The stronger gamma waves may help you do that. Moreover, the theta phase separation of slow and fast gamma that increases with running speed means that activity in CA3 (slow gamma source) increasingly anticipates activity in the medial entorhinal cortex (fast gamma source).

What does this mean at the practical level? Well, at this point it can only be speculation that moving / exercising can affect learning and attention, but I personally am taking this on board. Most of us think better when we walk. This suggests that if you’re having trouble focusing and don’t have time for that, maybe walking down the hall or even jogging on the spot will help bring your brain cells into order!

Pushing speculation even further, I note that meditation by expert meditators has been associated with changes in gamma and theta rhythms. And in an intriguing comparison of the effect of spoken versus sung presentation on learning and remembering word lists, the group that sang showed greater coherence in both gamma and theta rhythms (in the frontal lobes, admittedly, but they weren’t looking elsewhere).

So, while we’re a long way from pinning any of this down, it may be that all of these — movement, meditation, music — can be useful in synchronizing your brain rhythms in a way that helps attention and learning. This exciting discovery will hopefully be the start of an exploration of these possibilities.

Trying to learn two different things one after another is challenging. Almost always some of the information from the first topic or task gets lost. Why does this happen? A new study suggests the problem occurs when the two information-sets interact, and demonstrates that disrupting that interaction prevents interference. (The study is a little complicated, but bear with me, or skip to the bottom for my conclusions.)

In the study, young adults learned two memory tasks back-to-back: a list of words, and a finger-tapping motor skills task. Immediately afterwards, they received either sham stimulation or real transcranial magnetic stimulation (TMS) to the dorsolateral prefrontal cortex (DLPFC) or the primary motor cortex. Twelve hours later the same day, they were re-tested.

As expected from previous research, word recall (being the first-learned task) declined in the control condition (sham stimulation), and this decline correlated with initial skill in the motor task. That is, the better they were at the second task, the more they forgot from the first task. This same pattern occurred among those whose motor cortex had been stimulated. However, there was no significant decrease in word recall for those who had received TMS to the dorsolateral prefrontal cortex.

Learning of the motor skill didn't differ between the three groups, indicating that this effect wasn't due to a disruption of the second task. Rather, it seems that the two tasks were interacting, and TMS to the DLPFC disrupted that interaction. This hypothesis was supported when the motor learning task was replaced by a motor performance task, which shouldn’t interfere with the word-learning task (the motor performance task was almost identical to the motor learning task except that it didn’t have a repeating sequence that could be learned). In this situation, TMS to the DLPFC produced a decrease in word recall (as it did in the other conditions, and as it would after a word-learning task without any other task following).

In the second set of experiments, the order of the motor and word tasks was reversed. Similar results occurred, with this time stimulation to the motor cortex being the effective intervention. In this case, there was a significant increase in motor skill on re-testing — which is what normally happens when a motor skill is learned on its own, without interference from another task (see my blog post on Mempowered for more on this). The word-learning task was then replaced with a vowel-counting task, which produced a non-significant trend toward a decrease in motor skill learning when TMS was applied to the motor cortex.

The effect of TMS depends on the activity in the region at the time of application. In this case, TMS was applied to the primary motor cortex and the DLPFC in the right hemisphere, because the right hemisphere is thought to be involved in integrating different types of information. The timing of the stimulation was critical: not during learning, and long before testing. The timing was designed to maximize any effects on interference between the two tasks.

The effect in this case mimics that of sleep — sleeping between tasks reduces interference between them. It’s suggested that both TMS and sleep reduce interference by reducing the communication between the prefrontal cortex and the mediotemporal lobe (of which the hippocampus is a part).

Here’s the problem: we're consolidating one set of memories while encoding another. So, we can do both at the same time, but as with any multitasking, one task is going to be done better than the other. Unsurprisingly, encoding appears to have priority over consolidation.

So something needs to regulate the activity of these two concurrent processes. Maybe something looks for commonalities between two actions occurring at the same time — this is, after all, what we’re programmed to do: we link things that occur together in space and time. So why shouldn’t that occur at this level too? Something’s just happened, and now something else is happening, and chances are they’re connected. So something in our brain works on that.

If the two events/sets of information are connected, that’s a good thing. If they’re not, we get interference, and loss of data.

So when we apply TMS to the prefrontal cortex, that integrating processor is perhaps disrupted.

The situation may be a little different where the motor task is followed by the word-list, because motor skill consolidation (during wakefulness at least) may not depend on the hippocampus (although declarative encoding does). However, the primary motor cortex may act as a bridge between motor skills and declarative memories (think of how we gesture when we explain something), and so it may be this region that provides a place where the two types of information can interact (and thus interfere with each other).

In other words, the important thing appears to be whether consolidation of the first task occurs in a region where the two sets of information can interact. If it does, and assuming you don’t want the two information-sets to interact, then you want to disrupt that interaction.

Applying TMS is not, of course, a practical strategy for most of us! But the findings do suggest an approach to reducing interference. Sleep is one way, and even brief 20-minute naps have been shown to help learning. An intriguing speculation (I just throw this out) is that meditation might act similarly (rather like a sorbet between courses, clearing the palate).

Lacking a way to disrupt the interaction, you might take this as a warning that it’s best to give your brain time to consolidate one lot of information before embarking on an unrelated set — even if it's in what appears to be a completely unrelated domain. This is particularly so as we get older, because consolidation appears to take longer as we age. For children, on the other hand, this is not such a worry. (See my blog post on Mempowered for more on this.)

[2338] Cohen DA, Robertson EM. Preventing interference between different memory tasks. Nat Neurosci [Internet]. 2011;14(8):953-955. Available from: http://dx.doi.org/10.1038/nn.2840

I’ve spoken often about the spacing effect — that it’s better to spread out your learning than have it all massed in a block. A study in which mice were trained on an eye movement task (one that allowed precise measurement of learning in the brain) compared learning durability after massed training versus training spread over various spaced schedules (total spans of 2.5 hours to 8 days, with intervals of 30 minutes to one day). In the case of massed training, the learning achieved at the end of training disappeared within 24 hours. However, learning gained in spaced training did not.

Moreover, when a region in the cerebellum connected to motor nuclei involved in eye movement (the flocculus) was anesthetized, the learning achieved from one hour of massed training was eliminated, while learning achieved from an hour of training spaced out over four hours was unaffected. This suggests that the memories had been transferred out of the flocculus (to the vestibular nuclei) within four hours.

However, when protein synthesis in the flocculus was blocked, learning from spaced training was impaired, while learning from massed training was not. This suggests that proteins synthesized in the flocculus play a vital part in the transfer to the vestibular nuclei.

What governs whether or not you’ll retrieve a memory? I’ve talked about the importance of retrieval cues, of the match between the cue and the memory code you’re trying to retrieve, of the strength of the connections leading to the code. But these all have to do with the memory code. The state of your brain at the moment of retrieval also matters.

Theta brainwaves, in the hippocampus especially, have been shown to be particularly important in memory function. It has been suggested that theta waves before an item is presented for processing lead to better encoding. Now a new study reveals that, when volunteers had to memorize words with a related context, they were better at later remembering the context of the word if high theta waves were evident in their brains immediately before being prompted to remember the item.

In the study, 17 students made pleasantness or animacy judgments about a series of words. Shortly afterwards, they were presented with both new and studied words, and asked to indicate whether the word was old or new, and if old, whether the word had been encountered in the context of “pleasant” or “alive”. Each trial began with a 1000 ms presentation of a simple mark for the student to focus on. Theta activity during this fixation period correlated with successful retrieval of the episodic memory relating to that item, and larger theta waves were associated with better source memory accuracy (memory for the context).

Theta activity has not been found to be particularly associated with greater attention (the reverse, if anything). It seems more likely that theta activity reflects a state of mind that is oriented toward evaluating retrieval cues (“retrieval mode”), or that it reflects reinstatement of the contextual state employed during study.

The researchers are currently investigating whether you can deliberately put your brain into a better state for memory recall.

[2333] Addante RJ, Watrous AJ, Yonelinas AP, Ekstrom AD, Ranganath C. Prestimulus theta activity predicts correct source memory retrieval. Proceedings of the National Academy of Sciences [Internet]. 2011 ;108(26):10702 - 10707. Available from: http://www.pnas.org/content/108/26/10702.abstract

In a recent study, 40 undergraduate students learned ten lists of ten pairs of Swahili-English words, with tests after each set of ten. On these tests, each correct answer was followed by an image, either a neutral one or one designed to arouse negative emotions, or by a blank screen. They then did a one-minute multiplication test before moving on to the next section.

On the final test of all 100 Swahili-English pairs, participants did best on items that had been followed by the negative pictures.

In a follow-up experiment, students were shown the images two seconds after successful retrieval. The results were the same.

In the final experiment, the section tests were replaced by a restudying period, where each presentation of a pair was followed by an image or blank screen. The effect did not occur, demonstrating that the effect depends on retrieval.

The study focused on negative emotion because earlier research has found no such memory benefit for positive images (including images designed to be sexually arousing).

The findings emphasize the importance of the immediate period after retrieval, suggesting that this is a fruitful time for manipulations that enhance or impair memory. This is consistent with the idea of reconsolidation — that when information is retrieved from memory, it is in a labile state, able to be changed. Thus, by presenting a negative image when the retrieved memory is still in that state, the memory absorbs some of that new context.

[2340] Finn B, Roediger HL. Enhancing Retention Through Reconsolidation. Psychological Science [Internet]. 2011 ;22(6):781 - 786. Available from: http://pss.sagepub.com/content/22/6/781.abstract

Childhood amnesia — our inability to remember almost anything that happened to us when very young — is always interesting. It’s not as simple as an inability to form long-term memories. Most adults can’t remember events earlier than 3-4 years (there is both individual and cultural variability), even though 2-year-olds are perfectly capable of remembering past events (side-note: memory durability increases from about a day to a year between the ages of six months and two years). Additionally, research has shown that young children (6-8) can recall events that happened 4-6 years previously.

Given that the ability to form durable memories is in place, what governs which memories are retained? The earliest memories adults retain tend to be of events that have aroused emotions. Nothing surprising about that. More interesting is research suggesting that children can only describe memories of events using words they knew when the experience occurred — the study of young children (27, 33 or 39 months) found that, when asked about the experimental situation (involving a "magic shrinking machine") six months later, the children easily remembered how to operate the device, but were only able to describe the machine in words they knew when they first learned how to operate it.

Put another way, this isn’t so surprising: our memories depend on how we encode them at the time. So two things may well be in play in early childhood amnesia: limited encoding abilities (influenced by, but not restricted to, language) may mean the memories made are poor in quality (whatever that might mean); and the development of encoding abilities means that later attempts to retrieve the memory may be far from matching the original memory. Or as one researcher put it, the format is different.

A new study about childhood amnesia looks at a different question: does the boundary move? 140 children (aged 4-13) were asked to describe their three earliest memories, and then asked again two years later (not all could provide as many as three early memories; the likelihood improved with age).

While more than a third of the 10- to 13-year-olds described the same memory as their very earliest on both occasions, children between 4 and 7 at the first interview showed very little overlap between the memories (only 2 of the 27 4-5 year-olds, and 3 of the 23 6-7 year-olds). There was a clear difference between the overlap seen in this youngest group (4-7) and the oldest (10-13), with the in-between group (8-9) being placed squarely between the two (20.7% compared to 10% and 36%).

Moreover, children under 8 at the first interview mostly had no overlap between any of the memories they provided at the two interviews, while those who were at least 8 years old did. For the oldest groups (10-13), more than half of all the memories they provided were the same.

The children were also given recall cues for memories they hadn’t spontaneously recalled. That is, they were given synopses of both their own earlier memories and other children’s earlier memories. Almost all of the false memories were correctly rejected (the exceptions mostly occurred in the youngest group, those initially aged 4-5). However, the youngest children failed to recognize over a third of their own memories, while almost all the oldest children’s memories were recognized (90% by 8-11 year-olds; all but one by 12-13 year-olds). Age at the time of the event didn’t seem to matter for the oldest or the very youngest groups, but for 6-9 year-olds, events recalled after cuing tended to have occurred at least a year later than events that weren’t recalled even after cuing.

In general, the earliest memories reported at the follow-up were several months later than those reported previously. The average age at the time of the earliest memory was 32 months at the first interview, and 39.6 months at the follow-up. This shift in time occurred across all ages. Moreover, for the very earliest memory, the time-shift was even greater: a whole year.

In connection with the earlier study I mentioned, regarding the importance of language and encoding, it is worth noting that by and large, when the same memories were recalled, the same amount of information was recalled.

There was no difference between the genders.

The findings don’t rule out theories of the role of language. It seems clear to me that more than one thing is going on in childhood amnesia. These findings bear on another aspect: the forgetting curve.

It has been suggested that forgetting in children reflects a different function than forgetting in adults. Forgetting in adults matches a power function, reflecting the fact that forgetting slows over time (as is often quoted, most forgetting occurs in the first 24 hours; the longer you remember something, the more likely you are to remember it forever). However, there is some evidence that forgetting in children is best modeled by an exponential function, reflecting the continued vulnerability of memories. It seems they are not being consolidated in the way adults’ memories are. This may be because children don’t yet have the cognitive structures in place that allow them to embed new memories in a dense network.
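To make the contrast concrete, here’s a minimal sketch of the two forgetting functions. The function names, parameter values, and time points are purely illustrative (my own, not fitted to any study): under a power law the rate of loss declines as the memory ages, while under an exponential a constant fraction is lost per unit time.

```python
import math

def power_retention(t, a=0.9, b=0.5):
    """Power-law forgetting: loss slows as the memory ages."""
    return a * (t + 1) ** (-b)

def exponential_retention(t, a=0.9, b=0.5):
    """Exponential forgetting: a constant fraction is lost per unit time."""
    return a * math.exp(-b * t)

# Retention after 0, 1, 7 and 30 days: the power-law memory degrades
# ever more slowly, while the exponential memory stays vulnerable.
for t in (0, 1, 7, 30):
    print(t, round(power_retention(t), 3), round(exponential_retention(t), 3))
```

Both curves start at the same level, but by day 30 the power-law curve retains far more — which is the sense in which adult memories, once they survive the first day or so, tend to survive indefinitely.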

As we get older, when we suffer memory problems, we often laughingly talk about our brain being ‘full up’, with no room for more information. A new study suggests that in some sense (but not the direct one!) that’s true.

To make new memories, we need to recognize that they are new memories. That means we need to be able to distinguish between events, or objects, or people. We need to distinguish between them and representations already in our database.

We are all familiar with the experience of wondering if we’ve done something. Is it that we remember ourselves doing it today, or are we remembering a previous occasion? We go looking for the car in the wrong place because the memory of an earlier occasion has taken precedence over today’s event. As we age, we do get much more of this interference from older memories.

In a new study, the brains of 40 college students and older adults (60-80) were scanned while they viewed pictures of everyday objects and classified them as either "indoor" or "outdoor." Some of the pictures were similar but not identical, and others were very different. It was found that while the hippocampus of young students treated all the similar pictures as new, the hippocampus of older adults had more difficulty with this, requiring much more distinctiveness for a picture to be classified as new.

Later, the participants were presented with completely new pictures to classify, and then, only a few minutes later, shown another set of pictures and asked whether each item was "old," "new" or "similar." Older adults tended to have fewer 'similar' responses and more 'old' responses instead, indicating that they could not distinguish between similar items.

The inability to recognize information as "similar" to something seen recently is associated with “representational rigidity” in two areas of the hippocampus: the dentate gyrus and CA3 region. The brain scans from this study confirm this, and find that this rigidity is associated with changes in the dendrites of neurons in the dentate/CA3 areas, and impaired integrity of the perforant pathway — the main input path into the hippocampus, from the entorhinal cortex. The more degraded the pathway, the less likely the hippocampus is to store similar memories as distinct from old memories.

Apart from helping us understand the mechanisms of age-related cognitive decline, the findings also have implications for the treatment of Alzheimer’s. The hippocampus is one of the first brain regions to be affected by the disease. The researchers plan to conduct clinical trials in early Alzheimer's disease patients to investigate the effect of a drug on hippocampal function and pathway integrity.

It’s well-established that feelings of encoding fluency are positively correlated with judgments of learning, so it’s been generally believed that people primarily use the simple rule, easily learned = easily remembered (ELER), to work out whether they’re likely to remember something (as discussed in the previous news report). However, new findings indicate that the situation is a little more complicated.

In the first experiment, 75 English-speaking students studied 54 Indonesian-English word pairs. Some of these were very easy, with the English words nearly identical to their Indonesian counterpart (e.g, Polisi-Police); others required more effort but had a connection that helped (e.g, Bagasi-Luggage); others were entirely dissimilar (e.g., Pembalut-Bandage).

Participants were allowed to study each pair for as long as they liked, then asked how confident they were about being able to recall the English word when supplied the Indonesian word on an upcoming test. They were tested at the end of their study period, and also asked to fill in a questionnaire which assessed the extent to which they believed that intelligence is fixed or changeable.

It’s long been known that theories of intelligence have important effects on people's motivation to learn. Those who believe each person possesses a fixed level of intelligence (entity theorists) tend to disengage when something is challenging, believing that they’re not up to the challenge. Those who believe that intelligence is malleable (incremental theorists) keep working, believing that more time and effort will yield better results.

The study found that those who believed intelligence is fixed did indeed follow the ELER heuristic, with their judgment of how well an item was learned nicely matching encoding fluency.

However those who saw intelligence as malleable did not follow the rule, but rather seemed to be following the reverse heuristic: that effortful encoding indicates greater engagement in learning, and thus is a sign that they are more likely to remember. This group therefore tended to be marginally underconfident of easy items, marginally overconfident for medium-level items, and significantly overconfident for difficult items.

However, the entanglement of item difficulty and encoding fluency weakens this finding, and accordingly a second experiment separated these two attributes.

In this experiment, 41 students were presented with two lists of nine words, one list of which was in small font (18-point Arial) and one in large font (48-point Arial). Each word was displayed for four seconds. While font size made no difference to their actual levels of recall, entity theorists were much more confident of recalling the large-size words than the small-size ones. The incremental theorists were not, however, affected by font-size.

It is suggested that the failure to find evidence of a ‘non-fluency heuristic’ in this case may be because participants had no control over learning time, therefore were less able to make relative judgments of encoding effort. Nevertheless, the main finding, that people varied in their use of the fluency heuristic depending on their beliefs about intelligence, was clear in both cases.

[2182] Miele DB, Finn B, Molden DC. Does Easily Learned Mean Easily Remembered?. Psychological Science [Internet]. 2011 ;22(3):320 - 324. Available from: http://pss.sagepub.com/content/22/3/320.abstract

Research has shown that people are generally poor at predicting how likely they are to remember something. A recent study tested the theory that the reason we’re so often inaccurate is that we make predictions about memory based on how we feel while we're encountering the information to be learned, and that can lead us astray.

In three experiments, each involving about 80 participants ranging in age from late teens to senior citizens, participants were serially shown words in large or small fonts and asked to predict how well they'd remember each (actual font sizes depended on the participants’ browsers, since this was an online experiment and participants were in their own homes, but the larger size was four times larger than the other).

In the first experiment, each word was presented either once or twice, and participants were told if they would have another chance to study the word. The length of time the word was displayed on the first occasion was controlled by the participant. On the second occasion, words were displayed for four seconds, and participants weren’t asked to make a new prediction. At the end of the study phase, they had two minutes to type as many words as they remembered.

Recall was significantly better when an item was seen twice. Recall wasn’t affected by font size, but participants were significantly more likely to believe they’d recall those presented in larger fonts. While participants realized seeing an item twice would lead to greater recall, they greatly underestimated the benefits.

Because people so grossly discounted the benefit of a single repetition, in the next experiment the comparison was between one and four study trials. This time, participants gave more weight to having three additional study trials versus none, but nevertheless, their predictions were still well below the actual benefits of the repetitions.

In the third experiment, participants were given a simplified description of the first experiment and either asked what effect they’d expect font size to have, or what effect having two study trials would have. The results (similar levels of belief in the benefits of each condition) resembled neither the results in the first experiment (indicating that those people’s predictions hadn’t been made on the basis of their beliefs about memory effects), nor the actual performance (demonstrating that people really aren’t very good at predicting their memory performance).

These findings were confirmed in a further experiment, in which participants were asked about both variables (rather than just one).

The findings confirm other evidence that (a) general memory knowledge tends to be poor, (b) personal memory awareness tends to be poor, and (c) ease of processing is commonly used as a heuristic to predict whether something will be remembered.

 

Addendum: a nice general article on this topic by the lead researcher Nate Kornell has just come out in Miller-McCune.

Kornell, N., Rhodes, M. G., Castel, A. D., & Tauber, S. K. (in press). The ease of processing heuristic and the stability bias: Dissociating memory, memory beliefs, and memory judgments. Psychological Science.

Most memory research has concerned itself with learning over time, but many memories, of course, become fixed in our mind after only one experience. The mechanism by which we acquire knowledge from single events is not well understood, but a new study sheds some light on it.

The study involved participants being presented with images degraded almost beyond recognition. After a few moments, the original image was revealed, generating an “aha!” type moment. Insight is an experience that is frequently remembered well after a single occurrence. Participants repeated the exercise with dozens of different images.

Memory for these images was tested a week later, when participants were again shown the degraded images, and asked to recall details of the actual image.

Around half the images were remembered. But what’s intriguing is that the initial learning experience took place in a brain scanner, and to the researchers’ surprise, one of the highly active areas during the moment of insight was the amygdala. Moreover, high activity in the amygdala predicted that those images would be remembered a week later.

It seems the more we learn about the amygdala, the further its involvement extends. In this case, it’s suggested that the amygdala signals to other parts of the brain that an event is significant. In other words, it gives a value judgment, decreeing whether an event is worthy of being remembered. Presumably the greater the value, the more effort the brain puts into consolidating the information.

It is not thought, from the images used, that those associated with high activity in the amygdala were more ‘emotional’ than the other images.

A study involving 125 younger (average age 19) and older (average age 69) adults has revealed that while younger adults showed better explicit learning, older adults were better at implicit learning. Implicit memory is our unconscious memory, which influences behavior without our awareness.

In the study, participants pressed buttons in response to the colors of words and random letter strings — only the colors were relevant, not the words themselves. They then completed word fragments. In one condition, they were told to use words from the earlier color task to complete the fragments (a test of explicit memory); in the other, this task wasn’t mentioned (a test of implicit memory).

Older adults showed better implicit than explicit memory, and better implicit memory than the younger adults; for the younger adults, the reverse was true. However, on a further test which required younger participants to engage in a number task simultaneously with the color task, younger adults behaved like older ones.

The findings indicate that shallower and less focused processing goes on during multitasking, and (but not inevitably!) with age. The fact that younger adults behaved like older ones when distracted points to the problem, for which we now have quite a body of evidence: with age, we tend to become more easily distracted.

Two experiments involving a total of 191 volunteers have investigated the parameters of sleep’s effect on learning. In the first experiment, people learned 40 pairs of words, while in the second experiment, subjects played a card game matching pictures of animals and objects, and also practiced sequences of finger taps. In both groups, half the volunteers were told immediately following the tasks that they would be tested in 10 hours. Some of the participants slept during this time.

As expected, those who slept performed better on the tests (all of them: word recall, visuospatial, and procedural motor memory), but the really interesting finding is that only those who both slept and knew a test was coming showed improved recall. These people showed greater brain activity during deep or "slow-wave" sleep, and for these people only, the greater the activity during slow-wave sleep, the better their recall.

Those who didn’t sleep, however, were unaffected by whether they knew there would be a test or not.

Of course, this doesn’t mean you never remember things you don’t intend or want to remember! There is more than one process going on in the encoding and storing of our memories. However, it does confirm the importance of intention, and cast light perhaps on some of your learning failures.

We have thought of memory problems principally in terms of forgetting, but using a new experimental method with amnesic animals has revealed that confusion between memories, rather than loss of memory, may be more important.

While previous research has found that amnesic animals couldn't distinguish between a new and an old object, the new method allows responses to new and old objects to be measured separately. Control animals, shown an object and then shown either the same or another object an hour later, spent more time (as expected) with the new object. However, amnesic animals spent less time with the new object, indicating they had some (false) memory of it.

The researchers concluded that the memory problems were the result of the brain's inability to register complete memories of the objects, and that the remaining, less detailed memories were more easily confused. In other words, it’s about poor encoding, not poor retrieval.

Excitingly, when the amnesic animals were put in a dark, quiet space before the memory test, they performed perfectly on the test.

The finding not only points to a new approach for helping those with memory problems (for example, emphasizing differentiating details), but also demonstrates how detrimental interference from other things can be when we are trying to remember something — an issue of particular relevance in modern information-rich environments. The extent to which these findings apply to other memory problems, such as dementia, remains to be seen.

In a recent study, volunteers were asked to solve a problem known as the Tower of Hanoi, a game in which you have to move stacked disks of decreasing size from one peg to another, one disk at a time, without ever placing a larger disk on a smaller one. Later, they were asked to explain how they did it (very difficult to do without using your hands). The volunteers then played the game again. But for some of them, the weights of the disks had secretly been reversed, so that the smallest disk was now the heaviest and needed two hands.
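For readers unfamiliar with the puzzle, the classic recursive solution can be sketched as follows (an illustrative sketch only; it plays no part in the study itself):

```python
def hanoi(n, source, target, spare, moves=None):
    """Return the sequence of moves that shifts n stacked disks from the
    source peg to the target peg, moving one disk at a time and never
    placing a larger disk on a smaller one."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
    else:
        hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks out of the way
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top
    return moves

# n disks always take the minimum 2**n - 1 moves: 3 disks -> 7 moves.
print(len(hanoi(3, "A", "C", "B")))  # 7
```

The recursion mirrors the way people typically verbalize their solution — deal with the smaller stack, move the big disk, repeat — which is exactly the kind of explanation that invites gesture.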

People who had used one hand in their gestures when talking about moving the small disk were in trouble when that disk got heavier. They took longer to complete the task than did people who used two hands in their gestures—and the more one-handed gestures they used, the longer they took.

Those who had not been asked to explain their solution (and had simply replayed the game in the interval) were unaffected by the change in disk weights. So even though they had repeated the action with the original weights, they weren’t thrown by the unexpected change, as those who gestured with one hand were.

The findings add to the evidence that gestures make thought concrete. Related research has indicated that children can come to understand abstract concepts in mathematics and science more readily if they gesture (and perhaps if their teachers gesture).

[2043] Beilock SL, Goldin-Meadow S. Gesture Changes Thought by Grounding It in Action. Psychological Science [Internet]. 2010 ;21(11):1605 - 1610. Available from: http://pss.sagepub.com/content/21/11/1605.abstract

We know active learning is better than passive learning, but for the first time a study gives us some idea of how that works. Participants in the imaging study were asked to memorize an array of objects and their exact locations in a grid on a computer screen. Only one object was visible at a time. Those in the "active study” group used a computer mouse to guide the window revealing the objects, while those in the “passive study” group watched a replay of the window movements recorded in a previous trial by an active subject. They were then tested by having to place the items in their correct positions. After a trial, the active and passive subjects switched roles and repeated the task with a new array of objects.

The active learners learned the task significantly better than the passive learners. Better spatial recall correlated with higher and better coordinated activity in the hippocampus, dorsolateral prefrontal cortex, and cerebellum, while better item recognition correlated with higher activity in the inferior parietal lobe, parahippocampal cortex and hippocampus.

The critical role of the hippocampus was supported when the experiment was replicated with those who had damage to this region — for them, there was no benefit in actively controlling the viewing window.

This is something of a surprise to researchers. Although the hippocampus plays a crucial role in memory, it has been thought of as a passive participant in the learning process. This finding suggests that it is actually part of an active network that controls behavior dynamically.

If our brains are full of clusters of neurons that resolutely respond only to specific features (as suggested in my earlier report), how do we bring it all together, and how do we switch from one point of interest to another? A new study using resting state data from 58 healthy adolescents and young adults has found that the intraparietal sulcus, situated at the intersection of visual, somatosensory, and auditory association cortices and known to be a key area for processing attention, contains a miniature map of all the things we can pay attention to (visual, auditory, motor stimuli, etc.).

Moreover, this map is copied in at least 13 other places in the brain, all of which are connected to the intraparietal sulcus. Each copy appears to do something different with the information. For instance, one map processes eye movements while another processes analytical information. This map of the world may be a fundamental building block for how information is represented in the brain.

There were also distinct clusters within the intraparietal sulcus that showed different levels of connectivity to auditory, visual, somatosensory, and default mode networks, suggesting they are specialized for different sensory modalities.

The findings add to our understanding of how we can shift our attention so precisely, and may eventually help us devise ways of treating disorders where attention processing is off, such as autism, attention deficit disorder, and schizophrenia.

[1976] Anderson JS, Ferguson MA, Lopez-Larson M, Yurgelun-Todd D. Topographic maps of multisensory attention. Proceedings of the National Academy of Sciences [Internet]. 2010 ;107(46):20110 - 20114. Available from: http://www.pnas.org/content/107/46/20110.abstract

The role of sleep in consolidating memory is now well-established, but recent research suggests that sleep also reorganizes memories, picking out the emotional details and reconfiguring the memories to help you produce new and creative ideas. In an experiment in which participants were shown scenes of negative or neutral objects at either 9am or 9pm and tested 12 hours later, those tested on the same day tended to forget the negative scenes entirely, while those who had a night’s sleep tended to remember the negative objects but not their neutral backgrounds.

Follow-up experiments showed the same selective consolidation of emotional elements to a lesser degree after a 90-minute daytime nap, and to a greater degree after a 24-hour or even several-month delay (as long as sleep directly followed encoding).

These findings suggest that processes that occur during sleep increase the likelihood that our emotional responses to experiences will become central to our memories of them. Moreover, additional nights of sleep may continue to modify the memory.

In a different approach, another recent study has found that when volunteers were taught new words in the evening, then tested immediately, before spending the night in the sleep lab and being retested in the morning, they could remember more words in the morning than they did immediately after learning them, and they could recognize them faster. In comparison, a control group who were trained in the morning and re-tested in the evening showed no such improvement on the second test.

Deep sleep (slow-wave sleep) rather than rapid eye movement (REM) sleep or light sleep appeared to be the important phase for strengthening the new memories. Moreover, those who experienced more sleep spindles overnight were more successful in connecting the new words to the rest of the words in their mental lexicon, suggesting that the new words were communicated from the hippocampus to the neocortex during sleep. Sleep spindles are brief but intense bursts of brain activity that reflect information transfer between the hippocampus and the neocortex.

The findings confirm the role of sleep in reorganizing new memories, and demonstrate the importance of spindle activity in the process.

Taken together, these studies point to sleep being more important to memory than has been thought. The past decade has seen a wealth of studies establishing the role of sleep in consolidating procedural (skill) memory, but these findings demonstrate a deeper, wider, and more ongoing process. The findings also emphasize the malleability of memory, and the extent to which memories are constructed (not copied) and reconstructed.

A study involving young (average age 22) and older adults (average age 77) showed participants pictures of overlapping faces and places (houses and buildings) and asked them to identify the gender of the person. While the young adults showed activity in the brain region for processing faces (fusiform face area) but not in the brain region for processing places (parahippocampal place area), both regions were active in the older adults. Additionally, on a surprise memory test 10 minutes later, older adults who showed greater activation in the place area were more likely to recognize what face was originally paired with what house.

These findings confirm earlier research showing that older adults become less capable of ignoring irrelevant information, and show that this distracting information doesn’t merely interfere with what you’re trying to attend to, but is encoded in memory along with that information.

Following on from earlier studies that found individual neurons were associated with very specific memories (such as a particular person), new research has shown that we can actually regulate the activity of specific neurons, increasing the firing rate of some while decreasing the rate of others.

The study involved 12 patients implanted with deep electrodes for intractable epilepsy. On the basis of each individual’s interests, four images were selected for each patient. Each of these images was associated with the firing of specific neurons in the medial temporal lobe. The firing of these neurons was hooked up to a computer, allowing the patients to make their particular images appear by thinking of them. When another image appeared on top of the image as a distraction, creating a composite image, patients were asked to focus on their particular image, brightening the target image while the distractor image faded. The patients were successful 70% of the time in brightening their target image. This was primarily associated with increased firing of the specific neurons associated with that image.

I should emphasize that the use of a composite image meant that the participants had to rely on a mental representation rather than the sensory stimuli, at least initially. Moreover, when the feedback given was fake — that is, the patients’ efforts were no longer linked to the behavior of the image on the screen — success rates fell dramatically, demonstrating that their success was due to a conscious, directed action.

Different patients used different strategies to focus their attention. While some simply thought of the picture, others repeated the name of the image out loud or focused their gaze on a particular aspect of the image.

Resolving the competition of multiple internal and external stimuli is a process which involves a number of different levels and regions, but these findings help us understand at least some of the process that is under our conscious control. It would be interesting to know more about the relative effectiveness of the different strategies people used, but this was not the focus of the study. It would also be very interesting to compare effectiveness at this task across age, but of course this procedure is invasive and can only be used in special cases.

The study offers hope for building better brain-machine interfaces.

In a study involving 15 young adults, a very small electrical current delivered to the scalp above the right anterior temporal lobe significantly improved their memory for the names of famous people (by 11%). Memory for famous landmarks was not affected. The findings support the idea that the anterior temporal lobes are critically involved in the retrieval of people's names.

A follow-up study is currently investigating whether transcranial direct current stimulation (tDCS) will likewise improve name memory in older adults — indeed, because their level of recall is likely to be lower, it is hoped that the procedure will have a greater effect. If so, the next question is whether repeating tDCS may lead to longer lasting improvement. The procedure may offer hope for rehabilitation for stroke or other neurological damage.

This idea receives support from another recent study, in which 15 students spent six days learning a series of unfamiliar symbols that corresponded to the numbers zero to nine, and also had daily sessions of tDCS. Five students were given 20 minutes of stimulation above the right parietal lobe; five had 20 minutes of stimulation above the left parietal lobe; and five experienced only 30 seconds of stimulation — too short to induce any permanent changes.

The students were tested on the new number system at the end of each day. After four days, those who had experienced current to the right parietal lobe performed as well as they would be expected to do with normal numbers. However, those who had experienced the stimulation to the left parietal lobe performed significantly worse. The control students performed at a level between the two other groups.

Most excitingly, when the students were tested six months later, they performed at the same level, indicating the stimulation had a durable effect. However, it should be noted that the effects were small and highly variable, and were limited to the new number system. While it may be that one day this sort of approach will be of benefit to those with dyscalculia, more research is needed.

We can see shapes and we can feel them, but we can’t hear a shape. However, in a dramatic demonstration of just how flexible our brain is, researchers have devised a way of coding spatial relations in terms of sound properties such as frequency, and trained blindfolded people to recognize shapes by their sounds. They could then match what they heard to shapes they felt. Furthermore, they were able to generalize from their training to novel shapes.

The findings not only offer new possibilities for helping blind people, but also emphasize that sensory representations simply require systematic coding of some kind. This provides more evidence for the hypothesis that our perception of a coherent object ultimately occurs at an abstract level beyond the sensory input modes in which it is presented.

Kim J-K, Zatorre RJ. 2010. Can you hear shapes you touch? Experimental Brain Research, 202(4), 747-754. Available from: http://www.springerlink.com/content/41gq1u30671q3737/
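The general idea of coding spatial relations as sound properties can be sketched in code. The following is a minimal illustration, not the researchers' actual algorithm: rows of a binary shape image are assigned frequencies (higher rows sound higher-pitched), and columns are scanned left to right over time, so each shape produces a characteristic sound. The function name `shape_to_sound` and all parameter values are invented for the example.

```python
import numpy as np

def shape_to_sound(shape, sr=8000, col_dur=0.05, f_lo=200.0, f_hi=2000.0):
    """Encode a binary 2-D shape as audio: columns are scanned left to
    right over time; each active row contributes a sine tone whose
    frequency rises with row height (top row = highest pitch)."""
    shape = np.asarray(shape, dtype=float)
    n_rows, n_cols = shape.shape
    freqs = np.geomspace(f_hi, f_lo, n_rows)   # log-spaced, top row highest
    t = np.arange(int(sr * col_dur)) / sr      # time axis for one column
    cols = []
    for c in range(n_cols):
        active = shape[:, c] > 0
        if active.any():
            tone = sum(np.sin(2 * np.pi * f * t) for f in freqs[active])
            tone /= active.sum()               # keep amplitude within [-1, 1]
        else:
            tone = np.zeros_like(t)            # silent column
        cols.append(tone)
    return np.concatenate(cols)

# a 3x4 "L" shape: its sound sweeps from a chord down to a sustained low tone
L_shape = [[1, 0, 0, 0],
           [1, 0, 0, 0],
           [1, 1, 1, 1]]
audio = shape_to_sound(L_shape)
```

Any systematic, invertible mapping of this kind would do; the point of the study is that listeners can learn such a code well enough to match heard shapes to felt ones.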

In an experiment to investigate why testing might improve learning, 118 students were given 48 English-Swahili translation pairs. An initial study trial was followed by three blocks of practice trials. For one group, each practice trial involved a cued recall test followed by restudy. The other group wasn’t tested, but was simply presented with the information again (restudy-only). On both study and restudy trials, participants created keywords to help them remember the association. Presumably the 48 word pairs were chosen to make this relatively easy (the example given in the paper is the easy one of wingu-cloud). A final test was given one week later. In this final test, participants received either the cue only (e.g. wingu), or the cue plus keyword, or the cue plus a prompt to remember their keyword.

The group tested on their practice trials performed almost three times better on the final test than those given restudy only (providing more evidence for the thesis that testing improves learning). Supporting the hypothesis that this has to do with having more effective keywords, the test-restudy group remembered their keywords on the cue-plus-prompt trials more often than the restudy-only group did (51% vs 34%). Moreover, providing the keywords on the final test significantly improved recall for the restudy-only group, but not the test-restudy group (the implication being that the latter didn’t need the help of having the keywords provided).

The researchers suggest that practice tests lead learners to develop better keywords, both by increasing the strength of the keywords and by encouraging people to change keywords that aren’t working well.

‘Face-blindness’ — prosopagnosia — is a condition I find fascinating, perhaps because I myself have a touch of it (it’s now recognized that this condition represents the end of a continuum rather than being an either/or proposition). The intriguing thing about this inability to recognize faces is that, in its extreme form, it can nevertheless exist side-by-side with quite normal recognition of other objects.

Prosopagnosia that is not the result of brain damage often runs in families, and a study of three family members with this condition has revealed that in some cases at least, the inability to remember faces has to do with failing to form a mental representation that abstracts the essence of the face, sans context. That is, despite being fully able to read facial expressions, attractiveness and gender from the face (indeed one of the family members is an artist who has no trouble portraying fully detailed faces), they couldn’t cope with changes in lighting conditions and viewing angles.

I’m reminded of the phenomenon of perfect pitch, which is characterized by an inability to generalize across acoustically similar tones, so that an A in a different key is a completely different note. Interestingly, like prosopagnosia, perfect pitch is now thought to be more common than previously believed (recognition of it is of course limited by the fact that some musical expertise is generally needed to reveal it). This inability to abstract or generalize is also a phenomenon of eidetic memory, and I have spoken before of the perils of this.

(Note: A fascinating account of what it is like to be face-blind, from a person with the condition, can be found at: http://www.choisser.com/faceblind/)

An intriguing new study has found that people are more likely to remember specific information if the pattern of activity in their brain is similar each time they study that information. The findings are said to challenge the long-held belief that people retain information more effectively when they study it several times under different contexts, thus giving their brains multiple cues to remember it. However, although I believe this finding adds to our understanding of how to study effectively, I don’t think it challenges the multiple-context evidence.

The finding was possible because of a new approach to studying brain activity, which was used in three experiments involving students at Beijing Normal University. In the first, 24 participants were shown 120 faces, each one shown four times, at variable intervals between the repetitions. They were tested on their recognition (using a set of 240 faces), and how confident they were in their decision, one hour later. Subsequent voxel-by-voxel analysis of 20 brain regions revealed that the similarity of the patterns of brain activity in nine of those regions for each repetition of a specific face was significantly associated with recognition.

In the second experiment, 22 participants carried out a semantic judgment task on 180 familiar words (deciding whether they were concrete or abstract). Each word was repeated three times, again at variable intervals. The participants were tested on their recall of the words six hours later, and then tested for recognition. Fifteen brain regions showed a higher level of pattern similarity across repetitions for recalled items, but not for forgotten items.

In the third experiment, 22 participants performed a different semantic judgment task (living vs non-living) on 60 words. To prevent further encoding, they were also required to perform a visual orientation judgment task for 8 seconds after each semantic judgment. They were given a recall test 30 minutes after the session. Seven of the brain regions showed a significantly higher level of pattern similarity for recalled items.
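The voxel-by-voxel similarity measure behind all three experiments can be sketched in a few lines: for each item, the voxel activity pattern from each repetition is correlated with the pattern from every other repetition, and the mean correlation indexes how similarly the brain encoded the item each time. This is a simplified illustration with synthetic data, assuming Pearson correlation as the similarity measure; it is not the study's actual analysis pipeline.

```python
import numpy as np

def pattern_similarity(reps):
    """Mean pairwise Pearson correlation between the voxel activity
    patterns evoked by repeated presentations of one item.
    reps: array of shape (n_repetitions, n_voxels)."""
    reps = np.asarray(reps, dtype=float)
    r = np.corrcoef(reps)                  # repetition-by-repetition matrix
    iu = np.triu_indices_from(r, k=1)      # unique pairs only
    return r[iu].mean()

rng = np.random.default_rng(0)
base = rng.normal(size=50)                 # an item's "true" pattern
consistent = base + 0.3 * rng.normal(size=(3, 50))  # similar at each study
variable = rng.normal(size=(3, 50))                 # unrelated each study
# consistent repetitions should score higher than unrelated ones,
# mirroring the remembered-vs-forgotten contrast in the experiments
high = pattern_similarity(consistent)
low = pattern_similarity(variable)
```

In the study, this kind of score was computed per item per brain region, then compared between subsequently remembered and forgotten items.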

It's interesting to observe how differences in the pattern of activity occurred when studying the same information only minutes apart — a difference that is presumed to be triggered by context (anything from the previous item to environmental stimuli or passing thoughts). Why do I suggest that this finding, which emphasizes the importance of same-context, doesn’t challenge the evidence for multiple-context? I think it’s an issue of scope.

The finding shows us two important things: that context changes constantly, and that a repetition strengthens memory more the more closely its context matches the original. Nevertheless, this study doesn’t bear on the question of long-term recall. The argument has never been that multiple contexts make a memory trace stronger; it has been that they provide more paths to recall, something that becomes increasingly important the longer the time between encoding and recall.

Children’s ability to remember past events improves as they get older. This has been thought by many to be due to the slow development of the prefrontal cortex. But now brain scans from 60 children (8-year-olds, 10- to 11-year-olds, and 14-year-olds) and 20 young adults have revealed marked developmental differences in the activity of the medial temporal lobe.

The study involved the participants looking at a series of pictures (while in the scanner), and answering a different question about the image, depending on whether it was drawn in red or green ink. Later they were shown the pictures again, in black ink and mixed with new ones. They were asked whether they had seen them before and whether they had been red or green.

While the adolescents and adults selectively engaged regions of the hippocampus and posterior parahippocampal gyrus to recall event details, the younger children did not, with the 8-year-olds indiscriminately using these regions for both detail recollection and item recognition, and the 10- to 11-year-olds showing inconsistent activation. It seems that the hippocampus and posterior parahippocampal gyrus become increasingly specialized for remembering events, and these changes may partly account for long-term memory improvements during childhood.

Rodent studies have demonstrated the existence of specialized neurons involved in spatial memory. These ‘grid cells’ represent where an animal is located within its environment, firing in patterns that show up as geometrically regular, triangular grids when plotted on a map of a navigated surface. Now for the first time, evidence for these cells has been found in humans. Moreover, those with the clearest signs of grid cells performed best in a virtual reality spatial memory task, suggesting that the grid cells help us to remember the locations of objects. These cells, located particularly in the entorhinal cortex, are also critical for autobiographical memory, and are amongst the first to be affected by Alzheimer's disease, perhaps explaining why getting lost is one of the most common early symptoms.

Doeller CF, Barry C, Burgess N. 2010. Evidence for grid cells in a human memory network. Nature, 463(7281), 657-661. Available from: http://dx.doi.org/10.1038/nature08704
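The triangular grid pattern described above is commonly modeled as the sum of three cosine gratings whose wave vectors are 60 degrees apart. A minimal sketch of that idealized firing map follows; the function name and parameter values are arbitrary choices for illustration, not taken from the study.

```python
import numpy as np

def grid_firing_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Idealized grid-cell firing map: the sum of three cosine gratings
    whose wave vectors are 60 degrees apart produces a triangular lattice
    of firing fields. The rate is normalized to [0, 1]."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wave number for field spacing
    total = 0.0
    for theta in (0.0, np.pi / 3, 2 * np.pi / 3):
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        total += np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    # the three-cosine sum ranges over [-1.5, 3]; map it onto [0, 1]
    return (total + 1.5) / 4.5

# firing is maximal at a field centre (here the phase offset itself)
peak = grid_firing_rate(0.0, 0.0)   # -> 1.0
```

Evaluating this over a 2-D area and plotting it reproduces the regular triangular grid seen when rodent grid-cell firing is mapped onto a navigated surface.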

Previous research has found that individual neurons can become tuned to specific concepts or categories. We can have "cat" neurons, and "car" neurons, and even an “Angelina Jolie” neuron. A new monkey study, however, reveals that although some neurons were more attuned to car images and others to animal images, many neurons were active in both categories. More importantly, these "multitasking" neurons were in fact the best at making correct identifications when the monkey alternated between two category problems. The work could lead to a better understanding of disorders such as autism and schizophrenia in which individuals become overwhelmed by individual stimuli.

Why do women tend to be better than men at recognizing faces? Two recent studies give a clue, and also explain inconsistencies in previous research, some of which has found that face recognition happens mainly in the fusiform face area of the right hemisphere, and some that it occurs bilaterally. One study found that, while men tended to process face recognition in the right hemisphere only, women tended to process the information in both hemispheres. Another study found that both women and gay men tended to use both sides of the brain to process faces (making them faster at retrieving faces), while heterosexual men tended to use only the right. It also found that homosexual males have better face recognition memory than heterosexual males and homosexual women, and that women have better face processing than men. Additionally, left-handed heterosexual participants had better face recognition abilities than left-handed homosexuals, and also tended to be better than right-handed heterosexuals. In other words, bilaterality (using both sides of your brain) seems to make you faster and more accurate at recognizing people, and bilaterality is less likely in right-handers and heterosexual males (and perhaps homosexual women). Previous research has shown that homosexual individuals are 39% more likely to be left-handed.

Proverbio AM, Riva F, Martin E, Zani A (2010) Face Coding Is Bilateral in the Female Brain. PLoS ONE 5(6): e11242. doi:10.1371/journal.pone.0011242

Brewster PWH, Mullin CR, Dobrin RA, Steeves JKE. 2010. Sex differences in face processing are mediated by handedness and sexual orientation. Laterality: Asymmetries of Body, Brain and Cognition. Available from: http://www.informaworld.com/10.1080/13576500903503759

A rhesus monkey study has revealed which dendritic spines are lost with age, providing a new target for therapies to help prevent age-associated cognitive impairment. It appears that it is the thin, dynamic spines in the dorsolateral prefrontal cortex, which are key to learning new things, establishing rules, and planning, that are lost. Learning of a new task was correlated with both synapse density and average spine size, but was most strongly predicted by the head volume of thin spines. There was no correlation with size or density of the large, mushroom-shaped spines, which were very stable across age and probably mediate long-term memories, enabling the retention of expertise and skills learned early in life. There was no correlation with any of these spine characteristics once the task was learned. The findings underscore the importance of building skills and broad expertise when young.

A rat study has revealed that as the rats slowly learned a new rule, groups of neurons in the medial frontal cortex switched quite abruptly to a new pattern corresponding directly to the shift in behavior, rather than showing signs of gradual transition. Such sudden neural and behavioral transitions may correspond to so-called "a-ha" moments, and support the idea that rule learning is an evidence-based decision process, perhaps accompanied by moments of sudden insight.

Visual working memory, which can only hold three or four objects at a time, is thought to be based on synchronized brain activity across a network of brain regions. Now a new study has allowed us to get a better picture of how exactly that works. Both the maintenance and the contents of working memory were connected to brief synchronizations of neural activity in alpha, beta and gamma brainwaves across frontoparietal regions that underlie executive and attentional functions and visual areas in the occipital lobe. Most interestingly, individual VWM capacity could be predicted by synchrony in a network centered on the intraparietal sulcus.

Palva MJ, Monto S, Kulashekhar S, Palva S. 2010. Neuronal synchrony reveals working memory networks and predicts individual memory capacity. Proceedings of the National Academy of Sciences, 107(16), 7580-7585. Available from: http://www.pnas.org/content/107/16/7580.abstract
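Synchrony between brain regions in studies like this is typically quantified with measures such as the phase-locking value: how consistent the phase difference between two narrowband signals is over time. A minimal sketch with synthetic phase data follows; this is a generic illustration of the measure, not the analysis used in the study.

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """Phase-locking value between two instantaneous-phase series:
    1 = perfectly synchronized (constant phase difference), 0 = unrelated."""
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
phi = 2 * np.pi * 10 * t                        # a 10 Hz (alpha-band) phase
locked = phi + 0.2 * rng.normal(size=t.size)    # small jitter: high PLV
random = rng.uniform(0, 2 * np.pi, size=t.size) # unrelated phases: low PLV
# a jittered copy stays locked to the original; random phases do not
plv_locked = phase_locking_value(phi, locked)
plv_random = phase_locking_value(phi, random)
```

In practice the instantaneous phases would first be extracted from band-filtered MEG/EEG signals (e.g. via the Hilbert transform) before applying a measure like this.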

The effect of smell on learning and memory was investigated in an experiment that used three different ambient odors (osmanthus, peppermint, and pine).

Osmanthus was used to see whether there was a difference in performance depending on whether the smell was novel or familiar. Peppermint and pine were used to see whether the appropriateness or inappropriateness of the smell made a difference to memory.

In the experiment, subjects were individually shown into a room in which the odor was present. Their attention was called to the smell and, to ensure they attended to it, they were given a questionnaire to fill out about the room environment. They were left alone in the room for ten minutes to promote encoding of contextual cues.

The experimenter then read out a list of 20 common nouns, pausing after each one for the subject to describe an event that the word reminded them of. Memory for the words was tested 48 hours later.

It was found that word recall was best when the novel odor (osmanthus) was present during learning and again at testing. Among the familiar odors, recall was better if the smell was contextually inappropriate (peppermint). The improvement in recall only occurs when the odor is present at both encoding (learning) and retrieval (testing). Clearly, smell is a good contextual cue.

Herz, R.S. (1997). The effects of cue distinctiveness on odor-based context dependent memory. Memory and Cognition, 25, 375-380.

There is a pervasive myth that every detail of every experience we've ever had is recorded in memory. It is interesting to note therefore, that even very familiar objects, such as coins, are rarely remembered in accurate detail [1].

We see coins every day, but we don't see them. What we remember about coins are global attributes, such as size and color, not the little details, such as which way the head is pointing, what words are written on it, etc. Such details are apparently noted only if the person's attention is specifically drawn to them.

There are several interesting conclusions that can be drawn from studies that have looked at the normal encoding of familiar objects:

  • you don't automatically get more and more detail each time you see a particular object
  • only a limited amount of information is extracted the first time you see the object
  • the various features aren't equally important
  • normally, global rather than detail features are most likely to be remembered

In the present study, four experiments investigated people's memories for drawings of oak leaves. Two different types of oak leaves were used - "red oak" and "white oak". Subjects were shown two drawings for either 5 or 60 seconds. The differences between the two oak leaves varied, either:

  • globally (red vs white leaf), or
  • in terms of a major feature (the same type of leaf, but varying in that two major lobes are combined in one leaf but not in the other), or
  • in terms of a minor feature (one small lobe eliminated in one but not in the other).

According to the principle of top-down encoding, the time needed to detect a difference between stimuli that differ in only one critical feature will increase as the level of that feature decreases (from a global to a major specific to a lower-grade specific feature).

The results of this study supported the view that top-down encoding occurs, and indicate that, unless attention is explicitly directed to specific features, the likelihood of encoding such features decreases the lower their structural level. One of the experiments tested whether the size of the feature made a difference, and found that it didn't.

References

1. Jones, G.V. 1990. Misremembering a familiar object: When left is not right. Memory & Cognition, 18, 174-182.

Jones, G.V. & Martin, M. 1992. Misremembering a familiar object: Mnemonic illusion, not drawing bias. Memory & Cognition, 20, 211-213.

Nickerson, R.S. & Adams, M.J. 1979. Long-term memory of a common object. Cognitive Psychology, 11, 287-307.

Modigliani, V., Loverock, D.S. & Kirson, S.R. (1998). Encoding features of complex and unfamiliar objects. American Journal Of Psychology, 111, 215-239.

Older news items (pre-2010) brought over from the old website

More light shed on distinction between long and short-term memory

The once clear-cut distinction between long- and short-term memory has increasingly come under fire in recent years. A new study involving patients with a specific form of epilepsy called 'temporal lobe epilepsy with bilateral hippocampal sclerosis' has now clarified the distinction. The patients, who all had severely compromised hippocampi, were asked to try and memorize photographic images depicting normal scenes. Their memory was tested and brain activity recorded after five seconds or 60 minutes. As expected, the patients could not remember the images after 60 minutes, but could distinguish seen-before images from new ones at five seconds. However, their memory was poor when asked to recall details about the images. Brain activity showed that short-term memory for details required the coordinated activity of a network of visual and temporal brain areas, whereas standard short-term memory drew on a different network, involving frontal and parietal regions, and independent of the hippocampus.

Cashdollar, N., Malecki, U., Rugg-Gunn, F. J., Duncan, J. S., Lavie, N., & Duzel, E. (2009). Hippocampus-dependent and -independent theta-networks of active maintenance. Proceedings of the National Academy of Sciences, 106(48), 20493-20498. doi: 10.1073/pnas.0904823106.

http://www.eurekalert.org/pub_releases/2009-11/ucl-tal110909.php

Why smells can be so memorable

Confirming the common experience of the strength with which certain smells can evoke emotions or memories, an imaging study has found that when people were shown visual objects paired first with one, and later with a second, set of pleasant and unpleasant odors and sounds, and were then shown the same objects a week later, particular brain regions showed unique activation for their first olfactory (but not auditory) associations. This unique signature existed in the hippocampus regardless of how strong the memory was — that is, it was specific to olfactory associations. Regardless of whether the associations were smelled or heard, people remembered early associations more clearly when they were unpleasant.

The study appeared online on November 5 in Current Biology.

http://www.physorg.com/news176649240.html

Two studies help explain the spacing effect

I talked about the spacing effect in my last newsletter. Now it seems we can point to the neurology that produces it. Not only that, but the study has found a way of modifying it, to improve learning. It’s a protein called SHP-2 phosphatase that controls the spacing effect by determining how long resting intervals between learning sessions need to last so that long-lasting memories can form. The discovery happened because more than 50% of those with a learning disorder called Noonan syndrome have mutations in a gene called PTPN11, which encodes the SHP-2 phosphatase protein. These mutations boost the activity levels of SHP-2 phosphatase, which, in genetically modified fruit flies, disturbs the spacing effect by increasing the interval before a new chemical signal can occur (it is the repeated formation and decay of these signals that produces memory). Accordingly, those with the mutation need longer periods between repetitions to establish long-term memory.

Pagani, M.R. et al. 2009. Spacing Effect: SHP-2 Phosphatase Regulates Resting Intervals Between Learning Trials in Long-Term Memory Induction. Cell, 139 (1), 186-198.

http://www.eurekalert.org/pub_releases/2009-10/cshl-csi092809.php

A study involving Aplysia (often used as a model for learning because of its simplicity and the large size of its neural connections) reveals that spaced and massed training lead to different types of memory formation. The changes at the synapses that underlie learning are controlled by the release of the neurotransmitter serotonin. Four to five spaced applications of serotonin generated long-term changes in the strength of the synapse and less activation of the enzyme Protein kinase C Apl II, leading to stronger connections between neurons. However, when the application of serotonin was continuous (as in massed learning), there was much more activation of PKC Apl II, suggesting that activation of this enzyme may block the mechanisms for generating long-term memory, while retaining mechanisms for short-term memory.

Villareal, G., Li, Q., Cai, D., Fink, A. E., Lim, T., Bougie, J. K., et al. (2009). Role of Protein Kinase C in the Induction and Maintenance of Serotonin-Dependent Enhancement of the Glutamate Response in Isolated Siphon Motor Neurons of Aplysia californica. J. Neurosci., 29(16), 5100-5107.

http://www.eurekalert.org/pub_releases/2009-10/mu-wow100109.php

Smart gene helps brain cells communicate

For the second time, scientists have created a smarter rat by making its brain over-express NR2B, a subunit of the NMDA receptor (with the CaMKII promoter driving expression of the transgene). Over-expressing the gene lets brain cells communicate a fraction of a second longer. The research indicates that NR2B plays a crucial role in initiating long-term potentiation. The NR2B subunit is more common in juvenile brains; after puberty the NR2A subunit becomes more common. This is one reason why young people tend to learn and remember better: the NR2B keeps communication between brain cells open maybe just a hundred milliseconds longer than the NR2A. Although this genetic modification is probably not something that could be replicated in humans, it does validate NR2B as a drug target for improving memory in healthy individuals as well as those struggling with Alzheimer's or mild dementia.

Wang, D., Cui, Z., Zeng, Q., Kuang, H., Wang, L. P., Tsien, J. Z., et al. (2009). Genetic Enhancement of Memory and Long-Term Potentiation but Not CA1 Long-Term Depression in NR2B Transgenic Rats. PLoS ONE, 4(10), e7486.
Full text at http://dx.plos.org/10.1371/journal.pone.0007486

http://www.eurekalert.org/pub_releases/2009-10/mcog-sr101909.php

Concepts are born in the hippocampus

Concepts are at the heart of cognition. A study showed 25 people pairs of fractal patterns that represented the night sky and asked them to forecast the weather – either rain or sun – based on the patterns. The task could be achieved by either working out the conceptual principles, or simply memorizing which patterns produced which effects. However, the next task required them to make predictions using new patterns (but based on the same principles). Success on this task was predictable from the degree of activity in the hippocampus during the first, learning, phase. In the second phase, the ventromedial prefrontal cortex, important in decision-making, was active. The results indicate that concepts are learned and stored in the hippocampus, and then passed on to the vMPFC for application.

Kumaran, D. et al. 2009. Tracking the Emergence of Conceptual Knowledge during Human Decision Making. Neuron, 63 (6), 889-901.

http://www.newscientist.com/article/dn17862-concepts-are-born-in-the-hippocampus.html
http://www.physorg.com/news172930530.html
http://www.eurekalert.org/pub_releases/2009-09/cp-hwk091709.php

Why we learn more from our successes than our failures

A monkey study shows for the first time how single cells in the prefrontal cortex and basal ganglia change their responses as a result of information about what is the right action and what is the wrong one. Importantly, when a behavior was successful, cells became more finely tuned to what the animal was learning — but after a failure, there was little or no change in the brain, and no improvement in behavior. The finding points to the importance of successful actions in learning new associations.

Histed, M.H., Pasupathy, A. & Miller, E.K. 2009. Learning Substrates in the Primate Prefrontal Cortex and Striatum: Sustained Activity Related to Successful Actions. Neuron, 63 (2), 244-253.

http://www.eurekalert.org/pub_releases/2009-07/miot-wwl072809.php

New insight into how information is encoded in the hippocampus

Theta brain waves are known to orchestrate neuronal activity in the hippocampus, and for a long time it’s been thought that these oscillations were "in sync" across the hippocampus, timing the firing of neurons like a sort of central pacemaker. A new rat study reveals that rather than being in sync, theta oscillations actually sweep along the length of the hippocampus as traveling waves. This changes our notion of how spatial information is represented in the rat brain (and presumably has implications for our brains: theta waves are ubiquitous in mammalian brains). Rather than neurons encoding points in space, it seems that what is encoded are segments of space. This would make it easier to distinguish between representations of locations from different times. It also may have significant implications for understanding how information is transmitted from the hippocampus to other areas of the brain, since different areas of the hippocampus are connected to different areas in the brain. The fact that hippocampal activity forms a traveling wave means that these target areas receive inputs from the hippocampus in a specific sequence rather than all at once.

Lubenov, E.V. & Siapas, A.G. 2009. Hippocampal theta oscillations are travelling waves. Nature, 459, 534-539.

http://www.eurekalert.org/pub_releases/2009-05/ciot-csr052909.php

How the brain translates memory into action

We know that the hippocampus is crucial for place learning, especially for the rapid learning of temporary events (such as where we’ve parked the car). Now a new study reveals more about how that coding for specific places connects to behavior. Selective lesioning in rats revealed that the critical part is the middle part of the hippocampus, where links to visuospatial information connect with the behavioral control necessary for returning to that place after a period of time. Rats whose brains still maintained an accurate memory of place nevertheless failed to find their way when a sufficient proportion of the intermediate hippocampus was removed. The findings emphasize that memory failures are not only, or always, about actual deficits in memory, but can also be about the ability to act on it.

Bast, T. et al. 2009. From Rapid Place Learning to Behavioral Performance: A Key Role for the Intermediate Hippocampus. PLoS Biology, 7(4), e1000089.

http://www.physorg.com/news159116757.html
http://www.eurekalert.org/pub_releases/2009-04/plos-nwd041709.php

How what we like defines what we know

How we categorize items is crucial to both how we perceive them and how well we remember them. Expertise in a subject is a well-established factor in categorization — experts create more specific categories. Because experts usually enjoy their areas of expertise, and because time spent on a subject should result in finer categorization, we would expect positive feelings towards an item to result in more specific categories. However, research has found that positive feelings usually result in more global processing. A new study has found that preference does indeed result in finer categorization and, more surprisingly, that this is independent of expertise. It seems that preference itself activates focused thinking that directly targets the preferred object, enabling more detailed perception and finer categorization.

Smallman, R. & Roese, N.J. 2008. Preference Invites Categorization. Psychological Science, 19 (12).

http://www.physorg.com/news152203095.html

Encoding isn’t solely in the hippocampus

Perhaps we can improve memory in older adults with a simple memory trick. The hippocampus is a vital region for learning and memory, and indeed the association of related details to form a complete memory has been thought to occur entirely within this region. However, a new imaging study has found that when volunteers memorized pairs of words such as "motor/bear" as new compound words ("motorbear") rather than separate words, then the perirhinal cortex, rather than the hippocampus, was activated, and this activity predicted whether the volunteers would be able to successfully remember the pairs in the future.

Haskins, A.L. et al. 2008. Perirhinal Cortex Supports Encoding and Familiarity-Based Recognition of Novel Associations. Neuron, 59, 554-560.

http://www.sciencedaily.com/releases/2008/08/080828220519.htm
http://www.eurekalert.org/pub_releases/2008-08/uoc--mts082808.php

Computer model reveals how brain represents meaning

A new computational model has been developed that can predict with 77% accuracy which areas of the brain are activated when a person thinks about a specific concrete noun.  The success of the model points to a new understanding of how our brains represent meaning. The model was constructed on the basis of the frequency with which a noun co-occurs in text (from a trillion-word text corpus) with each of 25 verbs associated with sensory-motor functions, including see, hear, listen, taste, smell, eat, push, drive and lift. These 25 verbs appear to be basic building blocks the brain uses for representing meaning. The effect of each co-occurrence on the activation of each tiny voxel in an fMRI brain scan was established, and from this data, activation patterns were drawn.
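As a rough illustration (not the authors' actual pipeline or data), the core computation can be sketched in Python: represent each noun by its co-occurrence frequencies with the 25 sensory-motor verbs, learn one weight per verb feature per voxel by least squares, then predict a previously unseen noun's activation pattern from its co-occurrence vector alone. All arrays below are random stand-ins for the corpus counts and fMRI scans.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 60 training nouns, 25 verb features, 500 voxels.
n_nouns, n_verbs, n_voxels = 60, 25, 500

# Feature matrix: co-occurrence of each noun with the 25 verbs
# (stand-in for frequencies derived from a trillion-word corpus).
features = rng.random((n_nouns, n_verbs))

# Observed fMRI activations for the training nouns (simulated here as a
# noisy linear function of the features).
true_weights = rng.standard_normal((n_verbs, n_voxels))
activations = features @ true_weights + 0.1 * rng.standard_normal((n_nouns, n_voxels))

# Learn one weight per (verb feature, voxel) pair by least squares.
weights, *_ = np.linalg.lstsq(features, activations, rcond=None)

# Predict the whole-brain activation pattern for a new noun from its
# verb co-occurrence vector alone.
new_noun = rng.random(n_verbs)
predicted_pattern = new_noun @ weights

print(predicted_pattern.shape)  # → (500,)
```

In the study itself, prediction quality was evaluated by checking whether the predicted pattern for a held-out noun matched its observed scan better than the scan of a different held-out noun, which is where the 77% accuracy figure comes from.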

Mitchell, T.M. et al. 2008. Predicting Human Brain Activity Associated with the Meanings of Nouns. Science, 320 (5880), 1191-1195.

http://www.physorg.com/news131290235.html
http://www.eurekalert.org/pub_releases/2008-05/cmu-cmc052308.php

Novel mechanism for long-term learning identified

There has always been a paradox at the heart of learning: repetition is vital, yet at the level of individual synapses, repetitive stimulation might actually reverse early gains in synaptic strength. Now the mechanism that resolves this apparent paradox has been uncovered. N-methyl-D-aspartate (NMDA) receptors appear from studies to be required for the synaptic strengthening that occurs during learning, but these receptors undergo a sort of Jekyll-and-Hyde transition after the initial phase of learning. Instead of helping synapses get stronger, they actually begin to weaken the synapses and impair further learning. The new study reveals that while the NMDA receptor is required to begin neural strengthening, a second neurotransmitter receptor — the metabotropic glutamate (mGlu) receptor — then comes into play. Using an NMDA antagonist to block NMDA receptors after the initiation of plasticity resulted in enhanced synaptic strengthening, while blocking mGlu receptors caused strengthening to stop.

Clem, R.L., Celikel, T. & Barth, A.L. 2008. Ongoing in Vivo Experience Triggers Synaptic Metaplasticity in the Neocortex. Science, 319 (5859), 101-104.

http://www.eurekalert.org/pub_releases/2008-01/cmu-nmf010308.php

Brain protein that's a personal trainer for your memory

A brain protein called kalirin has been shown to be critical for helping you learn and remember what you learned. When you learn something new, kalirin makes the synaptic spines on your neurons grow bigger and stronger the more you repeat the lesson. This may help explain why continued intellectual activity and learning delays cognitive decline as people grow older. "It's important to keep learning so your synapses stay healthy." Previous studies have found that kalirin levels are reduced in brains of people with diseases like Alzheimer's and schizophrenia. This latest finding suggests it may be a useful target for future drug therapy for these diseases.

Xie, Z. et al . 2007. Kalirin-7 Controls Activity-Dependent Structural and Functional Plasticity of Dendritic Spines. Neuron, 56, 640-656.

http://www.eurekalert.org/pub_releases/2007-11/nu-wyr112107.php
http://www.eurekalert.org/pub_releases/2007-11/cp-md111407.php

Why learning takes a while

New findings about how new connections are made between brain cells shed light on why it sometimes takes a little while before we truly ‘get’ something. It seems that, although connections are made within minutes, it takes eight hours before these connections are mature enough to transmit information, and more hours before the connections are firmly enough established to become fully functional synapses likely to survive. It was also found that when a new spine made contact with a site already hosting a contact, the new spine was highly likely to displace the old connection. This may mean that newly learned information leads to a fading of older information.

Nägerl, U.V., Köstinger, G., Anderson, J.C., Martin, A.C. & Bonhoeffer, T. 2007. Protracted synaptogenesis after activity-dependent spinogenesis in hippocampal neurons. The Journal of Neuroscience, 27, 8149-8156.

http://www.physorg.com/news106837506.html

How memory networks are formed

We know that memories are encoded in a network of neurons, but how do the neurons “decide” which ones to connect to? A mouse study reveals that the level of a protein called CREB is critical in this decision. The findings suggest a competitive model in which eligible neurons are selected to participate in a memory trace as a function of their relative CREB activity at the time of learning.

Han, J-H et al. 2007. Neuronal Competition and Selection During Memory Formation. Science, 316 (5823), 457-460.

http://www.physorg.com/news96213299.html
http://www.eurekalert.org/pub_releases/2007-04/uoc--uru041707.php

Support for labeling as an aid to memory

A study involving an amnesia-inducing drug has shed light on how we form new memories. Participants in the study viewed words, photographs of faces and landscapes, and abstract pictures one at a time on a computer screen. Twenty minutes later, they were shown the words and images again, one at a time; half were images they had seen earlier, and half were new. They were then asked whether they recognized each one. In one session they were given midazolam, a drug used to relieve anxiety during surgical procedures that also causes short-term anterograde amnesia, and in the other session they were given a placebo.
It was found that the participants' memory in the placebo condition was best for words and worst for abstract images. Midazolam impaired the recognition of words the most, impaired memory for the photos less, and impaired recognition of abstract pictures hardly at all. The finding reinforces the idea that the ability to recollect depends on the ability to link the stimulus to a context, and that unitization increases the chances of this linking occurring. While the words were very concrete and therefore easy to link to the experimental context, the photographs were of unknown people and unknown places and thus hard to label distinctively. The abstract images were also unfamiliar and not unitized into something that could be described with a single word.

Reder, L.M. et al. 2006. Drug-Induced Amnesia Hurts Recognition, but Only for Memories That Can Be Unitized. Psychological Science, 17(7), 562-

http://www.sciencedaily.com/releases/2006/07/060719092800.htm

Why motivation helps memory

An imaging study has identified the brain region involved in anticipating rewards — specific brain structures in the mesolimbic region involved in the processing of emotions — and revealed how this reward center promotes memory formation. Cues to high-reward scenes that were later remembered activated the reward areas of the mesolimbic region as well as the hippocampus. Anticipatory activation also suggests that the brain actually prepares in advance to filter incoming information rather than simply reacting to the world.

Adcock, R.A., Thangavel, A., Knutson, B., Whitfield-Gabrieli, S. & Gabrieli, J.D.E. 2006. Reward-Motivated Learning: Mesolimbic Activation Precedes Memory Formation. Neuron, 50, 507–517.

http://www.eurekalert.org/pub_releases/2006-05/cp-tbm042706.htm

New view of hippocampus’s role in memory

Amnesiacs have overturned the established view of the hippocampus, and of the difference between long- and short-term memories. It appears the hippocampus is just as important for retrieving certain types of short-term memories as it is for long-term memories. The critical thing is not the age of the memory, but the requirement to form connections between pieces of information to create a coherent episode. The researchers suggest that, for the brain, the distinction between 'long-term' memory and 'short-term' memory is less relevant than that between ‘feature’ memory and ‘conjunction’ memory — the ability to remember specific things versus how they are related. The hippocampus may be thought of as the brain's switchboard, piecing individual bits of information together in context.

Olson, I.R., Page, K., Moore, K.S., Chatterjee, A. & Verfaellie, M. 2006. Working Memory for Conjunctions Relies on the Medial Temporal Lobe. Journal of Neuroscience, 26, 4596 – 4601.

http://www.eurekalert.org/pub_releases/2006-05/uop-aso053106.php

Priming the brain for learning

A new study has revealed that how successfully you form memories depends on your frame of mind beforehand. If your brain is primed to receive information, you will have less trouble recalling it later. Moreover, researchers could predict how likely the participant was to remember a word by observing brain activity immediately prior to presentation of the word.

Otten, L.J., Quayle, A.H., Akram, S., Ditewig, T.A. & Rugg, M.D. 2006. Brain activity before an event predicts later recollection. Nature, published online ahead of print, 26 February 2006.

http://www.nature.com/news/2006/060220/full/060220-19.html
http://www.eurekalert.org/pub_releases/2006-02/uoc--uri022806.php
http://www.eurekalert.org/pub_releases/2006-02/ucl-ywr022206.php

A single memory is processed in three separate parts of the brain

A rat study has demonstrated that a single experience is indeed processed differently in separate parts of the brain. When rats were confined in a dark compartment of a familiar box and given a mild shock, the hippocampus was involved in processing memory for context, the anterior cingulate cortex was responsible for retaining memories involving unpleasant stimuli, and the amygdala consolidated memories more broadly, influencing the storage of both contextual and unpleasant information.

Malin, E.L. & McGaugh, J.L. 2006. Differential involvement of the hippocampus, anterior cingulate cortex, and basolateral amygdala in memory for context and footshock. Proceedings of the National Academy of Sciences, 103 (6), 1959-1963.

http://www.eurekalert.org/pub_releases/2006-02/uoc--urp020106.php

Resting after new learning may not be laziness

In an intriguing rat study, researchers recorded brain activity while rats ran up and down a straight 1.5-metre track. As the rats ran along the track, their nerve cells fired in a very specific sequence. But to the researchers’ surprise, when the rats were resting, the same brain cells replayed the sequence of electrical firing over and over, but in reverse and speeded up. This is similar to the replay that occurs during sleep and consolidates spatial memory, but the reverse aspect had not been seen before, and is presumed to help reinforce the sequence. The researchers suggest that such reverse replay during waking rest may be a general mechanism for reinforcing newly learned sequences.

Foster, D.J. & Wilson, M.A. 2006. Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature, advance online publication; published online 12 February 2006

http://www.nature.com/news/2006/060206/full/060206-13.html

Protein that controls how neurons change as a result of experience

Two different research teams have identified a master protein that sheds light on one of neurobiology's biggest mysteries: how neurons change as a result of individual experiences. The protein, myocyte enhancer factor 2 (MEF2), turns on and off the genes that control dendritic remodeling, that is, the growth and pruning of dendritic branches. In addition, one of the teams has identified how MEF2 switches from one program to the other (from dendrite-promoting to dendrite-pruning), and the researchers have identified some of MEF2's targets. It’s suggested the MEF2 pathway could play a role in autism and other neurodevelopmental diseases, and this discovery could lead to new therapies for a host of diseases in which synapses either fail to form or run rampant.

Flavell, S.W. et al. 2006. Activity-Dependent Regulation of MEF2 Transcription Factors Suppresses Excitatory Synapse Number. Science, 311(5763), 1008-1012.
Shalizi, A. et al. 2006. A Calcium-Regulated MEF2 Sumoylation Switch Controls Postsynaptic Differentiation. Science, 311(5763), 1012-1017.

http://www.eurekalert.org/pub_releases/2006-02/hms-rfm022106.php

Concrete evidence of the 'memory code'

I’m always talking about the “memory code”, and its existence is central to theories of memory, but now, for the first time, researchers have found concrete evidence of it. The coding system was discovered during an investigation into how the primary auditory cortex responds to different sounds. Rats were trained with various tones; it was found that the more important the tone, the greater the area of auditory cortex that became tuned to it — in other words, more neurons were involved in storing the information.

Rutkowski, R.G. & Weinberger, N.M. 2005. Encoding of learned importance of sound by magnitude of representational area in primary auditory cortex. Proceedings of the National Academy of Sciences, 102 (38), 13664-13669.

http://www.eurekalert.org/pub_releases/2005-09/uoc--unu090805.php

Seeing the formation of a memory

An optical imaging technique has enabled researchers to visualize changes in nerve connections. The study used genetically modified fruit flies, whose neuronal connections become fluorescent during synaptic transmission. The flies were conditioned to associate a brief puff of an odor with a shock. Using a high-powered microscope to watch the fluorescent signals in flies' brains as they learned, the researchers discovered that a specific set of neurons (projection neurons) had a greater number of active connections with other neurons after the conditioning experiment. These newly active connections appeared within 3 minutes after the experiment, suggesting that the synapses which became active after the learning took place were already formed but remained "silent" until they were needed to represent the new memory. The new synaptic activity disappeared by 7 minutes after the experiment, but the flies continued to avoid the odor they associated with the shock. The study suggests that the earliest representation of a new memory occurs by rapid changes – "like flipping a switch" – in the number of neuronal connections that respond to the odor, rather than by formation of new connections or by an increase in the number of neurons that represent an odor. The fact that the flies continued to show a learned response even after the new synaptic activity waned suggests that other memory traces found at higher levels in the brain took over to encode the memory for a longer period of time.

Yu, D., Ponomarev, A. & Davis, R.L. 2004. Altered representation of the spatial code for odors after olfactory classical conditioning: memory trace formation by synaptic recruitment. Neuron, 42 (3), 437–449.

http://www.eurekalert.org/pub_releases/2004-05/nion-sar051004.php

More light shed on memory encoding

Anything we perceive contains a huge amount of sensory information. How do we decide what bits to process? New research has identified brain cells that streamline and simplify sensory information, markedly reducing the brain's workload. The study found that when monkeys were taught to remember clip art pictures, their brains reduced the level of detail by sorting the pictures into categories for recall, such as images that contained "people," "buildings," "flowers," and "animals." The categorizing cells were found in the hippocampus. Just as humans do, different monkeys categorized items in different ways, selecting different aspects of the same stimulus image, most likely reflecting the different histories, strategies, and expectations residing within individual hippocampal networks.

Hampson, R.E., Pons, T.P., Stanford, T.R. & Deadwyler, S.A. 2004. Categorization in the monkey hippocampus: A possible mechanism for encoding information into memory. PNAS, 101, 3184-3189.

http://www.eurekalert.org/pub_releases/2004-02/wfub-nfo022604.php