News Topic strategies

About these topic collections

I’ve been reporting on memory research for over ten years, and these topic pages are simply collections of all the news items I have written on a particular topic. They do not pretend to be in any way exhaustive! I cover far too many areas within memory to come anywhere near that. What I aim to do is provide breadth rather than depth. Outside my own area of cognitive psychology, it is difficult to know how much weight to give to any study (I urge you to read my blog post on what constitutes scientific evidence). That (among other reasons) is why my approach in my news reporting is based predominantly on replication and consistency. It’s about the aggregate. So here is the aggregate of those reports I have at one point considered of sufficient interest to discuss. If you know of any research you would like to add to the collection, feel free to write about it in a comment (please provide a reference).

This page covers strategies that are not dealt with in the other specialized strategy pages: Testing, Meditation, Gestures, Learning another language, Multitasking.

Strategies specifically studied with older adults are included in the Older Adults section on this page.

Latest news

A study has found that brain regions responsible for making decisions continue to be active even when the conscious brain is distracted with a different task.

The study, in which 27 adults were given information about cars and other consumer products and then asked to perform a brief but challenging working memory task (involving numbers) before making their decision about the items, found that:

  • As shown previously, the brief period of distraction (two minutes) produced higher quality decisions.
  • Regions activated during the learning phase (right dorsolateral prefrontal cortex and left intermediate visual cortex) continued to be active during the distractor task.
  • The amount of activation within the visual and prefrontal cortices during the distractor task predicted the degree to which participants made better decisions (activity occurring during the working memory task, as shown by a separate performance of that task, was subtracted from overall activity).

http://www.futurity.org/science-technology/to-make-smart-choices-give-brain-a-rest/

[3394] Creswell, D. J., Bursley J. K., & Satpute A. B. (2013).  Neural Reactivation Links Unconscious Thought to Decision Making Performance. Social Cognitive and Affective Neuroscience.

As many of you will know, I like nature-improves-mind stories. A new twist comes from a small Scottish study, in which participants were fitted with a mobile EEG monitor that enabled their brainwaves to be recorded as they walked for 25 minutes through one of three different settings: an urban shopping street, a path through green space, or a street in a busy commercial district. The monitors measured five ‘channels’ that are claimed to reflect “short-term excitement,” “frustration,” “engagement,” “arousal,” and “meditation level.”

Consistent with Attention Restoration Theory, walkers entering the green zone showed lower frustration, engagement and arousal, and higher meditation, and then showed higher engagement when moving out of it — suggesting that their time in a natural environment had ‘refreshed’ their brain.

http://richardcoyne.com/2013/03/09/the-brain-in-the-city/

[3375] Aspinall, P., Mavros P., Coyne R., & Roe J. (2013).  The urban brain: analysing outdoor physical activity with mobile EEG. British Journal of Sports Medicine.

 

Gossipy content and informal language may lie behind people's better recall of Facebook posts compared to memory for human faces or sentences from books.

Online social networking, such as Facebook, is hugely popular. A series of experiments has explored the intriguing question of whether our memories are particularly ‘tuned’ to remember the sort of information shared on such sites.

The first experiment involved 32 college students (27 female), who were asked to study either 100 randomly chosen Facebook posts or 100 sentences randomly chosen from books on Amazon. After the study period (in which each sentence was presented for 3 seconds), the students were given a self-paced recognition test, in which the 100 studied sentences were mixed with another 100 sentences from the same source. Participants responded to each with a number from 1 to 20 expressing their confidence that they had (or had not) seen the sentence before (‘1’ indicating they were completely confident that they hadn’t seen it before, ‘20’ that they were totally confident that they had).

Recognition of Facebook posts was significantly better than recognition of sentences from books (an average of 85% correct vs 76%). The ‘Facebook advantage’ remained even when only posts with more normal surface-level characteristics were analyzed (i.e., all posts containing irregularities of spelling and typography were removed).

In the next experiment, involving 16 students (11 female), Facebook posts (a new set) were compared with neutral faces. Again, memory for Facebook posts was substantially better than that for faces. This is quite remarkable, since humans have a particular expertise for faces and tend to score highly on recognition tests for them.

One advantage the Facebook posts might have is in eliciting social thinking. The researchers attempted to test this by comparing the learning achieved when people were asked to count the words of each sentence or post, against the learning achieved when they were asked to think of someone they knew (real or fictional) who could have composed such a sentence / post. This experiment involved 64 students (41 female).

The deeper encoding encouraged by the latter strategy did improve memory for the texts, but it did so equally. The fact that it helped Facebook posts as much as it did book sentences argues against the idea that the Facebook advantage rests on social elaboration (if the posts were already being spontaneously elaborated in this way, explicit encouragement to do so should have added little extra benefit for them).

Another advantage the Facebook posts might have over book sentences is that they were generally complete in themselves, making sense in a way that randomly chosen sentences from books would not. Other possibilities have to do with the gossipy nature of Facebook posts, and the informal language used. To test these theories, 180 students (138 female) were shown text from two CNN Twitter feeds: Breaking News and Entertainment News. Texts included headlines, sentences, and comments.

Texts from Entertainment News were remembered significantly better than those from Breaking News (supporting the gossip advantage). Headlines were remembered significantly better than random sentences (supporting the completeness argument), but comments were remembered best of all (supporting the informality theory) — although the benefit of comments over headlines was much greater for Breaking News than Entertainment News (perhaps reflecting the effort the Entertainment News people put into making catchy headlines?).

It seems, then, that three factors contribute to the greater memorability of Facebook posts: the completeness of ideas; the gossipy content; the casually generated language.

You’ll have noticed I made a special point of noting the gender imbalance in the participant pools. Given gender differences in language and social interaction, it’s a shame that the participants were so heavily skewed, and I would like this replicated with males before generalizing. However, the evidence for the advantage of more informal language is, at least, less likely to be skewed by gender.

[3277] Mickes, L., Darby R. S., Hwe V., Bajic D., Warker J. A., Harris C. R., et al. (Submitted).  Major memory for microblogs. Memory & Cognition. 1 - 9.

A study emphasizes the importance of establishing source credibility when trying to correct false information.

I’ve discussed before how hard it is to correct false knowledge. This is not only a problem for the classroom — the preconceptions students bring to a topic, and the difficulty of replacing them with the correct information — but, in these days of so much misinformation in the media and on the web, an everyday problem.

An internet study involving 574 adults presented them with an article discussing the issue of electronic health records (EHRs). They were then shown another article on the subject, supposedly from a “political blog”. This text included several false statements about who was allowed access to these records (for example, that hospital administrators, health insurance companies, employers, and government officials had unrestricted access).

For some participants, this article was annotated so that the false statements were clearly marked, and directions explained that an independent fact-checking organization had found these factual errors. Other participants completed an unrelated three-minute task after reading the text, before being presented with the same corrections, while a third group was not advised of the inaccuracies at all (until being debriefed).

After reading the text, participants were given a questionnaire, where they listed everything they had learned about EHRs from the text, rated their feelings about each item, and marked on a 7-point scale how easy it would be for specific groups to access the records. They were also asked to judge the credibility of the fact-checking message.

Those who received the immediate corrections were significantly more accurate than those who received the delayed corrections, and both were significantly more accurate than those receiving no corrections — so at least we know that correcting false information does make a difference! More depressingly, however, the difference between any of the groups, although significant, was small — i.e., correcting false statements makes a difference, but not much of one.

Part of the problem lies, it appears, in people’s preconceptions. A breakdown by participants’ feelings on the issue revealed that the immediate correction was significantly more effective for those who were ‘for’ EHRs (note that the corrections agreed with their beliefs). Indeed, for those unfavorably disposed, the immediate corrections may as well have been no corrections at all.

But, intriguingly, predisposition only made a difference when the correction was immediate, not when it was delayed.

Mapping these results against participants’ responses to the question of credibility revealed that those unfavorably disposed (and therefore prone to believing the false claims in the text) assigned little credibility to the corrections.

Why should this perfectly understandable difference apply only when corrections were immediate? The researchers suggest that, by putting the corrections in direct competition with the false statements, more emphasis is put on their relative credibility — assessments of which tend to be biased by existing attitudes.

The findings suggest it is naïve to expect that it is enough to simply tell people something is false, if they have a will to believe it. They also suggest the best approach to correcting false knowledge is to emphasize the credibility of the corrector.

Of course, this study used politically charged information, about which people are likely to have decided opinions. But the results are a reminder that, as the researcher says: "Humans aren't vessels into which you can just pour accurate information. Correcting misperceptions is really a persuasion task.”

This is true even when the information is something as ‘factual’ as the cause of the seasons! Even teachers should take on board this idea that, when new information doesn’t fit in with a student’s world-view, then credibility of the source/authority (the teacher!) is paramount.

Garrett, R., & Weeks, B. (2013). The Promise and Peril of Real-Time Corrections to Political Misperceptions. Proceedings of the Computer Supported Cooperative Work and Social Computing conference. Retrieved from http://wp.comm.ohio-state.edu/misperceptions/wp-content/uploads/2012/07/...

A simulated study of life-threatening surgical crises has found that using a checklist reduced the omission of critical steps from 23% to 6%.

I reported recently on how easily and quickly we can get derailed from a chain of thought (or action). In similar vein, here’s another study that shows how easy it is to omit important steps in an emergency, even when you’re an expert — which is why I’m a great fan of checklists.

Checklists have been shown to dramatically decrease the chances of an error, in areas such as flying and medicine. However, while surgeons may use checklists as a matter of routine (a study a few years ago found that the use of routine checklists before surgery substantially reduced the chances of a serious complication — we can hope that everyone’s now on board with that!), there’s a widespread belief in medicine that operating room crises are too complex for a checklist to be useful. A new study contradicts that belief.

The study involved 17 operating room teams (anesthesia staff, operating room nurses, surgical technologists, a surgeon), who participated in 106 simulated surgical crisis scenarios in a simulated operating room. Each team was randomized to manage half of the scenarios with a set of crisis checklists and the remaining scenarios from memory alone.

When checklists were used, the teams were 74% less likely to miss critical steps. That is, without a checklist, nearly a quarter (23%) of the steps were omitted (an alarming figure!), while with a checklist, only 6% of the steps were omitted on average. Every team performed better when the checklists were available.
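
(In case the arithmetic isn’t obvious: the 74% figure is the relative reduction in omissions — (23 − 6) / 23 ≈ 0.74 — not the 17-percentage-point absolute difference.)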

After experiencing these simulations, almost all (97%) of the participants said they would want these checklists used if they were a patient during such a crisis.

It’s comforting to know that airline pilots do have checklists to use in emergency situations. Now we must hope that hospitals come on board with this as well (up-to-date checklists and implementation materials can be found at www.projectcheck.org/crisis).

For the rest of us, the study serves as a reminder that, however practiced we may think we are, forgetting steps in an action plan is only too common, and checklists are an excellent means of dealing with this — in emergency and out.

[3262] Arriaga, A. F., Bader A. M., Wong J. M., Lipsitz S. R., Berry W. R., Ziewacz J. E., et al. (2013).  Simulation-Based Trial of Surgical-Crisis Checklists. New England Journal of Medicine. 368(3), 246 - 253.

A small study involving patients with acquired brain injury has found that the best learning strategies are ones that call on the self-schema rather than episodic memory, and the best involves self-imagination.

Some time ago, I reported on a study showing that older adults could improve their memory for a future task (remembering to regularly test their blood sugar) by picturing themselves going through the process. Imagination has been shown to be a useful strategy in improving memory (and also motor skills). A new study extends and confirms previous findings, by testing free recall and comparing self-imagination to more traditional strategies.

The study involved 15 patients with acquired brain injury who had impaired memory and 15 healthy controls. Participants memorized five lists of 24 adjectives that described personality traits, using a different strategy for each list. The five strategies were:

  • think of a word that rhymes with the trait (baseline),
  • think of a definition for the trait (semantic elaboration),
  • think about how the trait describes you (semantic self-referential processing),
  • think of a time when you acted out the trait (episodic self-referential processing), or
  • imagine acting out the trait (self-imagining).

For both groups, self-imagination produced the highest rates of free recall of the list (an average of 9.3 for the memory-impaired, compared to 3.2 using the baseline strategy; 8.1 vs 3.2 for the controls — note that the controls were given all 24 items in one list, while the memory-impaired were given 4 lists of 6 items).

Additionally, those with impaired memory did better using semantic self-referential processing than episodic self-referential processing (7.3 vs 5.7). In contrast, the controls did much the same in both conditions. This adds to the evidence that patients with brain injury often have a particular problem with episodic memory (knowledge about specific events). Episodic memory is also particularly affected in Alzheimer’s, as well as in normal aging and depression.

It’s also worth noting that all the strategies that involved the self were more effective than the two strategies that didn’t, for both groups (also, semantic elaboration was better than the baseline strategy).

The researchers suggest self-imagination (and semantic self-referential processing) might be of particular benefit for memory-impaired patients, by encouraging them to use information they can more easily access (information about their own personality traits, identity roles, and lifetime periods — what is termed the self-schema), and that future research should explore ways in which self-imagination could be used to support everyday memory tasks, such as learning new skills and remembering recent events.

A small study with older adults provides support for the idea that learning is helped if you follow it with a few minutes ‘wakeful rest’.

Back in 2010, I briefly reported on a study suggesting that a few minutes of ‘quiet time’ could help you consolidate new information. A new study provides more support for this idea.

In the first experiment, 14 older adults (aged 61-81) were told a short story, with instructions to remember as many details as possible. Immediately afterward, they were asked to describe what happened in the story. Ten minutes then elapsed, during which they either rested quietly (with eyes closed in a darkened room), or played a spot-the-difference game on the computer (comparing pairs of pictures). This task was chosen because it was non-verbal and sufficiently different from the story task to not directly compete for cognitive resources.

This first learning phase was followed by five minutes of playing the spot-the-difference game (for all participants) and then a second learning phase, in which the process was repeated with a second story, and participants experienced the other activity during the delay period (e.g., rest if they had previously played the game).

Some 30 minutes after the first story presentation (15 minutes after the second), participants were unexpectedly asked to once again recall as many details as they could from the stories. A further recall test was also given one week later.

Recall on the first delayed test (at the end of both learning phases) was significantly better for stories that had been followed by wakeful resting rather than a game. While recall declined at the same rate for both story conditions, the benefits of wakeful resting were maintained at the test one week later.

In a second experiment, the researchers looked at whether these benefits would still occur if there was no repetition (i.e., no delayed recall test at the time, only at a week). Nineteen older adults (61-87) participated.

As expected, in the absence of the short-delay retrieval test, recall at a week was slightly diminished. Nevertheless, recall for stories that had been followed by rest was still significantly better than recall for stories followed by the game.

It’s worth noting that, in a post-session interview, only 3 participants (of the 33 total) reported thinking about the story during the period of wakeful rest. One participant fell asleep. Twelve participants reported thinking about the stories at least once during the week, but there was no difference between these participants’ scores and those who didn’t think about them.

These findings support the idea that a quiet period of reflection after new learning helps the memories be consolidated. While the absence of interfering information may underlie this, the researchers did select the game specifically to interfere as little as possible with the story task. Moreover, the use of the same task as a ‘filler’ between the two learning phases was also designed to equalize any interference it might engender.

The weight of the evidence, therefore, is that ten minutes of wakeful resting aided memory by providing the mental space in which to consolidate the memory. Moreover, the fact that so few participants actively thought about the stories during that rest indicates that such consolidation is automatic and doesn’t require deliberate rehearsal.

The study did, of course, only involve older adults. I hope we will see a larger study with a wider participant pool.

A review has concluded that spatial training produces significant improvement, particularly for poorer performers, and that such training could significantly increase STEM achievement.

Spatial abilities have been shown to be important for achievement in STEM subjects (science, technology, engineering, math), but many people have felt that spatial skills are something you’re either born with or not.

In a comprehensive review of 217 research studies on educational interventions to improve spatial thinking, researchers concluded that you can indeed improve spatial skills, and that such training can transfer to new tasks. Moreover, not only can the right sort of training improve spatial skill in general, and across age and gender, but the effect of training appears to be stable and long-lasting.

One interesting finding (the researchers themselves considered it perhaps the most important finding) was the diversity in effective training — several different forms of training can be effective in improving spatial abilities. This may have something to do with the breadth covered by the label ‘spatial ability’, which includes such skills as:

  • Perceiving objects, paths, or spatial configurations against a background of distracting information;
  • Piecing together objects into more complex configurations, visualizing and mentally transforming objects;
  • Understanding abstract principles, such as horizontal invariance;
  • Visualizing an environment in its entirety from a different position.

The review compared three types of training:

  • Video games (24 studies)
  • Semester-long instructional courses on spatial reasoning (42 studies)
  • Practical training, often in a lab, that involved practicing spatial tasks, strategic instruction, or computerized lessons (138 studies).

The first two are examples of indirect training, while the last involves direct training.

On average, taken across the board, training improved performance by well over half a standard deviation when considered on its own, and still almost one half of a standard deviation when compared to a control group. This is a moderately large effect, and it extended to transfer tasks.

It also conceals a wide range, most of which is due to different treatment of control groups. Because the retesting effect is so strong in this domain (if you give any group a spatial test twice, regardless of whether they’ve been training in between the two tests, they’re going to do better on the second test), repeated testing can have a potent effect on the control group. Some ‘filler’ tasks can also inadvertently improve the control group’s performance. All of this will reduce the apparent effect of training. (Not having a control group is even worse, because you don’t know how much of the improvement is due to training and how much to the retesting effect.)
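
For readers unfamiliar with effect sizes, here’s a minimal sketch (in Python, with invented scores — not data from the review) of how an improvement of “half a standard deviation” relative to a control group is computed:

```python
import statistics

# Hypothetical post-test scores (NOT from the review) for a trained group
# and a retested-only control group on the same spatial test.
trained = [12, 15, 13, 16, 14, 13, 15, 14]
control = [11, 14, 13, 15, 13, 12, 14, 13]

def cohens_d(a, b):
    """Cohen's d: the difference in group means divided by the pooled
    standard deviation, so the effect is expressed in units of the
    spread of scores rather than the raw test scale."""
    n1, n2 = len(a), len(b)
    s1, s2 = statistics.stdev(a), statistics.stdev(b)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

print(f"d = {cohens_d(trained, control):.2f}")  # about 0.7 with these toy numbers
```

A d of about 0.5 — “one half of a standard deviation” — is the size the review reports for training relative to controls.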

This caution is, of course, more support for the value of practice in developing spatial skills. This is further reinforced by studies that were omitted from the analysis because they would skew the data. Twelve studies found very high effect sizes — more than three times the average effect size of the remaining studies. All these studies took place in less-developed countries (those ranked below the top 30 on the Human Development Index at the time of the study) — Malaysia, Turkey, China, India, and Nigeria. HDI ranking was even associated with the benefits of training in a dose-dependent manner — that is, the lower the standard of living, the greater the benefit.

This finding is consistent with other research indicating that lower socioeconomic status is associated with larger responses to training or intervention.

In similar vein, when the review compared 19 studies that specifically selected participants who scored poorly on spatial tests against the other studies, they found that the effects of training were significantly bigger among the selected studies.

In other words, those with poorer spatial skills will benefit most from training. It may be, indeed, that they are poor performers precisely because they have had little practice at these tasks — a question that has been much debated (particularly in the context of gender differences).

It’s worth noting that there was little difference in performance on tests carried out immediately after training ended, within a week, or within a month, indicating promising stability.

A comparison of different types of training did find that some skills were more resistant to training than others, but all types of spatial skill improved. The differences may be because some sorts of skill are harder to teach, and/or because some skills are already more practiced than others.

Given the demonstrated difficulty in increasing working memory capacity through training, it is intriguing to notice one example the researchers cite: experienced video game players have been shown to perform markedly better on some tasks that rely on spatial working memory, such as a task requiring you to estimate the number of dots shown in a brief presentation. Most of us can instantly recognize (‘subitize’) up to five dots without needing to count them, but video game players can typically subitize some 7 or 8. The extent to which this generalizes to a capacity to hold more elements in working memory is one that needs to be explored. Video game players also apparently have a shorter attentional blink, meaning that they can take in more information.

A more specific practical example of training they give is that of a study in which high school physics students were given training in using two- and three-dimensional representations over two class periods. This training significantly improved students’ ability to read a topographical map.

The researchers suggest that the size of the training effect could produce a doubling of the number of people with spatial abilities equal to or greater than those of engineers, and that such training might lower the dropout rate among those majoring in STEM subjects.

Apart from that, I would argue many of us who are ‘spatially-challenged’ could benefit from a little training!

In another example of how expertise in a specific area changes the brain, brain scans of piano tuners show which areas grow, and which shrink, with experience — and starting age.

I’ve reported before on how London taxi drivers increase the size of their posterior hippocampus by acquiring and practicing ‘the Knowledge’ (but perhaps at the expense of other functions). A new study in similar vein has looked at the effects of piano tuning expertise on the brain.

The study looked at the brains of 19 professional piano tuners (aged 25-78, average age 51.5 years; 3 female; 6 left-handed) and 19 age-matched controls. Piano tuning requires comparison of two notes that are close in pitch, meaning that the tuner has to accurately perceive the particular frequency difference. Exactly how that is achieved, in terms of brain function, has not been investigated until now.

The brain scans showed that piano tuners had increased gray matter in a number of brain regions. In some areas, the difference between tuners and controls was categorical — that is, tuners as a group showed increased gray matter in right hemisphere regions of the frontal operculum, the planum polare, superior frontal gyrus, and posterior cingulate gyrus, and reduced gray matter in the left hippocampus, parahippocampal gyrus, and superior temporal lobe. Differences in these areas didn’t vary systematically between individual tuners.

However, tuners also showed a marked increase in gray matter volume in several areas that was dose-dependent (that is, varied with years of tuning experience) — the anterior hippocampus, parahippocampal gyrus, right middle temporal and superior temporal gyrus, insula, precuneus, and inferior parietal lobe — as well as an increase in white matter in the posterior hippocampus.

These differences were not affected by actual chronological age, or, interestingly, level of musicality. However, they were affected by starting age, as well as years of tuning experience.

What these findings suggest is that achieving expertise in this area requires an initial development of active listening skills that is underpinned by categorical brain changes in the auditory cortex. These superior active listening skills then set the scene for the development of further skills that involve what the researchers call “expert navigation through a complex soundscape”. This process may, it seems, involve the encoding and consolidating of precise sound “templates” — hence the development of the hippocampal network, and hence the dependence on experience.

The hippocampus, apart from its general role in encoding and consolidating, has a special role in spatial navigation (as shown, for example, in the London cab driver studies, and the ‘parahippocampal place area’). The present findings extend that navigation in physical space to the more metaphoric one of relational organization in conceptual space.

The more general message from this study, of course, is confirmation for the role of expertise in developing specific brain regions, and a reminder that this comes at the expense of other regions. So choose your area of expertise wisely!

Support for previous findings associating study abroad with increased creativity comes from a study comparing those who studied abroad with those who plan to, and those with no such intentions.

A couple of years ago I briefly reported on a finding that students who had lived abroad demonstrated greater creativity, if they first recalled a multicultural learning experience from their life abroad. A new study examines this connection, in particular investigating the as-yet-unanswered question of whether students who studied abroad were already more creative than those who didn’t.

The study involved 135 students of whom 45 had studied abroad, 45 were planning to do so, and 45 had not and were not planning to. The groups did not differ significantly in terms of age, gender, or ethnicity, and data from a sample (a third of each group) revealed no differences in terms of GPA and SAT scores. Creativity was assessed using the domain-general Abbreviated Torrance Test for Adults (ATTA) and the culture-specific Cultural Creativity Task (CCT).

Those in the Study Abroad group scored significantly higher on the CCT than those in the other two groups, who didn’t differ from each other. Additionally, those in the Study Abroad group scored significantly higher on the ATTA than those in the No Plan to Study group (those in the Plan to Study group were not significantly different from either of the other two groups).

It seems clear, then, that the findings of earlier studies are indeed ‘real’ (students who study abroad really do come home more creative than before they went) and not a product of self-selection (more creative students are more likely to travel). But the difference between the two creativity tests needs some explanation.

There is a burning issue in creativity research: is creativity a domain-general attribute, or a domain-specific one? This is not a pedantic, theoretical question! If you’re ‘creative’, does that mean you’re equally creative in all areas, or just in specific areas? Or (more likely, it seems to me) is creativity both domain-general and domain-specific?

The ATTA, as I said, measures general creativity. It does so through three 3-minute tasks: identify the troubles you might have if you could walk on air or fly (without benefit of vehicle); draw a picture using two incomplete figures (provided); draw pictures using 9 identical isosceles triangles.

The CCT has five 3-minute tasks that target culturally relevant knowledge and skills. In each case, participants are asked to give as many ideas as they can in response to a specific scenario: getting more foreign tourists to visit America; the changes that would result if you woke up with different skin color; demonstrating high social status; developing new dishes using exotic ingredients; creating a product with universal appeal.

The findings would seem to support the idea that creativity has both general and specific elements. The greater effect of studying abroad on CCT scores (compared to ATTA scores) also seems to me to be consistent with the finding I cited at the beginning — that, to get the benefit, students needed to be reminded of their multicultural experiences. In this case, the CCT scenarios would seem to play that role.

It does of course make complete sense that living abroad would have positive benefits for creativity. Creativity is about not following accustomed ruts in one’s thoughts. Those ruts are not simply generated within our own mind (as we get older, our ruts tend to get deeper), but are products of our relationship with our society. Think of clichés. The more we follow along with accustomed language and thought patterns of our group, the less creative we will be. One way to break (or at least broaden) this, is to widen our groups — by, for example, mixing in diverse circles, or by living abroad.

Interestingly, another recent study reckons that social rejection (generally regarded as a bad thing) can make some people more creative — if they’re independent types who take pride in being different from others.

Lee, C. S., Therriault, D. J., & Linderholm, T. (2012). On the Cognitive Benefits of Cultural Experience: Exploring the Relationship between Studying Abroad and Creative Thinking. Applied Cognitive Psychology, n/a–n/a. doi:10.1002/acp.2857

Kim, S. H., Vincent, L. C., & Goncalo, J. A. (In press). Outside Advantage: Can Social Rejection Fuel Creative Thought? Journal of Experimental Psychology. General.
 

Two new studies provide support for the judicious use of sleep learning — as a means of reactivating learning that occurred during the day.

Back when I was young, sleep learning was a popular idea. The idea was that a tape would play while you were asleep, and learning would seep into your brain effortlessly. It was particularly advocated for language learning. Subsequent research, unfortunately, rejected the idea, and gradually it has faded (although not completely). Now a new study may presage a come-back.

In the study, 16 young adults (mean age 21) learned how to ‘play’ two artificially-generated tunes by pressing four keys in time with repeating 12-item sequences of moving circles — the idea being to mimic the sort of sensorimotor integration that occurs when musicians learn to play music. They then took a 90-minute nap. During slow-wave sleep, one of the tunes was repeatedly played to them (20 times over four minutes). After the nap, participants were tested on their ability to play the tunes.

A separate group of 16 students experienced the same events, but without the playing of the tune during sleep. A third group stayed awake for the 90 minutes, during which they performed a demanding working memory task; white noise was played in the background, with the melody covertly embedded into it.

Consistent with the idea that sleep is particularly helpful for sensorimotor integration, and that reinstating information during sleep produces reactivation of those memories, the sequence ‘practiced’ during slow-wave sleep was remembered better than the unpracticed one. Moreover, the amount of improvement was positively correlated with the proportion of time spent in slow-wave sleep.

Among those who didn’t hear any sounds during sleep, improvement likewise correlated with the proportion of time spent in slow-wave sleep. The level of improvement for this group was intermediate to that of the practiced and unpracticed tunes in the sleep-learning group.

The findings add to growing evidence of the role of slow-wave sleep in memory consolidation. Whether the benefits for this very specific skill extend to other domains (such as language learning) remains to be seen.

However, another recent study carried out a similar procedure with object-location associations. Fifty everyday objects were associated with particular locations on a computer screen, and presented at the same time with characteristic sounds (e.g., a cat with a meow and a kettle with a whistle). The associations were learned to criterion before participants slept for 2 hours in an MR scanner. During slow-wave sleep, auditory cues related to half the learned associations were played, as well as ‘control’ sounds that had not been played previously. Participants were tested after a short break and a shower.

A difference in brain activity was found for associated sounds and control sounds — associated sounds produced increased activation in the right parahippocampal cortex — demonstrating that even in deep sleep some sort of differential processing was going on. This region overlapped with the area involved in retrieval of the associations during the earlier, end-of-training test. Moreover, when the associated sounds were played during sleep, parahippocampal connectivity with the visual-processing regions increased.

All of this suggests that, indeed, memories are being reactivated during slow-wave sleep.

Additionally, brain activity in certain regions at the time of reactivation (mediotemporal lobe, thalamus, and cerebellum) was associated with better performance on the delayed test. That is, those who had greater activity in these regions when the associated sounds were played during slow-wave sleep remembered the associations best.

The researchers suggest that successful reactivation of memories depends on responses in the thalamus, which if activated feeds forward into the mediotemporal lobe, reinstating the memories and starting the consolidation process. The role of the cerebellum may have to do with the procedural skill component.

The findings are consistent with other research.

All of this is very exciting, but of course this is not a strategy for learning without effort! You still have to do your conscious, attentive learning. But these findings suggest that we can increase our chances of consolidating the material by replaying it during sleep. Of course, there are two practical problems with this: the material needs an auditory component, and you somehow have to replay it at the right time in your sleep cycle.

A meta-analysis of 23 studies has found no evidence that working memory training has wider cognitive benefits for normally developing children and healthy adults.

I have said before that there is little evidence that working memory training has any wider benefits than to the skills being practiced. Occasionally a study arises that gets everyone all excited, but by and large training only benefits the skill being practiced — despite the fact that working memory underlies so many cognitive tasks, and limited working memory capacity is thought to negatively affect performance on so many tasks. However, one area that does seem to have had some success is working memory training for those with ADHD, and researchers have certainly not given up hope of finding evidence for wider transfer among other groups (such as older adults).

A recent review of the research to date has, sadly, concluded that the benefits of working memory training programs are limited. But this is not to say there are no benefits.

For a start, the meta-analysis (analyzing data across studies) found that working memory training produced large immediate benefits for verbal working memory. These benefits were greatest for children below the age of 10.

These benefits, however, were not maintained long-term (at an average of 9 months after training, there were no significant benefits) — although benefits were found in one study in which the verbal working memory task was very similar to the training task (indicating that the specific skill practiced did maintain some improvement long-term).

Visuospatial working memory also showed immediate benefits, and these did not vary across age groups. One factor that did make a difference was type of training: the CogMed training program produced greater improvement than the researcher-developed programs (the studies included 7 that used CogMed, 2 that used Jungle Memory, 2 Cognifit, 4 n-back, 1 Memory Booster, and 7 researcher-developed programs).

Interestingly, visuospatial working memory did show some long-term benefits, although it should be noted that the average follow-up was distinctly shorter than that for verbal working memory tasks (an average of 5 months post-training).

The burning question, of course, is how well this training transferred to dissimilar tasks. Here the evidence seems sadly clear — those using untreated control groups tended to find such transfer; those using treated control groups never did. Similarly, nonrandomized studies tended to find far transfer, but randomized studies did not.

In other words, when studies were properly designed (randomized trials with a control group that is given alternative treatment rather than no treatment), there was no evidence of transfer effects from working memory training to nonverbal ability. Moreover, even when found, these effects were only present immediately and not on follow-up.

Neither was there any evidence of transfer effects, either immediate or delayed, on verbal ability, word reading, or arithmetic. There was a small to moderate effect of training on attention (as measured by the Stroop test), but this only occurred immediately, and not on follow-up.

It seems clear from this review that there are few good, methodologically sound studies on this subject. But three very important caveats should be noted in connection with the researchers’ dispiriting conclusion.

First of all, because this is an analysis across all data, important differences between groups or individuals may be concealed. This is a common criticism of meta-analysis, and the researchers do try to answer it. Nevertheless, I think it is still a very real issue, especially in light of evidence that the benefit of training may depend on whether the challenge of the training is at the right level for the individual.

On the other hand, another recent study, that compared young adults who received 20 sessions of training on a dual n-back task or a visual search program, or received no training at all, did look for an individual-differences effect, and failed to find it. Participants were tested repeatedly on their fluid intelligence, multitasking ability, working memory capacity, crystallized intelligence, and perceptual speed. Although those taking part in the training programs improved their performance on the tasks they practiced, there was no transfer to any of the cognitive measures. When participants were analyzed separately on the basis of their improvement during training, there was still no evidence of transfer to broader cognitive abilities.

The second important challenge comes from the lack of skill consolidation — having a short training program followed by months of not practicing the skill is not something any of us would expect to produce long-term benefits.

The third point concerns a recent finding that multi-domain cognitive training produces longer-lasting benefits than single-domain training (the same study also showed the benefit of booster training). It seems quite likely that working memory training is a valuable part of a training program that also includes practice in real-world tasks that incorporate working memory.

I should emphasize that these results only apply to ‘normal’ children and adults. The question of training benefits for those with attention difficulties or early Alzheimer’s is a completely different issue. But for these healthy individuals, it has to be said that the weight of the evidence is against working memory training producing more general cognitive improvement. Nevertheless, I think it’s probably an important part of a cognitive training program — as long as the emphasis is on part.

Melby-Lervåg, M., & Hulme, C. (2012). Is Working Memory Training Effective? A Meta-Analytic Review. Developmental Psychology. doi:10.1037/a0028228
Full text available at http://www.apa.org/pubs/journals/releases/dev-ofp-melby-lervag.pdf

[3012] Redick, T. S., Shipstead Z., Harrison T. L., Hicks K. L., Fried D. E., Hambrick D. Z., et al. (2012).  No Evidence of Intelligence Improvement After Working Memory Training: A Randomized, Placebo-Controlled Study. Journal of Experimental Psychology: General.
Full text available at http://psychology.gatech.edu/renglelab/publications/2012/RedicketalJEPG.pdf
 

Increasing the spacing between letters has been found to improve reading accuracy and speed in dyslexic children, with the poorest readers benefiting most.

It’s generally agreed among researchers that the most efficient intervention for dyslexia is to get the child reading more — the challenge is to find ways that enable that. Training programs typically target specific component skills, which are all well and good but leave the essential problem untouched: the children still need to read more. A new study shows that a very simple manipulation substantially improves reading in a large, unselected group of dyslexic children.

The study involved 74 French and Italian children — the two groups enabling researchers to compare a transparent writing system (Italian) with a relatively opaque one (French). The children had to read 24 short, meaningful, but unrelated, sentences. The text was written in Times New Roman 14 point. Standard interletter spacing was compared to spacing increased by 2.5 points. Space between words and lines was also increased commensurately. Each child read the same sentences in two sessions, two weeks apart. In one session, standard spacing was used, and in the other, increased spacing. The order of the sessions was, of course, randomized.

The idea behind this is that dyslexic readers seem to be particularly affected by crowding. Crowding — interference from flanking letters — mostly affects peripheral vision in normal adult readers, but has been shown to be a factor in central vision in school-aged children. Standard letter spacing appears to be optimal for skilled adult readers.
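
To make the manipulation concrete, here’s a minimal sketch (in Python, writing a small HTML file) of the same text at standard spacing versus the study’s 14-point Times New Roman with letter spacing increased by 2.5 points. The word-spacing and line-height values are my guesses at “commensurate”, not the study’s exact parameters:

```python
# Illustrative only: the same sentence at standard and at increased spacing.
sentence = "The quick brown fox jumps over the lazy dog."

html = f"""<p style="font: 14pt 'Times New Roman'">{sentence}</p>
<p style="font: 14pt 'Times New Roman'; letter-spacing: 2.5pt;
          word-spacing: 5pt; line-height: 1.5">{sentence}</p>"""

with open("spacing_demo.html", "w", encoding="utf-8") as f:
    f.write(html)
```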

The study found that increased spacing improved accuracy in reading the text by a factor of two. Moreover, this group effect conceals substantial individual differences. Those who had the most difficulties with the text benefitted the most from the extra spacing.

Reading speed also increased. In this case, despite the 2-week interval, there was an order effect: those who read the normal text first were faster on the 2nd (spaced) reading, while those who read the spaced text first read the 2nd (normal) text at the same speed. Analysis that removed the effects of repetition found that spacing produced a speed improvement of about 0.3 syllables a second, which corresponds to the average improvement across an entire school year for Italian dyslexic children.

There was no difference between the Italian and French children, indicating that this manipulation works in both transparent (in which letters and sounds match) and opaque writing systems (like English).

Subsequent comparison of 30 of the Italian children (mean age 11) with younger normally-developing children (mean age 8) matched for reading level and IQ found that spacing benefited only the dyslexic children.

A further experiment involving some of the Italian dyslexic children compared the spaced condition with normal text that had the same line spacing as the spaced text. This confirmed that it was the letter spacing that was critical.

These findings point to a very simple way of giving dyslexic children the practice they need in reading without any training. It is not suggested that it replaces specific-skill training, but rather augments it.

[3017] Zorzi, M., Barbiero C., Facoetti A., Lonciari I., Carrozzi M., Montico M., et al. (2012).  Extra-large letter spacing improves reading in dyslexia. Proceedings of the National Academy of Sciences. 109(28), 11455 - 11459.

A small study provides more support for the idea that viewing nature can refresh your attention and improve short-term memory, and extends it to those with clinical depression.

I’ve talked before about Dr Berman’s research into Attention Restoration Theory, which proposes that people concentrate better after nature walks or even just looking at nature scenes. In his latest study, the findings have been extended to those with clinical depression.

The study involved 20 young adults (average age 26), all of whom had a diagnosis of major depressive disorder. Short-term memory and mood were assessed (using the backwards digit span task and the PANAS), and then participants were asked to think about an unresolved, painful autobiographical experience. They were then randomly assigned to go for a 50-minute walk along a prescribed route in either the Ann Arbor Arboretum (woodland park) or traffic-heavy portions of downtown Ann Arbor. After the walk, mood and cognition were again assessed. A week later the participants repeated the entire procedure in the other location.

Participants exhibited a significant (16%) increase in attention and working memory after the nature walk compared to the urban walk. While participants felt more positive after both walks, the mood improvement showed no correlation with the memory effects.

The finding is particularly interesting because depression is characterized by high levels of rumination and negative thinking. It seemed quite likely, then, that a solitary walk in the park might make depressed people feel worse, and worsen working memory. It’s intriguing that it didn’t.

It’s also worth emphasizing that, as in earlier studies, this effect of nature on cognition appears to be independent of mood (which is, of course, the basic tenet of Attention Restoration Theory).

Of course, this study is, like the others, small, and involves the same demographic. Hopefully future research will extend the sample groups, to middle-aged and older adults.

A new sleep study confirms the value of running through new material just before bedtime, particularly it seems when that material is being learned using mnemonics or by rote.

We know that we remember more 12 hours after learning if we have slept during those 12 hours rather than been awake throughout — but is this because sleep is actively helping us remember, or because being awake makes it harder to remember (through interference and over-writing from other experiences)? A new study aimed to disentangle these effects.

In the study, 207 students were randomly assigned to study 40 related or unrelated word pairs at 9 a.m. or 9 p.m., returning for testing either 30 minutes, 12 hours or 24 hours later.

As expected, at the 12-hour retest, those who had had a night’s sleep (Evening group) remembered more than those who had spent the 12 hours awake (Morning group). But this result was because memory for unrelated word pairs had deteriorated badly during 12 hours of wakefulness; performance on the related pairs was the same for the two groups. Performance on the related and unrelated pairs was the same for those who slept.

For those tested at 24 hours (participants from both groups having received both a full night of sleep and a full day of wakefulness), those in the Evening group (who had slept before experiencing a full day’s wakefulness) remembered significantly more than the Morning group. Specifically, the Evening group showed a very slight improvement over training, while the Morning group showed a pronounced deterioration.

This time, both groups showed a difference for related versus unrelated pairs: the Evening group showed some deterioration for unrelated pairs and a slightly larger improvement for related pairs; the Morning group showed a very small deterioration for related pairs and a much greater one for unrelated pairs. The difference between recall of related pairs and recall of unrelated pairs was, however, about the same for both groups.

In other words, unrelated pairs are just that much harder to learn than related ones (which we already know) — over time, learning them just before sleep vs learning early in the day doesn’t make any difference to that essential truth. But the former strategy will produce better learning for both types of information.

A comparison of the 12-hour and 24-hour results (this is the bit that will help us disentangle the effects of sleep and wakefulness) reveals that twice as much forgetting of unrelated pairs occurred during wakefulness in the first 12 hours, compared to wakefulness in the second 12 hours (after sleep), and 3.4 times more forgetting of related pairs (although this didn’t reach significance, the amount of forgetting being so much smaller).

In other words, sleep appears to slow the rate of forgetting that will occur when you are next awake; it stabilizes and thus protects the memories. But the amount of forgetting that occurred during sleep was the same for both word types, and the same whether that sleep occurred in the first 12 hours or the second.

Participants in the Morning and Evening groups took a similar number of training trials to reach criterion (60% correct), and there was no difference in the time it took to learn unrelated compared to related word pairs.

It’s worth noting that there was no difference between the two groups, or for the type of word pair, at the 30-minutes test either. In other words, your ability to remember something shortly after learning it is not a good guide for whether you have learned it ‘properly’, i.e., as an enduring memory.

The study tells us that different types of information are differentially affected by wakefulness — unrelated information being, perhaps, more easily interfered with. This is encouraging, because semantically related information is far more common than unrelated information! But it may well serve as a reminder that integrating new material — making sure it is well understood and embedded into your existing database — is vital for effective learning.

The findings also confirm earlier evidence that running through any information (or skills) you want to learn just before going to bed is a good idea — and this is especially true if you are trying to learn information that is more arbitrary or less well understood (i.e., the sort of information for which you are likely to use mnemonic strategies, or, horror of horrors, rote repetition).

A small study has found that ten hours of playing action video games produced significant changes in brainwave activity and improved visual attention for some (but not all) novices.

Following on from research finding that people who regularly play action video games show visual attention related differences in brain activity compared to non-players, a new study has investigated whether such changes could be elicited in 25 volunteers who hadn’t played video games in at least four years. Sixteen of the participants played a first-person shooter game (Medal of Honor: Pacific Assault), while nine played a three-dimensional puzzle game (Ballance). They played the games for a total of 10 hours spread over one- to two-hour sessions.

Selective attention was assessed through an attentional visual field task, carried out prior to and after the training program. Individual learning differences were marked, and because of visible differences in brain activity after training, the action gamers were divided into two groups for analysis — those who performed above the group mean on the second attentional visual field test (7 participants), and those who performed below the mean (9). These latter individuals showed similar brain activity patterns as those in the control (puzzle) group.

In all groups, early-onset brainwaves were little affected by video game playing. This suggests that game-playing has little impact on bottom–up attentional processes, and is in keeping with earlier research showing that players and non-players don’t differ in the extent to which their attention is captured by outside stimuli.

However, later brainwaves — those thought to reflect top–down control of selective attention via increased inhibition of distracters — increased significantly in the group who played the action game and showed above-average improvement on the field test. Another increased wave suggests that the total amount of attention allocated to the task was also greater in that group (i.e., they were concentrating more on the game than the below-average group, and the control group).

The improved ability to select the right targets and ignore other stimuli suggests, too, that these players are also improving their ability to make perceptual decisions.

The next question, of course, is what personal variables underlie the difference between those who benefit more quickly from the games, and those who don’t. And how much more training is necessary for this latter group, and are there some people who won’t achieve these benefits at all, no matter how long they play? Hopefully, future research will be directed to these questions.

[2920] Wu, S., Cheng C. K., Feng J., D'Angelo L., Alain C., & Spence I. (2012).  Playing a First-person Shooter Video Game Induces Neuroplastic Change. Journal of Cognitive Neuroscience. 24(6), 1286 - 1293.

A smartphone training program, specifically designed for those with moderate-to-severe memory impairment, was found to significantly improve day-to-day functioning in a small study.

While smartphones and other digital assistants have been found to help people with mild memory impairment, their use by those with greater impairment has been less successful. However, a training program developed at the Baycrest Centre for Geriatric Care has been using the power of implicit memory to help impaired individuals master new skills.

The study involved 10 outpatients, aged 18 to 55 (average age 44), who had moderate-to-severe memory impairment, the result of non-neurodegenerative conditions including ruptured aneurysm, stroke, tumor, epilepsy, closed-head injury, or anoxia after a heart attack. They all reported difficulty in day-to-day functioning.

Participants were trained in the basic functions of either a smartphone or another personal digital assistant (PDA) device, using an errorless training method that tapped into their preserved implicit/procedural memory. In this method, cues are progressively faded, in such a way as to ensure there is always enough information to prompt the correct response. The fading of the cues was based on the trainer’s observation of the patient’s behavior.

Participants were given several one-hour training sessions to learn calendaring skills such as inputting appointments and reminders. Each application was broken down into its component steps, and each step was given its own score in terms of how much support was needed. Support could either comprise a full explanation and demonstration; full explanation plus pointing to the next step; simply pointing to the next step; simply confirming a correct query; no support. The hour-long sessions occurred twice a week (with one exception, who only received one session a week). Training continued until the individual reached criterion-level performance (98% correct over a single session). On average, this took about 8 sessions, but as a general rule, those with relatively focal impairment tended to be substantially quicker than those with more extensive cognitive impairment.
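For the technically minded, here’s one way to picture that graded-support rubric as code — a minimal sketch only: the five support levels come straight from the study, but the fade-on-success / restore-on-struggle rule is purely my own stand-in for the trainer’s judgment:

```python
# Sketch of the graded-support scoring used in errorless training.
# The five levels are from the study; the update rule is hypothetical.

SUPPORT_LEVELS = [
    "no support",                           # 0: fully independent
    "confirming a correct query",           # 1
    "pointing to the next step",            # 2
    "full explanation plus pointing",       # 3
    "full explanation and demonstration",   # 4: maximum support
]

def update_support(level: int, succeeded: bool) -> int:
    """Errorless training fades cues gradually: reduce support after a
    success, restore it at any sign of struggle, so the learner never
    has to guess (and so never rehearses an error)."""
    if succeeded:
        return max(level - 1, 0)
    return min(level + 1, len(SUPPORT_LEVELS) - 1)
```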

After this first training phase, participants took their devices home, where they extended their use of the device through new applications mastered using the same protocol. These new tasks were carefully scaffolded to enable progressively more difficult tasks to be learned.

To assess performance, participants were given a schedule of 10 phone calls to complete over a two-week period at different times of the day. Additionally, family members kept a log of whether real-life tasks were successfully completed or not, and both participants and family members completed several questionnaires: one rating a list of common memory mistakes on a frequency-of-occurrence scale, another measuring confidence in dealing with various memory-demanding scenarios, and a third examining the participant's ability to use the device.

All 10 individuals showed improvement in day-to-day memory functioning after taking the training, and this improvement continued when the patients were followed up three to eight months later. Specifically, prospective memory (memory for future events) improved, and patient confidence in dealing with memory-demanding situations increased. Some patients also reported broadening their use of their device to include non-prospective memory tasks (e.g. entering names and/or photos of new acquaintances, or entering details of conversations).

It should be noted that these patients were some time past their injury, which was on average some 3 ½ years earlier (ranging from 10 months to over 25 years). Accordingly, they had all been through standard rehabilitation training, and already used many memory strategies. Questioning about strategy use prior to the training revealed that six participants used more memory strategies than they had before their injury, three hadn’t changed their strategy use, and one used fewer. Strategies included: calendars, lists, reminders from others, notebooks, day planner, placing items in prominent places, writing a note, relying on routines, alarms, organizing information, saying something out loud in order to remember it, mental elaboration, concentrating hard, mental retracing, computer software, spaced repetition, creating acronyms, alphabetic retrieval search.

The purpose of this small study, which built on an earlier study involving only two patients, was to demonstrate the generalizability of the training method to a larger number of individuals with moderate-to-severe memory impairment. Hopefully, it will also reassure such individuals, who tend not to use electronic memory aids, that these are a useful tool that they can, with the right training, learn to use successfully.

Rosemary is a herb long associated with memory. A small study now provides some support for the association, and for the possible benefits of aromatherapy. And a rat study indicates that your attitude to work might change how stimulants affect you.

A small study involving 20 people has found that those who were exposed to 1,8-cineole, one of the main chemical components of rosemary essential oil, performed better on mental arithmetic tasks. Moreover, there was a dose-dependent relationship — higher blood concentrations of the chemical were associated with greater speed and accuracy.

Participants were given two types of test: serial subtraction and rapid visual information processing. These tests took place in a cubicle smelling of rosemary. Participants sat in the cubicle for either 4, 6, 8, or 10 minutes before taking the tests (this was in order to get a range of blood concentrations). Mood was assessed both before and after, and blood was tested at the end of the session.

While blood levels of the chemical correlated with accuracy and speed on both tasks, the effects were significant only for the mental arithmetic task.

Participants didn’t know that the scent was part of the study, and those who asked about it were told it was left over from a previous study.

There was no clear evidence that the chemical improved attention, but there was a significant association with one aspect of mood, with higher levels of the scent correlating with greater contentment. Contentment was the only aspect of mood that showed such a link.

It’s suggested that this chemical compound may affect learning through its inhibiting effect on acetylcholinesterase (an important enzyme in the development of Alzheimer's disease). Most Alzheimer’s drugs are cholinesterase inhibitors.

While this is very interesting (although obviously a larger study needs to confirm the findings), what I would like to see is the effects on more prolonged mental efforts. It’s also a little baffling to find the effect being limited to only one of these tasks, given that both involve attention and working memory. I would also like to see the rosemary-infused cubicle compared to some other pleasant smell.

Interestingly, a very recent study also suggests the importance of individual differences. A rat study compared the effects of amphetamines and caffeine on cognitive effort. First of all, giving the rats the choice of easy or hard visuospatial discriminations revealed that, as with humans, individuals could be divided into those who tended to choose difficult trials (“workers”) and those who preferred easy ones (“slackers”). (Easy trials took less effort, but earned commensurately smaller reward.)

Amphetamine, it was found, made the slackers work harder, but made the workers take it easier. Caffeine, too, made the workers slack off, but had no effect on slackers.

The extent to which this applies to humans is of course unknown, but the idea that your attitude to cognitive effort might change how stimulants affect you is an intriguing one. And of course this is a more general reminder that factors, whatever they are, have varying effects on individuals. This is why it’s so important to have a large sample size, and why, as an individual, you can’t automatically assume that something will benefit you, whatever the research says.

But in the case of rosemary oil, I can’t see any downside! Try it out; maybe it will help.

While sports training benefits the spatial skills of both men and women, music training closes the gender gap by only helping women.

I talked recently about how the well-established difference in spatial ability between men and women apparently has a lot to do with confidence. I also mentioned in passing that previous research has shown that training can close the gender gap. A recent study suggests that this training may not have to be specific to spatial skills.

In the German study, 120 students were given a processing speed test and a standard mental rotation test. The students were evenly divided into three groups: musicians, athletes, and education students who didn’t participate in either sports or music.

While the expected gender gap was found among the education students, the gap was smaller among the sports students, and non-existent in the music students.

Among the education students, men got twice as many rotation problems correct as women. Among the sports students, both men and women did better than their peers in education, but since they were both about equally advantaged, a gender gap was still maintained. However, among the musicians, it was only women who benefited, bringing them up to the level of the men.

Thus, for males, athletes did best on mental rotation; for females, musicians did best.

Although it may be that those who went into music or sports had relevant “natural abilities”, the amount of training in sports/music did have a significant effect. Indeed, analysis found that the advantage of sports and music students disappeared when hours of practice and years of practicing were included.

Interestingly, too, there was an effect of processing speed. Although overall the three groups didn’t differ in processing speed, male musicians had a lower processing speed than female musicians, or male athletes (neither of which groups were significantly different from each other).

It is intriguing that music training should only benefit females’ spatial abilities. However, I’m reminded that in research showing how a few hours of video game training can help females close the gender gap, females benefited from the training far more than men. The obvious conclusion is that the males already had sufficient experience, and a few more hours were neither here nor there. Perhaps the question should rather be: why does sports practice benefit males’ spatial skills? A question that seems to point to the benefits for processing speed, but then we have to ask why sports didn’t have the same effect on women. One possible answer here is that the women had engaged in sports for a significantly shorter time (an average of 10.6 years vs 17.55), meaning that the males tended to begin their sports training at a much younger age. There was no such difference among the musicians.

(For more on spatial memory, see the aggregated news reports)

Pietsch, S., & Jansen, P. (2012). Different mental rotation performance in students of music, sport and education. Learning and Individual Differences, 22(1), 159-163. Elsevier Inc. doi:10.1016/j.lindif.2011.11.012

Comparing performance on an IQ test when it is given under normal conditions and when it is given in a group situation reveals that IQ drops in a group setting, and for some (mostly women) it drops dramatically.

This is another demonstration of stereotype threat, which is also a nice demonstration of the contextual nature of intelligence. The study involved 70 volunteers (average age 25; range 18-49), who were put in groups of 5. Participants were given a baseline IQ test, on which they were given no feedback. The group then participated in a group IQ test, in which 92 multiple-choice questions were presented on a monitor (both individual and group tests were taken from Cattell’s culture fair intelligence test). Each question appeared to each person at the same time, for a pre-determined time. After each question, they were provided with feedback in the form of their own relative rank within the group, and the rank of one other group member. Ranking was based on performance on the last 10 questions. Two members of each group had their brain activity monitored.
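To make the feedback procedure concrete, here’s a minimal sketch of how such rolling ranks might be computed — the 10-question window is from the study, but the competition-style tie-handling and the function names are my assumptions:

```python
from collections import deque

WINDOW = 10  # rank is based on the last 10 questions only

def make_group(n_members: int) -> list[deque]:
    """One bounded history of recent answers per group member."""
    return [deque(maxlen=WINDOW) for _ in range(n_members)]

def record_answer(history: deque, correct: bool) -> None:
    history.append(correct)  # old answers fall out of the window

def current_ranks(group: list[deque]) -> list[int]:
    """Rank 1 = best recent performance; ties share the higher rank."""
    scores = [sum(h) for h in group]
    return [1 + sum(other > s for other in scores) for s in scores]

# After each question, a participant would be shown their own rank
# plus the rank of one other group member.
```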

Here’s the remarkable thing. If you gather together individuals on the basis of similar baseline IQ, then you can watch their IQ diverge over the course of the group IQ task, with some dropping dramatically (e.g., 17 points from a mean IQ of 126). Moreover, even those little affected still dropped some (8 points from a mean IQ of 126).

Data from the 27 brain scans (one had to be omitted for technical reasons) suggest that everyone was initially hindered by the group setting, but ‘high performers’ (those who ended up scoring above the median) managed to largely recover, while ‘low performers’ (those who ended up scoring below the median) never did.

Personality tests carried out after the group task found no significant personality differences between high and low performers, but gender was a significant variable: 10/13 high performers were male, while 11/14 low performers were female (remember, there was no difference in baseline IQ — this is not a case of men being smarter!).

There were significant differences between the high and low performers in activity in the amygdala and the right lateral prefrontal cortex. Specifically, all participants had an initial increase in amygdala activation and diminished activity in the prefrontal cortex, but by the end of the task, the high-performing group showed decreased amygdala activation and increased prefrontal cortex activation, while the low performers didn’t change. This may reflect the high performers’ greater ability to reduce their anxiety. Activity in the nucleus accumbens was similar in both groups, and consistent with the idea that the students had expectations about the relative ranking they were about to receive.

It should be pointed out that the specific feedback given — the relative ranking — was not a factor. What’s important is that it was being given at all, and the high performers were those who became less anxious as time went on, regardless of their specific ranking.

There are three big lessons here. One is that social pressure significantly depresses talent (meetings make you stupid?), and this seems to be worse when individuals perceive themselves to have a lower social rank. The second is that our ability to regulate our emotions is important, and something we should put more energy into. And the third is that we’ve got to shake ourselves loose from the idea that IQ is something we can measure in isolation. Social context matters.

A series of experiments has found that confidence fully accounted for women’s poorer performance on a mental rotation task.

One of the few established cognitive differences between men and women lies in spatial ability. But in recent years, this ‘fact’ has been shaken by evidence that training can close the gap between the genders. In this new study, 545 students were given a standard 3D mental rotation task, while at the same time manipulating their confidence levels.

In the first experiment, 70 students were asked to rate their confidence in each answer. They could also choose not to answer. Confidence level was significantly correlated with performance both between and within genders.

On the face of it, these findings could be explained, of course, by the ability of people to be reliable predictors of their own performance. However, the researchers claim that regression analysis shows clearly that when the effect of confidence was taken into account, gender differences were eliminated. Moreover, gender significantly predicted confidence.

But of course this is still just indicative.

In the next experiment, however, the researchers tried to reduce the effect of confidence. One group of 87 students followed the same procedure as in the first experiment (“omission” group), except they were not asked to give confidence ratings. Another group of 87 students was not permitted to miss out any questions (“commission” group). The idea here was that confidence underlay the choice of whether or not to answer a question, so while the first group should perform similarly to those in the first experiment, the second group should be less affected by their confidence level.

This is indeed what was found: men significantly outperformed women in the first condition, but didn’t in the second condition. In other words, it appears that the mere possibility of not answering makes confidence an important factor.

In the third experiment, 148 students replicated the commission condition of the second experiment with the additional benefit of being allowed unlimited time. Half of the students were required to give confidence ratings.

The advantage of unlimited time improved performance overall. More importantly, the results confirmed those produced earlier: confidence ratings produced significant gender differences; there were no gender differences in the absence of such ratings.

In the final experiment, 153 students were required to complete an intentionally difficult line judgment task, which men and women both carried out at near chance levels. They were then randomly informed that their performance had been either above average (‘high confidence’) or below average (‘low confidence’). Having manipulated their confidence, the students were then given the standard mental rotation task (omission version).

As expected (remember this is the omission procedure, where subjects could miss out answers), significant gender differences were found. But there was also a significant difference between the high and low confidence groups. That is, telling people they had performed well (or badly) on the first task affected how well they did on the second. Importantly, women in the high confidence group performed as well as men in the low confidence group.

A study showing that those with ASD are less likely to use inner speech when planning their actions, a failure linked to their communication ability, has implications for us all.

I’ve reported before on evidence that young children do better on motor tasks when they talk to themselves out loud, and learn better when they explain things to themselves or (even better) their mother. A new study extends those findings to children with autism.

In the study, 15 high-functioning adults with Autism Spectrum Disorder and 16 controls (age and IQ matched) completed the Tower of London task, used to measure planning ability. This task requires you to move five colored disks on three pegs from one arrangement to another in as few moves as possible. Participants did the task under normal conditions as well as under an 'articulatory suppression' condition whereby they had to repeat out loud a certain word ('Tuesday' or 'Thursday') throughout the task, preventing them from using inner speech.

Those with ASD did significantly worse than the controls in the normal condition (although the difference wasn’t large), but they did significantly better in the suppression condition — not because their performance changed, but because the controls were significantly impaired by having their inner speech disrupted.

On an individual basis, nearly 90% of the control participants did significantly worse on the Tower of London task when inner speech was prevented, compared to only a third of those with ASD. Moreover, the size of the effect among those with ASD was correlated with measures of communication ability (but not with verbal IQ).

A previous experiment had confirmed that these neurotypical and autistic adults both showed similar patterns of serial recall for labeled pictures. Half the pictures had phonologically similar labels (bat, cat, hat, mat, map, rat, tap, cap), and the other half had phonologically dissimilar labels (drum, shoe, fork, bell, leaf, bird, lock, fox). Both groups were significantly affected by phonological similarity, and both groups were significantly affected when inner speech was prevented.

In other words, this group of ASD adults were perfectly capable of inner speech, but they were much less inclined to use it when planning their actions.

It seems likely that, rather than using inner speech, they were relying on their visuospatial abilities, which tend to be higher in individuals with ASD. Supporting this, visuospatial ability (measured by the block design subtest of the WAIS) was highly correlated with performance on the Tower of London test — an association that may not seem surprising, but which was minimal in the control participants.

Complex planning is said to be a problem for many with ASD. It’s also suggested that the relative lack of inner speech use might contribute to some of the repetitive behaviors common in people with autism.

Strategies targeted at encouraging inner speech may therefore help those with ASD develop such skills. Such strategies include encouraging children to describe their actions out loud, and providing “parallel talk”, whereby an observer plays alongside the child while verbalizing the child’s actions.

It is also suggested that children with ASD could benefit from verbal learning of their daily schedule at school, rather than the visual timetables that are currently a common approach. This could occur in stages, moving from pictures to symbols, then symbols with words, before finally using words only.

ASD is estimated to occur in 1% of the population, but perhaps this problem could be considered more widely. Rather than seeing this as an issue limited to those with ASD, we should see this as a pointer to the usefulness of inner speech, and its correlation with communication skills. As one of the researchers said: "These results show that inner speech has its roots in interpersonal communication with others early in life, and it demonstrates that people who are poor at communicating with others will generally be poor at communicating with themselves.”

One final comment: a distinction has been made between “dialogic” and “monologic” inner speech, where dialogic speech refers to a kind of conversation between different perspectives on reality, and monologic speech is simply a commentary to oneself about the state of affairs. It may be that it is specifically dialogic inner speech that is so helpful for problem-solving. It has been suggested that ASD is marked by a reduction in this kind of inner speech only, and the present researchers suggest further that it is this form of speech that may have inherently social origins and require training or experience in communicating with others.

The corollary is that such individual differences will only matter in situations where dialogic inner speech is useful for achieving the task.

Clearly there is a need for much more research in this area, but it certainly provides food for thought.

A comparison of the brains of London taxi drivers before and after their lengthy training shows clearly that the increase in hippocampal gray matter develops with training, but this may come at the expense of other brain functions.

The evidence that adult brains could grow new neurons was a game-changer, and has spawned all manner of products to try and stimulate such neurogenesis, to help fight back against age-related cognitive decline and even dementia. An important study in the evidence for the role of experience and training in growing new neurons was Maguire’s celebrated study of London taxi drivers, back in 2000.

The small study, involving 16 male, right-handed taxi drivers with an average experience of 14.3 years (range 1.5 to 42 years), found that the taxi drivers had significantly more grey matter (neurons) in the posterior hippocampus than matched controls, while the controls showed relatively more grey matter in the anterior hippocampus. Overall, these balanced out, so that the volume of the hippocampus as a whole wasn’t different for the two groups. The volume in the right posterior hippocampus correlated with the amount of experience the driver had (the correlation remained after age was accounted for).

The posterior hippocampus is preferentially involved in spatial navigation. The fact that only the right posterior hippocampus showed an experience-linked increase suggests that the right and left posterior hippocampi are involved in spatial navigation in different ways. The decrease in anterior volume suggests that the need to store increasingly detailed spatial maps brings about a reorganization of the hippocampus.

But (although the experience-related correlation is certainly indicative) it could be that those who manage to become licensed taxi drivers in London are those who have some innate advantage, evidenced in a more developed posterior hippocampus. Only around half of those who go through the strenuous training program succeed in qualifying — London taxi drivers are unique in the world in being required to undergo a lengthy training period and pass stringent exams, demonstrating their knowledge of London’s 25,000 streets and their idiosyncratic layout, plus 20,000 landmarks.

In this new study, Maguire and her colleague made a more direct test of this question. 79 trainee taxi drivers and 31 controls took cognitive tests and had their brains scanned at two time points: at the beginning of training, and 3-4 years later. Of the 79 would-be taxi drivers, only 39 qualified, giving the researchers three groups to compare.

There were no differences in cognitive performance or brain scans between the three groups at time 1 (before training). At time 2 however, when the trainees had either passed the test or failed to acquire the Knowledge, those trainees that qualified had significantly more gray matter in the posterior hippocampus than they had had previously. There was no change in those who failed to qualify or in the controls.

Unsurprisingly, both qualified and non-qualified trainees were significantly better at judging the spatial relations between London landmarks than the control group. However, qualified trainees – but not the trainees who failed to qualify – were worse than the other groups at recalling a complex visual figure after 30 minutes. Such a finding replicates previous findings of London taxi drivers. In other words, their improvement in spatial memory as it pertains to London seems to have come at a cost.

Interestingly, there was no detectable difference in the structure of the anterior hippocampus, suggesting that these changes develop later, in response to changes in the posterior hippocampus. However, the poorer performance on the complex figure test may be an early sign of changes in the anterior hippocampus that are not yet measurable in an MRI scan.

The ‘Knowledge’, as it is known, provides a lovely real-world example of expertise. Unlike most other examples of expertise development (e.g. music, chess), it is largely unaffected by childhood experience (there may be some London taxi drivers who began deliberately working on their knowledge of London streets in childhood, but it is surely not common!); it is developed through a training program over a limited time period common to all participants; and its participants are of average IQ and education (average school-leaving age was around 16.7 years for all groups; average verbal IQ was around or just below 100).

So what underlies this development of the posterior hippocampus? If the qualified and non-qualified trainees were comparable in education and IQ, what determined whether a trainee would ‘build up’ his hippocampus and pass the exams? The obvious answer is hard work / dedication, and this is borne out by the fact that, although the two groups were similar in the length of their training period, those who qualified spent significantly more time training every week (an average of 34.5 hours a week vs 16.7 hours). Those who qualified also attended far more tests (an average of 15.6 vs 2.6).

While neurogenesis is probably involved in this growth within the posterior hippocampus, it is also possible that growth reflects increases in the number of connections, or in the number of glia. Most probably (I think), all are involved.

There are two important points to take away from this study. One is its clear demonstration that training can produce measurable changes in a brain region. The other is the indication that this development may come at the expense of other regions (and functions).

A new study shows that some math-anxious students can overcome performance deficits through their ability to control their negative responses. The finding indicates that interventions should focus on anticipatory cognitive control.

Math anxiety can greatly lower performance on math problems, but just because you suffer from math anxiety doesn’t mean you’re necessarily going to perform badly. A study involving 28 college students has found that some of the students anxious about math performed better than other math-anxious students, and such performance differences were associated with differences in brain activity.

Math-anxious students who performed well showed increased activity in fronto-parietal regions of the brain prior to doing math problems — that is, in preparation for them. Those students who activated these regions got an average of 83% of the problems correct, compared to 88% for students with low math anxiety, and 68% for math-anxious students who didn’t activate these regions. (Students with low anxiety didn’t activate them either.)

The fronto-parietal regions activated included the inferior frontal junction, inferior parietal lobe, and left anterior inferior frontal gyrus — regions involved in cognitive control and reappraisal of negative emotional responses (e.g. task-shifting and inhibiting inappropriate responses). Such anticipatory activity in the fronto-parietal region correlated with activity in the dorsomedial caudate, nucleus accumbens, and left hippocampus during math activity. These sub-cortical regions (regions deep within the brain, beneath the cortex) are important for coordinating task demands and motivational factors during the execution of a task. In particular, the dorsomedial caudate and hippocampus are highly interconnected and thought to form a circuit important for flexible, on-line processing. In contrast, performance was not affected by activity in ‘emotional’ regions, such as the amygdala, insula, and hypothalamus.

In other words, what’s important is not your level of anxiety, but your ability to prepare yourself for it, and control your responses. What this suggests is that the best way of dealing with math anxiety is to learn how to control negative emotional responses to math, rather than trying to get rid of them.

Given that cognitive control and emotional regulation are slow to mature, it also suggests that these effects may be greater among younger students.

The findings are consistent with a theory that anxiety hinders cognitive performance by limiting the ability to shift attention and inhibit irrelevant/distracting information.

Note that students in the two groups (high and low anxiety) did not differ in working memory capacity or in general levels of anxiety.

[2600] Lyons, I. M., & Beilock S. L. (2011).  Mathematics Anxiety: Separating the Math from the Anxiety. Cerebral Cortex.


It seems that what is said by deeper male voices is remembered better by heterosexual women, while memory is impaired for higher male voices. Pitch didn’t affect the memorability of female voices.

I had to report on this quirky little study, because a few years ago I discovered Leonard Cohen’s gravelly voice and then just a few weeks ago had it trumped by Tom Waits — I adore these deep gravelly voices, but couldn’t say why. Now a study shows that women are not only sensitive to male voice pitch, but that this affects their memory.

In the first experiment, 45 heterosexual women were shown images of objects while listening to the name of the object spoken either by a man or woman. The pitch of the voice was manipulated to be high or low. After spending five minutes on a Sudoku puzzle, participants were asked to choose which of two similar but not identical versions of the object was the one they had seen earlier. After the memory test, participants were tested on their voice preferences.

Women strongly preferred the low-pitch male voice and remembered objects more accurately when they had been introduced by the deeper male voice than by the higher male voice (mean score for object recognition was 84.7% vs 77.8%). There was no significant difference in memory relating to pitch for the female voices (83.9% vs 81.7% — note that these are not significantly different from the score for the deeper male voice).

So is it that memory is enhanced for deeper male voices, or that it is impaired for higher male voices (performance on the female voices suggests the latter)? Or are both factors at play? To sort this out, the second experiment, involving a new set of 46 women, included unmanipulated male and female voices.

Once again, women were unaffected by the different variations of female voices. However, male voices produced a clear linear effect, with the unmanipulated male voices squarely in the middle of the deeper and higher versions. It appears, then, that both factors are at play: deepening a male voice enhances its memorability, while raising it impairs its memorability.

It’s thought that deeper voices are associated with more desirable traits for long-term male partners. Having a better memory for specific encounters with desirable men would allow women to compare and evaluate men according to how they might behave in different relationship contexts.

The voices used were supplied by four young adult men and four young adult women. Pitch was altered through software manipulation. Participants were told that the purpose of the experiment was to study sociosexual orientation and object preference. Contraceptive pill usage did not affect the women’s responses.

A month-long music-based program produced dramatic improvement in preschoolers’ language skills. Another study helps explain why music training helps language skills.

Music-based training 'cartoons' improved preschoolers’ verbal IQ

A study in which 48 preschoolers (aged 4-6) participated in computer-based, cognitive training programs that were projected on a classroom wall and featured colorful, animated cartoon characters delivering the lessons, has found that 90% of those who received music-based training significantly improved their scores on a test of verbal intelligence, while those who received visual art-based training did not.

The music-based training involved a combination of motor, perceptual and cognitive tasks, and included training on rhythm, pitch, melody, voice and basic musical concepts. Visual art training emphasized the development of visuo-spatial skills relating to concepts such as shape, color, line, dimension and perspective. Each group received two one-hour training sessions each day in the classroom, over four weeks.

Children’s abilities and brain function were tested before the training and five to 20 days after the end of the programs. While there were no significant changes, in the brain or in performance, in the children who participated in the visual art training, nearly all of those who took the music-based training showed large improvements on a measure of vocabulary knowledge, as well as improved accuracy and reaction time. These changes correlated with changes in brain function.

The findings add to the growing evidence for the benefits of music training for intellectual development, especially in language.

Musical aptitude relates to reading ability through sensitivity to sound patterns

Another new study points to one reason for the correlation between music training and language acquisition. In the study, 42 children (aged 8-13) were tested on their ability to read and recognize words, as well as their auditory working memory (remembering a sequence of numbers and then being able to quote them in reverse), and musical aptitude (both melody and rhythm). Brain activity was also measured.

It turned out that both music aptitude and literacy were related to the brain’s response to acoustic regularities in speech, as well as auditory working memory and attention. Compared to good readers, poor readers had reduced activity in the auditory brainstem to rhythmic rather than random sounds. Responsiveness to acoustic regularities correlated with both reading ability and musical aptitude. Musical ability (largely driven by performance in rhythm) was also related to reading ability, and auditory working memory to both of these.

It was calculated that music skill, through the functions it shares with reading (brainstem responsiveness to auditory regularities and auditory working memory), accounts for 38% of the difference in reading ability between children.

These findings are consistent with previous findings that auditory working memory is an important component of child literacy, and that positive correlations exist between auditory working memory and musical skill.

Basically what this is saying, is that the auditory brainstem (a subcortical region — that is, below the cerebral cortex, where our ‘higher-order’ functions are carried out) is boosting the experience of predictable speech in better readers. This fine-tuning may reflect stronger top-down control in those with better musical ability and reading skills. While there may be some genetic contribution, previous research makes it clear that musicians’ increased sensitivity to sound patterns is at least partly due to training.

In other words, giving young children music training is a good first step to literacy.

The children were rated as good readers if they scored 110 or above on the Test of Word Reading Efficiency, and poor readers if they scored 90 or below. There were 8 good readers and 21 poor readers. Those 13 who scored in the middle were excluded from group analyses. Good and poor readers didn’t differ in age, gender, maternal education, years of musical training, extent of extracurricular activity, or nonverbal IQ. Only 6 of the 42 children had had at least a year of musical training (of which one was a poor reader, three were average, and two were good).

Auditory brainstem responses were gathered to the speech sound /da/, which was either presented with 100% probability, or randomly interspersed with seven other speech sounds. The children heard these sounds through an earpiece in the right ear, while they listened to the soundtrack of a chosen video with the other ear.

[2603] Moreno, S., Bialystok E., Barac R., Schellenberg E. Glenn, Cepeda N. J., & Chau T. (2011).  Short-Term Music Training Enhances Verbal Intelligence and Executive Function. Psychological Science. 22(11), 1425 - 1433.

Strait, D. L., Hornickel, J., & Kraus, N. (2011). Subcortical processing of speech regularities underlies reading and music aptitude in children. Behavioral and Brain Functions, 7, 44. http://www.ncbi.nlm.nih.gov/pubmed/22005291.

Full text is available at http://www.behavioralandbrainfunctions.com/content/pdf/1744-9081-7-44.pd...

New research suggests that successful retrieval depends not only on retrieval cues, but also on your preceding brain state.

What governs whether or not you’ll retrieve a memory? I’ve talked about the importance of retrieval cues, of the match between the cue and the memory code you’re trying to retrieve, of the strength of the connections leading to the code. But these all have to do with the memory code.

Theta brainwaves, in the hippocampus especially, have been shown to be particularly important in memory function. It has been suggested that theta waves before an item is presented for processing lead to better encoding. Now a new study reveals that, when volunteers had to memorize words with a related context, they were better at later remembering the context of the word if high theta waves were evident in their brains immediately before being prompted to remember the item.

In the study, 17 students made pleasantness or animacy judgments about a series of words. Shortly afterwards, they were presented with both new and studied words, and asked to indicate whether the word was old or new, and if old, whether the word had been encountered in the context of “pleasant” or “alive”. Each trial began with a 1000 ms presentation of a simple mark for the student to focus on. Theta activity during this fixation period correlated with successful retrieval of the episodic memory relating to that item, and larger theta waves were associated with better source memory accuracy (memory for the context).

Theta activity has not been found to be particularly associated with greater attention (the reverse, if anything). It seems more likely that theta activity reflects a state of mind that is oriented toward evaluating retrieval cues (“retrieval mode”), or that it reflects reinstatement of the contextual state employed during study.

The researchers are currently investigating whether you can deliberately put your brain into a better state for memory recall.

[2333] Addante, R. J., Watrous A. J., Yonelinas A. P., Ekstrom A. D., & Ranganath C. (2011).  Prestimulus theta activity predicts correct source memory retrieval. Proceedings of the National Academy of Sciences. 108(26), 10702 - 10707.

Another study on the dramatic impact of stereotype threat on academic achievement, and how you can counter it.

In a two-part experiment, Black and White students studied the definitions of 24 obscure English words, and were later tested, in threatening or non-threatening environments. In the threatening study environment, students were told that the task would assess their "learning abilities and limitations" and "how well people from different backgrounds learn”. In the non-threatening environment, students were told that the study focused on identifying "different learning styles". When tested one to two weeks later, students were first given a low-stress warm-up exercise with half of the word definitions. Then, in order to evoke concerns about stereotypes, a test was given which was described as evaluating "your ability to learn verbal information and your performance on problems requiring verbal reasoning ability".

The effect of these different environments on the Black students was dramatic. On the non-threatening warm-up test, Black students who had studied in the threatening learning environment performed about 50% worse than Black students who had studied in the non-threatening environment. But on the ‘real’ test, for which stereotypes had been evoked, all the Black students — including those who had done fine on the warm-up — did poorly.

In the second experiment, only Black students were involved, and they all studied in the threatening environment. This time, however, half of the students were asked to begin with a "value affirmation" exercise, during which they chose values that mattered most to them and explained why. The other students were asked to write about a value that mattered little to them. A week later, students did the warm-up and the test. Black students who had written about a meaningful value scored nearly 70% better on the warm-up than black students who had written about other values.

[2348] Taylor, V. J., & Walton G. M. (2011).  Stereotype Threat Undermines Academic Learning. Personality and Social Psychology Bulletin. 37(8), 1055 - 1067.

A new study shows that preschoolers whose parents engage in the right number-talk develop an understanding of number earlier. Such understanding affects later math achievement.

At every level, later math learning depends on earlier understanding. Previous research has found that the knowledge children have of number before they start school predicts their achievement throughout elementary school.

One critical aspect of mathematical development is cardinal-number knowledge (e.g. knowing that the word ‘three’ refers to sets of three things). But being able to count doesn’t mean the child understands this principle. Children who enter kindergarten with a good understanding of the cardinal principle have been found to do better in mathematics.

Following research indicating an association between children’s knowledge of number and the amount of number talk their parents engage in, a new study recorded parental interactions for 44 young children aged 14-30 months. Five 90-minute sessions, four months apart, were recorded in the children’s home, and each instance in which parents talked about numbers with their children was noted and coded. The children were then (at nearly four years) tested on their understanding of the cardinal principle.

The study found that parents’ number talk involving counting or labeling sets of visible objects was related to children’s later cardinal-number knowledge, whereas other types of parent number talk were not. Talk of larger sets, containing more than three objects, was particularly important. This is probably because children can recognize sets of three or fewer items holistically, without counting.

Two experiments manipulating fonts to create texts that are slightly more difficult to read have found that such texts are better remembered.

It must be easier to learn when your textbook is written clearly and simply, when your teacher speaks clearly, laying the information out with such organization and clarity that everything is obvious. But the situation is not as clear-cut as it seems. Of course, organization, clarity, simplicity, are all good attributes — but maybe information can be too clearly expressed. Maybe students learn more if the information isn’t handed to them on a platter.

A recent study looked at the effects of varying the font in which a text was written, in order to vary the difficulty with which the information could be read. In the first experiment, 28 adults (aged 18-40) read a text describing three species of aliens, each with seven characteristics, about which they would be tested. The control group saw the text in 16-point Arial, while two other versions were designed to be harder to read: 12-point Comic Sans MS at 60% grayscale and 12-point Bodoni MT at 60% grayscale. These harder-to-read texts were not noticeably more difficult; they would still be easily read. Participants were given only 90 seconds to memorize the information in the lists, and then were tested on their recall of the information after some 15 minutes doing other tasks.

Those with the harder-to-read texts performed significantly better on the test than those who had the standard text (an average of 86.5% correct vs 72.8%).

In the second experiment, involving 222 high school students from six different classes (English, Physics, History, and Chemistry, and including regular, Honors, and Advanced Placement classes), the text of their worksheets (and in the case of the physics classes, PowerPoint slides) was manipulated. While some sections of the class received the materials in their normal font, others experienced the text written in either Haettenschweiler, Monotype Corsiva, Comic Sans italicized, or smeared (by moving the paper during copying).

Once again, students who read the texts in one of the difficult conditions remembered the material significantly better than those in the control condition. As in the first study, there was no difference between the difficult fonts.

While it is possible that the use of these more unusual fonts made the text more distinctive, the fonts were not so unusual as to stand out, and moreover, their novelty should have diminished over the course of the semester. It seems more likely that these findings reflect the ‘desirable difficulty’ effect. However, it should be noted that getting the ‘right’ level of difficulty is a tricky thing — you need to be in the right place of what is surely a U-shaped curve. A little too much difficulty and you can easily do far more damage than good!

Games that use the n-back task, designed to challenge working memory, may improve fluid intelligence, but only if the games are at the right level of difficulty for the individual.

It has been difficult to train individuals in such a way that they improve in general skills rather than just the specific ones used in training. However, some success has recently been achieved using what is called an “n-back” task: a series of visual and/or auditory cues is presented, and the subject must respond whenever the current cue matches the one presented n items back — starting with one item back. If the subject scores well, the number of items back is increased each round.

In the latest study, 62 elementary and middle school children completed a month of training on a computer program, five times a week, for 15 minutes at a time. While the active control group trained on a knowledge- and vocabulary-based task, the experimental group was given a demanding spatial task in which they were presented with a sequence of images at one of six locations, one at a time, at a rate of one every three seconds. The child had to press one key whenever the current image was at the same location as the one n items back in the series, and another key if it wasn’t. Both tasks employed themed graphics to make the task more appealing and game-like.

How far back the child needed to remember depended on their performance — if they were struggling, n would be decreased; if they were meeting the challenge, n would be increased.
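For those curious about the mechanics, here’s a minimal sketch of that adaptive staircase in Python — the six locations and the raise/lower rule follow the description above, but the accuracy thresholds and the simulated responder are illustrative assumptions, not details from the paper:

```python
import random

LOCATIONS = 6  # images appear at one of six screen locations

def simulated_response(is_match: bool, accuracy: float = 0.85) -> bool:
    """Hypothetical noisy responder, right 85% of the time —
    a stand-in for the child pressing the match/non-match keys."""
    return is_match if random.random() < accuracy else not is_match

def run_block(n: int, trials: int = 20) -> float:
    """Present a random location sequence and score match judgments."""
    seq = [random.randrange(LOCATIONS) for _ in range(trials)]
    correct = 0
    for i in range(n, trials):
        is_match = seq[i] == seq[i - n]
        correct += (simulated_response(is_match) == is_match)
    return correct / (trials - n)

def adapt(n: int, accuracy: float) -> int:
    """Raise n when the child is coping, lower it when struggling."""
    if accuracy >= 0.85:
        return n + 1
    if accuracy <= 0.65:
        return max(1, n - 1)
    return n

if __name__ == "__main__":
    n = 1
    for _ in range(10):  # ten blocks of a session
        acc = run_block(n)
        print(f"n={n}, accuracy={acc:.0%}")
        n = adapt(n, acc)
```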

Although the experimental and active control groups showed little difference on abstract reasoning tasks (reflecting fluid intelligence) at the end of the training, when the experimental group was divided into two subgroups on the basis of training gain, the story was different. Those who showed substantial improvement on the training task over the month were significantly better than the others, on the abstract reasoning task. Moreover, this improvement was maintained at follow-up testing three months later.

The key to success seems to be whether or not the games hit the “sweet spot” for the individual — fun and challenging, but not so challenging as to be frustrating. Those who showed the least improvement rated the game as more difficult, while those who improved the most found it challenging but not overwhelming.

You can try this task yourself at http://brainworkshop.sourceforge.net/.

Jaeggi, S. M., Buschkuehl, M., Jonides, J., & Shah, P. (2011). Short- and long-term benefits of cognitive training. Proceedings of the National Academy of Sciences of the United States of America. http://www.ncbi.nlm.nih.gov/pubmed/21670271.

[1183] Jaeggi, S. M., Buschkuehl M., Jonides J., & Perrig W. J. (2008).  From the Cover: Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences. 105(19), 6829 - 6833.

A new study confirms earlier indications that those with a high working memory capacity are better able to regulate their emotions.

Once upon a time we made a clear difference between emotion and reason. Now increasing evidence points to the necessity of emotion for good reasoning. It’s clear the two are deeply entangled.

Now a new study has found that those with a higher working memory capacity (associated with greater intelligence) are more likely to automatically apply effective emotional regulation strategies when the need arises.

The study follows on from previous research that found that people with a higher working memory capacity suppressed expressions of both negative and positive emotion better than people with lower WMC, and were also better at evaluating emotional stimuli in an unemotional manner, thereby experiencing less emotion in response to those stimuli.

In the new study, participants were given a test, then given either negative or no feedback. A subsequent test, in which participants were asked to rate their familiarity with a list of people and places (some of which were fake), evaluated whether their emotional reaction to the feedback affected their performance.

This negative feedback was quite personal. For example: "your responses indicate that you have a tendency to be egotistical, placing your own needs ahead of the interests of others"; "if you fail to mature emotionally or change your lifestyle, you may have difficulty maintaining these friendships and are likely to form insecure relations."

The false items in the test were there to check for “overclaiming” — a reaction well known to make people feel better about themselves and control their reactions to criticism. Among those who received negative feedback, those with higher levels of WMC overclaimed the most. The people who overclaimed the most also reported, at the end of the study, the fewest negative emotions.

In other words, those with a high WMC were more likely to automatically use an emotion regulation strategy. Other emotional reappraisal strategies include controlling your facial expression or changing negative situations into positive ones. Strategies such as these are often more helpful than suppressing emotion.

Schmeichel, Brandon J.; Demaree, Heath A. 2010. Working memory capacity and spontaneous emotion regulation: High capacity predicts self-enhancement in response to negative feedback. Emotion, 10(5), 739-744.

Schmeichel, Brandon J.; Volokhov, Rachael N.; Demaree, Heath A. 2008. Working memory capacity and the self-regulation of emotional expression and experience. Journal of Personality and Social Psychology, 95(6), 1526-1540. doi: 10.1037/a0013345

A new review pointing to the impact of motivation on IQ score reminds us that this factor is significant, particularly for predicting accomplishments other than academic achievement.

Whether IQ tests really measure intelligence has long been debated. A new study provides evidence that motivation is also a factor.

Meta-analysis of 46 studies where monetary incentives were used in IQ testing has revealed a large effect of reward on IQ score. The average effect was equivalent to nearly 10 IQ points, with the size of the effect depending on the size of the reward. Rewards greater than $10 produced increases roughly equivalent to 20 IQ points. The effects of incentives were greater for individuals with lower baseline IQ scores.

Follow-up on a previous study of 500 boys (average age 12.5) who were videotaped while undertaking IQ tests in the late 80s also supports the view that motivation plays a part in IQ. The tapes had been evaluated by raters trained to detect signs of boredom, and each boy had been given a motivational score on this basis. Some 12 years later, half the participants agreed to interviews about their educational and occupational achievements.

As found in other research, IQ score was found to predict various life outcomes, including academic performance in adolescence and criminal convictions, employment, and years of education in early adulthood. However, after taking into account motivational score, the predictiveness of IQ score was significantly reduced.

Differences in motivational score accounted for up to 84% of the difference in years of education (no big surprise there if you think about it), but only 25% of the differences relating to how well they had done in school during their teenage years.

In other words, test motivation can be a confounding factor that has inflated estimates of the predictive validity of IQ, but the fact that academic achievement was less affected by motivation demonstrates that high intelligence (leaving aside the whole thorny issue of what intelligence is) is still required to get a high IQ score.

This is not unexpected — from the beginning of intelligence testing, psychologists have been aware that test-takers vary in how seriously they take the test, and that this will impact on their scores. Nevertheless, the findings are a reminder of this often overlooked fact, and underline the importance of motivation and self-discipline, and the need for educators to take more account of these factors.

[2220] Duckworth, A. L., Quinn P. D., Lynam D. R., Loeber R., & Stouthamer-Loeber M. (2011).  Role of test motivation in intelligence testing. Proceedings of the National Academy of Sciences.

Receiving immediate feedback on the activity in a brain region enabled people to improve their control of that region’s activity, thus improving their concentration.

I’ve always been intrigued by neurofeedback training. But when it first raised its head, technology was far less sophisticated. Now a new study has used real-time functional Magnetic Resonance Imaging (fMRI) feedback from the rostrolateral prefrontal cortex (RLPFC) to improve people's ability to control their thoughts and focus their attention.

In the study, participants performed tasks that either raised or lowered mental introspection in 30-second intervals over four six-minute sessions. Those with access to real-time fMRI feedback could see their RLPFC activity increase during introspection and decrease during non-introspective thoughts, such as mental tasks that focused on body sensations. These participants became significantly better at controlling their thoughts and performing the mental tasks. Moreover, the improved regulation was reflected only in activity in the rostrolateral prefrontal cortex. Those given inaccurate or no brain feedback showed no such improvement.

The findings point to a means of improving attentional control, and also raise hope for clinical treatments of conditions that can benefit from improved awareness and regulation of one's thoughts, including depression, anxiety, and obsessive-compulsive disorders.

A new imaging study reveals what’s going on in the brains of expert shogi players that’s different from those of amateurs. It’s all about developing instincts.

The mental differences between a novice and an expert are only beginning to be understood, but two factors thought to be of importance are automaticity (the process by which a procedure becomes so practiced that it no longer requires conscious thought) and chunking (the unitizing of related bits of information into one tightly integrated unit — see my recent blog post on working memory). A new study adds to our understanding of this process by taking images of the brains of professional and amateur players of the Japanese chess-like game of shogi.

Eleven professional, 9 high- and 8 low-rank amateur players of shogi were presented with patterns of different types (opening shogi patterns, endgame shogi patterns, random shogi patterns, chess, Chinese chess, as well as completely different stimuli — scenes, faces, other objects, scrambled patterns).

It was found that the board game patterns, but not the other patterns, stimulated activity in the posterior precuneus of all shogi players. This activity, for the professional players, was particularly strong for shogi opening and endgame patterns, and activity in the precuneus was the only regional activity that showed a difference between these patterns and the other board game patterns. For the amateurs however, there was no differential activity for the endgame patterns, and only the high-rank amateurs showed differential activity for the opening shogi patterns. Opening patterns tend to be more stereotyped than endgame patterns (i.e., endgame patterns are better reflections of expertise).

The players were then asked for the best next move in a series of shogi problems (a) when they only had one second to study the pattern, and (b) when they had eight seconds. When professional players had only a second to study the problem, the caudate nucleus was active. When they had eight seconds, activity was confined to the cerebral cortex, as it was for the amateurs in both conditions. This activity in the caudate, which is part of the basal ganglia, deep within the brain, is thought to reflect the development of an intuitive response.

The researchers therefore suggest that this type of intuition, an instinct achieved through training and experience, is what marks an expert. Making part of the process unconscious not only makes it faster, but frees up valuable space in working memory for aspects that need conscious thought.

The posterior precuneus directly connects with the dorsolateral prefrontal cortex, which in turn connects to the caudate. There is also a direct connection between the precuneus and the caudate. This precuneus-caudate circuit is therefore suggested as a key part of what makes a board-game expert an expert.

Two experiments indicate that judgment about how well something is learned is based on encoding fluency only for people who believe intelligence is a fixed attribute.

It’s well-established that feelings of encoding fluency are positively correlated with judgments of learning, so it’s been generally believed that people primarily use the simple rule, easily learned = easily remembered (ELER), to work out whether they’re likely to remember something (as discussed in the previous news report). However, new findings indicate that the situation is a little more complicated.

In the first experiment, 75 English-speaking students studied 54 Indonesian-English word pairs. Some of these were very easy, with the English words nearly identical to their Indonesian counterpart (e.g., Polisi-Police); others required more effort but had a connection that helped (e.g., Bagasi-Luggage); others were entirely dissimilar (e.g., Pembalut-Bandage).

Participants were allowed to study each pair for as long as they liked, then asked how confident they were about being able to recall the English word when supplied the Indonesian word on an upcoming test. They were tested at the end of their study period, and also asked to fill in a questionnaire which assessed the extent to which they believed that intelligence is fixed or changeable.

It’s long been known that theories of intelligence have important effects on people's motivation to learn. Those who believe each person possesses a fixed level of intelligence (entity theorists) tend to disengage when something is challenging, believing that they’re not up to the challenge. Those who believe that intelligence is malleable (incremental theorists) keep working, believing that more time and effort will yield better results.

The study found that those who believed intelligence is fixed did indeed follow the ELER heuristic, with their judgment of how well an item was learned nicely matching encoding fluency.

However, those who saw intelligence as malleable did not follow the rule, but rather seemed to be following the reverse heuristic: that effortful encoding indicates greater engagement in learning, and thus is a sign that they are more likely to remember. This group therefore tended to be marginally underconfident for easy items, marginally overconfident for medium-level items, and significantly overconfident for difficult items.

However, the entanglement of item difficulty and encoding fluency weakens this finding, and accordingly a second experiment separated these two attributes.

In this experiment, 41 students were presented with two lists of nine words, one list of which was in small font (18-point Arial) and one in large font (48-point Arial). Each word was displayed for four seconds. While font size made no difference to their actual levels of recall, entity theorists were much more confident of recalling the large-size words than the small-size ones. The incremental theorists were not, however, affected by font-size.

It is suggested that the failure to find evidence of a ‘non-fluency heuristic’ in this case may be because participants had no control over learning time, therefore were less able to make relative judgments of encoding effort. Nevertheless, the main finding, that people varied in their use of the fluency heuristic depending on their beliefs about intelligence, was clear in both cases.
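To make the notions of over- and underconfidence concrete, here's a minimal sketch of how such calibration is typically computed — mean judgment of learning (JOL) minus mean actual recall, per difficulty level. All the numbers are hypothetical, chosen only to mirror the pattern reported for the incremental theorists:

```python
# Calibration bias = mean predicted recall (JOL) - mean actual recall,
# computed separately for each item-difficulty level.
# All numbers are hypothetical, for illustration only.
data = {
    # difficulty: (mean JOL %, mean recall %)
    "easy":   (80, 85),   # slight underconfidence
    "medium": (65, 60),   # slight overconfidence
    "hard":   (50, 30),   # marked overconfidence
}

for difficulty, (jol, recall) in data.items():
    bias = jol - recall
    label = "overconfident" if bias > 0 else "underconfident"
    print(f"{difficulty:>6}: JOL {jol}%, recall {recall}%, bias {bias:+d} ({label})")
```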

[2182] Miele, D. B., Finn B., & Molden D. C. (2011).  Does Easily Learned Mean Easily Remembered?. Psychological Science. 22(3), 320 - 324.

A series of online experiments demonstrate that beliefs about memory, judgments of how likely you are to remember, and actual memory performance, are all largely independent of each other.

Research has shown that people are generally poor at predicting how likely they are to remember something. A recent study tested the theory that the reason we’re so often inaccurate is that we make predictions about memory based on how we feel while we're encountering the information to be learned, and that can lead us astray.

In three experiments, each involving about 80 participants ranging in age from late teens to senior citizens, participants were serially shown words in large or small fonts and asked to predict how well they'd remember each (actual font sizes depended on the participants’ browsers, since this was an online experiment and participants were in their own homes, but the larger size was four times larger than the other).

In the first experiment, each word was presented either once or twice, and participants were told if they would have another chance to study the word. The length of time the word was displayed on the first occasion was controlled by the participant. On the second occasion, words were displayed for four seconds, and participants weren’t asked to make a new prediction. At the end of the study phase, they had two minutes to type as many words as they remembered.

Recall was significantly better when an item was seen twice. Recall wasn’t affected by font size, but participants were significantly more likely to believe they’d recall those presented in larger fonts. While participants realized seeing an item twice would lead to greater recall, they greatly underestimated the benefits.

Because people so grossly discounted the benefit of a single repetition, in the next experiment the comparison was between one and four study trials. This time, participants gave more weight to having three repetitions versus none, but nevertheless, their predictions were still well below the actual benefits of the repetitions.

In the third experiment, participants were given a simplified description of the first experiment and either asked what effect they’d expect font size to have, or what effect having two study trials would have. The results (similar levels of belief in the benefits of each condition) neither resembled the results of the first experiment (indicating that those people’s predictions hadn’t been made on the basis of their beliefs about memory effects), nor the actual performance (demonstrating that people really aren’t very good at predicting their memory performance).

These findings were confirmed in a further experiment, in which participants were asked about both variables (rather than just one).

The findings confirm other evidence that (a) general memory knowledge tends to be poor, (b) personal memory awareness tends to be poor, and (c) ease of processing is commonly used as a heuristic to predict whether something will be remembered.


Addendum: a nice general article on this topic by the lead researcher Nate Kornell has just come out in Miller-McCune.

Kornell, N., Rhodes, M. G., Castel, A. D., & Tauber, S. K. (in press). The ease of processing heuristic and the stability bias: Dissociating memory, memory beliefs, and memory judgments. Psychological Science.

A new study suggests a positive mood affects attention by using up some of your working memory capacity.

Following earlier research suggesting mood affects attention, a new study tries to pin down exactly what it’s affecting.

To induce different moods, participants were shown either a video of a stand-up comedy routine or an instructional video on how to install flooring. This was followed by two tests: one of working memory capacity (the Running Memory Span), in which numbers are presented through headphones at a rate of four per second, with subjects then asked to recall the last six numbers in order, and one of response inhibition (the Stroop task).
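For readers unfamiliar with the Running Memory Span, here's a minimal sketch of the task's logic before we get to the results. Stream lengths and scoring are my assumptions; in the actual task the digits arrive through headphones, one at a time, far too fast to rehearse comfortably:

```python
import random

def run_rms_trial(target_span: int = 6) -> None:
    """One Running Memory Span trial: a digit stream of unpredictable
    length is presented, and the last `target_span` digits must be
    recalled in order. (In a real implementation each digit would be
    presented briefly and then disappear.)"""
    stream = [random.randint(0, 9) for _ in range(random.randint(12, 20))]
    print("Stream:", " ".join(map(str, stream)))
    answer = stream[-target_span:]                    # last six digits, in order
    response = input(f"Type the last {target_span} digits in order: ")
    reported = [int(ch) for ch in response if ch.isdigit()]
    # Strict positional scoring: credit only digits in the correct position.
    correct = sum(r == a for r, a in zip(reported, answer))
    print(f"Correct positions: {correct}/{target_span} (answer was {answer})")

if __name__ == "__main__":
    run_rms_trial()
```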

Those who watched the comedy routine performed significantly worse on the RMS task but not on the Stroop task. To confirm these results, a second experiment used a different measure of response inhibition, the Flanker task. Again, those in a better mood performed worse on the span task but not the inhibition task.

These findings point to mood affecting storage capacity — something we already had evidence for in the case of negative mood, like anxiety, but a little more surprising to find it also applies to happy moods. Basically, it seems as if any emotion, whether good or bad, is likely to leave you less room in your working memory store for information processing. That shouldn’t be taken as a cue to go all Spock! But it’s something to be aware of.

[2180] Martin, E. A., & Kerns J. G. (2011).  The influence of positive mood on different aspects of cognitive control. Cognition & Emotion. 25(2), 265 - 265.

A new study suggests we lose focus because of habituation, and we can ‘reset’ our attention by briefly switching to another task before returning.

We’ve all experienced the fading of our ability to concentrate when we’ve been focused on a task for too long. The dominant theory of why this should be so has been around for half a century, and describes attention as a limited resource that gets ‘used up’. Well, attention is assuredly a limited resource in the sense that you only have so much of it to apply. But is it limited in the sense of being used up and needing to refresh? A new study indicates that it isn’t.

The researchers make what strikes me as a cogent argument: attention is an endless resource; we are always paying attention to something. The problem is our ability to maintain attention on a single task without respite. Articulated like this, we are immediately struck by the parallel with perception. Any smell, touch, sight, sound, that remains constant eventually stops registering with us. We become habituated to it. Is that what’s happening with attention? Is it a form of habituation?

In an experimental study, 84 volunteers were tested on their ability to focus on a repetitive computerized task for 50 minutes under various conditions: one group had no breaks or distractions; two groups memorized four digits beforehand and were told to respond if they saw them on the screen during the task (but only one group were shown them during the task); one group were shown the digits but told to ignore them if they saw them.

As expected, performance declined significantly over the course of the task for most participants — with the exception of those who were twice shown the memorized digits and had to respond to them. That was all it took, a very brief break in the task, and their focus was maintained.

The finding suggests that prolonged attention to a single task actually hinders performance, but briefly deactivating and reactivating your goals is all you need to stay focused.

A three-month trial comparing the effects of exercise programs on cognitive function in sedentary, overweight children, has found dose-related benefits of regular aerobic exercise.

A study involving 171 sedentary, overweight 7- to 11-year-old children has found that those who participated in an exercise program improved both executive function and math achievement. The children were randomly assigned either to a group that got 20 minutes of aerobic exercise in an after-school program, one that got 40 minutes of exercise in a similar program, or a group that had no exercise program. Those who got the greater amount of exercise improved more. Brain scans also revealed increased activity in the prefrontal cortex and reduced activity in the posterior parietal cortex for those in the exercise groups.

The program lasted around 13 weeks. The researchers are now investigating the effects of continuing the program for a full year. Gender, race, socioeconomic factors or parental education did not change the impact of the exercise program.

The effects are consistent with other studies involving older adults. It should be emphasized that these were sedentary, overweight children. These findings are telling us what the lack of exercise is doing to young minds. I note the report just previous, about counteracting what we have regarded as “normal” brain atrophy in older adults by the simple action of walking for 40 minutes three times a week. Children and older adults might be regarded as our canaries in the coal mine, more vulnerable to many factors that can affect the brain. We should take heed.

Students who watched a video of a laughing baby or listened to a peppy Mozart piece performed better on a classification task.

A link between positive mood and creativity is supported by a study in which 87 students were put into different moods (using music and video clips) and then given a category learning task to do (classifying sets of pictures with visually complex patterns). There were two category tasks: one involved classification on the basis of a rule that could be verbalized; the other was based on a multi-dimensional pattern that could not easily be verbalized.

Happy volunteers were significantly better at learning the rule to classify the patterns than sad or neutral volunteers. There was no difference between those in a neutral mood and those in a negative mood.

It had been theorized that positive mood might only affect processes that require hypothesis testing and rule selection. The mechanism by which this might occur is through increased dopamine levels in the frontal cortex. Interestingly, however, although mood made no difference to overall performance on the task that couldn’t easily be verbalized, analysis based on how closely the subjects’ responses matched an optimal strategy for that task found that, again, positive mood was of significant benefit.

The researchers suggest that this effect of positive mood may be the reason behind people liking to watch funny videos at work — they’re trying to enhance their performance by putting themselves in a good mood.

The music and video clips were rated for their mood-inducing effects. Mozart’s “Eine Kleine Nachtmusik—Allegro” was the highest rated music clip (at an average rating of 6.57 on a 7-point scale), Vivaldi’s Spring was next at 6.14. The most positive video was that of a laughing baby (6.57 again), with Whose Line is it Anyway sound effects scoring close behind (6.43).

[2054] Nadler, R. T., Rabi R., & Minda J. P. (2010).  Better Mood and Better Performance. Psychological Science. 21(12), 1770 - 1776.

A five-week training program to improve working memory has significantly improved working memory, attention, and organization in many children and adolescents with ADHD.

A working memory training program developed to help children with ADHD has been tested by 52 students, aged 7 to 17. Between a quarter and a third of the children showed significant improvement in inattention, overall number of ADHD symptoms, initiation, planning/organization, and working memory, according to parental ratings. While teacher ratings were positive, they did not quite reach significance. It is worth noting that this improvement was maintained at the four-month follow-up.

The children used the software in their homes, under the supervision of their parents and the researchers. The program includes a set of 25 exercises in a computer-game format that students had to complete within five to six weeks. For example, in one exercise a robot speaks numbers in a certain order, and the student has to click on those numbers on the screen in the reverse order (see the sketch below). Each session is 30 to 40 minutes long, and the exercises become progressively harder as the students improve.
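The logic of such an exercise is easy to picture in code. Here's a minimal sketch, including the adaptive difficulty; it's my own illustration of the idea, not Cogmed's implementation:

```python
import random

def backwards_span_session(trials: int = 10, start_len: int = 3) -> None:
    """An adaptive backwards-span exercise: a digit sequence is 'spoken',
    and the trainee must enter it in reverse order. The sequence grows
    after a success and shrinks after a failure."""
    length = start_len
    for trial in range(1, trials + 1):
        sequence = [random.randint(0, 9) for _ in range(length)]
        print(f"Trial {trial}: robot says {sequence}")
        response = input("Enter the digits in REVERSE order: ")
        reported = [int(ch) for ch in response if ch.isdigit()]
        if reported == sequence[::-1]:
            print("Correct!")
            length += 1                                # make it harder
        else:
            print(f"Not quite -- the answer was {sequence[::-1]}")
            length = max(start_len, length - 1)        # ease off

if __name__ == "__main__":
    backwards_span_session()
```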

The software was developed by a Swedish company called Cogmed in conjunction with the Karolinska Institute. Earlier studies in Sweden have been promising, but this is the first study in the United States, and the first to include children on medication (60% of the participants).

Being actively involved improves learning significantly, and new research shows that the hippocampus is at the heart of this process.

We know active learning is better than passive learning, but for the first time a study gives us some idea of how that works. Participants in the imaging study were asked to memorize an array of objects and their exact locations in a grid on a computer screen. Only one object was visible at a time. Those in the “active study” group used a computer mouse to guide the window revealing the objects, while those in the “passive study” group watched a replay of the window movements recorded in a previous trial by an active subject. They were then tested by having to place the items in their correct positions. After a trial, the active and passive subjects switched roles and repeated the task with a new array of objects.

The active learners learned the task significantly better than the passive learners. Better spatial recall correlated with higher and better coordinated activity in the hippocampus, dorsolateral prefrontal cortex, and cerebellum, while better item recognition correlated with higher activity in the inferior parietal lobe, parahippocampal cortex and hippocampus.

The critical role of the hippocampus was supported when the experiment was replicated with those who had damage to this region — for them, there was no benefit in actively controlling the viewing window.

This is something of a surprise to researchers. Although the hippocampus plays a crucial role in memory, it has been thought of as a passive participant in the learning process. This finding suggests that it is actually part of an active network that controls behavior dynamically.

A study involving skilled typists shows how the part of a person that does the thinking relies on different feedback than the part that does the doing.

There are a number of ways experts think differently from novices (in their area of expertise). A new study involving 72 college-age typists with about 12 years of typing experience and typing speeds comparable to professional typists indicates that our idea that highly skilled activities operate at an unconscious level is a little more complex than we thought.

In three experiments, these skilled typists typed single words shown to them one at a time on a computer screen, while occasionally the researchers inserted errors in the words they typed, or corrected errors they made. When asked to report errors, typists took credit for corrected errors and accepted blame for inserted errors, claiming authorship for the appearance of the screen. Not surprising in the first experiment, when the typists weren’t told what the researchers were doing. But even in the later experiments, when they knew some of the errors and some of the corrections weren’t theirs, they still tended to take responsibility for what they saw.

Nevertheless, regardless of what they saw and what they thought, their typing rate wasn’t affected by inserted errors. Only when the typists themselves made errors, regardless of whether or not the researchers corrected them, did their fingers slow down.

In other words, it wasn’t the feedback of the look of the word on the screen that triggered the finger slow-down, but the ‘knowledge’ the fingers had as to what they had done.

But it was the appearance of the words on the screen that governed the typists’ reporting of errors, leading the researchers to propose two error detection processes: an outer loop that supports conscious reports and an inner loop process that slows keystrokes after errors.

Logan, G.D. & Crump, M.J.C. 2010. Cognitive Illusions of Authorship Reveal Hierarchical Error Detection in Skilled Typists. Science, 330 (6004), 683-686. http://www.sciencemag.org/content/330/6004/683.abstract?sid=140a96b9-ef5...

A month-long training program has enabled volunteers to instantly recognize very faint patterns.

In a study in which 14 volunteers were trained to recognize a faint pattern of bars on a computer screen, with the pattern becoming progressively fainter as training progressed, the volunteers became able to recognize fainter and fainter patterns over some 24 days of training, and this correlated with stronger EEG signals from their brains as soon as the pattern flashed on the screen. The findings indicate that learning modified the very earliest stage of visual processing.

The findings could help shape training programs for people who must learn to detect subtle patterns quickly, such as doctors reading X-rays or air traffic controllers monitoring radars, and may also help improve training for adults with visual deficits such as lazy eye.

The findings are also noteworthy for showing that learning is not confined to ‘higher-order’ processes, but can occur at even the most basic, unconscious and automatic, level of processing.

Studies involving gentle electrical stimulation to the scalp confirm crucial brain regions and demonstrate improved learning for specific knowledge.

In a study involving 15 young adults, a very small electrical current delivered to the scalp above the right anterior temporal lobe significantly improved their memory for the names of famous people (by 11%). Memory for famous landmarks was not affected. The findings support the idea that the anterior temporal lobes are critically involved in the retrieval of people's names.

A follow-up study is currently investigating whether transcranial direct current stimulation (tDCS) will likewise improve name memory in older adults — indeed, because their level of recall is likely to be lower, it is hoped that the procedure will have a greater effect. If so, the next question is whether repeating tDCS may lead to longer lasting improvement. The procedure may offer hope for rehabilitation for stroke or other neurological damage.

This idea receives support from another recent study, in which 15 students spent six days learning a series of unfamiliar symbols that corresponded to the numbers zero to nine, and also had daily sessions of tDCS. Five students were given 20 minutes of stimulation above the right parietal lobe; five had 20 minutes of stimulation above the left parietal lobe, and five experienced only 30 seconds of stimulation — too short to induce any permanent changes.

The students were tested on the new number system at the end of each day. After four days, those who had experienced current to the right parietal lobe performed as well as they would be expected to do with normal numbers. However, those who had experienced the stimulation to the left parietal lobe performed significantly worse. The control students performed at a level between the two other groups.

Most excitingly, when the students were tested six months later, they performed at the same level, indicating the stimulation had a durable effect. However, it should be noted that the effects were small and highly variable, and were limited to the new number system. While it may be that one day this sort of approach will be of benefit to those with dyscalculia, more research is needed.

Indications that talking provides mental stimulation that helps sharpen your brain are supported and explained by new evidence that particular types of conversation are beneficial.

Following on from earlier research suggesting that simply talking helps keep your mind sharp at all ages, a new study involving 192 undergraduates indicates that the type of talking makes a difference. Engaging in brief (10-minute) conversations in which participants were simply instructed to get to know another person resulted in boosts to their executive function (the processes involved in working memory, planning, decision-making, and so on). However when participants engaged in conversations that had a competitive edge, their performance showed no improvement. The improvement was limited to executive function; neither processing speed nor general knowledge was affected.

Further experiments indicated that competitive discussion could boost executive function — if the conversations were structured to allow for interpersonal engagement. The crucial factor seems to be the process of getting into another person’s mind and trying to see things from their point of view (something most of us do naturally in conversation).

The findings also provide support for the social brain hypothesis — that we evolved our larger brains to help us deal with large social groups. They also support earlier speculations by the researcher, that parents and teachers could help children improve their intellectual skills by encouraging them to develop their social skills.

Images of nature have been found to improve attention. A new study shows that natural scenes encourage different brain regions to synchronize.

A couple of years ago I reported on a finding that walking in the park, and (most surprisingly) simply looking at photos of natural scenes, could improve memory and concentration (see below). Now a new study helps explain why. The study examined brain activity while 12 male participants (average age 22) looked at images of tranquil beach scenes and non-tranquil motorway scenes. On half the presentations they concurrently listened to the same sound associated with both scenes (waves breaking on a beach and traffic moving on a motorway produce a similar sound, perceived as a constant roar).

Intriguingly, the natural, tranquil scenes produced significantly greater effective connectivity between the auditory cortex and medial prefrontal cortex, and between the auditory cortex and posterior cingulate gyrus, temporoparietal cortex and thalamus. It’s of particular interest that this is an example of visual input affecting connectivity of the auditory cortex, in the presence of identical auditory input (which was the focus of the research). But of course the take-home message for us is that the benefits of natural scenes for memory and attention have been supported.

Previous study:

Many of us who work indoors are familiar with the benefits of a walk in the fresh air, but a new study gives new insight into why, and how, it works. In two experiments, researchers found memory performance and attention spans improved by 20% after people spent an hour interacting with nature. The intriguing finding was that this effect was achieved not only by walking in the botanical gardens (versus walking along main streets of Ann Arbor), but also by looking at photos of nature (versus looking at photos of urban settings). The findings are consistent with a theory that natural environments are better at restoring attention abilities, because they provide a more coherent pattern of stimulation that requires less effort, as opposed to urban environments that provide complex and often confusing stimulation that captures attention dramatically and requires directed attention (e.g., to avoid being hit by a car).

[1867] Hunter, M. D., Eickhoff S. B., Pheasant R. J., Douglas M. J., Watts G. R., Farrow T. F. D., et al. (2010).  The state of tranquility: Subjective perception is shaped by contextual modulation of auditory connectivity. NeuroImage. 53(2), 611 - 618.

[279] Berman, M. G., Jonides J., & Kaplan S. (2008).  The cognitive benefits of interacting with nature. Psychological Science: A Journal of the American Psychological Society / APS. 19(12), 1207 - 1212.

Male superiority in mental rotation is the most-cited gender difference in cognitive abilities. A new study shows that the difference can be eliminated in 6-year-olds after a mere 8 weeks.

Following a monkey study that found training in spatial memory could raise females to the level of males, and human studies suggesting that video games might help reduce gender differences in spatial processing (see below for these), a new study shows that training in spatial skills can eliminate the gender difference in young children. Spatial ability, along with verbal skills, is one of the two most-cited cognitive differences between the sexes, because these two differences appear to be the most robust.

This latest study involved 116 first graders, half of whom were put in a training program that focused on expanding working memory, perceiving spatial information as a whole rather than concentrating on details, and thinking about spatial geometric pictures from different points of view. The other children took part in a substitute training program, as a control group. Initial gender differences in spatial ability disappeared for those who had been in the spatial training group after only eight weekly sessions.

Previously:

A study of 90 adult rhesus monkeys found young-adult males had better spatial memory than females, but peaked early. By old age, male and female monkeys had about the same performance. This finding is consistent with reports suggesting that men show greater age-related cognitive decline relative to women. A second study of 22 rhesus monkeys showed that in young adulthood, simple spatial-memory training did not help males but dramatically helped females, raising their performance to the level of young-adult males and wiping out the gender gap.

Another study showing that expert video gamers have improved mental rotation skills, visual and spatial memory, and multitasking skills has led researchers to conclude that training with video games may serve to reduce gender differences in visual and spatial processing, and some of the cognitive declines that come with aging.

A study of joint decision-making has found collaborative decisions are better, unless one of the individuals is unknowingly working with flawed information.

There’s been a lot of discussion, backed by some evidence, that groups are ‘smarter’ than the individuals in them, that groups make better decisions than individuals. But it is not, of course, as simple as that, and a recent study speaks to the limits of this principle. The study involved pairs of volunteers who were asked to detect a very weak signal that was shown on a computer screen. If they disagreed about when the signal occurred, then they talked together until they agreed on a joint decision. The results showed that joint decisions were better than the decision made by the better-performing individual (as long as they could talk it over).

However, when one of the participants was sometimes surreptitiously made incompetent by being shown a noisy image in which the signal was much more difficult to see, the joint decisions were worse than the decisions of the better performing partner. In other words, working with others can have a detrimental effect if one person is working with flawed information, or is incompetent but doesn't know it. Successful group decision-making and problem-solving requires the participants to be able to accurately judge their level of confidence.
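A toy simulation makes the result intuitive. Below, each partner sees a noisy measurement of the same signal, and the pair goes with whichever partner is more confident (i.e., whose sample is further from the decision boundary). When the partners are equally reliable, the pair beats either individual; when one partner's input is secretly much noisier, the pair does worse than the better individual. All parameters are invented for illustration — this is not the authors' model:

```python
import random

def simulate(sigma_a: float, sigma_b: float, n: int = 100_000) -> None:
    """Two observers judge whether a signal is +1 or -1 from noisy samples.
    The joint decision follows whichever sample is further from zero
    (a simple confidence-sharing rule)."""
    correct_a = correct_b = correct_joint = 0
    for _ in range(n):
        s = random.choice([-1, 1])
        xa = s + random.gauss(0, sigma_a)
        xb = s + random.gauss(0, sigma_b)
        correct_a += (xa > 0) == (s > 0)
        correct_b += (xb > 0) == (s > 0)
        x = xa if abs(xa) > abs(xb) else xb            # more "confident" wins
        correct_joint += (x > 0) == (s > 0)
    print(f"sigmas ({sigma_a}, {sigma_b}): A {correct_a/n:.3f}, "
          f"B {correct_b/n:.3f}, joint {correct_joint/n:.3f}")

simulate(1.0, 1.0)   # equally reliable: the pair beats either individual
simulate(1.0, 3.0)   # B secretly much noisier: the pair trails A
```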

[1801] Bahrami, B., Olsen K., Latham P. E., Roepstorff A., Rees G., & Frith C. D. (2010).  Optimally Interacting Minds. Science. 329(5995), 1081 - 1085.

New research confirms most students have poor study skills, and points to the effectiveness of association strategies.

No big surprise, surely: a new study has found that computers do not magically improve students’ study skills — they tend to study online material using the same techniques they would use with traditional texts. Which means, it appears, poor strategies.

More interestingly, the study found that undergraduates who used a method called SOAR (Selecting key lesson ideas, Organizing information with comparative charts and illustrations, Associating ideas to create meaningful connections, and Regulating learning through practice) scored 29 to 63% more on tests of the material compared to those who mindlessly over-copied long passages verbatim, took incomplete or linear notes, built lengthy outlines that make it difficult to connect related information, and relied on memory drills like re-reading text or recopying notes.

The study involved students first reporting on their strategies for dealing with computer-based texts, then creating study materials from an online text. Different groups were asked to (a) create notes in their own preferred format; (b) create linear notes (the S part of SOAR); (c) create graphically organized matrix notes (SO); (d) create a matrix and associations (SOA); or (e) create a matrix, associations, and practice questions (SOAR). Those using the full SOAR method did best (84% correct on testing), but the dramatic difference was between SO (37%) and SOA (72%) — pointing to the importance of connecting new material to information you already know. The S group scored an average 30%, and the controls 21%.

It’s also well worth noting that, in contradiction of self-reports made by the students at the beginning, there were no signs that students left to their own devices used any association strategies.

We know language affects what we perceive, but a new study shows it can also improve our ability to perceive, even when an object should be invisible to us.

I’ve talked about the importance of labels for memory, so I was interested to see that a recent series of experiments has found that hearing the name of an object improved people’s ability to see it, even when the object was flashed onscreen in conditions and speeds (50 milliseconds) that would render it invisible. The effect was specific to language; a visual preview didn’t help.

Moreover, those who consider their mental imagery particularly vivid scored higher when given the auditory cue (although this association disappeared when the position of the object was uncertain). The researchers suggest that hearing the image labeled evokes an image of the object, strengthening its visual representation and thus making it visible. They also suggested that because words in different languages pick out different things in the environment, learning different languages might shape perception in subtle ways.

A new study shows that daydreaming not only impairs your memory of something you’ve just experienced, but that daydreaming of distant places impairs memory more.

Context is important for memory. Therefore it’s not surprising that shifting your mind’s focus to another context can impair recall — or help you forget. Following on from research finding that thinking about something else blocks access to memories of the recent past, a new study has found that daydreaming about a more distant place impairs memory more compared to daydreaming about a closer place.

The study involved participants being presented with a number of words, then being asked to think either about home or their parents’ house (where they hadn’t been for several weeks) before being shown another list of words. They were then asked to recall as many words from both lists as they could. Those who had thought about home remembered more of the words from the first list than did those who had thought about their parents’ house. In another experiment, those who thought about a vacation within the U.S. remembered more words than those who thought about a vacation abroad.

The findings confirm the importance of context in recall, and point to ways in which you can manipulate your wandering thoughts to either help you remember or forget. I’d be interested to know what recall was like after a delay, however. It might be that the context effects are more pronounced in immediate recall.

[1673] Delaney, P. F., Sahakyan L., Kelley C. M., & Zimmerman C. A. (2010).  Remembering to Forget. Psychological Science. 21(7), 1036 - 1042.

Following on from several studies showing that being reminded of a negative stereotype for your group (be it race or gender) affects your test performance, a new study shows it also impairs learning.

A number of studies have demonstrated that negative stereotypes (such as “women are bad at math”) can impair performance in tests. Now a new study shows that this effect extends to learning. The study involved learning to recognize target Chinese characters among sets of two or four. Women who were reminded of the negative stereotypes involving women's math and visual processing ability failed to improve at this search task, while women who were not reminded of the stereotype got faster with practice.

When participants were later asked to choose which of two colored squares, imprinted with irrelevant Chinese characters, was more saturated, those in the control group were slower to respond when one of the characters had been a target. However, those trained under stereotype threat showed no such effect, indicating that they had not learned to automatically attend to a target. It’s suggested that the women in the stereotype threat group tried too hard to overcome the negative stereotype, expending more effort but in an unproductive manner.

There are two problems here, it seems. The first is that people under stereotype threat have more invested in disproving the stereotype, and their efforts may be counterproductive. The second is that they are distracted by the stereotype (which uses up some of their precious working memory).

[1686] Rydell, R. J., Shiffrin R. M., Boucher K. L., Van Loo K., & Rydell M. T. (2010).  Stereotype threat prevents perceptual learning. Proceedings of the National Academy of Sciences.

Music may help you get in the mood for learning or intellectual work, but background music is likely to diminish your performance.

While studies have demonstrated that listening to music before doing a task can improve performance on that task, chiefly through its effect on mood, there has been little research into the effects of background music while doing the task. A new study had participants recall a list of 8 consonants in a specific order in the presence of five sound environments: quiet, liked music, disliked music, changing-state (a sequence of random digits such as "4, 7, 1, 6") and steady-state ("3, 3, 3"). The most accurate recall occurred when participants performed the task in the quiet and steady-state environments. The level of recall was similar for the changing-state and music backgrounds.

Mind you, this task (recall of random items in order) is probably particularly sensitive to the distracting effects of this sort of acoustical variation in the environment. Different tasks are likely to be differentially affected by background music, and I’d also suggest that the familiarity of the music, and possibly its predictability, also influence its impact. Personally, I am very aware of the effect of music on my concentration, and vary the music, or don’t play at all, depending on what I’m doing and my state of mind. I hope we’ll see more research into these variables.

[1683] Perham, N., & Vizard J. (2010).  Can preference for background music mediate the irrelevant sound effect? Applied Cognitive Psychology.


As well as during sleep, it now appears that restful periods while you are awake are also times when consolidation can occur.

It is now well established that memories are consolidated during sleep. A new study has found that restful periods while you are awake are also times when consolidation can occur. The imaging study revealed that during rest (when participants were allowed to think about anything), there was correlated activity between the hippocampus and part of the lateral occipital complex. This activity was associated with improved memory for the previous experience. Moreover, the degree of activity correlated with how well it was remembered. You can watch a 4 ½ minute video where the researchers explain their study at http://www.cell.com/neuron/abstract/S0896-6273%2810%2900006-1

Tambini, A., Ketz, N. & Davachi, L. 2010. Enhanced Brain Correlations during Rest Are Related to Memory for Recent Experiences. Neuron, 65 (2), 280-290.

Students given a 90-minute nap in the early afternoon, after a rigorous learning task, did markedly better at a later round of learning exercises, compared to those who remained awake throughout the day.

Following on from research showing that pulling an all-nighter decreases the ability to cram in new facts by nearly 40%, a study involving 39 young adults has found that those given a 90-minute nap in the early afternoon, after being subjected to a rigorous learning task, did markedly better at a later round of learning exercises, compared to those who remained awake throughout the day. The former group actually improved in their capacity to learn, while the latter became worse at learning. The findings reinforce the hypothesis that sleep is needed to clear the brain's short-term memory storage and make room for new information. Moreover, this refreshing of memory capacity was related to Stage 2 non-REM sleep (an intermediate stage between deep sleep and the REM dream stage).

The preliminary findings were presented February 21 at the annual meeting of the American Association for the Advancement of Science (AAAS) in San Diego, Calif.

Being reminded of multicultural experiences helps you become more creative in solving problems.

Three experiments involving students who had lived abroad and those who hadn't found that those who had experienced a different culture demonstrated greater creativity — but only when they first recalled a multicultural learning experience from their life abroad. Specifically, doing so (a) improved idea flexibility (e.g., the ability to solve problems in multiple ways), (b) increased awareness of underlying connections and associations, and (c) helped overcome functional fixedness. The study also demonstrated that it was learning about the underlying meaning or function of behaviors in the multicultural context that was particularly important for facilitating creativity.

[1622] Maddux, W. W., Adam H., & Galinsky A. D. (2010).  When in Rome ... Learn Why the Romans Do What They Do: How Multicultural Learning Experiences Facilitate Creativity. Personality and Social Psychology Bulletin. 731 - 741.

Full text is available free for a limited time at http://psp.sagepub.com/cgi/reprint/36/6/731

Signers reveal that more complex language helps you find a hidden object, providing more support for the theory that language shapes how we think and perceive.

Because Nicaraguan Sign Language is only about 35 years old, and still evolving rapidly, the language used by the younger generation is more complex than that used by the older generation. This enables researchers to compare the effects of language ability on other abilities. A recent study found that younger signers (in their 20s) performed better than older signers (in their 30s) on two spatial cognition tasks that involved finding a hidden object. The findings provide more support for the theory that language shapes how we think and perceive.

[1629] Pyers, J. E., Shusterman A., Senghas A., Spelke E. S., & Emmorey K. (2010).  Evidence from an emerging sign language reveals that language supports spatial cognition. Proceedings of the National Academy of Sciences. 107(27), 12116 - 12120.

A study of sight-reading ability in pianists confirms the importance of many hours of practice, but also suggests that working memory capacity makes a difference.

A new study challenges the popular theory that expertise is simply a product of thousands of hours of deliberate practice. Not that anyone is claiming that this practice isn’t necessary — but it may not be sufficient. A study looking at pianists’ ability to sight-read music reveals that working memory capacity helps sight-reading regardless of how much someone has practiced.

The study involved 57 volunteers who had played piano for an average of 18.6 years (range: one to 57 years). Their estimated hours of overall practice ranged from 260 to 31,096 (average: 5,806), and hours of sight-reading practice ranged from zero to 9,048 (average: 1,487). Statistical analysis revealed that although hours of practice was the most important factor, working memory capacity did, independently, account for a small but significant amount of the variance between individuals.
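For the statistically curious, the logic of such an analysis is easy to sketch: fit a model predicting sight-reading skill from practice hours alone, then add WMC and see how much the explained variance (R²) rises. The data below are randomly generated under assumed effect sizes, purely to illustrate the method — they are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 57

# Synthetic data under assumed effects: sight-reading skill depends
# mostly on practice hours, plus a small independent WMC contribution.
practice = rng.uniform(260, 31_096, n)     # hours, range as in the study
wmc = rng.normal(0, 1, n)                  # working memory capacity (z-scored)
skill = 0.8 * practice / practice.std() + 0.3 * wmc + rng.normal(0, 0.5, n)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    X = np.column_stack([np.ones(len(y)), X])          # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_practice = r_squared(practice[:, None], skill)
r2_both = r_squared(np.column_stack([practice, wmc]), skill)
print(f"R2, practice only:  {r2_practice:.3f}")
print(f"R2, practice + WMC: {r2_both:.3f}")
print(f"Incremental variance explained by WMC: {r2_both - r2_practice:.3f}")
```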

It is interesting that not only did WMC have an effect independent of hours of practice, but hours of practice apparently had no effect on WMC — although the study was too small to tell whether a lot of practice at an early age might have affected WMC (previous research has indicated that music training can increase IQ in children).

The study is also too small to properly judge the effects of the 10,000 hours of deliberate practice claimed necessary for expertise: the researchers did not report how many participants were at that level, but the numbers suggest it was low.

It should also be noted that an earlier study involving 52 accomplished pianists found no effect of WMC on sight-reading ability (but did find a related effect: the ability to tap two fingers rapidly in alternation and to press a computer key quickly in response to visual and acoustic cues was unrelated to practice but correlated positively with sight-reading skill).

Nevertheless, the findings are interesting, and do agree with what I imagine is the ‘commonsense’ view: yes, becoming an expert is all about the hours of effective practice you put in, but there are intellectual qualities that also matter. The question is: do they matter once you’ve put in the requisite hours of good practice?

Finally a definitive review making clear the limits of the Mozart effect (namely that it's a very small effect when it occurs, and it only occurs in very specific circumstances).

Some years ago I wrote an article discussing the fact that the so-called Mozart effect has proved very hard to replicate since its ‘discovery’ in 1993, but now we have what is regarded as a definitive review, analyzing the entirety of the scientific record on the topic (including a number of unpublished academic theses), and the finding is very clear: there is little support for the view that listening to Mozart improves cognitive (specifically spatial) abilities. First of all, in those studies showing an effect, it was very small. The size of the effect of the specific Mozart sonata used in the original study (Sonata KV 448) compared to no stimulus was similar in size to the effect of any music compared to no stimulus. There was a small significant effect for the Mozart sonata when directly compared to other music, which probably reflects the fact that the types of music used in different studies varied widely. Some types of music are doubtless less arousing than others.

There was also a large difference in the results from laboratories affiliated to Rauscher (the original researcher) or Rideout compared to other laboratories. Rauscher and Shaw (1998) did in fact emphasize that the effect required exact replication of their original study design.

I have to say that if this (small and very specific) effect depends so heavily on getting the procedural details exactly right, it’s of little practical use. I think the main lesson we can learn from all this is that your emotional state affects cognition (a well-established effect), and that you may find some types of music are best for ‘getting you in the mood’ for mental work.
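For what it's worth, the averaging at the heart of such a review is simple to sketch: each study's effect size is weighted by the inverse of its sampling variance, so larger and more precise studies count for more. The numbers below are invented for illustration:

```python
import numpy as np

# A minimal fixed-effect meta-analysis: pool per-study effect sizes,
# weighting each by the inverse of its sampling variance.
# Effect sizes and variances are invented, for illustration only.
effects = np.array([0.30, 0.10, -0.05, 0.20])    # per-study effects (SD units)
variances = np.array([0.02, 0.05, 0.04, 0.01])   # per-study sampling variances

weights = 1 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
se = np.sqrt(1 / np.sum(weights))
print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f} (95% CI)")
```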

[1587] Pietschnig, J., Voracek M., & Formann A. K. (2010).  Mozart effect-Shmozart effect: A meta-analysis. Intelligence. 38(3), 314 - 323.

Love this one! A series of experiments with college students has revealed that a glowing, bare light bulb can improve your chances of solving an insight problem.

In one experiment, 79 students were given a spatial problem to solve. Before they started, the experimenter, remarking “It’s a little dark in here”, either turned on a lamp with an unshaded 25-watt bulb or an overhead fluorescent light. Twice as many of those exposed to the bare bulb solved the problem in the allotted three minutes (44% vs 22%). In another experiment, 69 students were given four math problems, one of which required insight to solve. Again, those exposed to the lit bulb solved the insight problem more often — but there was no difference on the other problems. A third experiment extended the finding to word problems, and in the fourth, a comparison of the unshaded 25-watt bulb with a shaded 40-watt bulb revealed that the bare, less powerful, bulb was more effective. Isn’t it wonderful that the physical representation of the icon for a bright idea should help us have bright ideas?

[305] Slepian, M. L., Weisbuch M., Rutchick A. M., Newman L. S., & Ambady N. (2010).  Shedding light on insight: Priming bright ideas. Journal of Experimental Social Psychology. 46(4), 696 - 700.

A mouse study has found that working memory training improved the mice’s proficiency on a wide range of cognitive tests, and helped them better retain their cognitive abilities into old age.

A study in which 60 young adult mice were trained on a series of maze exercises designed to challenge and improve their working memory ability (in terms of retaining and using current spatial information), has found that the mice improved their proficiency on a wide range of cognitive tests, and moreover better retained their cognitive abilities into old age.

An intriguing set of experiments has shown how you can improve vision by manipulating mindset.

An intriguing set of experiments showing how you can improve perception by manipulating mindset found significantly improved vision when:

  • an eye chart was arranged in reverse order (the letters getting progressively larger rather than smaller);
  • participants were given eye exercises and told their eyes would improve with practice;
  • participants were told athletes have better vision, and then performed jumping jacks, as opposed to skipping (seen as less athletic);
  • participants flew a flight simulator, compared to pretending to fly a supposedly broken simulator (pilots are believed to have good vision).

[158] Langer, E., Djikic M., Pirson M., Madenci A., & Donohue R. (2010).  Believing Is Seeing. Psychological Science. 21(5), 661 - 666.

The recent report splashed all over the press that supposedly found playing online brain games makes you no smarter than surfing the Internet demonstrated no more than we already know: that transfer beyond the specific tasks you practise is very rare, and that well-educated people who are not deprived of mental stimulation and have no health or disability problems are not the people likely to be helped by such games.

A six-week study got a lot of press last month. The study involved some 11,000 viewers of the BBC's science show "Bang Goes the Theory", and supposedly showed that playing online brain games makes you no smarter than surfing the Internet to answer general knowledge questions. In fact, the main problem was the media coverage. The researchers acknowledged that previous research has found some types of individuals benefit from such games (older adults, preschool children, and I would add, children with some learning disabilities such as ADHD), and that video gamers show improved skills in some areas. What they found was that, across this general, mostly well-educated group, the amount of training on these tasks didn't improve performance beyond those specific tasks. This is neither a surprise, nor news. I'll talk more about this in the newsletter coming out early next month.

Older news items (pre-2010) brought over from the old website

Blind people are 'serial memory' whizzes

In a demonstration of the benefits of mental training, a study tested the memory of 19 congenitally blind individuals and individually matched sighted controls. Those who were blind recalled more words than the sighted, but their greatest superiority was the ability to remember longer word sequences according to their original order. This is probably a result of blind people’s everyday reliance on serial-memory strategies to identify otherwise indistinguishable objects. The finding that the blind showed a better memory for all of the words regardless of where they fell (rather than the first and last word advantage more typically found) suggests that the key to their success may lie in representing item lists as word chains, perhaps by generating associations between adjacent items.
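The word-chain idea is easy to picture as a data structure: store each item's successor, and recalling the list means following the links from the first word. A minimal sketch (the word list is arbitrary):

```python
# Serial recall via chaining: each word points to the next, so the
# whole list can be reconstructed from the first item.
words = ["candle", "river", "basket", "mirror", "stone"]   # arbitrary example

chain = {a: b for a, b in zip(words, words[1:])}

recalled = [words[0]]
while recalled[-1] in chain:
    recalled.append(chain[recalled[-1]])
print(recalled)   # the full list, in its original order
```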

[1321] Raz, N., Striem E., Pundak G., Orlov T., & Zohary E. (2007).  Superior Serial Memory in the Blind: A Case of Cognitive Compensatory Adjustment. Current Biology. 17(13), 1129 - 1133.

http://www.eurekalert.org/pub_releases/2007-06/cp-bpa061407.php

Brain Imaging Identifies Best Memorization Strategies

Why do some people remember things better than others? An imaging study has revealed that the brain regions activated when learning vary depending on the strategy adopted. The study involved 29 right-handed, healthy young adults, ages 18-31, all of whom had normal or corrected-to-normal vision and reported no significant neurological history. Participants were given images of interacting object pairs (such as a turkey seated atop a horse, or a banana positioned in the back of a dump truck) and told to study them in anticipation of a memory test. Earlier studies had indicated that, while individuals use a variety of strategies to help them memorize new information, the following four are the main ones:

1) A visual inspection strategy in which participants carefully studied the visual appearance of objects.

2) A verbal elaboration strategy in which individuals constructed sentences about the objects to remember them.

3) A mental imagery strategy in which participants formed interactive mental images of the objects.

4) A memory retrieval strategy in which they thought about the meaning of the objects and/or personal memories associated with the objects.

Both visual inspection and verbal elaboration resulted in improved recall. Imaging revealed that people who often used verbal elaboration had greater activity in a network of regions that included prefrontal regions associated with controlled verbal processing compared to people who used this strategy less frequently. People who often used a visual inspection strategy had greater activity in a network of regions that included an extrastriate region associated with object processing compared to people who used this strategy less frequently.

[1026] Kirchhoff, B. A., & Buckner R. L. (2006).  Functional-Anatomic Correlates of Individual Differences in Memory. Neuron. 51(2), 263 - 274.

http://www.sciencedaily.com/releases/2006/08/060809082610.htm

Strategies for seniors

Latest news

A six-week specific language therapy program not only improved the ability of people with chronic aphasia to name objects, but produced durable changes in brain activity that continued to bring benefits post-training.

Here’s an encouraging study for all those who think that, because of age or physical damage, they must resign themselves to whatever cognitive impairment or decline they have suffered. In this study, older adults who had suffered from aphasia for a long time nevertheless improved their language function after six weeks of intensive training.

The study involved nine seniors with chronic aphasia and 10 age-matched controls. Those with aphasia were given six weeks of intensive and specific language therapy, after which they showed significantly better performance at naming objects. Brain scans revealed that the training had not only stimulated language circuits, but also integrated the default mode network (the circuits used when our brain is in its ‘resting state’ — i.e., not thinking about anything in particular), producing brain activity that was similar to that of the healthy controls.

Moreover, these new circuits continued to be active after training, with participants continuing to improve.

Previous research has implicated abnormal functioning of the default mode network in other cognitive disorders.

Although it didn’t reach significance, there was a trend suggesting that the level of integration of the default mode network prior to therapy predicted the outcome of the training.

The findings are especially relevant to the many seniors who no longer receive treatment for stroke damage they may have had for many years. They also add to the growing evidence for the importance of the default mode network. Changes in the integration of the default mode network with other circuits have also been implicated in age-related cognitive decline and Alzheimer’s.

Interestingly, some research suggests that meditation may help improve the coherence of brainwaves that overlap the default mode network. Meditation, already shown to be helpful for improving concentration and focus, may be of greater benefit for fighting age-related cognitive decline than we realize!

A pilot study suggests declines in temporal processing are an important part of age-related cognitive decline, and shows how temporal training can significantly improve some cognitive abilities.

Here’s an exciting little study, implying as it does that one particular aspect of information processing underlies much of the cognitive decline in older adults, and that this can be improved through training. No, it’s not our usual suspect, working memory, it’s something far less obvious: temporal processing.

In the study, 30 older adults (aged 65-75) were randomly assigned to three groups: one that received ‘temporal training’, one that practiced common computer games (such as Solitaire and Mahjong), and a no-activity control. Temporal training was provided by a trademarked program called Fast ForWord Language® (FFW), which was developed to help children who have trouble reading, writing, and learning.

The training, for both training groups, occupied an hour a day, four days a week, for eight weeks.

Cognitive assessment, carried out at the beginning and end of the study, and for the temporal training group again 18 months later, included tests of sequencing abilities (how quickly two sounds could be presented and still be accurately assessed for pitch or direction), attention (vigilance, divided attention, and alertness), and short-term memory (working memory span, pattern recognition, and pattern matching).
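The paper doesn't spell out the psychophysical procedure behind the sequencing tests, but thresholds of the form "how quickly can two sounds be presented and still be ordered correctly" are typically estimated with an adaptive staircase. Purely as an illustration (the step sizes, starting gap, and the simulated listener below are all invented, not taken from the study), here is a minimal sketch of a two-down/one-up staircase homing in on such a temporal-order threshold:

```python
import random

def simulated_trial(isi_ms):
    """Stand-in for a real trial: play two sounds separated by isi_ms and
    return True if the listener reports their order correctly. This fake
    listener's accuracy simply improves as the gap gets longer."""
    p_correct = min(0.98, 0.5 + isi_ms / 200.0)  # invented psychometric function
    return random.random() < p_correct

def temporal_order_threshold(start_ms=120.0, step_ms=10.0, reversals_needed=8):
    """Two-down/one-up staircase: shorten the inter-stimulus interval (ISI)
    after two consecutive correct responses, lengthen it after any error."""
    isi, run, reversals, last_direction = start_ms, 0, [], None
    while len(reversals) < reversals_needed:
        if simulated_trial(isi):
            run += 1
            if run < 2:
                continue              # need two in a row before stepping down
            run, direction = 0, "down"
            isi = max(5.0, isi - step_ms)
        else:
            run, direction = 0, "up"
            isi += step_ms
        if last_direction and direction != last_direction:
            reversals.append(isi)     # record the ISI at each turnaround
        last_direction = direction
    return sum(reversals) / len(reversals)

print(f"Estimated temporal-order threshold: {temporal_order_threshold():.0f} ms")
```

The mean of the inter-stimulus intervals at the turnaround points serves as the threshold estimate; the two-down/one-up rule converges on the gap yielding roughly 71% correct.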

Only in the temporal training group did performance on any of the cognitive tests significantly improve after training — on the sequencing tests, divided attention, matching complex patterns, and working memory span. These positive effects still remained after 18 months (vigilance was also higher at the end of training, but this improvement wasn’t maintained).

This is, of course, only a small pilot study. I hope we will see a larger study, and one that compares this form of training against other computer training programs. It would also be good to see some broader cognitive tests — ones that are less connected to the temporal training. But I imagine that, as I’ve discussed before, an effective training program will include more than one type of training. This may well be an important component of such a program.

[3075] Szelag, E., & Skolimowska J. (2012).  Cognitive function in elderly can be ameliorated by training in temporal information processing. Restorative Neurology and Neuroscience. 30(5), 419 - 434.

Two recent conference presentations add to the evidence for the benefits of ‘brain training’, and of mental stimulation, for holding back age-related cognitive decline.

My recent reports on brain training for older adults (see, e.g., Review of working memory training programs finds no broader benefit; Cognitive training shown to help healthy older adults; Video game training benefits cognition in some older adults) converge on the idea that cognitive training can indeed be beneficial for older adults’ cognition, but there’s little wider transfer beyond the skills being practiced. That in itself can be valuable, but it does reinforce the idea that the best cognitive training covers a number of different domains or skill-sets. A new study adds little to this evidence, but does perhaps emphasize the importance of persistence and regularity in training.

The study involved 59 older adults (average age 84), of whom 33 used a brain fitness program 5 days a week for 30 minutes a day for at least 8 weeks, while the other group of 26 were put on a waiting list for the program. After two months, both groups were given access to the program, and both were encouraged to use it as much or as little as they wanted. Cognitive testing occurred before the program started, at two months, and at six months.

The first group used the program for an average of 80 sessions, compared to an average of 44 sessions for the wait-list group.

The higher-use group showed significantly higher cognitive scores (delayed memory test; Boston Naming test) at both two and six months, while the lower-use (and later-starting) group showed improvement at the end of the six-month period, but not as much as the higher-use group.

I’m afraid I don’t have any more details (some details of the training program would be nice) because it was a conference presentation, so I only have access to the press release and the abstract. Because we don’t know exactly what the training entailed, we don’t know the extent to which it practiced the same skills that were tested. But we may at least add it to the evidence that you can improve cognitive skills by regular training, and that the length/amount of training (and perhaps regularity, since the average number of sessions for the wait-list group implies an average engagement of some three times a week, while the high-use group seem to have maintained their five-times-a-week habit) matters.

Another interesting presentation at the conference was an investigation into mental stimulating activities and brain activity in older adults.

In this study, 151 older adults (average age 82) from the Rush Memory and Aging Project answered questions about present and past cognitive activities, before undergoing brain scans. The questions concerned how frequently they engaged in mentally stimulating activities (such as reading books, writing letters, visiting a library, playing games) and the availability of cognitive resources (such as books, dictionaries, encyclopedias) in their home, during their lifetime (specifically, at ages 6, 12, 18, 40, and now).

Higher levels of cognitive activity and cognitive resources were also associated with better cognitive performance. Moreover, after controlling for education and total brain size, it was found that frequent cognitive activity in late life was associated with greater functional connectivity between the posterior cingulate cortex and several other regions (right orbital and middle frontal gyrus, left inferior frontal gyrus, hippocampus, right cerebellum, left inferior parietal cortex). More cognitive resources throughout life were associated with greater functional connectivity between the posterior cingulate cortex and several other regions (left superior occipital gyrus, left precuneus, left cuneus, right anterior cingulate, right middle frontal gyrus, and left inferior frontal gyrus).

Previous research has implicated a decline in connectivity with the posterior cingulate cortex in mild cognitive impairment and Alzheimer’s disease.

Cognitive activity earlier in life was not associated with differences in connectivity.

The findings provide further support for the idea “Use it or lose it!”, and suggest that mental activity protects against cognitive decline by maintaining functional connectivity in important neural networks.

Miller, K.J. et al. 2012. Memory Improves With Extended Use of Computerized Brain Fitness Program Among Older Adults. Presented August 3 at the 2012 convention of the American Psychological Association.

Han, S.D. et al. 2012. Cognitive Activity and Resources Are Associated With PCC Functional Connectivity in Older Adults. Presented August 3 at the 2012 convention of the American Psychological Association.

More evidence that learning a musical instrument in childhood, even for a few years, has long-lasting benefits for auditory processing.

Adding to the growing evidence for the long-term cognitive benefits of childhood music training, a new study has found that even a few years of music training in childhood has long-lasting benefits for auditory discrimination.

The study involved 45 adults (aged 18-31), of whom 15 had no music training, 15 had one to five years of training, and 15 had six to eleven years. Participants were presented with different complex sounds ranging in pitch while brainstem activity was monitored.

Brainstem response to the sounds was significantly stronger in those with any sort of music training, compared to those who had never had any music training. This was a categorical difference — years of training didn’t make a difference (although some minimal length may be required — only one person had only one year of training). However, recency of training did make a difference to brainstem response, and it does seem that some fading might occur over long periods of time.

This difference in brainstem response means that those with music training are better at recognizing the fundamental frequency (the lowest frequency component of a complex sound). This would explain why music training may help protect older adults from hearing difficulties — the ability to pick out the fundamental frequency is crucial for understanding speech, and for processing sound in noisy environments.
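To make "fundamental frequency" concrete: it can be recovered even when higher harmonics are stronger, which is essentially what a well-tuned brainstem response is doing. As a purely illustrative sketch (nothing here comes from the study), one standard way to estimate it in code is via autocorrelation:

```python
import math

def estimate_f0(samples, rate, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency by autocorrelation: the lag at
    which the signal best matches a shifted copy of itself corresponds to
    one period of the fundamental."""
    lo, hi = int(rate / fmax), int(rate / fmin)
    def autocorr(lag):
        return sum(samples[i] * samples[i + lag] for i in range(len(samples) - lag))
    best_lag = max(range(lo, hi + 1), key=autocorr)
    return rate / best_lag

# A 150 Hz fundamental plus a louder 300 Hz harmonic (all values invented):
rate = 8000
signal = [0.5 * math.sin(2 * math.pi * 150 * n / rate)
          + 1.0 * math.sin(2 * math.pi * 300 * n / rate)
          for n in range(1024)]
print(f"estimated fundamental: {estimate_f0(signal, rate):.1f} Hz")
```

Note how the estimate lands on roughly 150 Hz even though the 300 Hz harmonic has twice the amplitude; the signal still repeats at the period of the fundamental.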

[3074] Skoe, E., & Kraus N. (2012).  A Little Goes a Long Way: How the Adult Brain Is Shaped by Musical Training in Childhood. The Journal of Neuroscience. 32(34), 11507 - 11510.

A comparison of the effects of regular sessions of tai chi, walking, and social discussion, has found tai chi was associated with the biggest gains in brain volume and improved cognition.

The study involved 120 healthy older adults (60-79) from Shanghai, who were randomly assigned to one of four groups: one that participated in three sessions of tai chi every week for 40 weeks; another that instead had ‘social interaction’ sessions (‘lively discussions’); another in which participants engaged in walking around a track; and a non-intervention group included as a control. Brain scans were taken before and after the 40-week intervention, and cognitive testing took place at 20 weeks as well as these times.

Compared to those who received no intervention, both those who participated in tai chi and those who participated in the social sessions showed significant increases in brain volume and on some cognitive measures. However, the tai chi group improved on more cognitive tests than the social group: on the Mattis Dementia Rating Scale, the Trailmaking Tests, delayed recognition on the Auditory Verbal Learning Test, and verbal fluency for animals, whereas the social group improved only on verbal fluency, with positive trends on Trails A and the Auditory Verbal Learning Test.

Surprisingly, there were no such significant effects from the walking intervention, which involved 30 minutes of brisk walking around a 400m circular track, sandwiched by 10 minutes of warm-up and 10 minutes cool-down exercises. This took place in the same park as the tai chi sessions (which similarly included 20 minutes of warm-up exercises, 20 minutes of tai chi, and 10 minutes of cool-down exercises).

This finding is inconsistent with other research, but the answer seems to lie in individual differences — specifically, speed of walking. Faster walkers showed significantly better performance on the Stroop test, and on delayed recall and recognition on the Auditory Verbal Learning Test. It should be noted that, unlike some studies in which participants were encouraged to reach heart-rate targets, participants in this study were simply told to walk at their own speed. This finding, then, would seem to support the view that brisk walking is needed to reap good health and cognitive benefits (which shouldn’t put anyone off — anything is better than nothing! and speed is likely to come with practice, if that’s your aim).

It should also be noted that this population has generally high rates of walking. It is likely, then, that the additional walking in these sessions did not add a great deal to their existing behavior.

There is a caveat to the strongly positive effects of tai chi: this group showed lower cognitive performance at baseline. This was because the group randomly received more individuals with very low scores (8 compared with 5 in the other groups).

The study is, of course, quite a small one, and a larger study is required to confirm these results.

One final note: the relative differences in enjoyment were not explicitly investigated, but the researchers did note that the social group, who initially were given topics to discuss in their hour-long sessions, then decided to select and organize their own discussions, and have continued to do so for two years following the end of the study. It would have been nice if the researchers had re-tested participants at that point.

Mortimer, J.A. et al. 2012. Changes in Brain Volume and Cognition in a Randomized Trial of Exercise and Social Interaction in a Community-Based Sample of Non-Demented Chinese Elders. Journal of Alzheimer's Disease, 30 (4), 757-766.
Full text available at http://health.usf.edu/nocms/publicaffairs/now/pdfs/JAD_Mortimer_30%28201...

Greater cognitive activity doesn’t appear to prevent Alzheimer’s brain damage, but is associated with more neurons in the prefrontal lobe, as well as other gender-specific benefits.

Data from the very large and long-running Cognitive Function and Ageing Study, a U.K. study involving 13,004 older adults (65+), from which 329 brains are now available for analysis, has found that cognitive lifestyle score (CLS) had no effect on Alzheimer’s pathology. Characteristics typical of Alzheimer’s, such as plaques, neurofibrillary tangles, and hippocampal atrophy, were similar in all CLS groups.

However, while cognitive lifestyle may have no effect on the development of Alzheimer's pathology, that is not to say it has no effect on the brain. In men, an active cognitive lifestyle was associated with less microvascular disease. In particular, the high CLS group showed an 80% relative reduction in deep white matter lesions. These associations remained after taking into account cardiovascular risk factors and APOE status.

This association was not found in women. However, women in the high CLS group tended to have greater brain weight.

In both genders, high CLS was associated with greater neuronal density and cortical thickness in Brodmann area 9 in the prefrontal lobe (but not, interestingly, in the hippocampus).

Cognitive lifestyle score is produced from years of education, occupational complexity coded according to social class and socioeconomic grouping, and social engagement based on frequency of contact with relatives, neighbors, and social events.
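The study's exact scoring formula isn't reproduced here, but composites of this kind are commonly built by standardizing each component and summing, with participants then grouped into low/medium/high bands. Just to make the idea concrete (all the raw numbers below are invented), a minimal sketch:

```python
from statistics import mean, stdev

def z_scores(values):
    """Standardize raw scores to mean 0, SD 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Invented raw components for five hypothetical people:
education  = [9, 12, 10, 16, 11]    # years of education
occupation = [2, 4, 3, 5, 3]        # occupational complexity rating
social     = [1, 3, 2, 4, 2]        # social engagement frequency

# One common recipe: sum the standardized components per person.
cls = [sum(parts) for parts in
       zip(*(z_scores(c) for c in (education, occupation, social)))]
print("CLS per person:", [round(score, 2) for score in cls])
```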

The findings provide more support for the ‘cognitive reserve’ theory, and shed some light on the mechanism, which appears to be rather different than we imagined. It may be that the changes in the prefrontal lobe (that we expected to see in the hippocampus) are a sign that greater cognitive activity helps you develop compensatory networks, rather than building up established ones. This would be consistent with research suggesting that older adults who maintain their cognitive fitness do so by developing new strategies that involve different regions, compensating for failing regions.

A comparison of multi-domain and single-domain cognitive training shows both improve cognitive performance in healthy older adults, but multi-domain training produces greater benefits.

Previous research has been equivocal about whether cognitive training helps cognitively healthy older adults. One recent review concluded that cognitive training could help slow age-related decline in a range of cognitive tasks; another found no evidence that such training helps slow or prevent the development of Alzheimer’s in healthy older adults. Most of the studies reviewed looked at single-domain training only: memory, reasoning, processing speed, reading, solving arithmetic problems, or strategy training. As we know from other studies, training in specific tasks is undeniably helpful for improving your performance at those specific tasks. However, there is little evidence for wider transfer. There have been few studies employing multi-domain training, although two such studies have found positive benefits.

In a new Chinese study, 270 healthy older adults (65-75) were randomly assigned to one of three groups. In the two experimental groups, participants were given one-hour training sessions twice a week for 12 weeks. Training took place in small groups of around 15. The first 15 minutes of each hour involved a lecture focusing on diseases common in older adults. The next 30 minutes were spent in instruction in one specific technique and how to use it in real life. The last 15 minutes were used to consolidate the skills by solving real-life problems.

One group were trained using a multi-domain approach, involving memory, reasoning, problem solving, map reading, handicrafts, health education and exercise. The other group trained on reasoning only (involving the Tower of Hanoi, numerical reasoning, Raven's Progressive Matrices, and verbal reasoning). Homework was assigned. Six months after training, three booster sessions (a month apart) were offered to 60% of the participants. The third group (the control) was put on a waiting list. All three groups attended a lecture on aspects of healthy living every two months.

All participants were given cognitive tests before training and after training, and again after 6 months, and after one year. Cognitive function was assessed using the Stroop Test, the Trail Making test, the Visual Reasoning test, and the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS, Form A).

Both the multi-domain and single-domain cognitive training produced significant improvement in cognitive scores (the former in RBANS, visual reasoning, and immediate and delayed memory; the latter in RBANS, visual reasoning, word interference, and visuospatial/constructional score), although single-domain training produced less durable benefits (after a year, the multi-domain group still showed the benefit in RBANS, delayed memory and visual reasoning, while the single-domain group only showed benefits in word interference). Booster training also produced benefits, consolidating training in reasoning, visuospatial/constructional abilities and faster processing.

Reasoning ability seemed particularly responsive to training. Although it would be reasonable to assume that single-domain training, which focused on reasoning, would produce greater improvement than multi-domain training in this specific area, there was in fact no difference between the two groups right after training or at six months. And at 12 months, the multi-domain group was clearly superior.

In sum, the study provides evidence that cognitive training helps prevent cognitive decline in healthy older people, that specific training can generalize to other tasks, but that programs that involve several cognitive domains produce more lasting benefits.

A study has found that playing a cognitively complex video game improved cognitive performance in some older adults, particularly those with initially poorer cognitive scores.

A number of studies have found evidence that older adults can benefit from cognitive training. However, neural plasticity is thought to decline with age, and because of this, it’s thought that the younger-old, and/or the higher-functioning, may benefit more than the older-old, or the lower-functioning. On the other hand, because their performance may already be as good as it can be, higher-functioning seniors may be less likely to benefit. You can find evidence for both of these views.

In a new study, 19 of 39 older adults (aged 60-77) were given training in a multiplayer online video game called World of Warcraft (the other 20 formed a control group). This game was chosen because it involves multitasking and switching between various cognitive abilities. It was theorized that the demands of the game would improve both spatial orientation and attentional control, and that the multiple tasks might produce more improvement in those with lower initial ability compared to those with higher ability.

WoW participants were given a 2-hour training session, involving a 1-hour lecture and demonstration, and one hour of practice. They were then expected to play the game at home for around 14 hours over the next two weeks. There was no intervention for the control group. All participants were given several cognitive tests at the beginning and end of the two week period: Mental Rotation Test; Stroop Test; Object Perspective Test; Progressive Matrices; Shipley Vocabulary Test; Everyday Cognition Battery; Digit Symbol Substitution Test.

As a group, the WoW group improved significantly more on the Stroop test (a measure of attentional control) compared to the control group. There was no change in the other tests. However, those in the WoW group who had performed more poorly on the Object Perspective Test (measuring spatial orientation) improved significantly. Similarly, on the Mental Rotation Test, ECB, and Progressive Matrices, those who performed more poorly at the beginning tended to improve after two weeks of training. There was no change on the Digit Symbol test.

The finding that only those whose performance was initially poor benefited from cognitive training is consistent with other studies suggesting that training only benefits those who are operating below par. This is not really surprising, but there are a few points that should be made.

First of all, it should be noted that this was a group of relatively high-functioning young-old adults — poorer performance in this case could be (relatively) better performance in another context. What it comes down to is whether you are operating at a level below what you are capable of — and this applies broadly; for example, experiments show that spatial training benefits females but not males (because males tend to have already practiced enough).

Given that, in expertise research, training has an ongoing, apparently limitless, effect on performance, it seems likely that the limited benefits shown in this and other studies are due to the extremely limited scope of the training. Fourteen hours is not enough to improve people who are already performing adequately — but that doesn’t mean that they wouldn’t improve with more hours. I have yet to see any interventions with older adults that give them the amount of cognitive training you would expect them to need to achieve some level of mastery.

My third and final point is the specific nature of the improvements. This has also been shown in other studies, and sometimes appears quite arbitrary — for example, one 3-D puzzle game apparently improved mental rotation, while a different 3-D puzzle game had no effect. The point being that we still don’t understand the precise attributes needed to improve different skills (although the researchers advocate the use of a tool called cognitive task analysis for revealing the underlying qualities of an activity) — but we do understand that it is a matter of precise attributes, which is definitely a step in the right direction.

The main thing, then, that you should take away from this is the idea that different activities involve specific cognitive tasks, and these, and only these, will be the ones that benefit from practicing the activities. You therefore need to think about what tasks you want to improve before deciding on the activities to practice.

A program designed to improve reasoning ability in older adults also increased their openness to new experiences.

Openness to experience – being flexible and creative, embracing new ideas and taking on challenging intellectual or cultural pursuits – is one of the ‘Big 5’ personality traits. Unlike the other four, it shows some correlation with cognitive abilities. And, like them, openness to experience does tend to decline with age.

However, while there have been many attempts to improve cognitive function in older adults, to date no one has tried to increase openness to experience. Naturally enough, one might think — it’s a personality trait, and we are not inclined to view personality traits as amenable to ‘training’. However, recently there have been some indications that personality traits can be changed, through cognitive interventions or drug treatments. In this new study, a cognitive training program for older adults also produced increases in their openness to experience.

The study involved 183 older adults (aged 60-94; average age 73), who were randomly assigned to a 16-week training program or a waiting-list control group. The program included training in inductive reasoning, and puzzles that relied in part on inductive reasoning. Most of this activity was carried out at home, but there were two 1-hour classroom sessions: one to introduce the inductive reasoning training, and one to discuss strategies for Sudoku and crosswords.

Participants came to the lab each week to hand in materials and pick up the next set. Initially, they were given crossword and Sudoku puzzles with a wide range of difficulty. Subsequently, puzzle sets were matched to each participant’s skill level (assessed from the previous week’s performance). Over the training period, the puzzles became progressively more difficult, with the steps tailored to each individual.
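The paper doesn't give the precise stepping rule, but the general logic of matching next week's difficulty to the previous week's performance takes only a few lines. A hypothetical sketch (the accuracy bands and level scale are my invention, not the study's):

```python
def next_difficulty(current_level, accuracy, max_level=10):
    """Choose next week's puzzle difficulty from this week's accuracy.
    A simple banded rule: step up when the participant is comfortable,
    hold when suitably challenged, step back when struggling."""
    if accuracy >= 0.80:
        return min(max_level, current_level + 1)   # mastered: harder puzzles
    if accuracy >= 0.50:
        return current_level                       # challenged: hold steady
    return max(1, current_level - 1)               # struggling: ease off

# Example: a participant scoring 85%, then 60%, then 40% across three weeks.
level = 3
for weekly_accuracy in (0.85, 0.60, 0.40):
    level = next_difficulty(level, weekly_accuracy)
    print(f"accuracy {weekly_accuracy:.0%} -> next week's level: {level}")
```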

The inductive reasoning training involved learning to recognize novel patterns and use them to solve problems. ‘Basic series problems’ required inference from a serial pattern of words, letters, or numbers; ‘everyday serial problems’ included tasks such as completing a mail order form and answering questions about a bus schedule. Again, the difficulty of the problems increased steadily over the training period.

Participants were asked to spend at least 10 hours a week on program activities, and according to the daily logs they filled in, they spent an average of 11.4 hours a week. In addition to the hopefully inherent enjoyment of the activities, those who recorded 10 hours were recognized on a bulletin board tally sheet and entered into a raffle for a prize.

Cognitive and personality testing took place 4-5 weeks prior to the program starting, and 4-5 weeks after program end. Two smaller assessments also took place during the program, at week 6 and week 12.

At the end of the program, those who had participated had significantly improved their pattern-recognition and problem-solving skills. This improvement went along with a moderate but significant increase in openness. Analysis suggested that this increase in openness occurred independently of improvement in inductive reasoning.

The benefits were specific to inductive reasoning and openness, with no significant effects on divergent thinking, processing speed, verbal ability, or the other Big 5 traits.

The researchers suggest that the carefully stepped training program was important in leading to increased openness, allowing participants to build growing confidence in their reasoning abilities. Openness to experience contributes to engagement and enjoyment in stimulating activity, and has also been linked to better health and decreased mortality risk. It seems likely, then, that increases in openness can be part of a positive feedback cycle, leading to greater and more sustained engagement in mentally stimulating activities.

The corollary is that decreases in openness may lead to declines in cognitive engagement, and then to poorer cognitive function. Indeed it has been previously suggested that openness to experience plays a role in cognitive aging.

Clearly, more research is needed to tease out how far these findings extend to other activities, and the importance of scaffolding (carefully designing cognitive activities on an individualized basis to support learning), but this work reveals an overlooked aspect to the issue of mental stimulation for preventing age-related cognitive decline.

More evidence that music training protects older adults from age-related impairment in understanding speech, adding to the potential benefits of music training in preventing dementia.

I’ve spoken before about the association between hearing loss in old age and dementia risk. Although we don’t currently understand that association, it may be that preventing hearing loss also helps prevent cognitive decline and dementia. I have previously reported on how music training in childhood can help older adults’ ability to hear speech in a noisy environment. A new study adds to this evidence.

The study looked at a specific aspect of understanding speech: auditory brainstem timing. Aging disrupts this timing, degrading the ability to precisely encode sound.

In this study, automatic brain responses to speech sounds were measured in 87 younger and older normal-hearing adults as they watched a captioned video. It was found that older adults who had begun musical training before age 9 and engaged consistently in musical activities through their lives (“musicians”) not only significantly outperformed older adults who had no more than three years of musical training (“non-musicians”), but encoded the sounds as quickly and accurately as the younger non-musicians.

The researchers qualify this finding by saying that it shows only that musical experience selectively affects the timing of sound elements that are important in distinguishing one consonant from another, not necessarily all sound elements. However, it seems probable that it extends more widely, and in any case the ability to understand speech is crucial to social interaction, which may well underlie at least part of the association between hearing loss and dementia.

The burning question for many will be whether the benefits of music training can be accrued later in life. We will have to wait for more research to answer that, but, as music training and enjoyment fit the definition of ‘mentally stimulating activities’, this certainly adds another reason to pursue such a course.

An intriguing pilot study finds that regular exercise on a stationary bike enhanced with a computer game-type environment improves executive function in older adults more than ordinary exercise on a stationary bike.

We know that physical exercise greatly helps you prevent cognitive decline with aging. We know that mental stimulation also helps you prevent age-related cognitive decline. So it was only a matter of time before someone came up with a way of combining the two. A new study found that older adults improved executive function more by participating in virtual reality-enhanced exercise ("exergames") that combine physical exercise with computer-simulated environments and interactive videogame features, compared to the same exercise without the enhancements.

The Cybercycle Study involved 79 older adults (aged 58-99) from independent living facilities with indoor access to a stationary exercise bike. Of the 79, 63 participants completed the three-month study, meaning that they achieved at least 25 rides during the three months.

Unfortunately, randomization was not as good as it should have been — although the researchers planned to randomize on an individual basis, various technical problems led them to randomize on a site basis (there were eight sites), with the result that the cybercycle group and the control bike group were significantly different in age and education. Although the researchers took this into account in the analysis, that is not the same as having groups that match in these all-important variables. However, at least the variables went in opposite directions: while the cybercycle group was significantly younger (average 75.7 vs 81.6 years), it was significantly less educated (average 12.6 vs 14.8 years).

Perhaps also partly off-setting the age advantage, the cybercycle group was in poorer shape than the control group (higher BMI, glucose levels, lower physical activity level, etc), although these differences weren’t statistically significant. IQ was also lower for the cybercycle group, if not significantly so (but note the high averages for both groups: 117.6 vs 120.6). One of the three tests of executive function, Color Trails, also showed a marked group difference, but the large variability in scores meant that this difference was not statistically significant.

Although participants were screened for disorders such as Alzheimer’s and Parkinson’s, and functional disability, many of both groups were assessed as having MCI — 16 of the 38 in the cybercycle group and 14 of the 41 in the control bike group.

Participants were given cognitive tests at enrolment, one month later (before the intervention began), and after the intervention ended. The stationary bikes were identical for both groups, except the experimental bike was equipped with a virtual reality display. Cybercycle participants experienced 3D tours and raced against a "ghost rider," an avatar based on their last best ride.

The hypothesis was that cybercycling would particularly benefit executive function, and this was borne out. Executive function (measured by the Color Trails, Stroop test, and Digits Backward) improved significantly more in the cybercycle condition, and indeed was the only cognitive task to do so (other cognitive tests included verbal fluency, verbal memory, visuospatial skill, motor function). Indeed, the control group, despite getting the same amount of exercise, got worse at the Digits Backward test, and failed to show any improvement on the Stroop test.

Moreover, significantly fewer cybercyclists progressed to MCI compared to the control group (three vs nine).

There were no differences in exercise quantity or quality between the two groups — which does argue against the idea that cyber-enhanced physical activity would be more motivating. However, the cybercycling group did tend to comment on their enjoyment of the exercise. While the enjoyment may not have translated into increased activity in this situation, it may well do so in a longer, less directed intervention — i.e. real life.

It should also be remembered that the intervention was relatively short, and that other cognitive tasks might take longer to show improvement than the more sensitive executive function. This is supported by the fact that levels of the brain growth factor BDNF, assessed in 30 participants, showed a significantly greater increase in cybercyclists.

I should also emphasize that the level of physical exercise really wasn't that great, but nevertheless the size of the cybercycle's effect on executive function was greater than usually produced by aerobic exercise (a medium effect rather than a small one).
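For readers unfamiliar with effect sizes: "medium rather than small" refers to Cohen's d, the difference between group means expressed in pooled-standard-deviation units, with conventional benchmarks of roughly 0.2 (small), 0.5 (medium) and 0.8 (large). A minimal sketch, using invented change scores purely for illustration:

```python
from statistics import stdev

def cohens_d(group1, group2):
    """Cohen's d: the difference between group means expressed in
    pooled-standard-deviation units. Rough benchmarks: 0.2 small,
    0.5 medium, 0.8 large."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Invented executive-function change scores, for illustration only:
cybercycle = [3, 5, 4, 6, 2, 5]
control    = [2, 4, 3, 5, 2, 4]
print(f"d = {cohens_d(cybercycle, control):.2f}")   # ~0.6: a medium effect
```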

The idea that activities combining physical and mental exercise are of greater cognitive benefit than the sum of benefits from each type of exercise on its own is not inconsistent with previous research, and is in keeping with evidence from animal studies that physical exercise and mental stimulation help the brain via different mechanisms. Moreover, I have an idea that enjoyment (in itself, not as a proxy for motivation) may be a factor in the cognitive benefits derived from activities, whether physical or mental. Mere speculation, derived from two quite separate areas of research: the idea of “flow” / “being in the zone”, and the idea that humor has physiological benefits.

Of course, as discussed, this study has a number of methodological issues that limit its findings, but hopefully it will be the beginning of an interesting line of research.  

[2724] Anderson-Hanley, C., Arciero P. J., Brickman A. M., Nimon J. P., Okuma N., Westen S. C., et al. (2012).  Exergaming and Older Adult Cognition. American Journal of Preventive Medicine. 42(2), 109 - 119.

New study modifies findings that younger adults are better decision-makers by showing older adults are better when the scenarios involve multiple considerations.

Research has shown that younger adults are better decision makers than older adults — a curious result. A new study tried to capture more ‘real-world’ decision-making, by requiring participants to evaluate each result in order to strategize the next choice.

This time (whew!), the older adults did better.

In the first experiment, groups of older (60-early 80s) and younger (college-age) adults received points each time they chose from one of four options and tried to maximize the points they earned.  For this task, the younger adults were more efficient at selecting the options that yielded more points.

In the second experiment, the rewards received depended on the choices made previously.  The “decreasing option” gave a larger number of points on each trial, but caused rewards on future trials to be lower. The “increasing option” gave a smaller reward on each trial but caused rewards on future trials to increase.  In one version of the test, the increasing option led to more points earned over the course of the experiment; in another, chasing the increasing option couldn’t make up for the points that could be accrued grabbing the bigger bite on each trial.

The older adults did better on every permutation.

Understanding more complex scenarios is where experience tells. The difference in performance also may reflect the different ways younger and older adults use their brains. Decision-making can involve two different reward learning systems, according to recent thinking. In the model-based system, a cognitive model is constructed that shows how various actions and their rewards are connected to each other. Decisions are made by simulating how one decision will affect future decisions. In the model-free system, on the other hand, only values associated with each choice are considered.

These systems are rooted in different parts of the brain. The model-based system uses the intraparietal sulcus and lateral prefrontal cortex, while the model-free system uses the ventral striatum. There is some evidence that younger adults use the ventral striatum (involved in habitual, reflexive learning and immediate reward) for decision-making more than older adults, and older adults use the dorsolateral prefrontal cortex (involved in more rational, deliberative thinking) more than younger adults.
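To make the distinction concrete, here is a minimal sketch of a task with the same structure as the second experiment. The payoff numbers are invented, and the two choosers are caricatures of the two systems rather than the researchers' actual models: the myopic chooser considers only each trial's immediate payoff (in the spirit of the model-free system), while the farsighted chooser simulates a few trials ahead before deciding (in the spirit of the model-based system).

```python
def step(level, choice):
    """Invented payoff structure with the flavor of the task above:
    'decreasing' pays more right now but drags future payoffs down, while
    'increasing' pays less now but raises the payoff level for later trials."""
    if choice == "increasing":
        reward = 30 + 5 * level
        level = min(10, level + 1)
    else:  # "decreasing"
        reward = 60 + 5 * level
        level = max(0, level - 1)
    return reward, level

def run_policy(choose, trials=100, start_level=5):
    level, total = start_level, 0
    for _ in range(trials):
        reward, level = step(level, choose(level))
        total += reward
    return total

def myopic(level):
    """Model-free caricature: pick whatever pays more on this trial."""
    return "decreasing"

def farsighted(level, horizon=10):
    """Model-based caricature: simulate sticking with each option for a
    few trials and pick the one with the better simulated total."""
    def simulated_total(option):
        lvl, total = level, 0
        for _ in range(horizon):
            r, lvl = step(lvl, option)
            total += r
        return total
    return max(("increasing", "decreasing"), key=simulated_total)

print("myopic chooser:    ", run_policy(myopic))      # settles at ~60/trial
print("farsighted chooser:", run_policy(farsighted))  # climbs to ~80/trial
```

Run for 100 trials, the myopic chooser drifts to the bottom of the payoff scale while the farsighted one climbs to the top, which is exactly the trap built into the "decreasing option".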

A six-week memory fitness program offered to older adults helped improve their ability to recognize and recall words.

In a study involving 115 seniors (average age 81), those who participated in a six-week, 12-session memory training program significantly improved their verbal memory. Each hour-long class included 15-20 seniors, and covered explanations of how memory works, quick strategies for remembering names, faces and numbers, basic memory strategies such as linking ideas and creating visual images, and information on a healthy lifestyle for protecting and maintaining memory.

Most of the study participants were women, Caucasian and had attained a college degree or higher level of education.

[2491] Miller, K. J., Siddarth P., Gaines J. M., Parrish J. M., Ercoli L. M., Marx K., et al. (2011).  The Memory Fitness Program. American Journal of Geriatric Psychiatry.

New evidence challenges the view that older adults learn best through errorless learning. Trial-and-error learning can be better if done the right way.

Following a 1994 study that found that errorless learning was better than trial-and-error learning for amnesic patients and older adults, errorless learning has been widely adopted in the rehabilitation industry. Errorless learning involves being told the answer without repeatedly trying to answer the question and perhaps making mistakes. For example, in the 1994 study, participants in the trial-and-error condition could produce up to three errors in answer to the question “I am thinking of a word that begins with QU”, before being told the answer was QUOTE; in contrast, participants in the errorless condition were simply told “I am thinking of a word that begins with QU and it is ‘QUOTE’.”

In a way, it is surprising that errorless learning should be better, given that trial-and-error produces much deeper and richer encoding, and a number of studies with young adults have indeed found an advantage for making errors. Moreover, it’s well established that retrieving an item leads to better learning than passively studying it, even when you retrieve the wrong item. This testing effect has also been found in older adults.

In another way, the finding is not surprising at all, because clearly the trial-and-error condition offers many opportunities for confusion. You remember that QUEEN was mentioned, for example, but you don’t remember whether it was a right or wrong answer. Source memory, as I’ve often mentioned, is particularly affected by age.

So there are good theoretical reasons for both positions regarding the value of mistakes, and there’s experimental evidence for both. Clearly it’s a matter of circumstance. One possible factor influencing the benefit or otherwise of error concerns the type of processing. Those studies that have found a benefit have generally involved conceptual associations (e.g. What’s Canada’s capital? Toronto? No, Ottawa). It may be that errors are helpful to the extent that they act as retrieval cues, and evoke a network of related concepts. Those studies that have found errors harm learning have generally involved perceptual associations, such as word stems and word fragments (e.g., QU? QUeen? No, QUote). These errors are arbitrary, produce interference, and don’t provide useful retrieval cues.

So this new study tested the idea that producing errors conceptually associated with targets would boost memory for the encoding context in which information was studied, especially for older adults who do not spontaneously elaborate on targets at encoding.

In the first experiment, 33 young (average age 21) and 31 older adults (average age 72) were shown 90 nouns presented in three different, intermixed conditions. In the read condition (designed to provide a baseline), participants read aloud a noun fragment presented without a semantic category (e.g., p_g). In the errorless (EL) condition, the semantic category was presented with the target word fragment (e.g., a farm animal: p_g), and the participants read aloud the category and their answer; the category and target were then displayed. In the trial-and-error (TEL) condition, the category was presented and participants were encouraged to make two guesses before being shown the target fragment together with the category (the researchers changed the target if it was guessed).

Participants were then tested using a list of 70 words, of which 10 came from each of the study conditions, 10 were new unrelated words, and 30 were nontarget exemplars from the TEL categories. Those the participant had guessed were labeled learning errors; those that hadn’t come up were labeled related lures. In addition to an overall recognition test (press “yes” to any word you’ve studied and “no” to any new word), there were two tests that required participants to endorse items studied in the TEL condition and reject those studied in the EL condition, and vice versa.

The young adults did better than the older adults on every test. TEL produced better learning than EL, and both produced better learning than the read condition (as expected). The benefit of TEL was greater for older adults. This is in keeping with the idea that generating exemplars of a semantic category, as occurs in trial-and-error learning, helps produce a richer, more elaborated code, and that this is of greater benefit to older adults, who are less inclined to do this without encouragement.

There was a downside, however. Older adults were also more prone to falsely endorsing prior learning errors or semantically-related lures. It’s worth noting that both groups were more likely to falsely endorse learning errors than related lures.

But the main goal of this first experiment was to disentangle the contributions of recollection and familiarity to the two types of learning. It turns out that there was no difference between young and older adults in terms of familiarity; the difference in performance between the two groups stemmed from recollection. Recollection was a problem for older adults in the errorless condition, but not in the trial-and-error condition (where the recollective component of their performance matched that of young adults). This deficit is clearly closely related to age-related deficits in source memory.

It was also found that familiarity was marginally more important in the errorless condition than the trial-and-error condition. This is consistent with the idea that targets learned without errors acquire greater fluency than those learned with errors (with the downside that they don’t pick up those contextual details that making errors can provide).

In the second experiment, 15 young and 15 older adults carried out much the same procedure, except that the recognition test also probed the context in which the words had been learned (that is, whether the words were learned through trial-and-error or not).

Once again, trial-and-error learning was associated with better source memory relative to errorless learning, particularly for the older adults.

These results support the hypothesis that trial-and-error learning is more beneficial than errorless learning for older adults when the trials encourage semantic elaboration. But another factor may also be involved. Unlike other errorless studies, participants were required to attend to errors as well as targets. Explicit attention to errors may help protect against interference.

In a similar way, a recent study involving young adults found that feedback given in increments (thus producing errors) is more effective than feedback given all at once in full. Clearly what we want is to find that balance point, where elaborative benefits are maximized and interference is minimized.

[2496] Cyr, A.-A., & Anderson N. D. (2011).  Trial-and-error learning improves source memory among young and older adults. Psychology and Aging.

A small study suggests that middle-aged couples are more likely to be effective than older couples in helping fill in each other’s memory gaps, but effective collaboration also depends on conversational style.

In my book on remembering what you’re doing and what you intend to do, I briefly discuss the popular strategy of asking someone to remind you (basically, whether it’s an effective strategy depends on several factors, of which the most important is the reliability of the person doing the reminding). So I was interested to see a pilot study investigating the use of this strategy between couples.

The study confirms earlier findings that the extent to which this strategy is effective depends on how reliable the partner's memory is, but expands on that by tying it to age and conversational style.

The study involved 11 married couples, of whom five were middle-aged (average age 52), and six were older adults (average age 73). Participants completed a range of prospective memory tasks by playing the board game "Virtual Week," which encourages verbal interaction among players about completing real life tasks. For each virtual "day" in the game, participants were asked to perform 10 different prospective memory tasks — four that regularly occur (e.g., taking medication with breakfast), four that were different each day (e.g., purchasing gasoline for the car), and two time-check tasks that were not based on the activities of the board game (e.g., checking lung capacity at two specified times).

Overall, the middle-aged group benefited more from collaboration than the older group. But it was also found that those couples who performed best were those who were more supportive and encouraging of each other.

Collaboration in memory tasks is an interesting activity, because it can be both helpful and hindering. Think about how memory works — by association. You start from some point, and if you’re on a good track, more and more should be revealed as each memory triggers another. If another person keeps interrupting your train of thought, you can be derailed. On the other hand, they might help you fill in the gaps you need, or even point you to the right track, if you’re on the wrong one.

In this small study, it tended to be the middle-aged couples that filled in the gaps more effectively than the older couples. That probably has a lot to do with memory reliability. So it’s not a big surprise (though useful to be aware of). But what I find more interesting (because it’s less obvious, and more importantly, because it’s more under our control) is this idea that our conversational style affects whether memory collaboration is useful or counterproductive. I look forward to results from a larger study.

[2490] Margrett, J. A., Reese-Melancon C., & Rendell P. G. (2011).  Examining Collaborative Dialogue Among Couples. Zeitschrift für Psychologie / Journal of Psychology. 219, 100 - 107.

Another study adds to the weight of evidence that meditating has cognitive benefits. The latest finding points to brain-wide improvements in connectivity.

Following on from research showing that long-term meditation is associated with gray matter increases across the brain, an imaging study involving 27 long-term meditators (average age 52) and 27 controls (matched by age and sex) has revealed pronounced differences in white-matter connectivity between their brains.

The differences reflect white-matter tracts in the meditators’ brains being more numerous, more dense, more myelinated, or more coherent in orientation (unfortunately the technology does not yet allow us to disentangle these) — thus, better able to quickly relay electrical signals.

While the differences were evident among major pathways throughout the brain, the greatest differences were seen within the temporal part of the superior longitudinal fasciculus (bundles of neurons connecting the front and the back of the cerebrum) in the left hemisphere; the corticospinal tract (a collection of axons that travel between the cerebral cortex of the brain and the spinal cord), and the uncinate fasciculus (connecting parts of the limbic system, such as the hippocampus and amygdala, with the frontal cortex) in both hemispheres.

These findings are consistent with the regions in which gray matter increases have been found. For example, the tSLF connects with the caudal area of the temporal lobe, the inferior temporal gyrus, and the superior temporal gyrus; the UNC connects the orbitofrontal cortex with the amygdala and hippocampal gyrus.

It’s possible, of course, that those who are drawn to meditation, or who are likely to engage in it long term, have fundamentally different brains from other people. However, it is more likely (and more consistent with research showing the short-term effects of meditation) that the practice of meditation changes the brain.

The precise mechanism by which meditation might have these effects remains a matter of speculation. However, more broadly, we can say that meditation might induce physical changes in the brain, or it might be protecting against age-related reduction. Most likely of all, perhaps, both processes might be going on, perhaps in different regions or networks.

Regardless of the mechanism, the evidence that meditation has cognitive benefits is steadily accumulating.

The number of years the meditators had practiced ranged from 5 to 46. They reported a number of different meditation styles, including Shamatha, Vipassana and Zazen.

Another study confirms the cognitive benefits of extensive musical training that begins in childhood, at least for hearing.

A number of studies have demonstrated the cognitive benefits of music training for children. Now research is beginning to explore just how long those benefits last. This is the second study I’ve reported on this month that points to childhood music training protecting older adults from aspects of cognitive decline. In this study, 37 adults aged 45 to 65, of whom 18 were classified as musicians, were tested on their auditory and visual working memory, auditory temporal acuity, and their ability to hear speech in noise.

The musicians performed significantly better than the non-musicians at distinguishing speech in noise, and on the auditory temporal acuity and working memory tasks. There was no difference between the groups on the visual working memory task.

Difficulty hearing speech in noise is among the most common complaints of older adults, but age-related hearing loss only partially accounts for the problem.

The musicians had all begun playing an instrument by age 8 and had consistently played an instrument throughout their lives. Those classified as non-musicians had no musical experience (12 of the 19) or less than three years at any point in their lives. The seven with some musical experience rated their proficiency on an instrument at less than 1.5 on a 10-point scale, compared to at least 8 for the musicians.

Physical activity levels were also assessed. There was no significant difference between the groups.

The finding that visual working memory was not affected supports the idea that musical training helps domain-specific skills (such as auditory and language processing) rather than general ones.

A new study finds length of musical training in childhood is associated with less cognitive decline in old age.

A study involving 70 older adults (60-83) has found that those with at least ten years of musical training performed best on cognitive tests, followed by those with one to nine years of musical study, while those with no musical training trailed the field.

All the musicians were amateurs who began playing an instrument at about 10 years of age. Half of the high-level musicians still played an instrument at the time of the study, but they didn't perform better on the cognitive tests than the other advanced musicians who had stopped playing years earlier. Previous research suggests that both years of musical participation and age of acquisition are critical.

All the participants had similar levels of education and fitness. The cognitive tests covered visuospatial memory, object naming, and executive function.

Hanna-Pladdy, B., & MacKay A. (2011). The relation between instrumental musical activity and cognitive aging. Neuropsychology. 25(3), 378 - 386. doi: 10.1037/a0021895

Walking speed and balance may be improved in seniors through a brain training program. Research has indicated that a common pathology underlies cognitive impairment and gait and balance problems.

On the subject of the benefits of walking for seniors, it’s intriguing to note a recent pilot study that found frail seniors who walked slowly (no faster than one meter per second) benefited from a brain fitness program known as Mindfit. After eight weeks of sessions three times weekly (each session 45-60 minutes), all ten participants walked a little faster, and significantly faster while talking. Walking while talking requires considerably more concentration than normal walking. The success of this short intervention (which needs to be replicated in a larger study) offers the hope that frail elderly who may be unable to participate in physical exercise could improve their mobility through brain fitness programs. Poor gait speed is also correlated with a higher probability of falls.

The connection between gait speed and cognitive function is an interesting one. Previous research has indicated that slow gait should alert doctors to check for cognitive impairment. One study found severe white matter lesions were more likely in those with gait and balance problems. Most recently, a longitudinal study involving over 900 older adults has found that poorer global cognitive function, verbal memory, and executive function were all predictive of greater decline in gait speed.

A new study shows improvement in visual working memory in older adults following ten hours training with a commercial brain training program. The performance gains correlated with changes in brain activity.

While brain training programs can certainly improve your ability to do the task you’re practicing, there has been little evidence that this transfers to other tasks. In particular, the holy grail has been very broad transfer, through improvement in working memory. While there has been some evidence of this in pilot programs for children with ADHD, a new study is the first to show such improvement in older adults using a commercial brain training program.

A study involving 30 healthy adults aged 60 to 89 has demonstrated that ten hours of training on a computer game designed to boost visual perception improved perceptual abilities significantly, and also increased the accuracy of their visual working memory to the level of younger adults. There was a direct link between improved performance and changes in brain activity in the visual association cortex.

The computer game was one of those developed by Posit Science. The participants, half of whom underwent the training, were college educated. The training challenged players to discriminate between two different shapes of sine waves (S-shaped patterns) moving across the screen. The memory test (performed before and after training) involved watching dots move across the screen, followed by a short delay, and then re-testing memory for the exact direction the dots had moved. Memory improvement was measured about one week after the end of training. The improvement did not, however, withstand multitasking, which is a particular problem for older adults.

A comprehensive review of the recent research into the benefits of music training on learning and the brain concludes that music training in schools should be strongly supported.

A review of the many recent studies into the effects of music training on the nervous system strongly suggests that the neural connections made during musical training also prime the brain for other aspects of human communication, including learning. It’s suggested that actively engaging with musical sounds not only helps the plasticity of the brain, but also helps provide a stable scaffolding of meaningful patterns. Playing an instrument primes the brain to choose what is relevant in a complex situation. Moreover, it trains the brain to make associations between complex sounds and their meaning — something that is also important in language.

Music training can provide skills that enable speech to be better heard against background noise — useful not only for those with some hearing impairment (a common difficulty as we get older), but also for children with learning disorders. The review concludes that music training tones the brain for auditory fitness, analogous to the way physical exercise tones the body, and that the evidence justifies serious investment in music training in schools.

[1678] Kraus, N., & Chandrasekaran B. (2010).  Music training for the development of auditory skills. Nat Rev Neurosci. 11(8), 599 - 605.

A month's training in sound discrimination reversed normal age-related cognitive decline in the auditory cortex in old rats.

A rat study demonstrates how specialized brain training can reverse many aspects of normal age-related cognitive decline in targeted areas. The month-long study involved daily hour-long sessions of intense auditory training targeted at the primary auditory cortex. The rats were rewarded for picking out the oddball note in a rapid sequence of six notes (five of them of the same pitch). The difference between the oddball note and the others became progressively smaller. After the training, aged rats showed substantial reversal of their previously degraded ability to process sound. Moreover, measures of neuron health in the auditory cortex had returned to nearly youthful levels.

A large study has found evidence that frequent cognitive activity can counteract the detrimental effect of poor education on at least one aspect of age-related cognitive decline: episodic memory.

A study (“Midlife in the United States”) assessing 3,343 men and women aged 32-84 (mean age 56), of whom almost 40% had at least a 4-year college degree, has found evidence that frequent cognitive activity can counteract the detrimental effect of poor education on age-related cognitive decline. As expected, those with higher education engaged in cognitive activities more often and did better on the memory tests. However, those with lower education who engaged in reading, writing, attending lectures, or doing word games or puzzles once a week or more had episodic memory scores similar to those of people with more education (although this effect did not occur for executive functioning).

[651] Lachman, M. E., Agrigoroaei S., Murphy C., & Tun P. A. (2010).  Frequent cognitive activity compensates for education differences in episodic memory. The American Journal of Geriatric Psychiatry: Official Journal of the American Association for Geriatric Psychiatry. 18(1), 4 - 10.

A study has found resistance training significantly improved selective attention and conflict resolution in older women, but balance and tone training did not.

A study involving 155 women aged 65-75 has found that those who participated in resistance training once or twice weekly for a year significantly improved their selective attention (maintaining mental focus) and conflict resolution (as well as muscular function, of course!), compared to those who participated in twice-weekly balance and tone training. Performance on the Stroop test improved by 12.6% and 10.9% in the once-weekly and twice-weekly resistance training groups respectively, while it deteriorated by 0.5% in the balance and tone group. Improved attention and conflict resolution were also significantly associated with increased gait speed.

Older news items (pre-2010) brought over from the old website

Characteristics of age-related cognitive decline in semantic memory

A study involving 117 healthy elderly (aged 60-91) has found that, while increasing age was associated with poorer memory for the names of famous people, age didn’t affect memory for biographical details about them. It also found that names served as better cues to those details than faces did. A follow-up study (to be published in Neuropsychologia) found that, in contrast, those with mild cognitive impairment and early Alzheimer’s showed not only greater difficulty in remembering names, but also a decline in memory for biographical details.

[1308] Langlois, R., Fontaine F., Hamel C., & Joubert S. (2009).  [The impact of aging on the ability to recognize famous faces and provide biographical knowledge of famous people]. Canadian Journal on Aging = La Revue Canadienne Du Vieillissement. 28(4), 337 - 345.

http://www.eurekalert.org/pub_releases/2009-12/uom-whn121809.php

Rote learning may improve verbal memory in seniors

A study involving 24 older adults (aged 55-70) has found that six weeks of intensive rote learning (memorizing a 500-word newspaper article or poem every week) resulted in measurable changes in three brain metabolites related to memory performance and neural cell health (N-acetylaspartate, creatine and choline) in the left posterior hippocampus. These changes appeared only after a six-week rest period, at which point the participants also showed improvements in their verbal and episodic memory, and only in one of the two learning groups; the group that showed no change was said to have had low compliance with the memorization task.

McNulty, J. et al. The Identification of Neurometabolic Sequelae Post-learning Using Proton Magnetic Resonance Spectroscopy. Presented November 26 at the annual meeting of the Radiological Society of North America (RSNA).

http://www.eurekalert.org/pub_releases/2006-11/rson-rli112206.php

Actors’ memory tricks help students and older adults

The ability of actors to remember large amounts of dialog verbatim is a marvel to most of us, and most of us assume they do it by painful rote memorization. But two researchers who have studied the way actors learn for many years have concluded that the secret of actors' memories is in the acting: actors learn their lines by focusing on the character’s motives and feelings, getting inside the character. To do this, they break a script down into a series of logically connected "beats" or intentions. The researchers call this process active experiencing, which uses "all physical, mental, and emotional channels to communicate the meaning of material to another person."

This principle can be applied in other contexts. For example, students who imagined themselves explaining something to somebody else remembered more than those who tried to memorize the material by rote. Physical movement also helps: lines learned while doing something, such as walking across the stage, were remembered better than lines not accompanied by action. The principles have also been found useful in improving memory in older adults: those who received a four-week course in acting showed significantly improved word-recall and problem-solving abilities compared to both a group that received a visual-arts course and a control group, and this improvement persisted four months afterward.

[2464] Noice, H., & Noice T. (2006).  What Studies of Actors and Acting Can Tell Us About Memory and Cognitive Functioning. Current Directions in Psychological Science. 15(1), 14 - 18.

http://www.eurekalert.org/pub_releases/2006-01/aps-bo012506.php

'Imagination' helps older people remember to comply with medical advice

A new study suggests a way to help older people remember to take medications and follow other medical advice. Researchers found that older adults (aged 60 to 81) who spent a few minutes picturing how they would test their blood sugar were 50% more likely to actually do the tests on a regular basis than those who used other memory techniques. Participants were assigned to one of three groups. One group spent a single 3-minute session visualizing exactly what they would be doing and where they would be the next day when they were scheduled to test their blood sugar levels. Another group repeatedly recited aloud the instructions for testing their blood. The last group was asked to write a list of pros and cons for testing blood sugar. All participants were asked not to use timers, alarms or other devices. Over three weeks, the “imagination” group remembered to test their blood sugar at the right times of day 76% of the time, compared to an average of 46% in the other two groups. They were also far less likely to go an entire day without testing than those in the other two groups.

[473] Liu, L. L., & Park D. C. (2004).  Aging and medical adherence: the use of automatic processes to achieve effortful things. Psychology and Aging. 19(2), 318 - 325.

http://www.eurekalert.org/pub_releases/2004-06/nioa-ho060104.php

How to benefit from memory training

Brain and memory training programs are increasingly popular, but they don't work well for everyone. In particular, they tend to be much less effective for those who need them the most: those 80 and older, and those with lower initial ability. But a new study shows the problem is not intrinsic but depends on the strategies people use. The study found that people in their 60s and 70s spent most of their time studying the materials and very little on the test, and showed large improvements over the testing sessions. By contrast, most people in their 80s and older spent very little time studying and instead spent most of their time on the test. These people did not do well and showed very little improvement even after two weeks of training.

[882] Bissig, D., & Lustig C. (2007). Who Benefits From Memory Training?. Psychological Science. 18, 720 - 726.

http://www.eurekalert.org/pub_releases/2007-08/uom-dpt082007.php
