working memory

Why older adults lose working memory capacity

The root of age-related cognitive decline may lie in a reduced ability to ignore distractors. A new study indicates that older adults put more effort into focusing during encoding, in order to compensate for a reduced ability to hold information in working memory. The finding suggests a multi-pronged approach to improving cognitive ability in older adults.

I've reported before on the idea that the drop in working memory capacity commonly seen in old age is related to the equally typical increase in distractibility. Studies of brain activity have also indicated that lower WMC is correlated with greater storage of distractor information. So those with higher WMC, it's thought, are better at filtering out distraction and focusing only on the pertinent information. Older adults may show a reduced WMC, therefore, because their ability to ignore distraction and irrelevancies has declined.

Why does that happen?

A new, large-scale study using a smartphone game suggests that the root cause is a change in the way we hold items in working memory.

The study involved 29,631 people aged 18-69, who played a smartphone game in which they had to remember the positions of an increasing number of red circles. Yellow circles, which had to be ignored, could also appear — either at the same time as the red circles, or after them. Data from this game revealed both WMC (how many red circle locations the individual could remember) and distractibility (how many red circle locations they could remember in the face of irrelevant yellow circles).

Now this game isn't simply a way of measuring WMC. It enables us to make an interesting distinction based on the timing of the distraction. If the yellow circles appear at the same time as the red ones, they provide distraction while you are trying to encode the information. If they appear afterward, the distraction occurs while you are trying to maintain the information in working memory.

It would seem commonsensical that distraction at the time of encoding must be the main problem, but the fascinating finding of this study is that distraction during the delay (while the information is being maintained in working memory) was the greater problem. And it was this delay distraction that became more and more marked with increasing age.

The study is a follow-up to a smaller 2014 study that included two experiments: a lab experiment involving 21 young adults, and data from the same smartphone game involving only the younger cohort (18-29 years; 3247 participants).

This study demonstrated that distraction during encoding and distraction during delay were independent contributory factors to WMC, suggesting that separate mechanisms are involved in filtering out distraction at encoding and maintenance.

Interestingly, analysis of the data from the smartphone game did indicate some correlation between the two in that context. One reason may be that participants in the smartphone game were exposed to higher load trials (the lab study kept WM load constant); another might be that they were in more distracting environments.

While researchers have generally assumed until now that the two processes are not distinct, it has been theorized that distractor filtering at encoding may involve a 'selective gating mechanism', while filtering during WM maintenance may involve a shutting down of perception. The former has been linked to a gating mechanism in the striatum in the basal ganglia, while the latter has been linked to an increase in alpha waves in the frontal cortex, specifically the left middle frontal gyrus. The dorsolateral prefrontal cortex may also be involved in distractor filtering at encoding.

To return to the more recent study:

  • there was a significant decrease in WMC with increasing age in all conditions (no distraction; encoding distraction; delay distraction)
  • for older adults, the decrease in WMC was greatest in the delay distraction condition
  • when 'distraction cost' was calculated as ((ND score − ED or DD score)/ND score) × 100, there was a significant correlation between delay distraction cost and age, but not between encoding distraction cost and age
  • for older adults, performance in the encoding distraction condition was better predicted by performance in the no distraction condition than it was among the younger groups
  • this correlation was significantly different between the 30-39 age group and the 40-49 age group, between the 40s and the 50s, and between the 50s and the 60s — showing that this is a progressive change
  • older adults with a higher delay distraction cost (ie, those more affected by distractors during delay) also showed a significantly greater correlation between their no-distraction performance and encoding-distraction performance.
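The 'distraction cost' measure above is simple to compute. A minimal sketch, using hypothetical scores rather than the study's actual data:

```python
def distraction_cost(nd_score: float, distracted_score: float) -> float:
    """Percentage drop in WM performance relative to the no-distraction (ND) baseline."""
    return (nd_score - distracted_score) / nd_score * 100

# Hypothetical scores (red-circle locations recalled), not the study's data:
nd = 4.0   # no distraction
ed = 3.6   # distraction during encoding
dd = 3.0   # distraction during delay

print(round(distraction_cost(nd, ed), 1))  # encoding distraction cost -> 10.0
print(round(distraction_cost(nd, dd), 1))  # delay distraction cost -> 25.0
```

Because the cost is normalized by the individual's own no-distraction score, it separates vulnerability to distraction from baseline capacity — which is what lets the study correlate each cost with age independently.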

All of this suggests that older adults are focusing more attention during encoding, even when there is no distraction, and that they are doing so to compensate for their reduced ability to maintain information in working memory.

This suggests several approaches to improving older adults' ability to cope:

  • use perceptual discrimination training to help improve WMC
  • make working memory training more about learning to ignore certain types of distraction
  • reduce distraction — modify daily tasks to make them more "older adult friendly"
  • (my own speculation) use meditation training to improve frontal alpha rhythms.

You can participate in the game yourself, at http://thegreatbrainexperiment.com/

http://medicalxpress.com/news/2015-05-smartphone-reveals-older.html

Reference: 

[3921] McNab, F., Zeidman, P., Rutledge, R. B., Smittenaar, P., Brown, H. R., Adams, R. A., et al. (2015). Age-related changes in working memory and the ability to ignore distraction. Proceedings of the National Academy of Sciences, 112(20), 6515–6518.

McNab, F., & Dolan, R. J. (2014). Dissociating distractor-filtering at encoding and during maintenance. Journal of Experimental Psychology. Human Perception and Performance, 40(3), 960–7. doi:10.1037/a0036013


Clarity in short-term memory shows no link with IQ

December, 2010

The two measures of working memory capacity appear to be fully independent, and only one of them is related to intelligence.

The number of items a person can hold in short-term memory is strongly correlated with their IQ. But short-term memory has recently been found to vary along another dimension as well: some people remember (‘see’) the items in short-term memory more clearly and precisely than others. This discovery has led to the hypothesis that both of these factors should be considered when measuring working memory capacity. But do both these aspects correlate with fluid intelligence?

A new study presented 79 students with screen displays fleetingly showing either four or eight items. After a one-second blank screen, one item was returned and the subject was asked whether that object had previously been in a particular location. Their ability to detect large and small changes in the items provided an estimate of how many items each individual could hold in working memory, and how clearly they remembered them. These measures were compared with individuals’ performance on standard measures of fluid intelligence.
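Capacity in change-detection tasks of this kind is commonly estimated with Cowan's K formula. The study's own analysis also models clarity (precision), but K illustrates how the 'number' measure is derived; the subject's rates below are hypothetical:

```python
def cowans_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Cowan's K, a standard capacity estimate for single-probe change-detection
    tasks: K = set size x (hit rate - false alarm rate)."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical subject (not the study's data): with 8 items on screen,
# they detect 70% of changes and false-alarm on 20% of no-change trials.
print(round(cowans_k(8, 0.70, 0.20), 2))  # -> 4.0 items held in WM
```

Correcting hits by the false-alarm rate stops pure guessing from inflating the capacity estimate: a subject who always answers "changed" gets K = 0.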

Analysis of the data found that these two measures of working memory — number and clarity — are completely independent of each other, and that only the number factor correlated with intelligence.

This is not to say that clarity is unimportant! Only that it is not related to intelligence.


Forgetfulness in old age may be related to changes in retrieval strategy

April, 2013

A study of younger and older adults indicates that memory search tends to decline with age because, with reduced cognitive control, seniors’ minds tend to ‘flit’ too quickly from one information cluster to another.

Evidence is accumulating that age-related cognitive decline is rooted in three related factors: processing speed slows down (because of myelin degradation); the ability to inhibit distractions becomes impaired; working memory capacity is reduced.

A new study adds to this evidence by looking at one particular aspect of age-related cognitive decline: memory search.

The study put 185 adults aged 29-99 (average age 67) through three cognitive tests: a vocabulary test, digit span (a working memory test), and the animal fluency test, in which you name as many animals as you can in one minute.

Typically, in the animal fluency test, people move through semantic categories such as ‘pets’, ‘big cats’, and so on. The best performers are those who move from category to category with optimal timing — i.e., at the point where the category has been sufficiently exhausted that efforts would be better spent on a new one.

Participants recalled on average 17 animal names, with a range from 5 to 33. While there was a decline with age, it wasn’t particularly marked until the 80s (an average of 18.3 for those in their 30s, 17.5 for those in their 60s, 16.5 for the 70s, 12.8 for the 80s, and 10 for the 90s). Digit span did show a decline, but it was not significant (from 17.5 down to 15.3), while vocabulary (consistent with previous research) showed no decline with age.

But all this is by the by — the nub of the experiment was to discover how individuals were searching their memory. This required a quite complicated analysis, which I will not go into, except to mention two important distinctions. The first is between:

  • global context cue: activates each item in the active category according to how strong it is (how frequently it has been recalled in the past);
  • local context cue: activates each item in relation to its semantic similarity to the previous item recalled.

A further distinction was made between static and dynamic processes: in dynamic models, it is assumed the searcher switches between local and global search. This, it is further assumed, is because memory is ‘patchy’ – that is, information is represented in clusters. Within a cluster, we use local cues, but to move from one cluster to another, we use global cues.

The point of all this was to determine whether age-related decline in memory search has to do with:

  • Reduced processing speed,
  • Persisting too long on categories, or
  • Inability to maintain focus on local cues (this would relate it back to the inhibition deficit).

By modeling the exact recall patterns, the researchers ascertained that the recall process is indeed dynamic, although the points of transition are not clearly understood. The number of transitions from one cluster to another was negatively correlated with age; it was also strongly positively correlated with performance (number of items recalled). Digit span, assumed to measure ‘cognitive control’, was also negatively correlated with number of transitions, but, as I said, was not significantly correlated with age.
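The 'number of transitions' measure can be made concrete with a small sketch. The category coding and recall list here are my own hypothetical illustration, not the study's actual model:

```python
# Hypothetical hand-coding of animals to semantic clusters (e.g., 'pets',
# 'big cats'), as is typically done when scoring the animal fluency test.
category = {
    "dog": "pets", "cat": "pets", "hamster": "pets",
    "lion": "big cats", "tiger": "big cats",
    "cow": "farm", "sheep": "farm", "pig": "farm",
}

def count_transitions(recalled: list[str]) -> int:
    """Count how often the searcher moves from one semantic cluster to another."""
    clusters = [category[animal] for animal in recalled]
    return sum(1 for prev, cur in zip(clusters, clusters[1:]) if cur != prev)

recall = ["dog", "cat", "hamster", "lion", "tiger", "cow", "sheep"]
print(count_transitions(recall))  # -> 2 (pets -> big cats -> farm)
```

In the actual study the clustering was inferred from semantic-similarity models rather than a fixed dictionary, but the transition count being correlated with age and performance is the same idea.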

In other words, it appears that there is a qualitative change with age, that increasing age is correlated with increased switching, and reduced cognitive control is behind this — although it doesn’t explain it all (perhaps because we’re still not able to fully measure cognitive control).

At a practical level, the message is that memory search may become less efficient because, as people age, they tend to change categories too frequently, before they have exhausted their full potential. While this may well be a consequence of reduced cognitive control, it seems likely (to me at least) that making a deliberate effort to fight the tendency to move on too quickly will pay dividends for older adults who want to improve their memory retrieval abilities.

Nor is this restricted to older adults — since age appears to be primarily affecting performance through its effects on cognitive control, it is likely that this applies to those with reduced working memory capacity, of any age.

Reference: 

[3378] Hills, T. T., Mata, R., Wilke, A., & Samanez-Larkin, G. R. (2013). Mechanisms of age-related decline in memory search across the adult life span. Developmental Psychology. Advance online publication.


Learning Facebook may keep seniors sharp

Preliminary findings from a small study show that older adults (68-91), after learning to use Facebook, performed about 25% better on tasks designed to measure their ability to continuously monitor and to quickly add or delete the contents of their working memory.

March, 2013


How urban living affects attention

February, 2013

A comparison of traditional African villagers and those who have moved to town indicates that urban living improves working memory capacity even as it makes us more vulnerable to distraction.

Another study looking into the urban-nature effect takes a different tack from those I’ve previously reported on, which looked at the attention-refreshing benefits of natural environments.

In this study, a rural African people living in a traditional village were compared with those who had moved to town. Participants in the first experiment included 35 adult traditional Himba, 38 adolescent traditional Himba (mean age 12), 56 adult urbanized Himba, and 37 adolescent urbanized Himba. All traditional Himba had had little contact with the Western world and only spoke their native language; all adult urbanized Himba had grown up in traditional villages and only moved to town later in life (average length of time in town was 6 years); all adolescent urbanized Himba had grown up in the town and usually attended school regularly.

The first experiments assessed the ability to ignore peripheral distracting arrows while focusing on the right or left direction of a central arrow.

There was a significant effect of urbanization, with attention being more focused (less distracted) among the traditional Himba. Traditional Himba were also slower than urbanized Himba — but note that there was substantial overlap in response times between the two groups. There was no significant effect of age (that is, adolescents were faster than adults in their responses, but the effect of the distractors was the same across age groups), nor was there a significant interaction between age and urbanization.

The really noteworthy part of this was that the urbanization effect on task performance was the same for the adults who had moved to town only a few years earlier as it was for the adolescents who had grown up and been educated in the town. In other words, this does not appear to be an educational effect.

The second experiment looked at whether traditional Himba would perform more like urbanized Himba if there were other demands on working memory. This was done by requiring them to remember three numbers (the number words in participants’ language are around twice as long as the same numbers in English, hence their digit span is shorter).

While traditional Himba were again more focused than the urbanized in the no-load condition, when there was this extra load on working memory, there was no significant difference between the two groups. Indeed, attention was de-focused in the traditional Himba under high load to the same degree as it was for urbanized Himba under no-load conditions. Note that increasing the cognitive load made no difference for the urbanized group.

There was also a significant (though not dramatic) difference between the traditional and urbanized Himba in terms of performance on the working memory task, with traditional Himba remembering an average of 2.46/3 digits and urbanized Himba 2.64.

Experiment 3 tested the two groups on a working memory task, a standard digit span test (although, of course, in their native language). Random sequences of 2-5 digits were read out, with the participant being required to say them aloud immediately after. Once again, the urbanized Himba performed better than the traditional Himba (4.32 vs 3.05).

In other words, the problem does not seem to be that urbanization depletes working memory, rather, that urbanization encourages disengagement (i.e., we have the capacity, we just don’t use it).

In the fourth experiment, this idea was tested more directly. Rather than the arrows used in the earlier experiments, black and white faces were used, with participants required to determine the color of the central face. Additionally, inverted faces were sometimes used (faces are stimuli we pay a lot of attention to, but inverting them reduces their ‘faceness’, thus making them less interesting).

An additional group of Londoners was also included in this experiment.

While urbanized Himba and Londoners were, again, more de-focused than traditional Himba when the faces were inverted, for the ‘normal’ faces, all three groups were equally focused.

Note that the traditional Himba were not affected by the changes in the faces, being equally focused regardless of the stimulus. It was the urbanized groups that became more alert when the stimuli became more interesting.

Because it may have been a race-discrimination mechanism coming into play, the final experiment returned to the direction judgment, with faces either facing left or right. This time the usual results occurred – the urbanized groups were more de-focused than the traditional group.

In other words, just having faces was not enough; it was indeed the racial discrimination that engaged the urbanized participants (note that both these urban groups come from societies where racial judgments are very salient – multicultural London, and post-apartheid Namibia).

All of this indicates that the attention difficulties that appear so common nowadays are less because our complex environments are ‘sapping’ our attentional capacities, and more because we are in a different attentional ‘mode’. It makes sense that in environments that contain so many more competing stimuli, we should employ a different pattern of engagement, keeping a wider, more spread, awareness on the environment, and only truly focusing when something triggers our interest.

Reference: 

[3273] Linnell, K. J., Caparos, S., de Fockert, J. W., & Davidoff, J. (2013). Urbanization decreases attentional engagement. Journal of Experimental Psychology: Human Perception and Performance.


Evidence that IQ is rooted in two main brain networks

January, 2013

A very large online study helps decide between the idea of intelligence as a single factor (‘g’) versus having multiple domains.

An online study open to anyone, which ended up involving over 100,000 people of all ages from around the world, put participants through 12 cognitive tests, as well as questioning them about their background and lifestyle habits. This, together with a small brain-scan data set, provided an immense body of data with which to investigate a long-running issue: is there such a thing as ‘g’ — that is, is intelligence accounted for by a single general factor, supported by a single brain network — or are there multiple systems involved?

Brain scans of 16 healthy young adults who underwent the 12 cognitive tests revealed two main brain networks, with all the tasks that needed to be actively maintained in working memory (e.g., Spatial Working Memory, Digit Span, Visuospatial Working Memory) loading heavily on one, and tasks in which information had to be transformed according to logical rules (e.g., Deductive Reasoning, Grammatical Reasoning, Spatial Rotation, Color-Word Remapping) loading heavily on the other.

The first of these networks involved the insula/frontal operculum, the superior frontal sulcus, and the ventral part of the anterior cingulate cortex/pre-supplementary motor area. The second involved the inferior frontal sulcus, inferior parietal lobule, and the dorsal part of the ACC/pre-SMA.

Just a reminder of individual differences, however — when analyzed by individual, this pattern was observed in 13 of the 16 participants (who are not a very heterogeneous bunch — I strongly suspect they are college students).

Still, it seems reasonable to conclude, as the researchers do, that at least two functional networks are involved in ‘intelligence’, with all 12 cognitive tasks using both networks but to highly variable extents.

Behavioral data from some 60,000 participants in the internet study who completed all tasks and questionnaires revealed that there was no positive correlation between performance on the working memory tasks and the reasoning tasks. In other words, these two factors are largely independent.

Analysis of this data revealed three, rather than two, broad components to overall cognitive performance: working memory; reasoning; and verbal processing. Re-analysis of the imaging data in search of the substrate underlying this verbal component revealed that the left inferior frontal gyrus and temporal lobes were significantly more active on tasks that loaded on the verbal component.

These three components could also be distinguished when looking at other factors. For example, while age was the most significant predictor of cognitive performance, its effect on the verbal component was much later and milder than it was for the other two components. Level of education was more important for the verbal component than the other two, while the playing of computer games had an effect on working memory and reasoning but not verbal. Chronic anxiety affected working memory but not reasoning or verbal. Smoking affected working memory more than the others. Unsurprisingly, geographical location affected verbal more than the other two components.

A further test, involving 35 healthy young adults, compared performance on the 12 tasks and score on the Cattell Culture Fair test (a classic pen and paper IQ test). The working memory component correlated most with the Cattell score, followed by the reasoning component, with the Verbal component (unsurprisingly, given that this is designed to be a ‘culture-fair’ test) showing the smallest correlation.

All of this is to say that this is strong evidence that what is generally considered ‘intelligence’ is based on the functioning of multiple brain networks rather than a single ‘g’, and that these networks are largely independent. Thus, the need to focus on and maintain task-relevant information maps onto one particular brain network, and is one strand. Another network specializes in transforming information, regardless of source or type. These, it would seem, are the main processes involved in fluid intelligence, while the verbal component most likely reflects crystallized intelligence. There are also likely to be other networks which are not perhaps typically included in ‘general intelligence’, but are nevertheless critical for task performance (the researchers suggest the ability to adapt plans based on outcomes might be one such function).

The obvious corollary of all this is that similar IQ scores can reflect different abilities for these strands — e.g., even if your working memory capacity is not brilliant, you can develop your reasoning and verbal abilities. All this is consistent with the growing evidence that, although fundamental WMC might be fixed (and I use the word ‘fundamental’ deliberately, because WMC can be measured in a number of different ways, and I do think you can, at the least, effectively increase your WMC), intelligence (because some of its components are trainable) is not.

If you want to participate in this research, a new version of the tests is available at http://www.cambridgebrainsciences.com/theIQchallenge

Reference: 

[3214] Hampshire, A., Highfield, R. R., Parkin, B. L., & Owen, A. M. (2012). Fractionating human intelligence. Neuron, 76(6), 1225–1237.


Even tiny interruptions can double or treble work errors

January, 2013

A new study quantifies the degree to which tasks that involve actions in a precise sequence are vulnerable to interruptions.

In my book on remembering intentions, I spoke of how quickly and easily your thoughts can be derailed, leading to ‘action slips’ and, in the wrong circumstances, catastrophic mistakes. A new study shows how a 3-second interruption while doing a task doubled the rate of sequence errors, while a 4-second one tripled it.

The study involved 300 people, who were asked to perform a series of ordered steps on the computer. The steps had to be performed in a specific sequence, mnemonically encapsulated by UNRAVEL, with each letter identifying the step. The task rules for each step differed, requiring the participant to mentally shift gears each time. Moreover, a single element could play multiple roles — for example, the letter U could signal the step, be one of two possible responses for that step, or be a stimulus requiring a specific response when the step was N. Each step required the participant to choose between two possible responses based on one stimulus feature — features included whether it was a letter or a digit, whether it was underlined or italic, whether it was red or yellow, and whether the character outside the outline box was above or below it. There were also more cognitive features, such as whether the letter was near the beginning of the alphabet or not. The identifying mnemonic for the step was linked to the possible responses (e.g., N step — near or far; U step — underline or italic).
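Placekeeping in a cyclic sequence like UNRAVEL can be sketched minimally. The looping and error check below are my own illustration of the idea, not the study's code:

```python
STEPS = "UNRAVEL"  # each letter identifies one step; the sequence repeats

def expected_next(current: str) -> str:
    """The step that should follow `current`; after L the sequence wraps to U."""
    i = STEPS.index(current)
    return STEPS[(i + 1) % len(STEPS)]

def is_sequence_error(current: str, performed: str) -> bool:
    """A sequence error means performing any step other than the correct next one."""
    return performed != expected_next(current)

print(expected_next("L"))           # -> 'U' (the sequence wraps around)
print(is_sequence_error("U", "R"))  # -> True (the N step was skipped)
```

An interruption derails exactly this kind of internal pointer: on resuming, people know the rules for every step but may apply them at the wrong point in the cycle.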

At various points, participants were very briefly interrupted. In the first experiment, they were asked to type four characters (letters or digits); in the second experiment, they were asked to type only two (a very brief interruption indeed!).

All of this was designed to set up a situation emulating “train of thought” operations, where correct performance depends on remembering where you are in the sequence, and to produce a situation where performance would have a reasonably high proportion of errors — one of the problems with this type of research has been the use of routine tasks that are generally performed with a high degree of accuracy, thus generating only small amounts of error data for analysis.

In both experiments, interruptions significantly increased the rate of sequence errors on the first trial after the interruption (but not on subsequent ones). Nonsequence errors were not affected. In the first experiment (four-character interruption), the sequence error rate on the first trial after the interruption was 5.8%, compared to 1.8% on subsequent trials. In the second experiment (two-character interruption), it was 4.3%.

The four-character interruptions lasted an average of 4.36s, and the two-character interruptions lasted an average of 2.76s.

Whether the characters being typed were letters or digits made no difference, suggesting that the disruptive effects of interruptions are not overly sensitive to what’s being processed during the interruption (although of course these are not wildly different processes!).

The absence of effect on nonsequence errors shows that interruptions aren’t disrupting global attentional resources, but more specifically the placekeeping task.

As I discussed in my book, the step also made a significant difference — for sequence errors, middle steps showed higher error rates than end steps.

All of this confirms and quantifies how little it takes to derail us, and reminds us that, when engaged in tasks involving the precise sequence of sub-tasks (which so many tasks do), we need to be alert to the dangers of interruptions. This is, of course, particularly true for those working in life-critical areas, such as medicine.

Reference: 

[3207] Altmann, E. M., Gregory, J., & Hambrick, D. Z. (2013). Momentary interruptions can derail the train of thought. Journal of Experimental Psychology: General. Advance online publication.


Menopause forgetfulness greatest early in postmenopause

January, 2013

A smallish study suggests that the cognitive effects of menopause are greatest in the first year after menopause.

Being a woman of a certain age, I generally take notice of research into the effects of menopause on cognition. A new study adds weight, perhaps, to the idea that cognitive complaints in perimenopause and menopause are not directly a consequence of hormonal changes, and more particularly, shows that early postmenopause may be the most problematic time.

The study followed 117 women from four stages of life: late reproductive, early and late menopausal transition, and early postmenopause. The late reproductive period is defined as when women first begin to notice subtle changes in their menstrual periods, but still have regular menstrual cycles. Women in the transitional stage (which can last for several years) experience fluctuation in menstrual cycles, and hormone levels begin to fluctuate significantly.

Women in the early stage of postmenopause (the first year after menopause) were found, as a group, to perform more poorly on measures of verbal learning, verbal memory, and fine motor skill than women in the late reproductive and late transition stages. They also performed significantly worse than women in the late menopausal transition stage on attention/working memory tasks.

Surprisingly, self-reported symptoms such as sleep difficulties, depression, and anxiety did not predict memory problems. Neither were the problems correlated with hormone levels (although fluctuations could be a factor).

This seemingly contradicts earlier findings from the same researchers, who in a slightly smaller study found that those experiencing poorer working memory and attention were more likely to have poorer sleep, depression, and anxiety. That study, however, only involved women approaching and in menopause. Moreover, these aspects were not included in the abstract of the paper but only in the press release, and because I don’t have access to this particular journal, I cannot say whether there is something in the data that explains this. Because of this, I am not inclined to put too much weight on this point.

But we may perhaps take the findings as support for the view that cognitive problems experienced earlier in the menopause cycle are, when they occur, not a direct result of hormonal changes.

The important result of this study is the finding that the cognitive problems often experienced by women in their 40s and 50s are most acute during the early period of post menopause, and the indication that the causes and manifestations are different at different stages of menopause.

It should be noted, however, that there were only 14 women in the early postmenopause stage. So, we shouldn’t put too much weight on any of this. Nevertheless, it does add to the picture research is building up about the effects of menopause on women’s cognition.

While the researchers said that this effect is probably temporary — which was picked up as the headline in most media — this was not in fact investigated in this study. It would be nice to have some comparison with those, say, two or three and five years post menopause (but quite possibly this will be reported in a later paper).

Reference: 

[3237] Weber, M. T., Rubin L. H., & Maki P. M.
(2013).  Cognition in perimenopause.
Menopause: The Journal of The North American Menopause Society.


Worry & fatigue main reason for ‘chemo-brain’?

January, 2013

A new study points to pre-treatment reasons for declined cognitive function following chemotherapy, and suggests that anxiety may be the main driver.

The issue of ‘chemo-brain’ — cognitive impairment following chemotherapy — has been a controversial one. While it is now (I hope) accepted by most that it is, indeed, a real issue, there is still an ongoing debate over whether the main cause is really the chemotherapy. A new study adds to the debate.

The study involved 28 women who received adjuvant chemotherapy for breast cancer, 37 who received radiotherapy, and 32 age-matched healthy controls. Brain scans while doing a verbal working memory task were taken before treatment and one month after treatment.

Women who underwent chemotherapy performed less accurately on the working memory task both before treatment and one month after treatment. They also reported a significantly higher level of fatigue. Greater fatigue correlated with poorer test performance and more cognitive problems, across both patient groups and at both times (although the correlation was stronger after treatment).

Both patient groups showed reduced function in the left inferior frontal gyrus before therapy, but those awaiting chemotherapy showed greater impairment than those in the radiotherapy group. Pre-treatment difficulty in recruiting this brain region in high-demand situations was associated with greater fatigue after treatment.

In other words, reduced working memory function before treatment began predicted how tired people felt after treatment, and how much their cognitive performance suffered. All of which suggests it is not the treatment itself that is the main problem.

But the fact that reduced working memory function precedes the fatigue indicates it’s not the fatigue that’s the main problem either. The researchers suggest that the main driver is level of worry: worry interfered with the task, and level of worry was related to fatigue. And worry, as we know, can reduce working memory capacity (because it uses up part of it).

All of which is to say that support for cancer patients aimed at combating stress and anxiety might do more for ‘chemo-brain’ than anything else. In this context, I note also that sleep problems have been linked to chemo-brain — a not unrelated issue!

Reference: 

Cimprich, B., et al. (2012). Neurocognitive impact in adjuvant chemotherapy for breast cancer linked to fatigue: A prospective functional MRI study. Presented at the 2012 CTRC-AACR San Antonio Breast Cancer Symposium, Dec. 4-8.

