Individual differences

Latest Research News

A small pilot study, in which participants had brain scans and working memory tests before and after single sessions of light- and moderate-intensity exercise, and again after a 12-week training program, has shown that the immediate cognitive effects of exercise mirror the long-term ones. Participants who saw the biggest improvements in cognition and functional brain connectivity after single sessions of moderate-intensity physical activity also showed the biggest long-term gains in cognition and connectivity.

The finding suggests that the brain changes observed after a single workout can serve as a biomarker of sorts for long-term training.

https://www.eurekalert.org/pub_releases/2019-03/cns-eau032219.php

The findings were presented by Michelle Voss at the Cognitive Neuroscience Society (CNS) in San Francisco, March 23-26, 2019.

Americans with a college education live longer without dementia and Alzheimer's

Data from the large, long-running U.S. Health and Retirement Study found that healthy cognition characterized most people with at least a college education into their late 80s, while those who didn’t complete high school had good cognition only up until their 70s.

The study found that those who had at least a college education lived a much shorter time with dementia than those with less than a high school education: an average of 10 months for men and 19 months for women, compared to 2.57 years (men) and 4.12 years (women).

The data suggests that those who graduated high school can expect to live (on average) at least 70% of their remaining life after 65 with good cognition, compared to more than 80% for those with a college education, and less than 50% for those who didn't finish high school.

The analysis was based on a sample of 10,374 older adults (65+; average age 74) in 2000 and 9,995 in 2010.

https://www.eurekalert.org/pub_releases/2018-04/uosc-awa041618.php

https://academic.oup.com/psychsocgerontology/article/73/suppl_1/S20/4971564 (open access)

More education linked to better cognitive functioning later in life

Data from around 196,000 subscribers to Lumosity online brain-training games found that higher levels of education strongly predicted better cognitive performance across the study's 15- to 60-year-old age range, and appeared to boost performance more in areas such as reasoning than in processing speed.

Differences in performance were small for test subjects with a bachelor's degree compared to those with a high school diploma, and moderate for those with doctorates compared to those with only some high school education.

But people from lower educational backgrounds learned novel tasks nearly as well as those from higher ones.

https://www.eurekalert.org/pub_releases/2017-08/l-mel082117.php

http://www.futurity.org/higher-education-cognitive-peak-1523712/

Youthful cognitive ability strongly predicts mental capacity later in life

Data from more than 1,000 men participating in the Vietnam Era Twin Study of Aging revealed that their cognitive ability at age 20 was a stronger predictor of cognitive function later in life than other factors, such as higher education, occupational complexity or engaging in late-life intellectual activities.

All of the men, now in their mid-50s to mid-60s, took the Armed Forces Qualification Test at an average age of 20. The same test of general cognitive ability (GCA) was given in late midlife, plus assessments in seven cognitive domains.

GCA at age 20 accounted for 40% of the variance in the same measure at age 62, and approximately 10% of the variance in each of the seven cognitive domains. Lifetime education, complexity of job and engagement in intellectual activities each accounted for less than 1% of variance at average age 62.

The findings suggest that the impact of education, occupational complexity and engagement in cognitive activities on later life cognitive function simply reflects earlier cognitive ability.

The researchers speculated that the role of education in increasing GCA takes place primarily during childhood and adolescence when there is still substantial brain development.

https://www.eurekalert.org/pub_releases/2019-01/uoc--yca011819.php

Crimmins, E. M., Saito, Y., Kim, J. K., Zhang, Y. S., Sasson, I., & Hayward, M. D. (2018). Educational Differences in the Prevalence of Dementia and Life Expectancy with Dementia: Changes from 2000 to 2010. The Journals of Gerontology: Series B, 73(suppl_1), S20-S28.

Guerra-Carrillo, B., Katovich, K., & Bunge, S. A. (2017). Does higher education hone cognitive functioning and learning efficacy? Findings from a large and diverse sample. PLOS ONE, 12(8), e0182276. https://doi.org/10.1371/journal.pone.0182276

Kremen, W. S., Beck, A., Elman, J. A., Gustavson, D. E., Reynolds, C. A., Tu, X. M., et al. (2019). Influence of young adult cognitive ability and additional education on later-life cognition. Proceedings of the National Academy of Sciences, 116(6), 2021.

A small study fitted 29 young adults (18-31) and 31 older adults (55-82) with a device that recorded the steps they took and the vigor and speed with which they were made. Older adults with a higher step rate performed better on memory tasks than those who were more sedentary; no such effect was seen among the younger adults.

Improved memory was found for both visual and episodic memory, and was strongest with the episodic memory task. This required recalling which name went with a person's face — an everyday task that older adults often have difficulty with.

However, the effect on visual memory had more to do with time spent sedentary than step rate. With the face-name task, both time spent sedentary and step rate were significant factors, and both factors had a greater effect than they had on visual memory.

Depression and hypertension were both adjusted for in the analysis.

There was no significant difference in executive function related to physical activity, although previous studies have found an effect. Less surprisingly, there was also no significant effect on verbal memory.

Both findings might be explained in terms of cognitive demand. The evidence suggests that the effect of physical exercise is only seen when the task is sufficiently cognitively demanding. No surprise that verbal memory (which tends to be much less affected by age) didn't meet that challenge, but interestingly, the older adults in this study were also less impaired on executive function than on visual memory. This is unusual, and reminds us that, especially with small studies, you cannot ignore the individual differences.

This general principle may also account for the lack of effect among younger adults. It is interesting to speculate whether physical activity effects would be found if the younger adults were given much more challenging tasks (either by increasing their difficulty, or selecting a group who were less capable).

Step rate was calculated as total steps taken divided by the total minutes spent in light, moderate, and vigorous activities, on the basis that this provides an indicator of physical activity intensity (how briskly one is walking) independent of total activity volume. Sedentary time was the total minutes spent sedentary.
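
For the technically minded, the two metrics are simple enough to sketch in a few lines of Python. This is purely illustrative — the data format and names are mine, not the researchers':

```python
def activity_metrics(minutes):
    """Compute step rate and sedentary time from minute-level activity data.

    `minutes` is a list of (steps, intensity) tuples, one per recorded
    minute, where intensity is "sedentary", "light", "moderate", or
    "vigorous". The format is an assumed illustration, not the study's.
    """
    total_steps = sum(steps for steps, _ in minutes)
    active_minutes = sum(1 for _, intensity in minutes
                         if intensity in ("light", "moderate", "vigorous"))
    # Step rate: total steps / total active minutes — an index of intensity
    # (how briskly one walks), independent of total activity volume.
    step_rate = total_steps / active_minutes if active_minutes else 0.0
    # Sedentary time: simply the count of sedentary minutes.
    sedentary_time = sum(1 for _, intensity in minutes
                         if intensity == "sedentary")
    return step_rate, sedentary_time
```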

http://www.eurekalert.org/pub_releases/2015-11/bumc-slp112415.php

Hayes, S. M., Alosco, M. L., Hayes, J. P., Cadden, M., Peterson, K. M., Allsup, K., et al. (2015). Physical Activity Is Positively Associated with Episodic Memory in Aging. Journal of the International Neuropsychological Society, 21(Special Issue 10), 780-790.

A study involving 66 healthy young adults (average age 24) has revealed that different individuals have distinct brain connectivity patterns that are associated with different ways of experiencing and remembering the past.

The participants completed an online questionnaire on how well they remember autobiographical events and facts, then had their brains scanned. Those with richly detailed autobiographical memories showed higher medial temporal lobe connectivity to regions at the back of the brain involved in visual perception, whereas those who tended to recall the past in a factual manner showed higher medial temporal lobe connectivity to prefrontal regions involved in organization and reasoning.

The finding supports the idea that those with superior autobiographical memory have a greater ability or tendency to reinstate rich images and perceptual details, and that this appears to be a stable personality trait.

The finding also raises interesting questions about age-related cognitive decline. Many people first recognize cognitive decline in their increasing difficulty retrieving the details of events. But this may be something that is far more obvious and significant to people who are used to retrieving richly-detailed memories. Those who rely on a factual approach may be less susceptible.

http://www.eurekalert.org/pub_releases/2015-12/bcfg-wiy121015.php

Full text available at http://www.sciencedirect.com/science/article/pii/S0010945215003834

There's been a lot of talk in recent years about the importance of mindset in learning, with those who have a “growth mindset” (i.e., who believe that intelligence can be developed) being more academically successful than those who believe that intelligence is a fixed attribute. A new study shows that a 45-minute online intervention can help struggling high school students.

The study involved 1,594 students in 13 U.S. high schools. They were randomly allocated to one of three intervention groups or the control group. The intervention groups either experienced an online program designed to develop a growth mindset, or an online program designed to foster a sense of purpose, or both programs (2 weeks apart). All interventions were expected to improve academic performance, especially in struggling students.

The interventions had no significant benefits for students who were doing okay, but were of significant benefit for those who had an initial GPA of 2.0 or less, or had failed at least one core subject (this group contained 519 students, a third of the total participants). For this group, each of the interventions was of similar benefit; interestingly, the combined intervention was less beneficial than either single intervention. The researchers plausibly suggest that this is because the two messages weren't integrated, leaving students with some trouble taking on board two separate messages.

Overall, for this group of students, semester grade point averages improved in core academic courses and the rate at which students performed satisfactorily in core courses increased by 6.4%.

GPA average in core subjects (math, English, science, social studies) was calculated at the end of the semester before the interventions, and at the end of the semester after the interventions. Brief questions before and after the interventions assessed the students' beliefs about intelligence, and their sense of meaningfulness about schoolwork.

GPA before intervention was positively associated with a growth mindset and a sense of purpose, explaining why the interventions had no effect on better students. Only the growth mindset intervention led to a more malleable view of intelligence; only the sense-of-purpose intervention led to a change in perception in the value of mundane academic tasks. Note that the combined intervention showed no such effects, suggesting that it had confused rather than enlightened!

In the growth mindset intervention, students read an article describing the brain’s ability to grow and reorganize itself as a consequence of hard work and good strategies. The message that difficulties don't indicate limited ability but rather provide learning opportunities, was reinforced in two writing exercises. The control group read similar materials, but with a focus on functional localization in the brain rather than its malleability.

In the sense-of-purpose interventions, students were asked to write about how they wished the world could be a better place. They read about the reasons why some students worked hard, such as “to make their families proud”; “to be a good example”; “to make a positive impact on the world”. They were then asked to think about their own goals and how school could help them achieve those objectives. The control group completed one of two modules that didn't differ in impact. In one, students described how their lives were different in high school compared to before. The other was much more similar to the intervention, except that the emphasis was on economic self-interest rather than social contribution.

The findings are interesting in showing that you can help poor learners with a simple intervention, but perhaps even more, for their indication that such interventions are best done in a more holistic and contextual way. A more integrated message would hopefully have been more effective, and surely ongoing reinforcement in the classroom would make an even bigger difference.

http://www.futurity.org/high-school-growth-mindset-910082/

Because the idea that birth order shapes intelligence and personality is such a persistent myth, I thought I should briefly report on this massive study that should hopefully put an end to it once and for all (I wish! Myths are not so easily squashed.)

This study used data from 377,000 U.S. high school students, and, agreeing with a previous large study, found that first-borns have a one IQ point advantage over later-born siblings, but while statistically significant, this is a difference of no practical significance.

The analysis also found that first-borns tended to be more extroverted, agreeable and conscientious, and had less anxiety than later-borns — but those differences were “infinitesimally small”, amounting to a correlation of 0.02 (the correlation between birth order and intelligence was 0.04).
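
To put those numbers in perspective: the variance two variables share is the square of their correlation, so a quick calculation shows why “infinitesimally small” is fair comment:

```latex
\text{shared variance} = r^{2}:\qquad
r = 0.02 \;\Rightarrow\; r^{2} = 0.0004 \;(0.04\%),\qquad
r = 0.04 \;\Rightarrow\; r^{2} = 0.0016 \;(0.16\%)
```

That is, birth order accounts for less than a fifth of one percent of the variation in intelligence.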

The study controlled for potentially confounding factors, such as a family's economic status, number of children and the relative age of the siblings at the time of the analysis.

A separate analysis of children with exactly two siblings and living with two parents found that there are indeed specific differences between the oldest and a second child, and between second and third children. But the magnitude of the differences was again “minuscule”.

Perhaps it's not fair to say the myth is trounced. Rather, we can say that, yeah, sure, birth order makes a difference — but the difference is so small as not to be meaningful on an individual level.

http://www.eurekalert.org/pub_releases/2015-07/uoia-msb071615.php

In 2013 I reported briefly on a pilot study showing that “super-agers” — those over 80 years old who have the brains and cognitive powers more typical of people decades younger — had an unusually large anterior cingulate cortex, with four times as many von Economo neurons.

The ACC is critical for cognitive control, executive function, and motivation. Von Economo neurons have been linked to social intelligence, being found (as yet) only in humans, great apes, whales and dolphins, with a reduction being found in frontotemporal dementia and autism.

A follow-up to that study has now been reported, confirming the larger ACC and the greater number of von Economo neurons.

The study involved 31 super-agers, 21 more typical older adults, and 18 middle-aged adults (aged 50-60). Imaging revealed that a region of the ACC in the right hemisphere was not only significantly thicker in the super-agers than in the 'normal' older adults, but also larger than in the middle-aged adults. Post-mortem analysis of 5 of the super-agers found that their ACC had 87% fewer tau tangles (one of the hallmarks of Alzheimer's) than 5 'normal' age-matched controls, and 92% fewer than 5 individuals with MCI. The density of von Economo neurons was also significantly higher.

Whether super-agers are born or made is still unknown (I'm picking a bit of both), but it's intriguing to note my recent report that people who frequently use several media devices at the same time had lower grey matter density in the anterior cingulate cortex than those who use just one device occasionally.

I'd be interested to know the occupational and life-history of these super-agers. Did they lead lives in which they nurtured their powers of prolonged concentration? Or perhaps they belong to that other select group: the one-in-forty who can truly multitask.

http://www.eurekalert.org/pub_releases/2015-02/nu-sby020315.php

http://www.futurity.org/brains-superagers-849732/

Gefen, T., Peterson, M., Papastefan, S. T., Martersteck, A., Whitney, K., Rademaker, A., et al. (2015). Morphometric and Histologic Substrates of Cingulate Integrity in Elders with Exceptional Memory Capacity. The Journal of Neuroscience, 35(4), 1781-1791.

The first detailed characterization of the molecular structures of amyloid-beta fibrils that develop in the brains of those with Alzheimer's disease suggests that different molecular structures may distinguish patients with different clinical histories and degrees of brain damage. A comparison of amyloid-beta fibril fragments from the brain tissue of two such patients found different molecular structures, confirming cell research showing that amyloid-beta fibrils grown in a dish take on different molecular structures depending on the specific growth conditions.

Obviously, this is a very small study, and will need to be confirmed across more patients. However, it’s important for indicating that structural variations may correlate with variations in Alzheimer’s, and that structure-specific amyloid imaging agents may need to be used.

http://www.eurekalert.org/pub_releases/2013-09/cp-aps090513.php

Lu, J.-X., Qiang, W., Yau, W.-M., Schwieters, C. D., Meredith, S. C., & Tycko, R. (2013). Molecular Structure of β-Amyloid Fibrils in Alzheimer's Disease Brain Tissue. Cell, 154(6), 1257-1268.

A very large genetic study has revealed that genetic differences have little effect on educational achievement. The study involved more than 125,000 people from the U.S., Australia, and 13 western European countries.

All told, genes explained about 2% of differences in educational attainment (as measured by years of schooling and college graduation), with the genetic variants with the strongest effects each explaining only 0.02% (in comparison, the gene variant with the largest effect on human height accounts for about 0.4%).

http://www.futurity.org/society-culture/genes-have-small-effect-on-length-of-education/

Rietveld, C. A., Medland, S. E., Derringer, J., Yang, J., Esko, T., Martin, N. W., et al. (2013). GWAS of 126,559 Individuals Identifies Genetic Variants Associated with Educational Attainment. Science.

I’ve reported often on the perils of multitasking. Here is yet another one, with an intriguing new finding: it seems that the people who multitask the most are those least capable of doing so!

The study surveyed 310 undergraduate psychology students to find their actual multitasking ability, perceived multitasking ability, cell phone use while driving, use of a wide array of electronic media, and personality traits such as impulsivity and sensation-seeking.

Those who scored in the top quarter on a test of multitasking ability tended not to multitask. Some 70% of participants thought they were above average at multitasking, and perceived multitasking ability (rather than actual) was associated with multitasking. Those with high levels of impulsivity and sensation-seeking were also more likely to multitask (with the exception of using a cellphone while driving, which wasn’t related to impulsivity, though it was related to sensation seeking).

The findings suggest that those who multitask don’t do so because they are good at multitasking, but because they are poor at focusing on one task.

Using brain scans from 152 Vietnam veterans with a variety of combat-related brain injuries, researchers claim to have mapped the neural basis of general intelligence and emotional intelligence.

There was significant overlap between general intelligence and emotional intelligence, both in behavioral measures and brain activity. Higher scores on general intelligence tests and personality reliably predicted higher performance on measures of emotional intelligence, and many of the same brain regions (in the frontal and parietal cortices) were found to be important to both.

More specifically, impairments in emotional intelligence were associated with selective damage to a network containing the extrastriate body area (involved in perceiving the form of other human bodies), the left posterior superior temporal sulcus (helps interpret body movement in terms of intentions), left temporo-parietal junction (helps work out other person’s mental state), and left orbitofrontal cortex (supports emotional empathy). A number of associated major white matter tracts were also part of the network.

Two of the components of general intelligence were strong contributors to emotional intelligence: verbal comprehension/crystallized intelligence, and processing speed. Verbal impairment was unsurprisingly associated with selective damage to the language network, which showed some overlap with the network underlying emotional intelligence. Similarly, damage to the fronto-parietal network linked to deficits in processing speed also overlapped in places with the emotional intelligence network.

Only one of the ‘big five’ personality traits contributed to the prediction of emotional intelligence — conscientiousness. Impairments in conscientiousness were associated with damage to brain regions widely implicated in social information processing, of which two areas (left orbitofrontal cortex and left temporo-parietal junction) were also involved in impaired emotional intelligence, suggesting where these two attributes might be connected (ability to predict and understand another’s emotions).

It’s interesting (and consistent with the growing emphasis on connectivity rather than the more simplistic focus on specific regions) that emotional intelligence was so affected by damage to white matter tracts. The central role of the orbitofrontal cortex is also intriguing – there’s been growing evidence in recent years of the importance of this region in emotional and social processing, and it’s worth noting that it’s in the right place to integrate sensory and bodily sensation information and pass that onto decision-making systems.

All of this is to say that emotional intelligence depends on social information processing and general intelligence. Traditionally, general intelligence has been thought to be distinct from social and emotional intelligence. But humans are fundamentally social animals, and – contra the message of the Enlightenment, that we have taken so much to heart – it has become increasingly clear that emotions and reason are inextricably entwined. It is not, therefore, all that surprising that general and emotional intelligence might be interdependent. It is more surprising that conscientiousness might be rooted in your degree of social empathy.

It’s also worth noting that ‘emotional intelligence’ is not simply a trendy concept – a pop-quiz question about whether you ‘have a high EQ’ (or not) – but an ability that, if impaired, produces very real problems in everyday life.

Emotional intelligence was measured by the Mayer, Salovey, Caruso Emotional Intelligence Test (MSCEIT), general IQ by the Wechsler Adult Intelligence Scale, and personality by the Neuroticism-Extroversion-Openness Inventory.

An online study open to anyone, which ended up involving over 100,000 people of all ages from around the world, put participants through 12 cognitive tests, as well as questioning them about their background and lifestyle habits. This, together with a small brain-scan data set, provided an immense data set with which to investigate a long-running issue: is there such a thing as ‘g’ — that is, is intelligence accounted for by a single general factor, supported by just one brain network — or are there multiple systems involved?

Brain scans of 16 healthy young adults who underwent the 12 cognitive tests revealed two main brain networks, with all the tasks that needed information to be actively maintained in working memory (e.g., Spatial Working Memory, Digit Span, Visuospatial Working Memory) loading heavily on one, and tasks in which information had to be transformed according to logical rules (e.g., Deductive Reasoning, Grammatical Reasoning, Spatial Rotation, Color-Word Remapping) loading heavily on the other.

The first of these networks involved the insula/frontal operculum, the superior frontal sulcus, and the ventral part of the anterior cingulate cortex/pre-supplementary motor area. The second involved the inferior frontal sulcus, inferior parietal lobule, and the dorsal part of the ACC/pre-SMA.

Just a reminder of individual differences, however — when analyzed by individual, this pattern was observed in 13 of the 16 participants (who are not a very heterogeneous bunch — I strongly suspect they are college students).

Still, it seems reasonable to conclude, as the researchers do, that at least two functional networks are involved in ‘intelligence’, with all 12 cognitive tasks using both networks but to highly variable extents.

Behavioral data from some 60,000 participants in the internet study who completed all tasks and questionnaires revealed that there was no positive correlation between performance on the working memory tasks and the reasoning tasks. In other words, these two factors are largely independent.

Analysis of this data revealed three, rather than two, broad components to overall cognitive performance: working memory; reasoning; and verbal processing. Re-analysis of the imaging data in search of the substrate underlying this verbal component revealed that the left inferior frontal gyrus and temporal lobes were significantly more active on tasks that loaded on the verbal component.
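
For those curious about the method: extracting broad components from a large battery of task scores is standard factor-analysis territory. Here's a minimal sketch of how such an analysis might look in Python — the data are random placeholders, and the three-factor choice simply mirrors the study's result:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# One row per participant, one column per cognitive task (12 tasks,
# echoing the study). Random data stands in for real task scores.
rng = np.random.default_rng(0)
scores = rng.normal(size=(60000, 12))

# Standardize each task, then fit a three-factor model.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
fa = FactorAnalysis(n_components=3, random_state=0).fit(z)

# The loadings show how strongly each task draws on each component —
# in the study, tasks grouped into working memory, reasoning, and verbal.
print(fa.components_)  # shape: (3 components, 12 tasks)
```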

These three components could also be distinguished when looking at other factors. For example, while age was the most significant predictor of cognitive performance, its effect on the verbal component was much later and milder than it was for the other two components. Level of education was more important for the verbal component than the other two, while the playing of computer games had an effect on working memory and reasoning but not verbal. Chronic anxiety affected working memory but not reasoning or verbal. Smoking affected working memory more than the others. Unsurprisingly, geographical location affected verbal more than the other two components.

A further test, involving 35 healthy young adults, compared performance on the 12 tasks with scores on the Cattell Culture Fair test (a classic pen-and-paper IQ test). The working memory component correlated most strongly with the Cattell score, followed by the reasoning component, with the verbal component (unsurprisingly, given that this is designed to be a ‘culture-fair’ test) showing the smallest correlation.

All of this amounts to solid evidence that what is generally considered ‘intelligence’ is based on the functioning of multiple brain networks rather than a single ‘g’, and that these networks are largely independent. Thus, the need to focus on and maintain task-relevant information maps onto one particular brain network, and is one strand. Another network specializes in transforming information, regardless of source or type. These, it would seem, are the main processes involved in fluid intelligence, while the verbal component most likely reflects crystallized intelligence. There are also likely to be other networks which are not typically included in ‘general intelligence’, but are nevertheless critical for task performance (the researchers suggest the ability to adapt plans based on outcomes might be one such function).

The obvious corollary of all this is that similar IQ scores can reflect different abilities for these strands — e.g., even if your working memory capacity is not brilliant, you can develop your reasoning and verbal abilities. All this is consistent with the growing evidence that, although fundamental WMC might be fixed (and I use the word ‘fundamental’ deliberately, because WMC can be measured in a number of different ways, and I do think you can, at the least, effectively increase your WMC), intelligence (because some of its components are trainable) is not.

If you want to participate in this research, a new version of the tests is available at http://www.cambridgebrainsciences.com/theIQchallenge

Hampshire, A., Highfield, R. R., Parkin, B. L., & Owen, A. M. (2012). Fractionating Human Intelligence. Neuron, 76(6), 1225-1237.

There's quite a bit of evidence now that socializing — having frequent contact with others — helps protect against cognitive impairment in old age. We also know that depression is a risk factor for cognitive impairment and dementia. There have been hints that loneliness might also be a risk factor. But here’s the question: is it being alone, or feeling lonely, that is the danger?

A large Dutch study, following 2173 older adults for three years, suggests that it is the feeling of loneliness that is the main problem.

At the start of the study, some 46% of the participants were living alone, and some 50% were no longer or never married (presumably the discrepancy is because many older adults have a spouse in a care facility). Some 73% said they had no social support, while 20% reported feelings of loneliness.

Those who lived alone were significantly more likely to develop dementia over the three year study period (9.3% compared with 5.6% of those who lived with others). The unmarried were also significantly more likely to develop dementia (9.2% vs 5.3%).

On the other hand, among those without social support, 5.6% developed dementia compared with 11.4% with social support! This seems to contradict everything we know, not to mention the other results of the study, but the answer presumably lies in what is meant by ‘social support’. Social support was assessed by the question: Do you get help from family, neighbours or home support? It doesn’t ask the question of whether help would be there if they needed it. So this is not a question of social networks, but more one of how much you need help. This interpretation is supported by the finding that those receiving social support had more health problems.

So, although the researchers originally counted this question as part of the measure of social isolation, it is clearly a poor reflection of it. Effectively, then, that leaves cohabitation and marriage as the only indices of social isolation, which is obviously inadequate.

However, we still have the interesting question re loneliness. The study found that 13.4% of those who said they felt lonely developed dementia compared with 5.7% of those who didn’t feel this way. This is a greater difference than that found with the ‘socially isolated’ (as measured!). Moreover, once other risk factors, such as age, education, and other health factors, were accounted for, the association between living alone and dementia disappeared, while the association with feelings of loneliness remained.

Of course, this still doesn’t tell us what the association is! It may be that feelings of loneliness simply reflect cognitive changes that precede Alzheimer’s, but it may be that the feelings themselves are decreasing cognitive and social activity. It may also be that those who are prone to such feelings have personality traits that are in themselves risk factors for cognitive impairment.

I would like to see another large study using better metrics of social isolation, but, still, the study is interesting for its distinction between being alone and feeling lonely, and its suggestion that it is the subjective feeling that is more important.

This is not to say there is no value in having people around! For a start, as discussed, the measures of social isolation are clearly inadequate. Moreover, other people play an important role in helping with health issues, which in turn greatly impact cognitive decline.

Although there was a small effect of depression, the relationship between feeling lonely and dementia remained after this was accounted for, indicating that this is a separate factor (on the other hand feelings of loneliness were a risk factor for depression).

A decrease in cognitive score (MMSE) was also significantly greater for those experiencing feelings of loneliness, suggesting that this is also a factor in age-related cognitive decline.

The point is not so much that loneliness is more detrimental than being alone, but that loneliness in itself is a risk factor for cognitive decline and dementia. This suggests that we should develop a better understanding of loneliness, how to identify the vulnerable, and how to help them.

The importance of early diagnosis for autism spectrum disorder has been highlighted by a recent study demonstrating the value of an educational program for toddlers with ASD.

The study involved 48 toddlers (18-30 months) diagnosed with autism and age-matched normally developing controls. Those with ASD were randomly assigned to participate in a two-year program called the Early Start Denver Model, or a standard community program.

The ESDM program involved two-hour sessions by trained therapists twice a day, five days every week. Parent training also enabled ESDM strategies to be used during daily activities. The program emphasizes interpersonal exchange, social attention, and shared engagement. It also includes training in face recognition, using individualized booklets of color photos of the faces of four familiar people.

The community program involved evaluation and advice, annual follow-up sessions, programs at Birth-to-Three centers and individual speech-language therapy, occupational therapy, and/or applied behavior analysis treatments.

All of those in the ESDM program were still participating at the end of the two years, compared to 88% of the community program participants.

At the end of the program, children were assessed on various cognitive and behavioral measures, as well as brain activity.

Compared with children who participated in the community program, children who received ESDM showed significant improvements in IQ, language, adaptive behavior, and autism diagnosis. Average verbal IQ for the ESDM group was 95 compared to an average 75 for the community group, and 93 vs 80 for nonverbal IQ. These are dramatically large differences, although it must be noted that individual variability was high.

Moreover, for the ESDM group, brain activity in response to faces was similar to that of normally-developing children, while the community group showed the pattern typical of autism (greater activity in response to objects compared to faces). This was associated with improvements in social behavior.

Again, there were significant individual differences. Specifically, 73% of the ESDM group, 53% of the control group, and 29% of the community group, showed a pattern of faster response to faces. (Bear in mind, re the control group, that these children are all still quite young.) It should also be borne in mind that it was difficult to get usable EEG data from many of the children with ASD — these results come from only 60% of the children with ASD.

Nevertheless, the findings are encouraging for parents looking to help their children.

It should also be noted that, although obviously earlier is better, the findings don’t rule out benefits for older children or even adults. Relatively brief targeted training in face recognition has been shown to affect brain activity patterns in adults with ASD.

Dawson, G., Jones, E. J. H., Merkle, K., Venema, K., Lowy, R., Faja, S., et al. (2012). Early Behavioral Intervention Is Associated With Normalized Brain Activity in Young Children With Autism. Journal of the American Academy of Child & Adolescent Psychiatry, 51(11), 1150-1159.

A small Swedish brain imaging study adds to the evidence for the cognitive benefits of learning a new language by investigating the brain changes in students undergoing a highly intensive language course.

The study involved an unusual group: conscripts in the Swedish Armed Forces Interpreter Academy. These young people, selected for their talent for languages, undergo an intensive course to allow them to learn a completely novel language (Egyptian Arabic, Russian or Dari) fluently within ten months. This requires them to acquire new vocabulary at a rate of 300-500 words every week.

Brain scans were taken of 14 right-handed volunteers from this group (6 women, 8 men), and of 17 controls matched for age, years of education, intelligence, and emotional stability. The controls were medical and cognitive science students. The scans were taken before the start of the course/semester, and three months later.

The brain scans revealed that the language students showed significantly greater changes in several specific regions. These regions included three areas in the left hemisphere: the dorsal middle frontal gyrus, the inferior frontal gyrus, and the superior temporal gyrus. These regions all grew significantly. There was also some, more selective and smaller, growth in the middle frontal gyrus and inferior frontal gyrus in the right hemisphere. The hippocampus also grew significantly more for the interpreters compared to the controls, and this effect was greater in the right hippocampus.

Among the interpreters, language proficiency was related to increases in the right hippocampus and left superior temporal gyrus. Increases in the left middle frontal gyrus were related to teacher ratings of effort — those who put in the greatest effort (regardless of result) showed the greatest increase in this area.

In other words, both learning, and the effort put into learning, had different effects on brain development.

The main point, however, is that language learning in particular is having this effect. Bear in mind that the medical and cognitive science students are also presumably putting in similar levels of effort into their studies, and yet no such significant brain growth was observed.

Of course, there is no denying that the level of intensity with which the interpreters are acquiring a new language is extremely unusual, and it cannot be ruled out that it is this intensity, rather than the particular subject matter, that is crucial for this brain growth.

Neither can it be ruled out that the differences between the groups are rooted in the individuals selected for the interpreter group. The young people chosen for the intensive training at the interpreter academy were chosen on the basis of their talent for languages. Although brain scans showed no differences between the groups at baseline, we cannot rule out the possibility that such intensive training only benefited them because they possessed this potential for growth.

A final caveat is that the soldiers all underwent basic military training before beginning the course — three months of intense physical exercise. Physical exercise is, of course, usually very beneficial for the brain.

Nevertheless, we must give due weight to the fact that the brain scans of the two groups were comparable at baseline, and the changes discussed occurred specifically during this three-month learning period. Moreover, there is growing evidence that learning a new language is indeed ‘special’, if only because it involves such a complex network of processes and brain regions.

Given that people vary in their ‘talent’ for foreign language learning, and that learning a new language does tend to become harder as we get older, it is worth noting the link between growth of the hippocampus and superior temporal gyrus and language proficiency. The STG is involved in acoustic-phonetic processes, while the hippocampus is presumably vital for the encoding of new words into long-term memory.

Interestingly, previous research with children has suggested that the ability to learn new words is greatly affected by working memory span — specifically, by how much information they can hold in that part of working memory called phonological short-term memory. While this is less important for adults learning another language, it remains important for one particular category of new words: words that have no ready association to known words. Given the languages being studied by these Swedish interpreters, it seems likely that much if not all of their new vocabulary would fall into this category.

I wonder if the link with STG is more significant in this study, because the languages are so different from the students’ native language? I also wonder if, and to what extent, you might be able to improve your phonological short-term memory with this sort of intensive practice.

In this regard, it’s worth noting that a previous study found that language proficiency correlated with growth in the left inferior frontal gyrus in a group of English-speaking exchange students learning German in Switzerland. Is this difference because the training was less intensive? because the students had prior knowledge of German? because German and English are closely related in vocabulary? (I’m picking the last.)

The researchers point out that hippocampal plasticity might also be a critical factor in determining an individual’s facility for learning a new language. Such plasticity does, of course, tend to erode with age — but this can be largely counteracted if you keep your hippocampus limber (as it were).

All these are interesting speculations, but the main point is clear: the findings add to the growing evidence that bilingualism and foreign language learning have particular benefits for the brain, and for protecting against cognitive decline.

What underlies differences in fluid intelligence? How are smart brains different from those that are merely ‘average’?

Brain imaging studies have pointed to several aspects. One is brain size. Although the history of simplistic comparisons of brain size has been turbulent (you cannot, for example, directly compare brain size without taking into account the size of the body it’s part of), nevertheless, overall brain size does count for something — 6.7% of individual variation in intelligence, it’s estimated. So, something, but not a huge amount.

Activity levels in the prefrontal cortex, research also suggests, account for another 5% of variation in individual intelligence. (Do keep in mind that these figures are not saying that, for example, prefrontal activity explains 5% of intelligence. We are talking about differences between individuals.)

A new study points to a third important factor — one that, indeed, accounts for more than either of these other factors. The strength of the connections from the left prefrontal cortex to other areas is estimated to account for 10% of individual differences in intelligence.

These findings suggest a new perspective on what intelligence is. They suggest that part of intelligence rests on the functioning of the prefrontal cortex and its ability to communicate with the rest of the brain — what researchers are calling ‘global connectivity’. This may reflect cognitive control and, in particular, goal maintenance. The left prefrontal cortex is thought to be involved in (among other things) remembering your goals and any instructions you need for accomplishing those goals.

The study involved 93 adults (average age 23; range 18-40), whose brains were monitored while they were doing nothing and when they were engaged in the cognitively challenging N-back working memory task.
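
For readers unfamiliar with it, the N-back task presents a continuous stream of items and asks, for each item, whether it matches the one presented N steps earlier. A minimal sketch of a 2-back stream generator — purely illustrative, not the study's implementation:

```python
import random
import string

def make_nback_trials(n=2, length=20, match_rate=0.3):
    """Generate a letter stream for an n-back task.

    Returns a list of (letter, is_target) pairs, where is_target is True
    when the letter matches the one shown n trials earlier. The parameter
    values here are illustrative, not taken from the study.
    """
    letters = []
    for i in range(length):
        if i >= n and random.random() < match_rate:
            letters.append(letters[i - n])  # deliberately plant a match
        else:
            letters.append(random.choice(string.ascii_uppercase))
    # A random letter can also match by chance, so targets are checked
    # against the actual stream rather than the planting step.
    return [(ch, i >= n and ch == letters[i - n])
            for i, ch in enumerate(letters)]

for letter, is_target in make_nback_trials():
    print(letter, "<- respond!" if is_target else "")
```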

Brain activity patterns revealed three regions within the frontoparietal network that were significantly involved in this task: the left lateral prefrontal cortex, right premotor cortex, and right medial posterior parietal cortex. All three of these regions also showed signs of being global hubs — that is, they were highly connected to other regions across the brain.

Of these, however, only the left lateral prefrontal cortex showed a significant association between its connectivity and an individual's fluid intelligence. This was confirmed by a second independent measure — working memory capacity — which was also correlated with this region's connectivity, and with this region only.

In other words, those with greater connectivity in the left LPFC had greater cognitive control, which is reflected in higher working memory capacity and higher fluid intelligence. There was no correlation between connectivity and crystallized intelligence.

Interestingly, although other global hubs (such as the anterior prefrontal cortex and anterior cingulate cortex) also have strong relationships with intelligence and high levels of global connectivity, they did not show correlations between their levels of connectivity and fluid intelligence. That is, although the activity within these regions may be important for intelligence, their connections to other brain regions are not.

So what’s so important about the connections the LPFC has with the rest of the brain? It appears that, although it connects widely to sensory and motor areas, it is primarily the connections within the frontoparietal control network that are most important — as well as the deactivation of connections with the default network (the network active during rest).

This is not to say that the LPFC is the ‘seat of intelligence’! Research has made it clear that a number of brain regions support intelligence, as do other areas of connectivity. The finding is important because it shows that the left LPFC supports cognitive control and intelligence through a mechanism involving global connectivity and some other as-yet-unknown property. One possibility is that this region is a ‘flexible’ hub — able to shift its connectivity with a number of different brain regions as the task demands.

In other words, what may count is how many different connectivity patterns the left LPFC has in its repertoire, and how good it is at switching to them.

An association between negative connections with the default network and fluid intelligence also adds to evidence for the importance of inhibiting task-irrelevant processing.

All this emphasizes the role of cognitive control in intelligence, and perhaps goes some way to explaining why self-regulation in children is so predictive of later success, apart from the obvious.

In the light of a general increase in caesarean sections, it’s somewhat alarming to read about a mouse study that found that vaginal birth triggers the expression of a protein in the brains of newborns that improves brain development, and this protein expression is impaired in the brains of those delivered by C-section.

The protein in question — mitochondrial uncoupling protein 2 (UCP2) — is important for the development of neurons and circuits in the hippocampus. Indeed, it has a wide role, being involved in regulation of fuel utilization, mitochondrial bioenergetics, cell proliferation, neuroprotection and synaptogenesis. UCP2 is induced by cellular stress.

Among the mice, natural birth triggered UCP2 expression in the hippocampus (presumably because of the stress of the birth), which was reduced in those who were born by C-section. Not only were levels of UCP2 lower in C-section newborns, they continued to be lower through to adulthood.

Cell cultures revealed that inhibiting UCP2 led to decreases in the number of neurons, neuron size, the number of dendrites, and the number of presynaptic clusters. Mice with (chemically or genetically) inhibited UCP2 also showed behavioral differences indicative of greater levels of anxiety: they explored less, and they showed poorer spatial memory.

The effects of reduced UCP2 on neural growth means that factors that encourage the growth of new synapses, such as physical exercise, are likely to be much less useful (if useful at all). Could this explain why exercise seems to have no cognitive benefits for a small minority? (I’m speculating here.)

Although the researchers don’t touch on this (naturally enough, since this was a laboratory study), I would also speculate that, if the crucial factor is stress during the birth, this finding applies only to planned C-sections, not to those which become necessary during the course of labor.

UCP2 is also a critical factor in fatty acid utilization, which has a flow-on effect for the creation of new synapses. One important characteristic of breast milk is its high content of long chain fatty acids. It’s suggested that the triggering of UCP2 by natural birth may help the transition to breastfeeding. This in turn has its own benefits for brain development.

We know that emotion affects memory. We know that attention affects perception (see, e.g., Visual perception heightened by meditation training; How mindset can improve vision). Now a new study ties it all together. The study shows that emotionally arousing experiences affect how well we see them, and this in turn affects how vividly we later recall them.

The study used images of positively and negatively arousing scenes and neutral scenes, which were overlaid with varying amounts of “visual noise” (like the ‘snow’ we used to see on old televisions). College students were asked to rate the amount of noise on each picture, relative to a specific image they used as a standard. There were 25 pictures in each category, and three levels of noise (less than standard, equal to standard, and more than standard).

Different groups explored different parameters: color; gray-scale; less noise (10%, 15%, 20% as compared to 35%, 45%, 55%); single exposure (each picture was only presented once, at one of the noise levels).
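
The noise manipulation itself is easy to reproduce. Here's one plausible way to overlay a given proportion of visual noise on an image using NumPy — the simple linear blend is my assumption, as the study's exact method isn't described here:

```python
import numpy as np

def add_visual_noise(image, noise_level):
    """Blend an image with random pixel 'snow'.

    `image` is a float array with values in [0, 1]; `noise_level` is the
    proportion of noise (e.g., 0.10-0.55, echoing the study's range).
    """
    noise = np.random.random(image.shape)  # uniform random snow
    return (1 - noise_level) * image + noise_level * noise

# Example: a mid-gray test image at three illustrative noise levels
img = np.full((64, 64), 0.5)
low, medium, high = (add_visual_noise(img, p) for p in (0.15, 0.35, 0.55))
```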

Regardless of the actual amount of noise, emotionally arousing pictures were consistently rated as significantly less noisy than neutral pictures, indicating that people were seeing them more clearly. This was true in all conditions.

Eye-tracking analysis ruled out the idea that people directed their attention differently for emotionally arousing images, but did show that more eye fixations were associated both with less noisy images and emotionally arousing ones. In other words, people were viewing emotionally important images as if they were less noisy.

One group of 22 students was given a 45-minute spatial working memory task after seeing the images, and then asked to write down all the details they could remember about the pictures. The amount of detail they recalled was taken to be an indirect measure of vividness.

A second group of 27 students were called back after a week for a recognition test. They were shown 36 new images mixed in with the original 75 images, and asked to rate them as new, familiar, or recollected. They were also asked to rate the vividness of their recollection.

Although, overall, emotionally arousing pictures were not more likely to be remembered than neutral pictures, both experiments found that pictures originally seen as more vivid (less noise) were remembered more vividly and in more detail.

Brain scans from 31 students revealed that the amygdala was more active when looking at images rated as vivid, and this in turn increased activity in the visual cortex and in the posterior insula (which integrates sensations from the body). This suggests that the increased perceptual vividness is not simply a visual phenomenon, but part of a wider sensory activation.

There was another neural response to perceptual vividness: activity in the dorsolateral prefrontal cortex and the posterior parietal cortex was negatively correlated with vividness. This suggests that emotion is not simply increasing our attentional focus, it is instead changing it by reducing effortful attentional and executive processes in favor of more perceptual ones. This, perhaps, gives emotional memories their different ‘flavor’ compared to more neutral memories.

These findings clearly need more exploration before we know exactly what they mean, but the main finding from the study is that the vividness with which we recall some emotional experiences is rooted in the vividness with which we originally perceived them.

The study highlights how emotion can sharpen our attention, building on previous findings that emotional events are more easily detected when visibility is difficult, or attentional demands are high. It is also not inconsistent with a study I reported on last year, which found some information needs no repetition to be remembered because the amygdala decrees it of importance.

I should add, however, that the perceptual effect is not the whole story — the current study found that, although perceptual vividness is part of the reason for memories that are vividly remembered, emotional importance makes its own, independent, contribution. This contribution may occur after the event.

It’s suggested that individual differences in these reactions to emotionally enhanced vividness may underlie an individual’s vulnerability to post-traumatic stress disorder.

The research is pretty clear by this point: humans are not (with a few rare exceptions) designed to multitask. However, it has been suggested that the modern generation, with all the multitasking they do, may have been ‘re-wired’ to be more capable of this. A new study throws cold water on this idea.

The study involved 60 undergraduate students, of whom 34 were skilled action video game players (all male) and 26 did not play such games (19 men and 7 women). The students were given three visual tasks, each of which they did on its own and then again while answering Trivial Pursuit questions over a speakerphone (designed to mimic talking on a cellphone).

The tasks included a video driving game (“TrackMania”), a multiple-object tracking test (similar to a video version of a shell game), and a visual search task (hidden pictures puzzles from Highlights magazine).

While the gamers were (unsurprisingly) significantly better at the video driving game, the non-gamers were just as good as them at the other two tasks. In the dual-tasking scenarios, performance declined on all the tasks, with the driving task most affected. While the gamers were affected less by multitasking during the driving task compared to the non-gamers, there was no difference in the amount of decline between gamers and non-gamers on the other two tasks.

Clearly, the smaller effect of dual-tasking on the driving game for gamers is a product of their greater expertise at the driving game, rather than their ability to multitask better. It is well established that the more skilled you are at a task, the more automatic it becomes, and thus the less working memory capacity it will need. Working memory capacity / attention is the bottleneck that prevents us from being true multitaskers.

In other words, the oft-repeated (and somewhat depressing) conclusion remains: you can’t learn to multitask in general, you can only improve specific skills, enabling you to multitask reasonably well while doing those specific tasks.

Donohue, S., James, B., Eslick, A., & Mitroff, S. (2012). Cognitive pitfall! Videogame players are not immune to dual-task costs. Attention, Perception, & Psychophysics, 74(5), 803-809.

I often talk about the importance of attitudes and beliefs for memory and cognition. A new honey bee study provides support for this in relation to the effects of aging on the brain, and suggests that this principle extends across the animal kingdom.

Previous research has shown that bees that stay in the nest and take care of the young remain mentally competent, but they don’t nurse for ever. When they’re older (after about 2-3 weeks), they become foragers, and foraging bees age very quickly — both physically and mentally. Obviously, you would think, bees ‘retire’ to foraging, and their old age is brief (they begin to show cognitive decline after just two weeks).

But it’s not as simple as that, because in artificial hives where worker bees are all the same age, nurse bees of the same age as foragers don’t show the same cognitive and sensory decline. Moreover, nurse bees have been found to maintain their cognitive abilities for more than 100 days, while foragers die within 18 days and show cognitive declines after 13-15 days (although their ability to assess sweetness remains intact).

The researchers accordingly asked a very interesting question: what happens if the foragers return to babysitting?

To achieve this, they removed all of the younger nurse bees from the nest, leaving only the queen and babies. When the older, foraging bees returned to the nest, activity slowed down for several days, and then they re-organized themselves: some of the old bees returned to foraging; others took on the babysitting and housekeeping duties (cleaning, building the comb, and tending to the queen). After 10 days, around half of these latter bees had significantly improved their ability to learn new things.

This cognitive improvement was also associated with a change in two specific proteins in their brains: one (Prx6) that helps protect against the oxidative stress and inflammation implicated in Alzheimer’s disease and Huntington’s disease in humans, and another dubbed a “chaperone” protein because it protects other proteins from damage when brain or other tissues are exposed to cell-level stress.

Precisely what it is about returning to the hive that produces this effect is a matter of speculation, but this finding does show that learning impairment in old bees can be reversed by changes in behavior, and this reversal is correlated with specific changes in brain protein.

Having said this, it shouldn’t be overlooked that only some of the worker bees showed this brain plasticity. This is not, apparently, due to differences in genotype, but may depend on the amount of foraging experience.

The findings add weight to the idea that social interventions can help our brains stay younger, and are consistent with growing evidence that, in humans, social engagement helps protect against dementia and age-related cognitive impairment.

The (probably) experience-dependent individual differences shown by the bees are perhaps mirrored in our idea of cognitive reserve, but with a twist. The concept of cognitive reserve emphasizes that accumulating a wealth of cognitive experience (whether through education or occupation or other activities) protects your brain from the damage that might occur with age. But perhaps (and I’m speculating now) we should also consider the other side of this: repeated engagement in routine or undemanding activities may have a deleterious effect, independent of and additional to the absence of more stimulating activities.

I have said before that there is little evidence that working memory training has any wider benefits than to the skills being practiced. Occasionally a study arises that gets everyone all excited, but by and large training only benefits the skill being practiced — despite the fact that working memory underlies so many cognitive tasks, and limited working memory capacity is thought to negatively affect performance on so many tasks. However, one area that does seem to have had some success is working memory training for those with ADHD, and researchers have certainly not given up hope of finding evidence for wider transfer among other groups (such as older adults).

A recent review of the research to date has, sadly, concluded that the benefits of working memory training programs are limited. But this is not to say there are no benefits.

For a start, the meta-analysis (analyzing data across studies) found that working memory training produced large immediate benefits for verbal working memory. These benefits were greatest for children below the age of 10.

These benefits, however, were not maintained long-term (at an average of 9 months after training, there were no significant benefits) — although benefits were found in one study in which the verbal working memory task was very similar to the training task (indicating that the specific skill practiced did maintain some improvement long-term).

Visuospatial working memory also showed immediate benefits, and these did not vary across age groups. One factor that did make a difference was type of training: the CogMed training program produced greater improvement than the researcher-developed programs (the studies included 7 that used CogMed, 2 that used Jungle Memory, 2 Cognifit, 4 n-back, 1 Memory Booster, and 7 researcher-developed programs).

Interestingly, visuospatial working memory did show some long-term benefits, although it should be noted that the average follow-up was distinctly shorter than that for verbal working memory tasks (an average of 5 months post-training).

The burning question, of course, is how well this training transferred to dissimilar tasks. Here the evidence seems sadly clear — those using untreated control groups tended to find such transfer; those using treated control groups never did. Similarly, nonrandomized studies tended to find far transfer, but randomized studies did not.

In other words, when studies were properly designed (randomized trials with a control group that is given alternative treatment rather than no treatment), there was no evidence of transfer effects from working memory training to nonverbal ability. Moreover, even when found, these effects were only present immediately and not on follow-up.

Neither was there any evidence of transfer effects, either immediate or delayed, on verbal ability, word reading, or arithmetic. There was a small to moderate effect of training on attention (as measured by the Stroop test), but this only occurred immediately, not on follow-up.
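As an aside, for readers unfamiliar with how a meta-analysis pools results: effects like those above are typically computed by weighting each study’s effect size by the inverse of its variance, so that larger, more precise studies count for more. Here’s a toy sketch in Python of that core step — the effect sizes and sample sizes are invented for illustration, not values from this review:

    import math

    # (Cohen's d, sample size per group) for three hypothetical studies
    studies = [(0.80, 20), (0.45, 35), (0.60, 25)]

    weights, weighted_effects = [], []
    for d, n in studies:
        # approximate variance of d for two equal groups of size n
        var_d = (2 * n) / (n * n) + d * d / (2 * (2 * n))
        w = 1 / var_d                      # inverse-variance weight
        weights.append(w)
        weighted_effects.append(w * d)

    pooled_d = sum(weighted_effects) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    print(f"pooled d = {pooled_d:.2f}, SE = {se_pooled:.2f}")

Real meta-analyses go well beyond this fixed-effect sketch, correcting for small-sample bias, allowing for between-study heterogeneity (random-effects models), and testing moderators such as age group or type of training program.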

It seems clear from this review that there are few good, methodologically sound studies on this subject. But three very important caveats should be noted in connection with the researchers’ dispiriting conclusion.

First of all, because this is an analysis across all data, important differences between groups or individuals may be concealed. This is a common criticism of meta-analysis, and the researchers do try and answer it. Nevertheless, I think it is still a very real issue, especially in light of evidence that the benefit of training may depend on whether the challenge of the training is at the right level for the individual.

On the other hand, another recent study, which compared young adults given 20 sessions of training on a dual n-back task or a visual search program with others given no training at all, did look for an individual-differences effect, and failed to find it. Participants were tested repeatedly on their fluid intelligence, multitasking ability, working memory capacity, crystallized intelligence, and perceptual speed. Although those taking part in the training programs improved their performance on the tasks they practiced, there was no transfer to any of the cognitive measures. When participants were analyzed separately on the basis of their improvement during training, there was still no evidence of transfer to broader cognitive abilities.

The second important challenge comes from the lack of skill consolidation — having a short training program followed by months of not practicing the skill is not something any of us would expect to produce long-term benefits.

The third point concerns a recent finding that multi-domain cognitive training produces longer-lasting benefits than single-domain training (the same study also showed the benefit of booster training). It seems quite likely that working memory training is a valuable part of a training program that also includes practice in real-world tasks that incorporate working memory.

I should emphasize that these results only apply to ‘normal’ children and adults. The question of training benefits for those with attention difficulties or early Alzheimer’s is a completely different issue. But for these healthy individuals, it has to be said that the weight of the evidence is against working memory training producing more general cognitive improvement. Nevertheless, I think it’s probably an important part of a cognitive training program — as long as the emphasis is on part.

Melby-Lervåg, M., & Hulme, C. (2012). Is Working Memory Training Effective? A Meta-Analytic Review. Developmental Psychology. doi:10.1037/a0028228
Full text available at http://www.apa.org/pubs/journals/releases/dev-ofp-melby-lervag.pdf

[3012] Redick, T. S., Shipstead Z., Harrison T. L., Hicks K. L., Fried D. E., Hambrick D. Z., et al.
(2012).  No Evidence of Intelligence Improvement After Working Memory Training: A Randomized, Placebo-Controlled Study.
Journal of Experimental Psychology: General.
Full text available at http://psychology.gatech.edu/renglelab/publications/2012/RedicketalJEPG.pdf

I’ve mentioned before that, for some few people, exercise doesn’t seem to have a benefit, and the benefits of exercise for fighting age-related cognitive decline may not apply to those carrying the Alzheimer’s gene.

New research suggests there is another gene variant that may impact on exercise’s effects. The new study follows on from earlier research that found that physical exercise during adolescence had more durable effects on object memory and BDNF levels than exercise during adulthood. In this study, 54 healthy but sedentary young adults (aged 18-36) were given an object recognition test before participating in either (a) a 4-week exercise program, with exercise on the final test day, (b) a 4-week exercise program, without exercise on the final test day, (c) a single bout of exercise on the final test day, or (d) remaining sedentary between test days.

Exercise both improved object recognition memory and reduced perceived stress — but only in one group: those who exercised for 4 weeks including the final day of testing. In other words, both regular and recent exercise were needed to produce a memory benefit.

But there is one more factor — and this is where it gets really interesting — the benefit in this group didn’t happen for every member of the group. Only those carrying a specific genotype benefited from regular and recent exercise. This genotype has to do with the brain protein BDNF, which is involved in neurogenesis and synaptic plasticity, and which is increased by exercise. The BDNF gene comes in two flavors: Val and Met. Previous research has linked the less common Met variant to poorer memory and greater age-related cognitive decline.

In other words, it seems that the Met allele affects how much BDNF is released as a result of exercise, and this in turn affects cognitive benefits.

The object recognition test involved participants seeing a series of 50 images (previously selected as being highly recognizable and nameable), followed by a 15-minute filler task, before seeing 100 images (the previous 50 and 50 new images) and indicating which had been seen previously. The filler task involved surveys for state anxiety, perceived stress, and mood. On the first (pre-program) visit, a survey for trait anxiety was also completed.

Of the 54 participants, 31 carried two copies of the Val allele, and 23 had at least one Met allele (19 Val/Met; 4 Met/Met). The population frequency of carrying at least one Met allele is 50% in Asians, 30% in Caucasians, and 4% in African-Americans.

Although exercise decreased stress and increased positive mood, the cognitive benefits of exercise were not associated with mood or anxiety. Neither was genotype associated with mood or anxiety. However, some studies have found an association between depression and the Met variant, and this study is of course quite small.

A final note: this study is part of research looking at the benefits of exercise for children with ADHD. The findings suggest that genotyping would enable us to predict whether an individual — a child with ADHD or an older adult at risk of cognitive decline or impairment — would benefit from this treatment strategy.

Following on from research finding that people who regularly play action video games show visual attention related differences in brain activity compared to non-players, a new study has investigated whether such changes could be elicited in 25 volunteers who hadn’t played video games in at least four years. Sixteen of the participants played a first-person shooter game (Medal of Honor: Pacific Assault), while nine played a three-dimensional puzzle game (Ballance). They played the games for a total of 10 hours spread over one- to two-hour sessions.

Selective attention was assessed through an attentional visual field task, carried out prior to and after the training program. Individual learning differences were marked, and because of visible differences in brain activity after training, the action gamers were divided into two groups for analysis — those who performed above the group mean on the second attentional visual field test (7 participants), and those who performed below the mean (9). These latter individuals showed similar brain activity patterns as those in the control (puzzle) group.

In all groups, early-onset brainwaves were little affected by video game playing. This suggests that game-playing has little impact on bottom–up attentional processes, and is in keeping with earlier research showing that players and non-players don’t differ in the extent to which their attention is captured by outside stimuli.

However, later brainwaves — those thought to reflect top–down control of selective attention via increased inhibition of distracters — increased significantly in the group who played the action game and showed above-average improvement on the field test. Another increased wave suggests that the total amount of attention allocated to the task was also greater in that group (i.e., they were concentrating more on the game than the below-average group, and the control group).

The improved ability to select the right targets and ignore other stimuli suggests that these players were also improving their ability to make perceptual decisions.

The next question, of course, is what personal variables underlie the difference between those who benefit more quickly from the games, and those who don’t. And how much more training is necessary for this latter group, and are there some people who won’t achieve these benefits at all, no matter how long they play? Hopefully, future research will be directed to these questions.

[2920] Wu, S., Cheng C K., Feng J., D'Angelo L., Alain C., & Spence I.
(2012).  Playing a First-person Shooter Video Game Induces Neuroplastic Change.
Journal of Cognitive Neuroscience. 24(6), 1286 - 1293.

Genetic analysis of 9,232 older adults (average age 67; range 56-84) has implicated four genes in how fast your hippocampus shrinks with age (rs7294919 at 12q24, rs17178006 at 12q14, rs6741949 at 2q24, rs7852872 at 9p33). The first of these (implicated in cell death) showed a particularly strong link to reduced hippocampal volume — the average consequence being a hippocampus the same size as that of a person 4-5 years older.

Faster atrophy in this crucial brain region would increase people’s risk of Alzheimer’s and cognitive decline, by reducing their cognitive reserve. Reduced hippocampal volume is also associated with schizophrenia, major depression, and some forms of epilepsy.

In addition to cell death, the genes linked to this faster atrophy are involved in oxidative stress, ubiquitination, diabetes, embryonic development and neuronal migration.

A younger cohort, of 7,794 normal and cognitively compromised people with an average age of 40, showed that these suspect gene variants were also linked to smaller hippocampal volume in this age group. A third cohort, comprising 1,563 primarily older people, showed a significant association between the ASTN2 variant (linked to neuronal migration) and faster memory loss.

In another analysis, researchers looked at intracranial volume and brain volume in 8,175 elderly. While they found no genetic associations for brain volume (although there was one suggestive association), they did discover that intracranial volume (the space occupied by the fully developed brain within the skull — this remains unchanged with age, reflecting brain size at full maturity) was significantly associated with two gene variants (at loci rs4273712, on chromosome 6q22, and rs9915547, on 17q21). These associations were replicated in a different sample of 1,752 older adults. One of these genes is already known to play a unique evolutionary role in human development.

A meta-analysis of seven genome-wide association studies, involving 10,768 infants (average age 14.5 months), found two loci robustly associated with head circumference in infancy (rs7980687 on chromosome 12q24 and rs1042725 on chromosome 12q15). These loci have previously been associated with adult height, but these effects on infant head circumference were largely independent of height. A third variant (rs11655470 on chromosome 17q21 — note that this is the same chromosome implicated in the study of older adults) showed suggestive evidence of association with head circumference; this chromosome has also been implicated in Parkinson's disease and other neurodegenerative diseases.

Previous research has found an association between head size in infancy and later development of Alzheimer’s. It has been thought that this may have to do with cognitive reserve.

Interestingly, the analyses also revealed that a variant in a gene called HMGA2 (rs10784502 on 12q14.3) affected intelligence as well as brain size.

Why ‘Alzheimer’s gene’ increases Alzheimer’s risk

Investigation into the so-called ‘Alzheimer’s gene’ ApoE4 (those who carry two copies of this variant have roughly eight to 10 times the risk of getting Alzheimer’s disease) has found that ApoE4 causes an increase in cyclophilin A, which in turn causes a breakdown of the cells lining the blood vessels. Blood vessels become leaky, making it more likely that toxic substances will leak into the brain.

The study found that mice carrying the ApoE4 gene had five times as much cyclophilin A as normal, in cells crucial to maintaining the integrity of the blood-brain barrier. Blocking the action of cyclophilin A brought blood flow back to normal and reduced the leakage of toxic substances by 80%.

The finding is in keeping with the idea that vascular problems are at the heart of Alzheimer’s disease — although it should not be assumed from this that other problems (such as amyloid-beta plaques and tau tangles) are not also important. However, one thing that does seem clear now is that there is not one single pathway to Alzheimer’s. This research suggests a possible treatment approach for those carrying this risky gene variant.

Note also that this gene variant is not only associated with Alzheimer’s risk, but also Down’s syndrome dementia, poor outcome following TBI, and age-related cognitive decline.

On which note, I’d like to point out recent findings from the long-running Nurses' Health Study, involving 16,514 older women (70-81), which suggest that the cognitive effects of postmenopausal hormone therapy may depend on apolipoprotein E (APOE) status, with the fastest rate of decline observed among HT users who carried the ApoE4 variant (in general, HT was associated with poorer cognitive performance).

It’s also interesting to note another recent finding: that intracranial volume modifies the effect of ApoE4 and white matter lesions on dementia risk. The study, involving 104 demented and 135 nondemented 85-year-olds, found that smaller intracranial volume increased the risk of dementia, Alzheimer's disease, and vascular dementia in participants with white matter lesions. However, white matter lesions were not associated with increased dementia risk in those with the largest intracranial volume. But intracranial volume did not modify dementia risk in those with the ApoE4 gene.

More genes involved in Alzheimer’s

More genome-wide association studies of Alzheimer's disease have now identified variants in BIN1, CLU, CR1 and PICALM genes that increase Alzheimer’s risk, although it is not yet known how these gene variants affect risk (the present study ruled out effects on the two biomarkers, amyloid-beta 42 and phosphorylated tau).

Same genes linked to early- and late-onset Alzheimer's

Traditionally, we’ve made a distinction between early-onset Alzheimer's disease, which is thought to be inherited, and the more common late-onset Alzheimer’s. New findings, however, suggest we should re-think that distinction. While the genetic case for early-onset might seem to be stronger, sporadic (non-familial) cases do occur, and familial cases occur with late-onset.

New DNA sequencing techniques applied to the APP (amyloid precursor protein) gene, and the PSEN1 and PSEN2 (presenilin) genes (the three genes linked to early-onset Alzheimer's), have found that rare variants in these genes are more common in families where four or more members were affected with late-onset Alzheimer’s, compared to normal individuals. Additionally, mutations in the MAPT (microtubule associated protein tau) gene and GRN (progranulin) gene (both linked to frontotemporal dementia) were also found in some Alzheimer's patients, suggesting they had been incorrectly diagnosed as having Alzheimer's disease when they instead had frontotemporal dementia.

Of the 439 patients from families in which at least four members had been diagnosed with Alzheimer's disease, rare variants in the three Alzheimer's-related genes were found in 60 (13.7%). While not all of these variants are known to be pathogenic, the frequency of mutations in these genes is significantly higher than in the general population.

The researchers estimate that about 5% of those with late-onset Alzheimer's disease have changes in these genes. They suggest that, at least in some cases, the same causes may underlie both early- and late-onset disease, the difference being that those who develop it later have more protective factors.

Another gene identified in early-onset Alzheimer's

A study of the genes from 130 families suffering from early-onset Alzheimer's disease has found that 116 families had mutations on genes already known to be involved (APP, PSEN1, PSEN2 — see below for some older reports on these genes), while five of the remaining 14 families showed mutations on a new gene: SORL1.

I say ‘new gene’ because it hasn’t been implicated in early-onset Alzheimer’s before. However, it has been implicated in the more common late-onset Alzheimer’s, and last year a study reported that the gene was associated with differences in hippocampal volume in young, healthy adults.

The finding, then, provides more support for the idea that some cases of early-onset and late-onset Alzheimer’s have the same causes.

The SORL1 gene codes for a protein involved in the production of the beta-amyloid peptide, and the mutations seen in this study appear to cause an under-expression of SORL1, resulting in an increase in the production of the beta-amyloid peptide. Such mutations were not found in the 1,500 ethnicity-matched controls.


Older news reports on these other early-onset genes (brought over from the old website):

New genetic cause of Alzheimer's disease

Amyloid protein originates when it is cut by enzymes from a larger precursor protein. In very rare cases, mutations appear in the amyloid precursor protein (APP), causing it to change shape and be cut differently. The amyloid protein that is formed now has different characteristics, causing it to begin to stick together and precipitate as amyloid plaques. A genetic study of Alzheimer's patients younger than 70 has found genetic variations in the promoter that increase the gene expression and thus the formation of the amyloid precursor protein. The higher the expression (up to 150%, as in Down syndrome), the younger the patient (starting between 50 and 60 years of age). Thus, the amount of amyloid precursor protein is a genetic risk factor for Alzheimer's disease.

Theuns, J. et al. 2006. Promoter Mutations That Increase Amyloid Precursor-Protein Expression Are Associated with Alzheimer Disease. American Journal of Human Genetics, 78, 936-946.

http://www.eurekalert.org/pub_releases/2006-04/vfii-rda041906.php

Evidence that Alzheimer's protein switches on genes

Amyloid β-protein precursor (APP) is snipped apart by enzymes to produce three protein fragments. Two fragments remain outside the cell and one stays inside. When APP is produced in excessive quantities, one of the cleaved segments that remains outside the cell, the amyloid β-peptide, clumps together to form amyloid plaques that kill brain cells and may lead to the development of Alzheimer’s disease. New research indicates that the short "tail" segment of APP that is trapped inside the cell might also contribute to Alzheimer’s disease, through a process called transcriptional activation (switching on genes within the cell). Researchers speculate that creation of amyloid plaque is a byproduct of a misregulation in normal APP processing.

[2866] Cao, X., & Südhof T. C.
(2001).  A Transcriptively Active Complex of APP with Fe65 and Histone Acetyltransferase Tip60.
Science. 293(5527), 115 - 120.

http://www.eurekalert.org/pub_releases/2001-07/aaft-eta070201.php

Inactivation of Alzheimer's genes in mice causes dementia and brain degeneration

Mutations in two related genes known as presenilins are the major cause of early onset, inherited forms of Alzheimer's disease, but how these mutations cause the disease has not been clear. Since presenilins are involved in the production of amyloid peptides (the major components of amyloid plaques), it was thought that such mutations might cause Alzheimer’s by increasing brain levels of amyloid peptides. Accordingly, much effort has gone into identifying compounds that could block presenilin function. Now, however, genetic engineering in mice has revealed that deletion of these genes causes memory loss and gradual death of nerve cells in the mouse brain, demonstrating that the protein products of these genes are essential for normal learning, memory and nerve cell survival.

Saura, C.A., Choi, S-Y., Beglopoulos, V., Malkani, S., Zhang, D., Shankaranarayana Rao, B.S., Chattarji, S., Kelleher, R.J.III, Kandel, E.R., Duff, K., Kirkwood, A. & Shen, J. 2004. Loss of Presenilin Function Causes Impairments of Memory and Synaptic Plasticity Followed by Age-Dependent Neurodegeneration. Neuron, 42 (1), 23-36.

http://www.eurekalert.org/pub_releases/2004-04/cp-ioa032904.php

[2858] Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium, & Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium
(2012).  Common variants at 12q14 and 12q24 are associated with hippocampal volume.
Nature Genetics. 44(5), 545 - 551.

[2909] Taal, R. H., Pourcain B S., Thiering E., Das S., Mook-Kanamori D. O., Warrington N. M., et al.
(2012).  Common variants at 12q15 and 12q24 are associated with infant head circumference.
Nature Genetics. 44(5), 532 - 538.

[2859] Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium, & Early Growth Genetics (EGG) Consortium
(2012).  Common variants at 6q22 and 17q21 are associated with intracranial volume.
Nature Genetics. 44(5), 539 - 544.

[2907] Stein, J. L., Medland S. E., Vasquez A A., Hibar D. P., Senstad R. E., Winkler A. M., et al.
(2012).  Identification of common variants associated with human hippocampal and intracranial volumes.
Nature Genetics. 44(5), 552 - 561.

[2925] Bell, R. D., Winkler E. A., Singh I., Sagare A. P., Deane R., Wu Z., et al.
(2012).  Apolipoprotein E controls cerebrovascular integrity via cyclophilin A.
Nature.

Kang, J. H., & Grodstein F. (2012).  Postmenopausal hormone therapy, timing of initiation, APOE and cognitive decline. Neurobiology of Aging. 33(7), 1129 - 1137.

Skoog, I., Olesen P. J., Blennow K., Palmertz B., Johnson S. C., & Bigler E. D. (2012).  Head size may modify the impact of white matter lesions on dementia. Neurobiology of Aging. 33(7), 1186 - 1193.

[2728] Cruchaga, C., Chakraverty S., Mayo K., Vallania F. L. M., Mitra R. D., Faber K., et al.
(2012).  Rare Variants in APP, PSEN1 and PSEN2 Increase Risk for AD in Late-Onset Alzheimer's Disease Families.
PLoS ONE. 7(2), e31039 - e31039.

Full text available at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0031039

[2897] Pottier, C., Hannequin D., Coutant S., Rovelet-Lecrux A., Wallon D., Rousseau S., et al.
(2012).  High frequency of potentially pathogenic SORL1 mutations in autosomal dominant early-onset Alzheimer disease.
Molecular Psychiatry.

McCarthy, J. J., Saith S., Linnertz C., Burke J. R., Hulette C. M., Welsh-Bohmer K. A., et al. (2012).  The Alzheimer's associated 5′ region of the SORL1 gene cis regulates SORL1 transcripts expression. Neurobiology of Aging. 33(7), 1485.e1-1485.e8

A number of studies have found evidence that older adults can benefit from cognitive training. However, neural plasticity is thought to decline with age, suggesting that the younger-old and/or the higher-functioning may benefit more than the older-old or the lower-functioning. On the other hand, because their performance may already be as good as it can be, higher-functioning seniors may be less likely to benefit. You can find evidence for both of these views.

In a new study, 19 of 39 older adults (aged 60-77) were given training in a multiplayer online video game called World of Warcraft (the other 20 formed a control group). This game was chosen because it involves multitasking and switching between various cognitive abilities. It was theorized that the demands of the game would improve both spatial orientation and attentional control, and that the multiple tasks might produce more improvement in those with lower initial ability compared to those with higher ability.

WoW participants were given a 2-hour training session, involving a 1-hour lecture and demonstration, and one hour of practice. They were then expected to play the game at home for around 14 hours over the next two weeks. There was no intervention for the control group. All participants were given several cognitive tests at the beginning and end of the two week period: Mental Rotation Test; Stroop Test; Object Perspective Test; Progressive Matrices; Shipley Vocabulary Test; Everyday Cognition Battery; Digit Symbol Substitution Test.

As a group, the WoW group improved significantly more on the Stroop test (a measure of attentional control) compared to the control group. There was no change in the other tests. However, those in the WoW group who had performed more poorly on the Object Perspective Test (measuring spatial orientation) improved significantly. Similarly, on the Mental Rotation Test, ECB, and Progressive Matrices, those who performed more poorly at the beginning tended to improve after two weeks of training. There was no change on the Digit Symbol test.

The finding that only those whose performance was initially poor benefited from cognitive training is consistent with other studies suggesting that training only benefits those who are operating below par. This is not really surprising, but there are a few points that should be made.

First of all, it should be noted that this was a group of relatively high-functioning young-old adults — poorer performance in this case could be (relatively) better performance in another context. What it comes down to is whether you are operating below the level you are capable of — and this applies broadly. For example, experiments show that spatial training benefits females but not males (because males tend to have already practiced enough).

Given that, in expertise research, training has an on-going, apparently limitless, effect on performance, it seems likely that the limited benefits shown in this and other studies stem from the extremely limited scope of the training. Fourteen hours is not enough to improve people who are already performing adequately — but that doesn’t mean that they wouldn’t improve with more hours. I have yet to see any interventions with older adults that give them the amount of cognitive training you would expect them to need to achieve some level of mastery.

My third and final point is the specific nature of the improvements. This has also been shown in other studies, and sometimes appears quite arbitrary — for example, one 3-D puzzle game apparently improved mental rotation, while a different 3-D puzzle game had no effect. The point being that we still don’t understand the precise attributes needed to improve different skills (although the researchers advocate the use of a tool called cognitive task analysis for revealing the underlying qualities of an activity) — but we do understand that it is a matter of precise attributes, which is definitely a step in the right direction.

The main thing, then, that you should take away from this is the idea that different activities involve specific cognitive tasks, and these, and only these, will be the ones that benefit from practicing the activities. You therefore need to think about what tasks you want to improve before deciding on the activities to practice.

Previous research has pointed to a typical decline in our sense of control as we get older. Maintaining a sense of control, however, appears to be a key factor in successful aging. Unsurprisingly, in view of the evidence that self-belief and metacognitive understanding are important for cognitive performance, a stronger sense of control is associated with better cognitive performance. (By metacognitive understanding I mean the knowledge that cognitive performance is malleable, not fixed, and strategies and training are effective in improving cognition.)

In an intriguing new study, 36 older adults (aged 61-87, average age 74) had their cognitive performance and their sense of control assessed every 12 hours for 60 days. Participants were asked questions about whether they felt in control of their lives and whether they felt able to achieve goals they set for themselves.

The reason I say this is intriguing is that it’s generally assumed that a person’s sense of control — how much they feel in control of their lives — is reasonably stable. While, as I said, it can change over the course of a lifetime, until recently we didn’t think that it could fluctuate significantly in the course of a single day — which is what this study found.

Moreover, those who normally reported having a low sense of control performed much better on inductive reasoning tests during periods when they reported feeling a higher sense of control. Similarly, those who normally reported feeling a high sense of control scored higher on memory tests when feeling more in control than usual.

Although we can’t be sure (since this wasn’t directly investigated), the analysis suggests that the improved cognitive functioning stems from the feeling of improved control, not vice versa.

The study builds on an earlier study that found weekly variability in older adults’ locus of control and competency beliefs.

Assessment was carried out in the form of a daily workbook, containing a number of measures, which participants completed twice daily. Each assessment took around 30-45 minutes to complete. The measures included three cognitive tests (14 alternate forms of each of these were used, to minimize test familiarity):

  • Letter series test: 30 items in which the next letter in a series had to be identified. [Inductive reasoning]
  • Number comparison: 48 items in which two number strings were presented beside each other, and participants had to identify where there was any mismatch. [Perceptual speed]
  • Rey Auditory Verbal Learning Task: participants had to study a list of 15 unrelated words for one minute, then on another page recall as many of the words as they could. [Memory]

Sense of control over the previous 12 hours was assessed by 8 questions, to which participants indicated their agreement/disagreement on a 6-point scale. Half the questions related to ‘locus of control’ and half to ‘perceived competence’.

While, unsurprisingly, compliance wasn’t perfect (it’s quite an arduous regime), participants completed on average 115 of 120 workbooks. Of the possible 4,320 results (36 x 120), only 166 were missing.

One of the things that often annoys me is the subsuming of all within-individual variability in cognitive scores into averages. Of course averages are vital, but so is variability, and this too often is glossed over. This study is, of course, all about variability, so I was very pleased to see people’s cognitive variability spelled out.

Most of the variance in locus of control was of course between people (86%), but 14% was within-individual. Similarly, the figures for perceived competence were 88% and 12%. (While locus of control and perceived competence are related, only 26% of the variability in within-person locus of control was associated with competence, meaning that they are largely independent.)

By comparison, within-individual variability was much greater for the cognitive measures: for the letter series (inductive reasoning), 32% was within-individual and 68% between-individual; for the number comparison (perceptual speed), 21% was within-individual and 79% between-individual; for the memory test, an astounding 44% was within-individual and 56% between-individual.
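To make this within/between decomposition concrete, here’s a toy sketch in Python (the numbers are invented, not the study’s data, and the actual analysis used more sophisticated multilevel modeling). Each score is split into the spread of a person’s own mean around the group mean (between-person) and the spread of their individual scores around that personal mean (within-person):

    import statistics

    # scores[i] = repeated test scores for person i (toy data)
    scores = [
        [10, 12, 11, 13],
        [7, 6, 8, 7],
        [14, 15, 13, 14],
    ]

    grand_mean = statistics.mean(s for person in scores for s in person)
    person_means = [statistics.mean(person) for person in scores]

    # between-person: spread of personal means around the grand mean
    between = statistics.mean((m - grand_mean) ** 2 for m in person_means)
    # within-person: spread of each score around that person's own mean
    within = statistics.mean(
        (s - m) ** 2 for person, m in zip(scores, person_means) for s in person
    )

    total = between + within
    print(f"between: {between / total:.0%}, within: {within / total:.0%}")

With this toy data, roughly 90% of the variance is between-person and 10% within-person; the memory figures above show how much larger that within-person share can be in real repeated testing.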

Some of this within-individual variability in cognitive performance comes down to practice effects, which were significant for all cognitive measures. For the memory test, time of day was also significant, with performance being better in the morning. For the letter series and number comparison tests, previous performance also had a small effect on perceived competence. For the number comparison task, the subsequent increase in competence following increased performance was greatest for those with lower scores. However, lagged analyses indicated that beliefs preceded performance to a greater extent than performance preceded beliefs.

While it wasn’t an aspect of this study, it should also be noted that a person’s sense of control may well vary according to domain (e.g., cognition, social interaction, health) and context. In this regard, it’s interesting to note the present findings that sense of control affected inductive reasoning for low-control individuals, but memory for high-control individuals, suggesting that the cognitive domain also matters.

Now, this small study was a preliminary one, and several aspects need to be tightened up in subsequent research, but I think it’s important for three reasons:

  • as a demonstration that cognitive performance is not a fixed attribute;
  • as a demonstration of the various factors that can affect older adults’ cognitive performance;
  • as a demonstration that your beliefs about yourself are a factor in your cognitive performance.

[2794] Neupert, S. D., & Allaire J. C.
(2012).  I think I can, I think I can: Examining the within-person coupling of control beliefs and cognition in older adults.
Psychology and Aging. Advance online publication.

A small study involving 20 people has found that those who were exposed to 1,8-cineole, one of the main chemical components of rosemary essential oil, performed better on mental arithmetic tasks. Moreover, there was a dose-dependent relationship — higher blood concentrations of the chemical were associated with greater speed and accuracy.

Participants were given two types of test: serial subtraction and rapid visual information processing. These tests took place in a cubicle smelling of rosemary. Participants sat in the cubicle for either 4, 6, 8, or 10 minutes before taking the tests (this was in order to get a range of blood concentrations). Mood was assessed both before and after, and blood was tested at the end of the session.

While blood levels of the chemical correlated with accuracy and speed on both tasks, the effects were significant only for the mental arithmetic task.

Participants didn’t know that the scent was part of the study, and those who asked about it were told it was left over from a previous study.

There was no clear evidence that the chemical improved attention, but there was a significant association with one aspect of mood, with higher levels of the scent correlating with greater contentment. Contentment was the only aspect of mood that showed such a link.

It’s suggested that this chemical compound may affect learning through its inhibiting effect on acetylcholinesterase (an important enzyme in the development of Alzheimer's disease). Most Alzheimer’s drugs are cholinesterase inhibitors.

While this is very interesting (although obviously a larger study needs to confirm the findings), what I would like to see is the effects on more prolonged mental efforts. It’s also a little baffling to find the effect being limited to only one of these tasks, given that both involve attention and working memory. I would also like to see the rosemary-infused cubicle compared to some other pleasant smell.

Interestingly, a very recent study also suggests the importance of individual differences. A rat study compared the effects of amphetamines and caffeine on cognitive effort. First of all, giving the rats the choice of easy or hard visuospatial discriminations revealed that, as with humans, individuals could be divided into those who tended to choose difficult trials (“workers”) and those who preferred easy ones (“slackers”). (Easy trials took less effort, but earned commensurately smaller reward.)

Amphetamine, it was found, made the slackers work harder, but made the workers take it easier. Caffeine, too, made the workers slack off, but had no effect on slackers.

The extent to which this applies to humans is of course unknown, but the idea that your attitude to cognitive effort might change how stimulants affect you is an intriguing one. And of course this is a more general reminder that factors, whatever they are, have varying effects on individuals. This is why it’s so important to have a large sample size, and why, as an individual, you can’t automatically assume that something will benefit you, whatever the research says.

But in the case of rosemary oil, I can’t see any downside! Try it out; maybe it will help.

This is another demonstration of stereotype threat, and a nice illustration of the contextual nature of intelligence. The study involved 70 volunteers (average age 25; range 18-49), who were put in groups of 5. Participants were given a baseline IQ test, on which they were given no feedback. The group then participated in a group IQ test, in which 92 multi-choice questions were presented on a monitor (both individual and group tests were taken from Cattell’s culture fair intelligence test). Each question appeared to each person at the same time, for a pre-determined time. After each question, they were provided with feedback in the form of their own relative rank within the group, and the rank of one other group member. Ranking was based on performance on the last 10 questions. Two members of each group had their brain activity monitored.

Here’s the remarkable thing. If you gather together individuals on the basis of similar baseline IQ, then you can watch their IQ diverge over the course of the group IQ task, with some dropping dramatically (e.g., 17 points from a mean IQ of 126). Moreover, even those little affected still dropped some (8 points from a mean IQ of 126).

Data from the 27 brain scans (one had to be omitted for technical reasons) suggest that everyone was initially hindered by the group setting, but ‘high performers’ (those who ended up scoring above the median) managed to largely recover, while ‘low performers’ (those who ended up scoring below the median) never did.

Personality tests carried out after the group task found no significant personality differences between high and low performers, but gender was a significant variable: 10/13 high performers were male, while 11/14 low performers were female (remember, there was no difference in baseline IQ — this is not a case of men being smarter!).

There were significant differences between the high and low performers in activity in the amygdala and the right lateral prefrontal cortex. Specifically, all participants had an initial increase in amygdala activation and diminished activity in the prefrontal cortex, but by the end of the task, the high-performing group showed decreased amygdala activation and increased prefrontal cortex activation, while the low performers didn’t change. This may reflect the high performers’ greater ability to reduce their anxiety. Activity in the nucleus accumbens was similar in both groups, and consistent with the idea that the students had expectations about the relative ranking they were about to receive.

It should be pointed out that the specific feedback given — the relative ranking — was not a factor. What’s important is that it was being given at all, and the high performers were those who became less anxious as time went on, regardless of their specific ranking.

There are three big lessons here. One is that social pressure significantly depresses talent (meetings make you stupid?), and this seems to be worse when individuals perceive themselves to have a lower social rank. The second is that our ability to regulate our emotions is important, and something we should put more energy into. And the third is that we’ve got to shake ourselves loose from the idea that IQ is something we can measure in isolation. Social context matters.

One of the few established cognitive differences between men and women lies in spatial ability. But in recent years, this ‘fact’ has been shaken by evidence that training can close the gap between the genders. In this new study, 545 students were given a standard 3D mental rotation task while the researchers manipulated their confidence levels.

In the first experiment, 70 students were asked to rate their confidence in each answer. They could also choose not to answer. Confidence level was significantly correlated with performance both between and within genders.

On the face of it, these findings could be explained, of course, by the ability of people to be reliable predictors of their own performance. However, the researchers claim that regression analysis shows clearly that when the effect of confidence was taken into account, gender differences were eliminated. Moreover, gender significantly predicted confidence.

But of course this is still just indicative.
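To see why controlling for confidence can make a gender difference vanish, here’s a toy Python sketch (simulated data, not the study’s, and the researchers’ actual analysis was more involved). Gender is simulated to affect confidence, and confidence to drive the score, so gender predicts the score on its own but adds nothing once confidence is in the model:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    gender = rng.integers(0, 2, n)        # 0 = female, 1 = male (toy coding)
    confidence = 3 + 1.0 * gender + rng.normal(0, 1, n)
    score = 10 + 2.0 * confidence + rng.normal(0, 2, n)

    def ols_coefs(y, *predictors):
        # ordinary least squares with an intercept column
        X = np.column_stack([np.ones(len(y)), *predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    # gender alone appears to predict the score...
    print("gender only:", ols_coefs(score, gender))
    # ...but its coefficient shrinks toward zero with confidence controlled
    print("gender + confidence:", ols_coefs(score, gender, confidence))

With the simulated slopes above, the gender coefficient is around 2 in the first model and near zero in the second, which is the pattern the researchers report for the real data.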

In the next experiment, however, the researchers tried to reduce the effect of confidence. One group of 87 students followed the same procedure as in the first experiment (“omission” group), except they were not asked to give confidence ratings. Another group of 87 students was not permitted to miss out any questions (“commission” group). The idea here was that confidence underlay the choice of whether or not to answer a question, so while the first group should perform similarly to those in the first experiment, the second group should be less affected by their confidence level.

This is indeed what was found: men significantly outperformed women in the first condition, but didn’t in the second condition. In other words, it appears that the mere possibility of not answering makes confidence an important factor.

In the third experiment, 148 students replicated the commission condition of the second experiment with the additional benefit of being allowed unlimited time. Half of the students were required to give confidence ratings.

The advantage of unlimited time improved performance overall. More importantly, the results confirmed those produced earlier: confidence ratings produced significant gender differences; there were no gender differences in the absence of such ratings.

In the final experiment, 153 students were required to complete an intentionally difficult line judgment task, which men and women both carried out at near chance levels. They were then randomly informed that their performance had been either above average (‘high confidence’) or below average (‘low confidence’). Having manipulated their confidence, the students were then given the standard mental rotation task (omission version).

As expected (remember this is the omission procedure, where subjects could miss out answers), significant gender differences were found. But there was also a significant difference between the high and low confidence groups. That is, telling people they had performed well (or badly) on the first task affected how well they did on the second. Importantly, women in the high confidence group performed as well as men in the low confidence group.

Benefits of high quality child care persist 30 years later

Back in the 1970s, some 111 infants from low-income families, of whom 98% were African-American, took part in an early childhood education program called the Abecedarian Project. From infancy until they entered kindergarten, the children attended a full-time child care facility that operated year-round. The program provided educational activities designed to support their language, cognitive, social and emotional development.

The latest data from that project, following up the participants at age 30, has found that these people had significantly more years of education than peers who were part of a control group (13.5 years vs 12.3), and were four times more likely to have earned college degrees (23% vs 6%).

They were also significantly more likely to have been consistently employed (75% had worked full time for at least 16 of the previous 24 months, compared to 53% of the control group) and less likely to have used public assistance (only 4% received benefits for at least 10% of the previous seven years, compared to 20% of the control group). However, income-to-needs ratios (income taking household size into account) didn’t vary significantly between the groups (mainly because of the wide variability; on the face of it, the means are very different, but the standard deviation is huge), and neither did criminal involvement (27% vs 28%).

See their website for more about this project.

Evidence that more time at school raises IQ

It would be interesting to see what the IQs of those groups are, particularly given that maternal IQ was around 85 for both treatment and control groups. A recent report analyzed the results of a natural experiment that occurred in Norway when compulsory schooling was increased from seven to nine years in the 1960s, meaning that students couldn’t leave until 16 rather than 14. Because all men eligible for the draft were given an IQ test at age 19, statisticians were able to look back and see what effect the increased schooling had on IQ.

They found that it had a substantial effect, with each additional year raising the average IQ by 3.7 points.

While we can’t be sure how far these results extend to other circumstances, they are clear evidence that it is possible to improve IQ through education.

Why children of higher-income parents start school with an advantage

Of course the driving idea behind improved child-care in the early years is all about the importance of getting off to a good start, and you’d expect that providing such care to children would have a greater long-term effect on IQ than simply extending time at school. Most such interventions have looked at the most deprived strata of society. An overlooked area is that of low to middle income families, who are far from having the risk factors of less fortunate families.

A British study involving 15,000 five-year-olds has found that, at the start of school, children from low to middle income families are five months behind children from higher income families in terms of vocabulary skills and have more behavior problems (they were also 8 months ahead of their lowest income peers in vocabulary).

Low-middle income (LMI) households are defined by the Resolution Foundation (who funded this research) as members of the working-age population in income deciles 2-5 who receive less than one-fifth of their gross household income from means-tested benefits (see their website for more detail on this).

Now the difference in home environment between LMI and higher income households is often not that great — particularly when you consider that it is often a difference rooted in timing. LMI households are more common in this group of families with children under five, because the parents are usually at an early stage of life. So what brings about this measurable difference in language and behavior development?

This is a tricky thing to derive from the data, and the findings must be taken with a grain of salt. And as always, interpretation is even trickier. But with this caveat, let’s see what we have. Let’s look at demographics first.

The first thing is the importance of parental education. Income plus education accounted for some 70-80% of the differences in development, with education more important for language development and income more important for behavior development. Maternal age then accounted for a further 10%. Parents in the higher-income group tended to be older and have better education (e.g., 18% of LMI mothers were under 25 at the child’s birth, compared to 6% of higher-income mothers; 30% of LMI parents had a degree compared to 67% of higher-income parents).

Interestingly, family size was equally important for language development (10%), but much less important for behavior development (in fact this was a little better in larger families). Differences in ethnicity, language, or immigration status accounted for only a small fraction of the vocabulary gap, and none of the behavior gap.

Now for the more interesting but much trickier analysis of environmental variables. The most important factor was home learning environment, accounting for around 20% of the difference. Here the researchers point to higher-income parents providing more stimulation. For example, higher-income parents were more likely to read to their 3-year-olds every day (75% vs 62%; 48% for the lowest-income group), to take them to the library at least once a month (42% vs 35% vs 26%), to take their 5-year-old to a play or concert (86% vs 75% vs 60%), to a museum/gallery (67% vs 48% vs 36%), to a sporting activity at least once a week (76% vs 57% vs 35%). Higher-income parents were also much less likely to allow their 3-year-olds to watch more than 3 hours of TV a day (7% vs 17% vs 25%). (I know the thrust of this research is the comparison between LMI and higher income, but I’ve thrown in the lowest-income figures to help provide context.)

Interestingly, the most important factor for vocabulary learning was being taken to a museum/gallery at age 5 (but remember, these correlations could go either way: it might well be that parents are more likely to take an articulate 5-year-old to such a place), with the second most important factor being reading to the child every day at age 3. These two factors accounted for most of the effects of home environment. For behavior, the most important factor was regular sport, followed by being taken to a play/concert and to a museum/gallery. Watching more than 3 hours of TV at age 3 did have a significant effect on both vocabulary and behavior development (a negative effect on vocabulary and a positive effect on behavior), while the same amount of TV at age 5 did not.

Differences in parenting style explained 10% of the vocabulary gap and 14% of the behavior gap, although such differences were generally small. The biggest contributors to the vocabulary gap were mother-child interaction score at age 3 and regular bedtimes at age 3. The biggest contributors to the behavior gap were regular bedtimes at age 5, regular mealtimes at age 3, child smacked at least once a month at age 5 (this factor also had a small but significant negative effect on vocabulary), and child put in timeout at least once a month at age 5.

Maternal well-being accounted for over a quarter of the behavior gap, but only a small proportion of the vocabulary gap (2% — almost all of this relates to social support score at 9 months). Half of the maternal well-being component of the behavior gap was down to psychological distress at age 5 (very much larger than the effect of psychological distress at age 3). Similarly, child and maternal health were important for behavior (18% in total), but not for vocabulary.

Material possessions, on the other hand, accounted for some 9% of the vocabulary gap, but none of the behavior gap. The most important factors here were no internet at home at age 5 (22% of LMIs vs 8% of higher-incomes), and no access to a car at age 3 (5% of LMIs had no car vs 1% of higher incomes).

As I’ve intimated, it’s hard to believe we can disentangle individual variables in the environment in an observational study, but the researchers believe the number of variables in the mix (158) and the different time points (many variables are assessed at two or more points) provided a good base for analysis.

[2676] Campbell, F. A., Pungello E. P., Burchinal M., Kainz K., Pan Y., Wasik B. H., et al.
(2012).  Adult outcomes as a function of an early childhood educational program: An Abecedarian Project follow-up.
Developmental Psychology.

[2675] Brinch, C. N., & Galloway T. A.
(2012).  Schooling in adolescence raises IQ scores.
Proceedings of the National Academy of Sciences. 109(2), 425 - 430.

Washbrook, E., & Waldfogel, J. (2011). On your marks : Measuring the school readiness of children in low-to-middle income families. Resolution Foundation, December 2011.

Quarter of British children performing poorly due to family disadvantage

A British study involving over 18,000 very young children (aged 9 months to 5 years) has found that those exposed to two or more “disadvantages” (28% of the children) were significantly more likely to show impaired intellectual development (expressed in a significantly reduced vocabulary) and behavioral problems.

These differences were already significant at age three, and for the most part tended to widen between ages three and five (cognitive development, hyperactivity, peer problems and prosocial behaviors; the gap didn’t change for emotional problems, and narrowed for conduct problems). However, only the narrowing of the conduct problem gap and the widening of the peer problem gap were statistically significant.

Ten disadvantages were identified: living in overcrowded housing; having a teenage mother; having a parent with depression; a parent with a physical disability; a parent with low basic skills; maternal smoking during pregnancy; excessive alcohol intake; financial stress; unemployment; and domestic violence.

Around 41% of the children did not face any of these disadvantages, and 30% faced only one. Of those facing two or more, half (14% of the total group) had only two, while 7% experienced three risk factors, and fewer than 2% had five or more.

There was no dominant combination of risks, but parental depression was the most common factor (19%), followed by parental disability (15%). Violence was present in only 4% of families, and both parents unemployed in only 5.5%. While there was some correlation between various risk factors, these correlations were relatively modest for the most part. The highest correlations were between unemployment and disability; violence and depression; unemployment and overcrowding.

There were ethnic differences in rate: at 48%, Bangladeshi children were most likely to be exposed to multiple disadvantages, followed by Pakistani families (34%), other (including mixed) (33%), black African (31%), black Caribbean (29%), white (28%) and Indian (20%).

There were also differences depending on family income. Among those in the lowest income band (below £10,400 pa) — into which 21% of the families fell, the same proportion as is found nationally — nearly half had at least two risk factors, compared to 27% of those in families above this threshold. Moreover, children in families with multiple risk factors plus low income showed the lowest cognitive development (as measured by vocabulary).

Childhood maltreatment reduces size of hippocampus

In this context, it is interesting to note a recent finding that three key areas of the hippocampus were significantly smaller in adults who had experienced maltreatment in childhood. In this study, brain scans were taken of nearly 200 young adults (18-25), of whom 46% reported no childhood adversity and 16% reported three or more forms of maltreatment. Maltreatment was most commonly physical and verbal abuse from parents, but also included corporal punishment, sexual abuse and witnessing domestic violence.

Reduced volume in specific hippocampus regions (dentate gyrus, cornu ammonis, presubiculum and subiculum) was still evident after such confounding factors as a history of depression or PTSD were taken into account. The findings support the theory that early stress affects the development of subregions in the hippocampus.

While mother’s nurturing grows the hippocampus

Supporting this, another study, involving 92 children aged 7 to 10 who had participated in an earlier study of preschool depression, has found that those children who received a lot of nurturing from their parent (generally mother) developed a larger hippocampus than those who didn’t.

‘Nurturing’ was assessed in a videotaped interaction at the time of the preschool study. In this interaction, the parent performed a task while the child waited for her to finish so they could open an attractive gift. How the parent dealt with this common scenario — the degree to which they helped the child through the stress — was evaluated by independent raters.

Brain scans revealed that children who had been nurtured had a significantly larger hippocampus than those whose mothers were not as nurturing, and (this was the surprising bit) this effect was greater among the healthy, non-depressed children. Among this group, those with a nurturing parent had hippocampi that were on average almost 10% larger than those of children whose parent had not been as nurturing.

First study:
Sabates, R., & Dex, S. (2012). Multiple risk factors in young children’s development. CLS Cohort Studies Working paper 2012/1.
Full text available at http://www.cls.ioe.ac.uk/news.aspx?itemid=1661&itemTitle=More+than+one+i...

Second study:
[2741] Teicher, M. H., Anderson C. M., & Polcari A.
(2012).  Childhood maltreatment is associated with reduced volume in the hippocampal subfields CA3, dentate gyrus, and subiculum.
Proceedings of the National Academy of Sciences.
Full text available at http://www.pnas.org/content/early/2012/02/07/1115396109.abstract?sid=f73...

Third study:
[2734] Luby, J. L., Barch D. M., Belden A., Gaffrey M. S., Tillman R., Babb C., et al.
(2012).  Maternal support in early childhood predicts larger hippocampal volumes at school age.
Proceedings of the National Academy of Sciences.

Openness to experience – being flexible and creative, embracing new ideas and taking on challenging intellectual or cultural pursuits – is one of the ‘Big 5’ personality traits. Unlike the other four, it shows some correlation with cognitive abilities. And, like them, openness to experience does tend to decline with age.

However, while there have been many attempts to improve cognitive function in older adults, to date no one has tried to increase openness to experience. Naturally enough, one might think — it’s a personality trait, and we are not inclined to view personality traits as amenable to ‘training’. However, recently there have been some indications that personality traits can be changed, through cognitive interventions or drug treatments. In this new study, a cognitive training program for older adults also produced increases in their openness to experience.

The study involved 183 older adults (aged 60-94; average age 73), who were randomly assigned to a 16-week training program or a waiting-list control group. The program included training in inductive reasoning, and puzzles that relied in part on inductive reasoning. Most of this activity was carried out at home, but there were two 1-hour classroom sessions: one to introduce the inductive reasoning training, and one to discuss strategies for Sudoku and crosswords.

Participants came to the lab each week to hand in materials and pick up the next set. Initially, they were given crossword and Sudoku puzzles with a wide range of difficulty. Subsequently, puzzle sets were matched to each participant’s skill level (assessed from the previous week’s performance). Over the training period, the puzzles became progressively more difficult, with the steps tailored to each individual.

The inductive reasoning training involved learning to recognize novel patterns and use them to solve problems. In ‘basic series problems’, the problems required inference from a serial pattern of words, letters, or numbers. ‘Everyday serial problems’ included problems such as completing a mail order form and answering questions about a bus schedule. Again, the difficulty of the problems increased steadily over the training period.

Participants were asked to spend at least 10 hours a week on program activities, and according to the daily logs they filled in, they spent an average of 11.4 hours a week. In addition to the hopefully inherent enjoyment of the activities, those who recorded 10 hours were recognized on a bulletin board tally sheet and entered into a raffle for a prize.

Cognitive and personality testing took place 4-5 weeks prior to the program starting, and 4-5 weeks after program end. Two smaller assessments also took place during the program, at week 6 and week 12.

At the end of the program, those who had participated had significantly improved their pattern-recognition and problem-solving skills. This improvement went along with a moderate but significant increase in openness. Analysis suggested that this increase in openness occurred independently of improvement in inductive reasoning.

The benefits were specific to inductive reasoning and openness, with no significant effects on divergent thinking, processing speed, verbal ability, or the other Big 5 traits.

The researchers suggest that the carefully stepped training program was important in leading to increased openness, allowing the building of a growing confidence in their reasoning abilities. Openness to experience contributes to engagement and enjoyment in stimulating activity, and has also been linked to better health and decreased mortality risk. It seems likely, then, that increases in openness can be part of a positive feedback cycle, leading to greater and more sustained engagement in mentally stimulating activities.

The corollary is that decreases in openness may lead to declines in cognitive engagement, and then to poorer cognitive function. Indeed it has been previously suggested that openness to experience plays a role in cognitive aging.

Clearly, more research is needed to tease out how far these findings extend to other activities, and the importance of scaffolding (carefully designing cognitive activities on an individualized basis to support learning), but this work reveals an overlooked aspect to the issue of mental stimulation for preventing age-related cognitive decline.

A certain level of mental decline in the senior years is regarded as normal, but some fortunate few don’t suffer from any decline at all. The Northwestern University Super Aging Project has found seniors aged 80+ who match or better the average episodic memory performance of people in their fifties. Comparison of the brains of 12 super-agers, 10 cognitively-normal seniors of similar age, and 14 middle-aged adults (average age 58) now reveals that the brains of super-agers also look like those of the middle-aged. In contrast, brain scans of cognitively average octogenarians show significant thinning of the cortex.

The difference between the brains of super-agers and the others was particularly marked in the anterior cingulate cortex. Indeed, the super agers appeared to have a much thicker left anterior cingulate cortex than the middle-aged group as well. Moreover, the brain of a super-ager who died revealed that, although there were some plaques and tangles (characteristic, in much greater quantities, of Alzheimer’s) in the mediotemporal lobe, there were almost none in the anterior cingulate. (But note an earlier report from the researchers)

Why this region should be of special importance is somewhat mysterious, but the anterior cingulate is part of the attention network, and perhaps it is this role that underlies the superior abilities of these seniors. The anterior cingulate also plays a role in error detection and motivation; it will be interesting to see if these attributes are also important.

While the precise reason for the anterior cingulate to be critical to retaining cognitive abilities might be mysterious, the lack of cortical atrophy, and the suggestion that super-agers’ brains have much reduced levels of the sort of pathological damage seen in most older brains, adds weight to the growing evidence that cognitive aging reflects clinical problems, which unfortunately are all too common.

Sadly, there are no obvious lifestyle factors involved here. The super agers don’t have a lifestyle any different from their ‘cognitively average’ counterparts. However, while genetics might be behind these people’s good fortune, that doesn’t mean that lifestyle choices don’t make a big difference to those of us not so genetically fortunate. It seems increasingly clear that for most of us, without ‘super-protective genes’, health problems largely resulting from lifestyle choices are behind much of the damage done to our brains.

It should be emphasized that these unpublished results are preliminary only. This conference presentation reported on data from only 12 of 48 subjects studied.

Harrison, T., Geula, C., Shi, J., Samimi, M., Weintraub, S., Mesulam, M. & Rogalski, E. 2011. Neuroanatomic and pathologic features of cognitive SuperAging. Presented at a poster session at the 2011 Society for Neuroscience conference.

Certainly experiences that arouse emotions are remembered better than ones that have no emotional connection, but whether negative or positive memories are remembered best is a question that has produced equivocal results. While initial experiments suggested positive events were remembered better than negative, more recent studies have concluded the opposite.

The idea that negative events are remembered best is consistent with a theory that negative emotion signals a problem, leading to more detailed processing, while positive emotion relies more heavily on general scripts.

However, a new study challenges those recent studies, on the basis of a more realistic comparison. Rather than focusing on a single public event, to which some people have positive feelings while others have negative feelings (events used have included the OJ Simpson trial, the fall of the Berlin Wall, and a single baseball championship game), the study looked at two baseball championships each won by different teams.

The experiment involved 1,563 baseball fans who followed or attended the 2003 and 2004 American League Championship games between the New York Yankees (2003 winners) and the Boston Red Sox (2004 winners). Of the fans, 1,216 were Red Sox fans, 218 were Yankees fans, and 129 were neutral fans. (Unfortunately the selection process disproportionately collected Red Sox fans.)

Participants were reminded who won the championship before answering questions on each game. Six questions were identical for the two games: the final score for each team, the winning and losing pitchers (multiple choice of five pitchers for each team), the location of the game, and whether the game required extra innings. Participants also reported how vividly they remembered the game, and how frequently they had thought about or seen media concerning the game.

Both Yankee and Red Sox fans remembered more details about their team winning. They also reported more vivid memories for the games their team won. Accuracy and vividness were significantly correlated. Fans also reported greater rehearsal of the game their team won, and again, rehearsal and accuracy were significantly correlated.

Analysis of the data revealed that rehearsal completely mediated the correlation between accuracy and fan type, and partially mediated the correlation between vividness and fan type.

In other words, improved memory for emotion-arousing events has everything to do with how often you think about or are reminded of the event.
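
For the statistically minded, here’s a minimal sketch of what “rehearsal completely mediated the correlation between accuracy and fan type” means in practice. This is not the authors’ analysis code; the variable names, effect sizes and simulated data are all invented, and it uses a simple Baron-and-Kenny-style regression comparison purely for illustration:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1563
    # 1 = fan of the winning team, 0 = fan of the losing team (simplified)
    fan_type = rng.integers(0, 2, n)
    # Fans rehearse (think about, see media on) their team's win more often
    rehearsal = 2.0 * fan_type + rng.normal(size=n)
    # Recall accuracy is driven by rehearsal, not by fan type directly
    accuracy = 1.5 * rehearsal + rng.normal(size=n)

    # Total effect of fan type on accuracy (ignoring rehearsal)
    total = sm.OLS(accuracy, sm.add_constant(fan_type)).fit().params[1]
    # Direct effect of fan type once rehearsal is controlled for
    X = sm.add_constant(np.column_stack([fan_type, rehearsal]))
    direct = sm.OLS(accuracy, X).fit().params[1]

    print(f"total effect: {total:.2f}, direct effect: {direct:.2f}")
    # 'Complete mediation' shows up as the direct effect shrinking to ~0
    # once rehearsal is in the model.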

PTSD, for example, is the negative memory extreme. And PTSD is characterized by the unavoidable rehearsal of the event over and over again. Each repetition makes memory for the event stronger.

In the studies referred to earlier, media coverage provided a similarly unavoidable repetition.

While most people tend to recall more positive than negative events (and this tendency becomes greater with age), individuals who are depressed or anxious show the opposite tendency.

So whether positive or negative events are remembered better depends on you, as well as the event.

When it comes down to it, I'm not sure that asking whether positive or negative events are remembered better is really a helpful question. An interesting aspect of public events is that their portrayal often changes over time, but this is just a more extreme example of what happens with private events as well — as we change over time, so does our attitude toward those events. Telling friends about events, and receiving their comments on them, can affect our emotional response to those events, as well as our memory of them.

[2591] Breslin, C. W., & Safer M. A.
(2011).  Effects of Event Valence on Long-Term Memory for Two Baseball Championship Games.
Psychological Science. 22(11), 1408 - 1412.

Previous research has found that carriers of the so-called KIBRA T allele have been shown to have better episodic memory than those who don’t carry that gene variant (this is a group difference; it doesn’t mean that any carrier will remember events better than any non-carrier). A large new study confirms and extends this finding.

The study involved 2,230 Swedish adults aged 35-95. Of these, 1040 did not have a T allele, 932 had one, and 258 had two.  Those who had at least one T allele performed significantly better on tests of immediate free recall of words (after hearing a list of 12 words, participants had to recall as many of them as they could, in any order; in some tests, there was a concurrent sorting task during presentation or testing).

There was no difference between those with one T allele and those with two. The effect increased with increasing age. There was no effect of gender. There was no significant effect on performance of delayed category cued recall tests or a visuospatial task, although a trend in the appropriate direction was evident.

It should also be noted that the effect on immediate recall, although statistically significant, was not large.

Brain activity was studied in a subset of this group, involving 83 adults aged 55-60, plus another 64 matched on sex, age, and performance on the scanner task. A further group of 113 adults aged 65-75 was included for comparison purposes. While in the scanner, participants carried out a face-name association task. Having been presented with face-name pairs, participants were tested on their memory by being shown the faces with three letters, of which one was the initial letter of the name.

Performance on the scanner task was significantly higher for T carriers — but only for the 55-60 age group, not for the 65-75 age group. Activity in the hippocampus was significantly higher for younger T carriers during retrieval, but not encoding. No such difference was seen in the older group.

This finding is in contrast with an earlier, and much smaller, study involving 15 carriers and 15 non-carriers, which found higher activation of the hippocampus in non-T carriers. This was taken at the time to indicate some sort of compensatory activity. The present finding challenges that idea.

Although higher hippocampal activation during retrieval is generally associated with faster retrieval, the higher activity seen in T carriers was not fully accounted for by performance. It may be that such activity also reflects deeper processing.

KIBRA-T carriers were neither more nor less likely to carry other ‘memory genes’ — APOEe4; COMTval158met; BDNFval66met.

The findings, then, fail to support the idea that non-carriers engage compensatory mechanisms, but do indicate that the KIBRA-T gene helps episodic memory by improving hippocampal function.

BDNF gene variation predicts rate of age-related decline in skilled performance

In another study, this time into the effects of the BDNF gene, performance on an airplane simulation task on three annual occasions was compared. The study involved 144 pilots, all healthy Caucasian males aged 40-69, of whom 55 (38%) turned out to carry at least one copy of a BDNF gene containing the ‘met’ variant. This variant is less common, occurring in about one in three Asians, one in four Europeans and Americans, and about one in 200 sub-Saharan Africans.

While performance dropped with age for both groups, the rate of decline was much steeper for those with the ‘met’ variant. Moreover, there was a significant inverse relationship between age and hippocampal size in the met carriers — and no significant correlation between age and hippocampal size in the non-met carriers.

Comparison over a longer time-period is now being undertaken.

The finding is more evidence for the value of physical exercise as you age — physical activity is known to increase BDNF levels in your brain. BDNF levels tend to decrease with age.

The met variant has been linked to higher likelihood of depression, stroke, anorexia nervosa, anxiety-related disorders, suicidal behavior and schizophrenia. It differs from the more common ‘val’ variant in having methionine rather than valine at position 66 on this gene. The BDNF gene has been remarkably conserved across evolutionary history (fish and mammalian BDNF have around 90% agreement), suggesting that mutations in this gene are not well tolerated.

Math-anxiety can greatly lower performance on math problems, but just because you suffer from math-anxiety doesn’t mean you’re necessarily going to perform badly. A study involving 28 college students has found that some of the students anxious about math performed better than other math-anxious students, and such performance differences were associated with differences in brain activity.

Math-anxious students who performed well showed increased activity in fronto-parietal regions of the brain prior to doing math problems — that is, in preparation for it. Those students who activated these regions got an average 83% of the problems correct, compared to 88% for students with low math anxiety, and 68% for math-anxious students who didn’t activate these regions. (Students with low anxiety didn’t activate them either.)

The fronto-parietal regions activated included the inferior frontal junction, inferior parietal lobule, and left anterior inferior frontal gyrus — regions involved in cognitive control and reappraisal of negative emotional responses (e.g. task-shifting and inhibiting inappropriate responses). Such anticipatory activity in the fronto-parietal region correlated with activity in the dorsomedial caudate, nucleus accumbens, and left hippocampus during math activity. These sub-cortical regions (regions deep within the brain, beneath the cortex) are important for coordinating task demands and motivational factors during the execution of a task. In particular, the dorsomedial caudate and hippocampus are highly interconnected and thought to form a circuit important for flexible, on-line processing. In contrast, performance was not affected by activity in ‘emotional’ regions, such as the amygdala, insula, and hypothalamus.

In other words, what’s important is not your level of anxiety, but your ability to prepare yourself for it, and control your responses. What this suggests is that the best way of dealing with math anxiety is to learn how to control negative emotional responses to math, rather than trying to get rid of them.

Given that cognitive control and emotional regulation are slow to mature, it also suggests that these effects are greater among younger students.

The findings are consistent with a theory that anxiety hinders cognitive performance by limiting the ability to shift attention and inhibit irrelevant/distracting information.

Note that students in the two groups (high and low anxiety) did not differ in working memory capacity or in general levels of anxiety.

IQ has long been considered to be a fixed attribute, stable across our lifetimes. But in recent years, this assumption has come under fire, with evidence of the positive and negative effects education and experiences can have on people’s performance. Now a new (small) study provides a more direct challenge.

In 2004, 33 adolescents (aged 12-16) took IQ tests and had their brains scanned. These tests were repeated four years later. The teenagers varied considerably in their levels of ability (77-135 in 2004; 87-143 in 2008). While the average IQ score remained essentially the same (112; 113), there were significant changes in the two IQ scores for some individuals, with some participants gaining as much as 21 points, and others falling as much as 18 points. Clear change in IQ occurred for a third of the participants, and there was no obvious connection to specific attributes (it wasn’t, for example, that low performers improved while high performers declined).

These changes in performance correlated with structural changes in the brain. An increase in verbal IQ score correlated with an increase in the density of grey matter in an area of the left motor cortex of the brain that is activated when articulating speech. An increase in non-verbal IQ score correlated with an increase in the density of grey matter in the anterior cerebellum, which is associated with movements of the hand. Changes in verbal IQ and changes in non-verbal IQ were independent.

While I’d really like to see this study repeated with a much larger sample, the findings are entirely consistent with research showing increases in grey matter density in specific brain regions subsequent to specific training. The novel part of this is the correlation with such large changes in IQ.

The findings add to growing evidence that teachers shouldn’t be locked into beliefs about a student’s future academic success on the basis of past performance.

Postscript: I should perhaps clarify that IQ performance at each of these time points was age-normed: this is not a case of children just becoming 'smarter with age'.
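
To make the age-norming point concrete, here’s a toy illustration of how a deviation IQ works. All the raw scores and norms below are invented; the point is simply that the same raw score maps to different IQs depending on the test-taker’s age group, so a stable IQ already has improvement-with-age built in:

    def iq_from_raw(raw_score: float, age_group_mean: float, age_group_sd: float) -> float:
        """Deviation IQ: position relative to same-age peers, scaled to mean 100, SD 15."""
        z = (raw_score - age_group_mean) / age_group_sd
        return 100 + 15 * z

    # A raw score of 40 is above average against (hypothetical) age-12 norms,
    # but below average against the higher age-16 norms
    print(iq_from_raw(40, age_group_mean=35, age_group_sd=8))   # ~109
    print(iq_from_raw(40, age_group_mean=42, age_group_sd=8))   # ~96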

Brain imaging data from 103 healthy people aged 5-32, each of whom was scanned at least twice, has demonstrated that wiring to the frontal lobe continues to develop after adolescence.

The brain scans focused on 10 major white matter tracts. Significant changes in white matter tracts occurred in the vast majority of children and early adolescents, and these changes were mostly complete by late adolescence for projection and commissural tracts (projection tracts project from the cortex to non-cortical areas, such as the senses and the muscles, or from the thalamus to the cortex; commissural tracts cross from one hemisphere to the other). But association tracts (which connect regions within the same hemisphere) kept developing after adolescence.

This was particularly so for the inferior and superior longitudinal and fronto-occipital fasciculi (the inferior longitudinal fasciculus connects the temporal and occipital lobes; the superior longitudinal fasciculus connects the frontal lobe to the occipital lobe and parts of the temporal and parietal lobes). These frontal connections are needed for complex cognitive tasks such as inhibition, executive functioning, and attention.

The researchers speculated that this continuing development may be due to the many life experiences in young adulthood, such as pursuing post-secondary education, starting a career, independence and developing new social and family relationships.

But this continuing development wasn’t seen in everyone. Indeed, in some people, there was evidence of reductions, rather than growth, in white matter integrity. It may be that this is connected with the development of psychiatric disorders that typically develop in adolescence or young adulthood — perhaps directly, or because such degradation increases vulnerability to other factors (e.g., to drug use). This is speculative at the moment, but it opens up a new avenue to research.

[2528] Lebel, C., & Beaulieu C.
(2011).  Longitudinal Development of Human Brain Wiring Continues from Childhood into Adulthood.
The Journal of Neuroscience. 31(30), 10937 - 10947.

Mathematics is a complex cognitive skill, requiring years of formal study. But of course some math is much simpler than others. Counting is fairly basic; calculus is not. To what degree does ability at the simpler tasks predict ability at the more complex? None at all, it was assumed, but research with adolescents has found an association between math ability and simple number sense (or as it’s called more formally, the "Approximate Number System" or ANS).

A new study extends the finding to preschool children. The study involved 200 3- to 5-year-old children, who were tested on their number sense, mathematical ability and verbal ability. The number sense task required children to estimate which group had more dots, when seeing briefly presented groups of blue and yellow dots on a computer screen. The standardized test of early mathematics ability required them to verbally count items on a page, to tell which of two spoken number words was greater or lesser, to read Arabic numbers, as well as demonstrate their knowledge of number facts (such as addition or multiplication), calculation skills (solving written addition and subtraction problems) and number concepts (such as answering how many sets of 10 are in 100). The verbal assessment was carried out by parents and caregivers of the children.

The study found that those who could successfully tell when the difference between the groups was only one dot, also knew the most about Arabic numerals and arithmetic. In other words, the findings confirm that number sense is linked to math ability.

Because these preschoolers have not yet had formal math instruction, the conclusion being drawn is that this number sense is inborn. I have to say that seems to me rather a leap. Certainly number sense is seen in human infants and some non-human animals, and in that sense the ANS is assuredly innate. However what we’re talking about here is the differences in number sense — the degree to which it has been developed. I’d remind you of my recent report that preschoolers whose parents engage in the right number-talk develop an understanding of number earlier, and that such understanding affects later math achievement. So I think it’s decidedly premature to assume that some infants are born with a better number sense, as opposed to having the benefit of informal instruction that develops their number sense.

I think, rather, that the finding adds to the evidence that preschoolers’ experiences and environment have long-lasting effects on academic achievement.

There has been a lot of argument over the years concerning the role of genes in intelligence. The debate reflects the emotions involved more than the science. A lot of research has gone on, and it is indubitable that genes play a significant role. Most of the research however has come from studies involving twins and adopted children, so it is indirect evidence of genetic influence.

A new technique has now enabled researchers to directly examine 549,692 single nucleotide polymorphisms (SNPs — places where people have single-letter variations in their DNA) in each of 3,511 unrelated people (aged 18-90, but mostly older adults). This analysis produced an estimate of the size of the genetic contribution to individual differences in intelligence: 40% of the variation in crystallized intelligence and 51% of the variation in fluid intelligence. (See http://www.memory-key.com/memory/individual/wm-intelligence for a discussion of the difference)

The analysis also reveals that there is no ‘smoking gun’. Rather than looking for a handful of genes that govern intelligence, it seems that hundreds if not thousands of genes are involved, each in their own small way. That’s the trouble: each gene makes such a small contribution that no gene can be fingered as critical.
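
For readers curious how heritability can possibly be estimated from SNPs in unrelated people, here is a purely illustrative simulation. It uses Haseman-Elston regression (regressing pairs’ phenotype products on their genetic similarity) rather than the maximum-likelihood approach such studies typically use, and every number in it — sample size, SNP count, effect sizes — is made up. What it does show is how thousands of tiny SNP effects, none individually detectable, can add up to a sizeable variance estimate:

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, h2_true = 500, 2000, 0.5          # people, SNPs, simulated heritability

    freqs = rng.uniform(0.05, 0.95, m)
    genotypes = rng.binomial(2, freqs, size=(n, m)).astype(float)
    Z = (genotypes - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))  # standardized SNPs

    beta = rng.normal(0, np.sqrt(h2_true / m), m)    # many tiny effects, no 'smoking gun'
    y = Z @ beta + rng.normal(0, np.sqrt(1 - h2_true), n)
    y = (y - y.mean()) / y.std()

    A = Z @ Z.T / m                                  # genetic relationship matrix
    iu = np.triu_indices(n, k=1)                     # pairs of distinct individuals
    prod = np.outer(y, y)[iu]                        # phenotype cross-products
    h2_est = np.polyfit(A[iu], prod, 1)[0]           # slope ~ SNP heritability
    print(f"estimated h2 ~ {h2_est:.2f}")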

Discussions that involve genetics are always easily misunderstood. It needs to be emphasized that we are talking here about the differences between people. We are not saying that half of your IQ is down to your genes; we are saying that half the difference between you and another person (unrelated but with a similar background and education — study participants came from Scotland, England and Norway — that is, relatively homogenous populations) is due to your genes.

If the comparison was between, for example, a middle-class English person and someone from a poor Indian village, far less of any IQ difference would be due to genes. That is because the effects of environment would be so much greater.

These findings are consistent with the previous research using twins. The most important part of these findings is the confirmation it provides of something that earlier studies have hinted at: no single gene makes a significant contribution to variation in intelligence.

Once upon a time we made a clear difference between emotion and reason. Now increasing evidence points to the necessity of emotion for good reasoning. It’s clear the two are deeply entangled.

Now a new study has found that those with a higher working memory capacity (associated with greater intelligence) are more likely to automatically apply effective emotional regulation strategies when the need arises.

The study follows on from previous research that found that people with a higher working memory capacity suppressed expressions of both negative and positive emotion better than people with lower WMC, and were also better at evaluating emotional stimuli in an unemotional manner, thereby experiencing less emotion in response to those stimuli.

In the new study, participants were given a test, then given either negative or no feedback. A subsequent test, in which participants were asked to rate their familiarity with a list of people and places (some of which were fake), evaluated whether their emotional reaction to the feedback affected their performance.

This negative feedback was quite personal. For example: "your responses indicate that you have a tendency to be egotistical, placing your own needs ahead of the interests of others"; "if you fail to mature emotionally or change your lifestyle, you may have difficulty maintaining these friendships and are likely to form insecure relations."

The false items in the test were there to check for “over-claiming” — a reaction well known to make people feel better about themselves and control their reactions to criticism. Among those who received negative feedback, those with higher levels of WMC were found to over-claim the most. The people who over-claimed the most also reported, at the end of the study, the least negative emotions.

In other words, those with a high WMC were more likely to automatically use an emotion regulation strategy. Other emotional reappraisal strategies include controlling your facial expression or changing negative situations into positive ones. Strategies such as these are often more helpful than suppressing emotion.

Schmeichel, Brandon J.; Demaree, Heath A. 2010. Working memory capacity and spontaneous emotion regulation: High capacity predicts self-enhancement in response to negative feedback. Emotion, 10(5), 739-744.

Schmeichel, Brandon J.; Volokhov, Rachael N.; Demaree, Heath A. 2008. Working memory capacity and the self-regulation of emotional expression and experience. Journal of Personality and Social Psychology, 95(6), 1526-1540. doi: 10.1037/a0013345

A new perspective on learning comes from a study in which 18 volunteers had to push a series of buttons as fast as possible, developing their skill over three sessions. New analytical techniques were then used to see which regions of the brain were active at the same time. The analysis revealed that those who learned new sequences more quickly in later sessions were those whose brains had displayed more 'flexibility' in the earlier sessions — that is, different areas of the brain linked with different regions at different times.
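
A toy illustration of the ‘flexibility’ statistic may help: for each brain region, you track which network module it belongs to in successive time windows, and flexibility is, roughly, how often it switches allegiance from one window to the next. The module labels below are invented; the study derived them from community-detection analysis of imaging data:

    import numpy as np

    # Rows = brain regions, columns = time windows; values = network module labels
    module_assignments = np.array([
        [1, 1, 2, 2, 1],   # a flexible region: changes module in 2 of 4 transitions
        [1, 1, 1, 1, 1],   # a rigid region: never changes module
    ])

    # Flexibility = fraction of consecutive windows in which a region switches module
    switches = module_assignments[:, 1:] != module_assignments[:, :-1]
    flexibility = switches.mean(axis=1)
    print(flexibility)   # -> [0.5  0. ]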

At this stage, we don’t know how stable an individual’s flexibility is. It may be that individuals vary significantly over the course of time, and if so, this information could be of use in predicting the best time to learn.

But the main point is that the functional modules, the brain networks that are involved in specific tasks, are more fluid than we thought. This finding is in keeping, of course, with the many demonstrations of damage to one region being compensated by new involvement of another region.

[2212] Bassett, D. S., Wymbs N. F., Porter M. A., Mucha P. J., Carlson J. M., & Grafton S. T.
(2011).  Dynamic reconfiguration of human brain networks during learning.
Proceedings of the National Academy of Sciences. 108(18), 7641 - 7646.

What makes one person so much better than another in picking up a new motor skill, like playing the piano or driving or typing? Brain imaging research has now revealed that one of the reasons appears to lie in the production of a brain chemical called GABA, which inhibits neurons from responding.

The responsiveness of some brains to a procedure that decreases GABA levels (tDCS) correlated both with greater brain activity in the motor cortex and with faster learning of a sequence of finger movements. Additionally, those with higher GABA concentrations at the beginning tended to have slower reaction times and less brain activation during learning.

It’s simplistic to say that low GABA is good, however! GABA is a vital chemical. Interestingly, though, low GABA has been associated with stress — and of course, stress is associated with faster reaction times and relaxation with slower ones. The point is, we need it in just the right levels, and what’s ‘right’ depends on context. Which brings us back to ‘responsiveness’ — more important than actual level, is the ability of your brain to alter how much GABA it produces, in particular places, at particular times.

However, baseline levels are important, especially where something has gone wrong. GABA levels can change after brain injury, and also may decline with age. The findings support the idea that treatments designed to influence GABA levels might improve learning. Indeed, tDCS is already in use as a tool for motor rehabilitation in stroke patients — now we have an idea why it works.

[2202] Stagg, C J., Bachtiar V., & Johansen-Berg H.
(2011).  The Role of GABA in Human Motor Learning.
Current Biology. 21(6), 480 - 484.

Following previous research suggesting that the volume of the hippocampus was reduced in some people with chronic PTSD, a twin study indicated that this may not be simply a sign that stress has shrunk the hippocampus, but that those with a smaller hippocampus are at greater risk of PTSD. Now a new study has found that Gulf War veterans who recovered from PTSD had, on average, larger hippocampi than veterans who still suffer from PTSD. Those who recovered had hippocampi of similar size to control subjects who had never had PTSD.

The study involved 244 Gulf War veterans, of whom 82 had lifetime PTSD, 44 had current PTSD, and 38 had current depression.

Because we don’t know hippocampal size prior to trauma, the findings don’t help us decide whether hippocampal size is a cause or an effect (or perhaps it would be truer to say, don’t help us decide the relative importance of these factors, because it seems most plausible that both are significant).

The really important question, of course, is whether an effective approach to PTSD treatment would be to work on increasing hippocampal volume. Exercise and mental stimulation, for example, are known to increase the creation of new brain cells in the hippocampus. In the case of PTSD, the main mediator is probably the negative effect of stress (which reduces neurogenesis). There is some evidence that antidepressant treatment might increase hippocampal volume in people with PTSD.

The other conclusion we can derive from these findings is that perhaps we should not simply think of building hippocampal volume / creating new brain cells as a means of building cognitive reserve, thus protecting us from cognitive decline and dementia. We should also think of it as a means of improving our emotional resilience and protecting us from the negative effects of stress and trauma.

It’s well-established that feelings of encoding fluency are positively correlated with judgments of learning, so it’s been generally believed that people primarily use the simple rule, easily learned = easily remembered (ELER), to work out whether they’re likely to remember something (as discussed in the previous news report). However, new findings indicate that the situation is a little more complicated.

In the first experiment, 75 English-speaking students studied 54 Indonesian-English word pairs. Some of these were very easy, with the English words nearly identical to their Indonesian counterparts (e.g., Polisi-Police); others required more effort but had a connection that helped (e.g., Bagasi-Luggage); others were entirely dissimilar (e.g., Pembalut-Bandage).

Participants were allowed to study each pair for as long as they liked, then asked how confident they were about being able to recall the English word when supplied the Indonesian word on an upcoming test. They were tested at the end of their study period, and also asked to fill in a questionnaire which assessed the extent to which they believed that intelligence is fixed or changeable.

It’s long been known that theories of intelligence have important effects on people's motivation to learn. Those who believe each person possesses a fixed level of intelligence (entity theorists) tend to disengage when something is challenging, believing that they’re not up to the challenge. Those who believe that intelligence is malleable (incremental theorists) keep working, believing that more time and effort will yield better results.

The study found that those who believed intelligence is fixed did indeed follow the ELER heuristic, with their judgment of how well an item was learned nicely matching encoding fluency.

However those who saw intelligence as malleable did not follow the rule, but rather seemed to be following the reverse heuristic: that effortful encoding indicates greater engagement in learning, and thus is a sign that they are more likely to remember. This group therefore tended to be marginally underconfident of easy items, marginally overconfident for medium-level items, and significantly overconfident for difficult items.

However, the entanglement of item difficulty and encoding fluency weakens this finding, and accordingly a second experiment separated these two attributes.

In this experiment, 41 students were presented with two lists of nine words, one list of which was in small font (18-point Arial) and one in large font (48-point Arial). Each word was displayed for four seconds. While font size made no difference to their actual levels of recall, entity theorists were much more confident of recalling the large-size words than the small-size ones. The incremental theorists were not, however, affected by font-size.

It is suggested that the failure to find evidence of a ‘non-fluency heuristic’ in this case may be because participants had no control over learning time, therefore were less able to make relative judgments of encoding effort. Nevertheless, the main finding, that people varied in their use of the fluency heuristic depending on their beliefs about intelligence, was clear in both cases.

[2182] Miele, D. B., Finn B., & Molden D. C.
(2011).  Does Easily Learned Mean Easily Remembered?.
Psychological Science. 22(3), 320 - 324.

It’s well known that being too anxious about an exam can make you perform worse, and studies indicate that part of the reason for this is that your limited working memory is being clogged up with thoughts related to this anxiety. However for those who suffer from test anxiety, it’s not so easy to simply ‘relax’ and clear their heads. But now a new study has found that simply spending 10 minutes before the exam writing about your thoughts and feelings can free up brainpower previously occupied by testing worries.

In the first laboratory experiments, 20 college students were given two math tests. After the first test, the students were told that there would be a monetary reward for high marks — one that depended on the performance of both themselves and the student they had been paired with. They were then told that the other student had already sat the second test and improved their score, increasing the pressure. They were also told they’d be videotaped, and that their performance would be analyzed by teachers and students. Having thus upped the stakes considerably, half the students were given 10 minutes to write down any concerns they had about the test, while the other half were just given 10 minutes to sit quietly.

Under this pressure, the students who sat quietly did 12% worse on the second test. However those who wrote about their fears improved by 5%. In a subsequent experiment, those who wrote about an unrelated unemotional event did as badly as the control students (a drop of 7% this time, vs a 4% gain for the expressive writing group). In other words, it’s not enough to simply write, you need to be expressing your worries.

Moving out of the laboratory, the researchers then replayed their experiment in a 9th-grade classroom, in two studies involving 51 and 55 students sitting a biology exam. The students were scored for test anxiety six weeks before the exam. The control students were told to write about a topic that wouldn’t be covered in the exam (this being a common topic in one’s thoughts prior to an exam). It was found that those who scored high in test anxiety performed poorly in the control condition, but at the level of those low in test anxiety when in the expressive writing condition (improving their own performance by nearly a grade point). Those who were low in test anxiety performed at the same level regardless of what they wrote about prior to the exam.

One of the researchers, Sian Beilock, recently published a book on these matters: Choke: What the Secrets of the Brain Reveal About Getting It Right When You Have To

The issue of “mommy brain” is a complex one. Inconsistent research results make it clear that there is no simple answer to the question of whether or not pregnancy and infant care change women’s brains. But a new study adds to the picture.

Brain scans of 19 women two to four weeks and three to four months after they gave birth showed that grey matter volume increased by a small but significant amount in the midbrain (amygdala, substantia nigra, hypothalamus), prefrontal cortex, and parietal lobe. These areas are involved in motivation and reward, emotion regulation, planning, and sensory perception.

Mothers who were most enthusiastic about their babies were significantly more likely to show this increase in the midbrain regions. The authors speculated that the “maternal instinct” might be less of an instinctive response and more of a result of active brain building. Interestingly, while the brain’s reward regions don’t usually change as a result of learning, one experience that does have this effect is that of addiction.

While the reasons may have to do with genes, personality traits, infant behavior, or present circumstances, previous research has found that mothers who had more nurturing in their childhood had more grey matter in those brain regions involved in empathy and reading faces, which also correlated with the degree of activation in those regions when their baby cried.

A larger study is of course needed to confirm these findings.

A study involving 48 healthy adults aged 18-39 has found that extraverts who were deprived of sleep for 22 hours after spending 12 hours in group activities performed worse on a vigilance task than did those extraverts who engaged in the same activities on their own in a private room. Introverts were relatively unaffected by the degree of prior social interaction.

The researchers suggest that social interactions are cognitively complex experiences that may lead to rapid fatigue in brain regions that regulate attention and alertness, and (more radically) that introverts may have higher levels of cortical arousal, giving them greater resistance to sleep deprivation.

Rupp TL; Killgore WDS; Balkin TJ. Socializing by day may affect performance by night: vulnerability to sleep deprivation is differentially mediated by social exposure in extraverts vs introverts. SLEEP 2010;33(11):1475-1485.

A review of brain imaging and occupation data from 588 patients diagnosed with frontotemporal dementia has found that among the dementias affecting those 65 years and younger, FTD is as common as Alzheimer's disease. The study also found that the side of the brain first attacked (unlike Alzheimer’s, FTD typically begins with tissue loss in one hemisphere) is influenced by the person’s occupation.

Using occupation scores that reflect the type of skills emphasized, they found that patients with professions rated highly for verbal skills, such as school principals, had greater tissue loss on the right side of the brain, whereas those rated low for verbal skills, such as flight engineers, had greater tissue loss on the left side of the brain. This effect was expressed most clearly in the temporal lobes of the brain. In other words, the side of the brain least used in the patient's professional life was apparently the first attacked.

These findings are in keeping with the theory of cognitive reserve, but may be due to some asymmetry in the brain that both inclines them to a particular occupational path and renders the relatively deficient hemisphere more vulnerable in later life.

‘Working memory’ is thought to consist of three components: one concerned with auditory-verbal processing, one with visual-spatial processing, and a central executive that controls both. It has been hypothesized that the relationships between the components change as children develop. Very young children are more reliant on visuospatial processing, but later the auditory-verbal module becomes more dominant.

It has also been found that the two sensory modules are not strongly associated in younger (5-8) American children, but are strongly associated in older children (9-12). The same study found this pattern in Laotian children too, but not in children from the Congo, none of whom showed a strong association between visual and auditory working memory.

Now a new study has found that Ugandan children showed greater dominance of the auditory-verbal module, particularly among the older children (8½+); however, the visuospatial module was dominant among Senegalese children, both younger and older. It is hypothesized that these cultural differences are a product of literacy training — school enrolment was much less consistent among the Senegalese. But there may also be a link to nutritional status.

No surprise to me (I’m hopeless at faces), but a twin study has found that face recognition is heritable, and that it is inherited separately from IQ. The findings provide support for a modular concept of the brain, suggesting that some cognitive abilities, like face recognition, are shaped by specialist genes rather than generalist genes. The study used 102 pairs of identical twins and 71 pairs of fraternal twins aged 7 to 19 from Beijing schools to calculate that 39% of the variance between individuals on a face recognition task is attributable to genetic effects. In an independent sample of 321 students, the researchers found that face recognition ability was not correlated with IQ.
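
As an aside for those wondering where a figure like “39% of the variance” comes from in a twin design: the classic logic compares how similar identical twins are with how similar fraternal twins are. The study’s actual estimate came from model-fitting, but the back-of-envelope version (Falconer’s formula) looks like this, with invented correlations chosen to land on 0.39:

    def falconer_h2(r_identical: float, r_fraternal: float) -> float:
        """Falconer's formula: heritability is twice the gap between twin-type correlations."""
        return 2 * (r_identical - r_fraternal)

    # If identical twins correlated .55 on the face task and fraternal twins .355:
    print(falconer_h2(0.55, 0.355))   # ~0.39, i.e. 39% of variance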

Zhu, Q. et al. 2010. Heritability of the specific cognitive ability of face perception. Current Biology, 20 (2), 137-142.

You may think that telling students to strive for excellence is always a good strategy, but it turns out not to be quite that simple. A series of four experiments looking at how students' attitudes toward achievement influenced their performance on various tasks found that while those with high achievement motivation did better on a task when they were also exposed to subconscious "priming" related to winning, mastery or excellence, those with low achievement motivation did worse. Similarly, when given a choice, those with high achievement motivation were more likely to resume an interrupted task which they were told tested their verbal reasoning ability. However, those with high achievement motivation did worse on a word-search puzzle when they were told the exercise was fun. The findings point to the fact that people have different goals (e.g., achievement vs enjoyment), and that effective motivation requires this to be taken into account.

[730] Hart, W., & Albarracín D.
(2009).  The effects of chronic achievement motivation and achievement primes on the activation of achievement and fun goals..
Journal of Personality and Social Psychology. 97(6), 1129 - 1141.

More data from the National Survey of Midlife Development in the United States has revealed that cognitive abilities reflect to a greater extent how old you feel, not how old you actually are. Of course that may be because cognitive ability contributes to a person’s wellness and energy. But it also may reflect benefits of trying to maintain a sense of youthfulness by keeping up with new trends and activities that feel invigorating.

[171] Schafer, M. H., & Shippee T. P.
(2009).  Age Identity, Gender, and Perceptions of Decline: Does Feeling Older Lead to Pessimistic Dispositions About Cognitive Aging?.
The Journals of Gerontology Series B: Psychological Sciences and Social Sciences. 65B(1), 91 - 96.

An analysis technique using artificial neural networks has revealed that the most important factors for predicting whether amnestic mild cognitive impairment (MCI-A) would develop into Alzheimer’s within 2 years were hyperglycemia, female gender and having the APOE4 gene (in that order). These were followed by scores on attentional and short-term memory tests.
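
A minimal sketch of the general approach may be useful. This is not the authors’ pipeline (their network architecture and validation procedure aren’t described here): the data below is simulated, the feature list simply mirrors the reported factors, and feature importance is assessed by permutation rather than the authors’ method:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(2)
    n = 400
    X = np.column_stack([
        rng.normal(size=n),            # hyperglycemia (glucose level)
        rng.integers(0, 2, n),         # female gender
        rng.integers(0, 2, n),         # APOE4 carrier
        rng.normal(size=n),            # attention test score
        rng.normal(size=n),            # short-term memory score
    ])
    # Simulated outcome, weighted loosely to mirror the reported ordering
    logits = 1.2 * X[:, 0] + 0.9 * X[:, 1] + 0.7 * X[:, 2] + 0.4 * X[:, 3] + 0.3 * X[:, 4]
    y = (logits + rng.normal(size=n) > 0).astype(int)   # converted to Alzheimer's?

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
    imp = permutation_importance(clf, X, y, n_repeats=20, random_state=0)
    for name, score in zip(["hyperglycemia", "female", "APOE4", "attention", "memory"],
                           imp.importances_mean):
        print(f"{name}: {score:.3f}")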

Tabaton, M. et al. 2010. Artificial Neural Networks Identify the Predictive Values of Risk Factors on the Conversion of Amnestic Mild Cognitive Impairment. Journal of Alzheimer's Disease, 19 (3), 1035-1040.

Three experiments involving students who had lived abroad and those who hadn't found that those who had experienced a different culture demonstrated greater creativity — but only when they first recalled a multicultural learning experience from their life abroad. Specifically, doing so (a) improved idea flexibility (e.g., the ability to solve problems in multiple ways), (b) increased awareness of underlying connections and associations, and (c) helped overcome functional fixedness. The study also demonstrated that it was learning about the underlying meaning or function of behaviors in the multicultural context that was particularly important for facilitating creativity.

[1622] Maddux, W. W., Adam H., & Galinsky A. D.
(2010).  When in Rome ... Learn Why the Romans Do What They Do: How Multicultural Learning Experiences Facilitate Creativity.
Personality and Social Psychology Bulletin. 731 - 741.

Full text is available free for a limited time at http://psp.sagepub.com/cgi/reprint/36/6/731

Why do women tend to be better than men at recognizing faces? Two recent studies give a clue, and also explain inconsistencies in previous research, some of which has found that face recognition mainly happens in the right-hemisphere fusiform face area, and some that it occurs bilaterally.

One study found that, while men tended to process face recognition in the right hemisphere only, women tended to process the information in both hemispheres. Another study found that both women and gay men tended to use both sides of the brain to process faces (making them faster at retrieving faces), while heterosexual men tended to use only the right. It also found that homosexual males have better face recognition memory than heterosexual males and homosexual women, and that women have better face processing than men. Additionally, left-handed heterosexual participants had better face recognition abilities than left-handed homosexuals, and also tended to be better than right-handed heterosexuals.

In other words, bilaterality (using both sides of your brain) seems to make you faster and more accurate at recognizing people, and bilaterality is less likely in right-handers and heterosexual males (and perhaps homosexual women). Previous research has shown that homosexual individuals are 39% more likely to be left-handed.

Proverbio AM, Riva F, Martin E, Zani A (2010) Face Coding Is Bilateral in the Female Brain. PLoS ONE 5(6): e11242. doi:10.1371/journal.pone.0011242

Brewster, P.W.H., Mullin, C.R., Dobrin, R.A. & Steeves, J.K.E. 2010. Sex differences in face processing are mediated by handedness and sexual orientation. Laterality: Asymmetries of Body, Brain and Cognition.

A new study challenges the popular theory that expertise is simply a product of tens of thousands of hours of deliberate practice. Not that anyone claims such practice isn't necessary; it may simply not be sufficient. A study looking at pianists' ability to sight-read music reveals that working memory capacity helps sight-reading regardless of how much someone has practiced.

The study involved 57 volunteers who had played piano for an average of 18.6 years (range: one to 57 years). Their estimated hours of overall practice ranged from 260 to 31,096 (average: 5,806), and hours of sight-reading practice ranged from zero to 9,048 (average: 1,487). Statistical analysis revealed that although hours of practice was the most important factor, working memory capacity independently accounted for a small but significant amount of the variance between individuals.
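
A standard way to test such an independent contribution is hierarchical regression: fit sight-reading skill on practice hours alone, then add WMC and ask whether the gain in explained variance is significant. A minimal sketch (not the authors' actual analysis; file and variable names are hypothetical):

# Hypothetical sketch: does working memory capacity (WMC) explain sight-reading
# variance beyond hours of practice? Compare nested regression models.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pianists.csv")   # hypothetical: sight_reading, practice_hours, wmc

base = smf.ols("sight_reading ~ practice_hours", data=df).fit()
full = smf.ols("sight_reading ~ practice_hours + wmc", data=df).fit()

print(f"R2, practice only:  {base.rsquared:.3f}")
print(f"R2, practice + WMC: {full.rsquared:.3f}")
# F-test on the nested models: is the gain in R2 significant?
print(full.compare_f_test(base))   # (F statistic, p-value, df difference)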

It is interesting that not only did WMC have an effect independent of hours of practice, but hours of practice apparently had no effect on WMC — although the study was too small to tell whether a lot of practice at an early age might have affected WMC (previous research has indicated that music training can increase IQ in children).

The study is also too small to properly judge the effects of the 10,000 hours of deliberate practice claimed necessary for expertise: the researchers did not report how many participants had reached that level, but the numbers suggest it was few.

It should also be noted that an earlier study involving 52 accomplished pianists found no effect of WMC on sight-reading ability. It did, however, find a related effect: the ability to tap two fingers rapidly in alternation, and to press a computer key quickly in response to visual and acoustic cues, was unrelated to practice but correlated positively with sight-reading skill.

Nevertheless, the findings are interesting, and do agree with what I imagine is the ‘commonsense’ view: yes, becoming an expert is all about the hours of effective practice you put in, but there are intellectual qualities that also matter. The question is: do they matter once you’ve put in the requisite hours of good practice?

Analysis of 30 years of SAT and ACT tests administered to the top 5% of U.S. 7th graders has found that the ratio of 7th graders scoring 700 or above on the SAT-math has dropped from about 13 boys to 1 girl to about 4 boys to 1 girl. The ratio dropped dramatically between 1981 and 1995, and has remained relatively stable since then. The top scores on scientific reasoning, a relatively new section of the ACT that was not included in the original study, show a similar ratio of boys to girls.

A new analysis of data first published in 2002 in a controversial book called IQ and the Wealth of Nations (and expanded in 2006) argues that national differences in IQ are best explained not by differences in national wealth (the original researchers’ explanation), but by the toll of infectious diseases. The idea is that energy used to fight infection is energy taken from brain development in children. Using 2004 data on infectious disease burden from the World Health Organization, plus factors that have been linked to national IQ, such as nutrition, literacy, education, gross domestic product, and temperature, the analysis revealed that infectious disease burden was more closely correlated with average IQ than the other variables, alone accounting for 67% of the worldwide variation in intelligence. The researchers also suggest that the Flynn effect (the rise in IQs seen in developed countries during the 20th century) may be caused in part by the decreasing intensity of infectious diseases as nations develop.

Eppig, C., Fincher, C.L. & Thornhill, R. 2010. Parasite prevalence and the worldwide distribution of cognitive ability. Proceedings of the Royal Society B: Biological Sciences.
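
For context, "accounting for 67% of the variation" is a statement about r-squared: it implies a zero-order correlation of roughly r = 0.82 (negative in sign, since more disease predicts lower measured IQ). A minimal sketch of the kind of cross-national comparison described, with hypothetical file and column names:

# Hypothetical sketch: compare how strongly each national-level variable
# correlates with average national IQ. Column names are illustrative.
import pandas as pd

df = pd.read_csv("nations.csv")
predictors = ["disease_burden", "nutrition", "literacy", "education", "gdp", "temperature"]

for col in predictors:
    r = df[col].corr(df["mean_iq"])          # Pearson correlation
    print(f"{col:>15}: r = {r:+.2f}, r^2 = {r * r:.2f}")
# On the study's numbers, disease burden alone gives r^2 of about 0.67.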

A study involving 54 older adults (66-76) and 58 younger adults (18-35) challenges the idea that age itself causes people to become more risk-averse and to make poorer decisions. Analysis revealed that it is individual differences in processing speed and memory that affect decision quality, not age. The stereotype has no doubt arisen because slower processing and poorer memory are more common among older people. The finding points to the need to present information in ways that reduce the demand on memory or the need to process information quickly, enabling those who need such help (both young and old) to make the best choices. Self-knowledge also helps: recognizing when you need to take more time to make a decision.

A study assessing the performance of 200 people on a simulated freeway driving task, with and without a concurrent cell phone conversation that involved memorizing words and solving math problems, has found that, as expected, performance on both tasks was significantly impaired for most people. However, for a very few, performance on these tasks was unaffected (indeed, their performance on the memory task improved!). These few people, five of them (2.5%), also performed substantially better on these tasks when performing them singly.

Watson, J.M. & Strayer, D.L. 2010. Supertaskers: Profiles in extraordinary multitasking ability. Psychonomic Bulletin and Review. In Press.

Full text is available at http://www.psych.utah.edu/lab/appliedcognition/publications/supertaskers...

Examination of the brains of 9 “super-aged” individuals (people over 80 whose memory performance was at the level of 50-year-olds) has found that some of them had almost no tau tangles. The accumulation of tau tangles has been thought to be a natural part of the aging process; an excess of them is linked to Alzheimer’s disease. The next step is to work out why some people are immune to tangle formation, while others appear immune to their effects. Perhaps the first group is genetically protected, while the others are reaping the benefits of a preventive lifestyle.

The findings were presented March 23 at the 239th National Meeting of the American Chemical Society (ACS).

A study involving 136 healthy institutionalized infants (average age 21 months) from six orphanages in Bucharest, Romania, has found that those randomly assigned to a foster care program showed rapid increases in height and weight (but not head circumference), so that by 12 months, all of them were in the normal range for height, 90% were in the normal range for weight, and 94% were in the normal range of weight for height. Caregiving quality (particularly sensitivity and positive regard for the child, including physical affection) positively correlated with catch-up. Children whose height caught up to normal levels also appeared to improve their cognitive abilities. Each incremental increase of one in standardized height scores between baseline and 42 months was associated with an average increase of 12.6 points in verbal IQ.

A survey of 824 undergraduate students has found that those who were evening types had lower average grades than those who were morning types.

The finding was presented at SLEEP 2008, the 22nd Annual Meeting of the Associated Professional Sleep Societies (APSS).

Older news items (pre-2010) brought over from the old website

Learning styles challenged

A review of the research on learning styles finds that although numerous studies have claimed to show the existence of different kinds of learners, nearly all of them fail to satisfy key criteria for scientific validity: in particular, randomly assigning learners classified by their "style" to one of several different learning methods (implicit in the idea of learning styles is the concept that individuals differ in which mode of instruction or study is most effective for them). Of the few studies that did, some provided evidence flatly contradicting the meshing hypothesis (the most common hypothesis, which holds that instruction is best provided in a format matching the learner's preferences), and the few findings in line with the idea did not assess popular learning-style schemes (71 different models of learning styles have been proposed over the years).

The reviewers do not contest that people have preferences in how information is presented to them, that people differ in the degree to which they use different processing modes, or that there might be untested learning styles with significant effects. They argue, however, that the lack of evidence for the postulated interaction effect is good reason not to spend limited education resources in this area, when they would be better devoted to educational practices with a strong evidence base.

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2009). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, 9(3), 105-119.

http://www.eurekalert.org/pub_releases/2009-12/afps-lsd121609.php
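
The key criterion the reviewers demand amounts to testing a style-by-method interaction: classify learners by style, randomly assign them across instruction methods, and ask whether matched instruction outperforms mismatched instruction. A minimal sketch of that test (file and column names hypothetical):

# Hypothetical sketch: the meshing hypothesis predicts a style-by-method
# interaction (e.g., "visual" learners doing better with visual instruction).
# A two-way ANOVA tests exactly that term.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("learning_study.csv")   # columns: style, method, score

model = smf.ols("score ~ C(style) * C(method)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
# Meshing predicts a significant C(style):C(method) interaction;
# the review found little credible evidence of one.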

Insight into the processes of 'positive' and 'negative' learners

An intriguing study of the electrical signals emanating from the brain has revealed two types of learners. A brainwave event called an "event-related potential" (ERP) is important in learning; a particular type of ERP, called "error-related negativity" (ERN), is associated with activity in the anterior cingulate cortex. This region is activated during demanding cognitive tasks, and ERNs are typically more negative after participants make incorrect responses than after correct choices. Unexpectedly, studies of this ERN found a difference between "positive" learners, who perform better at choosing the correct response than at avoiding the wrong one, and "negative" learners, who learn better to avoid incorrect responses. The negative learners showed larger ERNs, suggesting that "these individuals are more affected by, and therefore learn more from, their errors." Positive learners had larger ERNs when faced with high-conflict win/win decisions between two good options than during lose/lose decisions between two bad options, whereas negative learners showed the opposite pattern.

Frank, M.J., Woroch, B.S. & Curran, T. 2005. Error-Related Negativity Predicts Reinforcement Learning and Conflict Biases. Neuron, 47, 495-501.

http://www.eurekalert.org/pub_releases/2005-08/cp-iit081205.php
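
For readers unfamiliar with the measure: an ERN is computed by averaging response-locked EEG epochs separately for error and correct trials and taking the difference wave at a fronto-central electrode. A minimal sketch, with hypothetical files, shapes, and sampling rate:

# Hypothetical sketch of how an ERN is quantified. Assumes epochs are already
# extracted for one fronto-central channel (e.g., FCz), time-locked so that
# sample 0 is the moment of the response.
import numpy as np

fs = 250                                  # sampling rate in Hz (illustrative)
epochs = np.load("fcz_epochs.npy")        # hypothetical: (n_trials, n_samples)
is_error = np.load("is_error.npy")        # hypothetical: boolean, one per trial

erp_error = epochs[is_error].mean(axis=0)
erp_correct = epochs[~is_error].mean(axis=0)
difference = erp_error - erp_correct      # ERN appears as a negative deflection

# ERN is typically scored as the peak negativity ~0-100 ms after the response.
window = slice(0, int(0.100 * fs))
print(f"ERN amplitude: {difference[window].min():.2f} microvolts")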
