Latest Research News
The idea that bilingual children have superior executive function compared to monolingual children has been challenged in recent research. Executive function controls your attention, and helps with such tasks as remembering instructions, controlling responses, and shifting swiftly between tasks. It is positively correlated with children's academic achievement.
However, executive function is a complex construct, with several different components. It has been suggested that inconsistent research findings as to the advantage of bilingualism may be related to differences in how executive function is measured and conceptualized.
A new German study aims to address these issues through its methodology and analysis.
The study compared 242 children (aged 5-15) who spoke both Turkish and German, and 95 children who spoke only German. The children’s executive function was tested using a computerized task called Hearts and Flowers, which required the child to press a different key in response to stimuli on the screen, depending on the condition. The congruent condition matched the key to the location of the heart stimulus; the incongruent condition required the child to press the key on the opposite side to where the flower stimulus appeared; the mixed condition tested the ability of the child to use the correct rule depending on which stimulus appeared.
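The response rules of the task can be sketched in a few lines. The key names and stimulus encoding below are illustrative assumptions, not details from the study:

```python
# A minimal sketch of the Hearts and Flowers response rules described above.
# Key names and stimulus encoding are illustrative, not taken from the study.

def correct_key(stimulus: str, side: str) -> str:
    """Return the correct key for a trial.

    stimulus: "heart" or "flower"
    side:     "left" or "right" (where the stimulus appears on screen)
    """
    if stimulus == "heart":
        return side                                   # congruent rule: same side
    if stimulus == "flower":
        return "right" if side == "left" else "left"  # incongruent rule: opposite side
    raise ValueError(f"unknown stimulus: {stimulus}")

# The mixed condition interleaves both stimuli, so the child must select
# the correct rule afresh on every trial:
mixed_block = [("heart", "left"), ("flower", "left"), ("heart", "right")]
responses = [correct_key(stim, side) for stim, side in mixed_block]
```

The cognitive demand of the mixed condition is visible in the code: the rule to apply cannot be fixed in advance but must be chosen per trial.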
The study found no significant differences in executive function between the two groups, after taking into account maternal education, child gender, age, and working memory (digit span backwards).
The researchers also took into account children's German and Turkish vocabulary size and exposure to both languages, factors whose absence had drawn criticism of previous studies on the topic.
Paper available at https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0209981
Jaekel N, Jaekel J, Willard J, Leyendecker B (2019) No evidence for effects of Turkish immigrant children's bilingualism on executive functions. PLoS ONE 14(1): e0209981. https://doi.org/10.1371/journal.pone.0209981
A Spanish study investigating the effects of traffic-related air pollution on children walking to school has found that higher levels of particulate matter and black carbon were associated with decreased growth in working memory capacity. Working memory capacity grows during childhood (and tends to fall in old age).
The study involved 1,234 children aged 7-10, from 39 schools across the city of Barcelona. The children were tested four times over a year to establish their developmental trajectories in working memory and inattentiveness. Average particulate matter, black carbon, and nitrogen dioxide, were estimated for the children’s walking routes using standard measures.
None of the pollutants were associated with inattentiveness. The effect of NO2 on working memory was inconclusive. However, increased concentrations of particulate matter and black carbon were associated with a reduction in the annual growth of working memory of 4.6% and 3.9%, respectively. Boys were more affected than girls.
The study followed an earlier study showing that exposure to traffic-related pollutants in schools was associated with slower cognitive development. Research has previously shown that 20% of a child's daily dose of black carbon (which is directly related to traffic) is inhaled during urban commutes.
The finding emphasizes that even “short exposures to very high concentrations of pollutants can have a disproportionately high impact on health”, and this may be especially true for children, with their smaller lung capacity and higher breathing rate.
The researchers emphasize that the solution for parents is not to stop children walking to school, since those who commute by car or public transport are also exposed to the pollution. Rather, the aim should be to try to find (or make) less polluted, low-traffic paths to school.
A Canadian study involving French-speaking university students has found that repeating aloud, especially to another person, improves memory for words.
In the first experiment, 20 students read a series of words while wearing headphones that emitted white noise, in order to mask their own voices and eliminate auditory feedback. Four actions were compared:
- repeating silently in their head
- repeating silently while moving their lips
- repeating aloud while looking at the screen
- repeating aloud while looking at someone.
They were tested on their memory of the words after a distraction task. The memory test only required them to recognize whether or not the words had occurred previously.
There was a significant effect of condition on memory: recognition was worst in the first condition (silent repetition) and best in the last (repeating aloud to another person), with performance increasing in the order listed above.
In the second experiment, 19 students went through the same process, except that the stimuli were pseudo-words. In this case, there was no memory difference between the conditions.
The effect is thought to be due to the benefits of motor sensory feedback, but the memory benefit of directing your words at a person rather than a screen suggests that such feedback goes beyond the obvious. Visual attention appears to be an important memory enhancer (no great surprise when we put it that way!).
Most of us have long ago learned that explaining something to someone really helps our own understanding (or demonstrates that we don’t in fact understand it!). This finding supports another, related, experience that most of us have had: the simple act of telling someone something helps our memory.
A study involving 120 mice found that mice tasked with remembering where food had been hidden did better if they had been given a novel experience (exploring an unfamiliar floor surface) 30 minutes after being trained to remember the food location.
This memory improvement also occurred when the novel experience was replaced by the selective activation of dopamine-carrying neurons in the locus coeruleus that go to the hippocampus. The locus coeruleus is located in the brain stem and involved in several functions that affect emotion, anxiety levels, sleep patterns, and memory. The dopamine-carrying neurons in the locus coeruleus appear to be especially sensitive to environmental novelty.
In other words, if we’re given attention-grabbing experiences that trigger these LC neurons carrying dopamine to the hippocampus at around the time of learning, our memories will be stronger.
Now, we already know that emotion helps memory, but what this new study tells us is that these dopamine-triggering experiences don't have to be dramatic; the mice, after all, were simply given a new environment to explore. It's suggested that it could be as simple as playing a new video game during a quick break while studying for an exam, or playing tennis right after trying to memorize a big speech.
Remember that we’re designed to respond to novelty, to pay it more attention — and, it seems, that attention is extended to more mundane events that occur closely in time.
Emotionally positive situations boost memory for similar future events
In a similar vein, a human study has found that the benefits of reward extend forward in time.
In the study, volunteers were shown images from two categories (objects and animals), and were financially rewarded for one of these categories. As expected, they remembered images associated with a reward better. In a second session, however, they were shown new images of animals and objects without any reward. Participants still remembered the previously positively-associated category better.
Now, this doesn’t seem in any way surprising, but the interesting thing is that this benefit wasn’t seen immediately, but only after 24 hours — that is, after participants had slept and consolidated the learning.
Previous research has shown similar results when semantically related information has been paired with negative, that is, aversive stimuli.
Four studies involving a total of more than 300 younger adults (20-24) have looked at information processing on different forms of media. They found that reading on digital platforms such as tablets and laptops may make you more inclined to focus on concrete details rather than interpreting information more abstractly.
As much as possible, the material was presented on the different media in identical format.
In the first study, 76 students were randomly assigned to complete the Behavior Identification Form on either an iPad or a print-out. The Form assesses an individual's current preference for concrete or abstract thinking. Respondents have to choose one of two descriptions for a particular behavior — e.g., for “making a list”, the choice of description is between “getting organized” and “writing things down”. The form presents 25 items.
There was a marked difference between those filling out the form on the iPad vs on a physical print-out, with non-digital users showing a significantly higher preference for abstract descriptions than digital users (mean of 18.56 vs 13.75).
In the other three studies, the digital format was always a PDF on a laptop. In the first of these, 81 students read a short story by David Sedaris, then answered 24 multiple-choice questions on it, half abstract and half concrete. Digital readers scored significantly lower on abstract questions (48% vs 66%), and higher on concrete questions (73% vs 58%).
In the next study, 60 students studied a table of information about four fictitious Japanese car models for two minutes, before being required to select the superior model. While one model was objectively superior with regard to the attributes and attribute ratings, the amount of detail meant (as previous research has shown) that those employing top-down “gist” processing do better than those using a bottom-up, detail-oriented approach. On this problem, 66% of the non-digital readers correctly chose the superior model, compared to 43% of the digital readers.
In the final study, 119 students performed the same task as in the preceding study, but all viewed the table on a laptop. Before viewing the table, however, some were assigned to one of two priming activities: a high-level task aimed at activating more abstract thinking (thinking about why they might pursue a health goal), or a low-level task aimed at activating more concrete thinking (thinking about how to pursue the same goal).
Being primed to think more abstractly did seem to help these digital users, with 48% of this group correctly answering the car judgment problem, compared to only 25% of those given the concrete priming activity, and 30% of the control group.
I note that the performance of the control group is substantially below that of the digital users in the previous study, despite no apparent change in methodology. However, the paper doesn't note or explain this discrepancy, so I don't know its cause. It does lead me not to put too much weight on the idea that priming can help.
However, the findings do support the view that reading on digital devices does encourage a more concrete style of thinking, reinforcing the idea that we are inclined to process information more shallowly when we read it from a screen.
Of course, this is, as the researchers point out, not an indictment. Sometimes, this is the best way to approach certain tasks. But what it does suggest is that we need to consider what sort of processing is desirable, and modify our strategy accordingly. For example, you may find it helpful to print out material that requires a high level of abstract thinking, particularly if your degree of expertise in the subject means that it carries a high cognitive load.
Kaufman, G., & Flanagan, M. (2016). High-Low Split: Divergent Cognitive Construal Levels Triggered by Digital and Non-digital Platforms. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1–5. doi:10.1145/2858036.2858550 http://dl.acm.org/citation.cfm?doid=2858036.2858550
A sleep study involving 28 participants had them follow a controlled sleep/wake schedule for three weeks before staying in a sleep laboratory for 4.5 days, during which time they experienced a cycle of sleep deprivation and recovery in the absence of seasonal cues such as natural light, time information and social interaction. The same participants went through this entire procedure several times over some 18 months. Brain activity was assessed while participants undertook an n-back working memory task, and a task that tested sustained attention.
While performance on these tasks didn't change with the seasons, the amount of effort needed to accomplish them did. Brain activity involved in sustained attention (especially in the thalamus, amygdala and hippocampus) was highest in the summer and lowest in the winter. Brain activity associated with working memory (especially the pulvinar, insula, prefrontal and frontopolar regions), was higher in the fall and lower in the spring.
Seasonality, therefore, could be one factor in cognitive differences that occur for an individual tested at different times.
Participants were healthy young adults; it would be interesting to see if the same results are found in older adults. It's possible that the effects are greater.
A study involving 218 participants aged 18-88 has looked at the effects of age on the brain activity of participants viewing an edited version of a 1961 Hitchcock TV episode (given that participants viewed it while in an MRI machine, the 25-minute episode was condensed to 8 minutes).
While many studies have looked at how age changes brain function, the stimuli used have typically been quite simple. This thriller-type story provides more complex and naturalistic stimuli.
Younger adults' brains responded to the TV program in a very uniform way, while older adults showed much more idiosyncratic responses. The TV program (“Bang! You're dead”) has previously been shown to induce widespread synchronization of brain responses (such movies are, after all, designed to focus attention on specific people and objects; following along with the director is, in a manner of speaking, how we follow the plot). The synchronization seen here among younger adults may reflect the optimal response, attention focused on the most relevant stimulus. (There is much less synchronization when the stimuli are more everyday.)
The increasing asynchronization with age seen here has previously been linked to poorer comprehension and memory. In this study, there was a correlation between synchronization and measures of attentional control, such as fluid intelligence and reaction time variability. There was no correlation between synchronization and crystallized intelligence.
The greatest differences were seen in the brain regions controlling attention (the superior frontal lobe and the intraparietal sulcus) and language processing (the bilateral middle temporal gyrus and left inferior frontal gyrus).
The researchers accordingly suggested that the reason for the variability in brain patterns seen in older adults lies in their poorer attentional control — specifically, their top-down control (ability to focus) rather than bottom-up attentional capture. Attentional capture has previously been shown to be well preserved in old age.
Of course, it's not necessarily bad that a watcher doesn't rigidly follow the director's manipulation! The older adults may be showing more informed and cunning observation than the younger adults. However, previous studies have found that older adults watching a movie tend to vary more in where they draw an event boundary; those showing most variability in this regard were the least able to remember the sequence of events.
The current findings therefore support the idea that older adults may have increasing difficulty in understanding events — something which helps explain why some old people have increasing trouble following complex plots.
The findings also add to growing evidence that age affects functional connectivity (how well the brain works together).
It should be noted, however, that it is possible that there could also be cohort effects going on — that is, effects of education and life experience.
I've written at length about implementation plans in my book “Planning to Remember: How to Remember What You're Doing and What You Plan to Do”. Essentially, they're intentions you make in which you explicitly tie together your intended action with a specific situational cue (such as seeing a post box).
A new study looked at the benefits of using an implementation intention for those with low working memory capacity.
The study involved 100 college students, of whom half were instructed to form an implementation intention for the event-based prospective memory task. The prospective memory task was embedded in a lexical decision task in which the student had to press a different key depending on whether a word or a pseudo-word was presented, and to press the spacebar when a waiting message appeared between trials. However (and this is the prospective element), if they saw one of four cue words, they were to stop doing the lexical task and say aloud both the cue word and its associated target word. They were then given the four word pairs to learn.
After they had mastered the word pairs, students in the implementation intention group were also given various sentences to say aloud, of the form: “When I see the word _______ (hotel, eraser, thread, credit) while making a word decision, I will stop doing the lexical decision task and call out _____-______ (hotel-glass, eraser-pencil, thread-book, credit-card) to the experimenter during the waiting message.” They said each sentence (relating to each word pair) twice.
Both groups were given a 5-minute survey to fill out before beginning the trials. At the end of the trials, their working memory was assessed using both the Operation Span task and the Reading Span task.
Overall, as expected, the implementation intention group performed significantly better on the prospective memory task. Unlike other research, there was no significant overall effect of working memory capacity on prospective memory performance. But this is because other studies haven't used implementation intentions: among those who formed no such intentions, low working memory capacity did indeed negatively affect prospective memory performance. However, those with low working memory capacity did just as well as those with high WMC when they formed implementation intentions (in fact, they did slightly better).
The most probable benefit of the strategy is that it heightened sensitivity to the event cues, something which is of particular value to those with low working memory capacity, who by definition have poorer attentional control.
It should be noted that this was an attentionally demanding task — there is some evidence that working memory ability only relates to prospective memory ability when the prospective memory task requires a high amount of attentional demand. But what constitutes “attentionally demanding” varies depending on the individual.
Perhaps this bears on evidence suggesting that a U-shaped function might apply, with a certain level of cognitive ability needed to benefit from implementation intentions, while those above a certain level find them unnecessary. But again, this depends on how attentionally demanding the task is. We can all benefit from forming implementation intentions in very challenging situations. It should also be remembered that WMC is affected not only relatively permanently by age, but also temporarily by stress, anxiety, and distraction.
Of course, this experiment framed the situation in a very short-term way, with the intentions only needing to be remembered for about 15 minutes. A more naturalistic study is needed to confirm the results.
In 2013 I reported briefly on a pilot study showing that “super-agers” — those over 80 years old who have the brains and cognitive powers more typical of people decades younger — had an unusually large anterior cingulate cortex, with four times as many von Economo neurons.
The ACC is critical for cognitive control, executive function, and motivation. Von Economo neurons have been linked to social intelligence, being found (as yet) only in humans, great apes, whales and dolphins, with a reduction being found in frontotemporal dementia and autism.
A follow-up to that study has now been reported, confirming the larger ACC and greater number of von Economo neurons.
The study involved 31 super-agers, 21 more typical older adults, and 18 middle-aged adults (aged 50-60). Imaging revealed that a region of the ACC in the right hemisphere of the super-agers was not only significantly thicker than that of the 'normal' older adults, but also larger than that of the middle-aged adults. Post-mortem analysis of 5 of the super-agers found that their ACC had 87% fewer tau tangles (one of the hallmarks of Alzheimer's) than 5 'normal' age-matched controls, and 92% fewer than 5 individuals with MCI. The density of von Economo neurons was also significantly higher.
Whether super-agers are born or made is still unknown (I'm picking a bit of both), but it's intriguing to recall my recent report that people who frequently use several media devices at the same time had lower grey matter density in the anterior cingulate cortex than those who use just one device occasionally.
I'd be interested to know the occupational and life-history of these super-agers. Did they lead lives in which they nurtured their powers of prolonged concentration? Or perhaps they belong to that other select group: the one-in-forty who can truly multitask.
A recent study reveals that when we focus on searching for something, regions across the brain are pulled into the search. The study sheds light on how attention works.
In the experiments, brain activity was recorded as participants searched for people or vehicles in movie clips. Computational models showed how each of the roughly 50,000 locations near the cortex responded to each of the 935 categories of objects and actions seen in the movie clips.
When participants searched for humans, relatively more of the cortex was devoted to humans, and when they searched for vehicles, more of the cortex was devoted to vehicles.
Now this might not sound very surprising, but it appears to contradict our whole developing picture of the brain as having specialized areas for specific categories — instead, areas normally involved in recognizing categories such as plants or buildings were being switched to become attuned to humans or vehicles. The changes occurred across the brain, not just in those regions devoted to vision, and in fact, the largest changes were seen in the prefrontal cortex.
What this suggests is that categories are represented in highly organized, continuous maps, a ‘semantic space’, as it were. By increasing the representation of the target category (and related categories) at the expense of other categories, this semantic space is changed. Note that this did not come about in response to the detection of the target; it occurred in response to the direction of attention — the goal setting.
In other words, in the same way that gravity warps the space-time continuum (well, probably not the exact same way!), attention warps your mental continuum.
You can play with an interactive online brain viewer which tries to portray this semantic space.
Why do we find it so hard to stay on task for long? A recent study uses a new technique to show how the task control network and the default mode network interact (and fight each other for control).
The task control network (which includes the dorsal anterior cingulate and bilateral anterior insula) regulates attention to surroundings, controlling your concentration on tasks. The default mode network, on the other hand, becomes active when a person seems to be doing 'nothing', and becomes less active when a task is being performed.
The study shows that the more effectively the task control network suppresses the default mode network, the better and faster we work. However, when the default mode network is not sufficiently suppressed, it sends signals to the task control network, interfering with its performance (and we lose focus).
Interestingly, in certain conditions, such as autism, depression, and mild cognitive impairment, the default mode network remains unchanged whether the person is performing a task or interacting with the environment. Additionally, deficits in the functioning of the default mode network have been implicated in age-related cognitive decline.
The findings add a new perspective to our ideas about attention. One of the ongoing questions concerns the relative importance of the two main aspects of attention: focus, and resisting distraction. A lot of work in recent years has indicated that a large part of age-related cognitive decline is a growing difficulty in resisting distraction. Similarly, there is some evidence that people with a low working memory capacity are less able to ignore irrelevant information.
This recent finding, then, suggests that these difficulties in ignoring distracting / irrelevant stimuli reflect the failure of the task control network to adequately suppress the activity of the default mode network. This puts the emphasis back on training for focus, and may help explain why meditation practices are effective in improving concentration.
As many of you will know, I like nature-improves-mind stories. A new twist comes from a small Scottish study, in which participants were fitted up with a mobile EEG monitor that enabled their brainwaves to be recorded as they walked for 25 minutes through one of three different urban settings: an urban shopping street, a path through green space, or a street in a busy commercial district. The monitors measured five ‘channels’ that are claimed to reflect “short-term excitement,” “frustration,” “engagement,” “arousal,” and “meditation level.”
Consistent with Attention restoration theory, walkers entering the green zone showed lower frustration, engagement and arousal, and higher meditation, and then showed higher engagement when moving out of it — suggesting that their time in a natural environment had ‘refreshed’ their brain.
Another study looking into the urban-nature effect issue takes a different tack from those I've previously reported on, which looked at the attention-refreshing benefits of natural environments.
In this study, a rural African people living in a traditional village were compared with those who had moved to town. Participants in the first experiment included 35 adult traditional Himba, 38 adolescent traditional Himba (mean age 12), 56 adult urbanized Himba, and 37 adolescent urbanized Himba. All traditional Himba had had little contact with the Western world and only spoke their native language; all adult urbanized Himba had grown up in traditional villages and only moved to town later in life (average length of time in town was 6 years); all adolescent urbanized Himba had grown up in the town and usually attended school regularly.
The first experiment assessed the ability to ignore peripheral distracting arrows while judging whether a central arrow pointed left or right.
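This is a classic flanker-type design. The sketch below shows how such trials are typically constructed and scored; the display format and the interference measure are standard for this paradigm but are my assumptions, not details from the study:

```python
# A sketch of a flanker-style trial: judge the central arrow, ignore flankers.
# Display format is illustrative; the study's actual stimuli may differ.

def make_trial(target: str, congruent: bool) -> str:
    """Build a five-arrow display around a central target ('<' or '>')."""
    flank = target if congruent else ("<" if target == ">" else ">")
    return flank * 2 + target + flank * 2

def correct_response(display: str) -> str:
    return display[len(display) // 2]   # only the central arrow matters

# e.g. make_trial(">", congruent=True)  -> ">>>>>"
#      make_trial(">", congruent=False) -> "<<><<"
# "De-focusing" is usually quantified as the interference cost:
# mean reaction time on incongruent trials minus mean reaction time
# on congruent trials.
```

On this reading, the traditional Himba's greater focus would show up as a smaller incongruent-minus-congruent cost, even though their overall responses were slower.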
There was a significant effect of urbanization, with attention being more focused (less distracted) among the traditional Himba. Traditional Himba were also slower than urbanized Himba — but note that there was substantial overlap in response times between the two groups. There was no significant effect of age (adolescents were faster than adults in their responses, but the effect of the distracters was the same across age groups), nor a significant interaction between age and urbanization.
The really noteworthy part of this, was that the urbanization effect on task performance was the same for the adults who had moved to town only a few years earlier as for the adolescents who had grown up and been educated in the town. In other words, this does not appear to be an educational effect.
The second experiment looked at whether traditional Himba would perform more like urbanized Himba if there were other demands on working memory. This was done by requiring them to remember three numbers (the number words in participants’ language are around twice as long as the same numbers in English, hence their digit span is shorter).
While traditional Himba were again more focused than the urbanized in the no-load condition, when there was this extra load on working memory, there was no significant difference between the two groups. Indeed, attention was de-focused in the traditional Himba under high load to the same degree as it was for urbanized Himba under no-load conditions. Note that increasing the cognitive load made no difference for the urbanized group.
There was also a significant (though not dramatic) difference between the traditional and urbanized Himba in terms of performance on the working memory task, with traditional Himba remembering an average of 2.46/3 digits and urbanized Himba 2.64.
Experiment 3 tested the two groups on a working memory task, a standard digit span test (although, of course, in their native language). Random sequences of 2-5 digits were read out, with the participant being required to say them aloud immediately after. Once again, the urbanized Himba performed better than the traditional Himba (4.32 vs 3.05).
In other words, the problem does not seem to be that urbanization depletes working memory; rather, urbanization encourages disengagement (i.e., we have the capacity, we just don't use it).
In the fourth experiment, this idea was tested more directly. Rather than the arrows used in the earlier experiments, black and white faces were used, with participants required to determine the color of the central face. Additionally, inverted faces were sometimes used (faces are stimuli we pay a lot of attention to, but inverting them reduces their ‘faceness’, thus making them less interesting).
An additional group of Londoners was also included in this experiment.
While urbanized Himba and Londoners were, again, more de-focused than traditional Himba when the faces were inverted, for the ‘normal’ faces, all three groups were equally focused.
Note that the traditional Himba were not affected by the changes in the faces, being equally focused regardless of the stimulus. It was the urbanized groups that became more alert when the stimuli became more interesting.
Because a race-discrimination mechanism may have been coming into play, the final experiment returned to the direction judgment, with faces facing either left or right. This time the usual results occurred: the urbanized groups were more de-focused than the traditional group.
In other words, just having faces was not enough; it was indeed the racial discrimination that engaged the urbanized participants (note that both these urban groups come from societies where racial judgments are very salient – multicultural London, and post-apartheid Namibia).
All of this indicates that the attention difficulties that appear so common nowadays are less because our complex environments are ‘sapping’ our attentional capacities, and more because we are in a different attentional ‘mode’. It makes sense that in environments that contain so many more competing stimuli, we should employ a different pattern of engagement, keeping a wider, more spread, awareness on the environment, and only truly focusing when something triggers our interest.
In my book on remembering intentions, I spoke of how quickly and easily your thoughts can be derailed, leading to ‘action slips’ and, in the wrong circumstances, catastrophic mistakes. A new study shows how a 3-second interruption while doing a task doubled the rate of sequence errors, while a 4s one tripled it.
The study involved 300 people, who were asked to perform a series of ordered steps on the computer. The steps had to be performed in a specific sequence, mnemonically encapsulated by UNRAVEL, with each letter identifying the step. The task rules for each step differed, requiring the participant to mentally shift gears each time. Moreover, a given element could play multiple roles: for example, the letter U could signal the step, be one of two possible responses for that step, or be a stimulus requiring a specific response when the step was N. Each step required the participant to choose between two possible responses based on one stimulus feature — features included whether it was a letter or a digit, whether it was underlined or italic, whether it was red or yellow, and whether the character outside the outline box appeared above or below it. There were also more cognitive features, such as whether the letter was near the beginning of the alphabet or not. The identifying mnemonic for the step was linked to the possible responses (e.g., N step — near or far; U step — underline or italic).
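Based on the features listed above, the step rules can be sketched as follows. Only the step/feature pairings actually named in the text are filled in, and the alphabet cutoff for "near" is an illustrative assumption:

```python
# A partial sketch of the UNRAVEL placekeeping task described above.
# Only the step/feature pairings named in the text are included; the
# "near" cutoff (first half of the alphabet) is an illustrative assumption.

SEQUENCE = list("UNRAVEL")   # steps must be performed in this fixed order

# Each included step is a two-choice judgment on one stimulus feature.
STEP_RULES = {
    "U": lambda s: "underline" if s["underlined"] else "italic",
    "N": lambda s: "near" if s["char"].lower() in "abcdefghijklm" else "far",
    "R": lambda s: "red" if s["color"] == "red" else "yellow",
    "A": lambda s: "above" if s["outside_above"] else "below",
    "L": lambda s: "letter" if s["char"].isalpha() else "digit",
}

def respond(step: str, stimulus: dict) -> str:
    """Apply the two-choice rule for the current step. A sequence error
    means responding by the wrong step's rule, i.e. losing one's place
    in SEQUENCE."""
    return STEP_RULES[step](stimulus)

stim = {"char": "u", "underlined": True, "color": "red", "outside_above": False}
```

The sketch makes the "multiple roles" point concrete: the same stimulus (here the letter "u") yields a different correct response under every rule, so remembering *which* step you are on is the whole task.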
At various points, participants were very briefly interrupted. In the first experiment, they were asked to type four characters (letters or digits); in the second experiment, they were asked to type only two (a very brief interruption indeed!).
All of this was designed to set up a situation emulating “train of thought” operations, where correct performance depends on remembering where you are in the sequence, and on producing a situation where performance would have reasonably high proportion of errors — one of the problems with this type of research has been the use of routine tasks that are generally performed with a high degree of accuracy, thus generating only small amounts of error data for analysis.
In both experiments, interruptions significantly increased the rate of sequence errors on the first trial after the interruption (but not on subsequent ones). Nonsequence errors were not affected. In the first experiment (four-character interruption), the sequence error rate on the first trial after the interruption was 5.8%, compared to 1.8% on subsequent trials. In the second experiment (two-character interruption), it was 4.3%.
The four-character interruptions lasted an average of 4.36s, and the two-character interruptions lasted an average of 2.76s.
Whether the characters being typed were letters or digits made no difference, suggesting that the disruptive effects of interruptions are not overly sensitive to what’s being processed during the interruption (although of course these are not wildly different processes!).
The absence of effect on nonsequence errors shows that interruptions aren’t disrupting global attentional resources, but more specifically the placekeeping task.
As I discussed in my book, the step also made a significant difference — for sequence errors, middle steps showed higher error rates than end steps.
All of this confirms and quantifies how little it takes to derail us, and reminds us that, when engaged in tasks involving the precise sequence of sub-tasks (which so many tasks do), we need to be alert to the dangers of interruptions. This is, of course, particularly true for those working in life-critical areas, such as medicine.
We know that emotion affects memory. We know that attention affects perception (see, e.g., Visual perception heightened by meditation training; How mindset can improve vision). Now a new study ties it all together. The study shows that emotionally arousing experiences affect how well we see them, and this in turn affects how vividly we later recall them.
The study used images of positively and negatively arousing scenes and neutral scenes, which were overlaid with varying amounts of “visual noise” (like the ‘snow’ we used to see on old televisions). College students were asked to rate the amount of noise on each picture, relative to a specific image they used as a standard. There were 25 pictures in each category, and three levels of noise (less than standard, equal to standard, and more than standard).
Different groups explored different parameters: color; gray-scale; less noise (10%, 15%, 20% as compared to 35%, 45%, 55%); single exposure (each picture was only presented once, at one of the noise levels).
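To give a concrete sense of the stimulus manipulation, here is my own sketch of one plausible way to generate that kind of television-snow noise — not the researchers' actual procedure:

```python
import random

def add_visual_noise(pixels, noise_fraction, rng=None):
    """Overwrite a given fraction of pixels with random grey values,
    mimicking television 'snow'. `pixels` is a flat list of 0-255
    intensity values; noise_fraction=0.35 overwrites 35% of them.
    (Illustrative sketch only; the study's stimuli may have been
    built differently.)"""
    if rng is None:
        rng = random.Random()
    noisy = list(pixels)
    n_noise = round(noise_fraction * len(pixels))
    # pick pixel positions without replacement, then randomize them
    for i in rng.sample(range(len(pixels)), n_noise):
        noisy[i] = rng.randrange(256)
    return noisy
```

Varying `noise_fraction` (e.g. 0.35 for the 35% level mentioned above) produces the graded noise levels the participants were asked to rate against the standard image.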
Regardless of the actual amount of noise, emotionally arousing pictures were consistently rated as significantly less noisy than neutral pictures, indicating that people were seeing them more clearly. This was true in all conditions.
Eye-tracking analysis ruled out the idea that people directed their attention differently for emotionally arousing images, but did show that more eye fixations were associated both with less noisy images and emotionally arousing ones. In other words, people were viewing emotionally important images as if they were less noisy.
One group of 22 students was given a 45-minute spatial working memory task after seeing the images, and then asked to write down all the details they could remember about the pictures they remembered seeing. The amount of detail recalled was taken to be an indirect measure of vividness.
A second group of 27 students was called back after a week for a recognition test. They were shown 36 new images mixed in with the original 75 images, and asked to rate each as new, familiar, or recollected. They were also asked to rate the vividness of their recollection.
Although, overall, emotionally arousing pictures were not more likely to be remembered than neutral pictures, both experiments found that pictures originally seen as more vivid (less noise) were remembered more vividly and in more detail.
Brain scans from 31 students revealed that the amygdala was more active when looking at images rated as vivid, and this in turn increased activity in the visual cortex and in the posterior insula (which integrates sensations from the body). This suggests that the increased perceptual vividness is not simply a visual phenomenon, but part of a wider sensory activation.
There was another neural response to perceptual vividness: activity in the dorsolateral prefrontal cortex and the posterior parietal cortex was negatively correlated with vividness. This suggests that emotion is not simply increasing our attentional focus, it is instead changing it by reducing effortful attentional and executive processes in favor of more perceptual ones. This, perhaps, gives emotional memories their different ‘flavor’ compared to more neutral memories.
These findings clearly need more exploration before we know exactly what they mean, but the main finding from the study is that the vividness with which we recall some emotional experiences is rooted in the vividness with which we originally perceived them.
The study highlights how emotion can sharpen our attention, building on previous findings that emotional events are more easily detected when visibility is difficult, or attentional demands are high. It is also not inconsistent with a study I reported on last year, which found some information needs no repetition to be remembered because the amygdala decrees it of importance.
I should add, however, that the perceptual effect is not the whole story — the current study found that, although perceptual vividness is part of the reason for memories that are vividly remembered, emotional importance makes its own, independent, contribution. This contribution may occur after the event.
It’s suggested that individual differences in these reactions to emotionally enhanced vividness may underlie an individual’s vulnerability to post-traumatic stress disorder.
A review of 10 observational and four intervention studies is said to provide strong evidence for a positive relationship between physical activity and academic performance in young people (6-18). While only three of the four intervention studies and three of the 10 observational studies found a positive correlation, these included the two studies (one intervention and one observational) that the researchers described as “high-quality”.
An important feature of the high-quality studies was that they used objective measures of physical activity, rather than students' or teachers' reports. More high-quality studies are clearly needed. Note that the quality scores of the 14 studies ranged from 22% (!) to 75%.
Interestingly, a recent media report (NOT, I hasten to add, a peer-reviewed study in an academic journal) described data from public schools in Lincoln, Nebraska, which apparently runs a district-wide physical-fitness test: those who passed the fitness test were significantly more likely to also pass state reading and math tests.
Specifically, data from the last two years apparently shows that 80% of the students who passed the fitness test either met or exceeded state standards in math, compared to 66% of those who didn't pass the fitness test, and 84% of those who passed the fitness test met or exceeded state standards in reading, compared to 71% of those who failed the fitness test.
Another recent study looks at a different aspect of this association between physical exercise and academic performance.
The Italian study involved 138 normally-developing children aged 8-11, whose attention was tested before and after three different types of class: a normal academic class; a PE class focused on cardiovascular endurance and involving continuous aerobic circuit training followed by a shuttle run exercise; a PE class combining both physical and mental activity by involving novel use of basketballs in varying mini-games that were designed to develop coordination and movement-based problem-solving. These two types of physical activity offered the same exercise intensity, but very different skill demands.
The attention test was a short (5-minute) paper-and-pencil task in which the children had to mark every “d” accompanied by a double quotation mark above or below it, within 14 lines of randomly mixed “p” and “d” letters, each letter carrying one to four single and/or double quotation marks above and/or below it.
Processing speed increased 9% after mental exercise (normal academic class) and 10% after physical exercise. These were both significantly better than the increase of 4% found after the combined physical and mental exertion.
Similarly, scores on the test improved 13% after the academic class, 10% after the standard physical exercise, and only 2% after the class combining physical and mental exertion.
Now it’s important to note that this is, of course, an investigation of the immediate arousal benefits of exercise, rather than of the long-term benefits of being fit, which is a completely different question.
But the findings do bear on the use of PE classes in the school setting, and the different effects that different types of exercise might have.
First of all, there’s the somewhat surprising finding that attention was at least as good, if not better, after an academic class as after a PE class. It would not have been surprising if attention had flagged. It seems likely that what we are seeing here is a reflection of being in the right head-space — that is, the advantage of continuing with the same sort of activity.
But the main finding is the, also somewhat unexpected, relative drop in attention after the PE class that combined mental and physical exertion.
It seems plausible that the reason for this lies in the cognitive demands of the novel activity, which is, I think, the main message we should take away from this study, rather than any comparison between physical and mental activity. However, it would not be surprising if novel activities that combine physical and mental skills tend to be more demanding than skills that are “purely” (few things are truly pure I know) one or the other.
Of course, it shouldn’t be overlooked that attention wasn’t hampered by any of these activities!
First study: http://blogs.edweek.org/edweek/schooled_in_sports/2012/01/strong_evidenc...
Second study: http://blogs.edweek.org/edweek/schooled_in_sports/2011/11/students_fitne...
Third study: http://news.yahoo.com/exercise-might-boost-kids-academic-ability-1602081...
I had to report on this quirky little study, because a few years ago I discovered Leonard Cohen’s gravelly voice and then just a few weeks ago had it trumped by Tom Waits — I adore these deep gravelly voices, but couldn’t say why. Now a study shows that women are not only sensitive to male voice pitch, but that this sensitivity affects their memory.
In the first experiment, 45 heterosexual women were shown images of objects while listening to the name of the object spoken either by a man or woman. The pitch of the voice was manipulated to be high or low. After spending five minutes on a Sudoku puzzle, participants were asked to choose which of two similar but not identical versions of the object was the one they had seen earlier. After the memory test, participants were tested on their voice preferences.
Women strongly preferred the low pitch male voice and remembered objects more accurately when they had been introduced by the deeper male voice than by the higher male voice (mean score for object recognition was 84.7% vs 77.8%). There was no significant difference in memory relating to pitch for the female voices (83.9% vs 81.7% — note that these are not significantly different from the score for the deeper male voice).
So is it that memory is enhanced for deeper male voices, or that it is impaired for higher male voices (performance on the female voices suggests the latter)? Or are both factors at play? To sort this out, the second experiment, involving a new set of 46 women, included unmanipulated male and female voices.
Once again, women were unaffected by the different variations of female voices. However, male voices produced a clear linear effect, with the unmanipulated male voices squarely in the middle of the deeper and higher versions. It appears, then, that both factors are at play: deepening a male voice enhances its memorability, while raising it impairs its memorability.
It’s thought that deeper voices are associated with more desirable traits for long-term male partners. Having a better memory for specific encounters with desirable men would allow women to compare and evaluate men according to how they might behave in different relationship contexts.
The voices used were supplied by four young adult men and four young adult women. Pitch was altered through software manipulation. Participants were told that the purpose of the experiment was to study sociosexual orientation and object preference. Contraceptive pill usage did not affect the women’s responses.
I’ve always felt that better thinking was associated with my brain working ‘in a higher gear’ — literally working at a faster rhythm. So I was particularly intrigued by the findings of a recent mouse study that found that brainwaves associated with learning became stronger as the mice ran faster.
In the study, 12 male mice were implanted with microelectrodes that monitored gamma waves in the hippocampus, then trained to run back and forth on a linear track for a food reward. Gamma waves are thought to help synchronize neural activity in various cognitive functions, including attention, learning, temporal binding, and awareness.
We know that the hippocampus has specialized ‘place cells’ that record where we are and help us navigate. But to navigate the world, to create a map of where things are, we need to also know how fast we are moving. Having the same cells encode both speed and position could be problematic, so researchers set out to find how speed was being encoded. To their surprise and excitement, they found that the strength of the gamma rhythm grew substantially as the mice ran faster.
The results also confirmed recent claims that the gamma rhythm, which oscillates between 30 and 120 times a second, can be divided into slow and fast signals (20-45 Hz vs 45-120 Hz for mice, consistent with the 30-55 Hz vs 45-120 Hz bands found in rats) that originate from separate parts of the brain. The slow gamma waves in the CA1 region of the hippocampus were synchronized with slow gamma waves in CA3, while the fast gamma in CA1 were synchronized with fast gamma waves in the entorhinal cortex.
The two signals became increasingly separated with increasing speed, because the two bands were differentially affected by speed. While the slow waves increased linearly, the fast waves increased logarithmically. This differential effect could have to do with mechanisms in the source regions (CA3 and the medial entorhinal cortex, respectively), or to mechanisms in the different regions in CA1 where the inputs terminate (the waves coming from CA3 and the entorhinal cortex enter CA1 in different places).
In the hippocampus, gamma waves are known to interact with theta waves. Further analysis of the data revealed that the effects of speed on gamma rhythm only occurred within a narrow range of theta phases — but this ‘preferred’ theta phase also changed with running speed, more so for the slow gamma waves than the fast gamma waves (which is not inconsistent with the fact that slow gamma waves are more affected by running speed than fast gamma waves). Thus, while slow and fast gamma rhythms preferred similar phases of theta at low speeds, the two rhythms became increasingly phase-separated with increasing running speed.
What’s all this mean? Previous research has shown that if inputs from CA3 and the entorhinal cortex enter CA1 at the same time, the kind of long-term changes at the synapses that bring about learning are stronger and more likely in CA1. So at low speeds, synchronous inputs from CA3 and the entorhinal cortex at similar theta phases make them more effective at activating CA1 and inducing learning. But the faster you move, the more quickly you need to process information. The stronger gamma waves may help you do that. Moreover, the theta phase separation of slow and fast gamma that increases with running speed means that activity in CA3 (slow gamma source) increasingly anticipates activity in the medial entorhinal cortex (fast gamma source).
What does this mean at the practical level? Well at this point it can only be speculation that moving / exercising can affect learning and attention, but I personally am taking this on board. Most of us think better when we walk. This suggests that if you’re having trouble focusing and don’t have time for that, maybe walking down the hall or even jogging on the spot will help bring your brain cells into order!
Pushing speculation even further, I note that meditation by expert meditators has been associated with changes in gamma and theta rhythms. And in an intriguing comparison of the effect of spoken versus sung presentation on learning and remembering word lists, the group that sang showed greater coherence in both gamma and theta rhythms (in the frontal lobes, admittedly, but they weren’t looking elsewhere).
So, while we’re a long way from pinning any of this down, it may be that all of these — movement, meditation, music — can be useful in synchronizing your brain rhythms in a way that helps attention and learning. This exciting discovery will hopefully be the start of an exploration of these possibilities.
Full text available at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0021408
Previous research has found practice improves your ability at distinguishing visual images that vary along one dimension, and that this learning is specific to the visual images you train on and quite durable. A new study extends the finding to more natural stimuli that vary on multiple dimensions.
In the small study, 9 participants learned to identify faces and 6 participants learned to identify “textures” (noise patterns) over the course of two hour-long sessions of 840 trials (consecutive days). Faces were cropped to show only internal features and only shown briefly, so this was not a particularly easy task. Participants were then tested over a year later (range: 10-18 months; average 13 and 15 months, respectively).
On the test, participants were shown both images from training and new images that closely resembled them. While accuracy rates were high for the original images, they plummeted for the very similar new images, indicating that despite the length of time since they had seen the original images, they still retained much of the memory of them.
Although practice improved performance across nearly all items and for all people, there were significant differences between both participants and individual stimuli. More interestingly, individual differences (in both stimuli and people) were stable across sessions (e.g., if you were third-best on day 1, you were probably third-best on day 2 too, even though you were doing better). In other words, learning didn’t produce any qualitative changes in the representations of different items — practice had nearly the same effect on all; differences were rooted in initial difficulty of discriminating the pattern.
However, while it’s true that individual differences were stable, that doesn’t mean that every person improved their performance the exact same amount with the same amount of practice. Interestingly (and this is just from my eye-ball examination of the graphs), it looks like there was more individual variation among the group looking at noise patterns. This isn’t surprising. We all have a lot of experience discriminating faces; we’re all experts. This isn’t the case with the textures. For these, people had to ‘catch on’ to the features that were useful in discriminating patterns. You would expect more variability between people in how long it takes to work out a strategy, and how good that strategy is. Interestingly, three of the six people in the texture group actually performed better on the test than they had done on the second day of training, over a year ago. For the other three, and all nine of those in the face group, test performance was worse than it had been on the second day of training (but decidedly better than the first day).
The durability and specificity of this perceptual learning, the researchers point out, resembles that found in implicit memory and some types of sensory adaptation. It also indicates that such perceptual learning is not limited, as has been thought, to changes early in the visual pathway, but produces changes in a wider network of cortical neurons, particularly in the inferior temporal cortex.
The second, unrelated, study also bears on this issue of specificity.
We look at a scene and extract the general features — a crowd of people, violently riotous or riotously happy? — or we look at a scene and extract specific features that over time we use to build patterns about what goes with what. The first is called “statistical summary perception”; the second “statistical learning”.
A study designed to disentangle these two processes found that you can only do one or the other; you can’t derive both types of information at the same time. Thus, when people were shown grids of lines slanted to varying degrees, they could either assess whether the lines were generally leaning to the left or right, or they could learn to recognize pairs of lines that had been hidden repeatedly in the grids — but they couldn’t do both.
The fact that each of these tasks interfered with the other suggests that the two processes are fundamentally related.
Full text available at http://dailynews.mcmaster.ca/images/PsychSciFinal.pdf
An increasing number of studies have been showing the benefits of bilingualism, both for children and in old age. However, there’s debate over whether the apparent benefits for children are real, or a product of cultural (“Asians work harder!” or more seriously, are taught more behavioral control from an early age) or environmental factors (such as socioeconomic status).
A new study aimed to disentangle these complicating factors, by choosing 56 4-year-olds with college-educated parents, from middle-class neighborhoods, and comparing English-speaking U.S. children, Korean-speaking children in the U.S. and in Korea, and Korean-English bilingual children in the U.S.
The children were tested on a computer-game-like activity designed to assess the alerting, orienting, and executive control components of executive attention (a child version of the Attention Network Test). They were also given a vocabulary test (the Peabody Picture Vocabulary Test-III) in their own language, if monolingual, or in English for the bilinguals.
As expected, given their young age, English monolinguals scored well above bilinguals (learning more than one language slows the acquisition of vocabulary in the short-term). Interestingly, however, while Korean monolinguals in Korea performed at a comparable level to the English monolinguals, Korean monolinguals in the U.S. performed at the level of the bilinguals. In other words, the monolinguals living in a country where their language is a majority language have comparable language skills, and those living in a country in which their primary language is a minority language have similar, and worse, language skills.
That’s interesting, but the primary purpose of the study was to look at executive control. And here the bilingual children shone over the monolinguals. Specifically, the bilingual children were significantly more accurate on the attention test than the monolingual Koreans in the U.S. (whether they spoke Korean or English). Although their performance in terms of accuracy was not significantly different from that of the monolingual children in Korea, these children obtained their high accuracy at the expense of speed. The bilinguals were both accurate and fast, suggesting a different mechanism is at work.
The findings confirm earlier research indicating that bilingualism, independent of culture, helps develop executive attention, and points to how early this advantage begins.
The Korean-only and bilingual children from the United States had first generation native Korean parents. The bilingual children had about 11 months of formal exposure to English through a bilingual daycare program, resulting in them spending roughly 45% of their time using Korean (at home and in the community) and 55% of their time using English (at daycare). The children in Korea belonged to a daycare center that did offer a weekly 15-minute session during which they were exposed to English through educational DVDs, but their understanding of English was minimal. Similarly, the Korean-only children in the U.S. would have had some exposure to English, but it was insufficient to allow them to understand English instructions. The researchers’ informal observation of the Korean daycare center and the ones in the U.S. was that the programs were quite similar, and neither was more enriching.
Here’s a perception study with an intriguing twist. In my recent round-up of perception news I spoke of how images with people in them were more memorable, and of how some images ‘jump out’ at you. This study showed different images to each participant’s left and right eye at the same time, creating a contest between them. The amount of time it takes the participant to report seeing each image indicates the relative priority granted by the brain.
So, 66 college students were shown faces of people, and told something ‘gossipy’ about each one. The gossip could be negative, positive or neutral — for example, the person “threw a chair at a classmate”; “helped an elderly woman with her groceries”; “passed a man on the street.” These faces were then shown to one eye while the other eye saw a picture of a house.
The students had to press one button when they could see a face and another when they saw a house. As a control, some faces were used that the students had never seen. The students took the same length of time to register seeing the unknown faces and those about which they had been told neutral or positive information, but pictures of people about whom they had heard negative information registered around half a second quicker, and were looked at for longer.
A second experiment confirmed the findings, and also showed that subjects saw the faces linked to negative gossip for longer periods than faces linked to accounts of upsetting personal experiences.
Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.
Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?
Brain region behind the scene-facilitation effect identified
A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn, while volunteers viewed brief flashes of object pairs (half of which were shown interacting with each other) and decided whether these glimpsed objects matched a presented label.
The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while the non-performance of the intraparietal sulcus made no difference.
The little we need to identify a scene
The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.
When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical regardless of whether participants were looking at a color photo or a simple line drawing. That is, researchers could tell, with a fair amount of success, what category of scene the participant was looking at just from the pattern of brain activity in the ventral visual cortex. When they made mistakes, the mistakes were similar for the photos and the drawings.
In other words, most of what the brain is responding to in the photo is also evident in the line drawing.
In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.
The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.
Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.
The brain performs visual search near optimally
Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very very good at. In fact, a new study reveals that we’re pretty near optimal.
Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.
In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — that of the reliability of the sensory information. Optimally, an observer would take into consideration the varying reliability of the items, giving the information different weights as a result of that perceived reliability. That weighted information would then be combined according to a specific integration rule. That had been calculated as the optimal process, and the performance of the participants matched that expectation.
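Here is a minimal sketch of that kind of optimal rule, assuming Gaussian sensory noise and a single possible target among N items. The specific orientations, noise values, and decision threshold below are illustrative; the study's actual model will differ in its details:

```python
import math

def local_likelihood_ratio(x, sigma, target_ori, distractor_ori=0.0):
    """Likelihood ratio that a noisy orientation measurement x came
    from the target rather than a distractor, given sensory noise
    sigma. Lower contrast means larger sigma, i.e. a less reliable
    measurement. (Illustrative sketch, not the paper's exact model.)"""
    p_target = math.exp(-(x - target_ori) ** 2 / (2 * sigma ** 2))
    p_distractor = math.exp(-(x - distractor_ori) ** 2 / (2 * sigma ** 2))
    return p_target / p_distractor

def target_present(measurements, sigmas, target_ori):
    """Optimal rule for 'one target among N items, or none': average
    the local likelihood ratios across items and report 'present' if
    the average exceeds 1."""
    ratios = [local_likelihood_ratio(x, s, target_ori)
              for x, s in zip(measurements, sigmas)]
    return sum(ratios) / len(ratios) > 1.0
```

The reliability weighting falls out naturally here: a low-contrast item (large sigma) yields a likelihood ratio close to 1, so it contributes almost no evidence either way, while a high-contrast item can dominate the decision.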
The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.
In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations, while taking into account the reliability of the different pieces of information, and we do this through the integration of information coming from different groups of neurons, each group of which is responding to different bits of information.
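The weighting scheme described above can be illustrated with a toy sketch (my own, not the study's actual model, and with made-up numbers): an "optimal observer" weights each noisy measurement by its reliability (inverse variance) before combining them, so that higher-contrast, more reliable items count for more.

```python
import numpy as np

# Hypothetical setup: four items, each giving a noisy estimate of a line's
# orientation. Low contrast -> less reliable (noisier) estimate.
rng = np.random.default_rng(0)

true_orientation = 10.0                      # degrees; hypothetical target tilt
contrasts = np.array([0.2, 0.5, 0.9, 0.4])   # hypothetical per-item contrasts
noise_sd = 1.0 / contrasts                   # noise scales with unreliability

measurements = true_orientation + rng.normal(0, noise_sd)

# Reliability-weighted (inverse-variance) combination:
weights = 1.0 / noise_sd**2
estimate = np.sum(weights * measurements) / np.sum(weights)
```

The high-contrast item dominates the combined estimate, which is how the varying reliability of the items gets taken into account.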
Another recent study into visual search has found that, when people were preparing themselves to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the specific pattern of preparatory activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 in more specific, imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this region may be the source of top-down processing.
The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).
'Rewarding' objects can't be ignored
Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards: prospective mates, food, liquids.
What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?
In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).
This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.
The findings indicate that stimuli that have been previously associated with reward continue to capture attention regardless of their relevance to the task at hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity and ADHD.
People make an image memorable
What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.
In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.
Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
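The idea of predicting memorability from image features can be sketched in miniature (this is my own illustration with synthetic data, not the researchers' actual algorithm): fit a linear model from simple per-image features to memorability ratings, then score an image the model hasn't "seen" before.

```python
import numpy as np

# Synthetic stand-ins: 200 "images", each described by 5 hypothetical
# features (e.g. color stats, edge density, whether a person is present).
rng = np.random.default_rng(1)
n_images, n_features = 200, 5

X = rng.normal(size=(n_images, n_features))
true_w = np.array([0.1, -0.2, 0.05, 0.8, 0.3])   # "person present" weighted high
y = X @ true_w + rng.normal(0, 0.1, n_images)    # simulated memorability ratings

# Ridge regression, closed form: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Predict memorability for a new, unseen image:
new_image = rng.normal(size=n_features)
predicted_memorability = new_image @ w
```

With enough rated images, the learned weights recover which features drive memorability, which is the essence of the correlation-then-prediction approach described above.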
The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.
Isola, P., Xiao, J., Oliva, A. & Torralba, A. 2011. What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.
Brain region behind the scene-facilitation effect identified http://medicalxpress.com/news/2011-06-source-key-brain-function.html
The little we need to identify a scene http://medicalxpress.com/news/2011-05-simple-line-lot-brains.html http://medicalxpress.com/news/2011-05-mind-brain-scans-reveal-secrets.html
The brain performs visual search near optimally http://medicalxpress.com/news/2011-05-brain-visual-optimally.html http://www.scientificamerican.com/blog/post.cfm?id=human-brains-are-opti...
'Rewarding' objects can't be ignored http://www.eurekalert.org/pub_releases/2011-06/jhu-yap060711.php
People make an image memorable http://www.eurekalert.org/pub_releases/2011-05/miot-mrw052411.php
As I’ve discussed on many occasions, a critical part of attention (and working memory capacity) is being able to ignore distraction. There has been growing evidence that mindfulness meditation training helps develop attentional control. Now a new study helps fill out the picture of why it might do so.
The alpha rhythm is particularly active in neurons that process sensory information. When you expect a touch, sight or sound, the focusing of attention toward the expected stimulus induces a lower alpha wave height in neurons that would handle the expected sensation, making them more receptive to that information. At the same time the height of the alpha wave in neurons that would handle irrelevant or distracting information increases, making those cells less receptive to that information. In other words, alpha rhythm helps screen out distractions.
In this study, six participants who completed an eight-week mindfulness meditation program (MBSR) were found to generate larger alpha waves, and generate them faster, than the six in the control group. Alpha wave activity in the somatosensory cortex was measured while participants directed their attention to either their left hand or foot. This was done on three occasions: before training, at three weeks of the program, and after the program.
The MBSR program involves an initial two-and-a-half-hour training session, followed by daily 45-minute meditation sessions guided by a CD recording. The program is focused on training participants first to pay close attention to body sensations, then to focus on sensations in a specific body area, and finally to disengage and shift that focus to another body area.
Apart from helping us understand why mindfulness meditation training seems to improve attention, the findings may also explain why this meditation can help sufferers of chronic pain.
Comparison of young adults (mean age 24.5) and older adults (mean age 69.1) in a visual memory test involving multitasking has pinpointed the greater problems older adults have with multitasking. The study involved participants viewing a natural scene and maintaining it in mind for 14.4 seconds. In the middle of the maintenance period, an image of a face popped up and participants were asked to determine its sex and age. They were then asked to recall the original scene.
As expected, older people had more difficulty with this. Brain scans revealed that, for both groups, the interruption caused their brains to disengage from the network maintaining the memory and reallocate resources to processing the face. But the younger adults had no trouble disengaging from that task as soon as it was completed and re-establishing connection with the memory maintenance network, while the older adults failed both to disengage from the interruption and to reestablish the network associated with the disrupted memory.
This finding adds to the evidence that an important (perhaps the most important) reason for cognitive decline in older adults is a growing inability to inhibit processing, and extends the processes to which that applies.
A study involving 171 sedentary, overweight 7- to 11-year-old children has found that those who participated in an exercise program improved both executive function and math achievement. The children were randomly selected either to a group that got 20 minutes of aerobic exercise in an after-school program, one that got 40 minutes of exercise in a similar program, or a group that had no exercise program. Those who got the greater amount of exercise improved more. Brain scans also revealed increased activity in the prefrontal cortex and reduced activity in the posterior parietal cortex, for those in the exercise group.
The program lasted around 13 weeks. The researchers are now investigating the effects of continuing the program for a full year. Gender, race, socioeconomic factors or parental education did not change the impact of the exercise program.
The effects are consistent with other studies involving older adults. It should be emphasized that these were sedentary, overweight children. These findings are telling us what the lack of exercise is doing to young minds. I note the previous report, about counteracting what we have regarded as “normal” brain atrophy in older adults through the simple action of walking for 40 minutes three times a week. Children and older adults might be regarded as our canaries in the coal mine, more vulnerable to many factors that can affect the brain. We should take heed.
A link between positive mood and creativity is supported by a study in which 87 students were put into different moods (using music and video clips) and then given a category learning task to do (classifying sets of pictures with visually complex patterns). There were two category tasks: one involved classification on the basis of a rule that could be verbalized; the other was based on a multi-dimensional pattern that could not easily be verbalized.
Happy volunteers were significantly better at learning the rule to classify the patterns than sad or neutral volunteers. There was no difference between those in a neutral mood and those in a negative mood.
It had been theorized that positive mood might only affect processes that require hypothesis testing and rule selection. The mechanism by which this might occur is through increased dopamine levels in the frontal cortex. Interestingly, however, although there was no difference in performance on the second, non-verbalizable task as a function of mood, analysis based on how closely the subjects’ responses matched an optimal strategy for that task found that, again, positive mood was of significant benefit.
The researchers suggest that this effect of positive mood may be the reason behind people liking to watch funny videos at work — they’re trying to enhance their performance by putting themselves in a good mood.
The music and video clips were rated for their mood-inducing effects. Mozart’s “Eine Kleine Nachtmusik—Allegro” was the highest rated music clip (at an average rating of 6.57 on a 7-point scale), Vivaldi’s Spring was next at 6.14. The most positive video was that of a laughing baby (6.57 again), with Whose Line is it Anyway sound effects scoring close behind (6.43).
A study involving 80 college students (34 men and 46 women) between the ages of 18 and 40, has found that those given a caffeinated energy drink reported feeling more stimulated and less tired than those given a decaffeinated soda or no drink. However, although reaction times were faster for those consuming caffeine than those given a placebo drink or no drink, reaction times slowed for increasing doses of caffeine, suggesting that smaller amounts of caffeine are more effective.
The three caffeine groups were given caffeine levels of either 1.8 ml/kg, 3.6 ml/kg or 5.4 ml/kg. The computerized "go/no-go" test which tested their reaction times was given half an hour after consuming the drinks.
In another study, 52 children aged 12-17 drank flattened Sprite containing caffeine at four concentrations: 0, 50 mg, 100 mg or 200 mg. Changes in blood pressure and heart rate were then checked every 10 minutes for one hour, at which point they were given a questionnaire and an opportunity to eat all they wanted of certain types of junk food.
Interestingly, there were significant gender differences, with boys drinking high-caffeine Sprite showing greater increases in diastolic blood pressure (the lower number) than boys drinking the low-caffeine Sprite, but girls being unaffected. Boys were also more inclined than girls to report consuming caffeine for energy or “the rush”.
Those participants who ingested the most caffeine also ate more high-sugar snack foods in the laboratory, and reported higher protein and fat consumption outside the lab.
We know active learning is better than passive learning, but for the first time a study gives us some idea of how that works. Participants in the imaging study were asked to memorize an array of objects and their exact locations in a grid on a computer screen. Only one object was visible at a time. Those in the “active study” group used a computer mouse to guide the window revealing the objects, while those in the “passive study” group watched a replay of the window movements recorded in a previous trial by an active subject. They were then tested by having to place the items in their correct positions. After a trial, the active and passive subjects switched roles and repeated the task with a new array of objects.
The active learners learned the task significantly better than the passive learners. Better spatial recall correlated with higher and better coordinated activity in the hippocampus, dorsolateral prefrontal cortex, and cerebellum, while better item recognition correlated with higher activity in the inferior parietal lobe, parahippocampal cortex and hippocampus.
The critical role of the hippocampus was supported when the experiment was replicated with those who had damage to this region — for them, there was no benefit in actively controlling the viewing window.
This is something of a surprise to researchers. Although the hippocampus plays a crucial role in memory, it has been thought of as a passive participant in the learning process. This finding suggests that it is actually part of an active network that controls behavior dynamically.
If our brains are full of clusters of neurons that respond only to specific features (as suggested in my earlier report), how do we bring it all together, and how do we switch from one point of interest to another? A new study using resting state data from 58 healthy adolescents and young adults has found that the intraparietal sulcus, situated at the intersection of visual, somatosensory, and auditory association cortices and known to be a key area for processing attention, contains a miniature map of all the things we can pay attention to (visual, auditory, motor stimuli etc).
Moreover, this map is copied in at least 13 other places in the brain, all of which are connected to the intraparietal sulcus. Each copy appears to do something different with the information. For instance, one map processes eye movements while another processes analytical information. This map of the world may be a fundamental building block for how information is represented in the brain.
There were also distinct clusters within the intraparietal sulcus that showed different levels of connectivity to auditory, visual, somatosensory, and default mode networks, suggesting they are specialized for different sensory modalities.
The findings add to our understanding of how we can shift our attention so precisely, and may eventually help us devise ways of treating disorders where attention processing is off, such as autism, attention deficit disorder, and schizophrenia.
A study involving young (average age 22) and older adults (average age 77) showed participants pictures of overlapping faces and places (houses and buildings) and asked them to identify the gender of the person. While the young adults showed activity in the brain region for processing faces (fusiform face area) but not in the brain region for processing places (parahippocampal place area), both regions were active in the older adults. Additionally, on a surprise memory test 10 minutes later, older adults who showed greater activation in the place area were more likely to recognize what face was originally paired with what house.
These findings confirm earlier research showing that older adults become less capable of ignoring irrelevant information, and shows that this distracting information doesn’t merely interfere with what you’re trying to attend to, but is encoded in memory along with that information.
Following on from earlier studies that found individual neurons were associated with very specific memories (such as a particular person), new research has shown that we can actually regulate the activity of specific neurons, increasing the firing rate of some while decreasing the rate of others.
The study involved 12 patients implanted with deep electrodes for intractable epilepsy. On the basis of each individual’s interests, four images were selected for each patient. Each of these images was associated with the firing of specific neurons in the medial temporal lobe. The firing of these neurons was hooked up to a computer, allowing the patients to make their particular images appear by thinking of them. When another image appeared on top of the image as a distraction, creating a composite image, patients were asked to focus on their particular image, brightening the target image while the distractor image faded. The patients were successful 70% of the time in brightening their target image. This was primarily associated with increased firing of the specific neurons associated with that image.
I should emphasize that the use of a composite image meant that the participants had to rely on a mental representation rather than the sensory stimuli, at least initially. Moreover, when the feedback given was fake — that is, the patients’ efforts were no longer linked to the behavior of the image on the screen — success rates fell dramatically, demonstrating that their success was due to a conscious, directed action.
Different patients used different strategies to focus their attention. While some simply thought of the picture, others repeated the name of the image out loud or focused their gaze on a particular aspect of the image.
Resolving the competition of multiple internal and external stimuli is a process which involves a number of different levels and regions, but these findings help us understand at least some of the process that is under our conscious control. It would be interesting to know more about the relative effectiveness of the different strategies people used, but this was not the focus of the study. It would also be very interesting to compare effectiveness at this task across age, but of course this procedure is invasive and can only be used in special cases.
The study offers hope for building better brain-machine interfaces.
Two independent studies have found that students whose birthdays fell just before their school's age enrollment cutoff date—making them among the youngest in their class—had a substantially higher rate of ADHD diagnoses than students born just after the cutoff. One study, using data from the Early Childhood Longitudinal Study-Kindergarten cohort, found that ADHD diagnoses among children born just prior to their state’s kindergarten eligibility cutoff date are more than 60% more prevalent than among those born just afterward (who therefore waited an extra year to begin school). Moreover, such children are more than twice as likely to be taking Ritalin in grades 5 and 8. While the child’s school starting age strongly affects teachers’ perceptions of ADHD symptoms, it only weakly affects parental perceptions (parents are more likely to compare their child with others of the same age, rather than with classmates). The other study, using data from the 1997 to 2006 National Health Interview Survey, found that 9.7% of those born just before the cutoff date were diagnosed with ADHD compared to 7.6% of those born just after.
The two findings suggest that many of these children are mistakenly being diagnosed with ADHD simply because they are less emotionally or intellectually mature than their (older) classmates.
I’ve talked about the importance of labels for memory, so I was interested to see that a recent series of experiments has found that hearing the name of an object improved people’s ability to see it, even when the object was flashed onscreen in conditions and speeds (50 milliseconds) that would render it invisible. The effect was specific to language; a visual preview didn’t help.
Moreover, those who consider their mental imagery particularly vivid scored higher when given the auditory cue (although this association disappeared when the position of the object was uncertain). The researchers suggest that hearing the image labeled evokes an image of the object, strengthening its visual representation and thus making it visible. They also suggested that because words in different languages pick out different things in the environment, learning different languages might shape perception in subtle ways.
A rat study demonstrates how specialized brain training can reverse many aspects of normal age-related cognitive decline in targeted areas. The month-long study involved daily hour-long sessions of intense auditory training targeted at the primary auditory cortex. The rats were rewarded for picking out the oddball note in a rapid sequence of six notes (five of them of the same pitch). The difference between the oddball note and the others became progressively smaller. After the training, aged rats showed substantial reversal of their previously degraded ability to process sound. Moreover, measures of neuron health in the auditory cortex had returned to nearly youthful levels.
It’s now well established that older brains tend to find it harder to filter out irrelevant information. But now a new study suggests that that isn’t all bad. The study compared the performance of 24 younger adults (17-29) and 24 older adults (60-73) on two memory tasks separated by a 10-minute break. In the first task, they were shown pictures overlapped by irrelevant words, told to ignore the words and concentrate on the pictures only, and to respond every time the same picture appeared twice in a row. The second task required them to remember how the pictures and words were paired together in the first task. The older adults showed a 30% advantage over younger adults in their memory for the preserved pairs. It’s suggested that older adults encode extraneous co-occurrences in the environment and transfer this knowledge to subsequent tasks, improving their ability to make decisions.
Full text available at http://pss.sagepub.com/content/early/2010/01/15/0956797609359910.full
A paralyzed patient implanted with a brain-computer interface device has allowed scientists to determine the relationship between brain waves and attention. Recordings found a characteristic pattern of activity as the subject paid close attention to the task. High-frequency beta oscillations increased in strength as the subject waited for the relevant instruction, with peaks of activity occurring just before each instructional cue. After receiving the relevant instruction and before the subject moved the cursor, the beta oscillation intensity fell dramatically to lower levels through the remaining, irrelevant instructions. On the other hand, the slower delta oscillation adjusted its frequency to mirror the timing of each instructional cue. The authors suggest that this "internal metronome" function may help fine-tune beta oscillations, so that maximum attention is paid at the appropriate time.
In another demonstration of the many factors that affect exam success, three experiments involving a total of 131 college students have found that seeing the letter A before an exam makes a student more likely to perform better than if he sees the letter F instead. In the first experiment, 23 undergraduates took a word-analogies test, of which half were labeled "Test Bank ID: F" in the top right corner, and half "Test Bank ID: A". The A group got an average of 11.08 of 12 answers correct, compared to 9.42 for the F group. The same pattern was confirmed in two more studies. Moreover, performance of students whose exams were labeled "Test Bank ID:J" fell between those with the A and F test papers. While hard to believe, these findings are consistent with the many findings supporting the idea of "stereotype threat" (the tendency to do less well on a test when a person fears their performance could confirm a negative stereotype about their racial or gender group).
Another study showing the cognitive benefits of meditation has revealed benefits to perception and attention. The study involved 30 participants attending a three-month meditation retreat, during which they attended group sessions twice a day and engaged in individual practice for about six hours a day. The meditation practice involved sustained selective attention on a chosen stimulus (e.g., the participant’s breath). By midway through the retreat, meditators had become better at making fine visual distinctions, and better able to sustain attention during the half-hour test, compared to matched controls. Those who continued practicing meditation after the retreat still showed improvements in perception when they were retested about five months later.
A new study suggests that our memory for a visual scene may depend not on how much attention we’ve paid to it or what the scene contains, but on when the scene is presented. In the study, participants performed an attention-demanding letter-identification task while also viewing a rapid sequence of full-field photographs of urban and natural scenes. They were then tested on their memory of the scenes. It was found that, even though their attention had been focused on the target letter, only those scenes which were presented at the same time as a target letter (rather than a distractor letter) were reliably remembered. The results point to a brain mechanism that automatically encodes certain visual features into memory at behaviorally relevant points in time, regardless of the spatial focus of attention.
Full text available at doi:10.1371/journal.pbio.1000337
An intriguing set of experiments showing how you can improve perception by manipulating mindset found significantly improved vision when:
- an eye chart was arranged in reverse order (the letters getting progressively larger rather than smaller);
- participants were given eye exercises and told their eyes would improve with practice;
- participants were told athletes have better vision, and then told to perform jumping jacks or skipping (seen as less athletic);
- participants flew a flight simulator, compared to pretending to fly a supposedly broken simulator (pilots are believed to have good vision).
A study of over 3,100 older men (49-71) from across Europe has found that men with higher levels of vitamin D performed consistently better in an attention and speed of processing task. There was no difference on visual memory tasks. Although previous studies have suggested low vitamin D levels may be associated with poorer cognitive performance, findings have been inconsistent. Vitamin D is primarily synthesized from sun exposure but is also found in certain foods such as oily fish.
Older news items (pre-2010) brought over from the old website
Attention is more about reducing the noticeability of the unattended
No visual scene can be processed in one fell swoop — we piece it together from the bits we pay attention to (which explains why we sometimes miss objects completely, and can’t understand how we could have missed them when we finally notice them). We know that paying attention to something increases the firing rate of neurons tuned for that type of stimulus, and until a recent study we thought that was the main process underlying our improved perception when we focus on something. However a macaque study has found that the main cause — perhaps four times as important — is a reduction in the background noise, allowing the information coming in to be much more noticeable.
Brainwaves regulate our searching
A long-standing question concerns how we search complex visual scenes. For example, when you enter a crowded room, how do you go about searching for your friends? Now a monkey study reveals that visual attention jumps sequentially from point to point, shifting focus around 25 times in a second. Intriguingly, and unexpectedly, it seems this timing is determined by brainwaves. The finding connects speed of thinking with the oscillation frequency of brainwaves, giving a new significance to brainwaves (whose function is rather mysterious, but of increasing interest to researchers), and also suggesting an innovative approach to improving attention.
Ability to ignore distraction most important for attention
Confirming an earlier study, a series of four experiments involving 84 students has found that students with high working memory capacity were noticeably better able to ignore distractions and stay focused on their tasks. The findings provide more evidence that the poor attentional capacity of individuals with low working memory capacity results from a reduced ability to ignore attentional capture (stimuli that involuntarily “capture” your attention, like a loud noise or a suddenly appearing object), rather than an inability to focus.
Stress disrupts task-switching, but the brain can bounce back
A new neuroimaging study involving 20 male M.D. candidates in the middle of preparing for their board exams has found that they had a harder time shifting their attention from one task to another after a month of stress than other healthy young men who were not under stress. The finding replicates what has been found in rat studies, and similarly correlates with impaired function in an area of the prefrontal cortex that is involved in attention. However, the brains recovered their function within a month of the end of the stressful period.
Attention, it’s all about connecting
An imaging study in which volunteers spent an hour identifying letters that flashed on a screen has shed light on what happens when our attention wanders. Reduced communication in the ventral fronto-parietal network, critical for attention, was found to predict slower response times 5-8 seconds before the letters were presented.
Daniel Weissman presented the results at the 38th annual meeting of the Society for Neuroscience, held Nov. 15 to 19 in Washington, DC.
The importance of acetylcholine
A rat study suggests that acetylcholine, a neurotransmitter known to be important for attention, is critical for "feature binding"— the process by which our brain combines all of the specific features of an object and gives us a complete and unified picture of it. The findings may lead to improved therapies and treatments for a variety of attention and memory disorders.
Attention grabbers snatch lion's share of visual memory
It’s long been thought that when we look at a visually "busy" scene, we are only able to store a very limited number of objects in our visual short-term or working memory. For some time, this figure was believed to be four or five objects, but a recent report suggested it could be as low as two. However, a new study reveals that although this memory might not be large, it’s more flexible than we thought. Rather than being restricted to a limited number of objects, it can be shared out across the whole image, with more memory allocated for objects of interest and less for background detail. What’s of interest might be something we’ve previously decided on (i.e., something we’re searching for), or something that grabs our attention. Eye movements also reveal how brief our visual memory is, and that what our eyes are looking at isn’t necessarily what we’re ‘seeing’. When people were asked to look at objects in a particular sequence, but the final object disappeared before their eyes moved on to it, the observers could more accurately recall the location of the object they were about to look at than the one they had just been looking at.
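The idea of a shared, flexible memory resource can be sketched with a toy model (my own illustration, with made-up numbers, not the study's analysis): a fixed pool of memory is divided among the objects in a scene in proportion to their interest, and recall precision for each object tracks its share.

```python
import numpy as np

# Five objects in a scene; the first has grabbed our attention.
total_resource = 1.0
interest = np.array([5.0, 2.0, 1.0, 1.0, 1.0])   # hypothetical interest levels

# Each object's share of the fixed memory pool:
shares = total_resource * interest / interest.sum()

# A common modeling assumption: recall noise (sd) shrinks as an item's
# share of the resource grows.
recall_sd = 1.0 / np.sqrt(shares)
```

The attention-grabbing object ends up with the largest share and the most precise recall, while background items are remembered only coarsely, rather than some items being stored perfectly and the rest not at all.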
How Ritalin works to focus attention
Ritalin has been widely used for decades to treat attention deficit hyperactivity disorder (ADHD), but until now the mechanism of how it works hasn’t been well understood. Now a rat study has found that Ritalin, in low doses, fine-tunes the functioning of neurons in the prefrontal cortex, and has little effect elsewhere in the brain. It appears that Ritalin dramatically increases the sensitivity of neurons in the prefrontal cortex to signals coming from the hippocampus. However, in higher doses, PFC neurons stopped responding to incoming information, impairing cognition. Low doses also reinforced coordinated activity of neurons, and weakened activity that wasn't well coordinated. All of this suggests that Ritalin strengthens dominant and important signals within the PFC, while lessening weaker signals that may act as distractors.
Focusing attention and filtering distraction are separate in the brain
A new study provides more evidence that the ability to deliberately focus your attention is physically separate in the brain from the system that helps you filter out distraction. The study trained monkeys to take attention tests on a video screen in return for a treat of apple juice. When the monkeys voluntarily concentrated (‘top-down’ attention), the prefrontal cortex was active, but when something distracting grabbed their attention (‘bottom-up’ attention), the parietal cortex became active. The electrical activity in these two areas oscillated in synchrony as they signaled each other, but top-down attention involved synchrony that was stronger in the lower frequencies, while bottom-up attention involved higher frequencies. These findings may help us develop treatments for attention disorders.
Asymmetrical brains let fish multitask
A fish study provides support for the theory that lateralized brains allow animals to better handle multiple activities, explaining why vertebrate brains evolved to function asymmetrically. The minnow study found that nonlateralized minnows were as good as those bred to be lateralized (enabling them to favor one eye or the other) at catching shrimp. However, when the minnows also had to look out for a sunfish (a minnow predator), the nonlateralized minnows took nearly twice as long to catch 10 shrimp as the lateralized fish.
Why are uniforms uniform? Because color helps us track objects
Laboratory tests have revealed that humans can pay attention to only 3 objects at a time. Yet there are instances in the real world — for example, watching a soccer match — when we certainly think we are paying attention to more than 3 objects. Are we wrong? No. A new study shows how we do it — it’s all in the color coding. People can focus on more than three items at a time if those items share a common color. But, logically enough, on no more than 3 color sets.
An advantage of age
A study comparing the ability of young and older adults to indicate which direction a set of bars moved across a computer screen has found that although younger participants were faster when the bars were small or low in contrast, when the bars were large and high in contrast, the older people were faster. The results suggest that the ability of one neuron to inhibit another is reduced as we age (inhibition helps us find objects within clutter, but makes it hard to see the clutter itself). The loss of inhibition as we age has previously been seen in connection with cognition and speech studies, and is reflected in our greater inability to tune out distraction as we age. Now we see the same process in vision.
We weren't made to multitask
A new imaging study supports the view that we can’t perform two tasks at once; rather, the tasks must wait their turn, queuing up for processing.
More light shed on memory encoding
Anything we perceive contains a huge amount of sensory information. How do we decide what bits to process? New research has identified brain cells that streamline and simplify sensory information, markedly reducing the brain's workload. The study found that when monkeys were taught to remember clip art pictures, their brains reduced the level of detail by sorting the pictures into categories for recall, such as images that contained "people," "buildings," "flowers," and "animals." The categorizing cells were found in the hippocampus. As humans do, different monkeys categorized items in different ways, selecting different aspects of the same stimulus image, most likely reflecting different histories, strategies, and expectations residing within individual hippocampal networks.
Neural circuits that control eye movements play crucial role in visual attention
Everyone agrees that to improve your memory it is important to “pay attention”. Unfortunately, no one really knows how to improve our ability to “pay attention”. An important step toward understanding how visual attention works was recently made in a study that looked at the brain circuits that control eye movements. It appears that the brain circuits that program eye movements also govern whether the myriad signals pouring in from the locations where the eyes could move should be amplified or suppressed. The very act of preparing to move the eye to a particular location can cause an amplification (or suppression) of signals from that area. This is possible because humans and primates can attend to something without moving their eyes to it.
Different aspects of attention located in different parts of the brain
We all know attention is important, but we’ve never been sure exactly what it is. Recent research suggests there’s good reason for this – attention appears to be multi-faceted, far less simple than originally conceived. Patients with specific lesions in the frontal lobes and other parts of the brain have provided evidence that different types of attentional problems are associated with injuries in different parts of the brain, suggesting that attention is not, as has been thought, a single global process. The researchers have found evidence for at least three distinct processes, each located in a different part of the frontal lobes: (1) a system that helps us maintain a general state of readiness to respond, in the superior medial frontal regions; (2) a system that sets our threshold for responding to an external stimulus, in the left dorsolateral region; and (3) a system that helps us selectively attend to appropriate stimuli, in the right dorsolateral region.