
Attention warps memory space

A recent study reveals that when we focus on searching for something, regions across the brain are pulled into the search. The study sheds light on how attention works.

In the experiments, brain activity was recorded as participants searched for people or vehicles in movie clips. Computational models showed how each of roughly 50,000 locations across the cortex responded to each of 935 categories of objects and actions seen in the clips.

05/2013

How your brain chunks ‘moments’ into ‘events’

We talk about memory for ‘events’, but how does the brain decide what an event is? How does it decide what is part of an event and what isn’t? A new study suggests that our brain uses categories it creates based on temporal relationships between people, objects, and actions — that is, items that tend (or tend not) to pop up near one another at specific times.

05/2013

Ability to remember memories' origin develops slowly

October, 2011

A study comparing the brains of children, adolescents, and young adults has found that the ability to remember the origin of memories is slow to mature. As with older adults, impaired source memory increases susceptibility to false memories.

In the study, 18 children (aged 7-8), 20 adolescents (13-14), and 20 young adults (20-29) were shown pictures and asked to decide whether each was a new picture or one they had seen earlier. Some of the pictures were of known objects and others were fanciful figures (in order to measure the effects of novelty in general). After a 10-minute break, they resumed the task — with the twist that any picture that had appeared in the first session should be judged “new” if this was its first appearance in the second session. EEG measurements (event-related potentials — ERPs) were taken during the sessions.

ERPs at the onset of a test stimulus (each picture) are different for new and old (repeated) stimuli. Previous studies have established various old/new effects that reflect item and source memory in adults. In the case of item memory, recognition is thought to be based on two processes — familiarity and recollection — which are reflected in ERPs of different timings and locations (familiarity: mid-frontal at 300-500 msec; recollection: parietal at 400-700 msec). Familiarity is seen as a fast assessment of similarity, while recollection varies according to the amount of retrieved information.

Source memory appears to require control processes that involve the prefrontal cortex. Given that this region is the slowest to mature, it would not be surprising if source memory is a problematic memory task for the young. And indeed, previous research has found that children do have particular difficulty in sourcing memories when the sources are highly similar.

In the present study, children performed more poorly than adolescents and adults on both item memory and source memory. Adolescents performed more poorly than adults on item memory but not on source memory. Children performed more poorly on source memory than item memory, but adolescents and adults showed no difference between the two tasks.

All groups responded faster to new items than old, and ERP responses to general novelty were similar across the groups — although children showed a left-frontal focus that may reflect the transition from an analytic to a more holistic processing approach.

ERPs to old items, however, showed a difference: for adults, they were especially pronounced at frontal sites, and occurred at around 350-450 msec; for children and adolescents they were most pronounced at posterior sites, occurring at 600-800 msec for children and 400-600 msec for adolescents. Only adults showed the early midfrontal response that is assumed to reflect familiarity processing. On the other hand, the late old/new effect occurring at parietal sites and thought to reflect recollection, was similar across all age groups. The early old/new effect seen in children and adolescents at central and parietal regions is thought to reflect early recollection.

In other words, only adults showed the brain responses typical of familiarity as well as recollection. Now, some research has found evidence of familiarity processing in children, so this shouldn’t be taken as proof against familiarity processing in the young. What seems most likely is that children are less likely to use such processing. Clearly the next step is to find out the factors that affect this.

Another interesting point is the early recollective response shown by children and adolescents. It’s speculated that these groups may have used more retrieval cues — conceptual as well as perceptual — that facilitated recollection. I’m reminded of a couple of studies I reported on some years ago, that found that young children were better than adults on a recognition task in some circumstances — because children were using a similarity-based process and adults a categorization-based one. In these cases, it had more to do with knowledge than development.

It’s also worth noting that, in adults, the recollective response was accentuated in the right-frontal area. This suggests that recollection was overlapping with post-retrieval monitoring. It’s speculated that adults’ greater use of familiarity produces a greater need for monitoring, because of the greater uncertainty.

What all this suggests is that preadolescent children are less able to strategically recollect source information, and that strategic recollection undergoes an important step in early adolescence that is probably related to improvements in cognitive control. But this process is still being refined in adolescents, in particular as regards monitoring and coping with uncertainty.

Interestingly, source memory is also one of the areas affected early in old age.

Failure to remember the source of a memory has many practical implications, in particular in the way it renders people more vulnerable to false memories.


Helping students & children get enough sleep

October, 2011

Simple interventions can help college students improve their sleep. Regular sleep habits are important for young children. Sleep deprivation especially affects performance on open-ended problems.

One survey of nearly 200 undergraduate college students who were not living with a parent or legal guardian found that 55% reported getting less than seven hours sleep, a result consistent with other surveys. The latest study confirms this, but also finds that students tend to think their sleep quality is better than it is (70% of students surveyed described their sleep as "fairly good" or better). It’s suggested that this disconnect arises from students making comparisons in an environment where poor sleep is common — even though they realized, on being questioned, that poor sleep undermined their memory, concentration, class attendance, mood, and enthusiasm.

None of this is surprising, of course. But this study did something else — it tried to help.

The researchers launched a campuswide media campaign consisting of posters, student newspaper advertisements and a "Go to Bed SnoozeLetter", all delivering information about the health effects of sleep and tips to sleep better, such as keeping regular bedtime and waking hours, exercising regularly, avoiding caffeine and nicotine in the evening, and so on. The campaign cost less than $2,500, and nearly 10% (90/971) said it helped them sleep better.

Based on interviews conducted as part of the research, the researchers compiled lists of the top five items that helped and hindered student sleep:

Helpers

  • Taking time to de-stress and unwind
  • Creating a room atmosphere conducive to sleep
  • Being prepared for the next day
  • Eating something
  • Exercising

Hindrances

  • Dorm noise
  • Roommate (both for positive/social reasons and negative reasons)
  • Schoolwork
  • Having a room atmosphere not conducive to sleep
  • Personal health issues

In another study, this one involving 142 Spanish schoolchildren aged 6-7, children who slept less than 9 hours and went to bed late or at irregular times showed poorer academic performance. Regular sleep habits affected some specific skills independently of sleep duration.

69% of the children returned home after 9pm at least three evenings a week or went to bed after 11pm at least four nights a week.

And a recent study into the effects of sleep deprivation points to open-ended problem solving being particularly affected. In the study, 35 West Point cadets were given two types of categorization task. The first involved categorizing drawings of fictional animals as either “A” or “not A”; the second required the students to sort two types of fictional animals, “A” and “B.” The two tests were separated by 24 hours, during which half the students had their usual night’s sleep, and half did not.

Although the second test required the students to learn criteria for two animals instead of one, sleep deprivation impaired performance on the first test, not the second.

These findings suggest the fault lies in attention lapses. Open-ended tasks, as in the first test, require more focused attention than those that offer two clear choices, as the second test did.

News reports on sleep deprivation are collated here.

Reference: 

Orzech KM, Salafsky DB, Hamilton LA. The State of Sleep Among College Students at a Large Public University. Journal of American College Health. 2011;59:612-619. Available from: http://www.tandfonline.com/doi/abs/10.1080/07448481.2010.520051

Cladellas R, Chamarro A, del Badia MM, Oberst U, Carbonell X. Efectos de las horas y los hábitos de sueño en el rendimiento académico de niños de 6 y 7 años: un estudio preliminar [Effects of sleeping hours and sleeping habits on the academic performance of six- and seven-year-old children: A preliminary study]. Cultura y Educación. 2011;23(1):119-128.

Maddox WT, Glass BD, Zeithamova D, Savarie ZR, Bowen C, Matthews MD, Schnyer DM. The effects of sleep deprivation on dissociable prototype learning systems. Sleep. 2011;34(3):253-260.


Visual perception - a round-up of recent news

July, 2011

Memory begins with perception. Here's a round-up of recent research into visual perception.

We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn while volunteers viewed brief flashes of object pairs (half of which were interacting with each other) and decided whether the glimpsed objects matched a presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical regardless of whether participants were looking at a color photo or a simple line drawing. That is, researchers could tell, with a fair amount of success, what category of scene a participant was looking at just from the pattern of brain activity in the ventral visual cortex. When the decoding made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — that of the reliability of the sensory information. Optimally, an observer would take into consideration the varying reliability of the items, giving the information different weights as a result of that perceived reliability. That weighted information would then be combined according to a specific integration rule. That had been calculated as the optimal process, and the performance of the participants matched that expectation.

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations, while taking into account the reliability of the different pieces of information, and we do this through the integration of information coming from different groups of neurons, each group of which is responding to different bits of information.
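The combination rule described above can be sketched with standard inverse-variance weighting. This is a minimal illustration under the usual Gaussian assumptions, not the study's actual neural model: less reliable measurements (higher variance) get proportionally smaller weights, and the combined estimate ends up more reliable than any single measurement.

```python
import numpy as np

# Minimal sketch of reliability-weighted integration (inverse-variance
# weighting), a standard "optimal observer" combination rule. Not the
# study's actual model; values are illustrative.
def combine(measurements, variances):
    measurements = np.asarray(measurements, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances            # more reliable -> larger weight
    weights /= weights.sum()             # normalize weights to sum to 1
    estimate = float(np.dot(weights, measurements))
    combined_variance = 1.0 / float(np.sum(1.0 / variances))
    return estimate, combined_variance

# A low-noise (high-contrast) reading pulls the estimate toward itself:
# the reading with variance 1.0 gets weight 0.8, the noisier one 0.2.
est, var = combine([10.0, 20.0], [1.0, 4.0])   # est = 12.0, var = 0.8
```

Note that the combined variance (0.8) is smaller than either individual variance, which is why integrating across locations, weighted by reliability, beats relying on any single glimpse.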

Another recent study into visual search has found that, when people were preparing to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 in more specific imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards: prospective mates, food, drink.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli that have been previously associated with reward continue to capture attention regardless of their relevance to the task in hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity, and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.
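The scoring step described above can be sketched in a few lines. This is a hypothetical reconstruction, not the study's code, and the data fields are illustrative: each image's memorability rating is simply the fraction of repeat presentations on which viewers correctly reported having seen the image before.

```python
from collections import defaultdict

# Hypothetical sketch of computing a memorability rating per image:
# the fraction of repeat presentations on which viewers correctly said
# "seen before". Field names are illustrative, not the study's.
def memorability_scores(responses):
    """responses: iterable of (image_id, said_seen_before) pairs,
    one per repeat presentation of that image."""
    hits = defaultdict(int)    # correct "seen before" reports per image
    shows = defaultdict(int)   # repeat presentations per image
    for image_id, said_seen in responses:
        shows[image_id] += 1
        hits[image_id] += int(said_seen)
    return {img: hits[img] / shows[img] for img in shows}

scores = memorability_scores([
    ("beach", False), ("beach", False), ("beach", True),
    ("portrait", True), ("portrait", True), ("portrait", True),
])
# scores["portrait"] == 1.0, scores["beach"] == 1/3
```

Aggregating over hundreds of viewers per image is what makes the rating stable enough to then correlate against image features, as described next.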

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

Reference: 

Kim JG, Biederman I, Juan C-H. The Benefit of Object Interactions Arises in the Lateral Occipital Cortex Independent of Attentional Modulation from the Intraparietal Sulcus: A Transcranial Magnetic Stimulation Study. The Journal of Neuroscience. 2011;31(22):8320-8324. Available from: http://www.jneurosci.org/content/31/22/8320.abstract

Walther DB, Chai B, Caddigan E, Beck DM, Fei-Fei L. Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences. 2011;108(23):9661-9666. Available from: http://www.pnas.org/content/108/23/9661.abstract

Ma WJ, Navalpakkam V, Beck JM, van den Berg R, Pouget A. Behavior and neural basis of near-optimal visual search. Nature Neuroscience. 2011;14(6):783-790. Available from: http://dx.doi.org/10.1038/nn.2814

Peelen MV, Kastner S. A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences. 2011;108(29):12125-12130. Available from: http://www.pnas.org/content/108/29/12125.abstract

Anderson BA, Laurent PA, Yantis S. Value-driven attentional capture. Proceedings of the National Academy of Sciences. 2011;108(25):10367-10371. Available from: http://www.pnas.org/content/108/25/10367.abstract

Isola P, Xiao J, Oliva A, Torralba A. What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, 2011, Colorado Springs.


Face-blindness an example of inability to generalize

October, 2010

It seems that prosopagnosia can be, along with perfect pitch and eidetic memory, an example of what happens when your brain can’t abstract the core concept.

‘Face-blindness’ — prosopagnosia — is a condition I find fascinating, perhaps because I myself have a touch of it (it’s now recognized that this condition represents the end of a continuum rather than being an either/or proposition). The intriguing thing about this inability to recognize faces is that, in its extreme form, it can nevertheless exist side-by-side with quite normal recognition of other objects.

Prosopagnosia that is not the result of brain damage often runs in families, and a study of three family members with this condition has revealed that in some cases at least, the inability to remember faces has to do with failing to form a mental representation that abstracts the essence of the face, sans context. That is, despite being fully able to read facial expressions, attractiveness and gender from the face (indeed one of the family members is an artist who has no trouble portraying fully detailed faces), they couldn’t cope with changes in lighting conditions and viewing angles.

I’m reminded of the phenomenon of perfect pitch, which is characterized by an inability to generalize across acoustically similar tones, so an A in a different key is a completely different note. Interestingly, like prosopagnosia, perfect pitch is now thought to be more common than previously thought (recognition of it is of course limited by the fact that some musical expertise is generally needed to reveal it). This inability to abstract or generalize is also a phenomenon of eidetic memory, and I have spoken before of the perils of this.

(Note: A fascinating account of what it is like to be face-blind, from a person with the condition, can be found at: http://www.choisser.com/faceblind/)


Task determines whether better for neurons to generalize or specialize

July, 2010

A monkey study reveals that, although some neurons are specialized to recognize specific concepts, most are more generalized and these are usually better at categorizing objects.

Previous research has found that individual neurons can become tuned to specific concepts or categories. We can have "cat" neurons, and "car" neurons, and even an “Angelina Jolie” neuron. A new monkey study, however, reveals that although some neurons were more attuned to car images and others to animal images, many neurons were active in both categories. More importantly, these "multitasking" neurons were in fact the best at making correct identifications when the monkey alternated between two category problems. The work could lead to a better understanding of disorders such as autism and schizophrenia in which individuals become overwhelmed by individual stimuli.


Words influence infants' cognition from first months of life

March, 2010

Like human faces, infants are predisposed to pay attention to words. Now a new study shows that they learn concepts from them from a very early age.

In the study, 46 three-month-old infants were shown a series of pictures of fish paired either with words (e.g., "Look at the toma!") or with beeps (carefully matched to the words for tone and duration). Those who heard the words subsequently showed signs of having formed the category “fish”, while those who heard the tones did not. Categorization was assumed when infants, shown a picture of a new fish and a dinosaur side-by-side, looked longer at one picture than the other.
