
Identity memory

Recognizing a person is a complex matter.

There are several different types of memory code for identity information. These include:

  • structural codes
  • semantic codes
  • visually-derived semantic codes
  • name codes

The interesting thing about these different memory codes is that they appear to be accessible only in a particular order. This is part of the reason names are so much harder to recall: they're at the end of the chain.

Improving your memory for people requires you to improve the connections between these memory codes.

Remembering people’s names is one of the memory tasks people most commonly wish they were better at. And the reason is not that their memory is poor, but that it is so embarrassing when it lets them down.

This isn’t just an issue at a personal level. It’s a particular issue for anyone who has to deal with a lot of people, many of whom they will see at infrequent intervals. Nothing makes a person — a client, a customer, a student — feel more valued than being remembered.

But we have, in fact, a remarkably good memory for other people’s faces. Think about the ease with which you distinguish between hundreds, even thousands, of human faces, and then think about how hard it is to distinguish between the faces of birds, or dogs, or monkeys. This is not because human faces are any more distinctive than the faces of other animals. Think about how much harder it is for you to distinguish between the faces of people of an unfamiliar racial type.

Contrary to what many European-descended people believe, Asian faces are no less distinctive than European faces; rather, the differences between human faces are subtle enough that they take a great deal of experience to learn. The importance of learning these subtle differences is shown in the way new babies focus on faces, and prefer them to other objects.

Our memory for other people is of course more than a memory for faces, although that part probably has the most impressive capacity. We also remember people’s names and various biographical details. We can recognize people by hearing their voice, at a distance by seeing their shape or the way that they move, or even by their clothing.

But it’s faces that give certainty.

Many years ago, when I was in my second year at university, I left the student cafeteria and nearly bumped into a young woman in a white lab coat. I murmured some sort of apology and started to move on, and she said my name. I stared at her blankly. She said, ‘You don’t recognize me, do you?’ Even with this prompt, I didn’t immediately get it. I still remember staring at her unfamiliar face, and then … the features seemed to shift under my eyes. It was very weird. Suddenly I knew her. I was mortified, and stunned. I hadn’t seen her in a year, but we’d been best friends all through high school. How could I not immediately recognize her?

Identity information is complex

Identity information is encoded in memory in quite complex ways. To use those codes more effectively, and so improve your memory for names, faces, and important personal details, it helps to understand how identity information is recorded in memory.

There are three ways we can “recognize” a person:

  • we might recognize them as having been seen before, without recalling anything about them
  • we might identify them as a particular person, without recalling their name (“that’s a friend of my son’s”)
  • we might identify them by name

If you think about it, you will realize that you never, ever recall information about a person without recognizing them as familiar. While this sounds terribly obvious, there is actually a clinical condition (the Capgras delusion) in which a person, while recognizing the people around them, believes they have been replaced by doubles (imposters, robots, aliens). This happens simply because the normal accompanying feeling of familiarity is missing.

You also never remember a person’s name without knowing who she is. This is because names are held in a separate place from biographical details, and can only be accessed through those details.

Identity codes and how they are structured in memory

Why is there this hierarchy? Why can we only access names through biographical information? Because identity information is ordered. Your memory for a person is not like this:

[diagram]

But like this:

[diagram]

In other words, there are several different kinds of identity information, and they are clustered according to type, and can in fact only be accessed in a particular order.

Of the various identity codes (bits of encoded identity information), there are three kinds that are important for recognizing a person:

  • structural codes (physical features)
  • semantic codes (biographical details, e.g., occupation, marital status, address)
  • name codes

There is a fourth type of code that is useful for remembering unfamiliar faces:

  • visually-derived semantic codes (e.g., age, gender, attributions such as “he looks honest/intelligent/sly”)

Semantic codes that are visually derived have an advantage over biographical codes, because the link with the structural code is meaningful and thus strong, whereas the connection between the structural codes and biographical details is entirely arbitrary. To say someone looks like a fox connects meaningfully with the person’s facial features, whereas to say that someone is a lawyer has no particular connection with the person’s face (to say someone looks like a lawyer would of course be meaningfully connected).

Visually-derived semantic codes are useful for remembering new faces because the link with the physical features of the face is strong and meaningful.

However, you cannot identify a person without reference to the biographical codes.

The interesting aspect of these different codes is that you can only access them in a particular order:

[diagram]

When you recognize a face as familiar but can’t recall anything about the person, the physical features have failed to trigger the biographical details. When you identify a person by recalling details about them, but can’t recall their name, the biographical information has failed to trigger the name.

Whether the name is recalled therefore depends on the strength of the connection between the biographical details and the name.

In other words, to improve your memory for a person’s identity, you must strengthen the link between the physical features and the biographical information. To improve your memory for the person’s name, you must strengthen the link between the biographical information and the name.
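To make the serial structure concrete, here is a minimal sketch (a toy model in Python, invented purely for illustration, not drawn from the research itself) that treats each link in the chain as a probability that one code triggers the next. Name recall succeeds only if both the face-to-biography link and the biography-to-name link fire.

    import random

    # Toy model of the identity-code chain:
    # structural code (face) -> semantic codes (biography) -> name code.
    # Link strengths are probabilities that one code triggers the next;
    # the numbers are invented for illustration only.
    person = {
        "face_to_bio": 0.8,   # how strongly the face cues biographical details
        "bio_to_name": 0.3,   # how strongly those details cue the name
    }

    def recall(person):
        """Simulate one attempt at recognizing someone and recalling their name."""
        if random.random() > person["face_to_bio"]:
            return "familiar face, but no idea who they are"
        if random.random() > person["bio_to_name"]:
            return "know who they are, but the name won't come"
        return "full recall: face, biography and name"

    print(recall(person))

The point of the sketch is simply that names sit behind two links rather than one, so the weakest link determines how far retrieval gets; strengthening the biography-to-name link is what raises the odds of the final step succeeding.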

 

Note: A fascinating account of what it is like to be face-blind, from a person with the condition, can be found at: http://www.choisser.com/faceblind/


Face Recognition

Older news items (pre-2010) brought over from the old website

Children recognize other children’s faces better than adults do

It is well known that people find it easier to distinguish between the faces of people from their own race, compared to those from a different race. It is also known that adults recognize the faces of other adults better than the faces of children. This may relate to holistic processing of the face (seeing the face as a whole rather than analyzing it feature by feature) — it may be that we more easily recognize faces for which we have strong holistic ‘templates’. A new study tested whether the same is true for children aged 8 to 13, and found that children showed stronger holistic processing for children's faces than adults did. This may reflect an own-age bias, but I’d love to see what happens with teachers, or any other adults who spend much of their time with many children.

Susilo T, Crookes K, McKone E, Turner H. The Composite Task Reveals Stronger Holistic Processing in Children than Adults for Child Faces. PLoS ONE [Internet]. 2009 ;4(7):e6460 - e6460. Available from: http://dx.doi.org/10.1371/journal.pone.0006460

Full text at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0006460
http://dsc.discovery.com/news/2009/08/18/children-faces.html

Alcoholics show abnormal brain activity when processing facial expressions

Excessive chronic drinking is known to be associated with deficits in comprehending emotional information, such as recognizing different facial expressions. Now an imaging study of abstinent long-term alcoholics has found that they show decreased and abnormal activity in the amygdala and hippocampus when looking at facial expressions. They also show increased activity in the lateral prefrontal cortex, perhaps in an attempt to compensate for the failure of the limbic areas. The finding is consistent with other studies showing alcoholics invoking additional and sometimes higher-order brain systems to accomplish a relatively simple task at normal levels. The study compared 15 abstinent long-term alcoholics and 15 healthy, nonalcoholic controls, matched on socioeconomic backgrounds, age, education, and IQ.

Marinkovic K, Oscar-Berman M, Urban T, O'Reilly CE, Howard JA, Sawyer K, Harris GJ. Alcoholism and dampened temporal limbic activation to emotional faces. Alcoholism, Clinical and Experimental Research [Internet]. 2009 ;33(11):1880 - 1892. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19673745

http://www.eurekalert.org/pub_releases/2009-08/ace-edc080509.php
http://www.eurekalert.org/pub_releases/2009-08/bumc-rfa081109.php

More insight into encoding of identity information

Different pictures of, say, Marilyn Monroe can evoke the same mental image — even hearing or reading her name can evoke the same concept. So how exactly does that work? A study in which pictures, spoken names and written names were used has revealed that single neurons in the hippocampus and surrounding areas respond selectively to representations of the same individual regardless of the sensory cue. Moreover, this occurs very quickly, and not only for very familiar people — the same process was observed with the researcher’s image and name, although he had been unknown to the subject only a day or two earlier. It also appears that the degree of abstraction reflects the hierarchical structure within the mediotemporal lobe.

Quiroga QR, Kraskov A, Koch C, Fried I. Explicit Encoding of Multimodal Percepts by Single Neurons in the Human Brain. Current Biology [Internet]. 2009 ;19(15):1308 - 1313. Available from: http://www.cell.com/current-biology/abstract/S0960-9822(09)01377-3

http://www.eurekalert.org/pub_releases/2009-07/uol-ols072009.php

Monkeys and humans use the same mechanism to recognize faces

The remarkable ability of humans to distinguish faces depends on sensitivity to unique configurations of facial features. One of the best demonstrations of this sensitivity comes from our difficulty in detecting changes in the orientation of the eyes and mouth in an inverted face — what is known as the Thatcher effect. A new study has revealed that this effect is also demonstrated among rhesus macaque monkeys, indicating that our skills in facial recognition date back 30 million years or more.

Adachi I, Chou DP, Hampton RR. Thatcher Effect in Monkeys Demonstrates Conservation of Face Perception across Primates. Current Biology [Internet]. 2009 ;19(15):1270 - 1273. Available from: http://www.cell.com/current-biology/abstract/S0960-9822(09)01195-6

http://www.eurekalert.org/pub_releases/2009-06/eu-yri062309.php

Face recognition may vary more than thought

We know that "face-blindness" (prosopagnosia) may afflict as many as 2%, but until now it’s been thought that either a person has ‘normal’ face recognition skills, or they have a recognition disorder. Now for the first time a new group has been identified: those who are "super-recognizers", who have a truly remarkable ability to recognize faces, even those only seen in passing many years earlier. The finding suggests that these two abnormal groups are merely the ends of a spectrum — that face recognition ability varies widely.

Russell R, Duchaine B, Nakayama K. Super-recognizers: people with extraordinary face recognition ability. Psychonomic Bulletin & Review [Internet]. 2009 ;16(2):252 - 257. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19293090

http://www.eurekalert.org/pub_releases/2009-05/hu-we051909.php

Oxytocin improves human ability to recognize faces but not places

The breastfeeding hormone oxytocin has been found to increase social behaviors like trust. A new study has found that a single dose of an oxytocin nasal spray resulted in improved recognition memory for faces, but not for inanimate objects, suggesting that different mechanisms exist for social and nonsocial memory. Further analysis showed that oxytocin selectively improved the discrimination of new and familiar faces — participants with oxytocin were less likely to mistakenly characterize unfamiliar faces as familiar.

Rimmele U, Hediger K, Heinrichs M, Klaver P. Oxytocin Makes a Face in Memory Familiar. J. Neurosci. [Internet]. 2009 ;29(1):38 - 42. Available from: http://www.jneurosci.org/cgi/content/abstract/29/1/38

http://www.eurekalert.org/pub_releases/2009-01/sfn-hii010509.php

Insight into 'face blindness'

An imaging study has finally managed to see a physical difference in the brains of those with congenital prosopagnosia (face blindness): reduced connectivity in the region that processes faces. Specifically, a reduction in the integrity of the white matter tracts in the ventral occipito-temporal cortex, the extent of which was related to the severity of the impairment.

Thomas C, Avidan G, Humphreys K, Jung K-jin, Gao F, Behrmann M. Reduced structural connectivity in ventral visual cortex in congenital prosopagnosia. Nat Neurosci [Internet]. 2009 ;12(1):29 - 31. Available from: http://dx.doi.org/10.1038/nn.2224

http://www.eurekalert.org/pub_releases/2008-11/cmu-cms112508.php

Visual expertise marked by left-side bias

It’s been established that facial recognition involves both holistic processing (seeing the face as a whole rather than the sum of parts) and a left-side bias. The new study explores whether these effects are specific to face processing, by seeing how Chinese characters, which share many of the same features as faces, are processed by native Chinese and non-Chinese readers. It was found that non-readers tended to look at the Chinese characters more holistically, and that native Chinese readers prefer characters that are made of two left sides. These findings suggest that whether or not we use holistic processing depends on the task performed with the object and its features, and that holistic processing is not used in general visual expertise – but left-side bias is.

Hsiao JH, Cottrell GW. Not all visual expertise is holistic, but it may be leftist: the case of Chinese character recognition. Psychological Science: A Journal of the American Psychological Society / APS [Internet]. 2009 ;20(4):455 - 463. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19399974

http://www.physorg.com/news160145799.html

Object recognition fast and early in processing

We see through our eyes and with our brain. Visual information flows from the retina through a hierarchy of visual areas in the brain until it reaches the temporal lobe, which is ultimately responsible for our visual perceptions, and also sends information back along the line, solidifying perception. This much we know, but how much processing goes on at each stage, and how important feedback is compared to ‘feedforward’, is still under exploration. A new study involving children about to undergo surgery for epilepsy (using invasive electrode techniques) reveals that feedback from the ‘smart’ temporal lobe is less important than we thought: the brain can recognize objects under a variety of conditions very rapidly, at a very early processing stage. It appears that certain areas of the visual cortex selectively respond to specific categories of objects.

Liu H, Agam Y, Madsen JR, Kreiman G. Timing, Timing, Timing: Fast Decoding of Object Information from Intracranial Field Potentials in Human Visual Cortex. Neuron [Internet]. 2009 ;62(2):281 - 290. Available from: http://www.cell.com/neuron/abstract/S0896-6273(09)00171-8

http://www.sciencedaily.com/releases/2009/04/090429132231.htm
http://www.physorg.com/news160229380.html
http://www.eurekalert.org/pub_releases/2009-04/chb-aga042709.php

New brain region associated with face recognition

Using a new technique, researchers have found evidence for neurons that are selectively tuned for gender, ethnicity and identity cues in the cingulate gyrus, a brain area not previously associated with face processing.

Ng M, Ciaramitaro VM, Anstis S, Boynton GM, Fine I. Selectivity for the configural cues that identify the gender, ethnicity, and identity of faces in human cortex. Proceedings of the National Academy of Sciences [Internet]. 2006 ;103(51):19552 - 19557. Available from: http://www.pnas.org/content/103/51/19552.abstract

http://www.sciencedaily.com/releases/2006/12/061212091823.htm

No specialized face area

Another study has come out casting doubt on the idea that there is an area of the brain specialized for faces. The fusiform gyrus has been dubbed the "fusiform face area", but a detailed imaging study has revealed that different patches of neurons respond to different images. However, twice as many of the patches are predisposed to faces versus inanimate objects (cars and abstract sculptures), and patches that respond to faces outnumber those that respond to four-legged animals by 50%. But patches that respond to the same images are not physically connected, implying a "face area" may not even exist.

Grill-Spector K, Sayres R, Ress D. High-resolution imaging reveals highly selective nonface clusters in the fusiform face area. Nat Neurosci [Internet]. 2007 ;10(1):133 - 133. Available from: http://dx.doi.org/10.1038/nn0107-133

http://www.sciencedaily.com/releases/2006/08/060830005949.htm

Face blindness is a common hereditary disorder

A German study has found 17 cases of the supposedly rare disorder prosopagnosia (face blindness) among 689 subjects recruited from local secondary schools and a medical school. Of the 14 subjects who consented to further interfamilial testing, all had at least one first-degree relative who also had it. Because of the compensation strategies that sufferers learn to utilize at an early age, many of them do not realize that it is an actual disorder or even realize that other members of their family have it — which may explain why it has been thought to be so rare. The disorder is one of the few cognitive dysfunctions that has only one symptom and is inherited. It is apparently controlled by a defect in a single gene.

Kennerknecht I, Grueter T, Welling B, Wentzek S, Horst J, Edwards S, Grueter M. First report of prevalence of non-syndromic hereditary prosopagnosia (HPA). American Journal of Medical Genetics. Part A [Internet]. 2006 ;140(15):1617 - 1622. Available from: http://www.ncbi.nlm.nih.gov/pubmed/16817175

http://www.sciencedaily.com/releases/2006/07/060707151549.htm

Nothing special about face recognition

A new study adds to a growing body of evidence that there is nothing special about face recognition. The researchers have found experimental support for their model of how a brain circuit for face recognition could work. The model shows how face recognition can occur simply from selective processing of shapes of facial features. Moreover, the model equally well accounted for the recognition of cars.

Jiang X, Rosen E, Zeffiro T, VanMeter J, Blanz V, Riesenhuber M. Evaluation of a Shape-Based Model of Human Face Discrimination Using fMRI and Behavioral Techniques. Neuron [Internet]. 2006 ;50(1):159 - 172. Available from: http://www.cell.com/neuron/abstract/S0896-6273(06)00205-4

http://www.eurekalert.org/pub_releases/2006-04/cp-eht033106.php

Rare learning disability particularly impacts face recognition

A study of 14 children with Nonverbal Learning Disability (NLD) has found that the children were poor at recognizing faces. NLD has been associated with difficulties in visual spatial processing, but this specific deficit with faces hasn’t been identified before. NLD affects less than 1% of the population and appears to be congenital.

Liddell GA, Rasmussen C. Memory Profile of Children with Nonverbal Learning Disability. Learning Disabilities Research & Practice [Internet]. 2005 ;20(3):137 - 141. Available from: http://dx.doi.org/10.1111/j.1540-5826.2005.00128.x

http://www.eurekalert.org/pub_releases/2005-08/uoa-sra081005.php

Single cell recognition research finds specific neurons for concepts

An intriguing study surprises cognitive researchers by showing that individual neurons in the medial temporal lobe are able to recognize specific people and objects. It’s long been thought that concepts such as these require a network of cells, and this doesn’t deny that many cells are involved. However, this new study points to the importance of a single brain cell. The study of 8 epileptic subjects found that responses varied from subject to subject, but within each subject responses were remarkably specific to particular concepts. For example, a single neuron in the left posterior hippocampus of one subject responded to all pictures of actress Jennifer Aniston, and also to Lisa Kudrow, her co-star on the TV hit "Friends", but not to pictures of Jennifer Aniston together with actor Brad Pitt, and not, or only very weakly, to other famous and non-famous faces, landmarks, animals or objects. In another patient, pictures of actress Halle Berry activated a neuron in the right anterior hippocampus, as did a caricature of the actress, images of her in the lead role of the film "Catwoman," and a letter sequence spelling her name. The results suggest an invariant, sparse and explicit code, which might be important in the transformation of complex visual percepts into long-term and more abstract memories.

Quiroga QR, Reddy L, Kreiman G, Koch C, Fried I. Invariant visual representation by single neurons in the human brain. Nature [Internet]. 2005 ;435(7045):1102 - 1107. Available from: http://dx.doi.org/10.1038/nature03687

http://www.eurekalert.org/pub_releases/2005-06/uoc--scr062005.php

Evidence faces are processed like words

It has been suggested that faces and words are recognized differently: that faces are identified as wholes, whereas words and other objects are identified by parts. However, a recent study devised a new test which found that people use letters to recognize words and facial features to recognize faces.

Martelli M, Majaj NJ, Pelli DG. Are faces processed like words? A diagnostic test for recognition by parts. Journal of Vision [Internet]. 2005 ;5(1). Available from: http://www.journalofvision.org/content/5/1/6.abstract

You can read this article online at http://www.journalofvision.org//5/1/6/.

http://www.eurekalert.org/pub_releases/2005-03/afri-ssf030705.php

Face blindness runs in families

A study of those with prosopagnosia (face blindness) and their relatives has revealed a genetic basis to the neurological condition. An earlier questionnaire study by the same researcher (himself prosopagnosic) suggests the impairment may be more common than has been thought. The study involved 576 biology students. Nearly 2% reported face-blindness symptoms.

Grueter M, Grueter T, Bell V, Horst J, Laskowski W, Sperling K, Halligan PW, Ellis HD, Kennerknecht I. Hereditary Prosopagnosia: the First Case Series. Cortex [Internet]. 2007 ;43(6):734 - 749. Available from: http://www.sciencedirect.com/science/article/pii/S0010945208705021

http://www.newscientist.com/article.ns?id=dn7174

Faces must be seen to be recognized

In an interesting new perspective on face recognition, a series of perception experiments have revealed that identifying a face depends on actually seeing it, as opposed to merely having the image of the face fall on the retina. In other words, attention is necessary.

Moradi F, Koch C, Shimojo S. Face Adaptation Depends on Seeing the Face. Neuron [Internet]. 2005 ;45(1):169 - 175. Available from: http://www.cell.com/neuron/abstract/S0896-6273(04)00834-7

http://www.eurekalert.org/pub_releases/2005-01/cp-fmb122904.php

New insight into the relationship between recognizing faces and recognizing expressions

The quest to create a computer that can recognize faces and interpret facial expressions has given new insight into how the human brain does it. A study using faces photographed with four different facial expressions (happy, angry, screaming, and neutral), with different lighting, and with and without different accessories (like sunglasses), tested how long people took to decide if two faces belonged to the same person. Another group was tested to see how fast they could identify the expressions. It was found that people were quicker to recognize faces and facial expressions that involved little muscle movement, and slower to recognize expressions that involved a lot of movement. This supports the idea that recognition of faces and recognition of facial expressions are linked – apparently through the part of the brain that helps us understand motion.

Martínez AM. Matching expression variant faces. Vision Research [Internet]. 2003 ;43(9):1047 - 1060. Available from: http://www.ncbi.nlm.nih.gov/pubmed/12676247

http://www.osu.edu/researchnews/archive/compvisn.htm

How the brain is wired for faces

The question of how special face recognition is — whether it is a process quite distinct from recognition of other objects, or whether we are simply highly practiced at this particular type of recognition — has been a subject of debate for some time. A new imaging study has concluded that the fusiform face area (FFA), a brain region crucially involved in face recognition, extracts configural information about faces rather than processing spatial information on the parts of faces. The study also indicated that the FFA is only involved in face recognition.

Yovel, G. & Kanwisher, N. 2004. Face Perception: Domain Specific, Not Process Specific. Neuron, 44 (5), 889–898.

http://www.eurekalert.org/pub_releases/2004-12/cp-htb112304.php

How the brain recognizes a face

Face recognition involves at least three stages. An imaging study has now localized these stages to particular regions of the brain. It was found that the inferior occipital gyrus was particularly sensitive to slight physical changes in faces. The right fusiform gyrus (RFG) appeared to be involved in making a more general appraisal of the face, comparing it to the brain's database of stored memories to see if it is someone familiar. The third activated region, the anterior temporal cortex (ATC), is believed to store facts about people and is thought to be an essential part of the identifying process.

Rotshtein, P., Henson, R.N.A., Treves, A., Driver, J. & Dolan, R.J. 2005. Morphing Marilyn into Maggie dissociates physical and identity face representations in the brain. Nature Neuroscience, 8, 107-113.

http://news.bbc.co.uk/go/pr/fr/-/2/hi/health/4086319.stm

Memories of crime stories influenced by racial stereotypes

The influence of stereotypes on memory, a well-established phenomenon, has been demonstrated anew in a study concerning people's memory of news photographs. In the study, 163 college students (of whom 147 were White) examined one of four types of news stories, all about a hypothetical Black man. Two of the stories were not about crime, the third dealt with non-violent crime, while the fourth focused on violent crime. All four stories included an identical photograph of the same man. Afterwards, participants reconstructed the photograph by selecting from a series of facial features presented on a computer screen. It was found that selected features didn’t differ from the actual photograph in the non-crime conditions, but for the crime stories, more pronounced African-American features tended to be selected, particularly so for the story concerning violent crime. Participants appeared largely unaware of their associations of violent crime with the physical characteristics of African-Americans.

Oliver MB, Jackson, II RL, Moses NN, Dangerfield CL. The Face of Crime: Viewers' Memory of Race-Related Facial Features of Individuals Pictured in the News. The Journal of Communication [Internet]. 2004 ;54(1):88 - 104. Available from: http://dx.doi.org/10.1111/j.1460-2466.2004.tb02615.x

http://www.eurekalert.org/pub_releases/2004-05/ps-rmo050504.php

Special training may help people with autism recognize faces

People with autism tend to activate object-related brain regions when they are viewing unfamiliar faces, rather than a specific face-processing region. They also tend to focus on particular features, such as a mustache or a pair of glasses. However, a new study has found that when people with autism look at a picture of a very familiar face, such as their mother's, their brain activity is similar to that of control subjects – involving the fusiform gyrus, a region in the brain's temporal lobe that is associated with face processing, rather than the inferior temporal gyrus, an area associated with objects. Use of the fusiform gyrus in recognizing faces is a process that starts early with non-autistic people, but does take time to develop (usually complete by age 12). The study indicates that the fusiform gyrus in autistic people does have the potential to function normally, but may need special training to operate properly.

Aylward, E. 2004. Functional MRI studies of face processing in adolescents and adults with autism: Role of experience. Paper presented February 14 at the annual meeting of the American Association for the Advancement of Science in Seattle.

Dawson, G. & Webb, S. 2004. Event related potentials reveal early abnormalities in face processing autism. Paper presented February 14 at the annual meeting of the American Association for the Advancement of Science in Seattle.

http://www.eurekalert.org/pub_releases/2004-02/uow-stm020904.php

How faces become familiar

With faces, familiarity makes a huge difference. Even when pictures are high quality and faces are shown at the same time, we make a surprising number of mistakes when trying to decide if two pictures are of the same person – when the face is unknown to us. On the other hand, even when picture quality is very poor, we’re very good at recognising familiar faces. So how do faces become familiar to us? Recent research led by Vicki Bruce (well-known in this field) showed volunteers video sequences of people, episodes of unfamiliar soap operas, and images of familiar but previously unseen characters from radio's The Archers and voices from The Simpsons. They confirmed previous research suggesting that for unfamiliar faces, memory appears dominated by the 'external' features, but where the face is well-known it is 'internal' features such as the eyes, nose and mouth, that are more important. The shift to internal features occurred rapidly, within minutes. Speed of learning was unaffected by whether the faces were experienced as static or moving images, or with or without accompanying voices, but faces which belonged to well-known, though previously unseen, personal identities were learned more easily.

Bruce, V., Burton, M. et al. 2003. Getting To Know You – How We Learn New Faces. A research report funded by the Economic and Social Research Council (ESRC).

http://www.eurekalert.org/pub_releases/2003-06/esr-hs061603.php
http://www.esrc.ac.uk/esrccontent/news/june03-5.asp

Face recognition may not be a special case

Many researchers have argued that the brain processes faces quite separately from other objects — that faces are a special class. Research has shown many ways in which face recognition does seem to be a special case, but it could be argued that the differences are due not to a separate processing system, but to people’s expertise with faces. We have, after all, plenty of evidence that babies are programmed right from the beginning to pay lots of attention to faces. A new study has endeavored to answer this question, by looking at separate and concurrent perception of faces and cars, by people who were “car buffs” and those who were not. If expert processing of these objects depends on a common mechanism (presumed to be related to the perception of objects as wholes), then car perception would be expected to interfere with concurrent face perception. Moreover, such interference should get worse as the subjects become more expert at processing cars. This is indeed what was found. Experts were found to recognize cars holistically, and this holistic recognition interfered with their recognition of familiar faces, while novices processed the cars piece by piece, in a slower process that did not interfere with face recognition. This study follows on from earlier research in which car fanciers and bird watchers were found to identify cars and birds, respectively, using the same area of the brain as is used in face recognition. A subsequent study found that people trained to identify novel, computer-generated objects began to recognize them holistically (as is done in face recognition). This latest study shows that not only is experts’ car recognition occurring in the same brain region as face recognition, but that the same neural circuits are involved.

Gauthier I, Curran T, Curby KM, Collins D. Perceptual interference supports a non-modular account of face processing. Nat Neurosci [Internet]. 2003 ;6(4):428 - 432. Available from: http://dx.doi.org/10.1038/nn1029

http://www.eurekalert.org/pub_releases/2003-03/vu-cfe030503.php
http://www.nytimes.com/2003/03/11/health/11PERC.html

Detection of foreign faces faster than faces of your own race

A recent study tracked the time it takes for the brain to perceive the faces of people of other races as opposed to faces from the same race. The faces were mixed with images of everyday objects, and the subjects were given the distracting task of counting butterflies. The study found that the Caucasian subjects took longer to detect Caucasian faces than Asian faces. The study complements an earlier imaging study that showed that, when people are actively trying to recognize faces, they are better at recognizing members of their own race. [see Why recognizing a face is easier when the race matches our own]

Caldara R, Thut G, Servoir P, Michel CM, Bovet P, Renault B. Face versus non-face object perception and the ‘other-race’ effect: a spatio-temporal event-related potential study. Clinical Neurophysiology [Internet]. 2003 ;114(3):515 - 528. Available from: http://www.sciencedirect.com/science/article/pii/S1388245702004078

http://news.bmn.com/news/story?day=030108&story=1

Women better at recognizing female but not male faces

Women’s superiority in face recognition tasks appears to be due to their better recognition of female faces. There was no difference between men and women in the recognition of male faces.

Lewin C, Herlitz A. Sex differences in face recognition--Women's faces make the difference. Brain and Cognition [Internet]. 2002 ;50(1):121 - 128. Available from: http://www.sciencedirect.com/science/article/B6WBY-46WVHDY-C/2/20e92b605a3fb8210460c4766ba66d35

Imaging confirms people knowledge processed differently

Earlier research has demonstrated that semantic knowledge for different classes of inanimate objects (e.g., tools, musical instruments, and houses) is processed in different brain regions. A new imaging study looked at knowledge about people, and found a unique pattern of brain activity was associated with person judgments, supporting the idea that person knowledge is functionally dissociable from other classes of semantic knowledge within the brain.

Mitchell JP, Heatherton TF, Macrae NC. Distinct neural systems subserve person and object knowledge. Proceedings of the National Academy of Sciences of the United States of America [Internet]. 2002 ;99(23):15238 - 15243. Available from: http://www.pnas.org/content/99/23/15238.abstract

http://www.pnas.org/cgi/content/abstract/99/23/15238?etoc

Identity memory area localized

An imaging study investigating brain activation when people were asked to answer yes or no to statements about themselves (e.g. 'I forget important things', 'I'm a good friend', 'I have a quick temper'), found consistent activation in the anterior medial prefrontal and posterior cingulate. This is consistent with lesion studies, and suggests that these areas of the cortex are involved in self-reflective thought.

Johnson SC, Baxter LC, Wilder LS, Pipe JG, Heiserman JE, Prigatano GP. Neural correlates of self-reflection. Brain [Internet]. 2002 ;125(8):1808 - 1814. Available from: http://brain.oxfordjournals.org/cgi/content/abstract/125/8/1808

http://brain.oupjournals.org/cgi/content/abstract/125/8/1808

Recognizing yourself is different from recognizing other people

Recognition of familiar faces occurs largely in the right side of the brain, but new research suggests that identifying your own face occurs more in the left side of your brain. Evidence for this comes from a split-brain patient (a person whose corpus callosum – the main bridge of nerve fibers between the two hemispheres of the brain - has been severed to minimize the spread of epileptic seizure activity). The finding needs to be confirmed in studies of people with intact brains, but it suggests not only that there is a distinction between recognizing your self and recognizing other people you know well, but also that memories and knowledge about oneself may be stored largely in the left hemisphere.

Turk DJ, Heatherton TF, Kelley WM, Funnell MG, Gazzaniga MS, Macrae NC. Mike or me? Self-recognition in a split-brain patient. Nat Neurosci [Internet]. 2002 ;5(9):841 - 842. Available from: http://dx.doi.org/10.1038/nn907

http://www.nature.com/neurolink/v5/n9/abs/nn907.html
http://www.sciencenews.org/20020824/fob8.asp

Differential effects of encoding strategy on brain activity patterns

Encoding and recognition of unfamiliar faces in young adults were examined using PET imaging to determine whether different encoding strategies would lead to differences in brain activity. It was found that encoding activated a primarily ventral system including bilateral temporal and fusiform regions and left prefrontal cortices, whereas recognition activated a primarily dorsal set of regions including right prefrontal and parietal areas. The type of encoding strategy produced different brain activity patterns. There was no effect of encoding strategy on brain activity during recognition. The left inferior prefrontal cortex was engaged during encoding regardless of strategy.

Bernstein LJ, Beig S, Siegenthaler AL, Grady CL. The effect of encoding strategy on the neural correlates of memory for faces. Neuropsychologia [Internet]. 2002 ;40(1):86 - 98. Available from: http://www.ncbi.nlm.nih.gov/pubmed/11595264

http://tinyurl.com/i87v

Babies' experience with faces leads to narrowing of perception

A theory that infants' experience in viewing faces causes their brains (in particular an area of the cerebral cortex known as the fusiform gyrus) to "tune in" to the types of faces they see most often and tune out other types, has been given support from a study showing that 6-month-old babies were significantly better than both adults and 9-month-old babies in distinguishing the faces of monkeys. All groups were able to distinguish human faces from one another.

Pascalis O, de Haan M, Nelson CA. Is Face Processing Species-Specific During the First Year of Life?. Science [Internet]. 2002 ;296(5571):1321 - 1323. Available from: http://www.sciencemag.org/cgi/content/abstract/296/5571/1321

http://www.eurekalert.org/pub_releases/2002-05/uom-ssi051302.php
http://news.bbc.co.uk/hi/english/health/newsid_1991000/1991705.stm
http://www.eurekalert.org/pub_releases/2002-05/aaft-bbl050902.php

Different brain regions implicated in the representation of the structure and meaning of pictured objects

Imaging studies continue apace! Having established that the part of the brain known as the fusiform gyrus is important in picture naming, a new study further refines our understanding by studying the cerebral blood flow (CBF) changes in response to a picture naming task that varied on two dimensions: familiarity (or difficulty: hard vs easy) and category (tools vs animals). Results show that although familiarity effects are present in the frontal and left lateral posterior temporal cortex, they are absent from the fusiform gyrus. The authors conclude that the fusiform gyrus processes information relating to an object's structure, rather than its meaning. The blood flows suggest that it is the left posterior middle temporal gyrus that is involved in representing the object's meaning.

Whatmough C, Chertkow H, Murtha S, Hanratty K. Dissociable brain regions process object meaning and object structure during picture naming. Neuropsychologia [Internet]. 2002 ;40(2):174 - 186. Available from: http://www.sciencedirect.com/science/article/B6T0D-4465750-6/2/0c2055de1cc1afdee26f18f2f0b0e848

Debate over how the brain deals with visual information

Neuroscientists can't agree on whether the brain uses specific regions to distinguish specific objects, or patterns of activity from different regions. The debate over how the brain deals with visual information has been re-ignited with apparently contradictory findings from two research groups. One group has pinpointed a distinct region in the brain that responds selectively to images of the human body, while another concludes that the representations of a wide range of image categories are dealt with by overlapping brain regions. (see below)

Specific brain region responds specifically to images of the human body

Cognitive neuroscientists have identified a new area of the human brain that responds specifically when people view images of the human body. They have named this region of the brain the 'extrastriate body area' or 'EBA'. The EBA can be distinguished from other known anatomical subdivisions of the visual cortex. However, the EBA is in a region of the brain called the posterior superior temporal sulcus, where other areas have been implicated in the perception of socially relevant information such as the direction that another person's eyes are gazing, the sound of human voices, or the inferred intentions of animate entities.

Brain scan patterns identify objects being viewed

National Institute of Mental Health (NIMH) scientists have shown that they can tell what kind of object a person is looking at — a face, a house, a shoe, a chair — by the pattern of brain activity it evokes. Earlier NIMH fMRI studies had shown that brain areas that respond maximally to a particular category of object are consistent across different people. This new study finds that the full pattern of responses — not just the areas of maximal activation — is consistent within the same person for a given category of object. Overall, the pattern of fMRI responses predicted the category with 96% accuracy. Accuracy was 100% for faces, houses and scrambled pictures.
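For readers curious how a "pattern of responses" can predict what someone is looking at, here is a minimal sketch of split-half correlation classification, the general style of pattern analysis used in work like this (the data below are synthetic, and the paper's actual pipeline differs in its details): each category's response pattern from one half of the data is compared, voxel by voxel, with the category templates from the other half, and the best-correlated category wins.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "voxel response patterns" for four object categories,
    # split into two halves of data. Entirely made-up numbers, used only
    # to show the classification logic.
    categories = ["face", "house", "shoe", "chair"]
    true_patterns = {c: rng.normal(size=50) for c in categories}
    half1 = {c: true_patterns[c] + rng.normal(scale=0.5, size=50) for c in categories}
    half2 = {c: true_patterns[c] + rng.normal(scale=0.5, size=50) for c in categories}

    def classify(pattern, templates):
        """Return the category whose template correlates best with the pattern."""
        scores = {c: np.corrcoef(pattern, t)[0, 1] for c, t in templates.items()}
        return max(scores, key=scores.get)

    correct = sum(classify(half2[c], half1) == c for c in categories)
    print(f"{correct}/{len(categories)} categories correctly identified")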

Downing PE, Jiang Y, Shuman M, Kanwisher N. A Cortical Area Selective for Visual Processing of the Human Body. Science [Internet]. 2001 ;293(5539):2470 - 2473. Available from: http://www.sciencemag.org/cgi/content/abstract/293/5539/2470

Haxby JV, Gobbini IM, Furey ML, Ishai A, Schouten JL, Pietrini P. Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex. Science [Internet]. 2001 ;293(5539):2425 - 2430. Available from: http://www.sciencemag.org/cgi/content/abstract/293/5539/2425

http://www.eurekalert.org/pub_releases/2001-09/niom-bsp092601.php
http://www.sciencemag.org/cgi/content/abstract/293/5539/2425

Why recognizing a face is easier when the race matches our own

We have known for a while that recognizing a face is easier when its owner's race matches our own. An imaging study now shows that greater activity in the brain's expert face-discrimination area occurs when the subject is viewing faces that belong to members of the same race as their own.

Golby, A. J., Gabrieli, J. D. E., Chiao, J. Y. & Eberhardt, J. L. 2001. Differential responses in the fusiform region to same-race and other-race faces. Nature Neuroscience, 4, 845-850.

http://www.nature.com/nsu/010802/010802-1.html

Boys' and girls' brains process faces differently

Previous research has suggested a right-hemisphere superiority in face processing, as well as adult male superiority at spatial and non-verbal skills (also associated with the right hemisphere of the brain). This study looked at face recognition and the ability to read facial expressions in young, pre-pubertal boys and girls. Boys and girls were equally good at recognizing faces and identifying expressions, but boys showed significantly greater activity in the right hemisphere, while the girls' brains were more active in the left hemisphere. It is speculated that boys tend to process faces at a global level (right hemisphere), while girls process faces at a more local level (left hemisphere). This may mean that females have an advantage in reading fine details of expression. More importantly, it may be that different treatments might be appropriate for males and females in the case of brain injury.

Everhart ED, Shucard JL, Quatrin T, Shucard DW. Sex-related differences in event-related potentials, face recognition, and facial affect processing in prepubertal children. Neuropsychology. 2001 ;15(3):329 - 341.

http://www.eurekalert.org/pub_releases/2001-07/aaft-pba062801.php
http://news.bbc.co.uk/hi/english/health/newsid_1425000/1425797.stm

Children's recognition of faces

Children aged 4 to 7 were found to be able to use both configural and featural information to recognize faces. However, even when trained to proficiency on recognizing the target faces, their recognition was impaired when a superfluous hat was added to the face.

Freire A, Lee K. Face Recognition in 4- to 7-Year-Olds: Processing of Configural, Featural, and Paraphernalia Information. Journal of Experimental Child Psychology [Internet]. 2001 ;80(4):347 - 371. Available from: http://www.sciencedirect.com/science/article/B6WJ9-457D48M-3/2/cb66483ea30cd07cb6c2047ade7b1e57

Differences in face perception processing between autistic and normal adults

An imaging study compared activation patterns of adults with autism and normal control subjects during a face perception task. While autistic subjects could perform the face perception task, none of the regions supporting face processing in normals were found to be significantly active in the autistic subjects. Instead, in every autistic patient, faces maximally activated aberrant and individual-specific neural sites (e.g. frontal cortex, primary visual cortex, etc.), which was in contrast to the 100% consistency of maximal activation within the traditional fusiform face area (FFA) for every normal subject. It appears that, as compared with normal individuals, autistic individuals 'see' faces utilizing different neural systems, with each patient doing so via a unique neural circuitry.

Pierce K, Muller R-A, Ambrose J, Allen G, Courchesne E. Face processing occurs outside the fusiform 'face area' in autism: evidence from functional MRI. Brain [Internet]. 2001 ;124(10):2059 - 2073. Available from: http://brain.oxfordjournals.org/cgi/content/abstract/124/10/2059

http://brain.oupjournals.org/cgi/content/abstract/124/10/2059

Why we remember more from young adulthood than from any other period

Autobiographical memory is an interesting memory domain, given its inextricable association with identity. One particularly fascinating aspect of it is its unevenness: why do we remember so little from the first years of life ('childhood amnesia'), and why do we remember some periods of our life so much more vividly than others? There are obvious answers (well, nothing interesting happened in those other times), but the obvious is not always correct. Intriguing, then, to read about a new study that links those memorable periods to self-identity.

Self-imagination helps memory in both healthy and memory-impaired

December, 2012

A small study involving patients with TBI has found that the best learning strategies are ones that call on the self-schema rather than episodic memory, and the best involves self-imagination.

Some time ago, I reported on a study showing that older adults could improve their memory for a future task (remembering to regularly test their blood sugar) by picturing themselves going through the process. Imagination has been shown to be a useful strategy in improving memory (and also motor skills). A new study extends and confirms previous findings, by testing free recall and comparing self-imagination to more traditional strategies.

The study involved 15 patients with acquired brain injury who had impaired memory and 15 healthy controls. Participants memorized five lists of 24 adjectives that described personality traits, using a different strategy for each list. The five strategies were:

  • think of a word that rhymes with the trait (baseline),
  • think of a definition for the trait (semantic elaboration),
  • think about how the trait describes you (semantic self-referential processing),
  • think of a time when you acted out the trait (episodic self-referential processing), or
  • imagine acting out the trait (self-imagining).

For both groups, self-imagination produced the highest rates of free recall of the list (an average of 9.3 for the memory-impaired, compared to 3.2 using the baseline strategy; 8.1 vs 3.2 for the controls — note that the controls were given all 24 items in one list, while the memory-impaired were given 4 lists of 6 items).

Additionally, those with impaired memory did better using semantic self-referential processing than episodic self-referential processing (7.3 vs 5.7). In contrast, the controls did much the same in both conditions. This adds to the evidence that patients with brain injury often have a particular problem with episodic memory (knowledge about specific events). Episodic memory is also particularly affected in Alzheimer’s, as well as in normal aging and depression.

It’s also worth noting that all the strategies that involved the self were more effective than the two strategies that didn’t, for both groups (also, semantic elaboration was better than the baseline strategy).

The researchers suggest self-imagination (and semantic self-referential processing) might be of particular benefit for memory-impaired patients, by encouraging them to use information they can more easily access (information about their own personality traits, identity roles, and lifetime periods — what is termed the self-schema), and that future research should explore ways in which self-imagination could be used to support everyday memory tasks, such as learning new skills and remembering recent events.


Autism therapy can normalize face processing

November, 2012

A small study shows that an intensive program to help young children with autism not only improves cognition and behavior, but can also normalize brain activity for face processing.

The importance of early diagnosis for autism spectrum disorder has been highlighted by a recent study demonstrating the value of an educational program for toddlers with ASD.

The study involved 48 toddlers (18-30 months) diagnosed with autism and age-matched normally developing controls. Those with ASD were randomly assigned to participate in a two-year program called the Early Start Denver Model, or a standard community program.

The ESDM program involved two-hour sessions by trained therapists twice a day, five days every week. Parent training also enabled ESDM strategies to be used during daily activities. The program emphasizes interpersonal exchange, social attention, and shared engagement. It also includes training in face recognition, using individualized booklets of color photos of the faces of four familiar people.

The community program involved evaluation and advice, annual follow-up sessions, programs at Birth-to-Three centers and individual speech-language therapy, occupational therapy, and/or applied behavior analysis treatments.

All of those in the ESDM program were still participating at the end of the two years, compared to 88% of the community program participants.

At the end of the program, children were assessed on various cognitive and behavioral measures, as well as brain activity.

Compared with children who participated in the community program, children who received ESDM showed significant improvements in IQ, language, adaptive behavior, and autism diagnosis. Average verbal IQ for the ESDM group was 95 compared to an average 75 for the community group, and 93 vs 80 for nonverbal IQ. These are dramatically large differences, although it must be noted that individual variability was high.

Moreover, for the ESDM group, brain activity in response to faces was similar to that of normally-developing children, while the community group showed the pattern typical of autism (greater activity in response to objects compared to faces). This was associated with improvements in social behavior.

Again, there were significant individual differences. Specifically, 73% of the ESDM group, 53% of the control group, and 29% of the community group, showed a pattern of faster response to faces. (Bear in mind, re the control group, that these children are all still quite young.) It should also be borne in mind that it was difficult to get usable EEG data from many of the children with ASD — these results come from only 60% of the children with ASD.

Nevertheless, the findings are encouraging for parents looking to help their children.

It should also be noted that, although obviously earlier is better, the findings don’t rule out benefits for older children or even adults. Relatively brief targeted training in face recognition has been shown to affect brain activity patterns in adults with ASD.


Negative gossip sharpens attention

July, 2011

Faces of people about whom something negative was known were perceived more quickly than faces of people about whom nothing, or something positive or neutral, was known.

Here’s a perception study with an intriguing twist. In my recent round-up of perception news I spoke of how images with people in them were more memorable, and of how some images ‘jump out’ at you. This study showed different images to each participant’s left and right eye at the same time, creating a contest between them. The amount of time it takes the participant to report seeing each image indicates the relative priority granted by the brain.

So, 66 college students were shown faces of people, and told something ‘gossipy’ about each one. The gossip could be negative, positive or neutral — for example, the person “threw a chair at a classmate”; “helped an elderly woman with her groceries”; “passed a man on the street.” These faces were then shown to one eye while the other eye saw a picture of a house.

The students had to press one button when they could see a face and another when they saw a house. As a control, some faces were used that the students had never seen. The students took the same length of time to register seeing the unknown faces and those about which they had been told neutral or positive information, but pictures of people about whom they had heard negative information registered around half a second quicker, and were looked at for longer.

A second experiment confirmed the findings and showed that subjects saw the faces linked to negative gossip for longer periods than faces of people about whom they had heard of upsetting personal experiences.


Simple training helps infants maintain ability to distinguish other-race faces

July, 2011

New research confirms the role of experience in the other race effect, and shows how easily the problem in discriminating faces belonging to other races might be prevented.

Our common difficulty in recognizing faces that belong to races other than our own (or more specifically, those we have less experience of) is known as the Other Race Effect. Previous research has revealed that six-month-old babies show no signs of this bias, but by nine months, their ability to recognize faces is reduced to those races they see around them.

Now, an intriguing study has looked into whether infants can be trained in such a way that they can maintain the ability to process other-race faces. The study involved 32 six-month-old Caucasian infants, who were shown picture books that contained either Chinese (training group) or Caucasian (control group) faces. There were eight different books, each containing either six female faces or six male faces (with names). Parents were asked to present the pictures in the book to their child for 2–3 minutes every day for 1 week, then every other day for the next week, and then less frequently (approximately once every 6 days) following a fixed schedule of exposures during the 3-month period (equating to approximately 70 minutes of exposure overall).

When tested at nine months, there were significant differences between the two groups that indicated that the group who trained on the Chinese faces had maintained their ability to discriminate Chinese faces, while those who had trained on the Caucasian faces had lost it (specifically, they showed no preference for novel or familiar faces, treating them both the same).

It’s worth noting that the babies generalized from the training pictures, all of which showed the faces in the same “passport photo” type pose, to a different orientation (three-quarter pose) during test trials. This finding indicates that infants were actually learning the face, not simply an image.


Better reading may mean poorer face recognition

January, 2011

Evidence that illiterates use a brain region involved in reading for face processing to a greater extent than readers do, suggests that reading may have hijacked the network used for object recognition.

An imaging study of 10 illiterates, 22 people who learned to read as adults and 31 who did so as children, has confirmed that the visual word form area (involved in linking sounds with written symbols) showed more activation in better readers, although everyone had similar levels of activation in that area when listening to spoken sentences. More importantly, it also revealed that this area was much less active among the better readers when they were looking at pictures of faces.

Other changes in activation patterns were also evident (for example, readers showed greater activation in the planum temporale in response to spoken language), and most of the changes occurred even among those who acquired literacy in adulthood — showing that the brain re-structuring doesn’t depend on a particular time-window.

The finding of competition between face and word processing is consistent with the researcher’s theory that reading may have hijacked a neural network used to help us visually track animals, and raises the intriguing possibility that our face-perception abilities suffer in proportion to our reading skills.


Face-blindness an example of inability to generalize

October, 2010

It seems that prosopagnosia can be, along with perfect pitch and eidetic memory, an example of what happens when your brain can’t abstract the core concept.

‘Face-blindness’ — prosopagnosia — is a condition I find fascinating, perhaps because I myself have a touch of it (it’s now recognized that this condition represents the end of a continuum rather than being an either/or proposition). The intriguing thing about this inability to recognize faces is that, in its extreme form, it can nevertheless exist side-by-side with quite normal recognition of other objects.

Prosopagnosia that is not the result of brain damage often runs in families, and a study of three family members with this condition has revealed that in some cases at least, the inability to remember faces has to do with failing to form a mental representation that abstracts the essence of the face, sans context. That is, despite being fully able to read facial expressions, attractiveness and gender from the face (indeed one of the family members is an artist who has no trouble portraying fully detailed faces), they couldn’t cope with changes in lighting conditions and viewing angles.

I’m reminded of the phenomenon of perfect pitch, which is characterized by an inability to generalize across acoustically similar tones, so an A in a different key is a completely different note. Interestingly, like prosopagnosia, perfect pitch is now thought to be more common than has been thought (recognition of it is of course limited by the fact that some musical expertise is generally needed to reveal it). This inability to abstract or generalize is also a phenomenon of eidetic memory, and I have spoken before of the perils of this.

(Note: A fascinating account of what it is like to be face-blind, from a person with the condition, can be found at: http://www.choisser.com/faceblind/)


Face recognition ability inherited separately from IQ

January, 2010

Providing support for a modular concept of the brain, a twin study has found that face recognition is heritable, and that it is inherited separately from IQ.

No surprise to me (I’m hopeless at faces), but a twin study has found that face recognition is heritable, and that it is inherited separately from IQ. The findings provide support for a modular concept of the brain, suggesting that some cognitive abilities, like face recognition, are shaped by specialist genes rather than generalist genes. The study used 102 pairs of identical twins and 71 pairs of fraternal twins aged 7 to 19 from Beijing schools to calculate that 39% of the variance between individuals on a face recognition task is attributable to genetic effects. In an independent sample of 321 students, the researchers found that face recognition ability was not correlated with IQ.
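For readers wondering how twin data turn into a figure like "39% of the variance", the classic back-of-the-envelope method is Falconer's formula, which doubles the difference between identical-twin and fraternal-twin similarity. The snippet below is only an illustration of that formula: the correlations are hypothetical placeholders (chosen so the arithmetic lands near the reported figure), and the study itself used formal model fitting rather than this shortcut.

    # Falconer's estimate of heritability from twin correlations:
    #   h^2 = 2 * (r_MZ - r_DZ)
    # The correlations below are hypothetical placeholders, not values from the study.
    r_mz = 0.55    # similarity of identical (monozygotic) twin pairs on the face task
    r_dz = 0.355   # similarity of fraternal (dizygotic) twin pairs
    h2 = 2 * (r_mz - r_dz)
    print(f"Estimated heritability: {h2:.0%}")   # prints: Estimated heritability: 39%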

Reference: 

Zhu, Q. et al. 2010. Heritability of the specific cognitive ability of face perception. Current Biology, 20 (2), 137-142.
