familiarity

Why people with Alzheimer's stop recognizing their loved ones

  • A finding that Alzheimer's sufferers' failure to recognize familiar faces is rooted in an impairment in holistic perception rather than memory loss suggests new strategies to help patients recognize their loved ones for longer.

People with Alzheimer's disease develop problems in recognizing familiar faces. It has been thought that this is just part of their general impairment, but a new study indicates that a specific, face-related impairment develops early in the disease. This impairment has to do with the recognition of a face as a whole.

Face recognition has two aspects to it: holistic (seeing the face as a whole) and featural (processing individual features of the face). While both are useful in object recognition, expert recognition (and face recognition is usually something humans are expert in) is built on a shift from featural to holistic processing.

The study compared the ability of people with mild Alzheimer's and healthy age- and education-matched seniors to recognize faces and cars in photos that were either upright or upside down. It found that those with Alzheimer's performed comparably to the control group in processing the upside-down faces and cars. This type of processing requires an analysis of the various features. Those with Alzheimer's also performed normally in recognizing upright cars (car experts are likely to use holistic processing, but those with less expertise will depend more on featural processing). However, they were much slower and less accurate in recognizing upright faces.

The realization that impaired face recognition stems from a problem with holistic perception, rather than being simply another failure of memory, suggests that strategies such as focusing on particular facial features or on voice recognition may help patients recognize their loved ones for longer.

http://www.eurekalert.org/pub_releases/2016-04/uom-wdp040816.php

Each memory experience biases how you approach the next one

September, 2012

A new study provides evidence that our decision to encode information as new, or to try to retrieve it from long-term memory, is affected by how we treated the last bit of information we processed.

Our life experiences contain a wealth of new and old information. The relative proportions of these change, of course, as we age. But how do we know whether we should be encoding new information or retrieving old information? It's easy if the information is readily accessible, but what if it's not? Bear in mind that (especially as we get older) most information and experiences we encounter share some similarity to information we already have.

This question is made even more meaningful when you consider that it is the same brain region — the hippocampus — that's involved in both encoding and retrieval, and that these two functions are thought to depend on quite opposite processes. While encoding is thought to rely on pattern separation (looking for differences between inputs), retrieval is thought to depend on pattern completion (filling in a stored pattern from partial cues).

A recent study looked at what happens in the brain when people rapidly switch between encoding new objects and retrieving recently presented ones. Participants were shown 676 pictures of objects and asked to identify each one as being shown for the first time (‘new’), being repeated (‘old’), or as a modified version of something shown earlier (‘similar’). Recognizing the similar items as similar was the question of interest, as these items contain both old and new information and so the brain’s choice between encoding and retrieval is more difficult.

What they found was that participants were more likely to recognize similar items as similar (rather than old) if they had viewed a new item on the preceding trial. In other words, the experience of a new item primed them to notice novelty. Or to put it in another way: context biases the hippocampus toward either pattern completion or pattern separation.

This was supported by a further experiment, in which participants were shown both the object pictures, and also learned associations between faces and scenes. Critically, each scene was associated with two different faces. In the next learning phase, participants were taught a new scene association for one face from each pair. Each face-scene learning trial was preceded by an object recognition trial (new and old objects were shown and participants had to identify them as old or new) — critically, either a new or old object was consistently placed before a specific face-scene association. In the final test phase, participants were tested on the new face-scene associations they had just learned, as well as the indirect associations they had not been taught (that is, between the face of each pair that had not been presented during the preceding phase, and the scene associated with its partnered face).

What this found was that participants were more likely to correctly pair the indirectly related faces and scenes if those trials had been consistently preceded by old objects rather than new ones. Moreover, they did so more quickly when the faces had been preceded by old objects rather than new ones.

This was interpreted as indicating that the preceding experience affects how well related information is integrated during encoding.

What all this suggests is that the memory activities you've just engaged in bias your brain toward the same sort of activities — so whether you notice changes to a café or instead nostalgically recall a previous meal may depend on whether you noticed anyone you knew as you walked down the street!

An interesting speculation by the researchers is that such a memory bias (which only lasts a very brief time) might be an adaptive mechanism, reflecting the usefulness of being more sensitive to changes in new environments and less sensitive to irregularities in familiar environments.

Boost creativity by living abroad

August, 2012

Support for previous findings associating study abroad with increased creativity comes from a study comparing those who studied abroad with those who plan to, and those with no such intentions.

A couple of years ago I briefly reported on a finding that students who had lived abroad demonstrated greater creativity, if they first recalled a multicultural learning experience from their life abroad. A new study examines this connection, in particular investigating the as-yet-unanswered question of whether students who studied abroad were already more creative than those who didn’t.

The study involved 135 students of whom 45 had studied abroad, 45 were planning to do so, and 45 had not and were not planning to. The groups did not differ significantly in terms of age, gender, or ethnicity, and data from a sample (a third of each group) revealed no differences in terms of GPA and SAT scores. Creativity was assessed using the domain-general Abbreviated Torrance Test for Adults (ATTA) and the culture-specific Cultural Creativity Task (CCT).

Those in the Study Abroad group scored significantly higher on the CCT than those in the other two groups, who didn’t differ from each other. Additionally, those in the Study Abroad group scored significantly higher on the ATTA than those in the No Plan to Study group (those in the Plan to Study group were not significantly different from either of the other two groups).

It seems clear, then, that the findings of earlier studies are indeed ‘real’ (students who study abroad really do come home more creative than before they went) and not a product of self-selection (more creative students are more likely to travel). But the difference between the two creativity tests needs some explanation.

There is a burning issue in creativity research: is creativity a domain-general attribute, or a domain-specific one? This is not a pedantic, theoretical question! If you’re ‘creative’, does that mean you’re equally creative in all areas, or just in specific areas? Or (more likely, it seems to me) is creativity both domain-general and domain-specific?

The ATTA, as I said, measures general creativity. It does so through three 3-minute tasks: identify the troubles you might have if you could walk on air or fly (without benefit of vehicle); draw a picture using two incomplete figures (provided); draw pictures using 9 identical isosceles triangles.

The CCT has five 3-minute tasks that target culturally relevant knowledge and skills. In each case, participants are asked to give as many ideas as they can in response to a specific scenario: getting more foreign tourists to visit America; the changes that would result if you woke up with a different skin color; demonstrating high social status; developing new dishes using exotic ingredients; creating a product with universal appeal.

The findings would seem to support the idea that creativity has both general and specific elements. The greater effect of studying abroad on CCT scores (compared to ATTA scores) also seems to me to be consistent with the finding I cited at the beginning — that, to get the benefit, students needed to be reminded of their multicultural experiences. In this case, the CCT scenarios would seem to play that role.

It does of course make complete sense that living abroad would have positive benefits for creativity. Creativity is about not following accustomed ruts in one’s thoughts. Those ruts are not simply generated within our own mind (as we get older, our ruts tend to get deeper), but are products of our relationship with our society. Think of clichés. The more we follow along with accustomed language and thought patterns of our group, the less creative we will be. One way to break (or at least broaden) this, is to widen our groups — by, for example, mixing in diverse circles, or by living abroad.

Interestingly, another recent study (pdf link to paper) reckons that social rejection (generally regarded as a bad thing) can make some people more creative — if they’re independent types who take pride in being different from others.

Reference: 

Lee, C. S., Therriault, D. J., & Linderholm, T. (2012). On the cognitive benefits of cultural experience: Exploring the relationship between studying abroad and creative thinking. Applied Cognitive Psychology. Advance online publication. doi:10.1002/acp.2857

Kim, S. H., Vincent, L. C., & Goncalo, J. A. (in press). Outside advantage: Can social rejection fuel creative thought? Journal of Experimental Psychology: General.

Negative stereotypes about aging affect how well older adults remember

March, 2012

Another study has come out supporting the idea that negative stereotypes about aging and memory affect how well older adults remember. In this case, older adults reminded of age-related decline were more likely to make memory errors.

In the study, 64 older adults (60-74; average 70) and 64 college students were compared on a word recognition task. Both groups first took a vocabulary test, on which they performed similarly. They were then presented with 12 lists of 15 semantically related words. For example, one list could have words associated with "sleep," such as "bed," "rest," "awake," "tired" and "night" — but not the word "sleep". They were not told they would be tested on their memory of these; rather, they were asked to rate each word for pleasantness.

They then engaged in a five-minute filler task (a Sudoku) before a short text was read to them. For some, the text had to do with age-related declines in memory. These participants were told the experiment had to do with memory. For others, the text concerned language-processing research. These were told the experiment had to do with language processing and verbal ability.

They were then given a recognition test containing 36 of the studied words, 48 words unrelated to the studied words, and 12 words related to the studied words (e.g. “sleep”). After recording whether or not they had seen each word before, they also rated their confidence in that answer on an 8-point scale. Finally, they were given a lexical decision task to independently assess stereotype activation.

While young adults showed no effects from the stereotype manipulation, older adults were much more likely to falsely recognize related words that had not been studied if they had heard the text on memory. Those who heard the text on language were no more likely than the young adults to falsely recognize related words.

Note that there is always quite a high level of false recognition of such items: both young adults and older adults in the low-threat condition falsely recognized around half of the related lures, compared to around 10% of unrelated words. But in the high-threat condition, older adults falsely recognized 71% of the related words.

Moreover, older adults’ confidence was also affected. While young adults’ confidence in their false memories was unaffected by threat condition, older adults in the high-threat condition were more confident of their false memories than older adults in the low-threat condition.

The idea that older adults were affected by negative stereotypes about aging was supported by the results of the lexical decision task, which found that, in the high-threat condition, older adults responded more quickly to words associated with negative stereotypes than to neutral words (indicating that they were more accessible). Young adults did not show this difference.

Reference: 

Thomas, A. K., & Dubois, S. J. (2011). Reducing the burden of stereotype threat eliminates age differences in memory distortion. Psychological Science, 22(12), 1515-1517. doi:10.1177/0956797611425932

Errorless learning not always best for older brains

October, 2011

New evidence challenges the view that older adults learn best through errorless learning. Trial-and-error learning can be better if done the right way.

Following a 1994 study that found that errorless learning was better than trial-and-error learning for amnesic patients and older adults, errorless learning has been widely adopted in the rehabilitation industry. Errorless learning involves being told the answer directly, rather than repeatedly trying to answer the question and perhaps making mistakes. For example, in the 1994 study, participants in the trial-and-error condition could produce up to three errors in answer to the question "I am thinking of a word that begins with QU" before being told the answer was QUOTE; in contrast, participants in the errorless condition were simply told "I am thinking of a word that begins with QU and it is 'QUOTE'."

In a way, it is surprising that errorless learning should be better, given that trial-and-error produces much deeper and richer encoding, and a number of studies with young adults have indeed found an advantage for making errors. Moreover, it’s well established that retrieving an item leads to better learning than passively studying it, even when you retrieve the wrong item. This testing effect has also been found in older adults.

In another way, the finding is not surprising at all, because clearly the trial-and-error condition offers many opportunities for confusion. You remember that QUEEN was mentioned, for example, but you don’t remember whether it was a right or wrong answer. Source memory, as I’ve often mentioned, is particularly affected by age.

So there are good theoretical reasons for both positions regarding the value of mistakes, and there’s experimental evidence for both. Clearly it’s a matter of circumstance. One possible factor influencing the benefit or otherwise of error concerns the type of processing. Those studies that have found a benefit have generally involved conceptual associations (e.g. What’s Canada’s capital? Toronto? No, Ottawa). It may be that errors are helpful to the extent that they act as retrieval cues, and evoke a network of related concepts. Those studies that have found errors harm learning have generally involved perceptual associations, such as word stems and word fragments (e.g., QU? QUeen? No, QUote). These errors are arbitrary, produce interference, and don’t provide useful retrieval cues.

So this new study tested the idea that producing errors conceptually associated with targets would boost memory for the encoding context in which information was studied, especially for older adults who do not spontaneously elaborate on targets at encoding.

In the first experiment, 33 young adults (average age 21) and 31 older adults (average age 72) were shown 90 nouns presented in three different, intermixed conditions. In the read condition (designed to provide a baseline), participants read aloud a noun fragment presented without a semantic category (e.g., p_g). In the errorless learning (EL) condition, the semantic category was presented with the target word fragment (e.g., a farm animal: p_g), and the participants read aloud the category and their answer. The category and target were then displayed. In the trial-and-error learning (TEL) condition, the category was presented and participants were encouraged to make two guesses before being shown the target fragment together with the category. If participants guessed the target, the researchers changed it. Participants were then tested using a list of 70 words, of which 10 came from each of the study conditions, 10 were new unrelated words, and 30 were nontarget exemplars from the TEL categories. Those that the subject had guessed were labeled as learning errors; those that hadn't come up were labeled as related lures. In addition to an overall recognition test (press "yes" to any word you've studied and "no" to any new word), there were two tests that required participants to endorse items studied in the TEL condition and reject those studied in the EL condition, and vice versa.

The young adults did better than the older on every test. TEL produced better learning than EL, and both produced better learning than the read condition (as expected). The benefit of TEL was greater for older adults. This is in keeping with the idea that generating exemplars of a semantic category, as occurs in trial-and-error learning, helps produce a richer, more elaborated code, and that this is of greater benefit to older adults, who are less inclined to do this without encouragement.

There was a downside, however. Older adults were also more prone to falsely endorsing prior learning errors or semantically-related lures. It’s worth noting that both groups were more likely to falsely endorse learning errors than related lures.

But the main goal of this first experiment was to disentangle the contributions of recollection and familiarity to the two types of learning. It turns out that there was no difference between young and older adults in terms of familiarity; the difference in performance between the two groups stemmed from recollection. Recollection was a problem for older adults in the errorless condition, but not in the trial-and-error condition (where the recollective component of their performance matched that of young adults). This deficit is clearly closely related to age-related deficits in source memory.

It was also found that familiarity was marginally more important in the errorless condition than the trial-and-error condition. This is consistent with the idea that targets learned without errors acquire greater fluency than those learned with errors (with the downside that they don’t pick up those contextual details that making errors can provide).

In the second experiment, 15 young and 15 older adults carried out much the same procedure, except that during the recognition test participants were also asked about the context in which the words had been learned (that is, whether the words were learned through trial-and-error or not).

Once again, trial-and-error learning was associated with better source memory relative to errorless learning, particularly for the older adults.

These results support the hypothesis that trial-and-error learning is more beneficial than errorless learning for older adults when the trials encourage semantic elaboration. But another factor may also be involved. Unlike other errorless studies, participants were required to attend to errors as well as targets. Explicit attention to errors may help protect against interference.

In a similar way, a recent study involving young adults found that feedback given in increments (thus producing errors) is more effective than feedback given all at once in full. Clearly what we want is to find that balance point, where elaborative benefits are maximized and interference is minimized.

Reference: 

Cyr, A.-A., & Anderson, N. D. (2011). Trial-and-error learning improves source memory among young and older adults. Psychology and Aging. Advance online publication.

Ability to remember memories' origin develops slowly

October, 2011

A study comparing the brains of children, adolescents, and young adults has found that the ability to remember the origin of memories is slow to mature. As with older adults, impaired source memory increases susceptibility to false memories.

In the study, 18 children (aged 7-8), 20 adolescents (13-14), and 20 young adults (20-29) were shown pictures and asked to decide whether each was a new picture or one they had seen earlier. Some of the pictures were of known objects and others were fanciful figures (this was in order to measure the effects of novelty in general). After a 10-minute break, they resumed the task — with the twist that any picture that had appeared in the first session should be judged "new" if this was its first appearance in the second session. EEG measurements (event-related potentials — ERPs) were taken during the sessions.

ERPs at the onset of a test stimulus (each picture) are different for new and old (repeated) stimuli. Previous studies have established various old/new effects that reflect item and source memory in adults. In the case of item memory, recognition is thought to be based on two processes — familiarity and recollection — which are reflected in ERPs of different timings and locations (familiarity: mid-frontal at 300-500 msec; recollection: parietal at 400-700 msec). Familiarity is seen as a fast assessment of similarity, while recollection varies according to the amount of retrieved information.

Source memory appears to require control processes that involve the prefrontal cortex. Given that this region is the slowest to mature, it would not be surprising if source memory is a problematic memory task for the young. And indeed, previous research has found that children do have particular difficulty in sourcing memories when the sources are highly similar.

In the present study, children performed more poorly than adolescents and adults on both item memory and source memory. Adolescents performed more poorly than adults on item memory but not on source memory. Children performed more poorly on source memory than item memory, but adolescents and adults showed no difference between the two tasks.

All groups responded faster to new items than old, and ERP responses to general novelty were similar across the groups — although children showed a left-frontal focus that may reflect a transition from an analytic to a more holistic processing approach.

ERPs to old items, however, showed a difference: for adults, they were especially pronounced at frontal sites, and occurred at around 350-450 msec; for children and adolescents they were most pronounced at posterior sites, occurring at 600-800 msec for children and 400-600 msec for adolescents. Only adults showed the early midfrontal response that is assumed to reflect familiarity processing. On the other hand, the late old/new effect occurring at parietal sites and thought to reflect recollection, was similar across all age groups. The early old/new effect seen in children and adolescents at central and parietal regions is thought to reflect early recollection.

In other words, only adults showed the brain responses typical of familiarity as well as recollection. Now, some research has found evidence of familiarity processing in children, so this shouldn’t be taken as proof against familiarity processing in the young. What seems most likely is that children are less likely to use such processing. Clearly the next step is to find out the factors that affect this.

Another interesting point is the early recollective response shown by children and adolescents. It’s speculated that these groups may have used more retrieval cues — conceptual as well as perceptual — that facilitated recollection. I’m reminded of a couple of studies I reported on some years ago, that found that young children were better than adults on a recognition task in some circumstances — because children were using a similarity-based process and adults a categorization-based one. In these cases, it had more to do with knowledge than development.

It’s also worth noting that, in adults, the recollective response was accentuated in the right-frontal area. This suggests that recollection was overlapping with post-retrieval monitoring. It’s speculated that adults’ greater use of familiarity produces a greater need for monitoring, because of the greater uncertainty.

What all this suggests is that preadolescent children are less able to strategically recollect source information, and that strategic recollection undergoes an important step in early adolescence that is probably related to improvements in cognitive control. But this process is still being refined in adolescents, in particular as regards monitoring and coping with uncertainty.

Interestingly, source memory is also one of the areas affected early in old age.

Failure to remember the source of a memory has many practical implications, in particular in the way it renders people more vulnerable to false memories.

Visual perception - a round-up of recent news

July, 2011

Memory begins with perception. Here's a round-up of recent research into visual perception.

Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn while volunteers viewed brief flashes of object pairs (half of which were interacting with each other) and decided whether these glimpsed objects matched a presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical regardless of whether participants were looking at a color photo or a simple line drawing. That is, just from the pattern of brain activity in the ventral visual cortex, researchers could tell, with a fair amount of success, what category of scene the participant was looking at — whether the picture was a photo or a drawing. When they made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.
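For readers curious how this kind of 'decoding' works in practice, here is a minimal sketch of the general idea: train a linear classifier on activity patterns evoked by one kind of stimulus (photos) and test whether it can classify patterns evoked by the other kind (line drawings). The voxel data below are simulated, and the code is only an illustration of the technique, not the researchers' actual analysis.

```python
# A minimal sketch (not the researchers' analysis) of decoding scene category
# from voxel activity patterns: train a linear classifier on photo-evoked
# patterns, then test it on line-drawing-evoked patterns. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_categories, n_trials, n_voxels = 6, 30, 200   # 6 scene categories, as in the study

# Each category gets a characteristic "voxel pattern"; individual trials add noise to it.
prototypes = rng.normal(size=(n_categories, n_voxels))

def simulate(noise_sd):
    X, y = [], []
    for c in range(n_categories):
        X.append(prototypes[c] + noise_sd * rng.normal(size=(n_trials, n_voxels)))
        y.extend([c] * n_trials)
    return np.vstack(X), np.array(y)

X_photos, y_photos = simulate(noise_sd=1.0)       # patterns evoked by color photos
X_drawings, y_drawings = simulate(noise_sd=1.2)   # patterns evoked by line drawings

clf = LogisticRegression(max_iter=1000).fit(X_photos, y_photos)
print("photo-trained classifier, tested on drawings:",
      clf.score(X_drawings, y_drawings))          # accuracy well above chance (1/6)
```

The point of the cross-training step is the same as in the study: if a classifier trained on photos succeeds on drawings, the two kinds of image must be driving largely the same pattern of activity.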

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it's one of the most important visual tasks we do. It's also (not surprisingly, considering its evolutionary importance) something we are very, very good at. In fact, a new study reveals that we're pretty near optimal.

Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — that of the reliability of the sensory information. Optimally, an observer would take into consideration the varying reliability of the items, giving the information different weights as a result of that perceived reliability. That weighted information would then be combined according to a specific integration rule. That had been calculated as the optimal process, and the performance of the participants matched that expectation.

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we can very quickly integrate information coming from multiple locations, weighting each piece of information by its reliability, and that we do this by combining the responses of different groups of neurons, each of which responds to different bits of information.
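To make the idea of reliability weighting concrete, here is a toy illustration (my own sketch, not the study's model): several noisy 'measurements' of the same quantity are combined, each weighted by its reliability (the inverse of its variance), which is the statistically optimal way to pool independent noisy evidence. All the numbers are made up.

```python
# A toy illustration (not the study's model) of reliability-weighted integration:
# noisy measurements from several locations are pooled, each weighted by its
# reliability (inverse variance) — the optimal rule for independent Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)
true_value = 10.0                            # the quantity being estimated (made up)
sigmas = np.array([2.0, 8.0, 4.0, 1.0])      # per-location noise; low contrast = high sigma

measurements = true_value + sigmas * rng.normal(size=sigmas.size)

reliability = 1.0 / sigmas**2                # more reliable measurements get more weight
weights = reliability / reliability.sum()
weighted_estimate = weights @ measurements
unweighted_estimate = measurements.mean()    # what you'd get by ignoring reliability

print(f"reliability-weighted estimate: {weighted_estimate:.2f}")
print(f"unweighted average:            {unweighted_estimate:.2f}")
```

Run repeatedly, the weighted estimate stays closer to the true value on average than the simple average, because it discounts the noisiest (lowest-contrast) locations.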

Another recent study into visual search has found that, when people were preparing themselves to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 in more specific imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards. Prospective mates; food; liquids.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli that have been previously associated with reward continue to capture attention regardless of their relevance to the task in hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
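As an illustration of the general approach — fit a model that maps image features to memorability scores, then test it on images it hasn't 'seen' — here is a minimal sketch. The features, the scores, and the choice of regressor are all stand-ins, not the study's actual pipeline.

```python
# A minimal sketch of predicting memorability from image features: fit a
# regressor on one set of images, test on held-out images. Features, scores,
# and the regressor are all stand-ins, not the study's actual pipeline.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_images, n_features = 500, 20               # e.g. color, edge-distribution, object features

X = rng.normal(size=(n_images, n_features))
true_weights = rng.normal(size=n_features)
memorability = X @ true_weights + rng.normal(scale=2.0, size=n_images)  # simulated hit rates

X_train, X_test, y_train, y_test = train_test_split(X, memorability, random_state=0)
model = Ridge().fit(X_train, y_train)

rho, _ = spearmanr(model.predict(X_test), y_test)
print(f"rank correlation with held-out memorability: {rho:.2f}")
```

The held-out rank correlation is the key check: it shows the model generalizes to new images rather than merely memorizing the ones it was trained on.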

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

Reference: 

Kim, J. G., Biederman, I., & Juan, C.-H. (2011). The benefit of object interactions arises in the lateral occipital cortex independent of attentional modulation from the intraparietal sulcus: A transcranial magnetic stimulation study. The Journal of Neuroscience, 31(22), 8320-8324.

Walther, D. B., Chai, B., Caddigan, E., Beck, D. M., & Fei-Fei, L. (2011). Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences, 108(23), 9661-9666.

Ma, W. J., Navalpakkam, V., Beck, J. M., van den Berg, R., & Pouget, A. (2011). Behavior and neural basis of near-optimal visual search. Nature Neuroscience, 14(6), 783-790.

Peelen, M. V., & Kastner, S. (2011). A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences, 108(29), 12125-12130.

Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367-10371.

Isola, P., Xiao, J., Oliva, A., & Torralba, A. (2011). What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.

People are poor at predicting their learning

April, 2011

A series of online experiments demonstrate that beliefs about memory, judgments of how likely you are to remember, and actual memory performance, are all largely independent of each other.

Research has shown that people are generally poor at predicting how likely they are to remember something. A recent study tested the theory that the reason we’re so often inaccurate is that we make predictions about memory based on how we feel while we're encountering the information to be learned, and that can lead us astray.

In three experiments, each involving about 80 participants ranging in age from late teens to senior citizens, participants were serially shown words in large or small fonts and asked to predict how well they'd remember each (actual font sizes depended on the participants’ browsers, since this was an online experiment and participants were in their own homes, but the larger size was four times larger than the other).

In the first experiment, each word was presented either once or twice, and participants were told if they would have another chance to study the word. The length of time the word was displayed on the first occasion was controlled by the participant. On the second occasion, words were displayed for four seconds, and participants weren’t asked to make a new prediction. At the end of the study phase, they had two minutes to type as many words as they remembered.

Recall was significantly better when an item was seen twice. Recall wasn’t affected by font size, but participants were significantly more likely to believe they’d recall those presented in larger fonts. While participants realized seeing an item twice would lead to greater recall, they greatly underestimated the benefits.

Because people so grossly discounted the benefit of a single repetition, in the next experiment the comparison was between one and four study trials. This time, participants gave more weight to having three repetitions versus none, but nevertheless, their predictions were still well below the actual benefits of the repetitions.

In the third experiment, participants were given a simplified description of the first experiment and either asked what effect they'd expect font size to have, or what effect having two study trials would have. The results (similar levels of belief in the benefits of each condition) resembled neither the results of the first experiment (indicating that those people's predictions hadn't been made on the basis of their beliefs about memory effects) nor the actual performance (demonstrating that people really aren't very good at predicting their memory performance).

These findings were confirmed in a further experiment, in which participants were asked about both variables (rather than just one).

The findings confirm other evidence that (a) general memory knowledge tends to be poor, (b) personal memory awareness tends to be poor, and (c) ease of processing is commonly used as a heuristic to predict whether something will be remembered.

Addendum: A nice general article on this topic by the lead researcher Nate Kornell has just come out in Miller-McCune.

Reference: 

Kornell, N., Rhodes, M. G., Castel, A. D., & Tauber, S. K. (in press). The ease of processing heuristic and the stability bias: Dissociating memory, memory beliefs, and memory judgments. Psychological Science.

Encoding features of complex and unfamiliar objects

Journal Article: 

Modigliani, V., Loverock, D.S. & Kirson, S.R. (1998). Encoding features of complex and unfamiliar objects. American Journal of Psychology, 111, 215-239.

  • We don't store in memory every detail of common objects.
  • Repeated exposures to an object don't necessarily result in remembering any more about it.

There is a pervasive myth that every detail of every experience we've ever had is recorded in memory. It is interesting to note, therefore, that even very familiar objects, such as coins, are rarely remembered in accurate detail [1].

We see coins every day, but we don't see them. What we remember about coins are global attributes, such as size and color, not the little details, such as which way the head is pointing, what words are written on it, etc. Such details are apparently noted only if the person's attention is specifically drawn to them.

There are several interesting conclusions that can be drawn from studies that have looked at the normal encoding of familiar objects:

  • you don't automatically get more and more detail each time you see a particular object
  • only a limited amount of information is extracted the first time you see the object
  • the various features aren't equally important
  • normally, global rather than detail features are most likely to be remembered

In the present study, four experiments investigated people's memories for drawings of oak leaves. Two different types of oak leaves were used - "red oak" and "white oak". Subjects were shown two drawings for either 5 or 60 seconds. The differences between the two oak leaves varied, either:

  • globally (red vs white leaf), or
  • in terms of a major feature (the same type of leaf, but varying in that two major lobes are combined in one leaf but not in the other), or
  • in terms of a minor feature (one small lobe eliminated in one but not in the other).

According to the principle of top-down encoding, the time needed to detect a difference between stimuli that differ in only one critical feature will increase as the level of that feature decreases (from a global to a major specific to a lower-grade specific feature).

The results of this study supported the view that top-down encoding occurs, and indicate that, unless attention is explicitly directed to specific features, the likelihood of encoding such features decreases the lower their structural level. One of the experiments tested whether the size of the feature made a difference, and found that it didn't.

References

1. Jones, G.V. 1990. Misremembering a familiar object: When left is not right. Memory & Cognition, 18, 174-182.

Jones, G.V. & Martin, M. 1992. Misremembering a familiar object: Mnemonic illusion, not drawing bias. Memory & Cognition, 20, 211-213.

Nickerson, R.S. & Adams, M.J. 1979. Long-term memory of a common object. Cognitive Psychology, 11, 287-307.
