
Review of "Memory's Penumbra: Episodic Memory Decisions Induce Lingering Mnemonic Biases"

Memory can be biased - here's how...

As an aside, I always like to see the authors of the papers - that way, if I should ever see them at a conference, I can capitalize on my terrible source memory and approach them saying "don't I know you from somewhere?".  It also helps attribute recognition where it is due.

The authors: Katherine Duncan, Arhanti Sadanand, Lila Davachi
I attended a talk given by Katherine Duncan at the Rotman Research Institute (where I work).  The talk was largely based on the findings from the Science paper that came out in July.  The talk was great - and just as importantly, so are the paper and its findings.

The idea comes from neuroscience, yet the paper itself is entirely behavioral - something some of the senior faculty liked: it is still possible to get a psychology/neuroscience paper into Science without resorting to newfangled methods like fMRI.  Even without fMRI, the paper is pretty neat.

The basic premise is that since neuromodulators such as acetylcholine work over the span of a few seconds, it is possible that their effects form a "penumbra" or period of influence that might extend into a subsequent task.

Using this idea, the authors adapted Craig Stark's Behavioural Pattern Separation Object task (I was pleasantly surprised to find it is freely available on the author's website), which consists of a familiarization phase, followed by a test phase in which new objects, old objects, and objects similar to previously presented objects are shown (see figure below).

The critical manipulation was whether the object preceding the similar trial was new or old.  The logic is that an "old" object activates memory and biases the respondent towards pattern completion, which makes it harder to distinguish a similar object from its predecessor.  Alternatively, a "new" object should bias the respondent towards pattern separation, making it easier to distinguish the similar object from its predecessor.  Thus the critical measure was accuracy on the similar trial.
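To make that logic concrete, here is a minimal sketch (in Python with pandas; this is not the authors' analysis code, and the trial list and column names are hypothetical) of how accuracy on similar (lure) trials could be conditioned on the type of the immediately preceding trial:

import pandas as pd

# Hypothetical trial-by-trial test data: one row per trial.
trials = pd.DataFrame({
    "trial_type": ["old", "similar", "new", "similar", "old", "similar"],
    "correct":    [True,  False,     True,  True,      True,  False],
})

# Tag each trial with the type of the trial that preceded it.
trials["preceding_type"] = trials["trial_type"].shift(1)

# Keep only the similar (lure) trials.
lures = trials[trials["trial_type"] == "similar"]

# Critical measure: lure accuracy split by preceding trial type.
print(lures.groupby("preceding_type")["correct"].mean())

The penumbra account predicts higher lure accuracy following "new" trials than following "old" trials.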

Adapted from Duncan et al., Science, 2012.
There is a second experiment in this paper (where the authors show the converse, namely that a preceding "old" trial helps with pattern completion), but I will not cover it here save to say that it nicely complements the initial experiment.


From a practical, real-world point of view, what does this mean?  I guess it means that if you had just been asked to recall something and then saw someone who looked very similar to someone you knew, you might mistake that person for your acquaintance - at least for a few seconds.  Conversely, if you had just been asked to examine a new object analytically, the opposite might hold true.


This paper is worth the read and is rigorous from a theoretical and scientific standpoint, suggesting many ways to expand on the basic findings.  Coming from an inhibitory theory background, my first thought is: can this phenomenon be extinguished by inhibition?  If so, what are the consequences?  I have a few ideas I may want to try out...




Duncan, K., Sadanand, A., & Davachi, L. (2012). Memory's Penumbra: Episodic Memory Decisions Induce Lingering Mnemonic Biases. Science, 337(6093), 485–487. doi:10.1126/science.1221936
