The Ever-Elusive Engram

Since the beginning of the last century, researchers have sought to explain the neurobiological basis of memory.  Everything we do, everything we think and feel is rooted in neural mechanisms, and this necessarily includes memory.  But the engram – the hypothesised neural trace of a memory – is notoriously hard to pin down.

Much of the reason for this is that memory is so broad that it’s difficult to come up with examples of brain function that don’t implicate it in some way.  This means that vast networks of the brain are going to be involved in memory in some fashion.  What we do know (fairly unhelpfully) is that the engram isn’t stored in one particular location, or even in a few locations.  The hippocampus is famed as the seat of memory, but that’s only a small part of the story; the Wikipedia page for the neuroanatomy of memory lists the hippocampus, cerebellum, amygdala, basal ganglia, frontal lobe, temporal lobe, parietal lobe and occipital lobe as brain regions associated with memory function.  Well, that’s pretty much every part of the brain that contributes to cognition… at all.

The first hint that it might be neural networks, rather than discrete lumps of brain tissue, that hold the key to the engram came from an experiment done by Lashley in 1929.  He put rats in a series of mazes of varying difficulty and gave them enough time to learn the layout (rats are very good at this).  He then made lesions in their brains to see what would happen.  What he discovered was that it didn’t matter where he made the lesion: the rat would still retain some memory of how to navigate the maze.  What did make a difference was how big the lesion was – the higher the percentage of cortical tissue he removed, the more errors the rat made (an error being a wrong turn).  This study was revolutionary for the then-young science of neuropsychology because of its implications for how the brain is organised.  There aren’t little centres for different cognitive faculties; brain function is spread over often vast networks in the brain.

(In fact, if you were to repeat Lashley’s experiment more precisely, you would find that there are subtle differences and variations in severity when you cut out different parts of the cortex.  Lashley’s error was making his lesions so big that each one damaged several cortical areas involved in memory and spatial navigation.  In any case, if you disrupt one form of memory implicated in the maze task – say, navigation by smell – this might be compensated for by using a different method, like sight.)
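Lashley’s finding has a neat computational analogue: in an artificial network that stores a pattern across many units (a Hopfield-style autoassociative memory), deleting units degrades recall gradually, in rough proportion to lesion size, rather than abolishing the memory at one critical spot.  Here is a minimal Python sketch of that graceful degradation; the network, the numbers and the lesioning procedure are illustrative inventions, not a model of Lashley’s actual data:

```python
import random

# Hypothetical toy network (not Lashley's actual data): a single "memory"
# stored as a distributed pattern over many model neurons, recalled from a
# noisy cue.  Silencing a random fraction of neurons degrades recall
# gradually, in proportion to lesion size.

random.seed(1)

N = 200                                                # model neurons
pattern = [random.choice([-1, 1]) for _ in range(N)]   # the stored memory

# Hebbian outer-product weights: each pair of co-firing neurons is linked.
W = [[pattern[i] * pattern[j] / N if i != j else 0.0 for j in range(N)]
     for i in range(N)]

def recall(cue, lesion_frac):
    """One update step, with a random fraction of neurons lesioned (silenced)."""
    lesioned = set(random.sample(range(N), int(lesion_frac * N)))
    out = []
    for i in range(N):
        if i in lesioned:
            out.append(0)            # a dead unit contributes nothing
            continue
        h = sum(W[i][j] * cue[j] for j in range(N) if j not in lesioned)
        out.append(1 if h >= 0 else -1)
    return out

# Cue: the stored pattern with roughly 20% of its elements flipped at random.
cue = [x if random.random() > 0.2 else -x for x in pattern]

results = {}
for frac in (0.0, 0.3, 0.6):
    out = recall(cue, frac)
    overlap = sum(a == b for a, b in zip(out, pattern)) / N
    results[frac] = overlap
    print(f"lesion {frac:.0%}: {overlap:.0%} of the pattern recovered")
```

With no lesion the noisy cue is cleaned up almost perfectly; removing more units lowers the fraction of the pattern recovered roughly in step with the lesion size, echoing Lashley’s errors-versus-lesion-size result.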

OK, so memory is mediated by networks of neurons.  How does this happen?  How do you store a memory?  We don’t fully know, but there are some mechanisms that can enlighten us.  One of them is long-term potentiation, a modification of synapse strength.  Synapses are the chemical junctions between neurons.  Essentially, they are tiny gaps between brain cells: one neuron releases neurotransmitters (chemicals like serotonin) into the gap, these bind to receptors on the next neuron, and that neuron then generates an electrical impulse.  When the impulse reaches the other end of that neuron, it releases neurotransmitter into the next synapse, and so on.

In 1966, a Norwegian researcher called Terje Lømo discovered that if you repeatedly stimulate a neural pathway with bursts of electricity at a high frequency, the synapses in that pathway become more efficient – it takes less effort to transmit the signal the next time, because the resulting electrical potential in the receiving neuron is around 50% bigger.  He did this on neurons of the perforant pathway, which projects into the hippocampus and is associated with memory.  In later experiments on unanaesthetised rabbits, the potentiation lasted up to 16 weeks.
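The basic effect can be caricatured in a few lines of Python.  In this hypothetical toy model, a synapse’s “strength” scales the postsynaptic response it evokes, and a brief high-frequency tetanus nudges that strength upward with every pulse, so the same test pulse afterwards evokes a response around 50% larger.  The class, pulse count and per-pulse increment are made-up illustrative numbers, not physiological values:

```python
# Hypothetical toy model of potentiation (illustrative numbers only): a
# synapse's strength scales the postsynaptic potential it evokes, and a
# brief high-frequency tetanus strengthens it a little with every pulse.

class Synapse:
    def __init__(self, strength=1.0):
        self.strength = strength

    def test_pulse(self):
        """Postsynaptic potential (arbitrary units) evoked by one test pulse."""
        return self.strength

    def tetanus(self, pulses=100, gain_per_pulse=0.004):
        """High-frequency stimulation: each pulse strengthens the synapse."""
        for _ in range(pulses):
            self.strength *= 1 + gain_per_pulse

syn = Synapse()
before = syn.test_pulse()
syn.tetanus()                 # one burst of high-frequency stimulation
after = syn.test_pulse()
print(f"response grew by {after / before - 1:.0%}")
```

With these invented parameters the same test pulse evokes a response roughly 50% larger after the tetanus, mimicking the size of the effect Lømo measured.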

External events are represented in the brain as spatiotemporal patterns of neural activation.  That is the basic premise of neuropsychology: everything we experience comes down to these neural patterns.  If an experience is to leave a lasting trace, something physical in those networks must change, and the synapse is the prime candidate – all learning and memory must be underpinned by synaptic changes, otherwise nothing in the brain would be altered by our experiences, and that would mean we hadn’t remembered anything.  Lømo’s discovery was the first empirical support for this, and we’ve learnt more since about how synapses can alter their strength.

Here comes the complicated bit, so pay attention.  There is a general rule in neuroscience that neurons that fire together, wire together – this is the basis of Hebbian learning.  As the intensity of the input increases, both the probability that long-term potentiation will occur and the magnitude of the LTP increase.  There is an intensity threshold which must be reached before any synaptic change happens, and beyond this threshold higher intensities produce greater LTP.  What’s the significance of this?  It means that a minimum number of synapses must be coactive – fire together – in order for LTP to occur.  This is how particular neural circuits begin to “form” a memory: given the right kind of input, like the neural processing of seeing something that has been a threat to you in the past, that particular circuit of neurons fires more easily than it would have done if you didn’t have such a memory.  Effectively, your brain is replaying the information.  That’s what the memory trace is; that’s the mysterious engram.
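The threshold-and-cooperativity rule described above can be sketched in a few lines of Python.  In this hypothetical model, a neuron potentiates its active synapses only when enough of them fire together, and the size of the boost grows with the number of coactive synapses; the threshold and learning rate are invented purely for illustration:

```python
# Hypothetical sketch of the cooperativity rule (threshold and learning
# rate are invented for illustration): active synapses are potentiated
# only when enough of them fire together, and stronger coactivity gives
# a bigger boost.

THRESHOLD = 5          # minimum number of coactive synapses for any LTP
LEARNING_RATE = 0.02   # extra boost per coactive synapse above threshold

def hebbian_update(weights, active):
    """Potentiate the active synapses if coactivity reaches the threshold."""
    n_coactive = sum(active)
    if n_coactive < THRESHOLD:
        return list(weights)                 # sub-threshold input: no change
    boost = 1 + LEARNING_RATE * (n_coactive - THRESHOLD + 1)
    return [w * boost if a else w for w, a in zip(weights, active)]

weights = [1.0] * 10                         # ten synapses, equal strengths

weak = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]       # 3 coactive: below threshold
strong = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]     # 8 coactive: above threshold

print(hebbian_update(weights, weak) == weights)   # sub-threshold: unchanged
potentiated = hebbian_update(weights, strong)
print(potentiated[0] > weights[0], potentiated[9] == weights[9])
```

After the strong, supra-threshold input, only the synapses that were coactive end up stronger; the inactive ones are untouched.  It is that selective strengthening of one particular circuit that makes its pattern easier to replay later on.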

