INFOGRAPHIC: 3 Fundamental Laws of Neuroscience


3 Fundamental Laws of Neuroscience

Further reading

Hebb’s law: https://brainracker.wordpress.com/2011/05/25/the-ever-elusive-engram/

Weber’s law: http://highered.mheducation.com/sites/0073382736/student_view0/perception/weber_s_law__/index.html

Bayes’ law: http://www.cut-the-knot.org/Probability/BayesTheorem.shtml

Are there any fundamental laws missing?

Why does water feel wet?


Ever had the strange feeling of being convinced you’re sitting on something wet, only to discover it’s just cold – not damp?  A new paper has just explained why this happens, with a new model of wetness perception.

Wetness perception has been something of a mystery.  Although the skin has dedicated receptor cells for detecting a range of different attributes in the environment – thermoreceptors for sensing temperature, mechanoreceptors for sensing pressure, and so on – there is no specific receptor for detecting how wet something is.  A mystery, that is, until a PhD student at Loughborough University began investigating it.

I like this paper because it’s an incredibly neat explanation of something so everyday that you don’t normally think about it.  I’m quite amazed that it appears wetness perception in humans has more or less escaped scientific scrutiny until now.

The model describes wetness perception as the result of “complex multisensory integration” of thermal and tactile inputs to the skin: if it’s cold and clammy, it’s probably wet.  So to demonstrate this, they altered thermal perception and mechanosensory perception in a few different ways, to work out how these inputs contribute to wetness perception.

The basis of their experiment was bringing a series of wet cotton stimuli, which varied in temperature, into contact with participants’ skin.  All the stimuli were carefully prepared to ensure they had exactly the same moisture content.

The first finding was that when participants were asked to rate how wet each stimulus felt, the cold, wet cotton was reliably perceived to be wetter than more temperate wet cotton.  Secondly, when the cotton was rubbed against their skin – providing mechanosensory stimulation from the movement as well as the thermal input – their ratings of wetness were more accurate than when the stimuli were held still against the skin.  So, very broadly, thermal and mechanosensory information are both important.

This gave them a nice theoretical starting point for how we combine different types of sensory inputs to create a sensation of wetness.  Next, they needed to work out the mechanics.  So the next stage was to delve further into the biology by a) seeing what happens when the activity of A-nerve fibres, which carry thermal and tactile information to the spinal cord, is suppressed; and b) comparing the wetness sensitivity of two different types of skin – the forearm, which has better thermal sensitivity, and the fingerpad, which has better tactile sensitivity.  From both of these methods, they determined that coldness was the biggest influence on overall wetness perception, with mechanosensory information playing a supplementary role.

Based on this behavioural data, they propose a Bayesian neurophysiological model of wetness perception, in which activity from Aδ-nerve fibres (which respond to cold) and Aβ fibres (which respond to pressure) is combined to produce a rational estimate of wetness.
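To make that concrete, here is a minimal sketch of Bayesian (reliability-weighted) cue combination – not the authors’ actual model, just the textbook version under Gaussian assumptions, with made-up numbers.  Each cue is a noisy estimate of wetness, and the combined estimate weights each cue by its inverse variance, so a reliable cold signal dominates a noisier mechanical one.

```python
def combine_cues(cold_cue, cold_var, mech_cue, mech_var):
    """Reliability-weighted (Bayesian) combination of two noisy cues.

    Each cue is a noisy estimate of wetness on an arbitrary scale; its
    variance encodes how unreliable it is.  Under Gaussian assumptions the
    optimal combined estimate weights each cue by its inverse variance.
    """
    w_cold, w_mech = 1.0 / cold_var, 1.0 / mech_var
    estimate = (w_cold * cold_cue + w_mech * mech_cue) / (w_cold + w_mech)
    combined_var = 1.0 / (w_cold + w_mech)   # the combined estimate is also less noisy than either cue
    return estimate, combined_var

# Hypothetical values: a reliable cold signal plus a noisier mechanical signal.
# The combined wetness estimate sits much closer to the cold cue, echoing the
# finding that thermal input carries the most weight.
print(combine_cues(cold_cue=0.8, cold_var=0.05, mech_cue=0.4, mech_var=0.4))
```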

And, as they point out, it explains why you often don’t know that your nose is bleeding until you’ve touched it with your finger or looked in a mirror: the blood is at body temperature, so there’s no cold signal and you don’t sense the wetness.

References

Filingeri, D., Fournet, D., Hodder, S., & Havenith, G. (2014).  Why wet feels wet? A neurophysiological model of human cutaneous wetness sensitivity.  Journal of Neurophysiology, 112, 1457-1469.

Wonky Brains and Crooked Quirks


Your brain is not symmetrical: a horizontal MRI scan will show you that it’s a bit wonky – as if it’s been twisted anticlockwise slightly inside the skull (the technical term for this is Yakovlevian anticlockwise torque).  The right frontal lobe, as a result, is bigger and wider than the left and it often protrudes forwards beyond the left frontal lobe; while the left occipital lobe is wider than the right and protrudes further backwards.

Yakovlevian torque


The curious thing about these neuroanatomical asymmetries is that they are not random variations between individuals: these are distinct patterns in anatomical design that appear to have some advantage.  While most parts of your body – your eyes, your lungs, your arms – are more or less symmetrical with a few idiosyncratic exceptions, something in the development of your brain causes these systematic adjustments.  Indeed, patterns in hemispheric symmetry and asymmetry that deviate from this norm are associated with schizophrenia, dyslexia, and a number of other disorders.  Why?  What causes wonky brains?

Vocalisation is lateralised to the left hemisphere – from monkeys to marmosets to mice

Many of these asymmetries are closely related to one of the most uniquely human abilities: language.  In keeping with this twisted Yakovlevian appearance, the Sylvian fissure (which lies, roughly speaking, underneath the main auditory and language processing areas) is longer and less steep in the left hemisphere, making way for a larger planum temporale.  Language (or other kinds of vocalisation) is lateralised to the left hemisphere – not just in humans, but a variety of other animals from monkeys to marmosets to mice (as well as song birds and frogs).  In humans, the planum temporale is heavily implicated in auditory processing and the other structural differences between the left and right frontal lobes match the areas that are associated with language and speech.  Indeed, in most autistic children it is the right hemisphere that dominates speech processing (unlike most other people, who show left lateralisation), and this is reversed as they improve their language skills and the left hemisphere gradually comes to dominate.

Asymmetry affords flexibility

It’s reasonable to assume that these asymmetrical quirks have some reason for being – nature doesn’t deviate from a pattern without good reason.  While language lateralisation is the most studied, visuospatial ability, attention, music perception, and mathematical ability have all been found to be dominated by one hemisphere or the other.

The best reason is, simply, that asymmetry affords flexibility.  Having one part of the brain specialised for a particular task or function means that information can be processed more efficiently, freeing up other parts for other things – much the same way that modern computers use multiple processor cores to run tasks in parallel.  It also reduces the amount that the hemispheres interfere or conflict with each other, by granting dominance to one over the other.

Dadda and Bisazza (2006) demonstrated this by breeding two strains of the same species of fish, one with asymmetrical brain structure and the other with symmetrical brain structure.  When they were required to carry out one task (catching a shrimp), the two strains of fish performed equally well.  But when they were required to do two tasks simultaneously (catch a shrimp while avoiding a predator), those with symmetrical brains took twice as long to catch the shrimp.  Those with asymmetrical brains were barely affected by the distraction of the predator.
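To see why specialisation buys you time, here’s a loose sketch of the dual-core analogy in Python – not a model of the fish experiment, and the task durations are invented.  When each “hemisphere” handles one task, the two tasks run in parallel and finish in roughly half the time taken to do them one after the other.

```python
import threading
import time

def task(name, duration):
    # Stand-in for a cognitive task (e.g. catching shrimp, watching for a predator).
    time.sleep(duration)

def symmetrical_brain():
    # No specialisation: the same machinery handles both tasks, one after the other.
    start = time.time()
    task("catch shrimp", 0.5)
    task("watch for predator", 0.5)
    return time.time() - start

def lateralised_brain():
    # Each "hemisphere" specialises in one task, so the two can run in parallel.
    start = time.time()
    left = threading.Thread(target=task, args=("catch shrimp", 0.5))
    right = threading.Thread(target=task, args=("watch for predator", 0.5))
    left.start(); right.start()
    left.join(); right.join()
    return time.time() - start

print(f"symmetrical: {symmetrical_brain():.2f}s, lateralised: {lateralised_brain():.2f}s")
```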

Translate this into humans, and lateralisation for speech means that your left hemisphere can perform the function of holding a conversation while your right hemisphere can get on with other cognitive tasks.  As for some of the other asymmetrical quirks – the fact that Alzheimer’s disease progresses faster in the left hemisphere, for example; or that the left also dominates the perception and expression of positive emotions; or the observation that  many of these asymmetries are most pronounced in right-handers – the answer is not as obvious.


References

Dadda, M., & Bisazza, A. (2006).  Does brain asymmetry allow efficient performance of simultaneous tasks?  Animal Behaviour, 72, 523-529.

Further reading

Ehret, G. (2006).  Hemisphere dominance of brain function – which functions are lateralized and why?  In van Hemmen, J. L., & Sejnowski, T. J. (Eds.), 23 Problems in Systems Neuroscience.  New York: Oxford University Press.

Petty, R. G. (1999).  Structural asymmetries of the human brain and their disturbance in schizophrenia.  Schizophrenia Bulletin, 25, 121-139.

Toga, A. W., & Thompson, P. M. (2003).  Mapping brain asymmetry.  Nature Reviews Neuroscience, 4, 37-48.

Bird-brain: Why do pigeons bob their heads?


This is Bob.

This question occurred to me yesterday while I was waiting for a bus and spotted some pigeons nearby.  Their idiosyncratic, jerky, back-and-forth head movements sort of look like they’re continually pecking for invisible food suspended in the air – which is quite odd if you think about it, and must cause them to expend quite a bit of energy.

So why do they do it?

It turns out that the back-and-forth motion is actually an illusion.  This was first described by Dunlap & Mowrer (1930), who found that birds (specifically chickens, in this case) don’t jerk their heads backwards; in fact, they hold their heads still relative to their surroundings and let their bodies move forwards, before thrusting the head forwards to catch up with the body.  This is called the “thrust-hold cycle”.  (See picture: the head only appears to move backwards relative to the body because the body continues to move forwards while the head stays still.)

Thrust-hold cycle
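You can see how this adds up to an apparent backwards bob with a toy simulation of the thrust-hold cycle (the speeds and the catch-up threshold are made up).  The head’s position in the world never goes backwards, but its position relative to the body drifts backwards throughout each hold phase:

```python
# Toy thrust-hold cycle: the body moves at a constant speed, while the head
# alternates between "hold" (fixed in the world) and "thrust" (catching up).
body_speed, thrust_speed = 1.0, 4.0   # arbitrary units per time step
body = head = 0.0
phase = "hold"
for t in range(12):
    body += body_speed
    if phase == "hold":
        if body - head > 3.0:          # body has got far enough ahead: start a thrust
            phase = "thrust"
    else:
        head += thrust_speed
        if head >= body:               # head has caught up: hold again
            head, phase = body, "hold"
    print(f"t={t:2d}  body={body:5.1f}  head={head:5.1f}  head relative to body={head - body:5.1f}")
```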

It’s clear that this has something to do with their vision: pigeons don’t do it when they’re blindfolded and made to walk forwards, and they don’t do it when they’re walking on a treadmill (such that they remain stationary relative to their surroundings) – which means the behaviour only occurs when the visual scene is moving relative to the bird.

What’s happening is that they are keeping the image projected onto the retina still, to prevent it from blurring.  Humans have a similar stabilising reflex: try slowly scanning the room from left to right in a straight line, while keeping your head still.  It’s extremely difficult to do smoothly, because your eyes will keep making small, jerky saccadic movements.  But hold up a finger, slowly move it from left to right, and follow it with your gaze, and your eyes will fixate on the finger and move perfectly smoothly.  In the first instance the thing you’re looking at is the room, the retinal image of which changes as you move your eyeballs, so your visual system does its best to stabilise the image by keeping your eyes still for as long as possible and then jumping quickly to the next fixation.  In the second instance, however, the visually important thing is your finger, and the best way to keep it in focus is to move your eyes such that the image of your finger is always projected onto the same part of your retina.

Rather than moving their eyes, pigeons move their entire heads in a similar fashion to stabilise the image.  This explanation is further supported by the fact that if you transport a pigeon passively – on a moving platform, say, so that it’s not walking or moving its feet at all – it will still bob its head: the pigeon doesn’t even need to be walking in order to produce this behaviour, it just needs an unstable visual environment.

There are likely to be other functions of head-bobbing, however.  If nothing else, it’s a reasonable assumption to make simply because visual processing is such a complex task – the visual system always makes the most of what it’s got.

For instance, depth perception is more difficult for pigeons than for humans, because the visual fields of their two eyes don’t overlap nearly as much as ours do.  We can tell a lot about the depth or distance of an object by looking at it with both eyes and comparing the retinal image projected onto each.  Pigeons, with their eyes spaced more widely apart, don’t have that kind of information available to them.  One source of information they can use instead is motion parallax, which is based on the fact that objects closer to you appear to move faster than objects further away as you travel through your environment.

For instance, if you’re travelling in a car, you might notice the fence at the side of the road whizzing past so fast you can’t see it clearly; the trees behind it will move quite quickly, but not as quickly as the fence; the buildings in the background will move more slowly still; and the mountains you can make out in the distance will barely be moving at all.  You can get a pretty good idea of the relative distances between all these things in the foreground and background just from their relative motion.  The same idea is shown in this GIF.

Motion parallax
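It’s easy to put rough numbers on motion parallax: an object directly off to your side at distance d, while you move at speed v, sweeps across your visual field at about v/d radians per second, so halving the distance doubles the apparent speed.  Here’s a quick sketch using made-up distances for the car example:

```python
import math

def apparent_angular_speed(speed_mps, distance_m):
    """Angular speed (degrees per second) of an object directly to the observer's side.

    For an object at perpendicular distance d, an observer moving at speed v
    sees it sweep past at roughly v/d radians per second, so nearer objects
    appear to move faster - which is all motion parallax is.
    """
    return math.degrees(speed_mps / distance_m)

# Hypothetical distances for the car example: fence, trees, buildings, mountains.
car_speed = 30.0  # m/s, roughly 110 km/h
for name, distance in [("fence", 3), ("trees", 30), ("buildings", 300), ("mountains", 10_000)]:
    print(f"{name:>10}: {apparent_angular_speed(car_speed, distance):7.2f} deg/s")
```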

Pigeons create this motion parallax and maximise it in the thrust phase of their thrust-hold cycle: by moving their head a little bit, over and over again, they can get a rapidly updating idea of how far away things are, relative to the pigeon and to other things in the environment.  Even when the pigeon is moving too fast to be able to maintain the hold phase – e.g. when it’s running or landing – it will still thrust its head back and forth to sustain the motion parallax.

Incidentally, cat owners will have noticed a similar phenomenon in their pets: before making a big leap, cats will often move their heads back and forth a few times.  Although the visual fields of cats’ eyes overlap a great deal (even more than humans’), so they don’t face the same paucity of depth information that pigeons do, when making an ambitious jump it still helps to have all the information you can get about the distance between you and the landing spot you’re aiming for.

References

Dunlap, K. and Mowrer, O. H. (1930). Head movements and eye functions of birds. J. Comp. Psychol. 11, 99–113.

Why We’re Not Brains in Vats


Have you ever used your fingers to help you count?  Have you ever drawn someone a map to explain the way?  Have you ever drawn a mind map and found it helped you organise your thoughts?   All these things are examples of a very clever adaptive technique of off-loading cognition onto the environment, something that a set of recent experiments demonstrates very nicely.

We're not this.

Using the things around you can make it easier to think, by effectively increasing cognitive capacity – like an external hard-drive. This is what you’re doing when you draw a diagram to make something clearer.  Your brain doesn’t operate as a self-sufficient processor; it is situated in the context of whatever is around it, and it makes things very efficient and saves memory capacity by using the world to its advantage.

Writing is possibly the best example of this aspect of embodied cognition, not least because as a culture we are so dependent on it.  When you know something is written down for you to look at again, you don’t need to remember it in so much detail, which leaves you free to turn your attention to other things.  In fact, people in non-literate societies generally have better memories (although I can’t for the life of me find a reference for this), because they have to do without the convenience of writing important things down.

So we are discovering more and more that the mind is not a self-contained instrument that is detached from the world, the way our intuition usually says it is.  Cognition and the environment are intimately connected.  When Descartes said that the mind and the body were separate substances, he wasn’t just wrong – he was asking the wrong question entirely.  You can’t meaningfully separate them, because the evidence to date points to the environment as a functional extension of the brain.  This is what is espoused by the theory of embodied cognition, which says, in essence, that the basis of all cognition is perception and action and that this is the context in which the mind should be understood and studied.  Offloading cognition onto the environment is one of the six views of embodied cognition.

The latest research on this topic brings it up to date by looking at online information and other influences of computers on cognitive processes.  Researchers at Columbia have investigated the effect of using computers on people’s memory, by getting them to type items of trivia (like “An ostrich’s eye is bigger than its brain”) into a computer.  In one experiment, half the participants were told that what they typed would be saved; the others were told it would be deleted.  When they had finished, they wrote down as many of the items as they could remember.  (I’m sure you can guess what the result was by now.)  The ones who were told their work would be saved remembered less, because they had offloaded some of their memories onto the computer.  It didn’t make a difference whether they had been explicitly told to remember the items or not, so it wasn’t just that the participants consciously thought they would need the information later.  The simple fact is that our brains make less of an effort to remember things that we have other sources for.

The paper is published in one of the most respected scientific journals, Science, and details four experiments the authors carried out relating to how computers and the internet affect our thoughts.  Two of them demonstrate embodied cognition; in the second (very revealing) experiment, they looked at the source of the information.  Participants typed facts into the computer, as before, but were led to believe that they would be able to access the folders where their entries had been saved during the memory test.

This time, for one third of the statements they were told that their work had been saved.

For another third, they were told which folder it had been saved in.

And for the final third of entries, they were told it had been erased.

When their entries had been saved, participants’ memory for the trivia was worse, but their memory for whether and where it had been stored was better.  Tellingly, when they couldn’t remember what the statement was, they were more likely to remember where it was saved.  They had focused on exploiting their environment by dedicating their memory to how to access it again, rather than what the content was in the first place.

So this paper doesn’t say anything new about embodied cognition.  What it does is demonstrate that it exists (as you might predict) in the context of computers.  Inevitably, some people will say that this is evidence that computers make us “stupid,” because we are relying on computers to remember things for us – after all, when information was saved on the computer, participants didn’t remember it as well.  Is that fair?  Not really; remember, this is an adaptive process in which you take advantage of the resources you have to make your memory more effective.  And remember, too, that writing does the same thing, but no one claims that writing makes us stupid.

And finally, why are we not brains in vats?  Because embodied cognition says that minds need a body through which to explore the environment, in order to exist.

Sparrow, B., Liu, J., & Wegner, D. M. (2011).  Google effects on memory: Cognitive consequences of having information at our fingertips.  Science, 333, 776-778.

The Ever-Elusive Engram


Since the beginning of the last century, researchers have sought to explain the neurobiological basis of memory.  Everything we do, everything we think and feel is rooted in neural mechanisms, and this necessarily includes memory.  But the engram – the hypothesised neural trace of a memory – is notoriously hard to pin down.

Much of the reason for this is that memory is so broad that it’s difficult to come up with examples of brain function that don’t implicate memory.  This means that vast networks of the brain are going to be involved in memory in some fashion.  What we do know (fairly unhelpfully) is that it isn’t in one particular location, or even in a few locations.  The hippocampus is famed as the seat of memory, but that’s only a small part of the story; the Wikipedia page for the neuroanatomy of memory lists the hippocampus, cerebellum, amygdala, basal ganglia, frontal lobe, temporal lobe, parietal lobe and occipital lobe as brain parts associated with memory function.  Well, that’s pretty much every part of the brain that contributes to cognition… at all.

The first hint that it might be neural networks, rather than lumps of brain tissue, that hold the key to the engram was an experiment done by Lashley in 1929.  He put rats in a series of mazes of varying difficulty, and gave them enough time to learn the layout (rats are very good at this).  He then made lesions in their brains to see what would happen.  What he discovered was that it didn’t matter where he made the lesion: the rat would still retain some memory of how to navigate the maze.  What did make a difference was how big the lesion was – the higher the percentage of cortical tissue he removed, the more errors the rat made (an error being counted as a wrong turn).  This study was pretty revolutionary for the then-young science of neuropsychology, because of its implications for how the brain is organised.  There aren’t little centres for different cognitive faculties; brain function is spread over often vast networks in the brain.

(In fact, if you were to repeat Lashley’s experiment more precisely, you would find that there are subtle differences and variations in severity when you cut out different parts of the cortex.  Lashley’s error was making his lesions so big that each one damaged several cortical areas involved in memory and spatial navigation.  In any case, if you disrupt one form of memory implicated in the maze task – say, navigation by smell – this might be compensated for by using a different method, like sight.)
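Lashley’s result – performance depends on how much cortex is removed, not on which bit – is exactly what you’d expect if a memory is stored across a distributed network.  Here’s a minimal sketch of that idea (a toy Hopfield-style associative memory, not a model of a rat brain or of Lashley’s lesions; all sizes and numbers are arbitrary): silencing a random fraction of units tends to degrade recall gradually, and it matters little which particular units are removed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 200, 15

# Store random +1/-1 patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
weights = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(weights, 0)

def recall(cue, lesioned, steps=20):
    """Let the network settle; lesioned units are clamped to 0 (silent)."""
    state = cue.astype(float)
    state[lesioned] = 0
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1          # break ties arbitrarily
        state[lesioned] = 0            # lesioned units stay silent
    return state

target = patterns[0]
noisy_cue = target * rng.choice([1, -1], size=n_units, p=[0.9, 0.1])  # degraded version of the memory

for fraction in [0.0, 0.3, 0.6, 0.9]:
    lesioned = rng.random(n_units) < fraction   # WHICH units are silenced is random each time
    recovered = recall(noisy_cue, lesioned)
    intact = ~lesioned
    accuracy = np.mean(recovered[intact] == target[intact])
    print(f"lesion {fraction:.0%}: recall accuracy on surviving units = {accuracy:.2f}")
```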

OK, so memory is mediated by networks of neurons.  How does this happen?  How do you store a memory?  We don’t fully know, but there are some explanations that can enlighten us.  One of them is the idea of long-term potentiation (LTP), a modification of synapse strength.  Synapses are the chemical junctions between neurons.  Essentially, they are tiny gaps between brain cells: one neuron releases neurotransmitters (chemicals like serotonin) into the gap, these are picked up by the next neuron, and that neuron then generates an electrical impulse.  When the impulse reaches the other end of that neuron, it releases neurotransmitter into the next synapse, and so on.

In 1966, a Norwegian researcher called Lømo discovered that if you repeatedly stimulate a neural pathway with bursts of electricity at a high frequency, the synapses in the pathway become more efficient – it takes less effort to transmit the signal the next time, because the resultant electrical potential in the second neuron is around 50% bigger.  He did this on neurons in the perforant pathway, which is associated with memory and projects to the hippocampus.  In unanaesthetised rabbits, the subjects of his experiment, the effect lasted up to 16 weeks.

External events are represented in the brain as temporo-spatial patterns of neural activation.  That is the basic premise of neuropsychology; everything we experience is down to these neural patterns.  This means that all learning and memory must be represented by synaptic changes, otherwise nothing about our experiences would change, and that would mean we hadn’t remembered anything.  Lømo’s discovery was the first empirical support for this, and we’ve learnt more since about how synapses can alter their strength.

Here comes the complicated bit, so pay attention.  There is a general rule in neuroscience that neurons that fire together, wire together – this is the basis of Hebbian learning.  As the intensity of the input increases, both the probability that long-term potentiation will occur and the magnitude of the LTP increase.  There is an intensity threshold which must be reached in order for synaptic change to happen, and beyond this threshold higher intensities result in greater LTP.  What’s the significance of this?  It means that a minimum number of synapses must be coactive – fire together – in order for LTP to occur.  This is how particular neural circuits begin to “form” a memory: when there is the right kind of input, like the neural processing of seeing something that has been a threat to you in the past, this particular circuit of neurons starts firing more easily than it would have done if you didn’t have such a memory.  Effectively, your brain is replaying the information.  That’s what the memory trace is; that’s the mysterious engram.
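As a toy illustration of those two properties – a cartoon, not a biophysical model of LTP, with arbitrary rates, weights and threshold – here is a Hebbian update in which nothing changes unless the summed coactive input crosses a threshold, and the size of the weight change grows with how far past the threshold the input is:

```python
def hebbian_ltp(pre_rates, post_rate, weights, threshold=0.5, learning_rate=0.05):
    """Toy Hebbian/LTP rule: synapses strengthen only when enough inputs are
    coactive to push the total drive past a threshold, and they strengthen
    more the further past the threshold the drive is."""
    total_drive = post_rate * sum(r * w for r, w in zip(pre_rates, weights))
    if total_drive < threshold:
        return list(weights)                      # below threshold: no change, no LTP
    potentiation = learning_rate * (total_drive - threshold)
    # Only synapses whose presynaptic neuron was active get stronger:
    # "neurons that fire together, wire together".
    return [w + potentiation * r for r, w in zip(pre_rates, weights)]

weights = [0.2, 0.2, 0.2, 0.2]
weak_input   = [1, 0, 0, 0]   # a single active input: below threshold, nothing changes
strong_input = [1, 1, 1, 1]   # many coactive inputs: above threshold, synapses potentiate
print(hebbian_ltp(weak_input, post_rate=1.0, weights=weights))
print(hebbian_ltp(strong_input, post_rate=1.0, weights=weights))
```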

Pareidolia (or, the House that Looks Like Hitler)


This came to my attention today: a house that looks like Hitler.  And, weirdly enough, it does look strikingly like him – even though it bears none of the characteristics of any human face, let alone the subtle idiosyncrasies that make an individual’s face distinguishable.  All it has is a slanted roof and a prominent lintel above the door, which happen to fall roughly where the hair and the moustache would be.

As far as seeing faces in things goes, this is by far one of the most startling instances of face pareidolia I can think of.  The closest contenders are the fairly dubious images of the Virgin Mary burned onto toast, or the face on Mars (according to Wikipedia, taken by some to be evidence of a long-lost Martian civilisation.  Hmm).  Pareidolia is the phenomenon of seeing patterns or meaning in random objects or sounds, but – in my experience, anyway – it happens much more readily with faces.

How does this happen?  Visual images of objects that look like faces are, as you’d expect, processed in the same area of the brain that processes images of real faces – the fusiform face area.  A study from 2009 looked at how pareidolia is produced by the brain.

The brain images on the left show where the activation is in response to seeing a face-like object (the top brain), a real face (the middle brain), and a non-face-like object (the bottom brain).  The area circled is the fusiform face area (FFA), and you can see quite clearly that it shows roughly the same pattern for real faces and face-like objects – compared with no activity when the subject is looking at a non-face-like object.  There’s the evidence.

So how are we able to tell that only one of them actually is a face, if both images are processed by the FFA?  Well, they are processed differently.  The graph on the right shows the level of activity in that area over time (a period of 0.8 seconds).  Zero on the x-axis marks the exact time the image was shown, and the various lines show the measured current, which is indicative of the FFA’s response to those images.  The pattern here is different – there is more activity when it’s a real face, and the shape of the peak is different.  This means that slightly different neural circuits are activated in the FFA, and that’s what underlies the weird perception.  More interestingly, the fact that the peak of activity happens so early (165 ms is not a long time) means it is an immediate, low-level perceptual process.  That’s why even when you know it’s not a face, you can’t help seeing Hitler.

Hadjikhani, N., Kveraga, K., Naik, P., & Ahlfors, S. P. (2009).  Early (M170) activation of face-specific cortex by face-like objects.  Neuroreport, 20(4), 403-407.