Thursday, 5 July 2012


VI. Feelings and Firings (Thursday July 5)

Wolf Singer (MPI, Germany) Consciousness: Unity in Time Rather Than Space?
Erik Cook (McGill) Are Neural Fluctuations in Cortex Causally Linked to Visual Perception? 
Bjorn Brembs (FU Berlin, Germany) Behavioral Freedom and Decision-Making in Flies: Evolutionary precursor of "Free Will"? 
Julio Martinez (McGill) Voluntary Attention and Working Memory in The Primate Brain: Recording from Single Cells 
Christopher Pack (McGill)  Vision During Unconsciousness 
Barbara Finlay (Cornell) Continuities/Discontinuities in Vertebrate Brain Evolution and Cognitive Capacities: Implications for Consciousness? 

Comments invited


  1. I am either very stubborn or very stupid.

    Can anyone give me a definition of attention that doesn't require a decisional force that somehow cannot be explained?
    If not, then I don't see the difference between attention and consciousness.

    I'll take the time to formulate my thoughts into something coherent and post them tonight. Feel free to comment and provide your input!

    1. I think you are going about it the wrong way. Think about it evolutionarily. Attention evolved to select relevant information from the environment, amplify its signal and drown out the rest. There is no decision per se behind which stimuli are attended to; rather, there are learned or innate behaviours that the brain has matched with the most salient stimuli. Otherwise I imagine where attention goes is up to chance, especially in completely novel situations, as with the flies we saw today.

      Although I do see what you mean about attention and consciousness being difficult to separate. When you are conscious you are attending to something, whether it is something physically around you or your own thoughts. It is difficult to imagine being conscious without attending to something, and vice versa.

      Or perhaps: what do you mean by "decisional force"?

    2. Hi Vincent,
      I don't know if it will help to clarify, but for an interesting definition of attention at both the psychological and neuronal levels, I also recommend the introduction of the following article by Mesulam (2010): Spatial attention and neglect: parietal, frontal and cingulate contributions to the mental representation and attentional targeting of salient extrapersonal events.
      It specifies, for instance, that no neuron is dedicated exclusively to attention and that almost any neuron can be involved in attentional modulation. The author also proposes conceptualizing attentional modulations as either domain-specific (e.g., in response to visual stimuli) or domain-independent (exerted through bottom-up influences of the ascending reticular activating system and top-down influences from the frontal lobes and limbic system).

      Audrey Doualot

    3. I can see where the problem lies if we are talking about endogenous attention: "who" decides what to attend to? Martinez didn't respond to this question.

    4. Thanks to you, Pierre and Audrey; that indeed makes sense. And I see that at least one team of researchers is looking into the neuronal basis of attention (I'll be sure to read that, Audrey, thanks!). But even so, I still find it interesting (in a bad way) that scientists use attention in their experiments without knowing how it emerges in the brain, what its correlates are and, most of all, how it can contaminate their experimental data. Or maybe I don't know enough to know that it doesn't matter? Any light on that?

      However, as Martha said (and as I commented on Martinez's answer), there's still the problem of who is controlling attention. From a monist materialist point of view, anything referring to a soul or to any form of free will is unacceptable [if you don't agree with that statement, we can have a private chat, but I do not want to discuss it here as it might diverge from the question].

      I'll start my reasoning and see where it leads me. What attention focuses on at a given moment is determined by both internal and external influences. Here are a few of them: homeostatic input from the body, social interactions, an unexpected movement or sound, etc. No place for choice here; it's all doing (to use Harnad's nomenclature), and doing is observable, hence computable, hence implementable in an AI.

      So we build this AI with a capacity for attention, but no feeling or free will, which means it doesn't have the feeling of attending to something. So the question is: if you don't feel like you are attending to something, are you really attending to it? Cameras do that all the time, and I don't think we can say they feel. Whoa, I came a long way from my first statement... Anyway, from my point of view, we can have attention without feeling, without consciousness.

      Now, some people argue that attention and consciousness are the same thing, with only quantitative differences (one being more "aware" than the other). I'd like to hear from these people and learn their arguments and reasoning. Anyone else is also welcome to share their input on the matter.
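      To make the "attention is doing, hence computable" point concrete, here is a minimal, purely illustrative sketch (the stimuli, salience values and gain factors are all invented, and this stands in for no presenter's actual model): a mechanical winner-take-all step that amplifies the most salient input and suppresses the rest, with nothing anywhere in it that feels.

```python
# Illustrative sketch only: "attention" as a mechanical selection step,
# a winner-take-all over salience values. All numbers are hypothetical.

def attend(stimuli, salience):
    """Pick the most salient stimulus, amplify it, and suppress the rest.
    Pure 'doing': observable input/output, no feeling anywhere in here."""
    winner = max(stimuli, key=lambda s: salience[s])
    amplified = {s: (2.0 if s == winner else 0.5) * salience[s]
                 for s in stimuli}
    return winner, amplified

stimuli = ["sudden noise", "hunger signal", "static background"]
salience = {"sudden noise": 0.9, "hunger signal": 0.6,
            "static background": 0.1}

winner, amplified = attend(stimuli, salience)
print(winner, amplified)  # the unexpected sound wins, no "decisional force" needed
```

      Whether such a selector really "attends" is exactly the question raised above; the point is only that the selection-and-amplification step itself requires no chooser and no feeling.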

    5. I think that attention particularly affects scientists' data. This goes unnoticed by most of them because attention might not be the central point of their research (I'm thinking about C. Pack's diagram yesterday of all the different interrelations between the several areas involved in visual processing, and how it's nearly impossible to isolate them all in a specific research protocol).

      J. Martinez's presentation made a really interesting point about this: directing your attention onto a stimulus is of course deeply related to being conscious of that stimulus. However, he showed in his study that attention also increases the apparent contrast of the attended object. Attention thus modifies the structural content of what is seen. This is particularly relevant when studying perception on a screen: attention brings consciousness (most of the time), and attention brings a contrast increase too.
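      A minimal sketch of that contrast idea, assuming a standard Naka-Rushton contrast-response function and modeling attention as a multiplicative gain on effective contrast; the parameter values here are invented for illustration, not taken from the study.

```python
# Hypothetical illustration: attention modeled as the neuron responding
# as if the attended stimulus had higher contrast. Parameters are invented.

def response(contrast, r_max=50.0, c50=0.3, n=2.0):
    """Naka-Rushton contrast-response function (firing rate in spikes/s)."""
    return r_max * contrast**n / (contrast**n + c50**n)

def attended_response(contrast, gain=1.5):
    """Attention as a gain on effective contrast (capped at full contrast)."""
    return response(min(1.0, gain * contrast))

c = 0.2  # a low-contrast stimulus
print(response(c), attended_response(c))  # the attended response is larger
```

      On this kind of account, the contaminating effect on perception experiments mentioned above is direct: an attended and an unattended stimulus of identical physical contrast drive different responses.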

  2. This comment has been removed by the author.

    1. This comment has been removed by the author.

  3. The panel agreed that our definition of consciousness is too vague to identify "THE" NCC, and that we should instead talk about the neural correlates of conscious X (e.g., vision, decision-making, audition, etc.). This reminds me of LeDoux commenting that the neural correlates of fear actually represent only a common final pathway of many different fear circuits. I wonder if we will at least find the equivalent of a common final pathway, some type of common neural correlate, for the many varieties of consciousness.

    1. That would mean that we could have visual consciousness without, for example, auditory consciousness?
      That doesn't seem impossible to me; blindsight would support it. The problem is, we can never tell who is conscious and who isn't.

      Are people in a vegetative state conscious? To what extent? With what kind of consciousness? What about primates? When is an organism (biological or not) conscious enough to have rights?

      I'm uneasy with the many-types-of-consciousness idea, but it seems to be the only way to go.

  4. Diego commented that we look for "correlates" rather than "causes". This is not unique to consciousness; we talk about correlates for many (all?) topics in cognitive neuroscience. This must arise from the difficulty of inferring causation when our measures (particularly brain imaging) are correlational. Does anyone know where/how this idea of correlates started?

    Diego also briefly suggested that lesion methods might be a better way to infer causality. I think this is incorrect: the fact that a structure is critical for a function does not mean that it causes the function. Maybe I am misunderstanding what he meant by causality. As far as I can tell, the only causal technique we have is brain stimulation in awake, reporting humans.

  5. Day after day, presentation after presentation, neuroscientists tell us that they can measure neuronal signs of awareness of external stimuli. None of them goes so far as to call this consciousness. Maybe it's time to accept that there is a difference between awareness and consciousness.

    Isn't there a component of consciousness related to the capacity to get out of the sensorimotor domain, out of the immediate spatiotemporal reality? That could explain why it is so difficult to identify sensory stimuli that generate consciousness.

    Words can stimulate access to thoughts beyond the sensorimotor domain. Yesterday, tomorrow, home, Tokyo, Mars, Santa Claus all transport you outside of your spatiotemporal reality. You are aware of the word (acoustic stimulus when spoken or visual stimulus when written) and a distant reality comes to your consciousness.

    Not available in Aplysia, fruit flies, monkeys or computers... only in humans?

    1. Well, a picture of Santa Claus might do the same thing. However, that doesn't help much, since animals are very bad at recognizing pictures. Computers are better at it, but their output is not related to (not grounded in) any reality.

    2. It remains basic, but C. Pack talked in his presentation about the different degrees of neural convergence ("integration") that take place as processing gets deeper, with his example of an inferotemporal neuron preferentially activated by O.J. Simpson's picture. It's not about reaching some other spatiotemporal reality, but it is still a profound example of the relationship between what is seen and what is known. If I'm thinking about a beach in Thailand right now, whatever drove this idea to emerge probably lies in the fact that I endogenously activated an "exotic places" network which ended up on a beach in Thailand. This might involve more "consciousness" than seeing a dot on a screen, yet it seems to me to rely on much the same basic mechanisms we have seen so far.

      As for the difference between awareness and consciousness, I find it very interesting because it makes me think of this: in French, we don't really have different words for awareness and consciousness; both would be translated as "conscience", and being aware of something as "être conscient de". On the one hand, I wonder whether we're building up different categories simply because there are different words in English. On the other hand, I'm no proponent of the idea that if a word cannot be translated, the concept doesn't exist: lots of words just cannot be translated into our Western languages. This linguistic gap is worth thinking about.

  6. Diego, I do not agree when you say we should talk about neural causes of consciousness instead of correlates. A fundamental property of causality is temporality: when one labels phenomenon A a cause of phenomenon B, this implies that A preceded B. This might seem trivial, but the implication is that someone has to decide where to stop going back in time when investigating the phenomena that contributed to B. It is much more specific and far less confusing to label the neural events immediately preceding a perception or a behavior as proximal processes. Moreover, I think that any causal discussion of behavior should include functionality, that is, how the behavior was selected through evolution (which implies considering causality on a far wider time frame than the one you propose). Excluding evolution from a definition of causality would be problematic, because the decision to exclude or include certain properties in any definition affects the way we interpret the world. In the present case, excluding evolution from the definition of causality could undermine the importance of evolution and function not only in theory, but in practice as well.

  7. It is too broad to talk about the Neural Correlate of Consciousness. One should always talk about the Neural Correlate of a Particular Conscious Experience (NCPCE), such as seeing a red object in a particular visual location, or feeling pain in the tip of the left thumb, etc.



    I will never know "what it is like" for another to experience the color red, but I do think that imaging tools have the potential to uncover the neuronal activity underlying "what it is like to see the color red" across individuals (this seems feasible, since we are, for the most part, able to agree about what "red" refers to in the real world, and as such, we can induce the "feeling" of seeing red in, say, potential research participants). If we accept the tenet that consciousness/feeling is an all-or-nothing phenomenon, then there needs to be a neurophysiological signature responsible for generating our "feeling". This may be naive/obvious, but it seems logical that every time we "feel", there is either an organ-wide neuronal activity pattern that is present in every other instance of "feeling", and/or a more focal, input-specific neuronal firing pattern that generates the phenomenological experience of "feeling" in response to a particular set of stimuli. If it is the former, then we have good reason to hope for a solution via technological advances, I think. If it is the latter, then we have a daunting task in front of us: how could we dissociate input processing from input-specific "feeling" generation?


    Are you suggesting that there would be ONE network regulating feeling or that there would be different kinds of networks according to the feeling you're experiencing?
    July 2 at 16:03


    The existence of both types seems plausible to me, and both would produce activity that leads us to understand "what it is like to feel something". It would not be a "regulatory" network, though, in the strict sense of the word: if there were one such network, it would be responsible for producing a sense of "what it is like to feel something", where the "something" is stimulus-driven, without the network itself playing a modulatory role. If we consider only the possibility of ONE network responsible for generating the impression of "what it is like to feel something", then that network must be understood as 1) unitary, in the sense that a core network of neurons fires for any given conscious state, but also 2) dynamic/expandable, in that it is obviously stimulus-driven and will fire in synchrony with the networks activated during input processing.