Wednesday, 4 July 2012

Bjorn Merker: The Brain's Need for Sensory Consciousness: From Probabilities to Percepts

      Abstract: Some years ago I suggested that consciousness pays its way in the functional economy of the brain by unlocking the savings hidden in the mutual dependencies among target selection, action selection and motivational ranking through multi-objective constraint satisfaction among them (Merker 2007, p. 70). This would place consciousness at a late stage in the brain's operations, suggesting a subcortical implementation of key mechanisms of consciousness in sites of global convergence in midbrain and diencephalon. No doubt our cortical machinery is the source of much of our conscious contents, but that does not mean that the cortex also must be the site where those contents become conscious. Recent information-theoretic analyses of the probabilistic data format of cortical operations point to the utility of collapsing cortical probability density distributions to estimate form in extracortical locations (Ma et al. 2006). I propose that this essential step in neural operations is implemented in a subcortical "global best estimate buffer" whose contents - alone among neural activities - are conscious. They are so not by virtue of anything being "added" to them in order to "make them conscious," but as a direct consequence of the format they must adhere to in order to provide a global best estimate within the narrow time constraints of inter-saccadic intervals. That format directly matches the global format of our phenomenal experience, which in its sensory aspects is that of naive realism.

      Ma, W.J., Beck, J.M., Latham, P.E. & Pouget, A. 2006. Bayesian inference with probabilistic population codes. Nature Neuroscience, 9, 1432-1438.
      Merker, B. 2007. Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Target article, peer commentary and author's response. Behavioral and Brain Sciences, 30, 63-134.
Comments invited
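The operation the abstract borrows from Ma et al. (2006) — collapsing a probabilistic population code to a single global best estimate — can be sketched in a few lines. This is only a toy illustration under standard textbook assumptions (Gaussian tuning curves, independent Poisson spike counts, flat prior), not Merker's proposed mechanism or Ma et al.'s full model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimulus space (e.g. orientation in degrees) and tuning curves
stim = np.linspace(0, 180, 181)            # candidate stimulus values
centers = np.linspace(0, 180, 19)          # preferred stimuli of 19 neurons
gain, width = 10.0, 20.0
tuning = gain * np.exp(-0.5 * ((stim[None, :] - centers[:, None]) / width) ** 2)

# Population response to a true stimulus of 90 degrees: Poisson spike counts
true_idx = 90
spikes = rng.poisson(tuning[:, true_idx])

# Posterior over the stimulus under independent Poisson noise (flat prior):
# log p(s | r) = sum_i [ r_i * log f_i(s) - f_i(s) ] + const
log_post = spikes @ np.log(tuning + 1e-12) - tuning.sum(axis=0)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# "Collapse" the probability density to a single global best estimate (MAP)
best = stim[np.argmax(post)]
print(best)
```

The point of the sketch is only that the population activity encodes a whole distribution over stimulus values, while the final `best` is one committed point estimate — the kind of collapsed, definite "percept" the abstract locates in a subcortical best-estimate buffer.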


  1. Two questions/problems:

    1. Merker's talk seems to be more about intentionality (i.e. our relation to the world) than about consciousness (i.e. "qualia", raw feels, feelings, phenomenal consciousness, etc.). If that's the case, he's not trying to solve the problem that interests us here...

    2. What does this "mental model" imply? Is it a detailed representation? Here the sensorimotor approach has very good criticisms against this type of representationalism: detailed mental representations are cognitively demanding and thus are not adaptive.

    1. I agree with the first part of your problem 1. However, as only Prof. Harnad's talk has really been about the causal role of feelings, and as the other questions discussed by the speakers are important, I do not see why we should consider that we are interested solely in the hard problem. I think that much energy could still be put into the hard problem and that no answer could be found. If this premise is right, the problem that should interest us is that of higher-level human capacities related to feelings, but in which feelings have no causal role.

    2. Isn't there a link between intentionality and consciousness? The world causes our raw feels, feelings, even "qualia" and phenomenal consciousness. Intentionality is only causality seen from the receiving end; we feel something about the causes.

      Awareness is feeling the effects. Consciousness is knowing that these effects have a cause (a causal relation to the world).

    3. To Alexandre B. Romano, point 1: You may remember that in my talk I dwelt at some length on the emptiness of Nagel's characterization of consciousness in terms of "something it is like". It is no more than another name for the conscious state, a tautology (please see my video for the details). I then went on to say that "instead of being recognized as a tautology this ”something it is like” has caused endless mischief by being taken in the sense of a property, a distinctive qualitative something possessed by the conscious state, and by it alone, as an inherent identifying attribute, elusive yet crucial to any explanation of the nature of the state it is taken to predicate. As such it has taken on a life of its own and assumed many guises, such as the putative ”feel” of consciousness, a truly bizarre notion, since any ”feel” that might be proper to it would have to be a content of the conscious state to be experienced, and a content of the conscious state cannot define that state itself, being contained in it. So called qualia are derivatives of this global ”feel” by differentiating it into the separate ”feels” of any and all conceivable specific contents of the conscious state, and obviously are even worse definientia for the same reason." That means that if "«qualia», raw feels, feelings" is "the problem that's interesting us here", then "what interests us here" is not the conscious state itself, but some of its contents, and a limited portion of those contents, moreover. My talk, however, dealt with the conscious state as such, in keeping with the title of this Summer Institute, and it cannot be defined by something it contains ("qualia", "feels" or what not), however widespread that misconception might be.
      Point 2: The contents of the brain's reality model are exactly as detailed as the contents of your consciousness, because the former are the latter, as I tried to illustrate towards the end of my talk.


      Very complicated, Bjorn!

      Nagel or not, organisms feel. And the hard problem is explaining how and why they do.

      That's neither tautologous, nor does it need even to mention the weasel-word "qualia"...

    5. Here is the deleted comment, corrected for spelling:
      I'd say that conscious organisms feel, and they see if they have vision, and so on. Feelings are an important content of consciousness, but no more than a content, and one among many others, such as perceptual contents (the world in which we experience ourselves as living), thoughts, etc. My objection is to elevating the fact of feelings (readily interpretable as those contents of consciousness that serve motivational guidance of behavior) to a defining attribute of the conscious state, whereas in fact they belong among its contents. On what grounds has this particular kind of content been determined to be harder to explain than sights and sounds? The crux of consciousness theory is not explaining conscious contents, but the state that offers those contents the chance to be conscious. That is an altogether different order of problem, which forces one to come to grips with the defining properties of the first person perspective, which is the really hard problem.


      Sights and sounds are felt: It feels like something to see or hear. All mental states (conscious states, call them the synonym one will) feel like something to be in. Otherwise they are just states.

      The hard problem is explaining how and why any state feels like anything to be in.

      The self-referential koans of consciousness have next to nothing to do with this. If every conscious state did not feel like something to be in (even if all it feels like is "ouch") we would not have had this Summer Institute...

    7. Sights and sounds are felt (evoke feeling) IF THEY HAVE EMOTIONAL IMPLICATIONS, otherwise they simply look and sound in different ways. Those differences are differences of visual or auditory (pattern) content, and not of feeling content. To claim otherwise is to rob vision and audition of their phenomenological specificity, and to equate all contents of consciousness with "feeling". That would be to make "feeling" a synonym of "content of consciousness" (or perhaps construe it as the Nagel-inspired "additive" needed to make something conscious), which returns us to the Nagel fallacy. "Feel like something to be in" is another way of expressing Nagel's "something it is like", the problematic nature of which I alluded to in my reply to Romano, and more directly in my lecture.


      Well, we're just going to have to differ on this, Bjorn:

      I think Tom Nagel is and was spot-on (except he should have said "feels like" rather than "is like").

      Nothing -- but nothing at all -- other than word-choice differs between asking someone what it sounds like to hear a middle C on a baroque oboe and asking someone what it feels like to hear a middle C on a baroque oboe.

      And in English the word for what anger feels like and what a rough surface feels like happens to be the same, whereas in French the same word is used for what anger feels like and what burning rubber smells like. The invariant is that they all feel like something, to a feeling organism, but not to a robot, or a teapot. (And explaining the causal role of that difference is what both feels and is hard...)

      But if you insist on a different terminology, just translate the challenge to explain how and why anything feels like anything to a challenge to explain how and why anything feels, smells, tastes, sounds or looks like anything.

    9. You are in other words using "feels like" as a synonym for "content of consciousness", since you say that there is but a word-choice difference between "feels like" and "sounds like" and presumably "looks like" and so on. What I fail to see is the utility of specifically settling on "feel like" as the term of choice for this wide range of ordinary language ways of referring to contents of consciousness. In ordinary language "feel like" typically suggests feeling states, emotions, but that is obviously not what you have in mind (which I realize only now!). In other words, just like Nagel's "something it is like", your "feel" (since it might as well be "sound," or "look," and so on) adds nothing of substance to the neutral and generic "content of consciousness." And of course, unconscious objects and creatures have no contents of consciousness, on that we are in full agreement. The crux of what gives something the capacity to entertain contents of consciousness, i.e. to be conscious, is the real question. My suggestion, elaborated in my talk, is that it is the first person perspective - what it is and what it is for - that supplies the key in that regard.


      Bjorn, this is all circular.

      Your "content of consciousness" is just a metaphor: Baskets have contents (objects). Books have contents (words). What makes the "contents" of "consciousness" conscious is that they are felt. (And what makes words meaningful is that it feels like something to say and mean them.)

      If contents are unfelt, they are unconscious.

      So forget the "contents" and forget about "conscious" and "consciousness," and explain why any state at all is felt, rather than just "functed" (like the state of water, boiling, or the internal state of my computer, right now).

      All else is just hermeneutics, hand-waving and question-begging.

      Ditto for "first person perspective". What makes a perspective conscious (rather than just the optical field of, say, a camera or telescope) is that it's felt. The "1st-person" is merely long-hand for the feeler of the feeling.

      Nothing is gained from all these synonyms, redundancies and weasel-words (except self-delusion about having made some sort of inroad on the hard problem of explaining how and why organisms feel rather than just funct).

    11. I have no problem at all with the basic distinction that you are insisting on, namely between various accounts of mechanism that produce outcomes (doing, performing a function) on the one hand, and on the other accounting for the fact that we ourselves (and presumably some animals as well) do not operate in the dark night of unconsciousness but experience things. I am on your side regarding the fundamental nature as well as importance of that distinction, and am fully cognizant of the tall order faced by anyone presuming to tackle the latter fact on the grounds of naturalism. We differ on two points only. The first is merely terminological (now that I understand the broad sense in which you employ "feeling" - and I just listened to every word of your video lecture from this point of view to make sure I am not missing something).
      My objection to using "feeling" as a synonym for consciousness is that for one thing it is awkward (it works least well for visual experience: In English "I feel him" is NOT possible for "I see him"), and for another because there is a perfectly good generic word in English that covers all the bases much better, namely EXPERIENCE. If I am to give up the convenient and well established terminology of "conscious state", "conscious content", and "first person perspective" I will gladly do so for the single term "experience," "experienced" and "experiencer," but not for "feeling", "felt" and "feeler." This, basically, because the first usage is well established and the second is not in the sense in which you use it, which invites confusion with the ordinary language "feeling/emotion." When in your talk you ask "Why do animals have feelings?" the uninitiated might think that you are using ordinary language to ask why they experience hunger or pain rather than why they are conscious. They cannot, of course, EXPERIENCE hunger and pain without being conscious, but attempts to answer your question would head in very different directions depending on the sense in which "feelings" is understood (particularly when, as in that passage, you use it in the plural). No such problem arises if you ask "Why do animals experience?", or "Why are animals conscious?"...
      The second point on which we seem to differ is that, provisionally at least, I do not share your intuitive sense that the hard problem is impossibly hard. Rather, I share Nagel's sense, alluded to in seldom quoted passages of his 1974 paper, that his arguments might not after all block every conceivable avenue to a physicalist, that is a neural, account of the conscious state (or experience, if you will). As a mere possibility for what he called the ”far intellectual future”, he gingerly suggested that such an account might come within reach if we, and I quote him ”pursue a more objective understanding of the mental in its own right.” For this purpose he suggested that what he called ”structural features of perception” would serve better than qualitative ones, and he illustrated the latter by an example involving redness and the sound of a trumpet. Nagel is pointing, in other words, in the exact opposite direction from that of relying on qualitative aspects of conscious contents as levers of progress on the problem we are dealing with. In that I think he was exactly right, and I have been pursuing issues in the "structural features of perception" that might give leverage towards "a more objective understanding of the mental in its own right" (some examples are in my talk), as stepping stones towards an account in naturalistic terms for why animals are equipped not just with neural mechanisms, which typically function perfectly well without consciousness, but with experience.


      The trouble with "experience" is that it is ambiguous as between something being felt and something merely happening: "My radio is experiencing difficulties." Experiences can be felt or unfelt. I do agree that "experience" comes the closest (it's the word Eva Jablonka prefers), and would be the second best word -- but we only need one word, and "feeling" is the only one free of all ambiguity. ("Feel," by the way, is just as multimodal as "experience": It is perfectly natural and makes perfect sense to say that it feels like something to see, hear, smell, taste, etc.)

      I definitely have no proof that the hard problem is insoluble (and I rather hope it isn't). All I have is an argument, which is that it looks as if all doings can be completely explained ("reverse-engineered") functionally, without anything left over. That makes it hard for feeling to have any causal role (except if psychokinetic dualism had been true, which it is not). There is no room for feeling in any causal explanation; feeling is causally superfluous. Unlike Nagel, I cannot see how a neural account of the "structure of perception" can provide such a causal explanation (though of course I agree that the 1-1 correlation between neural activity and feeling makes prediction possible: this is Dennett's "heterophenomenology," but it's just weather-forecasting, not causal explanation).

    13. It seems to me that by the use of figurative meanings we might continue this terminological argument indefinitely. How about "The earth felt the impact of an unusually large meteor"? That sounds like perfectly acceptable and comprehensible English to me. I prefer ordinary and well established usage, and think that science has no business deploying ordinary language terms in specialized technical senses. When it has a need for specialized terminology it is preferable to coin a neologism specifically for that need, to avoid confusion and ambiguity.

      Since by your own stipulation your "feel" is synonymous with "conscious" and "experience" (I remember a slide in your talk headed by the formula "consciousness = feeling") I do not see that anything hinges on the particular choice of words in any particular context. Yet your persistent preference for and defence of this one term over all its synonyms would seem to indicate otherwise, and if in fact you think that something actually hinges on that particular usage, it would be interesting and helpful to know what that something might be.

      Your argument that all doings (all actual behavior) can be completely accounted for in causal (mechanistic) terms without consciousness (= feeling) playing any role in the account (mechanism) assumes or implies that consciousness (= feeling) is without function. My hunch is that if you built a mechanism that in fact "did" (behaved) exactly as we do in all circumstances in perpetuity (the real Turing test, which unfortunately is encumbered by the flaw of the "stopping problem", i.e. how long do you have to run it to be sure?) you would find that the mechanism, in order to actually behave identically to us in all circumstances, would need to include a subsystem whose structure and mode of functioning fulfilled all criteria of a conscious mode of functioning. And I further think that you would find that on scrutiny our brain would turn out to contain a subsystem organized in just that manner as well, either subsystem accounting for the consciousness (= feeling) of their respective hosts.

      As to the issue of the "criteria for a conscious mode of functioning", that to me is the crux of the problem of consciousness, and the aspect of it to which I am devoting my continuing labors. It is on that score that I was encouraged by finding Nagel's reference to "structural features of perception" (rather than qualitative ones) as a possible path to a "more objective understanding of the mental" which then, in its turn, might open the way to a neural account of the conscious state. That is a long research program, and Nagel merely alluded to it as a tenuous possibility in the vaguest of terms, and he by no means claimed to "see how a neural account of the 'structure of perception'" would provide a causal account of consciousness, as you put it. That actually reverses the approach he gingerly hinted at, in remarks I was quite pleased to encounter in preparing for the Montreal meeting, because I have been pursuing the early step of his suggestion for some years now, without having known about his remarks.


      Bjorn Merker: "It seems to me that by the use of figurative meanings we might continue this terminological argument indefinitely. How about "The earth felt the impact of an unusually large meteor"?"

      Very clever example! I like it. (But I agree that this is all just terminological.)

      Bjorn Merker: "I do not see that anything hinges on the particular choice of words in any particular context."

      I agree. Let's move on to substance.

      Bjorn Merker: "Your argument that all doings (all actual behavior) can be completely accounted for in causal (mechanistic) terms without consciousness (= feeling) playing any role in the account (mechanism) assumes or implies that consciousness (= feeling) is without function."

      Well, I think that is rather an inference from the existing evidence and reasoning, not an implication about all possible future evidence…

      Bjorn Merker: "My hunch is that if you built a mechanism that in fact 'did' (behaved) exactly as we do in all circumstances in perpetuity (the real Turing test, which unfortunately is encumbered by the flaw of the 'stopping problem', i.e. how long do you have to run it to be sure?) you would find that the mechanism, in order to actually behave identically to us in all circumstances, would need to include a subsystem whose structure and mode of functioning fulfilled all criteria of a conscious mode of functioning."

      I agree completely. The Turing Test has no time limit. And I do believe that anything that could pass T3 (or, a fortiori, T4) could feel. And T4's inner mechanisms would have counterparts in organisms' brains.

      But how would this explain how and why T3 (or T4, or organisms) feel?

      We are prepared to believe they feel, on the strength of T3 (or T4 -- probably even T2!). So let's suppose that God comes down and tells us we're right: they do feel.

      How does that explain how and why they feel?

      Having a "subsystem whose structure and mode of functioning fulfilled all criteria of a conscious mode of functioning" is not an explanation of how and why the correlate causes feeling; it is just a statement of the correlation.

      Until further notice, functional correlates explain the doing they generate, not how or why it is accompanied by feeling.

      Bjorn Merker: "And I further think that you would find that on scrutiny our brain would turn out to contain a subsystem organized in just that manner as well, either subsystem accounting for the consciousness (= feeling) of their respective hosts."

      Ex hypothesi, T4 has counterparts in the brain. What it lacks is an explanation of how and why those counterparts generate feeling, rather than just doing.

      Bjorn Merker: "I was encouraged by finding Nagel's reference to 'structural features of perception' (rather than qualitative ones)"

      I don't know what "structural" means here. Unfelt (aka "non-qualitative") perception is not perception, it's just processing.

      Take intensity. An acoustic vibration can be high or low in amplitude. That's its structure. This is correlated with the high- and low-amplitude (or frequency) activity it causes in the brain (and how one causes the other is known, and unproblematic: they're both doings). This is in turn correlated with the feeling that the sound is louder or softer.

      Now I am waiting to hear how the "structure" of these amplitude/frequency effects on a robot with sensors and effectors (and internal processors) explains how and why it feels like something to hear sounds…

    15. I have followed your entire reconstruction of the argument, and could comment on some of the steps in your reconstruction, e.g. your introduction of "correlates" which did not figure in my thinking, let alone in the sense of their "causing" anything (like feeling). But your final sentence tells me that that would not help, because the way that that sentence is formulated prevents me from accepting the challenge it poses. It asks me to accept what I call a Nagel-inspired "additive" conception of the conscious state in that you insist that in addition to hearing sounds - which in my book is being conscious of sounds - it must "feel like something" to hear sounds for them to qualify as conscious, and there I simply refuse to follow you. That "feel like something" is the "additive" - the exact equivalent of Nagel's "something it is like" - and I refer you to the early parts of my talk for my objection to that construct. Heard sounds are by definition conscious, so an account of this fact of consciousness must include an account of why the neural representation of sound at some point in the system is heard, becomes conscious. In this case conscious and heard are the very same thing, and the additive of "felt" adds no additional requirement, because by your own stipulation "consciousness = feeling".

      That is not to say that the fact that something is conscious - say heard - explains itself: explaining it is to my mind a tall order, and a challenge I am working on. I would think that the first steps on that path should involve getting clarity about what it is to be conscious, in its own, descriptive, terms. For that, structural aspects INTERNAL to sensory consciousness (and not the kind of psychophysical correlation you cite) ought to be helpful. I have in mind aspects such as its perspectival organization, to mention but one alluded to in my talk. If you disagree, it would be helpful to know how you conceive of what it is to be conscious, because presumably we must know something more than the mere name of what we are trying to explain if we hope to arrive at a credible explanation. Your answer can hardly take the form of saying that to be conscious is to feel, because you have told us that the two are synonyms, so that would amount to saying that to be conscious is to be conscious.


      What's needed is not my own "concept" of consciousness (feeling) -- we all know what it feels like to feel -- but (anybody's) causal explanation of how and why organisms feel rather than just do. The rest is just phenomenology (hermeneutics).

      I would only add that "unheard sounds" is a bit incoherent. (The counterpart of the time-worn "tree falling alone in the woods" pseudo-puzzle.) What is at issue is unheard oscillations (doings). (But probably you just mis-spoke.)

    17. Yes indeed, I mis-wrote, hearing is of course always of sounds, so it should read "you insist that in addition to hearing something - which in my book is an act of consciousness - it must "feel like something" to hear it in order to qualify as an instance of consciousness".
      And, just to be sure, here is the correct wording of the later sentence: "Hearing something is by definition conscious, so an account of this fact of consciousness must include an account of why the neural representation of certain states of ambient air oscillation at some point in the system is heard, becomes conscious."

      With that cleared up, perhaps we can get back to the issue my badly formulated September 7 first paragraph led up to, namely the desirability of getting clarity about what it is to be conscious in its own, descriptive, terms. This, because that issue is intimately involved in answering the question you repeatedly return to, and which concerns many of us who gathered in Montreal this summer, namely a causal explanation of why animals are conscious (= feel in your terminology) rather than function without awareness (= feeling). I just deleted a comment in which I had made an inchoate start on indicating how I approach that question, only to realize that I had skipped the even more elementary step of getting clarity about what it is to be conscious, in its own, descriptive, terms. As I said in my September 7 comment, this must be an important part of arriving at a causal account of why animals are conscious. This because presumably we must know something more than the mere name ("consciousness", "feeling") of what we are trying to explain if we hope to arrive at a credible explanation. So at this point I refer you to the second paragraph of my September 7 comment as my suggestion for how to proceed.

    18. Ah, unless the first part of your reply above in fact concerns that second paragraph, as just occurred to me!

      In that case, yes, in some general ordinary language sense of "feel" ('I feel good about that', 'I feel disappointed', 'a feeling of security' and so on) we probably "all know what it feels like to feel" (though even there I have a problem: Why not just "know what it is to feel"? But let's leave that detail aside). From the fact that 'we all know what it is to feel' in this everyday sense it hardly follows that we "all know what it is to feel" in your sense of feeling as synonym for consciousness.

      Leaving my objections to that usage aside, I have no good sense at all of what is implied about the conscious state on this "feeling" construal of its nature. For example, in your August 24 comment above you intimate that besides feelings somehow a "feeler" of the feelings is involved in this conception. Is this "feeler" also a feeling, and if so, does that "feeler feeling" require a feeler of it? And if the feeler is not also a feeling, does that mean that something other than and in addition to feelings figures in your conception? I have no idea what your answers to any of this might be - the answers certainly are not apparent from the content of your lecture at the Summer Institute.

      All of this and more would have to be spelled out if there is to be any hope of coming up with a causal account of the conscious state. Finding solutions is facilitated by applying constraints, and these come from specifics and details. As I've already stated, we accordingly need to know more than the mere name ("consciousness", "feeling") of what we are trying to explain if there is to be any hope of arriving at a credible explanation. So, I would like to know what hides between the "feeling" label for that which you are asking to be accounted for in causal terms.

    19. That should of course be "behind the "feeling" label", in the last sentence above.


      I can only repeat:

      We can keep playing the game of analysis and phenomenology, but it does not explain a simple, hard fact: Organisms feel rather than just do. We all know what that means. How? Why?

      We can keep distracting ourselves with koans about the difference between "what it feels like to feel" and "what it is to feel". And with puzzlement about whether the "feeler" is part of the feeling. And about "the feeling of the feeling of the feeler feeling the feeling…"

      But it gets us nowhere.

      And it begs the (hard) question.

    21. And I can only repeat that we do not all know what it means to feel, in the sense in which you use that term, namely as a synonym for consciousness. It was you who introduced "what it feels like to feel" and a "feeler of feelings," not me, and when I asked for clarification of your usage, in order to get a better grasp of what it is that you want explained, you reply that "the fact of feeling doesn't need 'analysis,' it needs explaining." So we seem to be at an impasse.

      Perhaps after devoting some 4000 words over close to two months of leisurely exchanges without moving much beyond the challenge you posed on July 22 and my reply of two days later, we can simply agree to disagree for the time being. I would be happy with that, having found our exchanges both stimulating and thought provoking.

  2. Unless I completely missed the point, the position expressed in the talk does lead to an infinite regress. If there's a central locus/module/structure/thing that puts it all together, the threat of a homunculus watching that simulated reality isn't very far.

    Merker even had an image with a tiny monitor. If consciousness is the monitor, who's looking at the monitor? Without an observer, creating a simulated reality is useless.

    1. Gloignon, the existence of brain structures that integrate information has nothing to do with the homunculus, which implies a single module that autonomously knows all about the internal representation of the world. Prof. Merker did not posit such an autonomous brain structure. His argument was much more informed about neuroanatomy.

    2. Felix Mongeon is correct in pointing out that integrative structures, of which the brain has many, are untouched by the homunculus argument. Moreover, with regard to gloignon's comment, the same is largely true of monitoring functions, which are legion in engineering control systems and - I presume - are a necessary part of any model of consciousness that means to explain first person phenomenal consciousness. The ONLY circumstances under which a monitoring function leads to an infinite regress are those under which its construction literally and fully DUPLICATES that of the system of which it is a part, because then that monitoring function must itself be fully duplicated as part of its construction, and so on. I know of no instance in the entire literature of our field that assumes such a monitoring function. I certainly do not do so. The ego-center of the neural reality model I sketched is little more than a geometric perspective point FROM WHICH contents are monitored, equipped with a winner-takes-all type decision mechanism. It is one PART of the reality model, and by no means duplicates it, so no infinite regress enters the picture.
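    The logical point here can be made with a minimal sketch (hypothetical Python, names my own, not anyone's actual model): a winner-takes-all selector only ranks candidate contents of the model; it holds no copy of the model itself, so adding it to the system duplicates nothing and invites no regress.

    ```python
    # Illustrative sketch only: a winner-takes-all "monitor" over a reality
    # model. The monitor is just a selection rule applied to salience scores;
    # it does NOT contain a replica of the model, so no further monitor of
    # the monitor is needed - the regress never starts.

    def winner_takes_all(candidates):
        """Return the label of the highest-salience candidate."""
        return max(candidates, key=candidates.get)

    # Hypothetical "reality model": candidate targets with salience rankings.
    model = {"approach_food": 0.6, "orient_to_sound": 0.9, "groom": 0.2}

    selected = winner_takes_all(model)
    print(selected)  # -> orient_to_sound
    ```

    The selector is one small part of the whole, a decision rule over the model's contents, which is exactly the disanalogy with the full-duplication scenario that would generate a regress.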

  3. In the question period, Merker tried to locate the neural correlate of consciousness partly in the midbrain and the hypothalamus (and other locations, but I'm not very familiar with neuroanatomy). Giving a location for consciousness seems to be a requirement of his theory, since he claims that we have a model of the world that is very much 'neurologically realized'. But the proposed location pretty much contradicts Damasio's claims that consciousness might be in good part located in the brain stem. Does Merker's account contradict Damasio's findings?

    1. No, the structures that I concentrate on are located in the upper brainstem (roof of the midbrain, ventral thalamus, etc.), and in general terms are compatible with Damasio's perspective. When it comes to which specific structures might be included or excluded, much work remains to be done, and I certainly am only providing heuristic arguments regarding localization of key mechanisms. I provide more details than I could give in my talk in the chapter in the Edelman et al. book I referred to in my last slide (which should just have appeared in print).

  4. Re-reading the abstract, I think the initial idea of a processing buffer where ideas get thrown and selected makes sense. As long as there is no little "dude" (a centralized system) looking at the buffer and doing the selection.

  5. Very nice, Bjorn. I like your statement of the multi-constraint biologically relevant functions. But here's a but, though it is not much of a but: Why can't a dynamic global workspace as specified in my slideshow/narrative do that job?



    1. Thank you Bernie. My problem with the global workspace conception is that, as I tried to explain in all brevity in my talk, it provides everything except an intrinsic reason to believe that its contents are conscious. It would seem to work perfectly well without any such assumption, all the more so in its latest incarnation as a free-floating flotilla on the ebbs and tides of cerebral activations, if I understood your talk right.

  6. Looking for some clarifications: how does Dr. Merker's account of consciousness and the "global best estimate buffer" apply to other, non-visual sensory modalities? If I understand correctly, any mammal equipped with vision would be conscious by his criteria. But what of a creature with no eyes? It would seem to me that similar probability-processing machinery could exist for other senses. If this holds, where do we draw the line between organisms that count as conscious and those that do not?

    1. Re your second question: I wouldn't take his ideas to mean that anything with vision is conscious; the creature needs the 'extra ingredient': the 'what it is like' to be looking through those eyes, seeing what they see, and doing what they do (even feeling what they feel?).
      I was thinking along similar lines re the other senses. Dr. Merker's 'view' is certainly visuo-centric (ha ha), and his response to how this conscious construct would change in a blind individual was not fully satisfying. However, I think the constructed visual world he described and the rover analogy were a surrogate for the entire unified sensory percept we experience ('sensory seeing'). Still, when I think of the localization of a sound in space (arguably as important as vision), I feel the integration site of that sound (my 'listener') corresponds anatomically (within my head) to wherever that sound IS in the azimuthal plane. YET when sight and sound are integrated I have an entirely different experience altogether.
      So, where exactly is our 'vantage point'? Does it even exist? I think it is more contingent on the type of sensory experience we have, in which case I'm not sure I subscribe to the idea that one locus is the site of all conscious experience.

    2. Nico, to follow up on your question, Pr. Merker did in fact also give the example of audition; he mentioned it briefly. I also think it is easier to talk about vision than about other sensory modalities, since vision is the most studied. Most discussions of sensory modalities have concerned vision since the beginning of the summer school.

    3. Yes, quite right, my assumption throughout has been that the neural reality model is multimodal, though my illustrations were almost exclusively visual. An interesting point in this connection is that all the spatial modalities tie in to the gaze control machinery, which provides a strong reason to integrate them all in a joint multimodal reality model. Even when the optic nerves are severed the eyes and head turn to auditory or somatosensory stimuli. And in the few creatures that lack functional eyes, such as some cave-dwelling salamanders, the head turns in appropriate fashion.

  7. IMHO, Doctor Merker’s work is an amazingly rich path toward a naturalized phenomenology. It gives an account of how representations of the lived body, and of the lived world, should be articulated with the activities ongoing in the neuronal networks that themselves generate these representations. From a phenomenological point of view, this work is of great importance.

    1. Thank you, Maxwell Ramstead, for this hyper-compressed but precise summary of the core notion I tried to get across in my talk!

  8. The first 15 minutes of this talk were the clearest and most succinct summary of the theories espoused to explain how consciousness is generated in the brain seen at the entire Summer Institute. Thank you Dr. Merker for this.

    1. Thank you, Roberto Gulli, for this appreciation. I am glad that my remarks proved helpful to you, and send you my very best.

  9. Sir, I have been working on a project to change the term "incompatible with life". One of our supporters, Rays of Sunshine Hydranencephaly Information & Support (Kammy), had asked me to search you out concerning your research. I would love to welcome you to visit our page on Facebook, which is set up so that even non-subscribing members can read all our materials and posts. I appreciate your time and hope you will support our goal.
    Thank you for your time and consideration.
    Lynn May,

  10. Dear Lynn May: I am sorry not to have discovered your comment until now (mid-January 2013)! I also was informed of your interest by Kammy, and will contact you by e-mail before long. In appreciation and with my best regards, Bjorn.