
An Argument Against Qualia (and some stuff about Robots and Consciousness, too!)

Samuel Butler’s speculation (in Erewhon’s Book of the Machines) that machines could eventually develop consciousness was something of a joke, but the debate over robot consciousness has since become a major issue in philosophy of mind, psychology, and neuroscience, as well as a huge pop-culture phenomenon. The Matrix details robots taking over the world; I, Robot does something similar; Bicentennial Man portrays an increasingly human-like robot; AI does the same, except with a very human-like child. If the human mind, as science has begun to reveal, is nothing but an extremely complicated interaction of material elements, why can’t a computer reach the same level of complexity and hence achieve consciousness? There’s no doubt that computers could eventually look and act like human beings, but the question remains whether they can, for example, have the same moral rules apply to them as apply to human beings, or, even more simply, actually have experience rather than being “zombies.”

For Christians, Jews, Muslims, and other supernatural mythologists who give humans a very special role and purpose in the world, the answer is clear: no way. But even naturalist philosophers have argued against such a possibility. There’s always the weak argument that “humans created robots, so robots can’t have consciousness, or at the very least can’t ever be our equals.” The refutation of that one is fairly self-explanatory. Regarding the possibility of consciousness, humans have built machines that achieve things far beyond their makers’ individual capabilities. Regarding ethics, if a created being meets one’s ethical criteria, why should neurocentrism stand in the way?

More sophisticatedly, John Searle argues that computers process syntax but not meaning; that is, they can consistently process inputs and produce outputs, but they do not actually understand the information moving through them. I find this position very interesting because it draws a sharp distinction between meaning and syntax. When it comes to minds, however, can’t everything ultimately be reduced to syntax (in input, processing, and output)? It would seem that meaning is just a particular richness of syntax. Searle appears to be arguing that there is something to meaning above and beyond the mapping of a particular thing to a certain permutation of, say, binary switches, which he claims is merely syntax.

Is Qualia Bullshit?

Searle’s distinction between meaning and syntax closely parallels the distinction between consciousness and function, or, as the discussion is commonly framed, between qualitative experience and the physical. The primary concept at hand is qualia: qualities of experience that conscious beings are said to have, with a reality above and beyond the material components of the brain. The philosopher Frank Jackson (who may have changed his position recently) initially made his case for qualia by posing the following hypothetical about Mary the scientist:

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. (It can hardly be denied that it is in principle possible to obtain all this physical information from black and white television, otherwise the Open University would of necessity need to use color television.)

What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false.

My answer to the quandary of Mary the scientist is that she does learn something new, inasmuch as her new exposure creates even the slightest functional change. If she is newly able to distinguish between objects, as would be expected of someone who began to see in color instead of black and white, that is fundamentally a process of learning. Jackson’s case seems to beg the question: Mary experiences something new, therefore she learns (where “experience” already contains the concept “learning”).

If functionality can explain any corresponding changes in Mary’s knowledge, Occam’s razor (an essential tool, especially in a discussion like this) requires that qualia be done away with. This narrows the question: supposing there were no functional difference, does Mary learn something new? The answer can be an almost tautologous “no,” depending on one’s meaning of “learn.” Better yet: when two creatures are functionally indistinguishable, is it possible for one to have this ‘qualia’ and the other not? What about creatures that are functionally and materially indistinguishable, save for some process that generates qualia? I will argue a firm “no” to the existence of qualia of this kind, throwing it in the philosophical trash pile on top of forms, universals, noumena, essences, and all other kinds of fantastical and useless constructions. Note here that I am not arguing against experience, feelings, and so on; I am merely arguing against the position that there exists a realm of non-physical things (indescribable by language).

Evidence for Qualia: How do we know it’s there?

A commonly used example of qualia is color. At one point or another, many people think as children, “when I see green, is someone else seeing red? It’s a possibility mannn.” Some take this as irrefutable proof of qualia, when really it is just the absence of proof or disproof. To show what I mean, we can start with another classic example: color-blindness.

Color-blindness is, most certainly, a functional deficiency. People who are red-green colorblind cannot distinguish between red and green objects, a deficiency which manifests itself in their responses to those objects (such as failing to read traffic lights). But what about the case of perfectly inverted spectra, in which every color is swapped with its opposite?

I find the case of inverted spectra questionable; indeed, I doubt that such a case could even possibly exist. How could we test for the presence of inverted spectra in others? In short, we cannot, because there would be no functional difference even if it were the case. While a test can be devised for something like red-green colorblindness, by showing a red patch and a green patch and observing the subject’s response, no such test can work for inverted spectra. Suppose someone saw what was to them “actually” purple where others saw “actually” red, and that others pointed to it and taught that person to call it “red,” and that this were done perfectly over the entire spectrum; that person would then report colors in the same way as everyone else. Note the trouble I face here in putting “actually” in quotes: what is “actual” purple? We have the scientific measurement of wave frequency, which is by all means actual and objective, but the “actual purple” commonly referenced is of the qualitative kind, which has no inherent and objective means of measurement!
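The claim that a perfect inversion leaves behavior untouched can be sketched as a toy model. Everything here is a hypothetical illustration, not a claim about vision science: “experience” is modeled as nothing more than a relabeling applied before naming, and the color pairings are arbitrary stand-ins.

```python
# Toy model: an inverted observer whose color experience is permuted,
# but whose color vocabulary was learned on those permuted experiences.
# The two permutations cancel, so behavior is identical.

INVERT = {"red": "purple", "purple": "red",
          "blue": "yellow", "yellow": "blue"}  # hypothetical opposite-pairs

def normal_observer(stimulus):
    """Experience matches the stimulus; names were learned as-is."""
    experience = stimulus
    names = {c: c for c in INVERT}  # taught: call each experience by its public name
    return names[experience]

def inverted_observer(stimulus):
    """Experience is inverted, but naming was taught against inverted experiences."""
    experience = INVERT[stimulus]
    # The teacher points at a red object; this observer experiences "purple"
    # and learns to call that experience "red".
    names = {INVERT[c]: c for c in INVERT}
    return names[experience]

# No stimulus can distinguish the two observers:
for color in INVERT:
    assert normal_observer(color) == inverted_observer(color)
```

The point of the sketch is that any behavioral test probes only the composition of experience and learned response, and a perfect inversion composed with inversely-learned names is the identity.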

But, someone may respond, though we don’t have access to others’ experiences, wouldn’t we be able to detect inverted spectra if it happened to us, by reflecting on our memories? This counter-objection seems dubious: it must presuppose the possibility of inverted spectra to prove the possibility of inverted spectra. And even supposing, again, that inverted spectra were the case, whatever that would mean, it is doubtful whether we ourselves would be able to know it. We use color as a means of distinguishing objects; on the other hand, we can only distinguish colors by means of objects. That chair is purple; that table is red. If tomorrow I came home and saw that only my chair had turned red and my table purple, I would know they had changed colors by comparing my distinct memories of the chair and table against the backdrop of other colors. Suppose instead that all the colors in the spectrum were inverted in my experience, inverting the colors of everything in my apartment. When I woke up the next day, would anything seem out of place? What would happen if my experience of every object in my lifetime, from which I derived my sense of color, along with all future ones, were perfectly inverted? Remember, inverted spectra is not simply putting on a pair of glasses that retranslates the incoming light waves and sends them through your eyes (that’s cheating!); it is actually changing your experience of purpleness to redness.

Hence, a true case of inverted spectra would also apply to my memories. If I thought “purple,” or thought of my memory of something purple, like my purple chair, there it would be: sitting in my apartment, looking experience-red. Nothing would strike me as queer about that, because I wouldn’t have some store of supra-experiential color information with which I could say: “aha! My experience changed!” Inverted spectra depends on an implicit assumption that we somehow have a priori knowledge of color that we can gauge against our experiences.

You can see how even trying to speak in this manner about color experience devolves into presupposition-loaded nonsense. Overall, I am trying to draw attention to the absurdity of claiming the possibility of inverted spectra by showing that there isn’t any meaningful way of speculating about it: there is no proof or disproof, and there is a much more reasonable explanation sitting nearby. The idea of qualia existing in itself is just idle speculation, like my claim that there is currently a deer crapping on my head, but both the deer and the crap are totally invisible, immutable, immaterial, and undetectable. Where we cannot speak, we must pass over in silence. There is no meaningful access that we, at least as human beings, could possibly have to this mysterious world of color experience. The only means by which we can ever gauge anything of this “qualitative” kind is by measuring it against other things, a distinctly functional process. Occamite reduction thus does away with the magical realm of qualia.

Implications for the nature of consciousness

In light of that, consider how consciousness is often treated as a binary state – one is either conscious or not conscious, one has qualia or doesn’t have qualia – as though it were a singularly defined characteristic, a single consciousness that a mind either possesses or does not. By this logic, there must be a point at which consciousness disappears when a certain “puzzle piece” is removed, and likewise a point at which a piece is added and it appears. This goes hand in hand with the position that there are all the operative functions of the brain, and then consciousness as a phenomenon above and beyond those functions, existing in itself – which lends itself to the possibility of philosophical “zombies” who act in every way identically to a conscious human being, but without consciousness.

It would be more appropriate to describe consciousness as itself playing a functional role. This would fit our understanding of evolution, and would overcome the dastardly problems of the Cartesian mind-body dichotomy. More explicitly, we should not treat consciousness as an irreducible property in itself. If a man goes blind, he’s still conscious, right? What if all of his senses were subsequently eliminated entirely? Would he still be conscious? Consciousness is better described as an agglomeration of functional interactions with the external world than as a simple light-switch. The role of that thing we call color is to allow us to distinguish objects on the basis of their interaction with light, which is apparently pervasive and discriminating enough to be a useful tool for prolonging our survival. The components of our consciousness, the five senses, give us consciousness by transmitting information about reality into a center where that information is integrated. In any case, there is no other explanation or description of consciousness in philosophy that does not eventually devolve into explicit and direct reliance on the unknown.

Oh yeah, and about them robots: keeping close to its original intention, we can revise our initial question to something like, “can a robot ever possess the essential qualities commonly associated with human feelings?” Realistically, this is an empirical question. First, a precise definition of the nature and breadth of the “feelings” must be settled. Then the components of the mind in question must be tested to determine whether those conditions are met. It’s a difficult task, but technology may eventually make it possible to derive a series of functional tests that can fully probe the expansive range of human consciousness.

  1. December 31st, 2011 at 14:34 | #1

    Inasmuch as functionalism can account for anything, functionalism can account for behavioural changes without bringing in qualia.

    That is because functional descriptions are abstract. They are not full causal explanations. A functional description of a computer system need not mention the need for a power supply, or any other hardware issue, but computers need power to actually work.

    If you have a full causal explanation of system S that omits property P, then you can safely conclude that P is not necessary for an actual implementation of S. But you can’t conclude that computers don’t need power, just because hardware details are omitted from an abstract high-level functional description.

    So your argument doesn’t show that qualia are redundant. They might still be part of the hardware implementation that implements and drives the functioning of the system. (As they would be under qualiaphilic identity theory.)

  2. admin
    December 31st, 2011 at 15:32 | #2

    Thanks for your comment!

    You raise an interesting point, but my critique is more along the lines that theories of qualia are nonfalsifiable. It’s an attempt to show that the “problem” of qualia is a linguistic puzzle. Any comment we make about the “existence” of qualia gives us no new information about the world, because it has no testable implications. To use your computer analogy, it would be like trying to talk about hardware issues in a computer in a world where we could absolutely never observe the underlying mechanics of the computer. Of course, with our human brains, we can get a sense of the mechanics through science – but we can’t get at the “essence” of our minds with philosophy, and qualia seems to be an issue of the “essence” of experience.


© 2009-2017 Christopher Khawand All Rights Reserved