Guidelines for Short Papers

Each paper must be double spaced, justified on the left side only (turn off right-hand justification), in Times Roman 12-point font (or equivalent), with margins of at least 1 inch. Papers must be no longer than 2 pages. I will stop reading at the bottom of the second page. Do not attempt to summarize every aspect of the paper! This exercise is designed to help you learn how to identify and summarize the core argument(s) in an article accurately and succinctly. You may, but are not required to, offer some critical insight into the assigned reading. This can take the form either of agreeing or disagreeing with the author (or both) and explaining why you agree or disagree. To do this you need to spell out the author's position enough so that your own remarks have some context. If you wish, you may focus your attention on a particular argument or part of a paper, as long as it is central to the main issue and not a peripheral point. It is your choice which readings you write on, but you must hand each paper in at the class in which we will discuss the material you wrote about. For example, if you wish to hand in an assignment on Flanagan and Polger's paper, it is due at the beginning of class on February 27. So please pay close attention to the reading schedule.

The reading you are to write this paper on is: 21. Excerpt from Seager, William. 1999. Theories of Consciousness: An Introduction and Assessment. London; New York: Routledge. Pages 72-84 of the attached PDF article.

I will provide a copy of the reading for you to use. Please follow the guidelines and directions above. You must read the reading and then follow the directions stated above to write the short paper. I have attached both the reading you are to use for this short paper and an example of a short paper written by an A student in our class on a different reading/topic, which you can look at to see how yours should look.
Thank you, and I'm looking forward to seeing the work you do on this short paper. The reading for this short paper is a PDF and the sample is a Word document.
In his essay "What Is It Like to Be a Bat?", Thomas Nagel argues that, regardless of whether physicalism is true, it is impossible for humans to understand how physicalism could be true, because a complete physical description of consciousness is not possible. Nagel claims that the explanatory gap is unbridgeable: consciousness is necessarily tied to a particular point of view, and any attempt to understand consciousness objectively is therefore rendered incomplete.
Nagel posits, first, that consciousness is not a strictly human phenomenon; rather, there are other organisms which experience consciousness. From the fact that an organism is conscious, and from that alone, Nagel concludes that there is something it is like to be that organism. Nagel calls this "what it is like to be that organism" the subjective character of experience. For Nagel, the reductionists must reduce the subjective character of experience to a physical phenomenon, since, if base properties and all subsequently determined properties can be described in terms of physical properties, then the resultant mental properties (consciousness) should be describable in terms of physical properties as well. However, for Nagel, this is the most difficult task facing the physicalists, since "every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view" (437).
To demonstrate the relationship between subjectivity and point of view, Nagel uses the experience of a bat. Bats use echolocation to perceive certain characteristics of real things (their size, motion, distance from the bat, etc.). While humans are also able to perceive these characteristics, because our sensory modalities differ, the way in which we do so is radically different from the way bats perceive these characteristics of reality. Nagel explains that even the limited ability to imagine oneself as a bat would merely constitute imagining oneself behaving like a bat and would not constitute any understanding of what it would be like to be a bat. Further, lacking the same sensory modalities as bats, we do not have the correct experiential terms for describing what it would be like to be a bat. Nagel commits himself only to the claim that humans cannot describe the experience of a bat because we are not the same type of being as a bat: we are so different in our senses that we humans could never correctly describe the subjective character of a bat's experience. Nagel later uses the example "Red is like the sound of a trumpet" (449) to show the futility of attempting such descriptions between beings with differing sensory modalities (here, between one with and one without vision). However, Nagel refuses to claim that no subjective character of experience can ever be described. He admits that humans may be able to understand a description of the subjective character of others' experience, since we are of the same type (beings with similar sensory modalities). Nagel does not give a definitive dividing line between types but posits only, "The more different from oneself the other experiencer is, the less success one can expect with this enterprise" (442).
Nagel notes that humans and intelligent Martians, whose physical makeup and sensory modalities are completely different from our own, may be able to come to the same conclusions regarding the physical phenomena that constitute a rainbow, or clouds, or lightning. However, the Martians could never understand the human conception of these things, just as we could not understand a bat's conception of them. The problem, for Nagel, in understanding the subjective character of experience is that it is ordinarily the role of the sciences to describe reality in objective terms. But since the subjective character of experience is necessarily tied to a particular point of view, any attempt to describe it objectively would be a step further from a correct conception. Nagel asks, "Does it make sense . . . to ask what my experiences are really like, as opposed to how they appear to me?" (448)
Nagel concludes by noting that the pursuit of understanding consciousness may allow for objective descriptions of consciousness, but that an ultimate understanding of consciousness will not be possible until the question of the subjective and the objective is first answered.
Bibliography
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review, 1974: 435-450.
3
HOT THEORY: THE MENTALISTIC REDUCTION OF CONSCIOUSNESS
Box 3.1 • Preview
The higher-order thought (HOT) theory of consciousness asserts that a mental
state is conscious if it is the object of a thought about it. Given that we have
some naturalistically acceptable understanding of thoughts independent
of the problem of consciousness, HOT theory promises a mentalistic
reduction of consciousness. Then, the naturalistic account of non-conscious
mind – which is presumably relatively easy to attain – solves the whole
mind–body problem. HOT theory makes substantial assumptions. It assumes
that the mind’s contents divide into the intentional (or representational)
and the non-intentional (qualia, sensations). It assumes that consciousness
requires conceptual thought, and what is more, requires apparently pretty
sophisticated concepts about mental states as such. It assumes that no mental
state is essentially a conscious state. It comes dangerously close to assuming
that consciousness is always and only of mental states. Not all these
assumptions are plausible, and they lead to many objections (e.g. can
animals, whose ability to engage in conceptual thought may be
doubted, be conscious; what is an unconscious pain, etc.). Some objections
can be deflected, but problems remain that engage the generation problem
and prevent the mentalistic reduction from going through successfully.
Philosophers have always been attracted by projects aiming to reduce consciousness
to Something Else, even if this reduction might require a more or less radical
reconception of our understanding of consciousness. They have been motivated
by the hope that, as compared to consciousness, the Something Else would prove
more tractable to analysis and would fit more easily into the physicalist world
view (here it is perhaps encouraging that, compared to consciousness, almost
anything else would possess these relative virtues). In the tradition of Descartes,
consciousness was supposed to exhaust the realm of the mind, which itself thus
became something immediately apparent and open to the mind's own self-inspection (inasmuch as conscious states of mind were somehow essentially self-intimating). There is of course something intuitively appealing to such a thesis
Seager, William. Theories of Consciousness : An Introduction, Taylor & Francis Group, 1999. ProQuest Ebook Central,
http://ebookcentral.proquest.com/lib/oculwlu-ebooks/detail.action?docID=169266.
Created from oculwlu-ebooks on 2020-04-07 21:28:23.
but we have long since lost any sense that it must be true and are now happy to
countenance legions of unconscious mental states and hosts of cognitive processes
existing beneath or behind our conscious mental states. As we saw in chapter 1,
even Descartes ended up endorsing a form of the view that finds cognition, or
cognition-like phenomena outside of consciousness. A second traditional idea,
one stemming from the empiricist heritage, is that there are basic or ‘atomic’
elements of consciousness which are pure sensory qualities and from which all
‘higher’ states of consciousness are constructed, either by complex conjunction or
mental replication, or both. Hume, for example, calls these atomic elements the
simple impressions.1 The impressions are the truly immediate objects of
consciousness and their occurrence is supposed to be entirely independent of
thought. The radical proposal of the HOT theories is to deny this last claim. What
if consciousness were in fact dependent upon certain sorts of thoughts which
themselves were part of the now admissible zone of unconscious mentation?
The appealing possibility is that consciousness is somehow a definable
relation holding between certain mental states, where the latter do not already
essentially involve consciousness and, of course, are in themselves less puzzling
than consciousness itself. A mentalistic reduction of consciousness would have
several virtues. The explanation of consciousness in terms of mentality would
avoid the direct explanatory leap from consciousness to the physical, a leap
which has always seemed somewhat to exceed philosophy’s strength. If
consciousness can be reduced to anything at all, it is evidently more plausible
that it be to something already mental than directly to brute matter. Yet mental
states which do not intrinsically involve consciousness can be seen as ‘closer’ to
the natural, physical world, and so this sort of reduction promises to build a
bridge across our explanatory gap, supported by intermediate mental structures
which can be linked to both sides with relative ease.
In order to evaluate such a project we require a precise specification of, first,
the relevant non-conscious mental states and, second, the relation between them
that is to account for consciousness. One such reductive theory, distinguished by
its clarity and detailed presentation, has been advanced by David Rosenthal, first
in ‘Two Concepts of Consciousness’ (1986) and then in a series of papers that
have appeared over the last decade (see for example 1993a, 1993b, 1995). My
aim here is to review Rosenthal’s theory and to argue that, in the end, it fails to
reduce consciousness successfully. I will not claim outright that any theory of the
sort we are considering must similarly fail, but I confess that the wide scope and
extensive development of Rosenthal’s theory makes me doubt whether there are
other theories of this sort which differ significantly from it. Thus I hope my
objections will possess a quite general applicability.2
Rosenthal begins by dividing mental states into the two traditional, and
presumably exhaustive, classes: intentional mental states (e.g. beliefs, hopes,
expectations, etc.) and phenomenal or sensory mental states (e.g. pains, visual
sensations, etc.).3 For now I’ll follow Rosenthal in this distinction, but it is in fact
a substantial assumption which I shall doubt for much of the rest of this book, and
one that is curiously unsupported by the details of the HOT theory. Rosenthal
understands this distinction in terms of a division of mentalistic properties, so:
All mental states, of whatever sort, exhibit properties of one of two
types: intentional properties and phenomenal, or sensory, properties.
. . . Some mental states may have both intentional and phenomenal
properties. But whatever else is true of mental states, it is plain that we
would not count a state as a mental state at all unless it had some
intentional property or some phenomenal property.
(1986, p. 332)
The first demand of theory specification is then met by asserting that no mental states
are intrinsically or essentially conscious. This sweeping assertion would appear to be
necessary to ensure the completeness of the theory, for otherwise there would remain
a species of consciousness – the essential, non-relational sort of consciousness – for
which the theory would offer no account. The claim that mental states are not
intrinsically conscious is most plausible for the intentional states and least plausible
for the phenomenal states, but there are some intuitive grounds for both. It is
undeniable that we frequently ascribe intentional states of which we claim the subject
is not conscious, even as we also claim that these intentional states are part of the
causes and explanation of the subject’s behaviour. As for phenomenal states, Rosenthal
offers this:
Examples of sensory states that sometimes occur without
consciousness are not hard to come by. When a headache lasts for
several hours, one is seldom aware of it for that entire time. . . . But we
do not conclude that each headache literally ceases to exist when it
temporarily stops being part of our stream of consciousness, and that
such a person has only a sequence of discontinuous, brief headaches.
(1986, p. 349)
Of course, this is contentious, for one naturally wants to draw a distinction between
the headache and the persistent condition that underlies it. The ache, which is the
mental component, is indeed discontinuous but we allow the persistence of the
underlying cause to guide our speech, even though the underlying cause is
occasionally blocked from having its usual effect on consciousness. One wants to say
that the ache is a sensing of this underlying condition and this sensing is not
continuous. By analogy, if we are watching a woodpecker move through a dense
wood for an extended time we will not actually be seeing the bird throughout that
time. We nonetheless say that we watched the woodpecker for an hour. However, on
Rosenthal’s side, I should point out that in cases where the headache can be felt
whenever attention is directed towards it we are, I think, rather more inclined to say
that the headache itself persisted even during the time it was not being consciously
experienced. This sort of neglected but continually accessible sensation is quite
common. If, even upon introspection, nothing was felt we would be reluctant to say
that the ache might still ‘be there’, whether or not the underlying condition persisted.
Of course, such considerations do not sever the relation between certain mental states
and consciousness, but they do make that relation more complex.
Box 3.2 • Essential HOT Theory
For α to be a conscious mental state, the subject must have a higher-order
thought about α. But not just any sort of thought, brought about in any sort
of way, will do. Roughly speaking, we can say that for α to be conscious
one must have the ‘properly’ acquired belief that one is in α. So HOT theory
defines consciousness as follows:
α is a conscious state of S if and only if (iff)
(1) S is in the mental state α,
(2) S has an 'appropriate' thought about α (we'll call having this thought 'being in the state T[α]'; the content of T[α] is something like 'I am in state α'),
(3) S's being in α causes S's being in T[α],
(4) S's being in α does not cause S's being in T[α] via inference or sensory information.
Each clause is necessary to avoid potential objections. It follows from HOT
theory that to be conscious of anything is to be conscious of it as something-
or-other. Every state of consciousness is ‘aspectual’. This follows from the
fact that every thought must be, so to speak, structured from concepts. But
it does not follow from HOT theory that anything has an essential conceptual
aspect under which one must be conscious of it. It also follows from HOT
theory that one can’t be conscious without having beliefs (i.e. the appropriate
higher-order thought). But it does not follow that when one is conscious of
a mental state that one is conscious of a belief. To be conscious of such
beliefs requires yet higher-order thoughts about them.
In any case, I don’t want to press this point since HOT theory may offer an
explanation of why we tend to think that consciousness is intrinsic to certain mental
states. This involves the second specification task, the delineation of the relation
between non-conscious mental states that accounts for consciousness. Rosenthal
explains it so:
. . . it is natural to identify a mental state’s being conscious with one’s
having a roughly contemporaneous thought that one is in that mental
state. When a mental state is conscious, one’s awareness of it is,
intuitively, immediate in some way. So we can stipulate that the
contemporaneous thought one has is not mediated by any inference
or perceptual input. We are then in a position to advance a useful,
informative explanation of what makes conscious states conscious.
Since a mental state is conscious if it is accompanied by a suitable
higher-order thought, we can explain a mental state’s being conscious
by hypothesizing that the mental state itself causes that higher-order
thought to occur.
(1986, pp. 335–36)
Thus it is possible to maintain that if we tend to think of certain sorts of mental states
as essentially involving consciousness this can be explained as the mistaking of a
purely nomological link for a ‘metaphysical’ one. It might be, for example, that pains
are normally such as to invariably cause the second-order thought that one is in pain
and that abnormal cases are exceptionally rare (and, needless to say, rather hard to
spot). In fact, this does not seem at all implausible. The machinery of philosophical
distinctions mounted above is then seen as merely a case of philosophical error
forcing us into an unnecessarily complex view of pains. It is literally true, according
to the HOT Theory, that a pain – in possession of its painfulness – can exist without
consciousness of it, but in fact almost all pains will be attended by consciousness of
them, in virtue of causing the appropriate state of consciousness. One might even
hope to account for the strength and constancy of this nomological link by appeal to
its evolutionary usefulness. Rosenthal comes close to making this point (while actually
making another) when he says: ‘. . . people cannot tell us about their non-conscious
sensations and bodily sensations usually have negligible effect unless they are
conscious. So non-conscious sensations are not much use as cues to [bodily] well
being . . .’ (1986, p. 348). Nature would not likely miss the chance to entrench a causal
connection between sensations, whether of pleasure or pain, and consciousness that
is of such obvious biological benefit. Still, I believe that there remain serious difficulties
with this view of the consciousness of phenomenal mental states, but it will take some
effort to bring out my worries clearly.
Before proceeding let me introduce a piece of notation. We will frequently need
to consider both a mental state and the second-order thought to the effect that one is
in the former mental state. I will use Greek letters for mental states and form the
second (or higher) order mental states as follows: the thought that one is in mental
state α will be designated by T[α]. If necessary, we can allow this construction to be
iterated, so the thought that one is in the mental state of having the thought that one
is in the mental state α gets formally named T[T[α]], and so on. This notation allows
a succinct characterization of HOT theory:
For any subject, x, and mental state, α, α is a conscious state iff
(1) x is in α,
(2) x is in (or, more colloquially, has) T[α],
(3) x’s being in α causes x’s being in T[α],
(4) x's being in α does not cause x's being in T[α] via inference or sensory information.
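The four clauses can be collapsed into a single biconditional. The following is a rough formalization, not Rosenthal's own notation: the predicate labels In, Causes, and ViaInfOrSense are shorthand introduced here for the relations named in clauses (1)–(4).

```latex
% Rough formalization of HOT theory's definition of state consciousness.
% In(x, a): x is in mental state a; Causes(a, b): being in a causes being in b;
% ViaInfOrSense(a, b): that causal link runs via inference or sensory information.
\mathrm{Conscious}_x(\alpha) \;\leftrightarrow\;
  \mathrm{In}(x,\alpha)
  \;\wedge\; \mathrm{In}(x,T[\alpha])
  \;\wedge\; \mathrm{Causes}(\alpha,T[\alpha])
  \;\wedge\; \neg\,\mathrm{ViaInfOrSense}(\alpha,T[\alpha])
```

Writing it this way makes the relational character of the definition explicit: consciousness of α is not a monadic property of α but a condition on the pair (α, T[α]) and the causal link between them.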
Note that for α to be a conscious state, the subject, x, must be in T[α], but x will not
normally be conscious of T[α] as well. This would require x to be in the still
higher-order state T[T[α]]. Such higher-order thoughts are entirely possible but
relatively rare; we are not usually conscious that we are conscious (of some
particular mental state) and HOT theory’s explanation of this is quite satisfying.
HOT theory has many other virtues which are well remarked by Rosenthal himself.
Still, the definition as it stands fails to mark a crucial distinction the neglect
of which can lead to confusion. We must distinguish between α’s being a conscious
state of the subject x and x’s being conscious of α. Sometimes HOT theorists as
well as objectors appear to be conflating the idea that the subject has a second-
order thought about α which makes α a conscious state with the idea that the
subject is conscious of α in virtue of having the second-order thought. I think it
would be an unfortunate consequence if HOT theory entailed that one could be
conscious only of mental states. Most conscious states have an (intentional)
object; a conscious perception of a cat has the cat as its object and the subject in
such a state is conscious not of his state of consciousness but rather of the cat,
that is, the intentional object of the state of consciousness. In fact, it is very rare
for anyone to be conscious of a mental state, at least if it is a mental state with its
own intentional object, and despite the philosophical tradition it is entirely
mistaken to define consciousness as an apprehension of one’s own mental states.
So in a spirit of improvement and to forestall confusion, we can emend the
definition as follows. If α is a conscious state and the intentional object of α is ε,
then we say that the subject is conscious of ε (in virtue of being in the conscious
state α). There may be, and Rosenthal assumes that there are, conscious states that
have no intentional objects. In such cases, saying that α is a conscious state is
equivalent to saying that the subject is aware of α. For example, if we suppose
that pains are ‘purely phenomenal’ states with no intentional objects then to be
conscious of a pain is just the same thing as the pain being conscious. But even
here we must be cautious. To be conscious of a pain in this sense is not to be
conscious of a pain as such. This is a much higher level affair demanding a state
of consciousness whose intentional object is the pain, conceived of as a pain. We
shall shortly see how attention to these distinctions can be important and can fit
rather nicely into the HOT theory.
It is worth digressing here to consider a line of objection to HOT theory
which I think ultimately fails. But the objection is interesting in at least three
ways: it endorses its own radical transformation of our notion of consciousness
and the reply to it reveals some subtle strengths of the HOT theory as well as
bringing out certain features crucial for the defence of a representational view of
consciousness. The attack is mounted by Fred Dretske (1993). Dretske's objections
fundamentally depend upon a distinction between an experience’s being
conscious and someone’s being conscious of that experience, and the claim that
the former does not imply the latter. If Dretske is right about this we have not only
a powerful challenge to HOT theories, but also a substantial and, I would say, very
surprising extension of our knowledge about consciousness. However, I will try
to show that Dretske’s objections cannot be sustained, revealing on the way some
subtle strengths of HOT theories of consciousness.
Dretske follows Rosenthal’s use of some key concepts in setting forth his
objections. Some states of mind are conscious and some are not: state
consciousness is the sort of consciousness which conscious states enjoy.
Conscious states are always (we think) states of some creature which is conscious:
creature consciousness marks the difference between the conscious and the un-
or non-conscious denizens of the universe. Creature consciousness comes in two
flavours: transitive and intransitive. Transitive creature consciousness is a
creature’s consciousness of something or other; intransitive creature consciousness
is just the creature’s being conscious. Dretske allows that transitive creature
consciousness implies the intransitive form, or
(1) S is conscious of x or that P ⇒ S is conscious. (1993, p. 269)
Furthermore, transitive creature consciousness implies state consciousness:
(2) S is conscious of x or that P ⇒ S is in a conscious state of some sort.
(1993, p. 270)
A further crucial distinction is evident in (1) and (2) – the distinction between
what Dretske calls thing-consciousness and fact-consciousness or the distinction
between being conscious of an object4 and being conscious that such-and-such is
the case.
Dretske’s basic objection to HOT theories, although articulated in a number
of ways, can be briefly stated in terms of some further claims involving these
distinctions. The most significant is that, in a certain sense, state consciousness
does not require creature consciousness. That is, Dretske allows that states can be
conscious without their possessor being conscious of them or conscious that they
are occurring. Consider, for example, someone who is consciously experiencing
a pain. By hypothesis, this is a conscious experience. Dretske’s claim is that it is
a further and independent question whether this person is conscious of the pain
or is conscious that he or she is in pain, and one which need not always receive a
positive answer. If Dretske is correct, then HOT theories would appear to be in
trouble, for they assert an identity between a state’s being a conscious experience
of pain and the possession of the belief that one is in pain.
We must, however, re-emphasize a subtlety of the HOT theory here. The
belief that one is in pain, which according to HOT theories constitutes one’s
consciousness of the pain, does not itself have to be and generally will not be a
conscious state. One would be conscious of this belief only via a third-order state,
namely a belief that one believed that one was in pain. Thus one cannot refute the
HOT theory by claiming that it is possible for one consciously to experience pain
without consciously believing that one is in pain, that is, without being conscious
of a belief that one is in pain. HOT theories cheerfully embrace this possibility.
This is important because Dretske does not seem sufficiently to appreciate this
subtlety. He claims that HOT theories must make a negative answer to the following
question: ‘can one have conscious experiences without being conscious that one
is having them? Can there, in other words, be conscious states without the person
in whom they occur being fact-aware of their occurrence?’ (1993, p. 272). But,
plainly, HOT theories allow an affirmative answer to this question. To have a
conscious experience is, according to the theory, to believe that one is having it
but not necessarily to consciously believe that one is having it. To put the point
more generally in terms of the notation introduced above, to be conscious of a is
to be in the state T[α]; this says absolutely nothing about whether one is in the
state T[T[α]] or not, and it is the latter state that is required for T[α] to be conscious.
So, according to HOT theories we have, roughly,
S is conscious of pain = S believes that he is in pain,
so the correct analysis of fact-awareness must be along these lines:
S is conscious that he is in pain = S believes that he is in f(he is in pain),
where f is some self-ascription function. I would suggest that f(he is in pain)
should be cashed out as something like ‘. . . is in a state characterized by I am in
pain’.5 Of course, normally we are rapidly carried from the conscious pain to the
fact-awareness that we are in pain but this is a feature of our cognitive machinery,
not an analytic truth constraining HOT theories of consciousness. If one considers
animal consciousness, the need to separate these states is apparent. HOT theories
must assert that an animal's being conscious of something is the animal's having
an appropriate thought. This is a real difficulty for HOT theories of consciousness,
for there are many who would deny to animals the ability to have thoughts of any
kind, and even more who would deny that they have thoughts about their own
mental states; but it is not the difficulty Dretske advances.6 It is
natural to say that animals can be conscious of pains but that they cannot be
conscious that they are in pain. However, given that animals can have some,
perhaps quite ‘primitive’, thoughts (and the HOT theory simply must address
animal consciousness in this way), the distinction is successfully accounted for
within HOT theories by the above analysis.
The worry that Dretske may not be taking this subtlety into account is
strengthened by his remark that: ‘HOT theories . . . take an experience to be
conscious in virtue of [its] being the object of some higher-order-thought-like
entity, a higher-order mental state that . . . involves the deployment of concepts.
My concern . . . therefore, was to show that conscious experience required no fact-
awareness . . .’ (1993, p. 279). Since HOT theories allow that experiences can be
conscious in the absence of fact-awareness of these experiences, this line of
attack is, strictly speaking, misguided. It may be that Dretske meant to assert no
more by ‘fact-awareness of p’ than ‘belief that p’, without any implication that
these beliefs are themselves conscious. Such an interpretation would not be foreign
to common usage and would lead immediately to the objection against HOT
theories considered below. But Dretske actually says that ‘consciousness of a fact
[which must surely be fact-awareness] . . . requires a conscious belief that this is a
fact’ (1993, p. 272, my emphasis). HOT theories do not require this, and would
consider it an unnecessary leap to third-order thoughts.
Box 3.3 • Dretske’s Objection
Since HOT theory makes every conscious state the object of a thought
about it, every conscious state has an associated conceptualization of it, as
given in the thought that ‘makes it’ conscious. Dretske objects that it is
possible for there to be conscious experience without any of what he calls
fact awareness. Fact awareness is consciousness of facts, which are conceptual
entities; an example would be an awareness that snow is white. One can be
aware of white snow without being aware that snow is white (one can even
be aware of the whiteness of snow without being aware that snow is white).
But HOT theory does not require any consciousness of facts for there to be
conscious experience; it only demands that there be some conceptual
categorization of the experience which is itself generally not conscious.
Dretske’s basic objection can thus be countered. Dretske can, however,
further deny that every conscious experience requires some
conceptualization of it. However, while one can plausibly argue that no
conscious experience has a mandatory conceptualization, it is very difficult
to show that some conscious experience has no conceptualization. HOT
theory asserts rather that every consciousness is a consciousness as of. . . .
Contrary to Dretske, this seems entirely plausible.
In any case, HOT theories do assert an intimate connection between conscious
experiences and beliefs about those experiences. Dretske must show that some states
can be conscious in the absence of any such beliefs. Put another way, he needs to
show that states can be conscious in the absence of any conceptually articulated
characterizations of them.
I do not find any explicit argument for such a thesis in Dretske’s article. The
nearest thing is his defence of the following principle:
(3) For all things x and properties F, it is not the case that, S is conscious
of x ⇒ S is conscious that x is F. (1993, p. 266)
It would be easy to produce an argument against HOT theories based on this principle
if we could identify ‘S believes that x is F’ with ‘S is conscious that x is F’, but as we
have seen this is an identification that HOT theories need not endorse in general. But
there is a closely connected claim which is made by HOT theories.
HOT theories must endorse the transition from state consciousness to transitive
creature consciousness. For suppose some state, α, is a conscious state (i.e. possesses
state consciousness) of subject S. HOT theories analyse this as S’s believing that he is
in α (or, in the notation introduced above, having the thought T[α]). But this is
identified with the state of S’s being conscious of α.7 Thus HOT theories identify
transitive creature consciousness of α with α’s being a conscious state. Thus Dretske’s
line of attack is indeed well motivated and he is right to say that HOT theories must
deliver a negative answer to the question: ‘can there be conscious states in a person
who is not thing-aware of them?' (1993, p. 272). S's belief that he is in α, or S's
consciousness of α, must then characterize α in some way via the deployment of
concepts. I take it that this is HOT theory’s way of claiming that, as well as explaining
why, all consciousness is consciousness as. . ., where the ‘. . .’ is to be filled in by the
conceptual characterization of α occurring in S's belief. It is possible, then, to interpret
Dretske’s defence of (3) as a defence of a slightly different principle, namely the
denial that a consciousness of is always a consciousness as. We could write this
version of (3) as
(3*) For all things x and properties F, it is not the case that, S is conscious
of x ⇒ S is conscious of x as F.
If (3*) is correct we have the basis of a powerful objection against HOT theories of
consciousness. However, this is a big ‘if’.
In the first place, the exact import of (3*) (or (3) for that matter) is not altogether
clear. Dretske’s arguments for the principle may help to clarify it. He begins by noting
the obvious truth that one can be conscious of x, which as a matter of fact is an F,
without being conscious of x as an F (his example, which certainly rings true for me,
is the possibility of (consciously) seeing an armadillo while having only the faintest
idea of what an armadillo is). Such cases, however, only support a much weaker
version of (3*), which would read as follows:
(3**) For all things x and some properties F, it is not the case that, S is
conscious of x ⇒ S is conscious of x as F.
Dretske then goes on to argue that there is no property, F, such that if one sees an
armadillo one must characterize it as an F. This is also true but exposes a critical
ambiguity in (3*).
To see this clearly we must note that a modal component lurks within our
principles. The ‘⇒’ in (3*) cannot be regarded merely as material implication on pain
of the ridiculous logical consequence that S is conscious of everything. Dretske
means to assert that it is possible to be conscious of x without being conscious of x as
an F. The proper understanding of (3*) crucially depends upon the scope of this
possibility operator.
Approaching this point somewhat obliquely, consider the following explicitly
modal principle:

(EX) (∃F)□(Aw(S,x) ⇒ CON(S,x,F)), for all things x,

where 'Aw(S,x)' represents 'S is conscious of x' and 'CON(S,x,F)' stands for 'S is
conscious of x as F'. This principle asserts that there is at least one distinguished or
essential conceptual characterization which any consciousness of x must ascribe to
x. This principle is clearly false, as Dretske ably shows. Thus we can take it that:

(∀F)¬□(Aw(S,x) ⇒ CON(S,x,F)), for all things x.

(Strictly speaking, this is stronger than the mere denial of (EX) but there is no reason
to suppose that essential characterizations exist for any object.) After some logical
manipulation, this becomes something close to (3*), viz.

(∀F)◊¬(Aw(S,x) ⇒ CON(S,x,F)), for all things x,

or, equivalently,

(POS) (∀F)◊(Aw(S,x) ∧ ¬CON(S,x,F)), for all things x.

This states that for any characterization, F, of x, it is possible to be conscious of x but
not to be conscious of x as F. This seems to be true. However, (POS) is not the correct
rendition of (3*) for (POS) is compatible with a weaker version of (EX) stating only
that it is necessary that some characterization apply to x whenever one is conscious of
x. Precisely,

(EX*) □(Aw(S,x) ⇒ (∃F)CON(S,x,F)), for all things x.
The arguments that Dretske offers for (3*) are all compatible with (EX*), for they
are all to the effect that no particular ascribed characterization of x is necessary for
one to be conscious of x in any given situation. Nonetheless, all these situations
are such that x is characterized by some F. Thus, these arguments can only support
the weaker principle. But it is fallacious to infer from (POS) the stronger form
which does express the intended meaning of (3*), namely,

◊(Aw(S,x) ∧ (∀F)¬CON(S,x,F)), for all things x.
The fallacy here is the well known modal fallacy – here applied to attributes – of
inferring from the fact that something is possible of each thing to the fact that
something is possible of all things.
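The quantifier-shift involved can be displayed schematically. The rendering below is my own illustration of the pattern (the die analogue is an illustrative example, not Seager's or Dretske's):

```latex
% Supported premiss (POS): each characterization can fail to apply
\forall F\,\Diamond\bigl(Aw(S,x)\wedge\neg CON(S,x,F)\bigr)
% Fallaciously inferred conclusion: all can fail to apply at once
\quad\not\vdash\quad
\Diamond\bigl(Aw(S,x)\wedge\forall F\,\neg CON(S,x,F)\bigr)
% Analogous invalid inference about a rolled die: each face can
% fail to come up, but it is not possible that no face comes up
\forall n\,\Diamond\,\neg \mathit{Shows}(n)
\quad\not\vdash\quad
\Diamond\,\forall n\,\neg \mathit{Shows}(n)
```

It is precisely (EX*) that blocks the inference: it guarantees that in every situation in which S is conscious of x, some characterization or other is ascribed, even though no particular one is mandatory.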
Since (3*) is unsupported, it cannot be used in an attack on HOT theories.
What is more, the correct principle, (EX*), can be invoked to disarm Dretske’s
final objection against these theories. This objection begins from an
unexceptionable premise, namely that experienced differences require different
experiences. Dretske asks us to imagine attentively examining some complex
scene and then shifting our attention to a second, very similar scene which we
then also attentively examine (as in those common puzzles that ask you to spot
the difference between two very similar pictures). One might not consciously
notice that there was any difference between the scenes but nonetheless it may be
true that one was conscious of every element of each scene. Thus the experience
of scene 1 must have been different from the experience of scene 2 (for example,
it could be that scene 2 lacks an element of scene 1 and so one consciously
experienced that element when scanning scene 1 but of course had no such
experience during examination of scene 2). Dretske concludes that we are thus
committed to the ‘possibility of differences in conscious experience that are not
reflected in conscious belief’ (1993, p. 275). Although we have seen that this is an
infelicitous way to put the objection, Dretske wishes us to take his point to show
that there can be ‘internal state consciousness with no corresponding (transitive)
creature consciousness of the conscious state’ (1993, p. 275). This would clearly
threaten HOT theories given their inescapable contention that state consciousness
entails transitive creature consciousness.
But here, I think, Dretske is equivocating between what is, in essence, a de re
and a de dicto characterization of consciousness. Would HOT theories demand
that S be conscious of the difference between any two distinct experiences as a
difference? Clearly the answer is no, for S may simply have never consciously
compared them. In such cases – quite common I should think – S need not be
conscious of the difference at all. Well, should HOT theories require that if any
two of S’s conscious experiences are different and S is actually conscious of the
difference (i.e. conscious of what is different between the two experiences) then S
must be conscious of this difference as a difference? This also calls for a negative
answer. To say that S is conscious of the difference in this sense is to say that there
is something different about the two experiences of which S is conscious; this
puts no, or very few, restrictions on how that experience will be characterized in
S’s belief about it which, according to the HOT theory, constitutes S’s
consciousness.
That is, to say that S is conscious of the difference is, on the HOT theory, to say
that S believes that he is experiencing the difference. But in the case envisaged this is
true only on a de re reading of this belief. A more precise specification of this belief
that brings out its de re character is this: of the difference (between the two experiences)
S believes of it that he is experiencing it. It does not follow that S is conscious of the
difference as a difference. To find out how S is experiencing the difference (that is,
how to fill in the relevant as. . .) one must discover the correct de dicto characterization
of S’s belief. Our principle, (EX*), guarantees that there is some such characterization
but certainly does not demand that S should end up experiencing the difference as a
difference. I can see no good reason to deny HOT theories access to de re
characterizations of the beliefs that underwrite conscious experience. Of course, such
characterizations do not help to specify the state of consciousness as it is to the
subject himself but that is quite typical of de re belief constructions. They function,
as illustrated above, to provide identification for outsiders of what a belief is about or,
through the use of the HOT theory, to explain what someone is conscious of without
a commitment as to how that person is conscious of that thing.
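The de re/de dicto contrast being relied on here admits a rough formal sketch (the predicates Diff and Exp and the belief operator B_S are my own shorthand, not notation from the text); letting d range over differences between the two experiences:

```latex
% de re: there is a difference d such that, of d, S believes that
% he experiences it (no concept of 'difference' need occur inside
% the content of the belief)
\exists d\,\bigl(\mathit{Diff}(d)\wedge B_S[\,\mathit{Exp}(d)\,]\bigr)
% de dicto: S believes that there is a difference he experiences
% (the concept 'difference' figures within the belief's content)
B_S\bigl[\,\exists d\,(\mathit{Diff}(d)\wedge \mathit{Exp}(d))\,\bigr]
```

The first can be true while the second is false, which is how S can be conscious of the difference without being conscious of it as a difference.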
In short, the HOT theories of consciousness can admit the phenomena that Dretske
points out without succumbing to the objections he believes they generate.
So, the HOT theory is surprisingly resilient and seems able to generate its own
range of insights into the nature of consciousness. It is obvious that HOT theory is
structurally similar to familiar theories of perception and thus it has certain affinities
with other ‘perceptual’ theories of consciousness. By this, I do not primarily mean to
connect HOT theory with those views of perception which explicitly make perceiving
a kind of believing (see e.g. Armstrong 1968, chapter 10), though perhaps HOT
theory could be mobilized to increase the plausibility of such views. More simply,
one can see that the clauses of our definition of HOT theory quite naturally transform
into a pretty conservative characterization of perception, rather as follows:
S perceives O iff
(1) O exists,
(2) S has an experience as of O,
(3) S’s experience is caused by O,
(4) S’s experience is properly immediately caused by O.
With regard to consciousness itself, HOT theory is reminiscent of both David
Armstrong’s view of consciousness as one part of the brain physically ‘scanning’
another (see Armstrong 1968, pp. 92–94 and also chapter 15) and the early Daniel
Dennett’s view of consciousness as a content carrying brain state that gets access to
the speech production centre (see Dennett 1969, chapter 6).9 The relative advantage
of HOT theory is that it does not link the theory of consciousness with any attempt to
model the workings or structure of the brain and its cognitive architecture. It yet
remains compatible with these attempts, which can be seen as physicalist efforts to
delineate the mechanisms that would be required to make clauses (3) and (4) of our
formal characterization of HOT theory true within a working brain.
The analogy between theories of perception and HOT theory also suggests that
according to HOT theory consciousness will suffer analogues of the various forms of
misperception that philosophers of perception have appealed to, postulated or
discovered. These are occasioned by considering the effects of letting one or more of
the clauses of the definition of HOT theory, as given above on page 64, become false
while maintaining as many of the remainder as possible. Let us catalogue the
possibilities without, for now, going into either the question of their genuine possibility
or their consequences.
Box 3.4 • The Four Pathologies
For each clause of the HOT theory definition of consciousness (see Box 3.2
above) there is a possible corresponding pathology of consciousness. The
pathologies are generated by denying one clause of the definition while
maintaining the truth of as many of the rest as possible. These are test cases with
which to explore the limits of HOT theory’s plausibility. Deny clause (1) and
we get an ‘hallucination’ of consciousness, e. g. one thinks one is in pain when
one is in fact not. Deny clause (2) and we get a mental state that is not ‘noticed’
(this is not very pathological except in certain extreme cases, as when one fails
to ‘notice’ an excruciating toothache). The denial of either clause (3) or (4)
leads to interesting and problematic cases, which get to the heart of HOT
theory. In both cases we have to ask whether it is in anything more than a merely
legalistic sense that there is no conscious awareness of the lower-order mental
state, α. If the subject gets into T[α], how can the subject or 'the
world’ tell how T[α] was brought about? If T[α] is the sort of state that ‘generates’
consciousness, won’t an at least as if consciousness result whenever the subject
gets into T[α]?
First, let (1) be false. Then of course (3) and (4) must be false as well, but there is
no reason to deny (2). This is a case of an hallucination of consciousness, the description
of which is somewhat problematic, but whose possibility is a fundamental characteristic
of HOT theory.
Second, let (2) be false. Again, it follows that (3) and (4) are false. This is
simply the case of an unnoticed mental state, indubitably somewhat odd if the
state is a pain or other highly distinct sensation. As we saw above, HOT theory can
perhaps account for the sense of oddity we feel about such cases.
Third, let (3) be false. In such a case, while (4) must be false, (1) and (2) can
remain true. The description of this case is also problematic, as we shall see.
Finally, let (4) be false. All the other clauses can nonetheless be true. Would
this be another hallucination of consciousness, or in this case would there be no
consciousness of a whatsoever? This is a tricky question, as we shall see below. It
is also worth noting that clause (4) is unclear as to which sorts of inferences or
sensory information are to be forbidden. Is any intermediary of this kind
prohibited, or only conscious inference or sensory information? It could hardly
be the latter for that would make HOT theory circular as an account of
consciousness. As to the former, modern cognitive theories abound with
hypothetical unconscious inferential, or quasi-inferential processes, particularly
in the case of perception, the very case upon which HOT theory is largely modelled.
Why couldn’t the link between a and T[a] be a cognitive link in this sense: that
the process connecting them can be usefully described in information-theoretic
terms? To put the question another way, why would a ‘cognitive link’ as opposed
to one of a different, perhaps more ‘direct’ sort, fail to produce consciousness?
Intuitively, we know there must be some difference. Here, as so often in
philosophical analyses we wish we could simply write ‘a link of the appropriate
sort . . .’. But even if we could get away with this in philosophy (which in truth we
cannot), any empirical investigation into the physical differences between proper
and improper linkages will bring us up against the generation problem.
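The catalogue just given can be summarized in a small table (my arrangement; the glosses on the four clauses are inferred from the perceptual analogue set out earlier, with 'T' marking clauses that can remain true and 'F' those that must fail):

```latex
\begin{tabular}{lcccc}
 & (1)\ \alpha\ \text{exists} & (2)\ S\ \text{has}\ T[\alpha]
 & (3)\ \text{caused by}\ \alpha & (4)\ \text{properly caused} \\
\hline
\text{Pathology 1 (hallucinated consciousness)} & F & T & F & F \\
\text{Pathology 2 (unnoticed mental state)}     & T & F & F & F \\
\text{Pathology 3 (wrong cause)}                & T & T & F & F \\
\text{Pathology 4 (mediated cause)}             & T & T & T & F \\
\end{tabular}
```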
We can, following Rosenthal, call these four ways of only partially fulfilling
HOT theory ‘pathological cases’. According to HOT theory they are all genuinely
possible. As Rosenthal says:
. . . since [the] higher-order thoughts are distinct from the mental
states that are conscious, those thoughts can presumably occur
even when the mental states that the higher-order thoughts purport
to be about do not exist.
(1986, p. 338)
Explicitly, this passage deals only with our first pathology, but the reason Rosenthal
gives for its possibility supports that of the other pathologies as well (and, of
course, Rosenthal independently goes to some length to support the possibility of
the second pathology).
One thing we can say about the pathologies in general is that the causal link
between α and T[α] is of a much closer and more intimate sort than the causal
links in perception that mediate between literally distant objects and the brain.
The mechanisms of HOT theory are presumably all within the brain and in fact
they will generally form a part of the more complex and extended causal sequences
involved in perception. This alone suggests one reason why we feel that the link
between consciousness and the mental state of which we are conscious is so
peculiarly intimate. According to HOT theory, it is an intimate link, but one that
is at bottom causally ‘ordinary’, not metaphysically unique.
HOT theory’s willingness to countenance these pathologies allows us to
answer a subtle but very important question about the theory. The general dictum
that to be conscious of α is to have T[α] does not resolve the question whether
possession of T[α] alone is itself sufficient for conscious experience or whether
consciousness depends upon the existence of the proper link between α and T[α].
The account of pathology 1 just given suggests that HOT theory claims the
former: T[α] is by itself sufficient for consciousness. The other clauses serve to
mark out how a certain T[α] is a consciousness of some other particular mental
state, α, or confers consciousness on α. Again we see analogies with theories of
perception, which always possess components that mark out what object is the
perceived object but must also include other components to account, or at least
allow, for the possibility of perceptual error and hallucination. Rosenthal provides
more direct textual evidence that this is his understanding of HOT theory as well.
The quote above from 1986, p. 338 makes the point reasonably clearly and
during a discussion of the ‘reflexivity’ of consciousness Rosenthal says:
The sense that something is reflexive about the consciousness of
mental states is thus not due to the conscious state’s being directed
upon itself, as is often supposed. Rather, it is the higher-order
thought that confers such consciousness that is actually self-
directed.
(1986, p. 346)
What is important here is the claim that it is the higher-order thought, in our terms,
T[α], which confers consciousness. In addition, Rosenthal states elsewhere: '. . . we
are not normally aware of the higher-order thoughts that, on such an account,
make mental states conscious’ (1986, p. 340, my emphasis) and more recently
Rosenthal says: ‘. . . a mental state’s being conscious consists in its being
accompanied by a HOT’ (1995, p. 26 n.). I also observe that in 1995 Rosenthal
adds: ‘. . . HOTs can presumably occur in the absence of the states they purport to
be about. What would that be like subjectively? Since having a HOT makes the
difference between whether there is or isn’t something it’s like to be in a mental
state, it may be that having a HOT without the relevant state is, sometimes at least,
subjectively indistinguishable from having both’ (1995, p. 26 n.). One has to ask:
how could it be only ‘sometimes’, if the appropriate HOT occurs?
I think we can also see that this reading of HOT theory is forced by
considerations of plausibility. Suppose we maintained that consciousness
demanded the fulfilment of all four clauses of HOT theory – that whenever anyone
was conscious at all this consciousness would satisfy all four clauses of the HOT
theory. This would directly entail that, for example, no one could be consciously
mistaken about their own mental states. For example, it would be impossible
consciously to take oneself to believe p while in fact one did not. For, on our
present reading of the theory, one could not be conscious of the belief that p, or
have the’ experience’10 of believing that one believed p, unless one did believe p.
Otherwise clause (1) would fail, contrary to our present supposition. Yet it is, I
think, overwhelmingly plausible that people can consciously take themselves to
believe, desire or hope for what they in fact do not (that people can have ‘false
consciousness’). This phenomenon is by its nature subtle and complex and it is
difficult to trot out elementary examples. Freud offers many rather extreme cases
and some are certainly quite plausible. But it is a common experience to find
someone’s self-image (that constellation of beliefs, desires, etc. that one
consciously takes oneself to possess) at odds with what an outside observer
would rightly take to be that person’s beliefs, desires, etc. Nor is it uncommon for
people to find out that they had mistaken the nature of, for example, a particular
desire of theirs when suddenly confronted with its imminent satisfaction.
We might call this implausible interpretation of HOT theory the Fully
Relational Reading, and while it is unlikely to be correct we should note that it
could immediately account for the sometimes presumed incorrigibility of our
consciousness of our own mental states. If consciousness were necessarily the
fulfilment of our complex of four clauses then it would indeed be impossible to
be conscious of α via T[α] without being in the state α.
But the implausibility of the Fully Relational Reading of HOT theory stems
from the implausibility of incorrigibility itself, and the alternative reading of
HOT theory can obviously account for this failure of consciousness. Failure of
self-knowledge stems from pathologies 1 and 2 (differing in the ‘direction’ of the
failing). As befits pathologies, they are relatively rare, but their frequency could
vary widely depending upon the type of mental state involved or the particular
contents of those mental states. Such variance would be explicable in terms of the
patterns of cause and effect between the relevant αs and T[α]s. The Freudian style
of psychological analysis can be seen as at least a model for such causal
classifications of mental state interrelationships.
We are thus driven to reject the Fully Relational Reading of HOT theory.
HOT theory cannot retreat to it in the face of difficulty. This is important, since
the objections I will raise stem from the acceptance that it is the higher-order
thought, T[α], that confers consciousness, independent of the conditions under
which T[α] is brought about. That is, should T[α] occur to someone (i.e. should
someone get into the mental state designated by T[α]), that person will be in a
state of consciousness indistinguishable from that of being in state α whether or
not they are in α. There will be nothing ‘in’ that person’s consciousness by which
to distinguish the veridical from the pathological cases. Again, this is analogous
to the case of perceptual hallucination where there need be nothing ‘in’ the
perceptual experience that could reveal an hallucination as such.
Nonetheless, there is nothing in HOT theory, nor in the case of perceptual
hallucination, that precludes the recognition that one is the victim of a
pathological state of mind. The Müller-Lyer illusion is a very simple perceptual
‘hallucination’ which illustrates this point. Even while we cannot help but see
one line as longer than the other, we all know they are nonetheless the same
length. Similarly, each of HOT theory’s pathologies is compatible with the
knowledge that one is suffering it. This suggests a variety of rather traditional
objections to HOT theory, of which perhaps the simplest is the following.
According to HOT theory (via pathology 1), it is possible to be as it were
conscious of pain while one is in fact not experiencing any pain. In such a case,
that is to say, the phenomenal property of painfulness will not be exemplified at
all even though the subject ‘thinks’ that it is. The objection is straightforward.
There is no difference, to the subject, between this case and the veridical case of
‘true’ consciousness of pain because in both cases the consciousness-conferring
thought, T[α], occurs. There is every bit as much suffering in the pathological as
in the normal case, every bit as much reason to eliminate T[α] in this case as in the
normal case. Since it is the presumed painfulness of pains that provides us with
the grounds to attribute and sympathize with suffering as well as giving us the
reasons to try to ease the suffering, this would strongly suggest that the relevant
phenomenal property of pains – painfulness – occurs in both cases, contrary to
what HOT theory appears to assert. One cannot reply here that the phenomenal
property goes with T[α] rather than α. Since this is a case of consciousness, HOT
theory would then require an appropriate third-order thought, T[T[α]] to account
for the consciousness of this phenomenal property of T[α]. We could then invoke
a second-order pathology from which, in strict accord with the foregoing, it
would be evident that the phenomenal property of painfulness actually belongs
to T[T[α]]. We would thus generate a viciously infinite hierarchy of thoughts
about thoughts about thoughts . . . . The phenomenal property in question would
forever remain one step above whatever level of the hierarchy was under
consideration and thus could find a home at no level of the hierarchy and therefore
would not belong to any mental state, which is absurd.11
Instead of chasing an ultimately imaginary phenomenal property up through
this hierarchy one could reply that the pain – α – possesses its own sort of
phenomenal property, but so too does T[α]. In fact, solely within consciousness,
there is no discernible difference between the experience of either. Thus, in normal
cases, the appearance of T[α] will be a consciousness of α and (or via) α's attendant
phenomenal property of painfulness. In the pathological case, T[α] occurs without
α but also, we may postulate, T[T[α]] occurs as well, and this latter will be a
consciousness of the phenomenal property of T[α]. This reply is entirely ad hoc
and endlessly multiplies essentially indistinguishable phenomenal properties,
but it also suffers from a worse defect. T[T[α]] is a consciousness of the thought
that one is in pain (i.e. in α). Even if we grant that the thought that one is in pain
has its own phenomenal properties (somehow, conveniently and miraculously,
indistinguishable from the phenomenal property of painfulness that α carries),
T[α] carries propositional content as well. Yet in the pathological case, there need
be no conscious awareness of the thought that one is in pain. Pathology 1 only
requires that one think and experience, falsely, that one is in pain. It does not
require that one also think that one think that one is in pain. Putative sufferers of
pathology 1 would sincerely assert that they were suffering pain and there would
in truth be no difference in the experience between the pathological and non-
pathological cases. There is no reason whatsoever to suppose that whenever
cases of pathology 1 occur there must also occur an additional, still higher-order,
thought.
The danger here is that HOT theory cannot place the phenomenal properties
of mental states in the proper location (and it is hard not to suspect that the reason
is that phenomenal properties are in fact somehow tied intrinsically to
consciousness and cannot be given a relational analysis). This difficulty is
reinforced if we suppose a sufferer of pathology 1 to be informed of his condition.
Let us further suppose that our sufferer is an ardent supporter of HOT theory and
hence is happy to allow the possibility of pathology 1. It is pretty clear that
despite the additional knowledge our subject will still report that he feels pain. In
addition, he may say, in line with HOT theory, that he is not really feeling pain
but he cannot deny that he is feeling something quite indistinguishable from it,
for if pathology 1 were consciously distinguishable from ‘true’ consciousness
then HOT theory would be falsified since in that case T[α] would not be what
confers consciousness. HOT theory is thus faced with an unhappy dilemma. Either
the phenomenal property of painfulness is not exemplified at all in this version of
pathology 1, in which case there is no accounting for our subject’s reports and
evident suffering, or else it is not the pain which exemplifies the property of
painfulness, which is not only obviously implausible, but it leads to the further
implausibilities I have just outlined. In fact, it seems to me that if HOT theory can
seriously countenance the idea that the phenomenal property of painfulness is
not exemplified in this case, then there is no reason to admit the existence of
phenomenal properties at all. Their raison d’être is to account for and provide the
content of the consciousness of sensations. If this very consciousness can occur
without any phenomenal input, no real role remains for the phenomenal properties,
which become merely a gratuitous metaphysical extravagance.12
Focussing more closely on T[α]’s ability to confer consciousness naturally
brings us to pathologies 3 and 4. In both, the consciousness conferring state,
T[α], occurs but, in pathology 3, α occurs but does not cause T[α] whereas, in
pathology 4, α does cause T[α] though not immediately but rather through some
inferential process or a process dependent upon sensory information. As we have
seen above however, so long as T[α] does occur, there will be a consciousness ‘as
it were’ of α. Pathology 3 is perhaps not particularly interesting – it is a case of an
hallucination of consciousness, akin to pathology 1, but one in which, by some
chance, the mental state which would make T[α] a case of what we might call
veridical consciousness (i.e. of course, α) happens to occur alongside T[α]. In the
field of perception, it is just the possibility of such coincidences of perceptual
experience along with what would make them veridical that drives philosophers
to impose the condition that the perceived object cause the perceptual experience.
HOT theory quite properly draws the same lesson from its analogous possibility.
Pathology 4 is much more interesting and leads to what I take to be the most
fundamental objection to HOT theory. HOT theory grants that the state T[α]
confers consciousness, that is, that anyone in the state T[α] will have a conscious
experience that is at least ‘as if’ one were conscious of α which is, so far as the
conscious experience is concerned, completely indistinguishable from a true
consciousness of α. Given this, it is hard to see what is improper, from the point of
view of T[α] being a state of consciousness of α, with any causal process whatsoever
getting one into T[α], so long as it does get you into that state of consciousness.
One complication can be set aside. It is possible to imagine a causal chain
from α to T[α] which includes, as essential links, other phenomenal mental states
the conscious experience of (at least some of) which is indistinguishable from
that of α. In such a case, a principle of causal proximity would seem to require that
the state of consciousness be of the final such phenomenal state in the chain. This
is again rather analogous to possible examples drawn from the field of perception.
Suppose that you are in a dark room looking at the place where a small light bulb
will be turned on. Unbeknown to you, a mirror angled at 45° lies between you and
the light bulb, but a second light bulb has been strategically placed off to the side
so as to be visible in the mirror. This second bulb lights only if the first bulb
lights. So the first bulb causes your perceptual experience, which is a perception
of a light bulb. The causal proximity principle correctly entails that you are really
perceiving the second bulb. But if the second bulb is replaced by a suitable
arrangement of two mirrors (or even video monitors), you will now perceive the
first bulb whenever it is illuminated even though its light (or even a representation
of its light) takes a somewhat devious route to you. The causal proximity principle
applies only to causal intermediaries that ‘satisfy’ the resulting perceptual
experience. Returning to HOT theory, we can legitimately prohibit this kind of
indirect consciousness, but of course hardly any sort of inferential process or
processes relying on sensory information will interpose the particular intermediate
mental states required to rule out such processes. In what follows there will be no
danger of inadvertently appealing to this kind of truly illegitimate mediated
consciousness.
Consider first the possibility mentioned earlier that the inferential processes
that, supposedly improperly, link α to T[α] are all unconsciously buried in the
sub-personal realm of our cognitive architecture. Suppose, that is, there are
functional units in the brain whose cognitive task is simply to bring certain states
up to consciousness. It is not implausible to suppose that there is something of a
competition amongst the myriad of brain states which underlie our phenomenal
and intentional mental states, some signalling distant objects of perceptions,
others important states of the body, still others potentially relevant intentional
states. All these states will loudly clamour for ‘consideration’ by our hypothetical
functional units but only a few will become conscious. The conditions for
becoming conscious could well involve some form of hypothesis generation and
testing at the sub-personal level. Such cognitive mechanisms depend on various
sorts of information processing, some of which are closely akin to inference, as
well as necessarily involving (sub-personal) sensory information.13 This is a sketch
of how the brain might get from α to T[α] and the fact that this process involves
essentially inferential processes and a reliance on sensory information does not
seem to threaten in any way T[α]’s claim to be a consciousness of α, given that
T[α] itself confers the conscious aspect of the experience, leaving only the identity
of what T[α] is a consciousness of to be decided. The situation here is once again
analogous to that in theories of perception, especially visual perception. Many
of these theories are irredeemably inferential in nature (see, e.g. Gregory 1990 or
Marr 1982). Whatever the faults of such theories, no one has ever suggested that
they fail simply because the posited inferential properties of the sub-personal
cognitive system by themselves preclude perceptual experience!
What really motivates inclusion of clause (4) in HOT theory is not the fear of
a supposedly impossible mediated consciousness, but rather the evident fact that
possession of just any second-order thought that one is in a certain first-order
mental state will not, by itself, make one conscious of that mental state, even if
the second-order state is caused by the first-order state. (Just as, in the case of
perception, merely having the thought or belief that a candle is visible before one
is not by itself sufficient for a perception of the candle, even if this belief is
somehow caused by the presence of the candle.) In fact, HOT theory’s acceptance
of pathology 2 makes this very clear. Recall that the second pathology involves
a subject being in state α unaccompanied by the consciousness conferring state
T[α]. To make the case as stark as possible and certainly pathological, let us
suppose that α is the unconscious experience of a spot of white light in an otherwise
jet black visual field (suppose the subject is sitting in an experimental chamber,
utterly dark, in which spots of light at various locations can be turned off or on).
Just as HOT theory allows that one can be in pain while not being conscious of
the pain, one can have visual experiences without being conscious of them.14 In
such a case, α = seeing a white spot. We are supposing that, for whatever reason
and, of course, atypically, T[α] does not occur. Now imagine that our subject is
told both that in actual fact he is seeing a white spot and that, let us say for some
technical and neuro-experimental reason, he is suffering pathology 2. It is given
that our subject fully accepts HOT theory and its consequences, has trust in the
experimenter and the brain theory she employs, etc. Thus the subject comes to
have the thought that he is seeing a white spot and is suffering from pathology 2,
i.e. T[α] plus an independent thought about mental pathology. It is clear that this
T[α] will not confer the consciousness of a small white light against a jet black
background, apparently contrary to the dictates of HOT theory. HOT theory is
supposed to be saved, of course, by appeal to clause (4): our imagined case is
blatantly a case of inference to T[α] via sensory information.
But notice that clause (4) cannot do this job. First of all, we have seen that
mere appeal to inferential or sensory informational mediation will not necessarily
rule out consciousness. And second, HOT theory already accepts that T[α] is the
state that confers consciousness, in the sense of ‘mere experience’ independent of
questions of what the experience is an experience of. So if our subject actually
gets into T[α], he must be conscious ‘as it were’ of a white spot against a jet black
background. It is clear that in our imagined situation the acquisition of T[α] will
not confer this consciousness. So it follows that the possibility of pathology 2
along with the assumptions of our imaginary situation (all compatible with HOT
theory) entail that one simply cannot get into a suitable T[α] in this sort of way.
Compare our hypothetical thought experiment with a kind of perceptual
experience that actually occurs quite frequently. Say you are out bird watching;
to be specific let us say that you and a friend are looking for the elusive
spruce grouse, a bird given to freezing when startled, relying on its effective
natural camouflage to escape detection. As you walk through the bush, you
hear the tell-tale rustling sound and look in its direction. Your friend spots the
grouse and quietly tells you that you are looking right at it, yet you still do
not discriminate the grouse from the background. You believe your friend and
thus you acquire the belief that you are looking at a spruce grouse but this
belief does not yield the perceptual experience of a spruce grouse. Then quite
suddenly, with no apparent change in anything else, you do see the grouse.
You would not have spotted the grouse but for your friend’s information, so
this is a kind of inferentially and sensory informationally mediated perceptual
experience, but of course it is nonetheless a perfectly genuine perceptual
experience.
More ‘scientific’ examples can easily be given as well. A well known
visual illusion involves what is called the ‘transparency effect’ (see Rock
1985, pp. 112ff, 138 ff.). Consider fig. 3.1:
(Fig. 3.1)
At first glance this looks – to most people at any rate – like four distinct,
variously shaded rectangles. But if one is told that it is a grey, transparent sheet
placed over the two-colour rectangle underneath, one can come to see it as just
that. It seems that the information about what one is looking at transforms the
way it looks to one, and again we have a genuine conscious experience that is
inferentially, informationally mediated.
We can extend our thought experiment, by analogy with such actual events,
to show that of course one can get into a suitable T[α] in all sorts of ways. We can
imagine that our subject is in a ‘primed’ state such that being told that he is
seeing a white spot will causally release the subject from pathology 2, just as
being told that a spruce grouse is hiding right before your eyes can actually
trigger its discrimination from the background. Thus an inferential and sensory
informationally mediated process can yield a suitable T[α]. The crucial question
is: what is the difference between a suitable and an unsuitable T[α]?
My fundamental objection to HOT theory will now, I hope, be clear and
clearly damaging. It is that there is no way to delineate the suitable T[α]s from
the unsuitable ones except in ways ultimately equivalent to this: a suitable T[α]
is one that confers consciousness. If this is correct, then HOT theory cannot be a
reduction of consciousness or an explanation of consciousness, for one must
appeal, tacitly, to the very thing one wishes to reduce and explain in order to
characterize HOT theory in complete detail. This does not mean that HOT theory
is wrong to link consciousness and higher-order thought. It is, indeed, pretty
evident that consciousness does have some intimate connections with higher-
order thoughts. But it does mean that one cannot explain consciousness in terms
of, or reduce consciousness to, a relation between lower and higher-order thoughts.
We can go some way towards diagnosing HOT theory’s failing. Ultimately, it
lies in the acceptance that T[α] is a state which confers consciousness along with
the characterization of T[α] as a content carrying higher-order thought to the
effect that one is in the state α. Since HOT theory demands as well that T[α] be a
separate mental state from α there is no way to rule out T[α] being caused by a
wide variety of atypical causes (hence HOT theory’s acceptance of the pathologies
of consciousness). At the same time, it is clear that many states that would
intuitively, and correctly, be counted as such higher-order thoughts do not confer
consciousness, and so the suitable ones must be separated from the unsuitable
ones. But this cannot be done by any appeal to the process which generates T[α],
for the separateness of T[α] means that, bluntly speaking, any such process can
produce, in the respectively proper circumstances, either a suitable or an unsuitable
T[α]. In the end, the only possible characterization of this distinction that
conforms to the dictates of HOT theory is one that appeals to consciousness itself,
or the ability to confer consciousness. Thus HOT theory cannot succeed as a
reduction or explanation of consciousness.
This objection cannot be made against what I called the Fully Relational
Reading of HOT theory for on that construal, T[α] alone is insufficient to confer
consciousness. This is no help however, for, as we saw, the Fully Relational
Reading is extremely implausible on independent grounds.
Such objections reveal that the HOT theory will not face up to the generation
problem any better than did the identity theory. Suppose that we accept the
division of HOTs into the two fundamental groups: those for which, to use
Rosenthal’s words, ‘a mental state’s being conscious consists in its being
accompanied by a [suitable] HOT’ (1995, p. 26 n.) and those for which this is not
true. Whatever manages to produce a HOT of the former sort (be it inference,
causal mechanisms within the brain or even the direct will of God) will also
produce conscious experience. Within HOT theory there can be no explanation
of what this difference amounts to, for the occurrence of the appropriate HOT is
by itself sufficient to ‘generate’ consciousness – the causal ancestry, or any other
feature, of the HOT doesn’t matter once it has come into being. So we are left with
the usual question, which is the generation problem once again, of exactly what
it is about the appropriate HOTs that allows just them to confer consciousness
upon certain states?
Notice that for all my objections, there is a sense in which the HOT theory
could still be true. I have tried to show that one cannot reduce consciousness to
a kind of thinking or explain consciousness in terms of the HOTs. Nonetheless, it
might be that consciousness is, as a matter of brute fact, precisely the kind of
cognitive operation to which HOT appeals. So too, the neural vector coding
theory could, as a matter of brute fact, be what consciousness is (the two theories
could even collapse into one if we wanted or needed to identify the neural vectors
with thoughts instead of the phenomenal qualities). I noted that, on the neural
identity option, the question of just how come only those creatures with the
appropriate vector machinery were conscious is pressing (to which the answer
‘brute fact’ is not very comforting in the face of behavioural evidence of
consciousness). Similarly, while the HOT theory resides at a higher level of
abstraction than the neural vector identity theory, it too faces an analogous
problem. HOT theory must claim that beings without thoughts are without
consciousness. This is problematic for animals. One might wish to argue that
consciousness emerged prior to thought, rather than only after there were creatures
capable of having thoughts, and, worse still, having the sort of complex thoughts
that manage to be about other mental states. Much more radical versions of this
problem arise when we couple the HOT theory to theories of thought and the
contents of thoughts. Before getting to this issue however, I want to examine
Dennett’s views on consciousness, which are closely related to the HOT theory,
though more radical in the reformulation of our notion of consciousness which
they advocate.
Box 3.5 • Summary
HOT theory promises a mentalistic reduction of consciousness to non-
conscious mental states in the hope that a naturalistic treatment of the
latter will be relatively easy to find. HOT theory could then explain
what makes a state conscious: a mental state, α, is a conscious state if
it is the object of a thought with the content that the subject is in α.
Box 3.5 • Summary (cont.)
If such an explanation is successful, HOT theory might even solve the
generation problem. But HOT theory suffers from several damaging
objections, the primary one being the worry that HOT theory cannot
distinguish those higher-order thoughts which can confer consciousness
upon a mental state from the myriad of possible higher-order thoughts
which do not. In the end, the only characterization of the ‘suitable’
higherorder thoughts is simply that they are the ones that confer
consciousness. Unfortunately, this obviously leaves the generation problem
untouched and, worse, makes the HOT theory explanation of consciousness
covertly circular and hence no explanation at all.