MIND 00-01

There are two main naturalist strategies philosophers have adopted for dealing with troublesome aspects of mentality: positive and negative

The positive strategy: propose theories which make naturalism about the troublesome aspect of the mind intelligible. This positive approach could be called reductive - the idea is to accept all aspects of the mind, but reduce the problematic aspects to something acceptable to the naturalist. Here “reduce” means roughly “reveal it [the mind or some aspect of it] to be something it doesn’t initially seem to be” - i.e. provide a materialism-friendly account of experience, thought, intelligence, etc.

Main theories: versions of behaviourism, functionalism, identity-theories

The negative strategy: defuse the problem, by showing that the common sense view of the mind which leads to problems for the naturalist is just wrong: the problematic entities or features don’t exist (and so don’t have to be reduced).

Main theories: eliminativist materialism; instrumentalism; interpretivism

The possibility of a mixed strategy: reductionist about some parts of the mind, eliminativist about others.

BEHAVIOURISM

It’s useful to begin with a discussion of behaviourism - one form of which was an influential form of reductionism. Behaviourism gets short shrift in some recent textbooks - after all, everyone knows that it has long been refuted, it’s a defunct doctrine, obviously wrong.

This is overly simplistic. What is true is that some forms of behaviourism have been refuted; other forms live on - many contemporary philosophers of mind subscribe to behaviouristic doctrines, even if the label has been discarded.

These contemporary descendants of the original behaviourists will be looked at in due course. But it’s still useful to look at the original doctrine, for two reasons:

BEHAVIOURISM: MOTIVATING FORCES

To put it very crudely, the behaviourist says that having a mind consists in being able to move one’s body in certain ways, in response to various sensory stimuli. Before looking at the doctrine more closely: why should anyone subscribe to such a position, which on the face of it seems very strange?

- anyone influenced by the Cartesian tradition will say that you don’t even need a body to have a mind; you have a mind if you have experiences and can reason; anything which is conscious and intelligent has a mind, irrespective of what it does with its body - assuming it has one

Behaviourists start from the position that this Cartesian way of looking at the mind is profoundly mistaken. According to the Cartesian, mentality essentially involves conscious states, and conscious states are private, i.e. they are not publically observable.

The fact that phenomenal states and properties are not publically observable gives rise to certain difficulties:

General Problem of Other Minds:

I know I have experiences, but how do I know anyone else has? Since I can’t observe anyone else’s experiences, why should I believe that such things exist?

The argument from analogy is very weak. It runs: “I have experiences; other people are very similar to me biologically; therefore it’s reasonable to suppose that they have experiences too.” The problem is, the claim “All humans have experiences” is based on just one case: the only human I know that has experiences is me - generalizing from one case is very risky. Perhaps I am exceptional: every other human behaves just like me, but is a zombie (they have no experiences at all).

Of course, we all believe other people have minds. But if minds are private, in the way the Cartesian maintains, this confidence would be misplaced.

The behaviourist says: our ordinary beliefs about each other’s minds are justified; this is because minds are not hidden from public scrutiny: having a mind is a matter of behaving in certain ways, and behaviour is publically scrutable.

The Beetle in the Box:

In thinking and talking about the mind we use language. Given that conscious mental states are private, how do we go about teaching and learning words like “think”, “pain”, “experience”? It is hard to see how we could, if the Cartesian view were correct. It’s easy to see how mentalistic terms are learnable if mentality consists of publically observable behaviour …

But there’s a deeper problem here, one which Wittgenstein drew attention to.

PI 293:

If I say of myself that it is only from my own case that I know what the word “pain” means - must I not say the same of other people too? And how can I generalize the one case so irresponsibly?

Now someone tells me that he knows what pain is only from his own case! - Suppose everyone had a box with something in it: we call it a “beetle”. No one can ever look into anyone else’s box, and everyone says he knows what a beetle is only by looking at his beetle. - Here it would be quite possible for everyone to have something different in his box. One might even imagine such a thing constantly changing. - But suppose the word “beetle” had a use in these people’s language? - If so it would not be used as the name of a thing. The thing in the box has no place in the language-game at all; not even as a something: for the box might even be empty. - No, one can “divide through” by the thing in the box; it cancels out, whatever it is.

That is to say: if we construe the grammar of the expression of sensation on the model of “object and designation” the object drops out of consideration as irrelevant.

Wittgenstein is making two main points here:

- we all know the meaning of words such as “thought”, “pain”, “feeling of hunger”

- the meaning of a term is what is provided by explanations of meaning

Hence: since explanations of the meaning of “pain” must appeal to publically observable circumstances, the meaning of “pain” is essentially linked to behaviour.

And because of this, the sceptical problem of how we have knowledge of other minds is based on a mistake: there is no such problem! There would be, if experiences were like beetles in boxes - private objects. But Wittgenstein has argued that the word “experience” (and its kin) does not refer to private objects - such words refer to aspects of publically observable behaviour.

So: we see how a behaviouristic approach can be motivated by appealing to the ways in which the words (or concepts) we use in connection with the mind can be meaningful.

- since meaning is wholly shared, every aspect of the mind about which we can talk or think must be wholly public too

- this means mind and behaviour must be closely linked: we don’t ordinarily have access to the insides of people’s heads - so the sorts of facts about the brain that neuroscientists learn must be irrelevant to the meanings of ordinary language terms for mental states

The argument is transcendental: A transcendental argument establishes something about a phenomenon Z by showing that X (say) is a necessary condition for Z even to be possible. This is how the linguistic arguments for behaviourism work: (i) we know that words like “think” and “pain” are meaningful, (ii) what must be the case for these words to be meaningful? (iii) Answer: their correct use must be linked to publically observable phenomena.

- given that meaning is shareable and public, this general line of argument isn’t stupid.

More on this sort of argument later …

_____________

Enough preliminaries, let’s get down to details … we’ll focus on a particularly strong form of behaviourism, which usefully illustrates the reductive strategy:

ANALYTIC/LOGICAL BEHAVIOURISM

A popular doctrine in the early 20th century, held by Carnap, Hempel and Ryle (with nuances), among others

Logical behaviourism (LB) is a form of a more general position: analytic reductionism. The goal of this reductionism: to show that mental concepts, statements about mental events, all talk of the mental realm, can be translated into a non-mental vocabulary, without (significant) loss of meaning

· the content of statements about the mind turns out, after conceptual analysis, to be non-mentalistic

More specifically:

(1) any mentalistic/psychological statement can be formulated in a way that doesn’t use any mentalistic vocabulary, or presuppose any mentalistic concepts

(2) this reformulation reveals in a clearer way what the psychological statement really means

(3) there is a systematic way of arriving at such translations

This is NOT the same as methodological behaviourism, whose advocates (usually psychologists) claim that animal (and human) behaviour is best explained, in a scientific manner, by stimulus-response laws (linking sensory input with behavioural responses without reference to any internal states) - these scientists/psychologists made no claim about how ordinary mental terms were to be analysed (in respect of their meaning)

Basic Log-Behaviourist Claim: roughly, that statements about the mind all turn out, on analysis, to be about the behaviour of human beings

What is “behaviour”? This isn’t quite so straightforward as it seems. Going by common sense, “behaviour” means something like “what people do” - behaving is the same as doing something, performing an action. Now consider the following list:

(a) Actions which don’t essentially involve bodily motions: guessing the answer to a question, working out one’s bank balance

(b) Actions which do require bodily motion: signing a cheque, asking a friend for advice

(c) Bodily movements: moving an arm, swinging a foot

(d) Physiological Reactions: sweating, sneezing, trembling with cold

Still going by common sense, categories (a) and (b) involve actions, they are things people do - they involve intention or volition. (c) and (d) don’t consist of things people do in this sense: when I walk my arms swing back and forth, but automatically: I’m not moving them on purpose. Similarly, when I sweat, I do so involuntarily: it’s not something I do.

- it looks like (a) and (b) constitute action, or behaviour, whereas (c) and (d) don’t

However, the behaviourist takes the opposite standpoint: only the sorts of activities in (c) and (d) constitute “behaviour”. What the behaviourist means by behaviour is bodily movement and physiological reactions.

It’s easy to see why: if the aim is to analyze mentality in terms of behaviour, then what we mean by “behaviour” must be mind-free

- as noted earlier, the psychological statement that is successfully analyzed is translated into terms which don’t themselves use or presuppose any psychological concepts

Actions such as “guessing an answer” or “working out a solution” are directly characterized in psychological terms, and so cannot feature in a reductive analysis of mentality.

Less obviously, the same goes for actions such as “asking for advice” or “signing a cheque”:

- it is only correct to describe someone as asking for advice if the person so described believes this is what they are doing; the description tacitly presupposes that the person has certain mental states: they know they need some advice; they know what advice is, they intend their action to evoke certain consequences (elicit advice), etc.

- likewise, for the description “signing a cheque” to be true of someone, they must know about banks, money, signatures, cheques - if someone doesn’t have this knowledge, they may be making marks on a piece of paper, but they are not performing the same action as someone who is signing a cheque

So it is clear why the behaviourist cannot regard actions such as these as “behaviour” - at least, not if the aim is to reduce mentality in all its aspects to behaviour.

Note: it is only when we realize how austere the behaviourist’s notion of “behaviour” is that the audacity of the undertaking can be fully appreciated.

_______________

Returning to the main theme:

If L-B is true, the traditional mind-body problem disappears! We wanted to know how the mind is related to the body. This way of phrasing the question suggests that the mind is an entity over and above the body and brain. The analytic behaviourist says this is a mistake: possessing a mind is simply behaving (or being apt to behave) in certain ways in certain circumstances. There’s no problem now in saying how mind and brain are related: mind isn’t an entity at all. For the behaviourist, the mind-body problem is a pseudo-problem: this peculiarly philosophical sort of problem arises through a misunderstanding of concepts; once we arrive at a proper understanding of the relevant concepts, the problem is revealed as what it is: not a problem at all, merely a mass of confusion.

· recall Ryle: dualism results from a category mistake (the same general kind of mistake as the one made by the man who is shown round all the university buildings, one by one, and then says: all very nice, but where’s the University?)

So, our mental terms, according to L-B, don’t refer to private inner events and states (which must belong to a private inner self); our mental concepts refer to patterns of bodily behaviour.

The Role of Dispositions

Now, at first sight A-B seems utterly insane. It seems clear that people can have lots of mental states without behaviourally manifesting them

· at the moment you may be feeling pains or itches without talking about them or scratching; you have lots of beliefs and desires you aren’t mentioning, etc.

But remember, the L-Behaviourist doesn’t claim that all your current mental states are currently manifesting themselves in your behaviour - this would be mad.

Rather, by “behaviour” is meant not just actual observable behaviour, but the ways you are disposed to behave. In ascribing mental states to someone we are typically ascribing certain behavioural dispositions.

There is a general distinction between “dispositional” and “categorical” properties: a sugar cube is soluble (a dispositional property) even when it isn’t actually dissolving; its cubic shape, by contrast, is a categorical property, manifest here and now.

There are many accounts of just what dispositional properties are; Ryle’s account was this:

In general, possessing a dispositional property, for Ryle, consists in certain “iffy” statements being true of an object.

Note: An alternative view: having a disposition is having an underlying state which makes the if...then... propositions true. This view isn’t Ryle’s:

To possess a dispositional property is not to be in a particular state, or to undergo a particular change; it is to be bound or liable to be in a particular state, or to undergo a particular change, when a particular condition is realized. (COM, p.43)

Applying this to the mind: focus on a particular mental state, such as the belief that it’s cold outside.

To possess this belief, you don’t need to be in any particular internal state; it’s true that you possess this belief if a number (probably a LARGE number) of “iffy” propositions about you are true.

E.g. Your believing that it’s cold outside is a matter of its being true that:

- if you are about to go out, you will put on a scarf or coat

- if someone asks you whether it’s cold outside, you will say “yes”

- if you see a friend heading out in a t-shirt, you will (probably) warn them

And so on.

Note: a single disposition to do something can be activated in different ways (a “multi-track” disposition); and the same disposition can be associated with (and be partly constitutive of) different mental states:

- E.g. You might be disposed to turn the heater on if (a) you feel cold, (b) you believe that Sam wants it on, (c) you see snow start to fall outside.

____________________________________________________

So: L-Behaviourism is not completely insane: There’s obviously something quite realistic about a lot of this ... the mental states we have affect how we behave; in different circumstances, the same mental state will lead to different behaviour - the behaviourist analysis captures this aspect of mindedness.

Still, there are:

MAJOR PROBLEMS: lots of them - I won’t mention them all.

(1) The Inner: it seems very implausible, intuitively, that all we mean when we ascribe a mental state to someone is that they are disposed to behave in a certain way. Is feeling pain, or an emotion such as anger or love, just a matter of being disposed to act? Surely there’s an inner dimension: what it feels like from the inside - and this inner dimension is an element of what we mean by such terms as “pain” and “anger”.

Or, consider silent thought (what is it that Rodin’s “Thinker” is doing?). When we silently reflect, isn’t there something going on inside us that is essential to the activity of thinking? Of course there is!

This is a real problem for the A-B, since the theory is meant to be an account of what we ordinarily mean when we ascribe mental states to a person ... so the intuitive objection carries a lot of force.

(2) Super-spartans: the Non-manifestability Problem:

Putting aside points about the inner, there’s a problem with the idea that all mental states must involve dispositions to behave.

Take pain. There is a range of typical pain-behaviour: if someone burns their hand, they shout “Ouch”, or say they’re in pain, or put their hand under running water, etc.

But suppose there were a race of super-spartans: they are brought up from the earliest age not to show their pain in any way. They entirely quell their dispositions to pain-behaviour. Does this mean they’re no longer feeling pain?

Strawson’s example of the Weather Watchers: intelligent, tree-like rooted beings, completely unable to move; they just watch what goes on, and think about it. They have no dispositions to bodily behaviour at all - is this unintelligible?

This example makes it clear that behaviourism is in trouble: irrespective of whether you sign up for Cartesian necessary privacy, the claim that every aspect of mentality must be behaviourally manifestable is implausible: those trees do have minds!

(3) First Person Knowledge

The ways we go about finding out about our own mental states are generally different from the ways we go about finding out about other people’s.

There’s an asymmetry in how we acquire first and third person knowledge of minds.

But on Rylean-type accounts, it’s hard to see why there should be any asymmetry. If my having a mental state is a matter of certain conditional propositions being true, if there’s no inner state I have to be in to be in a certain state of mind, then why should I be able to know about my own mind in ways not available to others?

(4) Vacuous Explanations:

We explain people’s behaviour in terms of their beliefs, desires, preferences, etc. The mental states a person has help explain why they do what they do. This is something no behaviourist would want to deny!

So my belief that it’s cold explains why I put on my scarf before going out.

· if I’m about to leave the house, and I believe that it’s cold, then I’ll put my scarf on; it’s because I have this belief (about the outside cold) that I act as I do in these circumstances

But now look at what this comes to on the Rylean view: my belief that it’s cold just is (in part) my disposition to put my scarf on when I go out. So the “explanation” amounts to: I put my scarf on because I’m disposed to put my scarf on - which explains nothing. (Compare Molière’s doctor, who explains why opium puts people to sleep by citing its “dormitive virtue”.)

If we want an account of the mind which is compatible with the common sense idea that our states of mind are causally relevant to our behaviour and explain our behaviour, then the A-B account looks to be in deep trouble.

This problem is linked to another:

(5) Circularity and Context Dependence:

Historically speaking, this was an influential problem - i.e. it led many to seek a replacement for L-B-ism.

Let’s look again at my belief that it’s cold outside.

On the L-B view, this consists (in part) in my being disposed to put my scarf on when I go out.

On reflection, it’s clear that this sort of specification is incomplete. Dispositions are interconnected: whether a particular triggering event (something impinging on your body) activates a disposition, and causes you to say or do something, depends not just on the trigger, but on your other current dispositions

So it seems clear that particular types of mental state cannot be associated with any particular collection of behavioural dispositions. Mental states, taken in isolation, do not have their own distinctive fund of behavioural consequences.

We’re encountering here an aspect of the HOLISM of the mental.

This isn’t necessarily fatal for the behaviourist, but it does complicate matters considerably!

- It looks as if it will prove impossible to spell out the behavioural consequences of possessing any single mental state - what we have to do is spell out the behavioural consequences that a state S will have when it features as a component of the infinitely wide range of different mental systems.

A deeper problem? It can be argued that the difficulty goes deeper. It’s not just that the behaviourist analyses will be horribly and unmanageably complicated, there’s a difficulty of basic principle at stake.

The behaviourist analysis says holding a belief is being disposed to behave in ways X, Y, Z in circumstances P, Q, R

But when we spell out the circumstances which lead to the behavioural responses X, Y, Z, we have to include lots of other mental states. Call these M1, M2, M3....

· now, just as we needed to introduce other mental states when specifying the dispositions that constitute believing that P, when we turn to specifying the behavioural dispositions that constitute M1, M2 .... we will need to bring further mental states into the picture - we may well need to bring in the original belief.

E.g. I believe it’s cold so I put my scarf on when going out; but I only act like this if I want to remain warm. How are we going to analyse this desire? My wanting to remain warm will dispose me to put my scarf on if I’m going out, but only if I believe that it’s cold outside.

So: we’re going round in a tight circle here. It looks as if we aren’t cashing out talk of mental states in terms of talk about behaviour, because in specifying the behaviour we are led to refer to other mental states, and when we try to cash this talk out, we’re led to refer to further mental states, and so on forever (vicious infinite regress), or more plausibly, until we start mentioning our original mental state.

For all these reasons the programme is obviously in deep trouble!

Non-Reductive Behaviourism

I’ve been talking so far about strong behaviourism, the view that statements about the mental are all about people’s behaviour. Strong behaviourism is a doctrine about meaning and translatability.

There are less-strong versions of behaviourism that are more plausible, and still have adherents.

What we can call weak behaviourism is the claim that there are logical or conceptual connections between behaviour and mentality, coupled with the acceptance that statements about the mental can’t be translated into statements about behaviour.

The weak behaviourist accepts that when we talk about someone’s being in pain or holding a certain belief, what we are saying doesn’t amount to a claim about behaviour.

Weak (or non-reductive) behaviourisms come in different forms; they all share this feature: they claim that there are conceptual connections between mind and behaviour, and hence that any adequate account of mentality must reveal these connections.

The most influential non-reductive behaviourist is Wittgenstein, who had numerous followers.

Many of the objections against Reductive Behaviourism do not carry over to its Non-Reductive Counterpart:

- since the Non-reductivist doesn’t claim that mental terms can be translated into non-mentalistic terms, the fact that such translations are deeply problematic is irrelevant

The crucial point: although the Non-Reductivist denies translatability, the proposed logico-conceptual connections between mind and behaviour are still very strong - strong enough to do real philosophical work (or damage).

In particular, they rule out the Cartesian view that terms such as “feeling sad” or “pain” refer to states with a qualitative or phenomenal “feel” or character that is directly accessible to only the person who has it.

More specifically, the mind-behaviour connections are strong enough to rule out the following:

By virtue of the connections, it just doesn’t make sense to suppose someone could behave just like an average human but have no experiences. Likewise for inverted spectra: the idea that your experience of green might be like my experience of red just makes no sense. Or so it is claimed.

Why?

The Criterial Link: although there are no mental/behavioural translations, it is claimed that certain forms of behaviour can be logically sufficient for the correct ascription of mentalistic vocabulary

- in short, behaving as though you are in pain is enough to make it true that you are in pain

However, while it will generally be true that someone who displays pain behaviour will really be in pain, in any particular case it may be that they are not.

BUT: the defeasibility of particular ascriptions doesn’t affect the main claim: what is incoherent is to suppose that someone who systematically displays pain-behaviour, in circumstances where nothing defeats the ascription, isn’t really in pain.

Given this link between behaviour and mentality, the main plank of Anti-Cartesianism is in place. Minds are essentially public entities, “nothing is hidden”.

To evaluate this position we need to look in more detail at the linguistic arguments mentioned earlier …

THE LINGUISTIC ARGUMENTS

To simplify matters, we can talk of the linguistic argument for (weak) behaviourism - this is what we are dealing with here - but there’s no single argument, or a single formulation of the argument - rather a general line of argument that can be formulated in different ways, e.g. Wittgenstein’s private language argument. I’ll give a couple of examples, focusing on the clear and important case of sensations, e.g. pain.

From a common-sense perspective, “pain” refers to a certain kind of sensation; a feeling with a certain kind of subjective character (we know what it’s like in virtue of having felt it). This felt character is private in this sense: what my pains feel like is known directly only by me - no one else can feel what my pain is like. Now, if the essence of pain is to be a sensation with a certain subjective character, it’s hard to see why there should be any essential link between pain and behaviour - behaviour may be systematically associated with pain-sensations in most cases, but there’s not going to be any logically necessary connection - all that’s essential to pain is the felt character of the sensation.

The linguistic arguments suggest this cannot be right.

Preliminaries: We begin by accepting as a premise that “Pain” is a word in a common public language. What we mean by “pain” is what this word means in our language. When I say that I am in pain, I mean (by “pain”) just what I mean when I say YOU are in pain. “Pain” doesn’t have two different meanings, one for me and one for everyone else. By reflecting on the necessary conditions for a word to belong to a public language, we can quickly find ourselves in behaviourist territory.

Linguistic Argument 1

1. We learn the meaning of the word “pain” from other people.

2. For other people successfully to teach us the meaning of “pain”, there must be publically observable circumstances - available to both learners and teachers - in which it is correct to apply the word “pain”.

3. The relevant observable circumstances involve people’s behaviour; certain kinds of behaviour are (in normal circumstances) sufficient for correctly applying the word “pain” to a person.

4. So there is a non-contingent (or “logical”) connection between learning and knowing the meaning of “pain” and behaviour. (This follows directly from (3) - there are circumstances in which behaviour is sufficient for correctly saying someone is in pain.)

Comment: The argument is framed in terms of the necessary conditions for learning a piece of sensation-vocabulary. As such it is inconclusive: can’t we imagine a being who is born with an innate knowledge of English? Or, isn’t it a logical possibility that a being should come into existence with a full knowledge of English? (Recall the whale in The Hitchhiker’s Guide to the Galaxy.)

To refute these suggestions we can frame a related argument in terms of the necessary conditions for a word to function as part of a public language (see below). This accommodates the logical possibility of beings with a wholly innate language.

Linguistic Argument 2

1. The meaning of an ordinary word is the meaning it has in a shared language.

2. The meaning a word has in a language is what is communicated by the use of it, in typical sentences.

3. “Red” is a perfectly ordinary well-understood English word. If I tell you I’ve seen a red bus, you know what I mean. You mean by “red” what I mean by “red”.

4. Suppose we each use “red” to refer to a specific kind of private visual experience, caused by light stimulating our nervous system.

5. Since everyone’s experience of red is private, we have no way of telling whether our experiences of red match - whether we experience the same visual quality when we look at objects we all call “red”. E.g. perhaps red objects cause you to have the quality green objects cause me to have (and green objects cause you to have the experience red objects cause me to have).

6. So if (4) were true, we could not tell that everyone means the same when they use “red”.

7. But we do know that we mean the same by “red” (by 1-3).

8. So the private experience we each have is irrelevant to what we mean by the word “red”.

9. Consequently, to understand “red” it is sufficient to know how to use it, to know how to apply it correctly to everyday objects.

10. So the meaning of “red” is logically bound up with publically observable ways of using the word.

Wittgenstein succinctly stated this line of argument in his famous “beetle in the box” passage, quoted earlier.

Evaluation:

There’s a significant move made from (a) a claim about how behaviour will play an essential role in an adequate account of how the word “pain” functions in a public language, to (b) a claim about the essential nature of what “pain” refers to.

Is this legitimate? Let us suppose there are epistemological constraints on how a word is learned and used in a public language, constraints which bind a word to publically observable circumstances. Can or do these constraints determine the nature of the things we use our language to talk about?

A lot depends on general philosophy of language here. If you know the meaning of a term, you know what the term refers to, the sort of thing it refers to. What is involved in knowing the meaning of the word “pain”?

Suppose we adopt the position that meaning = use, i.e. we know the meaning of a word when we know how to use it correctly. It seems to follow directly that publically observable circumstances will enter into the meaning of the term, and so its reference: using “pain” correctly is simply a matter of applying it in daily life, in the types of publically observable circumstance that we all recognize as pain-involving.

But there are other accounts of meaning. Suppose we say that to know the meaning of a term is to know the truth conditions of the sentences in which it can feature, i.e. the circumstances in which it is true to say that someone is in pain. Let’s suppose that pain is a certain type of (private) sensation; if so, you could only know the relevant truth conditions if you were acquainted with the sensation that is pain.

However, if the “beetle in the box” type argument is successful, we can’t respond like this. The argument involves the claim that if a word has a meaning in a public language, then it can’t refer to anything private, anything that’s not open to public view - such as a private sensation.

The core of the argument is this: that we know we have a shared understanding of the word “pain”, and that we couldn’t know this if “pain” referred to a private sensation; consequently, knowing the meaning of “pain” can only involve knowing how to apply it in publically observable circumstances - involving people’s observable behaviour and what happens to their bodies.

Response: a dual aspect semantics of words like “pain”. There are two components to the word’s meaning: (a) a referential component (the reference is a certain type of sensation), (b) a behavioural component (knowing how to apply the word in ordinary circumstances). Someone could have a grasp of the second component even if they had no grasp of the first.

We can tell a similar story for “pain”: you can speak English perfectly well, but can’t feel any pain. But you have an instrument which detects the nervous activity, caused by damage to skin or organs, which stimulates the pain centre in ordinary people. This instrument flashes signals into your visual field: so when someone steps on your toe, you read the signal (sharp pressure pain in right toe).

This can seem right - but it leads to a possibility that some find disturbing: it’s at least possible that different English-speakers have different types of sensation that they call “red” or “pain” - and so a different understanding of the “inner” component of what these words mean.

So it’s possible (even if not likely) that the inner meaning-component is private: it’s possible that no one else can understand what I use “pain” to refer to, because no one else has the type of sensation I have when I’m in pain.

Wittgenstein’s private language argument can be brought in here: if this argument is successful, the very idea of private meaning is incoherent - and if so, the idea that words like “pain” have a dual component semantics is also incoherent.

But before taking a brief look at this famous argument, it’s worth making a different point.

A Way for Private Experience to be Relevant to Communication in the Absence of Intersubjective Comparisons

Recall the “Beetle in the Box” argument. Suppose our spectrums are inverted: the sensation you associate with “red” is the sensation I associate with “green”, and vice-versa. Since we each call the same objects “red” and “green”, this difference is one that never shows up - indeed, one that never could show up. Surely this shows that the types of experience we associate with the words “red” and “green” are irrelevant to these words’ meaning - we surely agree on the meaning of these words - we can communicate with one another perfectly well!

- It looks as though “private experience” drops out as irrelevant to the meaning of words - if communication is possible in the absence of resemblance between experiences, what point is there in supposing there is resemblance? It might seem that all appeal to “experience” in one’s account of language about the mental is idle.

Well, maybe not. Let’s agree that mental images and other conscious contents are private, in the sense that intersubjective comparisons are in principle impossible. I can’t see into your consciousness, and you can’t see into mine. Does it follow that the contents of consciousness are irrelevant to what we mean by our words? Not in the least.

(From Lowe on Locke) Two boys, Alf and Ben, want to watch a football match, but they can’t afford tickets. They discover that if they stand next to the wall, and one stands on the other’s shoulders, he can see over the wall and describe the match to his friend. Alf goes first. He has various private perceptual experiences and puts these into words. Ben hears these words, and turns them into corresponding mental images (imagined perceptions) as he envisions the scene on the pitch. This seems quite plausible.

Question: what sort of experiences ought Ben to have if Alf is going to communicate his perception of what he’s seeing?

One answer: Ben’s imaginings should replicate Alf’s experience. This allows Ben to visualize the match as Alf sees it.

BUT: if this is the case, then since we can never verify the match between different people’s ideas, it seems we could never check that Ben understands what Alf is saying - we could never be justified in thinking successful communication has taken place.

A better answer: for successful communication, the imaginings Alf’s words provoke in Ben should resemble NOT Alf’s own first-hand experiences, but rather the first-hand experiences Ben himself would have had, if he had been in Alf’s place.

It’s easy to verify that this requirement is satisfied: to check on whether he’s understanding Alf’s description, Ben need simply take a look at the match himself: if it is as described by Alf (i.e. if what he sees resembles what he imagined when hearing Alf’s words), then he can reasonably conclude that he’s understood Alf.

SO: successful communication can involve words being associated with private sensations and ideas in people’s mind without any assumptions being made about the intersubjective similarities between the contents in different people’s minds. The fact that such similarities can’t be detected or verified doesn’t mean that the private contents are idle - they’re clearly not! Ben appreciates Alf’s description of the match precisely because he understands it - it tells him what’s going on, it allows him to imagine accurately what is going on.

However: in this sort of case, both Ben and Alf are using language to think and talk about their own experiences. They each know what they mean by “red” and “pain” even if they don’t know what the other person means (i.e. the character of the experience the other has when they use the same word). Is this possible? Not if Wittgenstein’s anti-private language argument is valid.

So let’s move on to consider:

The Anti-Private Language Argument

This is a general argument which, if valid, would rule out the possibility that we’ve just been considering: both Alf and Ben know what they mean by the words they use to describe their private experience, even though there’s no possibility of comparing the sensations they each have - no intersubjective comparisons. Wittgenstein argued that this sort of situation is impossible: a speaker can’t give meaning to a term by defining it by reference to a private sensation.

The anti-private language argument, as it’s known, comes in many forms (there are many interpretations of what Wittgenstein meant) - here’s one.

THE MAIN ARGUMENT: “seems right is right”

We begin by assuming that words that refer to private inner objects would belong to a logically private language. The Cartesian (or Lockean) provides us with an example of a private language.

So, suppose I want to name one of my sensations (red, say). I have a certain visual experience, and I call it “S”. I intend “S” to apply to that sensation-type, to name the sensation. And I intend to continue to use “S” to refer to that type of sensation.

In so doing, I am using a private object in the same way we use public objects in ostensive definition: I am using my sensation as a sample or exemplar - in the same way as you might use a piece of red cloth to teach someone the meaning of the word “red” in a public language.

This seems a natural enough thing to do: an extension inwards of a public practice. But Wittgenstein argues that it is an illegitimate and incoherent extension: a genuine ostensive definition establishes the meaning of a sign by establishing a rule for the correct use of the sign; concentrating one’s attention on a sensation and thinking or saying “S” cannot do this ... or so he argues. Here’s one way of presenting the argument:

Anti-Private Language Argument:

(1) Suppose I decide to keep a diary of the occurrence of S-type sensations. Every time one occurs I note it down in my diary. Now, suppose I have a sensation that seems to me to be of an S-type, and I note this down.

(2) What justification do I have for this? Well, the sensation struck me as similar to the original S-type sensation, my original sample. That is, the sensation is similar to how I remember the original being.

(3) But what justification do you have for thinking your memory is accurate? Perhaps this: in addition to your immediate impression that the current sensation resembles the original sample, you have a mental table or dictionary: using your imagination, you can call up this collection of sample experiences, each with their name attached - i.e. a table similar to the colour-charts we use when buying paint. You consult your mental colour-chart, and you see that the colour named “S” is similar to the colour you are currently seeing, which you are inclined to call “S”.

(4) But wait: the colour chart only shows you how you remember the original sample to be - it does not reproduce the original sample. So bringing in the imaginary chart cannot be a way of checking whether your memory is correct or not - the imaginary chart will only be reliable if your memory is reliable. You are checking whether or not you accurately remember what “S” refers to against your memory of what “S” refers to. This is like buying several copies of the same morning’s newspaper to assure yourself that what it said was true.

(5) So there is no independent check you can appeal to in order to justify your thinking that your current sensation resembles the original “S” sample. Since we’re dealing here with a private inner object, your original “S” sample’s only connection with the present is through your memory.

(6) This means your initial definition of “S” failed to establish a meaning for the sign “S”: for it failed to lay down a rule for the correct use of “S”. You didn’t succeed in laying down a rule, because there is no possibility of justifying a claim that you have used “S” correctly or incorrectly on any future occasion.

(7) We might want to say that so far as you are concerned, what seems to you to be an S-type sensation is an S-type sensation (since there’s no criterion for overturning your decision). But this is wrong: if there’s no distinction between seeming right and being right, there’s no such thing as being right.

(8) Going back to the original attempt at ostensive definition, we see that no rule for the correct use of “S” was established, and so the concept S never came into being. Consequently, the issue of whether on future occasions we will be able to remember how to use S correctly is not an issue at all. The question of whether I use S correctly or incorrectly on future occasions presupposes that S is a genuine concept. But it isn’t, and never was. So the argument is based not on scepticism about the reliability of memory, but on what it takes to establish genuine rules and concepts. The essential point is that there is no independent check on your use of “S”, not that you have no good reason to think your memory is generally reliable (at least in connection with experiential qualities).

(9) The same argument generalizes to all experience-words that we might suppose get their meaning by the process of private ostensive definition, and so come to refer to certain particular types of private inner objects. No genuine concepts can be introduced in this way.

(10) So our original idea of how a private language could function is incoherent: there could be no such language.

EVALUATING THE MAIN ANTI-PL ARGUMENT

1. Pernicious verificationism? The argument rests on the claim that the private linguist has no way of telling whether he’s using a word correctly or incorrectly (when referring to private objects). This is certainly reminiscent of verificationism: if there’s no way of telling whether X is the same as Y, it doesn’t make sense to say that X is the same as Y.

This in itself doesn’t mean the argument is unsound: the idea that we do in fact have some way of telling whether our language is meaningful is reasonable, as is the idea that if we didn’t have any way of telling this, we wouldn’t be using a meaningful language - it cannot be completely hidden from us that we are using words meaningfully - so there must be evidence of some sort that we are using words meaningfully - hence verificationism of some kind is reasonable in this context.

- this is because (our) language and meaning are human activities, and so phenomena that aren’t entirely independent of us - cf. the claim that an indestructible and unopenable box contains a cat - this can never be verified or falsified (we can suppose) but the claim is still meaningful

The problem is: what degree of verificationism is appropriate? Weak: we need some evidence that language is meaningful, or Strong: we need watertight evidence that language is meaningful.

This is a real issue here, because its seeming to us that something is the case usually provides some evidence that it is the case.

So, when the private linguist uses words to refer to private objects, he relies on logically private memory (memory for which there is no direct public test) - surely this provides some evidence in favour of the claim that he is using words correctly.

Why isn’t this enough evidence to give sense to the claim that there’s a difference between his being right and its merely seeming to him that he is? What argument could be brought to bear here?

Now, the problem concerns the correct use of words, and the claim is that if the only criterion for the use of a word is how it seems to the subject that it should be used, there’s no difference between its being used correctly and its being used incorrectly. Is there a general principle behind this claim? If so it would be of this form:

P If the only reason for holding a belief is that the belief seems true to the subject, then there’s no difference between the belief’s being true and its being false - that is, there is no fact of the matter to which the belief is answerable.

If P is true, there would be no fact of the matter about the nature of pain, considered as a private experience. If the word “pain” is used to refer to experiences with a certain phenomenal character, then the only reason for thinking a particular use of “pain” is in accord with past use and intentions is that “it seems right to me”. Given P, this means there is no fact of the matter about the phenomenal character of pain - or any other experience.

But, P is NOT a principle that most of us are inclined to accept about the physical world. There may be no possible evidence for the claim that prior to the big bang there was a previous universe (say), but there’s certainly a fact of the matter to which the belief that there was such a universe is answerable.

So why apply the principle in the private realm? Suppose you are a Cartesian: you believe sensations are logically private objects, with a determinate nature - they’re just as real as a piece of furniture. There are, you believe, solid facts about the nature of experiential items. So if you have some uncheckable beliefs about the similarity of two sensations (occurring at different times) why should you accept that these beliefs are not backed up by any hard facts? In other words, you will not be disposed to accept P as applied to the private realm.

Now, if successful, the private language argument would render your belief in the reality of private objects untenable, and would therefore render you inclined to accept P.

BUT: if the PL argument itself presupposes P, you have no reason to accept it. The argument begs the question.

AND: if the anti-PL argument fails, we can return to the dual-semantics theory: to fully understand the meaning of “pain” you need to know (a) what pain is like, (b) how to use the word in ordinary life. We know that (b) is shared meaning; we can’t be sure (though we believe) that (a) is shared too: that other people’s pain sensations resemble our own.

2. A refutation via solipsism: the main argument relies on the thesis that it’s impossible to have criteria for distinguishing correct from incorrect applications of a word in a wholly private sphere. Is this right?

Suppose I’ve named a particular sensation as being of type “S”. On some future occasion, a sensation occurs that I’m inclined to call “S” - it is similar to how I remember the original. Is it impossible for me to be justified in thinking I must be mistaken here? Could I have reasonable grounds for overturning my initial inclination?

Suppose I’m a solipsist: I believe that I am the only thing which exists ...

Must I believe that I am infallible too? Not at all: suppose I amuse myself by doing mental arithmetic. I do a long division in my head: 1067 divided by 23. I get the answer X. On some other occasion I get Y ... to settle the issue I do it again, much more carefully, and discover that Y is right. I made a mistake - I may be the only subject of consciousness, but I am fallible.
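For what it’s worth, the arithmetic in the example is easy to verify (the numbers are those given above; the result below is simply the actual outcome of the division):

```latex
1067 = 23 \times 46 + 9 \quad\Longrightarrow\quad 1067 \div 23 = 46 \ \text{remainder}\ 9
```

So whichever of the solipsist’s two answers matches this is the one his careful recomputation will vindicate - the point stands that even a lone subject can discover his own mistakes.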

Now, might I make other types of mistake? Since there’s no limit on the complexity of a solipsist’s experience, let’s suppose my experience is just as it in fact is - I seem to be surrounded by other people, in a public world - although I believe (correctly) that I am the only real thing here - only my body is hooked up to a subject of experience - me.

Suppose I call a rose sepia - I remember the name “sepia” being attached to that particular colour. But other people (sic) tell me I’m wrong: sepia is a different colour - they tell me I’m not remembering correctly.

Should I believe them? Why not? We’re supposing my world is just like this world: I’ve often been corrected, I’ve often been forced by others to recognize my mistakes ... I know well enough that my memory is fallible. Why shouldn’t I allow myself to be corrected - doing so is completely reasonable!

BUT: these people don’t really exist, they’re just components of my experience .... how can they correct me?

True, they are components of my total experience, and I believe them to be nothing more ... but I also believe that my initial judgments about all manner of things are sometimes wrong, and that “other people” have often been proved right ... at least on matters concerning the phenomenal world of which they are a part.

The crucial point: the intelligibility of solipsism would be undermined if the PL argument were sound; but to suppose in advance that solipsism is unintelligible would be to beg the question ...


Behaviourism: some arguments from language (i)

The “beetle in a box” passage, Philosophical Investigations 293:

If I say of myself that it is only from my own case that I know what the word “pain” means - must I not say the same of other people too? And how can I generalize the one case so irresponsibly?

Now someone tells me that he knows what pain is only from his own case! - Suppose everyone had a box with something in it: we call it a “beetle”. No one can look into anyone else’s box, and everyone says he knows what a beetle is only by looking at his beetle. - Here it would be quite possible for everyone to have something different in his box. One might even imagine such a thing constantly changing. - But suppose the word “beetle” had a use in these people’s language? - If so it would not be used as the name of a thing. The thing in the box has no place in the language-game at all; not even as a something: for the box might even be empty. - No, one can “divide through” by the thing in the box; it cancels out, whatever it is.

That is to say: if we construe the grammar of the expression of sensation on the model of “object and designation” the object drops out of consideration as irrelevant.

Linguistic Argument 1

1. We learn the meaning of the word “pain” from other people.

2. For other people successfully to teach us the meaning of “pain”, there must be publicly observable circumstances - available to both learners and teachers - in which it is correct to apply the word “pain”.

3. The relevant observable circumstances involve people’s behaviour; certain kinds of behaviour are (in normal circumstances) sufficient for correctly applying the word “pain” to a person.

4. So there is a non-contingent (or “logical”) connection between learning and knowing the meaning of “pain” and behaviour. (This follows directly from (3) - there are circumstances in which behaviour is sufficient for correctly saying someone is in pain.)

Linguistic Argument 2

1. The meaning of an ordinary word is the meaning it has in a shared language.

2. The meaning a word has in a language is what is communicated by the use of it, in typical sentences.

3. “Red” is a perfectly ordinary well-understood English word. If I tell you I’ve seen a red bus, you know what I mean. You mean by “red” what I mean by “red”.

4. Suppose we each use “red” to refer to a specific kind of private visual experience, caused by light stimulating our nervous system.

5. Since everyone’s experience of red is private, we have no way of telling whether our experiences of red match - whether we experience the same visual quality when we look at objects we all call “red”. E.g. perhaps red objects cause you to have the quality green objects cause me to have (and green objects cause you to have the experience red objects cause me to have).

6. So if (4) were true, we could not tell that everyone means the same when they use “red”.

7. But we do know that we mean the same by “red” (by 1-3).

8. So the private experience we each have is irrelevant to what we mean by the word “red”.

9. Consequently, to understand “red” it is sufficient to know how to use it, to know how to apply it correctly to everyday objects.

10. So the meaning of “red” is logically bound up with publicly observable ways of using the word.
