
\title {Philosophical Psychology \\ 03: Do Humans Perceive Others’ Feelings?}

\maketitle

# 03: Do Humans Perceive Others’ Feelings?

\def \ititle {03: Do Humans Perceive Others’ Feelings?}
\begin{center}
{\Large
\textbf{\ititle}
}

\iemail %
\end{center}
Subtitle: Prospects for the Hypothesis That We Perceive Mental States, and How Some Limits on What Can Be Perceived Are Overcome through Simple Forms of Social Interaction.
This talk is about a simple question. Do humans ever perceptually experience any of another’s mental states? What evidence would help us to answer this question? At the very end, I will switch from perception to interaction, and consider what we might gain by shifting from merely observing expressions of emotion to jointly expressing an emotion, as in sharing a smile.

How do you know about it? (‘it’ = this pen; ‘it’ = this joy)

perceive indicator, infer its presence

- vs -

perceive it

[ but perceiving is inferring ]

Here are three ways of knowing: inference, testimony and perception.
Where ‘it’ is an ordinary physical object like my favourite pen, each of these three identifies a process by which knowledge could be acquired.
(This is not to say that talk about reasoning invariably picks out a process (cf Alvarez), just that there is a process.)
There seems to be a clear contrast between the first and last ways of coming to know about it. (Philosophers debate whether the first and second are genuinely different routes to knowledge.)
Even if, as do I, you think perception involves inference-like processes, there is still a contrast between inferring and perceiving.
For instance, suppose you know that I am all but inseparable from my favourite pen. Then when you see me arrive you can infer that my pen is here too. Contrast this with simply seeing my pen on the table. In one case you see something other than the pen, namely me, and infer that the pen is here; in the other case you see the pen itself.
This is the contrast I need in what follows.
Are mental states the kinds of thing we can come to know about through perceiving them? I will focus on emotions like joy.

Aviezer et al (2012, figure 2A3)

[not relevant yet:] ‘(3) a losing face on a winning body’
Emotions are sometimes expressed bodily and vocally. Here is someone who has just won or lost a tennis match, so is feeling joy or anger. Maybe you can tell which?
Here is a natural thought: Bodily and vocal expressions of emotion enable us to perceptually experience the expressed emotions.
This natural thought fits with ordinary talk about mental states. Imagine yourself at a tennis match. Reporting the event afterwards, you might say that you saw his ecstasy at winning. It might seem that his ecstasy is as plainly visible as the hair on his head.

verbal reports and ratings? No!

(Scholl & Tremoulet 2000; Schlottman 2006)

There are scientists and philosophers who have placed a lot of weight on verbal reports. They are satisfied that if people use mental state terms to describe what they perceive, then they perceive those mental states. But I think this can’t be right.
Go back to the tennis match. Reporting the event afterwards you might say not only that you saw his ecstasy but also that you saw him win. I take it you can't literally perceptually experience winning or losing in the sense that, arguably, you can perceptually experience movement or changes in colour. After all, winning or losing is a matter of rules and conventions.
So we can’t straightforwardly take what people say as a guide to what they perceive.
Here you might object that we can perceptually experience things like winning or losing, so there is no reason to question verbal reports.
But note that taking this line amounts to rejecting the claim I started with.

contrast:

perceive indicator, infer its presence

- vs -

perceive it

I claimed that there is a contrast between seeing an indicator and inferring the presence of an object.
If there is a contrast here, what we colloquially call seeing someone win must be a case of perceiving indicators and inferring winning. So to say that we can straightforwardly infer that people perceive emotions from their verbal reports is to reject the existence of the very contrast that allows us to make sense of the question.

‘We sometimes see aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

McNeill (2012, p. 573)

\citet[p.\ 573]{mcneill:2012_embodiment}: ‘We sometimes {see} aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

challenge

Evidence?

## Categorical Perception & Emotion

\section{Categorical Perception \& Emotion}


2.5B

7.5BG

2.5BG

Categorical perception is perhaps most easily understood from the case of colour.
[For later: colour is doubly relevant because there's been serious debate about whether it's possible to perceive categorical colour properties despite copious verbal reports. This case shows how evidence can bear on questions about phenomenology, although I won’t be talking about that here (probably).]

fix initial system of categories

measure discriminatory responses

observe between- vs within-category differences

exclude non-cognitive explanations for the differences

Greater discrimination between than within categories indicates that the initial system of categories may be having some influence on whatever underpins the responses.
Do any perceptual processes in humans discriminate stimuli according to the expressions of emotion they involve? That is, do humans have \emph{categorical perception} of expressions of emotion?
Assume that we as theorists have a system which allows us to categorise static pictures of faces and other stimuli according to which emotion we think they are expressing: some faces are happy, others fearful, and so on.
From five months of age, or possibly much earlier \citep{field:1982_discrimination}, through to adulthood, humans are better at distinguishing faces when they differ with respect to these categories than when they do not \citep{Etcoff:1992zd,Gelder:1997bf,Bornstein:2003vq,Kotsoni:2001ph,cheal:2011_categorical,hoonhorst:2011_categoricala}.
To illustrate, consider these pictures of faces. The idea is this.
With respect to all features apart from the expression of emotion, each face picture differs from its neighbours no more than any other picture differs from its neighbours.
Most neighbouring pairs of face pictures would be relatively hard to distinguish,
especially if they were not presented side-by-side.
But most people find one pair of neighbouring face pictures relatively easy to distinguish---you may notice this yourself.
What underlies these patterns of discrimination? Several possibilities that would render them uninteresting for our purposes can be ruled out.
The patterns of discrimination do not appear to be an artefact of linguistic labels (\citealp{sauter:2011_categorical}; see also \citealp{laukka:2005_categorical}, p.\ 291),%
% \footnote{ Puzzlingly, experiments by \citet{fugate:2010_reading} using photos of chimpanzee faces with human subjects are sometimes cited as evidence that categorical perception of expressions of emotion depends on, or can be modulated by, the use of verbal labels for stimuli (e.g.\ \citealp[p.\ 288]{barrett:2011_context}; \citealp[p.\ 315]{gendron:2012_emotion}). Caution is needed in interpreting these findings given that there may be differences in the ways humans process human and chimpanzee faces. In fact, what \citeauthor{fugate:2010_reading}'s findings show may be simply that human viewers do not show [categorical perception] for the chimpanzee facial configurations used in their study' \citep[p.\ 1482]{sauter:2011_categorical}. } %
nor of the particular choices subjects in these experiments are presented with \citep{bimler:2001_categorical,fujimura:2011_categorical}. Nor are the patterns of discrimination due to narrowly visual features of the stimuli used \citep{sato:2009_detection}.
We can be confident, then, that the patterns of discrimination probably reflect one or more processes which categorise stimuli by expression of emotion.
Examples of stimuli (around 200 faces) used by Batty and Taylor in their ERP study.

Batty & Taylor, 2003 figure 1

Don't have details, this is just to stress it's early (around 200ms) and plausibly automatic.
‘(at a mean latency of 140 ms) the N170 showed both amplitude and latency modulation differentially with emotional expressions. ... [BUT] Whether this is due presently to low-level stimulus factors or to the use of emotional faces is still to be determined’ \citep[p.~616]{batty:2003_early}.
‘As the task did not require the subjects to focus on particular emotional expressions ... these data suggest an early automatic encoding of emotional facial expression’ \citep[p.~616]{batty:2003_early}.

Batty & Taylor, 2003 figure 2

At least some of the processes underpinning categorical perception of facial expressions of emotion are rapid (occurring within roughly 200 milliseconds of a stimulus' appearance), pre-attentive \citep{vuilleumier:2001_emotional} and automatic in the sense that whether they occur is to a significant degree independent of subjects' tasks and motivations \citep{batty:2003_early}.%

Perceptual?

At least for fear & happiness

- ERP (Campanella et al 2002)

- visual search : behavioural (Williams et al 2005)

But are any of the processes that categorise stimuli by expression of emotion perceptual? That is, are the observed abilities to discriminate expressions of emotion ever based on perceptual processes? Answering this question is complicated by the fact that many parts of the brain are involved \citep{adolphs:2002_recognizing,vuilleumier:2007_distributed}. There is evidence that both the amygdala \citep{harris:2012_morphing,harris:2014_dynamic} and also some cortical structures \citep{batty:2003_early} respond categorically to expressions of emotion; and that intervening in the operations of the somatosensory cortex can impair categorisation (\citealp{pitcher:2008_transcranial}; see also \citealp{banissy:2011_superior}). To my knowledge, so far it is only for happy and fearful stimuli that we have direct evidence from both neurophysiological \citep{Campanella:2002aa} and behavioural measures \citep{williams:2005_looka} of categorisation occurring in perceptual processing. So while the evidence is not conclusive, there is converging evidence that some perceptual processes categorise stimuli, including faces, by expression of emotion. Humans may have categorical perception not only for speech, colour, orientation and other properties but also for expressions of emotion.
Earlier I asked, what evidence could bear on the perceptual hypothesis? Now we have a partial answer.

‘We sometimes see aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

McNeill (2012, p. 573)

challenge

Evidence? Categorical Perception!

Part of the evidence relevant to the perceptual hypothesis is evidence that humans can categorically perceive expressions of emotion.
But what does the evidence from studies of categorical perception tell us about the truth or falsity of the perceptual hypothesis?
Just here it is natural to take a sceptical line and claim ...

1. The objects of categorical perception, ‘expressions of emotion’, are facial expressions.

so ...

2. The things we perceive in virtue of categorical perception are not emotions.

Just here we have to be extremely careful. We have to ask: what are the objects of categorical perception? That is, what are the perceptual processes supposed to categorise?

## The Objects of Categorical Perception

\section{The Objects of Categorical Perception}


What are the perceptual processes supposed to categorise?

standard view: fixed expressions linked to emotional categories

Aviezer et al (2012, figure 2A3)

But are the things categorised by perceptual processes facial configurations? This view faces a problem. There is evidence that the same facial configuration can express intense joy or intense anguish depending on the posture of the body it is attached to, and, relatedly, that humans cannot accurately determine emotions from spontaneously occurring (i.e.\ as opposed to acted out) facial configurations \citep{motley:1988_facial,aviezer:2008_angry,aviezer:2012_body}. These and other findings, while not decisive, cast doubt on the view that categories of emotion are associated with categories of facial configurations \citep{hassin:2013_inherently}.
The same facial configuration can express intense joy or intense anguish depending on the posture of the body it is attached to; and humans cannot accurately determine emotions from spontaneously occurring (as opposed to acted out) facial configurations \citep{motley:1988_facial,aviezer:2008_angry,aviezer:2012_body}.

Aviezer et al's puzzle:

Given that facial configurations are not diagnostic of emotion, why are they categorised by perceptual processes?

This evidence makes the findings we have reviewed on categorical perception puzzling. Given that the facial configurations are not diagnostic of emotion, why are they categorised by perceptual processes?\footnote{ Compare \citet[p.\ 1228]{aviezer:2012_body}: ‘although the faces are inherently ambiguous, viewers experience illusory affect and erroneously report perceiving diagnostic affective valence in the face.’ } This question appears unanswerable as long as we retain the assumption---for which, after all, no argument was given---that the things categorical perception is supposed to categorise are facial configurations.

... maybe they aren’t.

But if we reject this assumption, what is the alternative?

speech perception

## Speech Perception

\section{Speech Perception}


articulatory gesture

In speaking we produce an overlapping sequence of articulatory gestures, which are motor actions involving coordinated movements of the lips, tongue, velum and larynx. These gestures are the units in terms of which we plan utterances (Browman and Goldstein 1992; Goldstein, Fowler, et al. 2003).
These are the actions I want to focus on first in thinking about what we experience when we encounter others’ actions.

Browman & Goldstein 1986, figure 1

‘Trajectory of lower lip in [abe] as measured by tracking infra-red LED placed on subject's lower lip’
‘Not every utterance of word transcribed with /b/ will display exactly the trajectory of Fig. 1.: the trajectory will vary with vowel context, syllable position, stress, speaking rate and speaker. We must, therefore, ultimately characterise /b/ as a family of patterns of lip movement’ \citep[p.~224]{browman:1986_towards}

Speech and auditory perception involve distinct processes

A schematic spectrogram for a synthetic sound which is normally perceived as [ra]. The horizontal axis represents time, the vertical frequency.
A schematic spectrogram for [la].
In the middle you see the \emph{base}, i.e.\ the part of the spectrogram common to [ra] and [la]. This is played to one ear.
Below you see the transitions, i.e.\ the parts of the spectrogram that differ between [ra] and [la]. When played in isolation these sound like a chirp. When played at the same time as the base but in the other ear, subjects hear a chirp and a [ra] or a [la] depending on which transition is played.
How do we know that the same stimuli may be processed by different perceptual systems concurrently---for instance, how do we know that speech and auditory processing are distinct? A phenomenon called “duplex perception” demonstrates their distinctness. Artificial speech-like stimuli for two syllables, [ra] and [la], are generated. The acoustic signal for each syllable is artificially broken up into two parts, the “base” and the “transition” (see Fig. *** below). The syllables have the same “base” but differ in the “transition”. When the “transition” is played alone it sounds like a chirp and quite unlike anything we normally hear in speech. Duplex perception occurs when the base and transition are played together but in separate ears. In this case, subjects hear both the chirp that they hear when the transition is played in isolation, and the syllable [la] or [ra]. Which syllable they hear depends on which transition is played, so speech processing must have combined the base and transition. By contrast, auditory processing must have failed to combine them, because otherwise the chirp would not have been heard. In this case, then, the perception resulting from the duplex presentation involves simultaneous auditory and speech recognition processes. This shows that auditory and speech processing are distinct perceptual processes.
The duplex case is unusual. We can’t normally hear the chirps we make in speaking because speech processing inhibits this level of auditory processing. But plainly speech is subject to some auditory processing for we can hear extra-linguistic qualities of speech; some of these provide cues to emotional state, gender and class. Perception of these extra-linguistic qualities enables us to distinguish stimuli within a category. As already mentioned, this is a problem for Repp’s operational definition. Our ability to discriminate stimuli is the product of both categorical speech processing and non-categorical auditory processing. If we want to get at the essence of categorical perception it seems there is no alternative but to appeal to particular perceptual processes rather than behaviours.
Source: \citep{Liberman:1981xk}
Here are 12 speech-like sounds. Acoustically each differs from its neighbours no more than any other does.
They would be labelled differently
And within a label they are relatively hard to discriminate, whereas ...
Discriminating acoustically no less similar stimuli that are given different labels is easier (faster and more accurate).
This is categorical perception: speed and accuracy maps onto labelling ...
Categorical perception of mating calls and perhaps other acoustic signals is widespread in non-human animals including monkeys, mice and chinchillas (Ehret 1987; Kuhl 1987), and is even found in cognitively unsophisticated animals such as frogs (Baugh, Akre and Ryan 2008) and crickets (Wyttenbach, May and Hoy 1996).

What are the objects of categorical perception?

the location of the category boundaries changes depending on contextual factors such as the speaker’s dialect, or the rate at which the speaker talks; both factors dramatically affect which sounds are produced.
This means that in two different contexts, different stimuli may result in the same perceptions, and the same stimulus may result in different perceptions.
co-articulation: the fact that articulatory gestures overlap (this is what makes it possible to talk fast).

What are the objects of categorical perception?

1. Speech perception is categorical

2. The category boundaries correspond (imperfectly but robustly) to differences in articulatory gestures

3. The best explanation of (2) involves the hypothesis that the objects of speech perception are articulatory gestures

\emph{Articulatory Gesture:} In speaking we produce an overlapping sequence of articulatory gestures, which are motor actions involving coordinated movements of the lips, tongue, velum and larynx. These gestures are the units in terms of which we plan utterances (Browman and Goldstein 1992; Goldstein, Fowler, et al. 2003).

## Emotion vs Speech Categorical Perception

\section{Emotion vs Speech Categorical Perception}


articulation of phoneme

expression of emotion

Compare expressing an emotion by, say, smiling or frowning, with articulating a phoneme.
Both have a communicative function (on expressions of emotion, see for example \citealp{blair:2003_facial,sato:2007_spontaneous}) and both are categorically perceived, but the phonetic case has been more extensively investigated.

- communicative function

- communicative function

Variations due to coarticulation, rate of speech, dialect and many other factors mean that isolated acoustic signals are not generally diagnostic of phonemes: in different contexts, the same acoustic signal might be a consequence of the articulation of any of several phonemes.

- isolated acoustic signals not diagnostic

So here there is a parallel between speech and emotion. Much as isolated facial expressions are not diagnostic of emotions (as we saw a moment ago), isolated acoustic signals are plausibly not diagnostic of phonetic articulations.

- isolated facial expressions not diagnostic

Why then are isolated acoustic signals---which rarely even occur outside the lab---categorised by perceptual or motor processes at all? To answer this question we first need a rough idea of what it is to articulate a phoneme. Articulating a phoneme involves making coordinated movements of the lips, tongue, velum and larynx. How these should move depends in complex ways on numerous factors including phonetic context \citep{Browman:1992da,Goldstein:2003bn}. In preparing for such movements, it is plausible that the articulation of a particular phoneme is an outcome represented motorically, where this motor representation coordinates the movements and normally does so in such a way as to increase the probability that the outcome represented will occur.

- complex coordinated, goal-directed movements

This implies that the articulation of a particular phoneme, although probably not an intentional action, is a goal-directed action whose goal is the articulation of that phoneme.
(On the link between motor representation and goal-directed action, see \citealp{butterfill:2012_intention}.)
Now some hold that the things categorised in categorical perception of speech are not sounds or movements (say) but rather these outcomes---the very outcomes in terms of which speech actions are represented motorically (\citealp{Liberman:2000gr}; see also \citealp{Browman:1992da}).\footnote{ Note that this claim does not entail commitment to other components of the motor theory of speech perception. } On this view, categorical perception of speech is a process which takes as input the bodily and acoustic effects of speech actions and attempts to identify which outcomes the actions are directed to bringing about, that is, which phonemes the speaker is attempting to articulate. That isolated acoustic signals can engage this process and thereby trigger categorical perception is merely a side-effect, albeit one with useful methodological consequences.

- complex coordinated, goal-directed movements

We can think of expressions of emotion as goal-directed in the same sense that articulations of phonemes are. They are actions whose goal is the expression of a particular emotional episode.
This may initially strike you as implausible given that such expressions of emotion can be spontaneous, unintentional and involuntary. But note that expressing an emotion by, say, smiling or frowning, whether intentionally or not, involves making coordinated movements of multiple muscles where exactly what should move and how can depend in complex ways on contextual factors. That such an expression of emotion is a goal-directed action follows just from its involving motor expertise and being coordinated around an outcome (the goal) in virtue of that outcome being represented motorically.% \footnote{ To increase the plausibility of the conjecture under consideration, we should allow that some categorically perceived expressions of emotion are not goal-directed actions but events grounded by two or more goal-directed actions. For ease of exposition I shall ignore this complication. }
Recognising that some expressions of emotion are goal-directed actions in this sense makes it possible to explain what distinguishes a genuine expression of emotion of this sort, a smile say, from something unexpressive like the exhalation of wind which might in principle resemble the smile kinematically. Like any goal-directed actions, genuine expressions of emotion of this sort are distinguished from their kinematically similar doppelgänger in being directed to outcomes by virtue of the coordinating role of motor representations and processes.
The wild conjecture under consideration is that the things categorical perception is supposed to categorise, the ‘expressions of emotion’, are actions of a certain type, and these are categorised by which outcomes they are directed to.

What are the perceptual processes supposed to categorise?

Actions whose goals are to express certain emotions.

- The perceptual processes categorise events (not e.g. facial configurations).

Let me explain the increasingly bold commitments involved in accepting this conjecture.
First, the things categorised in categorical perception of expressions of emotion are events rather than configurations or anything static. (Note that this is consistent with the fact that static stimuli can trigger categorical perception; after all, static stimuli can also trigger motor representations of things like grasping \citep{borghi:2007_are}.)

- These events are not mere physiological reactions.

Second, these events are not mere physiological reactions (as we might intuitively take blushing to be) but things like frowning and smiling, whose performance involves motor expertise.\footnote{ To emphasise, one consequence of this is that not everything which might intuitively be labelled as an expression of emotion is relevant to understanding what is categorised by perceptual processes. For example, in the right context a blush may signal emotion without requiring motor expertise. }

- These events are perceptually categorised by the outcomes to which they are directed.

Third, these events are perceptually categorised by the outcomes to which they are directed. That is, outcomes represented motorically in performing these actions are things by which these events are categorised in categorical perception.
Should we accept the wild conjecture? It goes well beyond the available evidence and currently lacks any reputable endorsement. In fact, we lack direct evidence for even the first of the increasingly bold commitments just mentioned (namely, the claim that the things categorically perceived are events). A further problem is that we know relatively little about the actions which, according to the wild conjecture, are the things categorical perception is supposed to categorise (\citealp[p.\ 47]{scherer:2013_understanding}; see also \citealp{scherer:2007_are} and \citealp{fernandez-dols:2013_advances}). However, the wild conjecture is less wild than the only published responses to the problems that motivate it.\footnote{ See \citet[p.\ 15]{motley:1988_facial}: ‘particular emotions simply cannot be identified from psychophysiological responses’; and \citet[p.\ 289]{barrett:2011_context}: ‘scientists have created an artifact’. } And, as I shall now explain, several considerations make the wild conjecture seem at least worth testing.
Consider again the procedure used in testing for categorical perception. Each experiment begins with a system for categorising the stimuli (expressions). This initial system is either specified by the experimenters or, in some cases, by having the participants first divide stimuli into categories using verbal labels or occasionally using non-verbal decisions. The experiment then seeks to measure whether this initial system of categories predicts patterns in discrimination. But what determines which category each stimulus is assigned to in the initial system of categories? You might guess that it is a matter of how likely people think it is that each stimulus---a particular facial configuration, say---would be associated with a particular emotion. In fact this is wrong. Instead, each stimulus is categorised in the initial system according to how suitable people think such an expression would be to express a given emotion: this is true whether the stimuli are facial \citep{horstmann:2002_facial} or vocal \citep{laukka:2011_exploring} expressions of emotion (see also \citealp[pp.\ 98--9]{parkinson:2013_contextualizing}). To repeat, in explicitly assigning an expression to a category of emotion, people are not making a judgement about the probability of someone with that expression having that emotion: they are making a judgement about which category of emotion the expression is most suited to expressing. Why is this relevant to understanding what perceptual processes categorise? The most straightforward way of interpreting the experiments on categorical perception is to suppose that they are testing whether perceptual processes categorise stimuli in the same ways as the initial system of categories does. But we have just seen that the initial system categorises stimuli according to the emotions they would be best suited to expressing. 
So on the most straightforward interpretation, the experiments on categorical perception of expressions of emotion are testing whether there are perceptual processes whose function is to categorise actions of a certain type by the outcomes to which they are directed. So the wild conjecture is needed for the most straightforward interpretation of these experiments. This doesn't make it true but it does make it worth testing.
So far I have focussed on evidence for categorical perception from experiments using faces as stimuli. However, there is also evidence that perceptual processes categorise vocal and facial expressions alike (\citealp{grandjean:2005_voices,laukka:2005_categorical}; see also \citealp{jaywant:2012_categorical}). We also know that judgements about which emotion an observed person is expressing in a photograph can depend on the posture of the whole body and not only the face \citep{aviezer:2012_body}, and that various contextual factors can affect how even rapidly occurring perceptual processes discriminate expressions of emotion \citep{righart:2008_rapid}. There is even indirect evidence that categorical perception may concern whole bodies rather than just faces or voices \citep{aviezer:2008_angry,aviezer:2011_automaticity}. In short, categorical perception of expressions of emotion plausibly resembles categorical perception of speech in being a multimodal phenomenon which concerns the whole body and is affected by several types of contextual feature. This is consistent with the wild conjecture we are considering. The conjecture generates the further prediction that the effects of context on categorical perception of expressions of emotion will resemble the myriad effects of context on categorical perception of speech, so that `every potential cue ... is an actual cue' (\citealp[p.\ 11]{Liberman:1985bn}; for evidence of context effects in categorical perception of speech see, for example, \citealp{Repp:1987xo}; \citealp{Nygaard:1995po}, pp.\ 72--5; \citealp{Jusczyk:1997lz}, p.\ 44).

modest hypothesis about perceptual experience of emotion

Information about others’ emotions can facilitate categorical perception of their expressions of emotion,

which gives rise to phenomenal expectations concerning their bodily configurations, articulations and movements.

So I first asked, what evidence could bear on the perceptual hypothesis? I answered that part of the evidence is from studies of categorical perception of facial expressions of emotion.

‘We sometimes see aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

McNeill (2012, p. 573)

challenges

Evidence? Categorical Perception!

Which model of the emotions?

Relation to speech and action?

Part of the evidence relevant to the perceptual hypothesis is evidence that humans can categorically perceive expressions of emotion.
Then my next question was, What does the evidence from studies of categorical perception tell us about the truth or falsity of the perceptual hypothesis?
Now I think we can answer this question. Or rather, we know that the answer depends on what the objects of categorical perception are.
If the objects of categorical perception are merely facial expressions, then I think the evidence does not support the claim that we see aspects of each others’ mental lives.
But if, as I’ve conjectured, the objects of categorical perception are actions directed to the expression of particular emotions, then I think the evidence does provide modest support for the claim that we perceive aspects of each others’ mental lives.
This claim will need qualifying in various ways.
First, the evidence suggests that categorical perception of expressions of emotion, like categorical perception of speech, is not tied to a single modality in any very robust way.
Second, I suggest elsewhere that categorical perception isn’t exactly like perception as philosophers standardly understand it. In particular, its phenomenology is to be understood in terms of \textbf{PHENOMENAL EXPECTATIONS} rather than by analogy with perception of shapes or textures.
But let me put these aside because I want to mention a second challenge.
The challenge is to specify a model of anger and other mental states which captures how these mental states appear to the perceivers. This is a hard challenge to meet because available models arguably render mental states imperceptible.
But what is a model of the emotions? Since emotions are mental states, here I think we can step back and think about mental states generally. On a widely accepted view, mental states involve subjects having attitudes toward contents (see figure). Possible attitudes include believing, wanting, being happy that, and being angry that. The content is what distinguishes one belief from all others, or one desire from all others. The content is also what determines whether a belief is true or false, and whether a desire is satisfied or unsatisfied. There are two main tasks in constructing a model of mental states. The first task is to characterise some attitudes. This typically involves specifying their distinctive functional and normative roles.\footnote{For examples, see \citet{Bratman:1987xw} on intention or \citet[][chapter 11]{Velleman:2000fq} on belief.} The second task is to find a scheme for specifying the contents of mental states. This typically involves one or another kind of proposition, although some have suggested other abstract entities including map-like representations.\footnote{See \citet[p.\ 163]{Braddon-Mitchell:1996ce}: ‘what is inside our heads should be thought of as more like maps than sentences.’}
‘emotions are episodic modes of evaluative engagement with the social and practical world’ \citep[p.\ 1512]{parkinson:2008_emotions}.
FIRST LIMIT: Determinate emotions (the particular evaluation implied by my anger in this case) vs the determinable (anger, in this case). Categorical perception provides evidence only for the determinable. (Joel said in discussion that he made this point with an analogy to categorical perception of colour in Smith (2010).)
SECOND LIMIT: Static emotions vs emotions unfolding over time (see notes below from ‘sharing a smile’). It seems that categorical perception doesn’t capture the dynamics of emotion.
So only a very crude model of the mental would capture emotions as we categorically perceive them. What we need here is a minimal theory of the emotions.
Motor theory of speech perception and goal ascription ...

conclusion

How do you know about it? (it = this pen; it = this joy)

perceive indicator, infer its presence

- vs -

perceive it

[ but perceiving is inferring ]

Here are three ways of knowing: inference, testimony and perception.
My question was, Are mental states the kinds of thing we can come to know about through perceiving them?

‘We sometimes see aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

McNeill (2012, p. 573)

Focussing on emotions like joy, I have suggested that we face two challenges (at least) in answering it.

challenge 1: evidence?

The first challenge is to explain what kind of evidence might bear on the question.
Here I suggested that studies of categorical perception of expressions of emotion are relevant. Whether they provide evidence in favour of the view that we can know others’ mental states through perceiving them depends on what the objects of categorical perception are.
If they are facial configurations, or something like this, then it seems clear that the evidence does not support the view that we can know others’ mental states through perceiving them. If anything, such evidence would support the view that when we think we are perceiving mental states, we are actually perceiving facial expressions that we associate with particular emotions.
But if the objects of categorical perception are actions directed to the goal of expressing particular emotions, then studies of categorical perception do provide some evidence for the view that we have at least phenomenal expectations concerning others’ emotions.

challenge 2: Which model of the emotions?

The second challenge concerned the limits of categorical perception.
I suggested that because categorical perception is underpinned by automatic processes which operate over a short temporal window, the emotions which are the goals of the actions that are the objects of categorical perception cannot be modelled as propositional attitudes. Instead we need a more minimal model of the mental to understand the nature of these emotions.

challenge 3: Relation to speech and action?

In the end, we are probably not yet in a position to know whether humans ever perceptually experience others' mental states. But thinking about categorical perception does give us a way of finally making some progress with this question.