The Teleological Stance

In the lecture on behaviour reading last week, I offered a brief survey of mechanisms that could be involved in getting from joint displacements and bodily configurations to larger, more abstract bits of behaviour grouped into units in ways that reflect structures of action.

The ‘Teleological Stance’
~ The goals of an action are those outcomes which the means is a best available way of bringing about.

Csibra & Gergely

Planning

1. This outcome, G, is the goal (specification)

2. Means m is a best available* way of bringing G about

3. ∴ adopt m

Tracking

1. This means, m, has been adopted (observation)

2. G is an outcome such that: m is a best available* way of bringing G about

3. ∴ G is a goal of the observed action

So planning is the process of moving from goals to means, whereas tracking goes in the reverse direction, from means to goals. But what is common to the two is the relation between means and goals. In both cases, planning and goal-tracking, the means that are adopted should be a best available way of bringing the goal about.
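To make the shared relation concrete, here is a minimal sketch, in Python, of planning and goal-tracking as two uses of a single efficiency relation. The cost function, the sets of candidate goals and available means, and the tolerance parameter are all assumptions introduced for illustration; they are not part of Csibra and Gergely’s proposal.

```python
# Minimal sketch (not Csibra & Gergely's own formulation): planning and
# goal-tracking as two directions over one shared means-goal relation.
# `cost`, `available_means`, `candidate_goals` and `tolerance` are
# assumptions introduced for this illustration.

def is_best_available(means, goal, available_means, cost, tolerance=0.0):
    """True if `means` is a best available way of bringing `goal` about,
    i.e. no available means does significantly better (lower cost)."""
    best = min(cost(m, goal) for m in available_means)
    return cost(means, goal) <= best + tolerance

def plan(goal, available_means, cost):
    """Planning: from a goal to a means that is a best available way of
    bringing it about."""
    return min(available_means, key=lambda m: cost(m, goal))

def track(observed_means, candidate_goals, available_means, cost, tolerance=0.0):
    """Pure goal-tracking: from an observed means to those outcomes for
    which it is a best available way of bringing them about."""
    return [g for g in candidate_goals
            if is_best_available(observed_means, g, available_means, cost, tolerance)]
```

The point of the sketch is only that the same relation, is_best_available, figures in both directions; everything else is packaging.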
Note that this is not exactly an answer to our question, How can infants track goals from nine months of age (or earlier)? It provides what Marr would call a computational description.
That is, it provides a function from facts about events and states of affairs, facts which could be known without knowing which goals any particular actions are directed to and without knowing any facts about particular mental states, to one or more outcomes which are the goals of an action.
Providing this function explains how pure goal-tracking is possible in principle.
But what we want to know, of course, is how infants (and adults) actually compute this function. If this is (roughly) the function which computationally describes pure goal tracking, what are the representations and processes involved in pure goal tracking?
And we need to know how they compute for which outcomes a means is a best available way of bringing them about.

‘an action can be explained by a goal state if, and only if, it is seen as the most justifiable action towards that goal state that is available within the constraints of reality’


Csibra & Gergely (1998, 255)

1. action a is directed to some goal;

2. actions of a’s type are normally means of realising outcomes of G’s type;

3. no available alternative action is a significantly better* means of realising outcome G;

4. the occurrence of outcome G is desirable;

5. there is no other outcome, G′, the occurrence of which would be at least comparably desirable and where (2) and (3) both hold of G′ and a.

Therefore:

6. G is a goal to which action a is directed.
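Read as a recipe, conditions (2) to (5) amount to a filter over candidate outcomes. The sketch below is one hedged way of rendering that filter; the predicates normally_realises, efficiency and desirability, and the margin parameter, are placeholders I am introducing here, not anything Csibra and Gergely define.

```python
# Illustrative rendering of conditions (2)-(5) above. The predicates
# `normally_realises`, `efficiency`, `desirability` and the `margin`
# parameter are placeholders introduced for this sketch only.

def goals_of(action, candidate_outcomes, alternatives,
             normally_realises, efficiency, desirability, margin=0.0):
    """Return the outcomes G for which conditions (2)-(5) hold of `action`,
    licensing the conclusion (6) that G is a goal of `action`."""

    def meets_2_and_3(a, G):
        # (2) actions of a's type are normally means of realising outcomes of G's type
        if not normally_realises(a, G):
            return False
        # (3) no available alternative action is a significantly better means to G
        #     (higher efficiency = better means)
        return all(efficiency(alt, G) <= efficiency(a, G) + margin
                   for alt in alternatives)

    goals = []
    for G in candidate_outcomes:
        if not meets_2_and_3(action, G):
            continue
        # (4) the occurrence of G is desirable (here: positive desirability)
        if desirability(G) <= 0:
            continue
        # (5) no other outcome at least comparably desirable (here: at least
        #     as desirable) for which (2) and (3) also hold
        rivals = [G2 for G2 in candidate_outcomes
                  if G2 != G
                  and desirability(G2) >= desirability(G)
                  and meets_2_and_3(action, G2)]
        if not rivals:
            goals.append(G)  # (6) G is a goal to which `action` is directed
    return goals
```

On this rendering, it is the desirability clauses (4) and (5) that do the work of narrowing down the candidate outcomes.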

We start with the assumption that we know the event is an action.
Why ‘normally’? Because of the ‘seen as’.
Any objections?
I have an objection. Consider a case in which I perform an action directed to the outcome of pouring some hot tea into a mug. Does this pattern of inference imply that this outcome is a goal of my action? Only if it also implies that moving my elbow is a goal of my action as well. And pouring some liquid. And moving air in a certain way. And ...
How can we avoid this objection?
Doesn’t this conflict with the aim of explaining *pure* behaviour reading? Not if ‘desirable’ is understood as something objective. [explain]
Now we are almost done, I think.
OK, I think this is reasonably true to the quote. So we’ve understood the claim. But is it true?
How good is the agent at optimising the selection of means to her goals? And how good is the observer at identifying the optimality of means in relation to outcomes? \textbf{For optimally correct goal ascription, we want there to be a match between (i) how well the agent can optimise her choice of means and (ii) how well the observer can detect such optimality.} Failing such a match, the inference will not result in correct goal ascription.
But I don’t think this is an objection to the Teleological Stance as a computational theory of pure goal ascription. It is rather a detail which concerns the next level, the level of representations and algorithms. The computational theory imposes demands at the next level.
‘Such calculations require detailed knowledge of biomechanical factors that determine the motion capabilities and energy expenditure of agents. However, in the absence of such knowledge, one can appeal to heuristics that approximate the results of these calculations on the basis of knowledge in other domains that is certainly available to young infants. For example, the length of pathways can be assessed by geometrical calculations, taking also into account some physical factors (like the impenetrability of solid objects). Similarly, the fewer steps an action sequence takes, the less effort it might require, and so infants’ numerical competence can also contribute to efficiency evaluation.’ \citep{csibra:2013_teleological}
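To give a concrete sense of what such heuristics might look like, here is a sketch that approximates efficiency from two of the cues the quotation mentions: the length of a pathway, computed geometrically around impenetrable obstacles, and the number of steps in an action sequence. The grid representation, the shortest-path search, and the weighting are assumptions of mine for illustration only; the quoted passage is not committed to any of these details.

```python
# Sketch of heuristic efficiency evaluation: path length around impenetrable
# obstacles plus number of steps. The grid world, the Dijkstra-style search
# and the weights are assumptions introduced for this illustration.

from heapq import heappush, heappop

def shortest_path_length(start, goal, obstacles, width, height):
    """Length of the shortest grid path from `start` to `goal` that avoids
    impenetrable cells, or None if no path exists."""
    frontier = [(0, start)]
    seen = set()
    while frontier:
        dist, cell = heappop(frontier)
        if cell == goal:
            return dist
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in obstacles and nxt not in seen):
                heappush(frontier, (dist + 1, nxt))
    return None

def heuristic_cost(path_length, n_steps, w_length=1.0, w_steps=1.0):
    """Combine the two cues: longer paths and action sequences with more
    steps count as more effortful, hence less efficient."""
    return w_length * path_length + w_steps * n_steps
```

For example, with a barrier at (1, 0) and (1, 1) on a 4 by 4 grid, shortest_path_length((0, 0), (3, 0), {(1, 0), (1, 1)}, 4, 4) returns 7 rather than the unobstructed 3, so a detour around the barrier is not counted as inefficient, whereas taking the same detour with no barrier present would be.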
So this is the teleological stance, a computational description of goal ascription.
Although this is rarely noted, I think the Teleological Stance takes us beyond Dennett’s intentional stance because it allows us to distinguish between people on the basis of what they do. You reach for the red box; your goal is to retrieve the food. I reach for the blue box, so my goal is to retrieve the poison.
But there is a problem for the Teleological Stance ...

How is this computed?