Think of predictive coding.
Think of CTM (the computational theory of mind).
Think of any account of the mind you like.
I bet that, on any such account, the only way
non-accidental matches can occur is through a process of inference.
Differences in representational format
block inferential integration
(without translation),
and so create interface problems.
\citet[p.~2]{jackendoff:1996_architecture} proposes
‘a system of interface modules. An interface module communicates between two
levels of encoding, say L1 and L2, by carrying a partial translation of information
in L1 form into information in L2 form.’
Translation might work in some cases
(maybe between phonology and syntax,
or between spatial and linguistic representation?).
But it does not appear to work for motivational states,
and I suspect not for executive (in Bach’s terms, effective) states either.
The mind is made up of lots of different, loosely
connected systems that work largely independently of each other. To a certain extent it’s fine for
them to go their own way; and of course, since they all get the same inputs (what with being parts of
a single subject), there are limits on how separate the ways they go can be. Still, it’s often good
for them to be aligned, at least eventually.
But how are they ever non-accidentally brought into alignment?
One function of experience
is to solve these problems.
Experience is what enables the eventual, non-accidental alignment of largely independent
cognitive systems. This is what experience is for.
Can we think along these lines in the case of action?
This was clear enough in Dickinson’s case.
But how could it work in the case of intention vs motor representation?
Two complications make it appear difficult ...