Tuesday, 17 April 2018

Affordance Maps and the Geometry of Solution Spaces

I study throwing for two basic reasons. First, it is intrinsically fascinating and I want to know how it works. Second, it has become a rich domain in which to study affordances, and it is forcing me to engage in great detail with the specifics of what these are.

My approach to affordances is that they are dynamical properties of tasks, which means that in order to study them, I need to be able to characterise my task dynamics in great detail. I developed an analysis to do this (Wilson et al., 2016), and I also have a hunch that this analysis will fit perfectly with motor abundance analyses such as the uncontrolled manifold (UCM) analysis (Wilson, Zhu & Bingham, in press). I have recently discovered that another research group (led by Dagmar Sternad) has been doing this whole package for a few years, which is exciting news. Here I just want to briefly summarise the analysis and what the future might hold for this work.
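
To make the UCM idea concrete, here is a minimal sketch in Python of the kind of variance decomposition I have in mind. The release data, the drag-free range equation and all the numbers are made up for the demo; this is not the analysis from any of the papers. The question the analysis asks is whether trial-to-trial variability piles up along the direction of execution space that leaves the task outcome unchanged.

```python
import numpy as np

g = 9.81

def projectile_range(v, theta):
    """Flat-ground range (m) of a drag-free projectile released at speed v (m/s), angle theta (rad)."""
    return v**2 * np.sin(2 * theta) / g

# Mean release: 10 m/s at 40 degrees (numbers invented for the demo)
mean_exec = np.array([10.0, np.deg2rad(40)])
v0, th0 = mean_exec

# Jacobian of range with respect to (v, theta) at the mean execution
J = np.array([2 * v0 * np.sin(2 * th0) / g,        # d(range)/dv
              2 * v0**2 * np.cos(2 * th0) / g])    # d(range)/dtheta
ucm = np.array([-J[1], J[0]]) / np.linalg.norm(J)  # direction that leaves range unchanged
orth = J / np.linalg.norm(J)                       # direction that changes range fastest

# Fake 200 throws whose variability mostly lies along the UCM (a built-in synergy)
rng = np.random.default_rng(1)
trials = (mean_exec
          + np.outer(rng.normal(0, 0.3, 200), ucm)
          + np.outer(rng.normal(0, 0.05, 200), orth))

# The UCM-style decomposition: project trial-to-trial deviations onto the two directions
dev = trials - trials.mean(axis=0)
v_ucm, v_orth = np.var(dev @ ucm), np.var(dev @ orth)
print(f"Variance along UCM: {v_ucm:.4f}, orthogonal to it: {v_orth:.4f}, ratio: {v_ucm / v_orth:.1f}")
# A ratio well above 1 is the usual signature of a synergy stabilising the outcome...
ranges = projectile_range(trials[:, 0], trials[:, 1])
print(f"...and the resulting spread in landing distance stays small: SD = {ranges.std():.2f} m")
# (A real analysis would also need to handle the different units of speed and angle.)
```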

Thursday, 1 March 2018

General Ecological Information Does Not Support the Perception of Anything

One common critique of the ecological approach asks how we can use perception to explain behaviour that is organised with respect to things in the world that aren't currently present in our immediate environment. How do we plan for future activities, or how do we know that the closed fridge has beer?

A recent attempt to get ecological about this comes from Rietveld & Kiverstein (2014), who propose a relational account of affordances that enables them to talk about opportunities for more complex behaviours. This account has developed into the Skilled Intentionality Framework (e.g. Bruineberg & Rietveld, 2014), where skill is an 'optimal grip' on a field of task-relevant, relational affordances.

I have always had one primary problem with this programme of work: I don't believe that they can show how these affordances create information and thus can be perceived. I discuss this here and here, and there are comments and replies from Rietveld and Kiverstein there too. You can indeed carve the world up into their kind of entities, but if they don't create information then they cannot be perceived and so they are irrelevant to behaviour.

I was therefore excited to see a new paper from the group called 'General ecological information supports engagement with affordances for ‘higher’ cognition' (Bruineberg, Chemero & Rietveld, 2018; henceforth BC&R). There is a lot of excellent work in here, but their proposed 'general ecological information' is, in fact, neither ecological nor information. It is a good way of talking ecologically about conventional constraints on behaviour, but it doesn't make those constraints perceivable, and so the main thesis of the paper fails.

Tuesday, 19 December 2017

Muscle Homology in Coordinated Rhythmic Movements

One of my main experimental tasks is coordinated rhythmic movement. This is a simple lab task in which I ask people to produce rhythmic movements (typically with a joystick) and coordinate those at some mean relative phase. Not all coordinations are equally easy; without training, people can typically only reliably produce 0° (in-phase) and 180° (anti-phase) movements. People can learn other coordinations, however; I typically train the maximally difficult 90° (although my PhD student has just completed a study training people at 60°; more on that awesome data shortly). I use coordination to study the perceptual control of action and learning.
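
For readers who haven't met the task, here is a quick sketch of one standard way to quantify mean relative phase and its stability from two rhythmic position records. The signals, sample rate and noise levels are all made up for the demo; this is not data or code from my experiments.

```python
import numpy as np
from scipy.signal import hilbert

# Two made-up 1 Hz 'joystick' position signals, nominally 90 degrees apart, plus noise
fs = 100                              # sample rate (Hz)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
x1 = np.sin(2 * np.pi * 1.0 * t) + 0.05 * rng.normal(size=t.size)
x2 = np.sin(2 * np.pi * 1.0 * t - np.pi / 2) + 0.05 * rng.normal(size=t.size)

# Continuous relative phase from the analytic (Hilbert) phase of each signal
phi1 = np.angle(hilbert(x1))
phi2 = np.angle(hilbert(x2))
rel_phase = np.angle(np.exp(1j * (phi1 - phi2)))   # wrapped to (-pi, pi]

# Circular mean relative phase and a simple stability measure (mean vector length)
mean_vector = np.mean(np.exp(1j * rel_phase))
print(f"Mean relative phase: {np.degrees(np.angle(mean_vector)):.1f} deg")
print(f"Stability (mean vector length, 0-1): {np.abs(mean_vector):.2f}")
```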

My work is all designed to test and extend Bingham's mechanistic model of coordination dynamics. This model explicitly identifies all the actual components of the perception-action system producing the behaviour, and models them. In particular, it models the perceptual information we use to perceive relative phase: the relative direction of motion. This information is an important contributor to coordination stability, and this model is a real step up in terms of how we do business in psychology.
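
To illustrate the informational part of that claim, here is a back-of-the-envelope demonstration (this is emphatically not the Bingham model itself, just a toy): the proportion of time two oscillating signals spend moving in the same direction tracks their mean relative phase, which is why relative direction of motion can serve as information about relative phase. The signals and sampling details are invented for the demo.

```python
import numpy as np

fs = 100
t = np.arange(0, 30, 1 / fs)

for deg in [0, 60, 90, 120, 180]:
    x1 = np.sin(2 * np.pi * t)
    x2 = np.sin(2 * np.pi * t - np.deg2rad(deg))
    v1, v2 = np.gradient(x1, t), np.gradient(x2, t)
    # 'Relative direction of motion': are the two signals currently moving the same way?
    same_direction = np.sign(v1) == np.sign(v2)
    print(f"{deg:3d} deg: moving in the same direction "
          f"{100 * same_direction.mean():.0f}% of the time")
```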

There is another factor that affects coordination stability, however, and the model currently only addresses it implicitly. That factor is muscle homology, and it has repeatedly been shown to matter. For a long time I have avoided worrying about it, because I have had no mechanistic way to talk about it. I think I have the beginnings of a way now, though, and this post is the first of several as I develop my first draft of that analysis.

Sunday, 5 November 2017

A Test of Direct Learning (Michaels et al, 2008)

Direct learning (Jacobs & Michaels, 2007) is an ecological hypothesis about the process of perceptual learning. I describe the theory here, and evaluate it here. One of its current weaknesses is that it has little direct empirical support; the 2007 paper only reanalysed earlier studies from the new perspective. Michaels et al (2008) followed up with a specific test of the theory in the context of dynamic touch. The study was designed to provide data that could be plotted in an information space, which in turn provides some qualitative hypotheses about how learning should proceed.
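
As a flavour of what an information space looks like, here is a small sketch with entirely simulated data. The to-be-perceived property, the two component variables e1 and e2, and all the numbers are made up; they are not Michaels et al's actual variables. Candidate variables are log-linear combinations of the two components, and each location in the space gets scored by how well that candidate predicts the property.

```python
import numpy as np

# Hypothetical setup: a property P to be perceived, plus two simulated component
# variables that each partially covary with it.
rng = np.random.default_rng(3)
n = 100
P = rng.uniform(1, 2, n)                          # property to be perceived
e1 = P ** 1.0 * np.exp(rng.normal(0, 0.20, n))    # noisier correlate of P
e2 = P ** 2.0 * np.exp(rng.normal(0, 0.05, n))    # cleaner (but differently scaled) correlate

# The information space: candidate variables e1^(1-w) * e2^w, indexed by w in [0, 1]
weights = np.linspace(0, 1, 101)
usefulness = []
for w in weights:
    candidate = (1 - w) * np.log(e1) + w * np.log(e2)   # log of e1^(1-w) * e2^w
    usefulness.append(np.corrcoef(candidate, np.log(P))[0, 1] ** 2)

best = weights[np.argmax(usefulness)]
print(f"Most useful candidate variable sits at w = {best:.2f} (R^2 = {max(usefulness):.2f})")
# Learning, on this view, is a trajectory through w towards that more useful region.
```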

There are some minor devils in the details, but overall this paper is a nice concrete tutorial on how to develop information spaces, how to test them empirically, and how to evaluate the results that come out. The overall process would benefit from committing more fully to a mechanistic, real-parts criterion, but otherwise it shows real promise.

Friday, 3 November 2017

Evaluating 'Direct Learning'

In my previous post I laid out the direct learning framework developed by Jacobs & Michaels (2007). In this post, I'm going to evaluate its central claims and assumptions with a mechanistic eye. Specifically, my question will mainly be 'what are the real parts or processes that are implementing that idea?'.

This is a spectacularly complicated topic and I applaud Jacobs & Michaels for their gumption in tackling it and the clarity with which they went after it. I also respect the ecological rigour they have applied in trying to find a way to measure, analyse and drive learning in terms of information, and not loans on intelligence. It is way past time for ecological psychology to tackle the process of learning head on. I do think there are problems in the specific implementation they propose, and I'll spend some time here identifying those problems. I am not doing this to kill off the idea, though; read this as me identifying what I think I need to do to improve this framework and use it in my own science.

Thursday, 2 November 2017

Direct Learning (Jacobs & Michaels, 2007)

The ecological hypothesis is that we perceive properties of the environment and ourselves using information variables that specify those properties. We have to learn to use these variables; we have to learn to detect them, and then we have to learn what dynamical properties they specify.

Learning to detect variables takes time, so our perceptual systems will only be able to become sensitive to variables that persist for long enough. The only variables that are sufficiently stable are those that can remain invariant over a transformation, and the only variables that can do this are higher order relations between simpler properties. We therefore don't learn to use the simpler properties, we learn to use the relations themselves, and these are what we call ecological information variables. (Sabrina discusses this idea in this post, where she explains why these information variables are not hidden in noise and why the noise doesn't have to be actively filtered out.)
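
The classic example of such a higher-order relation is tau: the ratio of the optical angle an approaching object subtends to that angle's rate of change, which specifies time-to-contact under constant closing speed regardless of the object's actual size or speed. Here is a quick simulation of that invariance; the sizes, speeds and sampling details are invented for the demo.

```python
import numpy as np

def optical_angle(size, distance):
    """Visual angle (rad) subtended by an object of a given size at a given distance."""
    return 2 * np.arctan(size / (2 * distance))

# Two approaches that differ in object size and speed but share the same time-to-contact
dt = 0.001
t = np.arange(0, 2.0, dt)
for size, speed in [(0.2, 5.0), (1.0, 25.0)]:
    distance = speed * 3.0 - speed * t        # gap closes at a constant speed; TTC at t=0 is 3 s
    theta = optical_angle(size, distance)
    theta_dot = np.gradient(theta, dt)
    tau = theta / theta_dot                   # the higher-order relation
    i = int(1.0 / dt)                         # look at the moment t = 1 s
    print(f"size={size} m, speed={speed} m/s: tau = {tau[i]:.2f} s, "
          f"actual time-to-contact = {distance[i] / speed:.2f} s")
```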

Detecting variables is not enough, though. You then have to learn what dynamical property that kinematic variable is specifying. This is best done via action; you try to coordinate and control an action using some variable and then adapt or not as a function of how well that action works out.

While a lot of us ecological types study learning, there was not, until recently, a more general ecological framework for talking about learning. Jacobs & Michaels (2007) proposed such a framework, and called it direct learning (go listen to this podcast by Rob Gray too). We have just had a fairly intense lab meeting about this paper, and this is an attempt to note all the things we figured out as we went. In this post I will summarise the key elements, and then in a follow-up I will evaluate those elements as I try to apply this framework to some recent work I am doing on the perception of coordinated rhythmic movements.

Saturday, 21 October 2017

What Limits the Accuracy of Human Throwing?

Throwing a projectile in order to hit a target requires you to produce one of the many combinations of release parameters that result in a hit: release angle, speed, and height (relative to the target). My paper last year on the affordances of targets quantified these sets using a task dynamical analysis.
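
As a toy version of that kind of analysis (not the actual simulation from the paper; it ignores drag, collapses the target to a vertical band, and uses made-up target and release numbers), you can grid up release speed and angle, fly a drag-free projectile from each combination, and mark which ones land on the target.

```python
import numpy as np

g = 9.81

def height_at_target(v, theta, release_height, target_distance):
    """Drag-free projectile height (m) when it reaches the target's horizontal distance."""
    return (release_height + target_distance * np.tan(theta)
            - g * target_distance**2 / (2 * (v * np.cos(theta))**2))

# Made-up task: a 0.2 m radius target centred 1.5 m up a wall 5 m away,
# thrown from a 1.8 m release height. Which (speed, angle) pairs hit it?
speeds = np.linspace(3, 15, 200)                  # release speed (m/s)
angles = np.deg2rad(np.linspace(-10, 70, 200))    # release angle
V, A = np.meshgrid(speeds, angles)
miss = height_at_target(V, A, release_height=1.8, target_distance=5.0) - 1.5
hits = np.abs(miss) <= 0.2

print(f"{hits.mean() * 100:.1f}% of this chunk of release-parameter space hits the target")
# The shape and size of that hit region, and how it changes with target distance and size,
# is the kind of thing the affordance analysis quantifies.
```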

There is one additional constraint; these release parameters have to occur during a very short launch window. This window is the part of the hand's trajectory during which the ball must be released in order to intercept the target. It is very easy to release slightly too late (for example) and drill the projectile into the ground.

How large is this launch window? It is surprisingly, terrifyingly small; Calvin (1983) and Chowdhary & Challis (1999) have suggested it is on the order of 1 ms. Those papers used a sensitivity analysis on simulated trajectories to show that accuracy is extremely sensitive to timing errors, and that millisecond-level precision is required to produce an accurate throw.
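
Here is a stripped-down illustration of that style of sensitivity analysis (not Calvin's or Chowdhary & Challis's actual simulations; the arm geometry and all numbers are invented): release a drag-free projectile from a hand moving along a circular arc, nudge the release time by a millisecond or two, and watch the landing point move.

```python
import numpy as np

g = 9.81

def throw_distance(release_time, radius=0.6, omega=20.0, shoulder_height=1.5):
    """Landing distance (m) of a drag-free projectile released from a hand moving on a
    circle (radius m, omega rad/s) about a fixed shoulder at the given height."""
    a = omega * release_time                      # arm angle at release (0 = hand straight down)
    x0 = radius * np.sin(a)                       # hand position at release
    y0 = shoulder_height - radius * np.cos(a)
    vx = radius * omega * np.cos(a)               # tangential velocity components
    vy = radius * omega * np.sin(a)
    # time to fall to the ground, then horizontal distance covered
    t_flight = (vy + np.sqrt(vy**2 + 2 * g * y0)) / g
    return x0 + vx * t_flight

t0 = 0.03                                         # a release moment partway up the forward swing
for err in [0.0, 0.001, 0.005]:                   # timing errors of 0, 1 and 5 ms
    print(f"release {1000 * err:.0f} ms late: lands at {throw_distance(t0 + err):.2f} m")
```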

Smeets, Frens & Brenner (2002) tested this hypothesis with dart throwing. If this intense pressure on timing the release within the launch window determines accuracy, then throwers should organise their behaviour and throw in a way that makes their launch window as tolerant of timing errors as possible. Smeets et al replicated the sensitivity analyses on human data to see whether people throw so as to give themselves the maximum error tolerance in the launch window, or whether they instead accommodate errors in other variables.

What they found is that the launch window timing is not the limiting factor. Their throwers (who were not especially expert) did not throw so as to minimise the sensitivity of the launch window timing to errors. Quite the contrary; they lived in a fairly sensitive region of the space, and then didn't make timing errors. They did throw so as to reduce the sensitivity to speed errors, however, and errors in the targeting came from errors in the spatial path of the hand that the system did not adequately compensate for, rather than the timing of the hand's release. (The authors saw some evidence that the position, speed and direction of the hand trajectory were organised into a synergy, which aligns nicely with the motor abundance hypothesis).

I would like to replicate and extend this analysis process using more detailed simulations and data from better throwers. I've become convinced it's a very useful way to think of what is happening during the throw. I also think these results point to some interesting things about throwing. Specifically, while timing and speed must both be produced with great accuracy, the system has developed two distinct solutions to coping with errors. Timing errors are reduced by evolving neural systems that can reliably produce the required precision. Speed errors have been left to an online perception-action control process which adapts the throw to suit local demands. The latter is the more robust solution; so why was timing solved with brain power?