Tuesday 17 April 2018

Affordance Maps and the Geometry of Solution Spaces

I study throwing for two basic reasons. First, it is intrinsically fascinating and I want to know how it works. Second, it's become a rich domain in which to study affordances, and it is really forcing me to engage in great detail with the specifics of what these are.

My approach to affordances is that they are dynamical properties of tasks, which means that in order to study them I need to be able to characterise my task dynamics in great detail. I developed an analysis (Wilson et al, 2016) to do this, and I also have a hunch that this analysis will fit perfectly with motor abundance analyses such as the uncontrolled manifold (UCM) approach (Wilson, Zhu & Bingham, in press). I have recently discovered that another research group (led by Dagmar Sternad) has been doing this whole package for a few years, which is exciting news. Here I just want to briefly summarise the analysis and what the future might hold for this work.


Affordance Maps (me)

In order to hit a target by throwing, a person must produce one of a specific set of release parameters, a set that changes as target details change. This set is actually a subset of a broader release parameter space, where the rest of the space describes release parameter combinations that lead to misses. The 'hit' subset has more than one solution in it (there is redundancy in the task demands), and different regions of the subset are more tolerant of error than others. 
I have a paper out (Wilson et al, 2016) in which I very precisely quantified this subset for a long distance targeted throwing task. The figure above shows the hit region (colour) for a vertically oriented target (left) and a horizontally oriented target (right) 10 m away from the thrower. As you can see, the thickest part of the hit subset moves as the target orientation changes, and skilled throwers select release parameters to suit the task demands (they live in the most stable regions).

I produced these graphs by simulating the dynamics of projectile motion across the full space of release parameters and colour coding the results of each throw. I did this because I wanted to quantify the task demands that these task dynamics were imposing, and I identified these demands as the affordance of the target to be hit by throwing. I now call these affordance maps. This is part of my empirical work that argues affordances are best understood as dynamical dispositional properties of tasks. 
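
To make the procedure concrete, here's a minimal sketch of that kind of simulation. It assumes a simple drag-free projectile model and made-up values for the target size, release height and parameter ranges (so it's an illustration of the logic, not the actual model or numbers from Wilson et al, 2016):

```python
# Minimal sketch: map which (release angle, release speed) combinations hit a
# vertically oriented target 10 m away. Drag-free projectile model; target size,
# release height and parameter ranges are illustrative, not values from the paper.
import numpy as np

G = 9.81               # gravity (m/s^2)
TARGET_X = 10.0        # horizontal distance to target (m)
TARGET_CENTRE = 1.5    # height of target centre (m)
TARGET_HALF = 0.25     # half-height of the target (m)
RELEASE_HEIGHT = 1.8   # height of release (m)

def height_at_target(angle_deg, speed):
    """Height of the projectile when it crosses the target plane (None if it never gets there)."""
    theta = np.radians(angle_deg)
    vx, vy = speed * np.cos(theta), speed * np.sin(theta)
    if vx <= 0:
        return None
    t = TARGET_X / vx                      # time to reach the target plane
    return RELEASE_HEIGHT + vy * t - 0.5 * G * t ** 2

angles = np.linspace(0, 60, 121)           # release angles (degrees)
speeds = np.linspace(5, 30, 126)           # release speeds (m/s)
hit_map = np.zeros((len(speeds), len(angles)), dtype=bool)

for i, v in enumerate(speeds):
    for j, a in enumerate(angles):
        y = height_at_target(a, v)
        hit_map[i, j] = y is not None and abs(y - TARGET_CENTRE) <= TARGET_HALF

# hit_map now marks the 'hit' subset of this 2D slice of release parameter space;
# plotting it (e.g. with plt.imshow) gives a crude affordance map for this target.
```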

When I got interested in the various 'motor abundance' analysis methods (like UCM, optimal feedback control, nonlinear covariation and GEM), I realised that my affordance maps might be a natural fit for these techniques. Each of these effectively performs some kind of movement variability partitioning, into 'bad' variance (which takes you away from achieving the goal) and 'good' variance (which is just you moving around within the subset of the space that will still produce the outcome). Good variance is left to accumulate; bad variance must be detected and controlled away. I realised that a) my affordance maps define the goal subspace each method needs, and b) they do so in a way that might lead to explaining how people perceive the goal subspace and can thus work to control their actions appropriately. I sketched out this hypothesis in an upcoming book chapter, and I have a 90 GB motion capture data set from last Easter waiting for me to try this out on.
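
For the record, here's a toy illustration of the good/bad partitioning idea, using a numerical gradient of a hypothetical outcome function around a made-up mean release parameter combination. Real UCM or GEM analyses work in the full execution space with proper task Jacobians, so treat this purely as a sketch of the logic:

```python
# Toy illustration of 'good' vs 'bad' variance partitioning around a mean release.
# The outcome function and all numbers are hypothetical.
import numpy as np

def outcome(params):
    """Hypothetical task error: vertical miss distance at a 10 m target (m)."""
    angle, speed = params
    theta = np.radians(angle)
    t = 10.0 / (speed * np.cos(theta))          # time to reach the target plane
    y = 1.8 + speed * np.sin(theta) * t - 0.5 * 9.81 * t ** 2
    return y - 1.5                              # signed miss relative to target centre

rng = np.random.default_rng(0)
mean = np.array([28.0, 10.6])                   # mean release (angle deg, speed m/s)
trials = mean + rng.normal(0.0, [2.0, 0.5], size=(200, 2))  # simulated trial spread

# Local linearisation of the task: numerical gradient of the outcome at the mean.
eps = 1e-4
grad = np.array([
    (outcome(mean + [eps, 0.0]) - outcome(mean - [eps, 0.0])) / (2 * eps),
    (outcome(mean + [0.0, eps]) - outcome(mean - [0.0, eps])) / (2 * eps),
])
normal = grad / np.linalg.norm(grad)            # direction that changes the outcome ('bad')
tangent = np.array([-normal[1], normal[0]])     # direction that leaves it unchanged ('good')

dev = trials - mean
bad_var = np.var(dev @ normal)                  # variance pushing you off the solution set
good_var = np.var(dev @ tangent)                # variance that stays within it
print(f"bad variance: {bad_var:.3f}   good variance: {good_var:.3f}")
```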

Solution Space Geometry (Sternad) 

I've recently discovered that Dagmar Sternad has been doing this analysis for a while now, and is a few papers into figuring out how to get it to work. After being a bit bummed for 5 minutes that someone was beating me to it, I got over it and now I'm excited that someone who knows what she is doing is in the game. We're swapping emails now to see how we can help each other out.

Sternad's work primarily uses a virtual reality (VR) 'throwing' game that's actually a bit more like tetherball. The participant's job is to throw a tethered ball towards a target, and because it's a VR task she can alter the task by changing the shape (the geometry) of the solution subspace. In her most recent paper (Zhang et al, 2018) she created four different spaces. Note that her spaces differ from mine because her task dynamics differ: she modelled the tetherball dynamics instead, but the space is still defined by release parameters.


Her participants explored the space over the course of learning, and learned to live in the most stable region (just like my throwers). Participants are clearly sensitive to these underlying dynamics - they are perceiving these affordances!

The other result of hers that caught my eye was how people coped with the launch window timing problem. For a high speed throw to work, you have to land your moment of release within a very narrow launch window. This window can be so narrow that the time spent in it is only 1-2 ms; this places immense pressure on a nervous system that is typically only able to get timing variability down to about 10 ms. And yet we throw - so how?

One hypothesis (from Calvin) is that our big brains evolved in order to use massive parallel processing to achieve this timing. In early learning, people do indeed just get better at reducing their timing variability. Zhang et al (2018) showed, however, that there is another option: people can shape their throwing arm trajectories in order to spend a little more time within the launch window. This strategy showed up toward the end of training, once the reduction in timing variability seemed to have topped out.
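
A back-of-the-envelope way to see why trajectory shaping helps (my numbers, not Zhang et al's analysis): if the release direction has to fall within a small angular window, the time spent in that window is set by how fast the velocity vector is rotating at release, which is roughly hand speed divided by the local radius of curvature of the hand path:

```python
# Back-of-the-envelope illustration of the launch-window idea; all numbers are
# hypothetical. Straighter hand paths near release (larger radius of curvature)
# mean the velocity vector rotates more slowly, so it dwells longer in the window.
import numpy as np

window_deg = 1.0          # hypothetical acceptable spread of release directions
hand_speed = 10.0         # hand speed at release (m/s)

for radius in (0.5, 1.0, 2.0):                # local radius of curvature of the hand path (m)
    omega = hand_speed / radius               # rotation rate of the velocity vector (rad/s)
    dwell = np.radians(window_deg) / omega    # time the direction stays inside the window
    print(f"radius {radius:.1f} m -> {dwell * 1000:.1f} ms in the window")
```

With these made-up values the dwell time goes from under 1 ms to a few ms just by straightening the path, which is exactly the kind of relief a 10 ms-precision timing system needs.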

Summary

First, I am super excited that a) my idea about affordances and motor abundance has some legs and that b) someone who actually knows the maths is seriously getting into this. I really was a little bummed at first about being 'scooped' but I've realised that's the wrong mindset and I'm hoping to make some real progress now by not having to do everything myself!

Our lines of work complement each other nicely, I think.
  • I'm using motion capture of expert throwers in a very natural throwing task; this makes my work very directly about throwing itself. My motivation is about figuring out how best to dynamically characterise the affordances of our environment to test the hypothesis that we perceive these and use them to control actions. 
  • Her work uses VR, which gives her a lot of experimental control over the task dynamics (her manipulations of the solution spaces were much more refined than anything I can do with my real target). Her motivation is more about the action variability analyses and figuring out what you can and can't do with them.
I think we are converging on the same idea from two different angles and between us I think there's a ton of good stuff to do. Even if we never formally collaborate I'll be able to build on her papers and I hope she'll be able to build on mine. 

References
