
Human optimization strategies under reward feedback

2009

Conference Paper



Many hypotheses on human movement generation have been cast into an optimization framework, implying that movements are adapted to optimize a single quantity, such as jerk, end-point variance, or control cost. However, we still do not understand how humans actually learn when given only a cost or reward feedback at the end of a movement. Such a reinforcement learning setting has been extensively explored theoretically in engineering and computer science, but in human movement control, hardly any experiments have studied movement learning under reward feedback. We present experiments probing which computational strategies humans use to optimize a movement under a continuous reward function. We present two experimental paradigms. The first paradigm mimics a ball-hitting task. Subjects (n=12) sat in front of a computer screen and moved a stylus on a tablet towards an unknown target. This target was located on a line that the subjects had to cross. During the movement, visual feedback was suppressed. After the movement, a reward was displayed graphically as a colored bar. As the reward, we used a Gaussian function of the distance between the target location and the point of line crossing. We chose such a function since, in sensorimotor tasks, the cost or loss function that humans seem to represent is close to an inverted Gaussian function (Koerding and Wolpert 2004). The second paradigm mimics pocket billiards. On the same experimental setup as above, the computer screen displayed a pocket (two bars), a white disk, and a green disk. The goal was to hit the green disk with the white disk (as in a billiard collision) such that the green disk moved into the pocket. Subjects (n=8) manipulated the white disk with the stylus, effectively choosing the start point and movement direction. Reward feedback was given implicitly as hitting or missing the pocket with the green disk. In both paradigms, subjects increased the average reward over trials. Surprisingly, in these experiments humans appeared to prefer a strategy based on a reward-weighted average over previous movements rather than gradient ascent. The literature on reinforcement learning is dominated by gradient-ascent methods. However, our computer simulations and theoretical analysis revealed that reward-weighted averaging is the more robust choice given the amount of movement variance observed in humans. Apparently, humans choose an optimization strategy that is suitable for their own movement variance.
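
To make the contrast between the two strategies concrete, the following Python sketch compares finite-difference gradient ascent with reward-weighted averaging on a simplified one-dimensional version of the line-crossing task. It is an illustration only, not the authors' simulation: the target location, reward width, motor noise, learning rate, and memory length are assumed values chosen for readability.

    # Minimal sketch (assumed parameters): gradient ascent vs. reward-weighted
    # averaging on a 1-D stand-in for the line-crossing task.
    import numpy as np

    rng = np.random.default_rng(0)

    target = 0.7          # unknown target position on the line (assumed)
    sigma_reward = 0.1    # width of the Gaussian reward (assumed)
    sigma_motor = 0.05    # execution noise mimicking human movement variance (assumed)
    n_trials = 200

    def reward(x):
        # Gaussian reward of the distance between crossing point and target,
        # as in the first paradigm (cf. Koerding and Wolpert 2004).
        return np.exp(-(x - target) ** 2 / (2 * sigma_reward ** 2))

    def gradient_ascent(eta=0.5):
        # Estimate the reward gradient from two noisy movements and step uphill.
        mean = 0.0
        for _ in range(n_trials):
            x1 = mean + rng.normal(0.0, sigma_motor)
            x2 = mean + rng.normal(0.0, sigma_motor)
            if abs(x2 - x1) > 1e-6:
                grad = (reward(x2) - reward(x1)) / (x2 - x1)
                mean += eta * grad
        return mean

    def reward_weighted_averaging(memory=20):
        # Set the next mean movement to the reward-weighted average of recent movements.
        mean = 0.0
        xs, rs = [], []
        for _ in range(n_trials):
            x = mean + rng.normal(0.0, sigma_motor)
            xs.append(x)
            rs.append(reward(x))
            xs, rs = xs[-memory:], rs[-memory:]
            mean = np.average(xs, weights=np.array(rs) + 1e-12)
        return mean

    print("gradient ascent           ->", gradient_ascent())
    print("reward-weighted averaging ->", reward_weighted_averaging())

In this toy setting, increasing sigma_motor quickly destabilizes the finite-difference gradient estimate, while the averaging update remains well behaved, which is consistent with the robustness argument made in the abstract.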

Author(s): Hoffmann, H. and Theodorou, E. and Schaal, S.
Book Title: Abstracts of Neural Control of Movement Conference (NCM 2009)
Year: 2009

Department(s): Autonomous Motion
Bibtex Type: Conference Paper (inproceedings)

Address: Waikoloa, Hawaii
Cross Ref: p10334
Note: clmc

BibTeX

@inproceedings{Hoffmann_ANCMC_2009,
  title = {Human optimization strategies under reward feedback},
  author = {Hoffmann, H. and Theodorou, E. and Schaal, S.},
  booktitle = {Abstracts of Neural Control of Movement Conference (NCM 2009)},
  address = {Waikoloa, Hawaii},
  year = {2009},
  note = {clmc},
  doi = {},
  crossref = {p10334}
}