The Mission of the Perception/Action Lab
The approach in the Perception/Action Lab is based on a number of principles:
(1) Perception and action are synergistic and mutually constraining or determining. There is no perception without
action and there is no action without perception.
(2) Perception requires information: in light for vision; in sound for audition; in movement for proprioception.
Because perception and action are inextricably linked, information is found in spatial-temporal patterns. Movement is
an essential part of the game.
(3) Information is based on natural law and dynamics. Physical dynamics constrain and determine the forms of
motion in events, including human actions. Those forms project into information-carrying media (e.g., light and sound).
Research in the lab is focused on visual, proprioceptive, and haptic perception, and on human actions
performed with the upper limbs: reaching-to-grasp, overarm throwing, and rhythmic limb movements, as well as the
coordinated looking movements that accompany them. We study action, but our primary focus and commitment is to the study of perception
as the foundation for all actions.
Visually Guided Reaches-to-Grasp
Visually guided reach-to-grasp abilities underwrite the tremendous success humans have had in
adapting to terrestrial environments and molding them to human needs and purposes. As one might
expect, the ability is complex. Reaches are initiated under largely feedforward or open-loop control.
Halfway through a reach, however, feedback or closed-loop control becomes paramount, and is
especially important for accurate grasping. We have investigated the essential forms of movement
exhibited by reaches-to-grasp and the ways they vary with task. We have studied the feedforward
control of reaching and how visual information about the surrounding layout of objects is used to
guide reaches. (See also research on Calibration.) The information used to guide a reach includes
visual perception of object distance, size and shape. The latter is particularly problematic. (See
research on Shape Perception.) We have shown that problems in shape perception make the online
guidance of the final stages of a reach-to-grasp essential for fast and accurate performance. The
question is: what information is used to guide this portion of a reach-to-grasp? We have studied and
rejected various candidate information variables and are currently involved in extensive research
investigating our prime candidate: tau (roughly, 'time-to-contact') defined over binocular information.
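For orientation, the classical (monocular) definition of tau is the ratio of the current size of a gap to its rate of closure, which, when the closing speed is constant, equals the time remaining until contact:

$$\tau \;=\; \frac{\theta}{\dot{\theta}} \;\approx\; \frac{x}{v},$$

where $\theta$ is the expanding image angle of the approached surface, $x$ is the remaining distance, and $v$ is the closing speed. How an analogous variable should be defined over binocular information to guide the hand to a target is precisely the open question; the formula above is only the textbook case, given for reference.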
Iberall, T., Bingham, G.P., and Arbib, M.A. (1986). Opposition space as a
structuring concept for the analysis of skilled hand movements. Experimental Brain Research Series,
15. Heidelberg: Springer-Verlag.
Bingham, G.P. & Zaal, F.T.J.M. (2004). Why tau is probably not used to
guide reaches. In H. Hecht & G.J.P. Savelsbergh (Eds.) Theories of Time-to-Contact. Boston: MIT Press.
Mon-Williams, M., Coats, R., & Bingham, G. P. (2004). Reaching with feeling. Journal of
Vision, 4(8), 411a.
Bingham, G., Hughes, K., & Mon-Williams, M. (in press). Grasping coordination with both hands:
Information demands asynchronous timing. Experimental Brain Research.
Mon-Williams, M. & Bingham, G.P. (submitted). Slam, stop, and fly-through: The coordinative timing
and structure of reach-to-grasp movements varies with task. Experimental Brain Research.
Space Perception and the Calibration of Reach-To-Grasp Actions
Successful reaches-to-grasp require visual guidance and this, in turn, entails perception of the
distance, direction, size and shape of objects to guide the early, feedforward portion of a reach.
All the traditional questions and problems of space perception (the oldest area in the study of
perception) apply. What is the information used to see each of these properties? How accurately and
reliably can they be perceived? In this context especially, perception is essentially a (physical)
dynamically constrained relationship between the perceiver/actor and the surroundings. As such, its
stability is at issue. It's a measurement system that is subject to drift and thus, it must be calibrated. Perceptual units
must be stably related to the units of action. We have shown the form and importance of the
calibration dynamic in visually guided reach-to-grasp actions and its role in each of the forms of
information needed to guide a reach-to-grasp. This way of thinking about perception and visually
guided actions is new and different and thus, controversial. A lot of evidence will have to be amassed
before this way of thinking becomes well established.
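A bare-bones illustration of what calibration means here (our sketch, with made-up numbers, not a model from the papers below): treat visually perceived distance as a measurement that is mapped onto reach distance through a gain, and let feedback about reach error adjust that gain so that perceptual units stay locked to action units even while the measurement drifts.

    # Calibration as adjustment of the mapping from perceptual units to action units.
    # Illustration only: the drift, learning rate, and target values are arbitrary.

    def simulate_calibration(trials=200, drift_per_trial=0.002, learning_rate=0.3, target=0.30):
        perceptual_gain = 1.0   # relates perceived distance to actual distance; drifts over time
        motor_gain = 1.0        # the calibration: scales perceived distance into reach distance
        for _ in range(trials):
            perceptual_gain += drift_per_trial          # the measurement system slowly drifts
            perceived = perceptual_gain * target        # perceived target distance (internal units)
            reach = motor_gain * perceived              # reach distance produced from perception
            error = reach - target                      # feedback (e.g., haptic) about the reach error
            motor_gain -= learning_rate * error / perceived   # recalibrate the perception-action mapping
        return reach, target

    print(simulate_calibration())   # the reach tracks the target despite the perceptual drift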
Bingham, G.P. & Stassen, M.G. (1994). Monocular egocentric distance information generated by head
movement. Ecological Psychology, 6(3), 219-238.
Bingham, G.P. & Pagano, C.C. (1998). The necessity of a perception/action approach to definite
distance perception: Monocular distance perception to guide reaching. Journal of Experimental
Psychology: Human Perception and Performance, 24, 145-168.
Pagano, C.C. & Bingham, G.P. (1998). Comparing measures of monocular distance perception: Verbal and
reaching errors are not correlated. Journal of Experimental Psychology: Human Perception and
Performance, 24(4), 1037-1051.
Bingham, G.P. & Romack, J.L. (1999). The rate of adaptation to displacement prisms remains constant
despite acquisition of rapid calibration. Journal of Experimental Psychology: Human Perception and
Performance, 25(5), 1331-1346.
Wickelgren, E.A., McConnell, D. & Bingham, G.P. (2000). Reaching measures of monocular distance
perception: Forward vs side-to-side head movements and haptic feedback. Perception & Psychophysics,
62(5), 1051-1059.
Bingham, G.P. (2005). Allometry and space perception: Compression of optical ground texture yields
decreasing ability to resolve differences in spatial scale. Ecological Psychology, 17(3&4), 193-204.
Bingham, G.P., Coats, R. & Mon-Williams, M. (2007). Unnatural prehension to virtual objects is not
inevitable if calibration is allowed. Neuropsychologia, 45, 288-294.
Mon-Williams, M. & Bingham, G.P. (in press). Calibrating reach distance to visual targets. Journal of
Experimental Psychology: Human Perception and Performance.
Coats, R., Bingham, G.P. & Mon-Williams, M. (in press). Reaching with feeling: somatosensory feedback
calibrates reaching and grasping. Experimental Brain Research.
Reaching in Virtual Environments versus Actual Environments
Virtual environments (VEs) provide a potentially wonderful tool for the perception/action lab. In a
VE, it is possible to manipulate visual information and the way it is geared to action. But
this comes at a cost. VEs are not the same as actual environments (AEs). Visual perception does not
normally entail looking at an image, but in a VE it does. A VE perturbs vision and the actions guided
by vision. We have studied and described the exact nature of such perturbations, especially in the
case of visually guided reaching.
Bingham, G.P., Bradley, A., Bailey, M., Vinner, R. (2001). Accommodation, occlusion and disparity
matching are used to guide reaching: A comparison of actual versus virtual environments. Journal of
Experimental Psychology: Human Perception and Performance, 27(6), 1314-1344.
Reaching-To-Grasp and Metric Shape Perception from Stereovision and Structure-from-Motion
We (and many others in the literature) have shown that the perception of metric shape is shockingly
bad. In fact, we have found that metric space perception is generally poor, except in the context of
relevant actions where calibration of the relation between perception and action can yield accurate
and stable perceptually guided performance. Unfortunately, this does not help in the case of shape
perception. We showed that position perception (distance and direction of objects) is independent of
shape perception and the latter cannot be calibrated. (See also research on Calibration.) Although we
have recently found that good perception of metric shape is possible under certain conditions, it has
become clear that continuous online guidance of the final phases of a reach-to-grasp is necessary for
reliably accurate performance. (See research on reach-to-grasp actions.)
Bingham, G.P., Zaal, F., Robin, D. & Shull, J.A. (2000). Distortions in definite distance and shape
perception as measured by reaching without and with haptic feedback. Journal of Experimental
Psychology: Human Perception and Performance, 26(4), 1436-1460.
Lind, M., Bingham, G.P. & Forsell, C. (2002). The illusion of perceived 3D metric structure.
Proceedings of the IEEE InfoVis Conference 2002.
Bingham, G.P., Crowell, J.A. & Todd, J.T. (2004). Distortions of distance and shape are not produced
by a single continuous transformation of reach space. Perception & Psychophysics, 66(1), 152-169.
Bingham, G.P. & Lind, M. (submitted). Large continuous perspective transformations are necessary and
sufficient for accurate perception of metric shape. Perception & Psychophysics.
Lee, Y.L., Norman, F., & Bingham, G.P. (submitted). Poor shape perception is the reason that
reaches-to-grasp are visually guided online. Perception & Psychophysics.
Perceiving the Shape of 4D Objects
As if the perception of 3D object shapes were not challenging enough, we are now studying whether
visualization methods are effective in allowing users to perceive the shapes of 4D objects. We are
investigating the best training methods to allow users to learn to see 4D objects and their
properties. This is a major new project in the lab.
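As a generic illustration of what is involved in visualizing a 4D object (our sketch; the studies below use structure-from-motion displays, not this code): rotate a tesseract in a 4D plane and project it into 3D, producing the kind of 3D 'snapshots' an observer would have to integrate over time.

    import numpy as np

    # Rotate a tesseract (4-cube) in the x-w plane and project it into 3D (illustration only).

    # The 16 vertices of a tesseract, with coordinates in {-1, +1}^4.
    verts = np.array([[(i >> k & 1) * 2 - 1 for k in range(4)] for i in range(16)], dtype=float)

    def rotate_xw(points, angle):
        """Rotate 4D points in the x-w plane (a rotation with no 3D counterpart)."""
        c, s = np.cos(angle), np.sin(angle)
        rot = np.eye(4)
        rot[0, 0], rot[0, 3] = c, -s
        rot[3, 0], rot[3, 3] = s, c
        return points @ rot.T

    def project_to_3d(points, eye_w=3.0):
        """Perspective projection from 4D to 3D along the w axis."""
        w = points[:, 3]
        return points[:, :3] * (eye_w / (eye_w - w))[:, None]

    snapshot = project_to_3d(rotate_xw(verts, angle=0.4))  # one 3D view of the rotating 4D object
    print(snapshot.shape)  # (16, 3)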
Thakur, S., Hanson, A. & Bingham, G.P. (2006). Active visualization methods enable perception of
structure from motion in higher dimensional spaces: Comparing active and passive perception of the
rigidity of 3D and 4D objects. Journal of Vision, 6(6), 864a.
Nonlinear Dynamics, Self-Organization and Bimanual Coordination
Bimanual coordination (it ain't just the hands) is fundamental to human action. It is required whenever
two limbs are moved at the same time to perform some action: walking, running, hopping,
bicycling, typing, drumming, playing nearly any instrument, and so on. Merely moving two
joints in different limbs (e.g. the wrists, the elbows, or two fingers on different hands) in
a rhythmic or oscillatory fashion reveals complex structure in the kinematics (i.e. measured
movements) that is a signature of self-organizing nonlinear dynamics. It's coupled oscillators,
but what is the nature of the coupling? We have shown that it's perceptual. We began with visual
judgment studies, then continued with proprioceptive judgment studies and finally, perception/action
studies that enabled us to manipulate the visual information independently of the movements to show
that the stability of the behavior was a function of the perceptual information. Most recently, we
have shown that new coordinations can be acquired by learning to perceive new information, which can
then be used immediately to guide the new behavior. We are currently developing this work for
applications in the study of stroke.
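To give a concrete (if deliberately oversimplified) picture of what 'coupled oscillators' means here, the sketch below integrates the textbook HKB-style equation for the relative phase between two rhythmically moving limbs. This is not the lab's perceptually driven model, and all parameter values are made up; it only shows the two stable patterns (in-phase at 0 degrees and anti-phase at 180 degrees) and how anti-phase loses stability when the coupling ratio drops, as it does at higher movement frequencies.

    import numpy as np

    # HKB-style relative-phase dynamics (illustration only; parameters are arbitrary):
    #   dphi/dt = -a*sin(phi) - 2*b*sin(2*phi) + noise
    # The ratio b/a shrinks as movement frequency rises; when it does, the anti-phase
    # attractor (phi = pi) disappears and only in-phase (phi = 0) remains stable.

    def relative_phase_after(phi0, a=1.0, b=1.0, noise=0.05, dt=0.01, steps=5000, seed=0):
        rng = np.random.default_rng(seed)
        phi = phi0
        for _ in range(steps):
            drift = -a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi)
            phi += dt * drift + np.sqrt(dt) * noise * rng.standard_normal()
        return float(np.angle(np.exp(1j * phi)))  # wrap to (-pi, pi]

    print(relative_phase_after(phi0=2.8, b=1.0))  # stays near +/-pi: anti-phase is stable
    print(relative_phase_after(phi0=2.8, b=0.1))  # relaxes toward 0: anti-phase has destabilized

The lab's own models replace these abstract coupling terms with perceptual quantities (perceived relative phase and relative direction of movement), which is the point of the studies listed below.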
Bingham, G.P. (1995). The role of perception in timing: Feedback control in motor programming and
task dynamics. In E. Covey, H. Hawkins, T. McMullen & R. Port (Eds.) Neural Representation of
Temporal Patterns, pp. 129-157. New York: Plenum Press.
Bingham, G.P., Schmidt, R.C. & Zaal, F. (1999). Visual perception of the relative phasing of human
limb movements. Perception & Psychophysics, 61(2), 246-258.
Zaal, F., Bingham, G.P. & Schmidt, R.C. (2000). Visual perception of relative phase and phase
variability. Journal of Experimental Psychology: Human Perception and Performance, 26(3), 1209-1220.
Bingham, G.P., Zaal, F.T.J.M., Shull, J.A. and Collins, D.R. (2001). The effect of frequency on
visual perception of relative phase and phase variability. Experimental Brain Research, 136, 543-552.
Bingham, G.P. (2001). A perceptually driven dynamical model of rhythmic limb movement and bimanual
coordination. Proceedings of the 23rd Annual Conference of the Cognitive Science Society, (pp.
75-79). Hillsdale, N.J., LEA Publishers.
Wilson, A., Craig, J.C. & Bingham, G.P. (2003). Haptic perception of phase variability. Journal of
Experimental Psychology: Human Perception and Performance, 29, 1179-1190.
Bingham, G.P. (2004). A perceptually driven dynamical model of bimanual rhythmic movement (and phase
perception). Ecological Psychology, 16(1), 45-53.
Bingham, G.P. (2004). Another timing variable composed of state variables: Phase perception and phase
driven oscillators. In H. Hecht & G.J.P. Savelsbergh (Eds.) Theories of Time-to-Contact. Boston: MIT Press.
Wilson, A., Collins, D.R. & Bingham, G.P. (2005). Perceptual coupling in rhythmic movement
coordination: Stable perception leads to stable action. Experimental Brain Research, 164, 517-528.
Wilson, A., Collins, D.R. & Bingham, G.P. (2005). Human movement coordination implicates relative
direction as the information for relative phase. Experimental Brain Research, 165, 351-361.
Wilson, A. & Bingham, G.P. (submitted). A Perception/Action Approach to Rhythmic Movement
Coordination I: Improved Perception Leads (Eventually) to Improved Movement Stability. Experimental Brain Research.
Wilson, A. & Bingham, G.P. (submitted). A Perception/Action Approach to Rhythmic Movement
Coordination II: Perturbations of Relative Position, Speed and Direction to Perturb Phase
Perception. Journal of Experimental Psychology: Human Perception and Performance.
Affordances and Maximum Distance Overarm Throwing
Visually guided reaching is certainly an essential part of the human condition. Maximum-distance
overarm throwing is an extension of reaching abilities, and it arguably made the survival of the human
species, especially during the ice ages, possible. The skill is not ubiquitous among people, although
its potential is. The skill, when acquired, is accompanied by the ability to perceive affordances for
throwing in objects that are potential projectiles. Some objects are better than others for maximum-distance
throws, and people can perceive this by hefting them. There is an optimal weight for each size of
object. How do people do this? We have hypothesized that it is through a Smart Perceptual Mechanism
(SPM), that is, a shared dynamic between throwing and hefting. This SPM is fundamental to the ability
to learn to perceive the affordance. Ongoing research on learning to throw is showing how the
perception and the skill are coupled by the SPM.
Visual Event Perception
What is the visual information used to recognize events like a person walking with a limp or a
bouncing ball that splashes into water? We have hypothesized that events are spatial-temporal
objects, and like objects, are recognized through their forms. A trajectory form consists of the
shape of a path of motion and of the speed profile along that path. We have found that these forms
can be discriminated by 8-month-old infants and adults alike and used to recognize events seen from
different 3D perspectives.
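As a toy illustration of the definition (ours, not a stimulus from the studies below): the sketch traverses the same circular-arc path with two different timing functions, so the path shape is identical but the speed profiles, and hence the trajectory forms, differ.

    import numpy as np

    # A trajectory form = path shape + speed profile along that path (illustration only).

    t = np.linspace(0.0, 2.0 * np.pi, 1000)            # time

    # Arc-length position along the path under two timing functions.
    s_uniform = t                                       # constant speed along the path
    s_pendular = 0.5 * np.pi * (1.0 - np.cos(t))        # swings out and back, slowing at the endpoints

    def path(s):
        """Path shape: a circular arc of unit radius, as (x, y) of arc length s."""
        return np.cos(s), np.sin(s)

    x1, y1 = path(s_uniform)                            # same shape of path...
    x2, y2 = path(s_pendular)

    speed_uniform = np.abs(np.gradient(s_uniform, t))   # flat speed profile
    speed_pendular = np.abs(np.gradient(s_pendular, t)) # rises and falls: a different trajectory form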
Wickelgren, E. & Bingham, G.P. (2001). Infant sensitivity to trajectory forms. Journal of
Experimental Psychology: Human Perception and Performance, 27(4), 942-952.
Muchisky, M.M. & Bingham, G.P. (2002). Trajectory forms as a source of information about
events. Perception & Psychophysics, 64(1), 15-31.
Wickelgren, E.A. & Bingham, G.P. (2004). Perspective distortion of trajectory forms and perceptual
constancy in visual event identification. Perception & Psychophysics, 66, 629-641.
Wickelgren, E. & Bingham, G.P. (in press). Trajectory forms as information for visual event
recognition: 3D perspectives on path shape and speed profile. Perception & Psychophysics.
Bingham, G.P. & Wickelgren, E.A. (in press). Events and Actions as dynamically molded
spatial-temporal objects: A critique of the motor theory of biological motion perception. In T.
Shipley & J. Zacks (Eds.) Event Perception. Oxford, UK: Oxford University Press.
What makes trajectory forms so informative? The underlying physical dynamics of events molds the
trajectories into the specific forms that uniquely correspond to given events. Event dynamics also
relate the timing and spatial character of events. Because of this, timing can be used to perceive
the spatial scale of events, that is, the size of objects in events and how far away they are.
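A familiar physical example (ours, for illustration): the period of a freely swinging pendulum is fixed by its length,

$$T = 2\pi\sqrt{\frac{L}{g}} \quad\Longrightarrow\quad L = \frac{gT^{2}}{4\pi^{2}},$$

so a pendulum seen to swing with a 2 s period must be roughly $9.81 \times 4 / (4\pi^{2}) \approx 1$ m long, no matter how near or far it appears. The timing of the event specifies its spatial scale.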
Bingham, G.P. (1995). Dynamics and the problem of visual event recognition. In R. Port & T. van
Gelder (Eds.), Mind as Motion: Explorations in the Dynamics of Cognition (pp. 403-448). Cambridge, MA: MIT Press.
Bingham, G.P., Rosenblum, L.D. & Schmidt, R.C. (1995). Dynamics and the orientation of kinematic
forms in visual event recognition. Journal of Experimental Psychology: Human Perception and
Performance, 21(6), 1473-1493.
McConnell, D.S., Muchisky, M.M. & Bingham, G.P. (1998). The use of time and trajectory forms as
visual information about spatial scale in events. Perception & Psychophysics, 60(7), 1175-1187.
Twardy, C. & Bingham, G.P. (2002). Causation, causal perception and conservation laws. Perception &
Psychophysics, 64(6), 956-968.
Many questions remain. How do trajectory forms combine with relative phase information to provide
information about complex events like those that involve multiple human actions as observed during a
football game or dance competition, a wrestling match or a basketball game? How does the human visual
system detect and use such information that entails both a large number of degrees of freedom (that
is, lots of parts moving in different ways) and such significantly time-extended behavior?
Perception and Embodied Memory
Perception and memory are both ways to obtain information to identify objects
and/or guide actions. Perception is embodied and so can be memory, we argue. When perceiving
depth relations (i.e. spatial layout), the visual system utilizes both optic flow and image
structure information. Optic flow is strong in specifying spatial relations but it is transient.
Image structure information, on the other hand, is weak in specifying depth but it is stable.
Once calibrated by optic flow, image structures become anchors of what was perceived while the
fleeting optic flow was available. As such, the perceived spatial relations need not be kept in
the head (the traditional notion of memory); instead, they can be offloaded to and preserved in
structures in the external world. We call this "embodied memory" because the physically present
structures in the world project images that are available to the eyes in real time.
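A cartoon of the calibration idea (our sketch, not the model in the papers below): image structure supplies stable but scale-free depth relations, optic flow supplies a metric estimate only while there is motion, and scaling the former by the latter leaves a usable layout carried by the still-visible image after the flow is gone.

    import numpy as np

    # Calibrating stable image structure with transient optic flow (illustration only).

    # Relative depth ordering of three visible surfaces from static image structure (unitless).
    relative_depths = np.array([0.25, 0.6, 1.0])

    # Metric depth (metres) of the farthest surface, obtained from optic flow while moving.
    flow_depth_of_reference = 2.4

    # Calibrate: scale the stable relations by the transient metric estimate.
    scale = flow_depth_of_reference / relative_depths[-1]
    calibrated_depths = relative_depths * scale

    # The flow is gone, but the calibrated layout persists in the still-visible image structure.
    print(calibrated_depths)  # [0.6  1.44 2.4 ] metres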
Pan, J.S., Bingham, N. & Bingham, G.P. (2013). Embodied memory: Effective and stable perception by combining optic flow and image structure.
Pan, J.S. & Bingham, G.P. (submitted). Embodied memory for broken camouflage.
Experiment Demos (click on the links below to download demos)
The demos only work on Macs. Motion speed depends on the operating system and its settings.
On each trial press "g" to start, press "s" when you are done. To quit the program, press the "Apple" and "Q" keys together.
Demo 1 (Optic flow only): With optic flow information only, how many pink
squares (targets) can you click on?
Demo 2 (Image structure only): Given image structure information (green
borders around targets), how many pink squares (targets) can you identify when
the front surface progressively occludes the rear surface containing targets?
How about when the front surface instantly shifts and occludes (not progressively
occluding) the targets on the rear surface? Optic flow information is only available
in trials with progressive occlusion and not available in trials with instant shifts.
Demo 3 (Both optic flow and image structure information): With both sources
of information, how many pink squares can you click on?
Demo 4 (Optic flow only): With optic flow information only, how many targets (squares in the front) can you find at the end of each trial?
Demo 5 (Optic flow and image structure information, unpaired): Squares with blue borders on the front layer are targets; squares with blue borders on the background layer are distracters. At the end of each trial, how many targets can you find?
Demo 6 (Optic flow and image structure information, paired): Squares on the front layer are targets; squares on the background layer are distracters. Targets and distracters have borders of distinct colors. At the end of each trial, how many targets can you find?
For non-Mac users, please watch a short video for demos 4, 5 and 6. Click here.
Orientation Change Demo: Please download this program to see how embodied memory aids perception across an orientation change. In all experiments, observers use the mouse to click on the hidden locations of targets (not on the visible windows through which the targets were previously seen). One point is awarded for a correct identification, one point is deducted for an incorrect identification, and there is no change in score if no response is made.
In the “1_noRot” folder, the demo is for the condition with no orientation change (the baseline condition). The stimuli contain a structure-from-motion (SFM) phase that reveals the spatial relations, followed by translation of the rear surface, which progressively occludes the targets (the pink circles). Then, after some delay (during which there is either image structure information or a black screen), observers click on the hidden locations of the targets.
In the “2_withRot” folder, there are three demos, corresponding to the manuscript's Experiments 2 and 3; all of them contain an orientation change. In the “1_seeRot_seeDuringDelay” folder, the demo illustrates the condition in which observers see SFM first, then progressive occlusion of the targets, and then the whole scene tilts, producing an orientation change that is visible to observers. Finally, observers wait through either a long or a short delay, during which image structure information is always available, before they identify the hidden targets. In the “2_seeRot_noSeeDuringDelay” folder, the demo illustrates the condition in which the orientation change is visible, but image structure information is unavailable during the delay (i.e. the screen is black). Finally, in the “3_noSeeRot_noSeeDuringDelay” folder, the demo illustrates the condition in which the orientation change occurs while the screen is black (so the change is imperceptible) and the screen remains black during the delay (image structure is unavailable during the delay).
Press Command-Q to quit the experiment at any time.
For non-Mac users, please see this video demo.
Low Vision Experiment: Beyond simplified lab-based tasks, combined optic flow and image structure information also aids the perception of complex daily events. When image-based information becomes poor and insufficient to specify events, due to low image quality, degraded viewing conditions and/or low visual acuity, motion-generated optic flow information compensates and yields unambiguous perception of events. Optic flow powerfully calibrates the image structures, and otherwise meaningless, blurred pictures start making sense when played in sequence (with motion). Download the demo to see how it works. (This demo requires Java and runs on all operating systems.)
Low Vision Experiment Demo
Developmental Coordination Disorder (DCD)
Developmental Coordination Disorder (DCD) is a "motor disorder" that affects approximately 6% of the population. Through the repercussions of poor motor control, DCD affects academic performance, social experience and emotional health. DCD is often comorbid with dyslexia, autism spectrum disorder, and ADHD, among other perceptual and cognitive disorders. The primary objective of these projects is to understand how barriers to effective motor learning created by sensorimotor deficits can be overcome to enable children with DCD, and other developmental disorders often comorbid with DCD, to learn to perform good compliant manual actions and especially handwriting. The results will contribute to a theoretical basis for explaining the problems faced by children with DCD, with direct implications for the design of therapeutic training for these children.