Visual Perception of Contours, Surfaces, and Objects: Basic Research and Modeling

A central focus of our lab is discovering the processes and mechanisms involved in the visual perception of objects. We use psychophysical research and computational modeling in this work. Much of our research has focused on attaining a comprehensive understanding of how objects, contours, and surfaces are perceived from information that is fragmentary in space and time. We are also deeply interested in visual perception and representation of shape. (See separate topic description on Shape.)

Contour and Surface Interpolation Processes in Object Formation

Our research has suggested that two complementary processes support perception of objects from fragmentary visual information. Contour interpolation processes depend on spatial relations of visible edge segments, and surface interpolation processes connect spatially separated visible regions based on similarity of surface properties. In the figure below, the fragments separated by the gray occluding object are linked by both contour and surface interpolation in (A), by contour interpolation only in (B), by surface interpolation only in (C), and by neither interpolation process in (D).

Kellman, P.J., Garrigan, P., & Erlikhman, G. (2013). Challenges in understanding visual shape perception and representation: Bridging subsymbolic and symbolic coding. In S.J. Dickinson & Z. Pizlo (Eds.), Shape Perception in Human and Computer Vision: An Interdisciplinary Perspective (249-274). London: Springer-Verlag.

Contour and Surface Interpolation. (A) Both contour and surface interpolation processes contribute to perceived unity of the three black regions behind the gray occluder. (B) Contour interpolation alone. (C) Surface interpolation alone. (D) Both contour and surface interpolation have been disrupted, causing the blue, yellow, and black regions to appear as three separate objects.

Our understanding of both processes is advancing in current projects, including new work that supports a two-stage theory of contour interpolation. A number of discrepant phenomena and controversies in the field can be understood by a model that posits an initial, "promiscuous" contour-linking stage, followed by subsequent processing that implements a variety of scene constraints, ensuring consistency of border ownership, closure, and the appearance of crossing interpolations in determining perceived objects. The paradox of some difficult-to-understand object formation phenomena is that perception of unified objects depends both on a highly automated, geometrically constrained, and "promiscuous" contour-linking process, and on a second stage that reinforces, weakens, or even deletes contour connections from Stage 1 in the final scene representation. One weakness of this general account in the past has been the lack of a clear method for quantifying effects in Stage 2 and modeling their interactions. Recent work underway in our lab is beginning to clarify this picture greatly and to provide clear support for the overall account of a two-stage process.

Neural Modeling of Visual Interpolation Processes

We have implemented a neural-style model of the basic contour-linking process in contour interpolation (Kalar et al., 2010, Vision Research). This model draws heavily on earlier work by Heitger et al. (1992, 1998) but shows that the basic contour-linking stage operates via a common mechanism in both illusory-contour and occluded-contour contexts.

Input displays (left) and model outputs from a neural model of contour interpolation in modal and amodal completion. See Kalar, D. J., Garrigan, P., Wickens, T. D., Hilger, J. D., & Kellman, P. J. (2010). A unified model of illusory and occluded contour interpolation. Vision Research, 50(3), 284-299.

Current modeling work involves two major efforts:

1) Developing a Two-Process Theory of Contour Interpolation

Whereas the first stage of contour linking in contour interpolation likely involves a modular perceptual mechanism based on scene geometry, the second stage may involve satisfaction of multiple constraints, application of priors, and combination of factors with differing weights. These different approaches to modeling perceptual phenomena may come together in a two-process theory. A weakness to date is that there has not been much systematic quantification of second-stage constraints or their interactions, owing in part to the lack of good methods for isolating the second stage. In current work, we are developing successful methods for doing so and have begun to quantify scene constraints in contour interpolation and model their interactions.
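The scene geometry driving the first stage is contour relatability (Kellman & Shipley, 1991). The sketch below is an illustrative simplification, not the lab's implemented model (the function names and tolerance values are our own): two edges are treated as relatable when their linear extensions intersect ahead of both endpoints and the interpolated contour would bend through no more than 90 degrees, with collinear edges handled as a degenerate case.

```python
def _ray_intersection(p1, d1, p2, d2):
    # Solve p1 + a*d1 == p2 + b*d2; returns (a, b), or None if parallel.
    det = -d1[0] * d2[1] + d2[0] * d1[1]
    if abs(det) < 1e-9:
        return None
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    a = (-rx * d2[1] + d2[0] * ry) / det
    b = (d1[0] * ry - d1[1] * rx) / det
    return a, b

def relatable(p1, t1, p2, t2):
    """Simplified relatability test.
    p1, p2: endpoints of the two visible edges (x, y).
    t1, t2: unit tangents at those endpoints, pointing into the gap."""
    hit = _ray_intersection(p1, t1, p2, t2)
    if hit is None:
        # Parallel extensions: relatable only if the edges are exactly
        # collinear, face each other, and point in opposite directions.
        rx, ry = p2[0] - p1[0], p2[1] - p1[1]
        collinear = abs(rx * t1[1] - ry * t1[0]) < 1e-9
        ahead = rx * t1[0] + ry * t1[1] > 0
        opposed = t1[0] * t2[0] + t1[1] * t2[1] < 0
        return collinear and ahead and opposed
    a, b = hit
    if a < 0 or b < 0:  # extensions must meet ahead of both endpoints
        return False
    # The interpolated contour leaves p1 along t1 and arrives at p2 along
    # -t2; a bend of at most 90 degrees means cos(turn angle) >= 0.
    cos_turn = -(t1[0] * t2[0] + t1[1] * t2[1])
    return cos_turn >= -1e-9
```

On this criterion, collinear edges and edges whose extensions meet at a right angle pass, while parallel-but-offset edges and sharper bends fail, consistent with the relatable and non-relatable configurations in the figure below.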

Kellman, P.J., Garrigan, P., & Erlikhman, G. (2013). Challenges in understanding visual shape perception and representation: Bridging subsymbolic and symbolic coding. In S.J. Dickinson & Z. Pizlo (Eds.), Shape Perception in Human and Computer Vision: An Interdisciplinary Perspective (249-274). London: Springer-Verlag.

Graphical examples of contour relatability, which expresses the underlying geometry of contour interpolation in both amodal and modal completion. Top Row: Relatable edges produce amodal completion of the visible black regions (left column), and for the same physically-specified edges, illusory contours in the right column. Bottom Row: Disruption of contour relatability leads to perception of separate fragments (left) and the absence of illusory contours (right).

Carrigan, S. B., & Kellman, P. J. (2019). From early contour linking to perception of continuous objects: Specifying scene constraints in a two-stage model of amodal and modal completion. Journal of Vision (in press).

Using Character Recognition to Assess Outputs of the Second Stage of Contour Interpolation. Displays of the sort shown above, which derive from a classic illustration by Bregman (1991) and are used here in a speeded performance task, may allow access to the second stage of contour interpolation, because recognition processes access the scene descriptions generated there. These methods allow quantification of the strength of particular scene constraints, alone and in combination. The example shown uses opposite contrast polarity for relatable fragments (cf. Su, He, & Ooi, 2010).

2) Bridging Subsymbolic Activations and Symbolic Descriptions in Vision

A major objective of continuing work is to bridge the subsymbolic and symbolic aspects of visual processing. Whereas early cortical processing involves many contrast-sensitive units with local receptive fields, perceptual descriptions of objects and their shapes require more configural and symbolic descriptions. Most research in vision works on one side or the other of this divide. For example, work on neural-style models of contour interpolation (e.g., Heitger et al., 1998; Kalar et al., 2010) uses spatially parallel distributed operators to indicate interpolation "activation" at various locations; their outputs are typically images in which the human observer can see, for example, the illusory contour connections between inducing elements. However, the contours in these models are really just sets of points of activation. The models have no representation of contour tokens, their shapes, whether contours close, etc. Arguably, human perception has all of these symbolic descriptions, and more. We are working on rigorously specified, next-generation models that go from natural images to descriptions of the completed contours, whole objects, and shapes in the scene descriptions obtained from seeing.
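As a toy illustration of this gap, consider converting a spatially distributed activation map into discrete contour tokens. The sketch below is our own minimal example, not the lab's model (the function name, data format, and token fields are assumptions): it thresholds activations and groups connected active locations into token records, a first step toward the kind of symbolic summary that point-wise activation maps lack.

```python
def extract_contour_tokens(activation, threshold=0.5):
    """Group above-threshold cells of a 2-D activation map (a list of
    equal-length rows of floats) into 8-connected components, returning
    one token record per component."""
    h, w = len(activation), len(activation[0])
    active = [[activation[y][x] > threshold for x in range(w)]
              for y in range(h)]
    visited = [[False] * w for _ in range(h)]
    tokens = []
    for sy in range(h):
        for sx in range(w):
            if not active[sy][sx] or visited[sy][sx]:
                continue
            # Flood-fill one connected component into a contour token.
            stack, cells = [(sy, sx)], []
            visited[sy][sx] = True
            while stack:
                y, x = stack.pop()
                cells.append((y, x))
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and active[ny][nx] and not visited[ny][nx]):
                            visited[ny][nx] = True
                            stack.append((ny, nx))
            tokens.append({"cells": sorted(cells), "length": len(cells)})
    return tokens
```

Even this crude step yields discrete tokens with queryable properties (extent, membership), whereas the raw map supports only point-wise readout; real models would additionally need contour shape, closure, and ownership.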

3D and Spatiotemporal Interpolation

Our lab and collaborators have made pioneering contributions in the study of contour and surface interpolation in object formation as 3D and spatiotemporal processes (Kellman, 1984, Perception & Psychophysics; Kellman, Garrigan & Shipley, 2005, Psych. Review; Palmer, Kellman & Shipley, 2006, JEP: General; Fantoni, Gerbino, & Kellman, 2008, Vision Research). We also discovered, characterized, and developed models of the related process of spatiotemporal boundary formation (SBF); see Shipley & Kellman, 1994, JEP: General; Erlikhman & Kellman, 2014, Frontiers in Human Neuroscience; 2015, Vision Research; 2016, Frontiers in Psychology. Some demos from this work can be seen under the Shape section of this website. Research efforts continue to further advance these areas, often with a focus on the core underlying processes that unify 2D, 3D and spatiotemporal object formation.

From Palmer, E. M., Kellman, P. J., & Shipley, T. F. (2006). A theory of dynamic occluded and illusory object perception. Journal of Experimental Psychology: General, 135(4), 513-541.


  • Philip J. Kellman
  • Everett Mettler
  • Suzy Carrigan
  • Nicholas Baker


  • Gennady Erlikhman
  • Jennifer Mnookin (UCLA)
  • Itiel Dror (UCL)
  • Hongjing Lu (UCLA)
  • Patrick Garrigan (SJU)
  • Brian Keane (Rutgers)
  • Tandra Ghose (TU Kaiserslautern)

Selected Publications