We use psychophysical research and modeling to understand the processes and mechanisms of human object, space, and motion perception. Special areas of emphasis include how the visual system produces coherent and stable representations of contours, surfaces, and objects from input that is fragmentary in space and time, as well as the perception and representation of shape. Learn more →
We study basic and applied aspects of perceptual and adaptive learning, especially in complex, real-world learning domains. Our goals are to understand how perceptual learning works, how it leads to discovery of relational and abstract structure as expertise develops in particular domains, and how perception and perceptual learning interact with other cognitive processes. Learn more →
Perception and Cognition in Learning
Accompanying our basic research efforts are efforts to apply perceptual and adaptive learning to real-world educational and training domains. We have developed novel adaptive learning algorithms with the goal of optimizing spacing and mastery in learning. Our ARTS (Adaptive Response-Time-Based Sequencing) System can be applied to almost any kind of learning (e.g., factual, procedural, perceptual), but much of our focus has been on connecting perceptual and adaptive learning in PALMs (perceptual-adaptive learning modules). PALMs have shown great value in accelerating pattern recognition and transfer in advanced learning domains, including mathematics and science learning, medical learning, and surgical training. Learn more →
News & Events
Our recent research on deep learning networks and shape perception, published in PLOS Computational Biology, was highlighted as the Featured Story on UCLA Today's home page. The research was also covered by Science Daily, AAAS EurekAlert, Phys.org, and other scientific and media outlets. Authors Nick Baker, Hongjing Lu, Gennady Erlikhman, and Phil Kellman examined the best object recognition systems in artificial intelligence, known as deep convolutional neural networks (DCNNs), and asked how closely the processing in these systems resembles that of the human brain. From a series of experiments, the team concluded that humans recognize objects based on their shapes, while deep learning networks respond to fragments of objects and surface textures but have no access to overall shape. Read the UCLA Newsroom article → or read the full research article in PLOS Computational Biology →
An editorial in The British Journal of Anaesthesia featured the use of perceptual and adaptive learning technologies in training anesthesiology residents, concluding that this approach has "the potential to revolutionise our traditional approaches to learning in anaesthesia." Read the full editorial →
The US Patent and Trademark Office (USPTO) awarded a patent entitled "System and Method for Adaptive Perceptual Learning" to Phil Kellman. The patent describes perceptual learning technology that accelerates pattern recognition and transfer to novel cases in complex domains where diagnostic structure must be detected amidst variation. It also discloses novel methods of combining perceptual and adaptive learning to produce greater efficiencies and more comprehensive learning in perceptual and categorical learning. For more information, contact email@example.com →
Professor Kellman has been promoted to Distinguished Professor in the UCLA Psychology Department and appointed Adjunct Professor of Surgery in the David Geffen School of Medicine at UCLA.
January 23, 2015
Professor Kellman served as an expert in the White House Workshop on Bridging Neuroscience and Learning. Read the full article in the UCLA Newsroom →
Professor Kellman is featured in the Faculty Spotlight on the UCLA Psychology website. Learn more →
Phil Kellman, Director of the UCLA HPL, has been elected to the Society of Experimental Psychologists, the oldest honorary society in scientific psychology. View an early picture of the Society →
The UCLA Human Perception Laboratory has begun testing in 32 schools in the Philadelphia, Pennsylvania area as part of a major scale-up project to study perceptual and adaptive learning interventions in middle-school mathematics. The project, funded by the Institute of Education Sciences of the US Department of Education, is a collaboration between UCLA (Phil Kellman, PI) and two teams at the University of Pennsylvania: one at the Institute for Research in Cognitive Science (IRCS), with Christine Massey as Co-PI, and one at the University of Pennsylvania Graduate School of Education, with Andrew Porter and Laura Desimone as Co-PIs. This project follows up on earlier successful results showing that perceptual-adaptive learning modules improve learning of fractions and measurement. Visit the PLM Study website →
The US Office of Naval Research (ONR) has awarded the UCLA Human Perception Laboratory, in collaboration with the UCLA Center for Advanced Surgical and Interventional Technology (CASIT), a grant to study and develop cutting-edge learning technologies in medical simulation. The goal of the project is to apply perceptual and adaptive learning technologies to simulation in surgical training, with one major aim being to reduce reliance on live animals in surgical training. Erik Dutson, MD is the PI on this project, and Phil Kellman is the Co-PI.
Our visual perception research has been supported by the National Eye Institute (NEI), the National Science Foundation (NSF), and the National Institute of Justice (NIJ).
Our research in perceptual learning, adaptive learning, and their applications to learning technology has been supported in recent years by the National Science Foundation (NSF), the Institute of Education Sciences (IES) at the US Department of Education, the National Institutes of Health (NIH), the US Office of Naval Research (ONR), and the National Aeronautics and Space Administration (NASA).