Lab Presentation / Publication Note: Members of the Human Perception Lab presented papers and posters at the Annual Meeting of the Cognitive Science Society.
Everett Mettler presented a paper entitled Comparing adaptive and random spacing schedules during learning to mastery criteria. This work was supported by NSF awards NSF 1109228, “Adaptive Sequencing and Perceptual Learning Technologies in Mathematics and Science”, and NSF ECR1644916, “Advancing Theory and Application in Perceptual and Adaptive Learning to Improve Community College Mathematics”. The published proceedings citation is:
Mettler, E., Massey, C., Burke, T., & Kellman, P. J. (2020). Comparing adaptive and random spacing schedules during learning to mastery criteria. Proceedings of the 42nd Annual Conference of the Cognitive Science Society (pp. 773-779). Toronto, ON: Cognitive Science Society.
Read the proceedings paper
Everett Mettler presented a poster entitled Adaptive vs. fixed spacing of learning items: Evidence from studies of learning and transfer in chemistry education. This work was supported by NSF awards NSF 1109228, “Adaptive Sequencing and Perceptual Learning Technologies in Mathematics and Science”, and NSF ECR1644916, “Advancing Theory and Application in Perceptual and Adaptive Learning to Improve Community College Mathematics”, as well as by NIH award 1RC1HD063338-01, “Using Perceptual and Adaptive Learning to Advance Chemistry Education”. The published proceedings citation is:
Mettler, E., El-Ashmawy, A. K., Massey, C. M., & Kellman, P. J. (2020). Adaptive vs. fixed spacing of learning items: Evidence from studies of learning and transfer in chemistry education. Proceedings of the 42nd Annual Conference of the Cognitive Science Society (pp. 1598-1604). Toronto, ON: Cognitive Science Society.
Read the proceedings paper
April 2020
Lab Publication Note: Members of the Human Perception Lab, in collaboration with members of the David Geffen School of Medicine and the Harbor-UCLA Medical Center, published an article in the journal Academic Emergency Medicine Education and Training:
Krasne, S., Stevens, C. D., Kellman, P. J., & Niemann, J. T. (2020). Mastering electrocardiogram interpretation skills through a perceptual and adaptive learning module. Academic Emergency Medicine Education and Training. doi: 10.1002/aet2.10454
Read the full article
April 2020
Lab Publication Note: Members of the Human Perception Lab and the Computational Vision and Learning Lab published an article in Vision Research:
Baker, N., Lu, H., Erlikhman, G., & Kellman, P. J. (2020). Local features and global shape information in object classification by deep convolutional neural networks. Vision Research, 172, 46-61. doi: 10.1016/j.visres.2020.04.003
Read the full article
September 2019
The National Cancer Institute (NCI) of the National Institutes of Health (NIH) has awarded a 4-year grant to the UCLA Human Perception Laboratory for the project Perceptual and Adaptive Learning in Cancer Image Interpretation under the program Perception and Cognition in Cancer Image Interpretation. The goal of this project is to investigate principles and mechanisms of perceptual and adaptive learning in the learning of multiple diagnostic categories in dermatologic screening and mammography, with the ultimate aim of improving training and proficiency in cancer image interpretation.
July 2019
Lab Presentation / Publication Note: Everett Mettler presented a poster entitled The synergy of passive and active learning modes in adaptive perceptual learning at the Annual Meeting of the Cognitive Science Society in Montreal. This work was supported by NSF award NSF ECR1644916, “Advancing Theory and Application in Perceptual and Adaptive Learning to Improve Community College Mathematics”. The published proceedings citation is:
Mettler, E., Phillips, A., Massey, C., Burke, T., Garrigan, P., & Kellman, P. J. (2019). The synergy of passive and active learning modes in adaptive perceptual learning. In A.K. Goel, C.M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (pp. 2351-2357). Montreal, QC: Cognitive Science Society.
Read the proceedings paper
May 2019
Lab Presentation / Publication Note: Members of the Human Perception Lab presented papers and posters at the 2019 Annual Meeting of the Vision Sciences Society.
Everett Mettler presented a paper entitled Perceptual learning benefits from strategic scheduling of passive presentations and active, adaptive learning. This work was supported by NSF award NSF ECR1644916, “Advancing Theory and Application in Perceptual and Adaptive Learning to Improve Community College Mathematics”. The published abstract citation is:
Mettler, E. W., Phillips, A. S., Burke, T., Garrigan, P., Massey, C. M., & Kellman, P. J. (2019). Perceptual learning benefits from strategic scheduling of passive presentations and active, adaptive learning. Journal of Vision, 19(10), 293.
Read the published abstract
Gennady Erlikhman and Nicholas Baker presented a poster entitled Recursive networks reveal illusory contour classification images. The published abstract citation is:
Kellman, P. J., Erlikhman, G., Baker, N., & Lu, H. (2019). Recursive networks reveal illusory contour classification images. Journal of Vision, 19(10), 241a.
Read the published abstract
Nicholas Baker presented a paper entitled Constant curvature representations of contour shape. The published abstract citation is:
Baker, N., & Kellman, P. J. (2019). Constant curvature representations of contour shape. Journal of Vision, 19(10), 94.
Read the published abstract
Susan Carrigan presented a poster entitled From early contour linking to perception of continuous objects: Specifying scene constraints in a two-stage model of amodal and modal completion. The published abstract citation is:
Carrigan, S. B., & Kellman, P. J. (2019). From early contour linking to perception of continuous objects: Specifying scene constraints in a two-stage model of amodal and modal completion. Journal of Vision (in press).
April 2019
Phil Kellman was interviewed for an episode of the Australian Broadcasting Corporation’s podcast The Science Show. In the episode, entitled “Challenges for AI visual recognition,” Phil Kellman and host Robyn Williams discuss our recent research on deep learning networks and shape perception, and its possible implications for pressing topics such as self-driving cars.
What happens when the driverless car approaches a stop sign sprayed with graffiti? Does the car stop?
December 2018
Lab Publication Note: Members of the Human Perception Lab and the Computational Vision and Learning Lab published an article in PLOS Computational Biology:
Baker, N., Lu, H., Erlikhman, G., & Kellman, P. J. (2018). Deep convolutional networks do not classify based on global object shape. PLOS Computational Biology, 14(12), e1006613.
Read the full article
This work was covered by several national and international news outlets.
Lab Presentation / Publication Note: Members of the Human Perception Lab presented papers and posters at the Annual Meeting of the Cognitive Science Society in Madison, WI.
Everett Mettler presented a paper entitled Enhancing adaptive learning through strategic scheduling of passive and active learning modes. This work was supported by NSF award NSF ECR1644916, “Advancing Theory and Application in Perceptual and Adaptive Learning to Improve Community College Mathematics”. The published proceedings citation is:
Mettler, E., Massey, C. M., Burke, T., Garrigan, P., & Kellman, P. J. (2018). Enhancing adaptive learning through strategic scheduling of passive and active learning modes. In T.T. Rogers, M. Rau, X. Zhu, & C. W. Kalish (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (pp. 768-773). Austin, TX: Cognitive Science Society.
Read the proceedings paper
Nicholas Baker presented a poster entitled Deep convolutional networks do not perceive illusory contours. The published proceedings citation is:
Baker, N., Kellman, P. J., Erlikhman, G., & Lu, H. (2018). Deep convolutional networks do not perceive illusory contours. In T.T. Rogers, M. Rau, X. Zhu, & C. W. Kalish (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (pp. 1310-1315). Austin, TX: Cognitive Science Society.
Read the proceedings paper
Our visual perception research has been supported by the National Eye Institute (NEI), the National Science Foundation (NSF), and the National Institute of Justice (NIJ).
Our research in perceptual learning, adaptive learning, and their applications to learning technology has been supported in recent years by the National Science Foundation (NSF), the Institute of Education Sciences (IES) at the US Department of Education, the National Institutes of Health (NIH), the US Office of Naval Research (ONR), and the National Aeronautics and Space Administration (NASA).