James DiCarlo
Department Head
- MIT Department of Brain and Cognitive Sciences
I am currently a Professor of Neuroscience at the McGovern Institute for Brain Research and Department Head of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. I have been awarded the Alfred P. Sloan Research Fellowship (2002), the Pew Scholar Award in Biomedical Sciences (2002-2006), and the McKnight Scholar Award in Neuroscience (2006-2009). The overarching goal of my research group is to gain a deep understanding of how the brain develops and executes its remarkably powerful neuronal representation of visual objects, and how that representation underlies perception, cognition, and behavior. Over the last 15 years, using the non-human primate model system, my collaborators and I have shown that populations of neurons at the highest cortical visual processing stage, the inferior temporal cortex (IT), rapidly convey explicit representations of object identity. We have also shown that these explicit object representations operate in similar ways during natural, free-viewing conditions and can represent multiple objects simultaneously. My group has also developed and applied a series of new technologies that enable measurement and perturbation of neural circuits in non-human primates. We are currently combining large-scale neurophysiology, brain imaging, direct neural perturbation methods, and machine learning to build neurally mechanistic computational models of the ventral visual stream and its support of cognition and behavior.
Our recent progress and ongoing work center on building image-based computational models that explain visual neural responses, mapping those models onto the neural tissue and testing causality, and examining how those neural mechanisms might develop from supervised and unsupervised visual experience. Based on that work, we are closing in on an end-to-end understanding of the neural mechanisms of human visual object recognition, that is, from image to neuronal activity to perceptual report. We aim to use this understanding to inspire and develop new artificial vision systems, to provide a basis for new neural prosthetics (brain-machine interfaces) that restore or augment lost senses, and to lay a foundation for understanding how high-level sensory representations are altered in human conditions such as agnosia, autism, and dyslexia.