I'm a first-year PhD student in computational neuroscience at Columbia. I'm interested in understanding how high-dimensional neural representations support complex behaviors and how these capabilities are learned. I graduated from Stanford in 2019 with a bachelor's degree in math and a master's in computer science. At Stanford, I worked with Shaul Druckmann on modeling neural dynamics in mouse anterior lateral motor cortex under optogenetic perturbations, and with Surya Ganguli on developing unifying theories of representations in the retina and visual cortex. I also spent time in Jay McClelland's lab at Stanford, using deep learning models to study memory and visual attention, and at Cerebras Systems, a machine learning hardware startup.

Publications

“Learning to Learn with Feedback and Local Plasticity.” Lindsey, J. and Litwin-Kumar, A. arXiv preprint. Link

“A Unified Theory of Early Visual Representations from Retina to Cortex through Anatomically Constrained Deep CNNs.” Lindsey, J.*, Ocko, S.*, Ganguli, S., and Deny, S. Accepted for oral presentation at ICLR 2019. Link

“The Emergence of Multiple Retinal Cell Types through Efficient Coding of Natural Movies.” Ocko, S.*, Lindsey, J.*, Ganguli, S., and Deny, S. Presented at NeurIPS 2018. Link

“A Neural Network Model of Complementary Learning Systems.” Jain, M.* and Lindsey, J.* CogSci 2018 proceedings (oral presentation). Link

“Semiparametric Reinforcement Learning.” Jain, M.* and Lindsey, J.* ICLR 2018, Workshop Track. Link

“Pre-Training Attention Mechanisms.” Lindsey, J.* NeurIPS 2017 Workshop on Cognitively Informed Artificial Intelligence. Link

(* = primary contribution)

Deep Learning Implementations

Fun Projects from Long Ago