Hi! I am a neuroscientist interested in how populations of neurons interact to perceive the world and drive behavior, a topic I approach using techniques from statistics and machine learning.
I am currently a postdoc at Harvard University with Sam Gershman, where I study the neural representations of reinforcement learning. Previously, I got my Ph.D. in neural computation and machine learning at Carnegie Mellon University, where I studied learning using brain-computer interfaces with Byron Yu, Steve Chase, and Aaron Batista.
Summary: Animals are thought to predict rewards using reinforcement learning (RL). In environments with hidden states, animals may require "beliefs," or probabilistic estimates of the hidden states. We show that such belief-like representations emerge in recurrent neural networks (RNNs) trained to perform RL in such environments.
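For concreteness, here is a toy sketch (illustrative, not the paper's code) of the Bayesian belief update that such an RNN would have to approximate, assuming a made-up two-state world with Markov transitions and noisy observations:

```python
import numpy as np

# Toy partially observable world: two hidden states, Markov transitions,
# and noisy observations. The "belief" is P(state | observation history).
T = np.array([[0.9, 0.1],   # T[i, j] = P(next state = j | current state = i)
              [0.1, 0.9]])
O = np.array([[0.8, 0.2],   # O[o, s] = P(observation = o | state = s)
              [0.2, 0.8]])

def belief_update(belief, obs):
    """One step of Bayesian filtering: predict forward through the
    transitions, then reweight by the likelihood of the new observation."""
    predicted = T.T @ belief           # prior over the next hidden state
    posterior = O[obs] * predicted     # multiply by observation likelihood
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])          # start maximally uncertain
for obs in [0, 0, 1, 1, 1]:            # a made-up observation sequence
    belief = belief_update(belief, obs)
    print(belief)
```

The RNN is never handed these transition or observation probabilities; the finding is that its hidden state nonetheless comes to track this kind of posterior.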
Summary: In this perspective, we consider the idea that learning in the brain can be described in terms of optimization, similar to learning in artificial neural networks (ANNs). We highlight three key features of how neural population activity changes with learning that differ from ANNs, suggesting refinements to this optimization view.
Summary: We identified large fluctuations in neural population activity in motor cortex (M1) indicative of arousal-like internal state changes. These changes in neural activity helped to explain why animals learned some tasks more quickly than others.
Summary: A brain–machine interface, or BMI, directly connects the brain to the external world, translating a user's internal motor commands into action. In this chapter, we discuss the four basic components of an intracortical BMI: an intracortical neural recording, a decoding algorithm, an output device, and sensory feedback.
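As a rough illustration of the decoding component (a minimal sketch with made-up numbers, not any particular system's pipeline), many intracortical BMIs map binned spike counts to cursor velocity through a linear readout:

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 96                                   # e.g., one electrode array
D = rng.standard_normal((2, n_neurons)) * 0.1    # hypothetical linear decoder:
                                                 # spike counts -> cursor velocity
cursor = np.zeros(2)
for t in range(100):
    spikes = rng.poisson(lam=5.0, size=n_neurons)   # simulated binned spike counts
    velocity = D @ (spikes - 5.0)    # decode intended velocity (mean-subtracted)
    cursor += 0.02 * velocity        # output device: move the cursor
    # In closed loop, the subject sees the cursor (sensory feedback)
    # and adjusts their neural activity on the next time step.
print(cursor)
```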
Summary: We establish that new neural activity patterns emerge with learning, providing evidence that the formation of new population activity patterns can underlie the acquisition of new skills.
Summary: Millions of neurons in the brain control the activity of tens of muscles in the arm, meaning neural activity is redundant. We compared various hypotheses for how the brain deals with this redundancy by recording in primary motor cortex while subjects performed a brain-computer interface task.
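To see the redundancy concretely: a linear mapping from many neurons to a few muscles has a large null space, so many different activity patterns produce identical output. A minimal sketch with made-up dimensions:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
W = rng.standard_normal((10, 100))   # 100 "neurons" driving 10 "muscles"

N = null_space(W)                    # 90-dimensional null space of W
activity = rng.standard_normal(100)
shifted = activity + N @ rng.standard_normal(N.shape[1])  # move within the null space

# Both activity patterns produce (numerically) the same muscle output.
print(np.allclose(W @ activity, W @ shifted))   # True
```

The question the paper addresses is which of these many output-equivalent patterns the brain actually uses.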
Summary: We augment a neural network known as a variational autoencoder (VAE) to classify the observed data while also learning its latent representation. We show that when this network is combined with an LSTM and used to generate music, the network plays fewer incorrect notes than a standard VAE+LSTM.
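A minimal sketch of the augmentation (illustrative layer sizes and names, with the LSTM generation stage omitted): the usual VAE objective plus a classification loss on the latent code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassifyingVAE(nn.Module):
    def __init__(self, x_dim=88, z_dim=16, n_classes=10):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)
        self.cls = nn.Linear(z_dim, n_classes)  # the added classification head

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), self.cls(z), mu, logvar

def loss_fn(model, x, y):
    recon, logits, mu, logvar = model(x)
    rec = F.binary_cross_entropy_with_logits(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    clf = F.cross_entropy(logits, y)   # the extra term vs. a standard VAE
    return rec + kl + clf

# Toy usage with made-up binary piano-roll frames and labels:
model = ClassifyingVAE()
x = torch.rand(32, 88).round()
y = torch.randint(0, 10, (32,))
loss_fn(model, x, y).backward()
```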
Summary: We compare the time-varying improvements in sensitivity during motion discrimination tasks in 2D and 3D, and find that the two are remarkably similar, albeit with a lower signal-to-noise ratio in 3D.
Summary: We show that cells in the lateral intraparietal area (LIP) have firing activity that simultaneously carries decision signals and decision-irrelevant sensory signals. We conclude that LIP cells show a broader range of response motifs than previously considered.