
I am a recent graduate of the Cognitive Science department at Johns Hopkins University. My dissertation examines models of structure assembly in neural networks. Formats like Holographic Reduced Representations (HRRs) and Tensor Product Representations (TPRs) bridge the gap between cognition as modeled in classical Cognitive Science (algorithms applied to discrete symbols) and the broadly numerical computational machinery of the brain. I combined these and related computational formats with a representation-optimization procedure drawn from Harmonic Grammar, designed three models using these principles, and applied them to the task of Knowledge Base Completion. Previously I was an Ertegun Scholar at the University of Oxford, and before that I did my undergraduate degree at McGill University in Montreal.


Welcome to my home page, where you can find some information about me, my research, and my projects. TL;DR: I'm a Computational Cognitive Scientist fresh out of the PhD program in Cognitive Science at JHU. Check out my dissertation and the summary slides from my recent thesis defense!

What is Cognitive Science?

Cognitive Science is the study of the mind through analysis of the behavior it gives rise to, carried out at a higher level of abstraction than the study of the brain. The mind/brain is typically thought of as a kind of computer: an object that receives inputs, operates over them according to a set of rules, and produces an output, the observed behavior. And just like a computer, it can be described at various levels of abstraction, e.g. at the level of "software" versus "hardware". Cognitive Science focuses primarily on the "software" level, which in practice means specifying a system of computations capable of giving rise to the behavior we observe.
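One way to make the software/hardware distinction concrete is the toy sketch below (my own illustration, not standard Cognitive Science code): two entirely different realizations of the same computational-level function, addition, which are indistinguishable at the level of input-output behavior.

def add_builtin(a: int, b: int) -> int:
    # Realization 1: the host machine's arithmetic, via Python's + operator.
    return a + b

def add_ripple_carry(a: int, b: int) -> int:
    # Realization 2: a simulated ripple-carry adder built from bitwise
    # AND/XOR/shift, roughly the way digital hardware adds numbers.
    # (Assumes non-negative integers.)
    while b:
        carry = (a & b) << 1  # positions where both bits are 1 carry over
        a = a ^ b             # bitwise sum, ignoring the carries
        b = carry
    return a

# Identical at the "software" (computational) level, despite entirely
# different "hardware": the same inputs always yield the same output.
assert all(add_builtin(x, y) == add_ripple_carry(x, y)
           for x in range(64) for y in range(64))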

Linguistics and Cognitive Science

Linguistics is often assumed to be concerned with learning many languages personally, with the detailed study of the grammar of some specific language, or with effective pedagogical techniques for language learning. While each of these falls partly within the scope of this quite diverse field, the core of the discipline today concerns the nature of a mental system, computational in the sense discussed above, that explains the remarkable capacity of humans to learn a language naturally, effortlessly, and with little variation in ability.

These facts about linguistic ability make language in many ways analogous to walking on two legs. Every human born with the physical prerequisites learns to walk, and most receive minimal explicit training in how to do it. The reason we learn to walk is that we have an innate "walking faculty" that, supplemented with the right kind of experience, leads to a mature system in which we walk on two legs without any conscious effort. Incidentally, it has been shown that even systems like the visual system, which at a superficial level seem to implicate mainly physiological receptors, can be permanently prevented from developing neurally if appropriate stimuli are withheld during critical periods of development.

My Dissertation

Research into how the brain implements language, at least research framed in terms of computation over explicit representations, is in its infancy. The key aspect of language from this point of view is its discrete combinatorial nature: morphemes and lexemes are more or less discrete atoms in a combinatorial system that arranges them into well-formed sequences, and abstract descriptions of that system sit at the "computational level" in the sense discussed above. Neural networks provide a bridge to the numerical style of computation found in the brain. To satisfy some basic observations about the discrete-combinatorial, compositional nature of language, the properties that let you parse and produce an unbounded array of novel sentences, a neural network must provide an explicit account of (1) how symbolic atoms, realized as vectors, are bound to structural positions, and (2) what computational procedures allow those representations to be manipulated such that they instantiate compositionality and the other key properties. This is known as the "filler-role binding problem" for neural computation.

Several well-known solutions exist, and it turns out that the two best-known representational formats, HRRs and TPRs, are tightly interlinked: HRRs are optimal realizations of TPR binding subject to constraints on the size of the vectors. Moreover, both are special instances of bilinear binding models, and they can be shown to be equivalent to any properly optimized model of that class. In my dissertation I prove these and related facts. I then implement and evaluate three new models, which provide a potential architecture for Cognitive Science modeling of inference tasks and are practically useful for Knowledge Base Completion, competitive with the state of the art.
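To make the two binding formats concrete, here is a minimal sketch in Python/NumPy (illustrative only; the variable names are mine and this is not code or a model from the dissertation). It binds a random filler to a random role with a TPR (outer product) and with an HRR (circular convolution), checks that the HRR is exactly the wrapped-diagonal compression of the TPR, and then approximately recovers the filler by circular correlation.

import numpy as np

rng = np.random.default_rng(0)
n = 512  # vector dimensionality

# Random filler and role vectors; HRRs assume i.i.d. entries with variance 1/n.
f = rng.normal(0.0, 1.0 / np.sqrt(n), n)
r = rng.normal(0.0, 1.0 / np.sqrt(n), n)

# TPR binding: the outer product retains all n*n filler-role interactions.
tpr = np.outer(f, r)

# HRR binding: circular convolution, computed via the FFT.
hrr = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(r)))

# The HRR is a lossy compression of the TPR: coordinate k of the HRR is the
# sum over the wrapped diagonal {(i, (k - i) mod n)} of the outer product.
hrr_from_tpr = np.array([tpr[np.arange(n), (k - np.arange(n)) % n].sum()
                         for k in range(n)])
assert np.allclose(hrr, hrr_from_tpr)

# Approximate unbinding: circular correlation with the role, i.e. convolution
# with its involution r*, where r*[k] = r[-k mod n].
r_inv = np.concatenate(([r[0]], r[1:][::-1]))
f_hat = np.real(np.fft.ifft(np.fft.fft(hrr) * np.fft.fft(r_inv)))

# The filler comes back noisily (cosine similarity around 0.7 for random
# vectors): the price HRRs pay for compressing n*n numbers into n.
print(np.dot(f_hat, f) / (np.linalg.norm(f_hat) * np.linalg.norm(f)))

The same outer product, read through a compression map, is what a general bilinear binding model computes: a TPR keeps it whole, while an HRR sums its wrapped diagonals.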