Research

My current research focuses on understanding human cognition through mathematical models, combining the broad research areas of cognitive psychology, computational modelling, and psychological methods. More specifically, my theoretical and empirical research largely falls within human decision-making, and involves developing and testing models, such as extensions of the widely used evidence accumulation framework, that explain response time and choice, as well as additional sources of information such as changes of mind. My more methodological research involves developing and implementing Bayesian methods for estimating and comparing cognitive models, including methods for models with computationally intractable likelihood functions.

Developing, Testing, Extending, and Comparing Models of Human Cognition

One of the key reasons for my research focus on computational modelling is my core belief that developing precise, testable theories of cognitive processes is crucial to understanding human cognition. The precise quantitative predictions made by cognitive models provide a level of insight that verbal theories alone cannot, making cognitive models fundamental to creating well-defined theories of cognitive processes. However, while cognitive models have the potential to provide precise, process-level theories of human cognition that can be directly tested and compared against one another, I believe that cognitive modelling often suffers from two key limitations. First, although cognitive models make precise predictions, several different models often make extremely similar predictions for the specific data of interest. Second, although cognitive models are often proposed as accounts of a broad cognitive process (e.g., decision-making), and therefore should in principle make predictions about any behaviour that results from that process, they are often applied in a restrictive manner, assessing only very specific aspects of observable behaviour to ensure that the model accurately accounts for the assessed data.

One potential solution to both of these issues – and a key focus of my current research – is forcing cognitive models to account for more sources of data. Importantly, by constraining cognitive models to jointly account for multiple sources of data, (1) the predictions of different models are more likely to differ over the joint data space, helping to alleviate some of the mimicry issues, and (2) the models are extended to capture an additional aspect of the cognitive process, reducing their restrictiveness. For instance, my recent research on double responding – where people make a second, often corrective, response after their initial response – has provided an additional constraint for evidence accumulation models, distinguishing the predictions of models with different types of inhibition that typically mimic one another closely in response time and choice data alone. Double responding also extends standard evidence accumulation models to account for changes of mind in decision-making, reducing the restrictiveness of the theory, as these models are now forced to explain another aspect of the decision-making process. Furthermore, my recent collaborations with experts in electromyography (EMG) recordings of thumb movements have resulted in models that explain the interaction between the decision and motor systems, as well as partial errors, where people begin to respond for one option before changing their mind and responding for the other.
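
As a toy illustration of how double responding constrains an accumulation account, the sketch below simulates a hypothetical two-accumulator race in which evidence continues to accumulate for a short window after the first threshold crossing, so the losing accumulator can still reach threshold and produce a second, corrective response. The structure, parameter values, and post-decision window are illustrative assumptions, not the specific models from this research.

```python
import numpy as np

rng = np.random.default_rng(1)

def race_with_double_response(drifts=(0.9, 0.4), threshold=1.0,
                              noise=1.0, dt=0.002, post_window=0.3,
                              max_time=3.0):
    """Two-accumulator race in which evidence keeps accumulating for a
    short window after the first threshold crossing, so the losing
    accumulator can still cross and trigger a second response."""
    x = np.zeros(2)            # accumulated evidence for each option
    t = 0.0
    first = None               # (choice, RT) of the initial response
    deadline = max_time
    while t < max_time:
        x += np.asarray(drifts) * dt + noise * np.sqrt(dt) * rng.standard_normal(2)
        t += dt
        if first is None:
            crossed = np.flatnonzero(x >= threshold)
            if crossed.size:
                first = (int(crossed[0]), t)
                deadline = t + post_window  # window for a second response
        else:
            other = 1 - first[0]
            if x[other] >= threshold:       # double response / change of mind
                return first, (other, t)
            if t >= deadline:
                break
    return first, None

# Simulate many trials and tabulate how often a double response occurs.
trials = [race_with_double_response() for _ in range(1000)]
double_rate = np.mean([second is not None for _, second in trials])
print(round(double_rate, 3))
```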

In addition to my research on changes of mind, I have also developed, tested, and extended models in a variety of other contexts. For instance, I have been heavily involved in research assessing whether people become less cautious as they spend more time on a decision, as reflected in collapsing thresholds and urgency-gating models, with a key finding being that whether people adopt these time-dependent strategies is largely task dependent. My research has also involved developing, evaluating, comparing, and applying models of conflict tasks (typically referred to as conflict diffusion models), with a key development being a model-based framework for separating facilitation and interference effects without the need for neutral trials. Finally, I have worked on developing, evaluating, and comparing models in several other areas of cognition research, such as task learning, multi-attribute choice, memory, and continuous multi-agent decision-making.
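
For a rough sense of the two time-dependent strategies mentioned above, the sketch below shows one common way they are parameterised: a decision bound that collapses exponentially towards an asymptote, and an urgency signal that amplifies momentary evidence as time elapses. The functional forms and parameter values here are assumptions for illustration; the literature contains several alternative parameterisations (e.g., Weibull collapses).

```python
import numpy as np

def collapsing_bound(t, b0=1.5, b_inf=0.5, tau=0.4):
    """Decision bound that starts at b0 and decays exponentially
    towards an asymptote b_inf with time constant tau (seconds)."""
    return b_inf + (b0 - b_inf) * np.exp(-t / tau)

def urgency_gated_signal(evidence, t, gain=1.5):
    """Simplified urgency gating: momentary evidence is amplified by
    a signal that grows with elapsed time, so late-arriving evidence
    has a larger impact on the decision variable."""
    return evidence * (1.0 + gain * t)

# Both mechanisms make a decision more likely as time passes,
# producing less cautious behaviour later in a trial.
t = np.linspace(0.0, 2.0, 5)
print(collapsing_bound(t))            # bound shrinking over time
print(urgency_gated_signal(0.2, t))   # same evidence, weighted more
```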

Applying Cognitive Models to Measure Constructs of Latent Cognitive Processes

Another important feature of cognitive models is their ability to provide estimates of cognitive constructs of interest through their theoretically meaningful parameters. Specifically, while statistical models are typically designed to be atheoretical, to keep them generalisable across contexts and simple enough for a general audience to use, the theoretical constraints within cognitive models often result in free parameters that reflect cognitive constructs, which may help answer a variety of research questions. For example, in evidence accumulation models, the drift rate parameter determines how quickly the accumulator moves (on average) towards the threshold, and provides a measurement of the cognitive construct of task ability. Likewise, the threshold parameter determines how much evidence must accumulate to terminate the process and trigger a decision, and provides a measurement of the cognitive construct of task caution. Importantly, cognitive models can therefore provide a powerful framework for directly estimating cognitive constructs of interest, rather than attempting to infer them from summary statistics of the observed data alone.
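
As a minimal sketch of how these two parameters shape behaviour, the simulation below draws choices and response times from a simple diffusion process: increasing the drift rate produces faster, more accurate responses (greater task ability), whereas increasing the threshold trades speed for accuracy (greater task caution). The parameter values and Euler step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(v, b, t0=0.3, sigma=1.0, dt=0.002, n=500):
    """Simulate n trials of a diffusion process with drift v between
    bounds at +b (correct) and -b (error), plus non-decision time t0."""
    choices = np.empty(n, dtype=bool)
    rts = np.empty(n)
    for i in range(n):
        x, t = 0.0, 0.0
        while abs(x) < b:
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = x >= b
        rts[i] = t + t0
    return choices, rts

# Higher drift: faster and more accurate. Higher threshold:
# slower but more accurate (the speed-accuracy trade-off).
for v, b in [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]:
    c, rt = simulate_ddm(v, b)
    print(v, b, round(c.mean(), 3), round(rt.mean(), 3))
```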

My research has involved numerous applications of cognitive models – particularly evidence accumulation models – to estimate parameters and help answer a variety of theoretical and applied research questions. For instance, one of my research topics has been optimality: whether people are able to adopt strategies that achieve the task goal set by the experimenter. Within evidence accumulation models, participants can strategically adjust their thresholds to balance their speed-accuracy trade-off, meaning that we can assess how well participants achieve a speed-accuracy trade-off that maximises the rewards provided by the task (often referred to as reward rate optimality). However, as with my work on collapsing thresholds, the key finding has been that people’s ability to adopt an optimal threshold is largely task dependent.
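
To make reward rate optimality concrete, the sketch below uses the standard closed-form expressions for accuracy and mean decision time in a symmetric diffusion model (in the style of Bogacz and colleagues) to compute reward rate as a function of the threshold, then finds the threshold that maximises it. The drift rate, non-decision time, and inter-trial interval are arbitrary assumed values.

```python
import numpy as np

def ddm_accuracy(v, b, sigma=1.0):
    # Closed-form accuracy for a symmetric diffusion model with
    # drift v, bounds at +/- b, and diffusion noise sigma.
    return 1.0 / (1.0 + np.exp(-2.0 * v * b / sigma**2))

def ddm_mean_decision_time(v, b, sigma=1.0):
    # Closed-form mean decision time for the same process.
    return (b / v) * np.tanh(v * b / sigma**2)

def reward_rate(b, v=1.0, t0=0.3, iti=1.0, sigma=1.0):
    # Proportion correct per unit time, counting decision time,
    # non-decision time, and the inter-trial interval.
    acc = ddm_accuracy(v, b, sigma)
    dt = ddm_mean_decision_time(v, b, sigma)
    return acc / (dt + t0 + iti)

# Sweep thresholds: too low wastes accuracy, too high wastes time.
bs = np.linspace(0.05, 3.0, 300)
rr = reward_rate(bs)
print("reward-rate-optimal threshold:", round(bs[np.argmax(rr)], 2))
```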

Moreover, I have collaborated with social cognition researchers on several projects, using parameter estimates from cognitive models to better understand gaze cueing, pro-social behaviour, ego depletion, and need for closure. Similarly, I have collaborated with developmental researchers on several projects, using parameter estimates from evidence accumulation models to better understand autism, dyslexia, early life adversity, and ageing. Finally, I have used the parameter estimates of cognitive models to better understand a range of more typical cognitive psychology paradigms, such as how emphasising speed influences decision-making and how multi-tasking differs from increased difficulty within a single task, as well as broader research questions such as the heritability of cognitive constructs like task ability and caution.

Creating and Refining Methods of Estimating and Contrasting Cognitive Models

While cognitive models possess a great deal of potential utility, both as theories of cognitive processes and as measurement tools for estimating cognitive constructs, practical issues can make it difficult or even impossible to use them in ways that best serve our needs. Specifically, due to the complexity of many cognitive models, methods that are commonly used for estimating (e.g., Bayesian [hierarchical] parameter estimation) and/or comparing (e.g., Bayes factors, cross-validation) simple statistical models can quickly become computationally intractable for cognitive models. Moreover, for some extremely complex, theoretically precise cognitive models (e.g., the leaky competing accumulator, urgency-gating models), even the likelihood function can become computationally intractable, making something as basic as fitting the model a difficult task. Furthermore, even when we can estimate the parameters of a cognitive model, the complex correlation structure between parameters in some models means that there is no guarantee that these estimates will be robust and reliable. My methodological goal within my cognitive modelling research has been to reduce the burden of these problems as much as possible, making it easier for myself and others to extract the greatest possible utility from cognitive modelling.
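
One general route around an intractable likelihood, and the idea underlying the pseudo-likelihood approaches discussed below, is to approximate the likelihood from model simulations. The sketch below illustrates probability density approximation in miniature: simulate many observations from the model, smooth them with a kernel density estimate, and evaluate the observed data under that smoothed density. The simulator here is a stand-in shifted lognormal so the example runs end to end; a real application would substitute the intractable model's simulator.

```python
import numpy as np
from scipy.stats import gaussian_kde

def simulate_model(theta, n, rng):
    # Stand-in simulator: a shifted lognormal response time model.
    # A real application would call the intractable model's simulator.
    shift, mu, sigma = theta
    return shift + rng.lognormal(mu, sigma, size=n)

def pda_log_likelihood(theta, data, n_sim=20_000, seed=0):
    """Probability density approximation: estimate the likelihood by
    smoothing a large set of simulated observations with a KDE, then
    evaluating the observed data under that smoothed density."""
    rng = np.random.default_rng(seed)
    sims = simulate_model(theta, n_sim, rng)
    kde = gaussian_kde(sims)
    dens = np.maximum(kde(data), 1e-300)  # guard against log(0)
    return float(np.sum(np.log(dens)))

# Approximate log-likelihoods of observed RTs under two candidate
# parameter settings; the better-fitting setting scores higher.
rng = np.random.default_rng(42)
observed = 0.2 + rng.lognormal(-0.5, 0.4, size=200)
print(pda_log_likelihood((0.2, -0.5, 0.4), observed))
print(pda_log_likelihood((0.2, 0.2, 0.4), observed))
```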

One of my key methodological focuses in cognitive modelling has been assessing the identifiability of models relative to one another (i.e., model identifiability), as well as of different parameters within a model (i.e., parameter identifiability), through simulation-based studies. Importantly, if we cannot distinguish between two models in the unlikely-yet-ideal setting where one of them is the true, data-generating model, then comparing these models on empirical data serves little purpose. Likewise, if we cannot accurately estimate a parameter in the unlikely-yet-ideal setting where the model is the true, data-generating model, then attempting to estimate this parameter from empirical data will only mislead us. For instance, one of my key findings was that both out-of-sample prediction methods (e.g., AIC, DIC, WAIC) and Bayesian model selection methods (e.g., BIC, Bayes factors) have great difficulty distinguishing between null and small within-subjects effects in the parameters of evidence accumulation models, though in different ways: out-of-sample methods often produced false alarms for null effects, whereas Bayesian model selection methods often missed small effects. My other key findings have demonstrated parameter identifiability issues in the unconstrained linear ballistic accumulator (LBA), collapsing thresholds and urgency-gating models, and the double-pass diffusion model.
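
The parameter recovery component of such simulation studies follows a simple recipe: simulate synthetic datasets from known parameter values, fit the model to each, and check how closely the estimates track the truth. The sketch below applies this recipe to a shifted lognormal response time distribution as a stand-in model; the generating values, sample sizes, and optimiser settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(7)

def neg_log_likelihood(params, rt):
    shift, mu, sigma = params
    if sigma <= 0 or np.any(rt <= shift):
        return np.inf                    # outside the valid region
    return -np.sum(lognorm.logpdf(rt - shift, s=sigma, scale=np.exp(mu)))

true = np.array([0.2, -0.5, 0.4])        # data-generating parameters
recovered = []
for _ in range(100):                     # 100 synthetic datasets
    rt = true[0] + rng.lognormal(true[1], true[2], size=200)
    fit = minimize(neg_log_likelihood, x0=[0.1, 0.0, 0.5], args=(rt,),
                   method="Nelder-Mead")
    recovered.append(fit.x)
recovered = np.array(recovered)

# Well-recovered parameters show low bias and tight spread around truth.
print("bias:", np.round(recovered.mean(axis=0) - true, 3))
print("sd:  ", np.round(recovered.std(axis=0), 3))
```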

Another key methodological focus of my cognitive modelling research has been developing methods for fitting and comparing complex cognitive models. For instance, some of my previous work has focused on developing and implementing methods for estimating marginal likelihoods (i.e., Bayes factors) for cognitive models, so that researchers can use and compare cognitive models in much the same way that they compare simpler statistical models. Another key part of my work was developing a method and framework for efficiently simulating and fitting extremely complex evidence accumulation models, which I then combined with one of the marginal likelihood estimation methods I had developed to create pseudo-likelihood Bayes factors, which I used to compare the different conflict diffusion models on several flanker data sets. Finally, I have been part of collaborative efforts to develop different implementations of hierarchical evidence accumulation models, such as fixed/random/mixed effects models and mixture models.
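
For concreteness, the sketch below shows the quantity that marginal likelihood methods target: the likelihood averaged over the prior, with a Bayes factor being the ratio of two such marginal likelihoods. The naive Monte Carlo estimator used here (averaging the likelihood over prior draws) only works for toy problems like this conjugate normal example; realistic cognitive models need more sophisticated estimators, which is precisely the gap the methods described above address. All values here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
data = rng.normal(0.5, 1.0, size=50)       # toy observed data

def log_likelihood(mu):
    # Likelihood of the data under a normal model with known sd = 1.
    return np.sum(norm.logpdf(data, loc=mu, scale=1.0))

# Naive Monte Carlo estimate of the marginal likelihood:
# p(y) = E_prior[ p(y | mu) ], approximated by averaging the
# likelihood over draws from the prior mu ~ N(0, 1).
prior_draws = rng.normal(0.0, 1.0, size=20_000)
log_liks = np.array([log_likelihood(mu) for mu in prior_draws])
log_ml = np.logaddexp.reduce(log_liks) - np.log(log_liks.size)
print("log marginal likelihood:", round(float(log_ml), 2))

# A Bayes factor comparing two models is the ratio (difference on
# the log scale) of two such marginal likelihoods.
```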