- Modelling interactions between visual cognition and motor behaviour
- Modelling behavioural experiments
- What is wrong with Deep Neural Networks?
- Computational modelling of tool innovation
Modelling interactions between reaching and visual processing
Recently, Strauss et al. (2015) developed a model of the interaction between visual cognition and motor behaviour. This work has laid the foundations for a collaboration with Joo-Hyun Song at Brown University, funded by the UK-ESRC. We are currently working on novel models that integrate standard perceptual decision-making models with standard models of motor behaviour (see abstract). MSc students can get involved in this work.
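At the core of standard perceptual decision-making models is evidence accumulation to a bound. The sketch below is a minimal, illustrative drift-diffusion simulation in Python (not the CoRLEGO model itself; all parameter values are made up for illustration):

```python
import random

def ddm_trial(drift=0.3, noise=1.0, bound=1.0, dt=0.01, max_t=5.0, rng=random):
    """Simulate one drift-diffusion trial; return (choice, reaction_time).

    Evidence x drifts towards one of two bounds under Gaussian noise;
    the first bound crossed determines the choice, the crossing time the RT.
    """
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    choice = 1 if x >= bound else (0 if x <= -bound else None)  # None = no decision
    return choice, t

random.seed(1)
trials = [ddm_trial() for _ in range(2000)]
correct = [rt for c, rt in trials if c == 1]
print(f"accuracy ~ {len(correct)/len(trials):.2f}, mean RT ~ {sum(correct)/len(correct):.2f}s")
```

In a combined perception-action model, the accumulating variable x would feed into a motor stage continuously, rather than only after a bound is reached.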
Investigator:
Dietmar Heinke
Techniques:
MATLAB, quantitative model fitting, stochastic modelling, non-linear differential equations.
References
Strauss, S., Woodgate, P.J.W., Sami, S. A., & Heinke, D. (2015) Choice reaching with a LEGO arm robot (CoRLEGO): The motor system guides visual attention to movement-relevant information. Neural Networks, 72, 3-12. http://dx.doi.org/10.1016/j.neunet.2015.10.005
Strauss, S. & Heinke, D. (2012) A robotics-based approach to modeling of choice reaching experiments on visual attention. Front. Psychology, 3:105. http://dx.doi.org/10.3389/fpsyg.2012.00105
Modelling behavioural experiments
Narbutas et al. (2017) developed a computational model of visual search experiments. Critical to the success of this work was that they were able to fit the model to existing data. Such quantitative fitting of models allows us to gain deeper insights into the processes underlying perceptual decision making. Future work aims to utilize this novel technique and apply it to other experimental paradigms such as the Eriksen flanker task.
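Quantitative model fitting means finding, for each candidate model, the parameters that maximise the likelihood of the observed reaction times, and then comparing the models with a criterion such as AIC. A minimal, self-contained sketch of that workflow (with two deliberately simple toy models of an RT distribution, not the models of Narbutas et al.):

```python
import math, random

def gauss_loglik(xs, mu, sigma):
    """Log-likelihood of the data under a Gaussian RT model."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in xs)

def shifted_exp_loglik(xs, shift, rate):
    """Log-likelihood under a shifted-exponential RT model (right-skewed)."""
    return sum(math.log(rate) - rate * (x - shift) for x in xs)

def fit_and_compare(rts):
    """Fit both models by maximum likelihood (closed form) and compare via AIC."""
    mu = sum(rts) / len(rts)
    sigma = (sum((x - mu)**2 for x in rts) / len(rts)) ** 0.5
    ll_gauss = gauss_loglik(rts, mu, sigma)
    shift = min(rts) - 1e-9            # MLE of the shift
    rate = 1.0 / (mu - shift)          # MLE of the rate
    ll_exp = shifted_exp_loglik(rts, shift, rate)
    # AIC = 2k - 2 log L; both models have k = 2 free parameters
    return {"gaussian": 2 * 2 - 2 * ll_gauss, "shifted_exp": 2 * 2 - 2 * ll_exp}

random.seed(0)
rts = [0.3 + random.expovariate(5.0) for _ in range(500)]  # right-skewed, RT-like
print(fit_and_compare(rts))  # lower AIC = better account of the data
```

Real applications replace these toy distributions with process models (e.g. serial vs parallel search) whose likelihoods must be approximated by simulation, but the comparison logic is the same.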
Investigator:
Dietmar Heinke
Techniques:
MATLAB, quantitative model fitting, stochastic modelling, non-linear differential equations.
References
Narbutas, V., Lin, Y.-S., Kristan, M., & Heinke, D. (2017) Serial versus parallel search: A model comparison approach based on reaction time distributions. Visual Cognition, 25(1-3), 306-325. https://doi.org/10.1080/13506285.2017.1352055
Lin, Y.-S., Heinke, D. & Humphreys, G. W. (2015) Modeling visual search using three-parameter probability functions in a hierarchical Bayesian framework. Attention, Perception, & Psychophysics, 77(3), 985-1010.
What is wrong with Deep Neural Networks?
Deep neural networks (DNNs) have been behind recent headline-grabbing successes for artificial intelligence, such as AlphaGo beating the world's best player in the board game Go. Nevertheless, in many areas DNNs have not led to technical solutions that match human abilities, especially with regard to object recognition. Perhaps the most obvious reason for such failures stems from the fact that DNNs employ very different processing mechanisms from those found in humans (e.g. Lake et al., 2016).
This project aims to compare DNNs' abilities with human abilities and ascertain how the two differ. The project is a collaboration with Charles Leek at Liverpool University (see here for more details).
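One simple way to quantify how DNN and human object recognition differ is to ask whether they err on the same images more often than chance agreement would predict, e.g. via Cohen's kappa on trial-by-trial correctness. A sketch with hypothetical response data (the metric is standard; the numbers are invented):

```python
def error_consistency(human_correct, dnn_correct):
    """Cohen's kappa on trial-by-trial correctness: do the two observers
    succeed/fail on the same images more often than their overall
    accuracies alone would predict?"""
    n = len(human_correct)
    observed = sum(h == d for h, d in zip(human_correct, dnn_correct)) / n
    p_h = sum(human_correct) / n
    p_d = sum(dnn_correct) / n
    expected = p_h * p_d + (1 - p_h) * (1 - p_d)  # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical correctness (1 = correct) of a human and a DNN on the same 8 images
human = [1, 1, 0, 1, 0, 1, 1, 0]
dnn   = [1, 1, 0, 1, 1, 1, 0, 0]
print(round(error_consistency(human, dnn), 3))
```

A kappa near 1 would mean the DNN fails on the very images humans find hard; a kappa near 0 means matched accuracy but qualitatively different error patterns, which is itself an informative dissociation.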
Techniques
PyTorch, Python
Investigator:
Dietmar Heinke
References
Lake, B., Ullman, T., Tenenbaum, J., & Gershman, S. (2016). Building machines that learn and think like people. Behavioral and Brain Sciences, 1-101.
Computational modelling of tool innovation
Humans’ ability to use tools has drastically transformed our planet, and is a skill we use in our daily lives. However, so far there is no machine learning method matching our tool-use abilities. In recent work we developed a deep reinforcement learning (DRL) method based on Dueling Double Deep Q-learning (van Hasselt et al., 2015), which successfully learns to use simple tools in a very simple environment. This MSc project aims to extend these results to a more complex environment.
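The Double Q-learning idea at the heart of the method is to decouple action selection from action evaluation, which reduces the overestimation bias of plain Q-learning. A tabular illustration of just that update rule (the full agent instead uses neural networks with a dueling architecture, plus experience replay):

```python
import random

def double_q_update(qa, qb, s, a, r, s_next, alpha=0.1, gamma=0.99, rng=random):
    """One tabular Double Q-learning step on a randomly chosen table:
    pick the maximising next action with one table, evaluate it with the other."""
    if rng.random() < 0.5:
        best = max(qa[s_next], key=qa[s_next].get)   # select with A ...
        target = r + gamma * qb[s_next][best]        # ... evaluate with B
        qa[s][a] += alpha * (target - qa[s][a])
    else:
        best = max(qb[s_next], key=qb[s_next].get)   # select with B ...
        target = r + gamma * qa[s_next][best]        # ... evaluate with A
        qb[s][a] += alpha * (target - qb[s][a])

# Toy one-step task: "right" from "start" earns reward 1 and ends the episode
actions = ["left", "right"]
qa = {s: {a: 0.0 for a in actions} for s in ("start", "goal")}
qb = {s: {a: 0.0 for a in actions} for s in ("start", "goal")}
random.seed(0)
for _ in range(2000):
    double_q_update(qa, qb, "start", "right", 1.0, "goal")  # "goal" is terminal (Q = 0)
print(round(qa["start"]["right"], 2), round(qb["start"]["right"], 2))
```

In the deep variant the tables become Q-networks, and the dueling architecture further splits each Q-value into a state value plus an action advantage, Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).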
Techniques
Python, Pygame, PyTorch
Investigator:
Dietmar Heinke
References
Osiurak, F., & Heinke, D. (2018) Looking for Intoolligence: A unified framework for the cognitive study of human tool use and technology. American Psychologist, 73(2), 169-185. http://dx.doi.org/10.1037/amp0000162.
Pygame Community. (2020). Pygame. Pygame.Org. https://www.pygame.org/
van Hasselt, H., Guez, A., & Silver, D. (2015). Deep reinforcement learning with Double Q-learning. arXiv. https://arxiv.org/abs/1509.06461