2022-10-01 - 2023-03-31 | Research area: Cognition and Sociality
My research addresses several issues at the intersection of philosophy, Artificial Intelligence [AI], and biology, with a particular interest in the blue-sky goal of creating Artificial General Intelligence [AGI] in an alternative medium, such as a digital computer or a synthetic robot. Simply put, the aim of exploring cognition and mind through the lens of embodiment and valence is to investigate how robust forms of cognition (the domain-generality captured by the 'G' in AGI) relate intimately to the possibility of risk-to-self for an agent. In other words, if cognition intrinsically involves meaningful, goal-directed engagement with the world, then this engagement is heavily saturated with possibilities for the agent to maintain itself and to adapt to incoming information. One promising starting point for understanding this is the biological basis of minded creatures: from sophisticated animals with complex nervous systems, such as octopuses and humans, to simpler or, more properly, 'basal' organisms such as nematodes, and even bacteria and slime moulds. This perspective has been explored extensively under the heading of the 'biogenic account' of cognition (Levin 2019, 2020; Lyon et al. 2021). The biogenic account places a premium on the vulnerable, heavily embodied, and metabolic dimensions of agency and goal-directedness as the basis from which we should understand the origins of mindedness in evolutionary history, and it then attempts to generalise this picture into a broader theory of cognition and its physiological basis. As Peter Godfrey-Smith has remarked in the context of computers, "a collection of ands and if-thens [the Boolean logic of discrete maths underpinning classical computers] with no metabolic point to them would be a different sort of thing" (2016: 490).