Overview of Research:
Project #1- The Philosophy of Emerging Digital Technology: I am currently pursuing three active research projects. The primary project examines how emerging digital technologies are poised to affect the mind from metaphysical, ethical, and epistemological perspectives. This project was born out of work I conducted on how technologies such as lifelogs and neural implants affect personhood from the perspective of the brain-based, psychological continuity, and narrative continuity views of personal identity (see my paper ‘Could You Merge with AI? Reflections on the Singularity and Radical Brain Enhancement,’ published in The Oxford Handbook of Ethics of AI (2020)). My PhD dissertation, entitled A Virtue Epistemology of Brain-Computer Interface and Augmented Reality Technology, builds on this work by analyzing emerging digital technologies (specifically brain-computer interfaces and augmented reality devices such as smart glasses and smart contact lenses) from the standpoint of virtue epistemology rather than the metaphysics of personal identity. The overarching aim of the project is to discern how these technologies stand to influence the cultivation and maintenance of intellectual virtues, focusing on intellectual perseverance, intellectual autonomy, intellectual humility, and open-mindedness. In pursuit of this aim, the dissertation both diagnoses epistemic threats posed by these technologies and offers policy and design-based proposals to mitigate them.
Thus far, the dissertation project has yielded a publication in the journal Synthese, a publication in the journal Philosophy & Technology, and two manuscripts in progress that will soon be submitted for journal review. The forthcoming Synthese article (‘Neuromedia, Cognitive Offloading, and Intellectual Perseverance’) explores an epistemic threat posed by a near-future brain-computer interface device that Michael Lynch (2016) labels ‘neuromedia.’ I first illustrate how excessive cognitive offloading of the sort incentivized by a device like neuromedia threatens to undermine intellectual virtue development from the standpoint of virtue responsibilism. I then examine this epistemic threat as it applies to the virtue of intellectual perseverance, arguing that neuromedia may increase cognitive efficiency at the cost of intellectual perseverance.
The forthcoming article in Philosophy & Technology (‘Augmented Reality, Augmented Epistemology, and the Real-World Web’) addresses epistemic threats posed by a near-future AR device that Paul Smart (2012) labels ‘the Real-World Web’ (RWW for short). I argue that the RWW threatens to exacerbate three existing epistemic problems of the digital age, each tied to surveillance capitalism: digital distraction, digital deception, and digital divergence. The RWW is poised to present new versions of these problems in the form of what I call ‘the augmented attention economy,’ ‘augmented skepticism,’ and ‘the problem of other augmented minds.’ The paper draws on a range of empirical research on AR and offers a phenomenological analysis of virtual objects as perceptual affordances (Gibson 1979) to help ground and guide the speculative nature of the discussion.
The two manuscripts in progress (entitled ‘Epistemic Cognitive Integration and Extended Intellectual Autonomy’ and ‘Internet-Extended Knowledge and Intellectual Humility’) discuss the epistemological implications of machine learning systems and algorithmic filtering mechanisms. Both draw heavily on the extended mind thesis, a metaphysical framework in the philosophy of mind and cognitive science which holds that the technological and informational components of smart devices can, under certain conditions, partly constitute cognitive processes rather than merely causally influence them. I am particularly interested in how, and to what extent, digital technologies can extend not only cognition but also knowledge, personal identity, and even intellectual virtue. I contend that, while possible in principle, the realization of extended knowledge and extended intellectual autonomy is practically infeasible in the context of AI-powered digital technologies because of various technological impediments (e.g. algorithmic opacity). I therefore outline design and regulatory measures geared towards making digital technologies less resistant to epistemic cognitive integration.
Project #2- The Problem of Machine Consciousness: A secondary research project investigates how different topics in the philosophy of mind and the philosophy of cognitive science can shed light on the problem of machine consciousness. For example, my article ‘The Cognitive Phenomenology Argument for Disembodied AI Consciousness’ (published in The Age of Artificial Intelligence: An Exploration (Vernon Press, 2020)) draws on the literatures on cognitive phenomenology, embodied cognition, and the higher-order thought theory of consciousness to construct an argument for the possibility of disembodied machine consciousness. I am interested not only in whether machine consciousness is possible, but also in its ethical import and its connection to the AI control problem.
Project #3- The Hard Problem of Consciousness and Intentionality: The final research project concerns the hard problem of consciousness and the nature of intentionality. The hard problem of consciousness is what initially drew me to the study of philosophy. My bachelor’s thesis, entitled The Combination Problem: Russellian Monism, Panpsychism, and Panqualityism, defends an unorthodox solution to the hard problem called Russellian monism. I study intentionality mostly as it relates to consciousness and the question of whether it can be explained in terms of consciousness (or vice versa). My article ‘The Extended Mind Argument Against Phenomenal Intentionality’ (published in the journal Phenomenology and the Cognitive Sciences, 2021) uses insights from the extended mind literature to construct an argument against the phenomenal intentionality thesis, the view that consciousness grounds intentionality. The article submits that the following three propositions, when properly understood, constitute an inconsistent triad: (1) the extended mind thesis is true, (2) the extended consciousness thesis is false, and (3) the phenomenal intentionality thesis is true. I motivate (1) and (2) in an effort to refute (3).
Publications:
- ‘Neuromedia, Cognitive Offloading, and Intellectual Perseverance,’ Synthese (forthcoming).
- ‘Augmented Reality, Augmented Epistemology, and the Real-World Web,’ Philosophy & Technology (forthcoming).
- ‘The Extended Mind Argument Against Phenomenal Intentionality,’ Phenomenology and the Cognitive Sciences, 20 (2021): 1-28.
- ‘The Cognitive Phenomenology Argument for Disembodied AI Consciousness,’ in S. Gouveia (ed.) The Age of Artificial Intelligence: An Exploration, Vernon Press (2020): 111-132.
- ‘Could You Merge with AI? Reflections on the Singularity and Radical Brain Enhancement’ (with Susan Schneider), in M. Dubber, F. Pasquale, and S. Das (eds.) The Oxford Handbook of Ethics of AI, Oxford University Press (2020): 307-326.