Overview of Research:
My areas of specialization include Philosophy of Artificial Intelligence, Philosophy of Technology, Philosophy of Mind, and Virtue Epistemology. My primary research project, which sits at the intersection of these subfields, explores the philosophical implications of emerging digital technologies, with a focus on artificial intelligence, virtual and augmented reality, and neurotechnology. This research analyzes such technologies from the perspectives of both techno-optimism and techno-pessimism. With respect to the former, I draw on the extended mind thesis to explore how, and to what extent, AI, neurotechnology, and mixed reality technology can extend cognition and knowledge. From the perspective of techno-pessimism, I illustrate how such technologies threaten to (a) undermine intellectual virtues such as intellectual perseverance and open-mindedness, and (b) exacerbate existing epistemic and ethical problems associated with surveillance capitalism, data collection, and the online attention economy.

A secondary research project investigates how topics in the philosophy of mind and the philosophy of cognitive science can shed light on the problem of AI consciousness. For example, my chapter ‘The Cognitive Phenomenology Argument for Disembodied AI Consciousness’ (published in The Age of Artificial Intelligence: An Exploration, Vernon Press, 2020) draws on cognitive phenomenology, embodied cognition theory, and the higher-order thought theory of consciousness to construct an argument for the possibility of disembodied AI consciousness.
Outside of the philosophy of technology, I have research interests in the philosophy of mind and virtue epistemology more broadly. For instance, my article ‘The Extended Mind Argument Against Phenomenal Intentionality’ (published in Phenomenology and the Cognitive Sciences, 2021) uses insights from the extended mind literature to construct an argument against the phenomenal intentionality thesis, the view that consciousness grounds intentionality.
Below are abstracts of my published articles.
Publications:
1. ‘Online Echo Chambers, Online Epistemic Bubbles, and Open-Mindedness’, Episteme (2023): 1-26.
Abstract
This article is an exercise in the virtue epistemology of the internet, an area of applied virtue epistemology that investigates how online environments impact the development of intellectual virtues, and how intellectual virtues manifest within online environments. I examine online echo chambers and epistemic bubbles (Nguyen, 2020), exploring the conceptual relationship between these online environments and the virtue of open-mindedness (Battaly, 2018b). The article answers two key individual-level, virtue epistemic questions: (Q1) How does immersion in online echo chambers and epistemic bubbles affect the cultivation and preservation of open-mindedness? And (Q2) is it always intellectually virtuous to exhibit open-mindedness in the context of online echo chambers and epistemic bubbles? In response to (Q1), I contend that both online echo chambers and online epistemic bubbles threaten to undermine the cultivation and preservation of open-mindedness, albeit via different mechanisms and to different degrees. In response to (Q2), I affirm that both a deficiency and an excess of open-mindedness can be virtuous in these online environments, depending on the epistemic orientation of the digital user. This twofold response to (Q2) demonstrates normative contextualism (Kidd, 2020), the idea that the normative status of cognitive character traits is contingent upon the context in which those traits are manifested.
2. ‘The Metaverse: Virtual Metaphysics, Virtual Governance, and Virtual Abundance’, Philosophy & Technology 36, 67 (2023): 1-8.
Abstract
In his article ‘The Metaverse: Surveillant Physics, Virtual Realist Governance, and the Missing Commons,’ Andrew McStay addresses an entwinement of ethical, political, and metaphysical concerns surrounding the Metaverse, arguing that the Metaverse is not being designed to further the public good but is instead being created to serve the plutocratic ends of technology corporations. He advances the notion of ‘surveillant physics’ to capture this insight and introduces the concept of ‘virtual realist governance’ as a theoretical framework that ought to guide Metaverse design and regulation. This commentary article primarily serves as a supplementary piece rather than a direct critique of McStay’s work. First, I flag certain understated or overlooked nuances in McStay’s discussion. Then, I extend McStay’s discussion by juxtaposing a Lockean-inspired argument supporting the property rights of Metaverse creators with an opposing argument advocating for a Metaverse user’s ‘right to virtual abundance,’ informed by the potential of virtual reality technology to eliminate scarcity in virtual worlds. Contrasting these arguments highlights the tension between corporate rights and social justice in the governance of virtual worlds and bears directly on McStay’s assertion that there is a problem of the missing commons in the early design of the Metaverse.
3. ‘Neuromedia, Cognitive Offloading, and Intellectual Perseverance’, Synthese 200 (2022): 1-26.
Abstract
This paper engages in what might be called anticipatory virtue epistemology, as it anticipates some virtue epistemological risks related to a near-future version of brain-computer interface technology that Michael Lynch (2014) calls ‘neuromedia.’ I analyze how neuromedia is poised to negatively affect the intellectual character of agents, focusing specifically on the virtue of intellectual perseverance, which involves a disposition to persist mentally through challenges in pursuit of one’s intellectual goals. First, I present and motivate what I call ‘the cognitive offloading argument’, which holds that excessive cognitive offloading of the sort incentivized by a device like neuromedia threatens to undermine intellectual virtue development from the standpoint of virtue responsibilism. Then, I examine the cognitive offloading argument as it applies to the virtue of intellectual perseverance, arguing that neuromedia may increase cognitive efficiency at the cost of intellectual perseverance. If used in an epistemically responsible manner, however, cognitive offloading devices may not undermine intellectual perseverance but instead allow us to persevere with respect to intellectual goals that we find more valuable by freeing us from various kinds of menial intellectual labor.
4. ‘Augmented Reality, Augmented Epistemology, and the Real-World Web’, Philosophy & Technology 35 (2022): 1-28.
Abstract
Augmented reality (AR) technologies function to ‘augment’ normal perception by superimposing virtual objects onto an agent’s visual field. The philosophy of augmented reality is a small but growing subfield within the philosophy of technology. Existing work in this subfield includes research on the phenomenology of augmented experiences, the metaphysics of virtual objects, and various ethical issues associated with AR systems, including (but not limited to) issues of privacy, property rights, ownership, trust, and informed consent. This paper addresses some epistemological issues posed by AR systems. I focus on a near-future version of AR technology called the Real-World Web, which promises to radically transform the nature of our relationship to digital information by mixing the virtual with the physical. I argue that the Real-World Web (RWW) threatens to exacerbate three existing epistemic problems in the digital age: the problem of digital distraction, the problem of digital deception, and the problem of digital divergence. The RWW is poised to present new versions of these problems in the form of what I call the augmented attention economy, augmented skepticism, and the problem of other augmented minds. The paper draws on a range of empirical research on AR and offers a phenomenological analysis of virtual objects as perceptual affordances to help ground and guide the speculative nature of the discussion. It also considers a few policy-based and design-based proposals to mitigate the epistemic threats posed by AR technology.
5. ‘HoloFoldit and Hologrammatically Extended Cognition’, Philosophy & Technology 35, 106 (2022): 1-9.
Abstract
How does the integration of mixed reality devices into our cognitive practices impact the mind from a metaphysical and epistemological perspective? In his innovative and interdisciplinary article, “Minds in the Metaverse: Extended Cognition Meets Mixed Reality” (2022), Paul Smart addresses this underexplored question, arguing that the use of a hypothetical application of the Microsoft HoloLens called “the HoloFoldit” represents a technologically high-grade form of extended cognizing from the perspective of neo-mechanical philosophy. This short commentary aims to (1) carve up the conceptual landscape of possible objections to Smart’s argument and (2) elaborate on the possibility of hologrammatically extended cognition, which is supposed to be one of the features of the HoloFoldit case that distinguishes it from more primitive forms of cognitive extension. In tackling (1), I do not mean to suggest that Smart does not consider or have sufficient answers to these objections. In addressing (2), the goal is not to argue for or against the possibility of hologrammatically extended cognition but to reveal some issues in the metaphysics of virtual reality upon which this possibility hinges. I construct an argument in favor of hologrammatically extended cognition based on the veracity of virtual realism and an argument against it based on the veracity of virtual fictionalism.
6. ‘The Extended Mind Argument Against Phenomenal Intentionality’, Phenomenology and the Cognitive Sciences 20 (2021): 1-28.
Abstract
This paper offers a novel argument against the phenomenal intentionality thesis (or PIT for short). The argument, which I’ll call the extended mind argument against phenomenal intentionality, is centered around two claims: the first asserts that some source intentional states extend into the environment, while the second maintains that no conscious states extend into the environment. If these two claims are correct, then PIT is false, for PIT implies that the extension of source intentionality is predicated upon the extension of phenomenal consciousness. The argument is important because it undermines an increasingly prominent account of the nature of intentionality. PIT has entered the philosophical mainstream and is now a serious contender to naturalistic views of intentionality like the tracking theory and the functional role theory (Loar 1987, 2003; Searle 1990; Strawson 1994; Horgan and Tienson 2002; Pitt 2004; Farkas 2008; Kriegel 2013; Montague 2016; Bordini 2017; Forrest 2017; Mendelovici 2018). The extended mind argument against PIT challenges the popular sentiment that consciousness grounds intentionality.
7. ‘The Cognitive Phenomenology Argument for Disembodied AI Consciousness’, in S. Gouveia (ed.), The Age of Artificial Intelligence: An Exploration, Vernon Press (2020): 111-132.
Abstract
In this chapter I offer two novel arguments for what I call strong primitivism about cognitive phenomenology, the thesis that there exists a phenomenology of cognition that is neither reducible to, nor dependent upon, sensory phenomenology. I then contend that strong primitivism implies that phenomenal consciousness does not require sensory processing. This latter contention has implications for the philosophy of artificial intelligence. If sensory processing is not a necessary condition for phenomenal consciousness, then it plausibly follows that AI consciousness (assuming that it is possible) does not require embodiment. The overarching goal of the chapter is to show how different topics in the analytic philosophy of mind can be brought to bear on an important issue in the philosophy of artificial intelligence.
8. ‘Could You Merge with AI? Reflections on the Singularity and Radical Brain Enhancement’ (with Susan Schneider), in M. Dubber, F. Pasquale, and S. Das (eds.), The Oxford Handbook of Ethics of AI, Oxford University Press (2020): 307-326.
Abstract
This chapter focuses on AI-based cognitive and perceptual enhancements. AI-based brain enhancements are already under development, and they may become commonplace over the next 30–50 years. We raise doubts concerning whether the radical AI-based enhancements that transhumanists advocate will accomplish the transhumanist goals of longevity, human flourishing, and intelligence enhancement. We urge that even if the technologies are medically safe and are not used as tools by surveillance capitalism or an authoritarian dictatorship, these enhancements may still fail to do their job for philosophical reasons. In what follows, we explore one such concern, a problem that involves the nature of the self. We illustrate that so-called transhumanist efforts to “merge oneself with AI” could lead to perverse realizations of AI technology, such as the demise of the person who sought enhancement. And, in a positive vein, we offer ways to avoid this, at least within the context of one theory of the nature of personhood.