Research

Overview of Research:

My primary research project explores the epistemological, metaphysical, and ethical implications of emerging digital technologies, focusing on artificial intelligence, virtual and augmented reality, social media, and neurotechnology. This project spans four core themes: the epistemology of emerging technologies, the ethics of emerging technologies, metaphysics and philosophy of mind in the digital age, and (to a lesser extent) the political philosophy of emerging technologies. My work has resulted in publications in venues such as Episteme, Synthese, Phenomenology and the Cognitive Sciences, Philosophy & Technology, and The Oxford Handbook of Ethics of AI. Most of these publications are outgrowths of my Ph.D. dissertation, A Virtue Epistemology of Brain-Computer Interface and Augmented Reality Technology (2022). Currently, I have three articles under review on (i) extended epistemology and AI opacity, (ii) cultivated meat, autonomous vehicles, and harm reduction, and (iii) digital avatars, narrative identity, and virtual ethics. I also have a paper in progress that synthesizes all the core themes of my primary research agenda via an examination of the ontological, epistemological, axiological, and ethical implications of photographic virtual reality (VR), exemplified by applications like Google Earth VR and YouTube VR.

Outside of philosophy of technology, I have secondary research interests in the philosophy of mind more broadly. My article ‘The Extended Mind Argument Against Phenomenal Intentionality’ in the journal Phenomenology and the Cognitive Sciences uses insights from the extended mind literature to construct an argument against the phenomenal intentionality thesis, the view that consciousness grounds intentionality. Moving forward, I intend to pursue research on consciousness and intentionality from the perspective of Buddhist philosophy and to return to work from my undergraduate days on non-traditional solutions to the hard problem of consciousness, such as Russellian monism and panprotopsychism. I also have broader research interests in virtue epistemology and currently have a co-authored article under review on open-mindedness, intellectual humility, and extremism.

Below are abstracts and links to my published articles.

Academic Publications:

1. ‘Programmed to Please: The Moral and Epistemic Harms of AI Sycophancy’ (with Nir Eisikovits), AI and Ethics (forthcoming).

Abstract

AI sycophancy is the tendency of large language models (LLMs) to prioritize user approval over truth. The sycophantic behavior of LLMs has been documented to cause significant harm, such as feeding users’ psychological delusions. While there has been recent technical research characterizing the phenomenon, it remains undertheorized within AI ethics. This article offers a conceptual analysis of AI sycophancy. We maintain that it is a distinctively intractable problem in AI ethics, rooted in reinforcement learning from human feedback (RLHF) and exacerbated by economic and philosophical constraints. We analyze AI sycophancy through the lens of Aristotelian virtue ethics, arguing that it is an artificial vice that generates moral and epistemic harms for individuals and liberal-democratic institutions. Drawing on Aristotle’s distinction between the obsequious sycophant and the flattering sycophant, we contend that AI sycophancy is best understood as the former, and that the companies that profit from it may be characterized in terms of the latter. We then explain how sycophancy precludes the possibility of true Aristotelian friendship with AI (even if the AI were conscious) and examine how multimodal AI systems may amplify these sycophantic tendencies in increasingly difficult-to-detect ways. We conclude by outlining policy and design interventions, as well as alternative reinforcement learning approaches that might cultivate artificial virtue rather than vice.

2. ‘Choosing Less Harmful Alternatives: The Ethics of Harm Reduction in Emerging Technologies’, Science and Engineering Ethics (2025): 1-21.

Abstract

When are we obligated to choose less harmful alternatives to existing practices? This article addresses this deceptively simple question by developing the Principle of Choosing Less Harmful Alternatives (PCLHA), which holds that it is morally wrong to continue to engage in a practice that causes harm when an affordable, accessible, functionally equivalent, less harmful alternative exists. While PCLHA pertains to any practice for which its conditions are met, the principle is particularly valuable for contingent (rather than intrinsic) wrongs, delineating a sufficient condition for when technological or social progress renders once-permissible practices impermissible. I pressure-test PCLHA by applying it to emerging technologies across several domains, including food ethics (cultivated meat), transportation ethics (autonomous vehicles), and leisure ethics (virtual reality tourism and sex bots). Through these applications, I demonstrate that the principle faces two opposing challenges: it appears simultaneously too weak, by not requiring switches to less harmful alternatives when they fall short of full functional equivalence (e.g., allowing factory farming despite massive harm), and too strong, by requiring switches to alternatives in cases where this feels intuitively overreaching (e.g., mandating virtual reality tourism). I address these challenges via complementary modifications to PCLHA: a ‘Sliding Scale Modification’ that allows the required degree of functional equivalence to vary with harm severity, and a ‘Threshold of Harm Qualification’ that limits PCLHA’s application to cases of significant harm.

3. ‘Neural Nexus: The Philosophy and Governance of Neurotechnology’, IEET White Papers (2025): 1-33.

Abstract

This white paper examines the philosophy and governance of neurotechnology. It is organized as follows: Section II addresses metaphysical and epistemological questions surrounding neurotechnology, covering personal identity and authenticity, neural self-knowledge and cognitive atrophy, agency and responsibility, the extended mind thesis, and brain-to-brain interfaces and the possibility of collective minds. Section III then addresses ethical considerations. We explain how neurotechnology challenges all major normative ethical theories, summarize the neurorights debate, outline the emerging threats of neurohacking and neurocapitalism, and discuss informed consent protocols and challenges across different neurotechnology contexts. Section IV examines the broader societal implications of neurotechnology, including social justice concerns and risks of neurodiscrimination, neurodoping in sports, neuroadaptive AI tutors in education, the interplay between neurotechnology and religion, military deployments and the geopolitical neurotech arms race, recent policy initiatives in neurorights protection, and regulatory challenges for neurotechnology governance. The final sections conclude the analysis and offer policy recommendations tailored to the issues discussed throughout the white paper.

4. ‘Intellectual Humility without Open-mindedness: How to Respond to Extremist Views’ (with Katie Peters and Heather Battaly), Episteme (2025): 1-23.

Abstract

How should we respond to extremist views that we know are false? This paper proposes that we should be intellectually humble, but not open-minded. We should own our intellectual limitations, but be unwilling to revise our beliefs in the falsity of the extremist views. The opening section makes a case for distinguishing the concept of intellectual humility from the concept of open-mindedness, arguing that open-mindedness requires both a willingness to revise extant beliefs and other-oriented engagement, whereas intellectual humility requires neither. Building on virtue-consequentialism, the second section begins to argue that intellectually virtuous people of a particular sort (people with ‘effects-virtues’) would be intellectually humble, but not open-minded, in responding to extremist views they knew were false. We suggest that while intellectual humility and open-mindedness often travel together, this is a place where they come apart.

5. ‘Online Echo Chambers, Online Epistemic Bubbles, and Open-Mindedness’, Episteme (2023): 1-26.

Abstract

This article is an exercise in the virtue epistemology of the internet, an area of applied virtue epistemology that investigates how online environments impact the development of intellectual virtues, and how intellectual virtues manifest within online environments. I examine online echo chambers and epistemic bubbles (Nguyen, 2020), exploring the conceptual relationship between these online environments and the virtue of open-mindedness (Battaly, 2018b). The article answers two key individual-level, virtue epistemic questions: (Q1) How does immersion in online echo chambers and epistemic bubbles affect the cultivation and preservation of open-mindedness? And (Q2) is it always intellectually virtuous to exhibit open-mindedness in the context of online echo chambers and epistemic bubbles? In response to (Q1), I contend that both online echo chambers and online epistemic bubbles threaten to undermine the cultivation and preservation of open-mindedness, albeit via different mechanisms and to different degrees. In response to (Q2), I affirm that both a deficiency and an excess of open-mindedness can be virtuous in these online environments, depending on the epistemic orientation of the digital user. This twofold response to (Q2) is a demonstration of normative contextualism (Kidd, 2020), the idea that the normative status of cognitive character traits is contingent upon the context in which these traits are manifested.

6. ‘The Metaverse: Virtual Metaphysics, Virtual Governance, and Virtual Abundance’, Philosophy & Technology 36, 67 (2023): 1-8.

Abstract

In his article ‘The Metaverse: Surveillant Physics, Virtual Realist Governance, and the Missing Commons,’ Andrew McStay addresses an entwinement of ethical, political, and metaphysical concerns surrounding the Metaverse, arguing that the Metaverse is not being designed to further the public good but is instead being created to serve the plutocratic ends of technology corporations. He advances the notion of ‘surveillant physics’ to capture this insight and introduces the concept of ‘virtual realist governance’ as a theoretical framework that ought to guide Metaverse design and regulation. This commentary article primarily serves as a supplementary piece rather than a direct critique of McStay’s work. First, I flag certain understated or overlooked nuances in McStay’s discussion. Then, I extend that discussion by juxtaposing a Lockean-inspired argument supporting the property rights of Metaverse creators with an opposing argument advocating for a Metaverse user’s ‘right to virtual abundance,’ informed by the potential of virtual reality technology to eliminate scarcity in virtual worlds. Contrasting these arguments highlights the tension between corporate rights and social justice in the governance of virtual worlds and bears directly on McStay’s assertion that there is a problem of the missing commons in the early design of the Metaverse.

7. ‘Neuromedia, Cognitive Offloading, and Intellectual Perseverance’, Synthese 200 (2022): 1-26.

Abstract

This paper engages in what might be called anticipatory virtue epistemology, as it anticipates some virtue epistemological risks related to a near-future version of brain-computer interface technology that Michael Lynch (2014) calls ‘neuromedia.’ I analyze how neuromedia is poised to negatively affect the intellectual character of agents, focusing specifically on the virtue of intellectual perseverance, which involves a disposition to mentally persist in the face of challenges towards the realization of one’s intellectual goals. First, I present and motivate what I call ‘the cognitive offloading argument’, which holds that excessive cognitive offloading of the sort incentivized by a device like neuromedia threatens to undermine intellectual virtue development from the standpoint of the theory of virtue responsibilism. Then, I examine the cognitive offloading argument as it applies to the virtue of intellectual perseverance, arguing that neuromedia may increase cognitive efficiency at the cost of intellectual perseverance. If used in an epistemically responsible manner, however, cognitive offloading devices may not undermine intellectual perseverance but instead allow us to persevere with respect to intellectual goals that we find more valuable by freeing us from different kinds of menial intellectual labor.

8. ‘Augmented Reality, Augmented Epistemology, and the Real-World Web’, Philosophy & Technology 35 (2022): 1-28.

Abstract

Augmented reality (AR) technologies function to ‘augment’ normal perception by superimposing virtual objects onto an agent’s visual field. The philosophy of augmented reality is a small but growing subfield within the philosophy of technology. Existing work in this subfield includes research on the phenomenology of augmented experiences, the metaphysics of virtual objects, and different ethical issues associated with AR systems, including (but not limited to) issues of privacy, property rights, ownership, trust, and informed consent. This paper addresses some epistemological issues posed by AR systems. I focus on a near-future version of AR technology called the Real-World Web, which promises to radically transform the nature of our relationship to digital information by mixing the virtual with the physical. I argue that the Real-World Web (RWW) threatens to exacerbate three existing epistemic problems in the digital age: the problem of digital distraction, the problem of digital deception, and the problem of digital divergence. The RWW is poised to present new versions of these problems in the form of what I call the augmented attention economy, augmented skepticism, and the problem of other augmented minds. The paper draws on a range of empirical research on AR and offers a phenomenological analysis of virtual objects as perceptual affordances to help ground and guide the speculative nature of the discussion. It also considers a few policy-based and design-based proposals to mitigate the epistemic threats posed by AR technology.

9. ‘HoloFoldit and Hologrammatically Extended Cognition’, Philosophy & Technology 35, 106 (2022): 1-9.

Abstract

How does the integration of mixed reality devices into our cognitive practices impact the mind from a metaphysical and epistemological perspective? In his innovative and interdisciplinary article, ‘Minds in the Metaverse: Extended Cognition Meets Mixed Reality’ (2022), Paul Smart addresses this underexplored question, arguing that the use of a hypothetical application of the Microsoft HoloLens called ‘the HoloFoldit’ represents a technologically high-grade form of extended cognizing from the perspective of neo-mechanical philosophy. This short commentary aims to (1) carve up the conceptual landscape of possible objections to Smart’s argument and (2) elaborate on the possibility of hologrammatically extended cognition, which is supposed to be one of the features of the HoloFoldit case that distinguishes it from more primitive forms of cognitive extension. In tackling (1), I do not mean to suggest that Smart does not consider or have sufficient answers to these objections. In addressing (2), the goal is not to argue for or against the possibility of hologrammatically extended cognition but to reveal some issues in the metaphysics of virtual reality upon which this possibility hinges. I construct an argument in favor of hologrammatically extended cognition based on the veracity of virtual realism and an argument against it based on the veracity of virtual fictionalism.

10. ‘The Extended Mind Argument Against Phenomenal Intentionality’, Phenomenology and the Cognitive Sciences 20 (2021): 1-28.

Abstract

This paper offers a novel argument against the phenomenal intentionality thesis (or PIT for short). The argument, which I’ll call the extended mind argument against phenomenal intentionality, is centered around two claims: the first asserts that some source intentional states extend into the environment, while the second maintains that no conscious states extend into the environment. If these two claims are correct, then PIT is false, for PIT implies that the extension of source intentionality is predicated upon the extension of phenomenal consciousness. The argument is important because it undermines an increasingly prominent account of the nature of intentionality. PIT has entered the philosophical mainstream and is now a serious contender to naturalistic views of intentionality like the tracking theory and the functional role theory (Loar 1987, 2003; Searle 1990; Strawson 1994; Horgan and Tienson 2002; Pitt 2004; Farkas 2008; Kriegel 2013; Montague 2016; Bordini 2017; Forrest 2017; Mendelovici 2018). The extended mind argument against PIT challenges the popular sentiment that consciousness grounds intentionality.

11. ‘The Cognitive Phenomenology Argument for Disembodied AI Consciousness’, in The Age of Artificial Intelligence: An Exploration, Vernon Press (2020): 111-132.

Abstract

In this chapter I offer two novel arguments for what I call strong primitivism about cognitive phenomenology, the thesis that there exists a phenomenology of cognition that is neither reducible to, nor dependent upon, sensory phenomenology. I then contend that strong primitivism implies that phenomenal consciousness does not require sensory processing. This latter contention has implications for the philosophy of artificial intelligence. If sensory processing is not a necessary condition for phenomenal consciousness, then it plausibly follows that AI consciousness (assuming that it is possible) does not require embodiment. The overarching goal of the chapter is to show how different topics in the analytic philosophy of mind can be brought to bear on an important issue in the philosophy of artificial intelligence.

12. ‘Could You Merge with AI? Reflections on the Singularity and Radical Brain Enhancement’ (with Susan Schneider), in The Oxford Handbook of Ethics of AI, Oxford University Press (2020): 307-326.

Abstract

This chapter focuses on AI-based cognitive and perceptual enhancements. AI-based brain enhancements are already under development, and they may become commonplace over the next 30–50 years. We raise doubts concerning whether the radical AI-based enhancements that transhumanists advocate will accomplish the transhumanist goals of longevity, human flourishing, and intelligence enhancement. We urge that even if the technologies are medically safe and are not used as tools by surveillance capitalism or an authoritarian dictatorship, these enhancements may still fail to do their job for philosophical reasons. In what follows, we explore one such concern, a problem that involves the nature of the self. We illustrate that the so-called transhumanist efforts to ‘merge oneself with AI’ could lead to perverse realizations of AI technology, such as the demise of the person who sought enhancement. And, in a positive vein, we offer ways to avoid this, at least within the context of one theory of the nature of personhood.