Raphaël Millière

Research Topics

Capacities and Limitations of AI Systems

Debates about the capacities of current AI systems, such as language models, are starkly polarized: some dismiss them as mere statistical pattern-matchers while others herald them as genuinely intelligent. This polarization reveals a methodological gap in how we evaluate these systems. In my research, I tackle both first-order questions about whether AI systems can be ascribed specific capacities (like syntactic competence or analogical reasoning) and second-order questions about how we should assess these capacities in the first place. Standard benchmarks in the AI industry often lack construct validity and are easy to game. My work proposes adapting best practices from cognitive science to design rigorous behavioral experiments with proper controls, as well as interventional experiments that provide insight into the causal mechanisms responsible for behavior.

Key Questions

  • How can we design evaluations that reliably distinguish between superficial heuristics and genuine cognitive capacities in AI systems?
  • Is there a double dissociation between performance and competence in AI systems analogous to that observed in human cognition?
  • To what extent are current AI models, particularly large language models, capable of genuine reasoning rather than sophisticated pattern matching based on statistical correlations?
  • To what extent can neural network architectures implement forms of systematic cognition previously thought to require symbolic processing, and how can we empirically test these capabilities?

Selected Works

  • LLMs as Models for Analogical Reasoning

    Finds that while advanced language models can match human performance on novel analogical reasoning tasks requiring flexible re-representation of semantic information, they exhibit different patterns of behavior in response to task variations and semantic distractors, suggesting they may use different underlying mechanisms than humans.

  • Anthropocentric Bias and The Possibility Of Artificial Cognition

    Identifies two types of anthropocentric bias in evaluating large language models' cognitive capacities – overlooking auxiliary factors impeding performance despite competence (Type-I) and dismissing non-human-like competent strategies (Type-II) – and proposes mitigating these biases through an empirically-driven, iterative approach combining behavioral experiments with mechanistic studies.

  • Language Models as Models of Language

    Critically examines the potential contributions of modern language models to theoretical linguistics and debates about linguistic competence and acquisition, particularly by challenging learnability claims about syntax and providing evidence that hierarchical syntactic knowledge can emerge from exposure to linguistic data without built-in syntactic constraints.

  • Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models

    Introduces BIG-bench, a diverse and challenging benchmark of over 200 tasks for evaluating large language models, finding that model performance improves with scale but remains far below human-level. Note: I co-designed the 'conceptual combination' task, which tests language models' ability to grasp novel combinations of concepts, including made-up words.

Philosophical Foundations of Interpretable AI

Artificial neural networks are often described as inscrutable black boxes. The emerging field of mechanistic interpretability aims to reverse-engineer these networks by uncovering the internal causal structures that generate their behavior. This approach seeks to identify both the features encoded in activation patterns and the algorithms implemented by specific circuits within the networks. Despite recent progress in mechanistic interpretability, the field still lacks robust conceptual foundations and methodological consensus. My AI2050 fellowship project, funded by Schmidt Sciences, aims to bridge this gap by drawing from the philosophy of science and causation. In particular, it addresses the risk of interpretability illusions – compelling but misleading explanations for the inner workings of neural networks.

Key Questions

  • What does it mean for neural networks to be 'interpretable,' and what are the criteria for adequate explanations of their behavior?
  • How can causal intervention techniques, as opposed to purely behavioral methods, provide deeper insights into the information processing mechanisms of deep neural networks?
  • Can interpretability methods yield illusory explanations of how neural networks process information?
  • What kind of functional primitives can bridge the explanatory gap between low-level neural mechanisms and high-level capabilities?

AI Safety and Alignment

As AI systems become more capable, we need to ensure they are safe, reliable, and aligned with human values. The main method for aligning the behavior of language models with desirable norms such as helpfulness, harmlessness, and honesty involves fine-tuning them based on human preferences. I argue that this approach is fundamentally shallow and vulnerable to adversarial manipulation that exploits conflicts between the norms of alignment – for example, where being helpful conflicts with avoiding harm. While humans can navigate such conflicts through explicit deliberation that weighs the contextual relevance of competing norms, language models currently lack a robust capacity for normative reasoning. By bridging technical research on alignment methods with insights from moral philosophy and psychology, I aim to understand why AI systems remain vulnerable to blatant adversarial attacks, and how we can develop less superficial alignment strategies.

Key Questions

  • How do conflicts between the norms of alignment create exploitable vulnerabilities in language models fine-tuned to respect these norms?
  • How can we systematically evaluate an AI system's robustness against different types of normative conflicts?
  • What would genuine normative deliberation look like in AI systems, and how could it be implemented in practice?
  • Can insights from moral philosophy and value pluralism help create AI systems capable of contextual ethical reasoning resilient to adversarial manipulation?

Selected Works

  • Normative Conflicts and Shallow AI Alignment

    Argues current alignment strategies for language models are fundamentally inadequate because they reinforce shallow behavioral dispositions that leave them vulnerable to the exploitation of conflicts between norms like helpfulness, honesty, and harmlessness.

  • The Alignment Problem in Context

    Reviews current strategies to align the behavior of language models with desirable norms, and investigates why they remain vulnerable to adversarial attacks that elicit potentially harmful outputs.

  • Adversarial Attacks on Image Generation With Made-Up Words

    Introduces two novel adversarial attacks on text-guided image generation models using made-up words, which can be used to bypass content filters and generate problematic images.

Consciousness and Self-Consciousness

In previous work, I investigated the nature and scope of conscious self-representation in ordinary experience as well as in specific conditions. I developed a pluralist account that distinguishes between several modes of self-representation across conscious thoughts, bodily experiences, and perceptual states – each of which can be disrupted either separately or jointly in anomalous cases, including psychopathologies and drug-induced states. I also argued against the long-standing claim that self-consciousness is constitutive of consciousness, which is either trivially true on a deflationary interpretation or unsupported on an inflationary interpretation. One upshot of my research is that it is both conceptually and nomologically possible to be conscious without being conscious of oneself in any way.

Key Questions

  • Is self-consciousness a necessary component of all conscious experience, or are 'selfless' states of consciousness genuinely possible?
  • What are the different varieties or dimensions of self-consciousness, and how can they be independently disrupted or modulated?
  • How can the study of altered states of consciousness (e.g., in psychopathologies and drug-induced states) inform our understanding of the nature of self-representation and ordinary experience?
  • What is the relationship between memory, self-representation, and the first-person perspective in reporting past conscious states?

Selected Works

  • Constitutive Self-Consciousness

    Argues that the claim that consciousness constitutively involves self-consciousness is either trivial on a deflationary interpretation or insufficiently supported on an inflationary interpretation.

  • Selfless Memories

    Argues that subjective reports of conscious experiences lacking self-consciousness can be credible under certain conditions and do not necessarily conflict with subjects' abilities to recall and report such experiences as their own.

  • The Varieties of Selflessness

    Distinguishes several forms of self-consciousness, showing through empirical evidence that each of them can be independently absent in certain conscious states, and further argues that there exist 'totally selfless' states of consciousness in which all of them are concurrently missing.

Research Outputs


Constitutive Self-Consciousness

Raphaël Millière (forthcoming)

Australasian Journal of Philosophy·Journal Paper

The claim that consciousness constitutively involves self-consciousness has a long philosophical history, and has received renewed support in recent years. My aim in this paper is to argue that this surprisingly enduring idea is misleading at best, and insufficiently supported at worst. I start by offering an elucidatory account of consciousness, and outlining a number of foundational claims that plausibly follow from it. I subsequently distinguish two notions of self-consciousness: consciousness of oneself and consciousness of one's experience. While 'self-consciousness' is often taken to refer to the former notion, the most common variant of the constitutive claim, on which I focus here, targets the latter. This claim can be further interpreted in two ways: on a deflationary reading, it falls within the scope of foundational claims about consciousness, while on an inflationary reading, it points to determinate aspects of phenomenology that are not acknowledged by the foundational claims as being aspects of all conscious mental states. I argue that the deflationary reading of the constitutive claim is plausible, but should be formulated without using a term as polysemous and suggestive as 'self-consciousness'; by contrast, the inflationary reading is not adequately supported, and ultimately rests on contentious intuitions about phenomenology. I conclude that we should abandon the idea that self-consciousness is constitutive of consciousness.

Interventionist Methods for Interpreting Deep Neural Networks

Raphaël Millière; Cameron Buckner (forthcoming)

Neurocognitive Foundations of Mind·Book Chapter

Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable "black boxes," making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with a focus on interventionist approaches inspired by causal explanation in philosophy of science. We argue that these methods offer a promising avenue for understanding how DNNs process information compared to merely behavioral benchmarking and correlational probing. We review key interventionist methods and illustrate their application through practical case studies. These methods allow researchers to identify and manipulate specific computational components within DNNs, providing insights into their causal structure and internal representations. We situate these approaches within the broader framework of causal abstraction, which aims to align low-level neural computations with high-level interpretable models. While acknowledging current limitations, we contend that interventionist methods offer a path towards more rigorous and theoretically grounded interpretability research, potentially informing both AI development and computational cognitive neuroscience.
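The basic logic of such interventions can be conveyed with a minimal sketch. This is a toy two-layer network with made-up random weights, not any model discussed in the chapter: the point is only the schema of activation patching, where an intermediate activation from one run is substituted into another run to test whether it causally mediates the output.

```python
# Toy sketch of an interventionist (activation-patching) analysis.
# All weights and inputs are illustrative; real studies intervene on
# activations inside trained transformers, not this toy network.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 2))   # hidden -> output weights

def forward(x, patched_hidden=None):
    """Run the toy network, optionally overwriting the hidden state."""
    h = np.tanh(x @ W1)
    if patched_hidden is not None:
        h = patched_hidden          # the causal intervention
    return h @ W2, h

x_clean = rng.normal(size=4)
x_corrupt = rng.normal(size=4)

y_clean, h_clean = forward(x_clean)
y_corrupt, _ = forward(x_corrupt)

# Patch the clean hidden activation into the corrupted run: if the output
# moves back toward the clean output, the hidden state causally mediates
# the behavior of interest.
y_patched, _ = forward(x_corrupt, patched_hidden=h_clean)

recovery = np.linalg.norm(y_patched - y_clean) < np.linalg.norm(y_corrupt - y_clean)
print("patching restores clean behavior:", recovery)
```

In practice the intervention targets a localized component (a head, a layer, a feature direction) rather than the whole hidden state, which is what licenses claims about specific causal structure.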

Language Models as Models of Language

Raphaël Millière (forthcoming)

The Oxford Handbook of the Philosophy of Linguistics·Book Chapter

This chapter critically examines the potential contributions of modern language models to theoretical linguistics. Despite their focus on engineering goals, these models' ability to acquire sophisticated linguistic knowledge from mere exposure to data warrants a careful reassessment of their relevance to linguistic theory. I review a growing body of empirical evidence suggesting that language models can learn hierarchical syntactic structure and exhibit sensitivity to various linguistic phenomena, even when trained on developmentally plausible amounts of data. While the competence/performance distinction has been invoked to dismiss the relevance of such models to linguistic theory, I argue that this assessment may be premature. By carefully controlling learning conditions and making use of causal intervention methods, experiments with language models can potentially constrain hypotheses about language acquisition and competence. I conclude that closer collaboration between theoretical linguists and computational researchers could yield valuable insights, particularly in advancing debates about linguistic nativism.

Normative Conflicts and Shallow AI Alignment

Raphaël Millière (forthcoming)

Philosophical Studies·Journal Paper

AI systems such as large language models (LLMs) have rapidly advanced in capabilities, raising urgent concerns about their safe deployment. This paper examines the value alignment problem for LLMs, arguing that current alignment strategies are fundamentally inadequate to prevent misuse. Despite efforts to instill norms of helpfulness, honesty, and harmlessness through fine-tuning based on human preferences, LLMs remain vulnerable to adversarial attacks that exploit conflicts between these norms. I contend that this vulnerability primarily reflects a fundamental limitation of existing alignment methods, which reinforce shallow behavioral dispositions rather than endowing LLMs with a capacity for genuine normative reasoning. Drawing on research in moral psychology, I show how humans' ability to engage in deliberative reasoning makes them more resilient to similar attacks. By contrast, LLMs lack a mechanism to detect and resolve normative conflicts rationally, leaving them susceptible to manipulation. Even recent advances in reasoning-focused LLMs have failed to address this vulnerability. This "shallow alignment" problem has significant implications for AI safety and regulation, suggesting that current approaches are insufficient to prevent harms from increasingly capable systems.

LLMs as Models for Analogical Reasoning

Sam Musker; Alex Duchnowski; Raphaël Millière; Ellie Pavlick (2025)

arXiv·Preprint/Other

Analogical reasoning – the capacity to identify and map structural relationships between different domains – is fundamental to human cognition and learning. Recent studies have shown that large language models (LLMs) can sometimes match humans in analogical reasoning tasks, opening the possibility that analogical reasoning might emerge from domain-general processes. However, it is still debated whether these emergent capacities are largely superficial and limited to simple relations seen during training or whether they rather encompass the flexible representational and mapping capabilities which are the focus of leading cognitive models of analogy. In this study, we introduce novel analogical reasoning tasks that require participants to map between semantically contentful words and sequences of letters and other abstract characters. This task necessitates the ability to flexibly re-represent rich semantic information – an ability which is known to be central to human analogy but which is thus far not well-captured by existing cognitive theories and models. We assess the performance of both human participants and LLMs on tasks focusing on reasoning from semantic structure and semantic content, introducing variations that test the robustness of their analogical inferences. Advanced LLMs match human performance across several conditions, though humans and LLMs respond differently to certain task variations and semantic distractors. Our results thus provide new evidence that LLMs might offer a how-possibly explanation of human analogical reasoning in contexts that are not yet well modeled by existing theories, but that even today's best models are unlikely to yield how-actually explanations.

Anthropocentric Bias and The Possibility Of Artificial Cognition

Raphaël Millière; Charles Rathkopf (2024)

arXiv·Preprint/Other

Evaluating the cognitive capacities of large language models (LLMs) requires overcoming not only anthropomorphic but also anthropocentric biases. This article identifies two types of anthropocentric bias that have been neglected: overlooking how auxiliary factors can impede LLM performance despite competence (Type-I), and dismissing LLM mechanistic strategies that differ from those of humans as not genuinely competent (Type-II). Mitigating these biases necessitates an empirically-driven, iterative approach to mapping cognitive tasks to LLM-specific capacities and mechanisms, which can be done by supplementing carefully designed behavioral experiments with mechanistic studies.

Drug-Induced Body Disownership

Raphaël Millière (2024)

Philosophical Perspectives on Psychedelic Psychiatry·Book Chapter

This chapter examines the debate on the phenomenology of body ownership – the putative experience of one's body as one's own. Proponents argue that this phenomenology exists and that it explains reports from pathological conditions and bodily illusions, but these reports face interpretative challenges. In this chapter, drug-induced experiences wherein subjects report 'disownership' of their body parts or whole body are considered. Unlike patient reports, these are from healthy people, with detailed descriptions obtained in controlled settings. Reports that describe subjective transitions between the experience of owning and disowning one's body provide novel tentative evidence for the view that ordinary experience involves a phenomenology of ownership that can be disrupted. While such evidence is not definitive, the debate could benefit from paying closer attention to drug-induced states. Disentangling the multi-faceted bodily effects of psychoactive compounds could further illuminate whether body ownership is a component of ordinary bodily awareness.

A Philosophical Introduction to Language Models – Part I: Continuity With Classic Debates

Raphaël Millière; Cameron Buckner (2024)

arXiv·Preprint/Other

Large language models like GPT-4 have achieved remarkable proficiency in a broad spectrum of language-based tasks, some of which are traditionally associated with hallmarks of human intelligence. This has prompted ongoing disagreements about the extent to which we can meaningfully ascribe any kind of linguistic or cognitive competence to language models. Such questions have deep philosophical roots, echoing longstanding debates about the status of artificial neural networks as cognitive models. This article – the first part of two companion papers – serves both as a primer on language models for philosophers, and as an opinionated survey of their significance in relation to classic debates in the philosophy of cognitive science, artificial intelligence, and linguistics. We cover topics such as compositionality, language acquisition, semantic competence, grounding, world models, and the transmission of cultural knowledge. We argue that the success of language models challenges several long-held assumptions about artificial neural networks. However, we also highlight the need for further empirical investigation to better understand their internal mechanisms. This sets the stage for the companion paper (Part II), which turns to novel empirical methods for probing the inner workings of language models, and new philosophical questions prompted by their latest developments.

A Philosophical Introduction to Language Models – Part II: The Way Forward

Raphaël Millière; Cameron Buckner (2024)

arXiv·Preprint/Other

In this paper, the second of two companion pieces, we explore novel philosophical questions raised by recent progress in large language models (LLMs) that go beyond the classical debates covered in the first part. We focus particularly on issues related to interpretability, examining evidence from causal intervention methods about the nature of LLMs' internal representations and computations. We also discuss the implications of multimodal and modular extensions of LLMs, recent debates about whether such systems may meet minimal criteria for consciousness, and concerns about secrecy and reproducibility in LLM research. Finally, we discuss whether LLM-like systems may be relevant to modeling aspects of human cognition, if their architectural characteristics and learning scenario are adequately constrained.

Philosophy of Cognitive Science in the Age of Deep Learning

Raphaël Millière (2024)

WIREs Cognitive Science·Journal Paper

Deep learning has enabled major advances across most areas of artificial intelligence research. This remarkable progress extends beyond mere engineering achievements and holds significant relevance for the philosophy of cognitive science. Deep neural networks have made significant strides in overcoming the limitations of older connectionist models that once occupied the center stage of philosophical debates about cognition. This development is directly relevant to long-standing theoretical debates in the philosophy of cognitive science. Furthermore, ongoing methodological challenges related to the comparative evaluation of deep neural networks stand to benefit greatly from interdisciplinary collaboration with philosophy and cognitive science. The time is ripe for philosophers to explore foundational issues related to deep learning and cognition; this perspective paper surveys key areas where their contributions can be especially fruitful.

Decoding In-Context Learning: Neuroscience-inspired Analysis of Representations in Large Language Models

Safoora Yousefi; Leo Betthauser; Hosein Hasanbeig; Raphaël Millière; Ida Momennejad (2024)

arXiv·Preprint/Other

Large language models (LLMs) exhibit remarkable performance improvement through in-context learning (ICL) by leveraging task-specific examples in the input. However, the mechanisms behind this improvement remain elusive. In this work, we investigate how LLM embeddings and attention representations change following in-context-learning, and how these changes mediate improvement in behavior. We employ neuroscience-inspired techniques such as representational similarity analysis (RSA) and propose novel methods for parameterized probing and measuring ratio of attention to relevant vs. irrelevant information in Llama-2 70B and Vicuna 13B. We designed two tasks with a priori relationships among their conditions: linear regression and reading comprehension. We formed hypotheses about expected similarities in task representations and measured hypothesis alignment of LLM representations before and after ICL as well as changes in attention. Our analyses revealed a meaningful correlation between improvements in behavior after ICL and changes in both embeddings and attention weights across LLM layers. This empirical framework empowers a nuanced understanding of how latent representations shape LLM behavior, offering valuable tools and insights for future research and practical applications.
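Representational similarity analysis, the core neuroscience-inspired technique mentioned in this abstract, compares the geometry of two sets of representations rather than the representations themselves. A minimal sketch, using random vectors as stand-ins for model activations before and after in-context learning (the data, dimensions, and Pearson correlation here are illustrative simplifications, not the paper's actual setup):

```python
# Minimal sketch of representational similarity analysis (RSA):
# build a dissimilarity matrix per set of embeddings, then correlate
# the matrices to compare representational geometries.
import numpy as np

def rdm(embeddings):
    """Representational dissimilarity matrix: 1 - cosine similarity."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return 1.0 - normed @ normed.T

def rsa_score(emb_a, emb_b):
    """Correlate the upper triangles of two RDMs (Pearson, for simplicity)."""
    iu = np.triu_indices(len(emb_a), k=1)
    return np.corrcoef(rdm(emb_a)[iu], rdm(emb_b)[iu])[0, 1]

rng = np.random.default_rng(1)
before = rng.normal(size=(10, 32))                 # 10 task conditions, 32-d embeddings
after = before + 0.1 * rng.normal(size=(10, 32))   # slightly shifted representations
unrelated = rng.normal(size=(10, 32))

print(rsa_score(before, after))      # high: similar representational geometry
print(rsa_score(before, unrelated))  # near zero: unrelated geometry
```

Because only the pairwise-distance structure is compared, RSA lets one relate representations of different dimensionality or from different systems, which is why it transfers naturally from neuroscience to LLM layers.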

The Alignment Problem in Context

Raphaël Millière (2023)

arXiv·Preprint/Other

A core challenge in the development of increasingly capable AI systems is to make them safe and reliable by ensuring their behaviour is consistent with human values. This challenge, known as the alignment problem, does not merely apply to hypothetical future AI systems that may pose catastrophic risks; it already applies to current systems, such as large language models, whose potential for harm is rapidly increasing. In this paper, I assess whether we are on track to solve the alignment problem for large language models, and what that means for the safety of future AI systems. I argue that existing strategies for alignment are insufficient, because large language models remain vulnerable to adversarial attacks that can reliably elicit unsafe behaviour. I offer an explanation of this lingering vulnerability on which it is not simply a contingent limitation of current language models, but has deep technical ties to a crucial aspect of what makes these models useful and versatile in the first place – namely, their remarkable aptitude to learn "in context" directly from user instructions. It follows that the alignment problem is not only unsolved for current AI systems, but may be intrinsically difficult to solve without severely undermining their capabilities. Furthermore, this assessment raises concerns about the prospect of ensuring the safety of future and more capable AI systems.

The Vector Grounding Problem

Dimitri Coelho Mollo; Raphaël Millière (2023)

arXiv·Preprint/Other

The remarkable performance of large language models (LLMs) on complex linguistic tasks has sparked a lively debate on the nature of their capabilities. Unlike humans, these models learn language exclusively from textual data, without direct interaction with the real world. Nevertheless, they can generate seemingly meaningful text about a wide range of topics. This impressive accomplishment has rekindled interest in the classical 'Symbol Grounding Problem,' which questioned whether the internal representations and outputs of classical symbolic AI systems could possess intrinsic meaning. Unlike these systems, modern LLMs are artificial neural networks that compute over vectors rather than symbols. However, an analogous problem arises for such systems, which we dub the Vector Grounding Problem. This paper has two primary objectives. First, we differentiate various ways in which internal representations can be grounded in biological or artificial systems, identifying five distinct notions discussed in the literature: referential, sensorimotor, relational, communicative, and epistemic grounding. Unfortunately, these notions of grounding are often conflated. We clarify the differences between them, and argue that referential grounding is the one that lies at the heart of the Vector Grounding Problem. Second, drawing on theories of representational content in philosophy and cognitive science, we propose that certain LLMs, particularly those fine-tuned with Reinforcement Learning from Human Feedback (RLHF), possess the necessary features to overcome the Vector Grounding Problem, as they stand in the requisite causal-historical relations to the world that underpin intrinsic meaning. We also argue that, perhaps unexpectedly, multimodality and embodiment are neither necessary nor sufficient conditions for referential grounding in artificial systems.

Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models

Aarohi Srivastava; Abhinav Rastogi; Abhishek Rao; et al. (2023)

Transactions on Machine Learning Research·Journal Paper

Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 450 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.

Adversarial Attacks on Image Generation With Made-Up Words

Raphaël Millière (2022)

arXiv·Preprint/Other

Text-guided image generation models can be prompted to generate images using nonce words adversarially designed to robustly evoke specific visual concepts. Two approaches for such generation are introduced: macaronic prompting, which involves designing cryptic hybrid words by concatenating subword units from different languages; and evocative prompting, which involves designing nonce words whose broad morphological features are similar enough to that of existing words to trigger robust visual associations. The two methods can also be combined to generate images associated with more specific visual concepts. The implications of these techniques for the circumvention of existing approaches to content moderation, and particularly the generation of offensive or harmful images, are discussed.
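The mechanics of macaronic prompting can be conveyed with a toy string-manipulation sketch: splice subword fragments from translations of a target concept into a hybrid nonce word. The fragments, languages, and splice points below are illustrative guesses, not the paper's actual adversarial prompts.

```python
# Toy sketch of 'macaronic' nonce-word construction: concatenate
# subword fragments from translations of a concept across languages.
# The word list and fixed cut length are illustrative simplifications.
translations = {"bird": ["Vogel", "uccello", "oiseau", "pájaro"]}

def macaronic(concept, cut=3):
    """Build a hybrid nonce word from the first `cut` letters of each translation."""
    return "".join(word[:cut] for word in translations[concept])

print(macaronic("bird"))  # -> 'Voguccoispáj'
```

The adversarial force of such words comes from the model's multilingual subword statistics: no content filter keyed to surface vocabulary will match the nonce word, yet its fragments still evoke the target concept.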

Deep Learning and Synthetic Media

Raphaël Millière (2022)

Synthese·Journal Paper

Deep learning algorithms are rapidly changing the way in which audiovisual media can be produced. Synthetic audiovisual media generated with deep learning – often subsumed colloquially under the label "deepfakes" – have a number of impressive characteristics; they are increasingly trivial to produce, and can be indistinguishable from real sounds and images recorded with a sensor. Much attention has been dedicated to ethical concerns raised by this technological development. Here, I focus instead on a set of issues related to the notion of synthetic audiovisual media, its place within a broader taxonomy of audiovisual media, and how deep learning techniques differ from more traditional approaches to media synthesis. After reviewing important etiological features of deep learning pipelines for media manipulation and generation, I argue that "deepfakes" and related synthetic media produced with such pipelines do not merely offer incremental improvements over previous methods, but challenge traditional taxonomical distinctions, and pave the way for genuinely novel kinds of audiovisual media.

Drug-Induced Alterations of Bodily Awareness

Raphaël Millière (2022)

The Routledge Handbook of Bodily Awareness·Book Chapter

Psychoactive compounds can have more or less noticeable effects on conscious experience, through neuropharmacological pathways involving activation (agonism) or blockade (antagonism) of different neurotransmitter receptors. Some of these compounds are specifically known to alter or disrupt bodily awareness in various ways. Philosophical and empirical discussions of bodily awareness have mostly focused so far on bodily disorders -- such as somatoparaphrenia -- and bodily illusions induced in an experimental setting -- such as the rubber hand illusion. However, drug-induced alterations of bodily awareness also include a wide range of conditions that are highly relevant to these discussions. The purpose of this chapter is to provide an overview of these lesser-known bodily effects, and to outline some ways in which they can bear on recent debates regarding bodily awareness and bodily ownership.

Selfless Memories

Raphaël Millière; Albert Newen (2022)

Erkenntnis·Journal Paper

Many authors claim that being conscious constitutively involves being self-conscious, or conscious of oneself. This claim appears to be threatened by reports of `selfless' episodes, or conscious episodes lacking self-consciousness, recently described in a number of pathological and nonpathological conditions. However, the credibility of these reports has in turn been challenged on the following grounds: remembering and reporting a past conscious episode as an episode that one went through is only possible if one was conscious of oneself while undergoing it. Call this the Memory Challenge. This paper argues that the Memory Challenge fails to undermine the credibility of reports of selfless episodes, because it rests on problematic assumptions about episodic memory. The paper further argues that we should distinguish between several kinds of self-representation that may be involved in the process of episodic remembering, and that once we do so, it is no longer mysterious how one could accurately remember and report a selfless episode as an episode that one went through. Thus, we should take reports of this kind seriously, and view them as credible counter-examples to the claim that consciousness constitutively involves self-consciousness.

The Multi-Dimensional Approach to Drug-Induced States: A Commentary on Bayne and Carter's "Dimensions of Consciousness and the Psychedelic State"

Martin Fortier-Davy; Raphaël Millière (2020)

Neuroscience of Consciousness·Journal Paper

Bayne and Carter argue that the mode of consciousness induced by psychedelic drugs does not fit squarely within the traditional account of modes as levels of consciousness, and they favor instead a multi-dimensional account according to which modes of consciousness differ along several dimensions -- none of which warrants a linear ordering of modes. We discuss the assumption that psychedelic drugs induce a single or paradigmatic mode of consciousness, as well as conceptual issues related to Bayne and Carter's main argument against the traditional account. Finally, we raise a set of questions about the individuation of dimensions selected to differentiate modes of consciousness that could be addressed in future discussions of the multi-dimensional account.

The Varieties of Selflessness

Raphaël Millière (2020)

Philosophy and the Mind Sciences·Journal Paper

Many authors argue that conscious experience involves a sense of self or self-consciousness. According to the strongest version of this claim, there can be no selfless states of consciousness, namely states of consciousness that lack self-consciousness altogether. Disagreements about this claim are likely to remain merely verbal as long as the target notion of self-consciousness is not adequately specified. After distinguishing six notions of self-consciousness commonly discussed in the literature, I argue that none of the corresponding features is necessary for consciousness, because there are states of consciousness in which each of them is plausibly missing. Such states can be said to be at least partially selfless, since they lack at least one of the ways in which one could be self-conscious. Furthermore, I argue that there is also preliminary empirical evidence that some states of consciousness lack all of these six putative forms of self-consciousness. Such states might be totally selfless, insofar as they lack all the ways in which one could be self-conscious. I conclude by addressing four objections to the possibility and reportability of totally selfless states of consciousness.

Radical Disruptions of Self-Consciousness

Raphaël Millière; Thomas Metzinger (2020)

Philosophy and the Mind Sciences·Journal Paper

This special issue is about something most of us might find very hard to conceive: states of consciousness in which self-consciousness is radically disrupted or altogether missing.

Self in Mind: A Pluralist Account of Self-Consciousness

Raphaël Millière (2020)

Thesis·Thesis

This thesis investigates the relationship between consciousness and self-consciousness. I consider two broad claims about this relationship: a constitutive claim, according to which all conscious experiences constitutively involve self-consciousness; and a typicalist claim, according to which ordinary conscious experiences contingently involve self-consciousness. Both of these claims call for elucidation of the relevant notions of consciousness and self-consciousness. In the first part of the thesis ('The Myth of Constitutive Self-Consciousness'), I critically examine the constitutive claim. I start by offering an elucidatory account of consciousness, and outlining a number of foundational claims that plausibly follow from it. I subsequently distinguish between two concepts of self-consciousness: consciousness of one's experience, and consciousness of oneself (as oneself). Each of these concepts yields a distinct variant of the constitutive claim. In turn, each resulting variant of the constitutive claim can be interpreted in two ways: on a 'minimal' or deflationary reading, they fall within the scope of foundational claims about consciousness, while on a 'strong' or inflationary reading, they point to determinate aspects of phenomenology that are not acknowledged by the foundational claims as being aspects of all conscious mental states. I argue that the deflationary readings of either variant of the constitutive claim are plausible and illuminating, but would ideally be formulated without using a term as polysemous as 'self-consciousness'; by contrast, the inflationary readings of either variant are not adequately supported. In the second part of the thesis ('Self-Consciousness in the Real World'), I focus on the second concept of self-consciousness, or consciousness of oneself as oneself. 
Drawing upon empirical evidence, I defend a pluralist account of self-consciousness so construed, according to which there are several ways in which one can be conscious of oneself as oneself -- through conscious thoughts, bodily experiences and perceptual experiences -- that make distinct determinate contributions to one's phenomenology. This pluralist account provides us with the resources to vindicate the typicalist claim according to which consciousness of oneself as oneself -- a sense of self -- is pervasive in ordinary conscious experiences, as a matter of contingent empirical fact. It also provides us with the resources to assess the possibility that a subject might be conscious without being conscious of herself as herself in any way.

Are There Degrees of Self-Consciousness?

Raphaël Millière (2019)

Journal of Consciousness Studies·Journal Paper

It is widely assumed that ordinary conscious experience involves some form of sense of self or consciousness of oneself. Moreover, this claim is often restricted to a `thin' or `minimal' notion of self-consciousness, or even `the simplest form of self-consciousness', as opposed to more sophisticated forms of self-consciousness which are not deemed ubiquitous in ordinary experience. These formulations suggest that self-consciousness comes in degrees, and that individual subjects may differ with respect to the degree of self-consciousness they exhibit at a given time. In this article, I critically examine this assumption. I consider what the claim that self-consciousness comes in degrees may mean, raise some challenges against the different versions of the claim, and conclude that none of them is both coherent and particularly plausible.

Neural Correlates of the DMT Experience Assessed with Multivariate EEG

Christopher Timmermann; Leor Roseman; Michael Schartner; et al. (2019)

Scientific Reports·Journal Paper

Studying transitions in and out of the altered state of consciousness caused by intravenous (IV) N,N-Dimethyltryptamine (DMT, a fast-acting tryptamine psychedelic) offers a safe and powerful means of advancing knowledge on the neurobiology of conscious states. Here we sought to investigate the effects of IV DMT on the power spectrum and signal diversity of human brain activity recorded via multivariate EEG in 13 participants (6 female, 7 male), and plot relationships between subjective experience, brain activity and drug plasma concentrations across time. Compared with placebo, DMT markedly reduced oscillatory power in the alpha and beta bands and robustly increased spontaneous signal diversity. Time-referenced and neurophenomenological analyses revealed close relationships between changes in various aspects of subjective experience and changes in brain activity. Importantly, the emergence of oscillatory activity within the delta and theta frequency bands was found to correlate with the peak of the experience -- particularly its eyes-closed visual component. These findings highlight marked changes in oscillatory activity and signal diversity with DMT that parallel broad and specific components of the subjective experience, thus advancing our understanding of the neurobiological underpinnings of immersive states of consciousness.

Psychedelics, Meditation and Self-Consciousness

Raphaël Millière; Robin L. Carhart-Harris; Leor Roseman; Fynn-Mathis Trautwein; Aviva Berkovich-Ohana (2018)

Frontiers in Psychology·Journal Paper

In recent years, the scientific study of meditation and psychedelic drugs has seen remarkable developments. The increased focus on meditation in cognitive neuroscience has led to a cross-cultural classification of standard meditation styles validated by functional and structural neuroanatomical data. Meanwhile, the renaissance of psychedelic research has shed light on the neurophysiology of altered states of consciousness induced by classical hallucinogens, such as psilocybin and LSD, whose effects are mainly mediated by agonism of serotonin receptors. Few attempts have been made at bridging these two domains of inquiry, despite intriguing evidence of overlap between the phenomenology and neurophysiology of meditation practice and psychedelic states. In particular, many contemplative traditions explicitly aim at dissolving the sense of self by eliciting altered states of consciousness through meditation, while classical psychedelics are known to produce significant disruptions of self-consciousness, a phenomenon known as drug-induced ego dissolution. In this article, we discuss available evidence regarding convergences and differences between phenomenological and neurophysiological data on meditation practice and psychedelic drug-induced states, with a particular emphasis on alterations of self-experience. While both meditation and psychedelics may disrupt self-consciousness and underlying neural processes, we emphasize that neither meditation nor psychedelic states can be conceived as simple, uniform categories. Moreover, we suggest that there are important phenomenological differences even between conscious states described as experiences of self-loss. As a result, we propose that self-consciousness may be best construed as a multidimensional construct, and that "self-loss", far from being an unequivocal phenomenon, can take several forms. Indeed, various aspects of self-consciousness, including narrative aspects linked to autobiographical memory, self-related thoughts and mental time travel, and embodied aspects rooted in multisensory processes, may be differently affected by psychedelics and meditation practices. Finally, we consider long-term outcomes of experiences of self-loss induced by meditation and psychedelics on individual traits and prosocial behavior. We call for caution regarding the problematic conflation of temporary states of self-loss with "selflessness" as a behavioral or social trait, although there is preliminary evidence that correlations between short-term experiences of self-loss and long-term trait alterations may exist.

Looking For The Self: Phenomenology, Neurophysiology and Philosophical Significance of Drug-induced Ego Dissolution

Raphaël Millière (2017)

Frontiers in Human Neuroscience·Journal Paper

There is converging evidence that high doses of hallucinogenic drugs can produce significant alterations of self-experience, described as the dissolution of the sense of self and the loss of boundaries between self and world. This article discusses the relevance of this phenomenon, known as `drug-induced ego dissolution', for cognitive neuroscience, psychology and philosophy of mind. Data from self-report questionnaires suggest that three neuropharmacological classes of drugs can induce ego dissolution: classical psychedelics, dissociative anesthetics and agonists of the kappa opioid receptor. While these substances act on different neurotransmitter receptors, they all produce strong subjective effects that can be compared to the symptoms of acute psychosis, including ego dissolution. It has been suggested that neuroimaging of drug-induced ego dissolution can indirectly shed light on the neural correlates of the self. While this line of inquiry is promising, its results must be interpreted with caution. First, neural correlates of ego dissolution might reveal the necessary neurophysiological conditions for the maintenance of the sense of self, but it is more doubtful that this method can reveal its minimally sufficient conditions. Second, it is necessary to define the relevant notion of self at play in the phenomenon of drug-induced ego dissolution. This article suggests that drug-induced ego dissolution consists in the disruption of subpersonal processes underlying the `minimal' or `embodied' self, i.e. the basic experience of being a self rooted in multimodal integration of self-related stimuli. This hypothesis is consistent with Bayesian models of phenomenal selfhood, according to which the subjective structure of conscious experience ultimately results from the optimization of predictions in perception and action. Finally, it is argued that drug-induced ego dissolution is also of particular interest for philosophy of mind. On the one hand, it challenges theories according to which consciousness always involves self-awareness. On the other hand, it suggests that ordinary conscious experience might involve a minimal kind of self-awareness rooted in multisensory processing, which is what appears to fade away during drug-induced ego dissolution.

Ingarden's Combinatorial Analysis of The Realism-Idealism Controversy

Raphaël Millière (2016)

Form(s) and Modes of Being: The Ontology of Roman Ingarden·Book Chapter

The Controversy over the Existence of the World (henceforth Controversy) is the magnum opus of Polish philosopher Roman Ingarden. Despite the renewed interest in Ingarden's pioneering ontological work within analytic philosophy, little attention has been dedicated to Controversy's main goal, clearly indicated by the very title of the book: finding a solution to the centuries-old philosophical controversy about the ontological status of the external world. There are at least three reasons for this relative indifference. First, even at the time when the book was published, the Controversy was no longer seen as a serious polemical topic, whether it was disqualified as an archaic metaphysical pseudo-problem, or taken to be the last remnant of an antiscientific approach to philosophy culminating in idealism and relativism. Second, Ingarden's reasoning on the matter is highly complex, at times misleading, and even occasionally faulty. Finally, his analysis is not only incomplete -- Controversy being unfinished -- but also arguably aporetic. One may wonder, then, why it is still worth excavating this mammoth treatise to study an issue apparently no longer relevant to contemporary philosophy. Aside from historical and exegetical purposes, which are of course very interesting in their own right, Ingarden's treatment of the Controversy remains one of the most detailed and ambitious ontological undertakings of the twentieth century. Not only does it lay out an incredibly detailed map of possible solutions to the Controversy, but it also tries to show why the latter is a genuine and fundamental problem that owes its hasty disqualification to various oversimplifications over the course of the history of philosophy. In this chapter, I first give an overview of Ingarden's method, which relies mainly on a combinatorial analysis. Then, I summarize his examination of possible solutions to the Controversy, and determine which ones can be ruled out on ontological grounds.
Finally, I explain why this ambitious project ultimately leads to a theoretical impasse, leaving Ingarden unable to come up with a definitive solution to the Controversy -- quite apart from the fact that the book is unfinished. I argue that his analysis of the problem nevertheless yields a more modest but valuable result.