Artificial intelligence and the future of consciousness science: ethical and policy reflections

Marcello Ienca

Technical University of Munich, Laboratory of Ethics of AI & Neuroscience, School of Medicine and Health, Munich, Germany; President-Elect of the International Neuroethics Society (INS)

Joseph J. Fins

Weill Cornell Medical College, Division of Medical Ethics, New York, NY, USA; Yale Law School, Solomon Center for Health Law & Policy, New Haven, CT, USA; Immediate Past President of the International Neuroethics Society (INS)

Debra JH Mathews

Berman Institute of Bioethics and School of Medicine, Johns Hopkins University, Baltimore, MD, USA; Current President of the International Neuroethics Society (INS)


The convergence of AI and consciousness science demands anticipatory, globally inclusive governance to promote ethical progress and address issues of neuroprivacy, bias, and equity, say Marcello Ienca, Joseph Fins, and Debra Mathews of the International Neuroethics Society.

DOI: https://doi.org/10.25453/plabs.30710924

Read further: Frontiers in Science article hub

Published on November 25th, 2025

Scientific interest in consciousness is rapidly accelerating, propelled by new neuroimaging, electrophysiological, and computational tools. As Cleeremans et al. note in their Frontiers in Science lead article (1), understanding consciousness is one of the most significant challenges of this century—and it is gaining increased urgency given recent advances in artificial intelligence (AI), including large language models (LLMs) and neuromorphic systems. AI technologies are reshaping how scientists study, interrogate, model, and conceptualize consciousness—but these methods are advancing faster than the ethical and regulatory frameworks needed to guide their use. 

Writing in our personal capacities as past, present, and future Presidents of the International Neuroethics Society (INS), we see a critical opportunity to address the emerging issues posed by AI’s integration into consciousness research through interdisciplinary, global dialogue at the intersection of neuroscience, medicine, law, AI, and ethics. Here, we highlight four key areas where policy action is needed at the intersection of AI and consciousness science: (i) comprehensive regulatory and ethical guidelines for AI used in clinical consciousness assessment; (ii) international research standards and ethics frameworks for synthetic consciousness; (iii) robust data governance to ensure neuroprivacy; and (iv) bias-mitigation strategies to promote global equity.

These goals are achievable only through anticipatory, inclusive, and equitable global governance. Sustained, multinational collaboration—engaging diverse publics and amplifying marginalized voices—is therefore essential to shape policies that are truly representative and just. 

Guidelines for clinical consciousness assessment 

Novel tools to detect consciousness in humans are raising questions about what consciousness is and who possesses it, and many of these tools increasingly incorporate AI. These developments are particularly relevant for the study of cognitive motor dissociation (CMD), a condition in which individuals appear unresponsive yet demonstrate covert awareness. Recent estimates suggest that up to one in four patients thought to be in a coma or vegetative state may retain signs of consciousness (2, 3), reframing CMD as a public health concern that demands careful assessment and communication.  

In clinical contexts, overinterpreting algorithmic classifications of consciousness may lead to false attributions of awareness or, worse, to neglecting conscious patients who fall outside algorithmic thresholds. Such Type II errors, in which consciousness goes undetected when it is actually present, would be the gravest sort of omission. Given our limited understanding of the nature and markers of consciousness, AI should be treated as a supportive instrument for discovery and analysis, not a substitute for conceptual clarity or diagnostic precision.

To avoid premature claims that risk misrepresenting patients, regulatory and professional bodies should work together to establish clinical standards for the use of AI in consciousness assessment—defining validation criteria, ensuring clinician oversight, and aligning practices across health systems.   

Research standards for synthetic consciousness 

While AI promises new diagnostic and analytical tools, it also risks blurring the boundary between computational models of consciousness and emerging entities that might be conscious. AI systems are increasingly portrayed as capable of replicating or even surpassing human intelligence (4), and questions are regularly raised about whether some of them, such as simulative computational models, have achieved (or will soon achieve) consciousness. Yet others contend that we remain far from a mechanistic account of conscious experience, be it in humans or AI agents (5–7). As long as we cannot quantify consciousness in humans with methodological robustness, claims about conscious AI systems are necessarily premature. 

Generative LLMs produce impressive linguistic outputs without self-representation or subjectivity, historically regarded as markers of consciousness. Although there is also a risk of Type II error here, on current evidence treating these behaviors as proof of consciousness risks confusing simulation with instantiation. Researchers must resist the fallacy of analogical reasoning, in which functional similarity is mistaken for phenomenological equivalence. LLM architectures add a new layer of epistemic complexity to the question of consciousness because they lack the biological substrates and integrative dynamics of the human brain. This does not mean that synthetic or artificial consciousness is impossible in principle, as some have argued (5). Rather, we currently lack the theoretical and empirical foundations to identify or verify it. Exaggerated narratives can distort research priorities and influence policy prematurely. The profound challenge will be knowing when the line genuinely blurs and then disappears.

Even if AI systems reproduce behavioral markers of consciousness, the mechanisms through which they operate could be dispositive for how they are categorized. Neural processes in living organisms are embodied, affective, and multifaceted; AI systems based on LLMs are text-centric, disembodied, and monofaceted (8). These divergent substrates may generate functionally similar but phenomenologically distinct outputs. Understanding such differences is essential for assessing moral relevance and agentic responsibility. The boundary between complex computation and genuine experience remains philosophically uncertain. Comparative research between in silico and in vitro models, for instance on neuromorphic chips and brain organoids, may clarify which architectures could, in principle, support consciousness (9).

Thus, to avoid anthropomorphizing AI, shared methodological standards and ethical frameworks must be established for synthetic consciousness. This will require broad international collaboration among neuroscientists, computer scientists, clinicians, philosophers, and neuroethicists (8, 10). International funding agencies, professional societies, and journals could reinforce these efforts by requiring ethical review and transparent disclosure of claims related to synthetic consciousness before publication or deployment. 

Neuroprivacy protection  

Progress in both human and synthetic consciousness research depends increasingly on large, shared datasets spanning countries and institutions. Neural and behavioral data are highly sensitive: they can reveal private cognitive states and enable personal identification (11, 12). Cross-border data sharing thus raises pressing concerns about neuroprivacy, data sovereignty, and informed consent (13).  

To address this challenge as consciousness science advances, robust, global data governance strategies must be implemented. Data-sharing consortia and research collaborations should integrate neuroprivacy provisions consistent with international frameworks (such as the Council of Europe's Convention 108), ensure transparent consent procedures, and establish oversight mechanisms to prevent misuse of neural data.

Bias mitigation and global equity 

AI models trained on large web corpora may inherit ableist, gendered, and cultural biases (14). Moreover, because most LLMs are trained on a limited set of languages, linguistic nuance, cultural understandings, and expressions of consciousness may be missing from the datasets, risking epistemic injustice relative to a core feature of human experience. When such biased or incomplete models are embedded into assistive devices, they can misread or misrepresent patients’ experiences and diminish their autonomy. Safeguarding personal autonomy demands transparent model documentation, bias evaluation, and clinical oversight. Additionally, access to the computational infrastructure enabling consciousness research remains uneven (15). The concentration of resources in a few regions (16) risks creating further cultural and geopolitical asymmetries in defining what counts as consciousness.

Thus, clear bias-mitigation strategies are required to promote global equity. Transnational organizations could help develop equitable governance frameworks by supporting multilingual model training; expanding Global South participation in, and leadership of, research and research datasets; and ensuring inclusive representation in standard-setting processes.

Toward anticipatory neuroethics governance 

Consciousness science stands at the crossroads of neuroscience, medicine, computation, and neuroethics. The convergence of this contested epistemic space with AI, another epistemologically and morally complex domain, requires anticipatory governance and strong ethics frameworks. This means not only identifying ethical and societal implications before they crystallize into problems but also engaging with and listening to the communities most likely to enjoy the benefits and suffer the harms at this intersection. Future advances in neuroscience and AI will require engaging a broader range of actors in ethical deliberation about research design, technology development, and governance (17); understanding their interests, needs, and values; and ensuring that research, development, and governance are responsive and accountable. Cleeremans et al. bring the opportunities and challenges ahead into sharp relief, and a collective response is required. We propose that international societies such as the INS, alongside policymakers, programmers, and clinicians, pursue coordinated, accountable, and forward-looking governance to address the evolving challenges at the intersection of consciousness and AI. Such a global effort is critical to help ensure that scientific advances in these fields are guided toward societal benefit and enhance, rather than diminish, human dignity.


Author disclaimer 

Each author currently holds or has held leadership roles within the International Neuroethics Society (INS). The views expressed here are solely those of the authors and do not represent official positions or policies of the INS. 

Copyright statement 

Copyright: © 2025 [author(s)]. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in Frontiers Policy Labs is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.      

Generative AI statement 

The authors declared that generative AI was used in the creation of this manuscript. AI software assistance (Grammarly Pro-2025) was used for proof-reading and formatting purposes. No AI tool was used for content generation. 


References

  1. Cleeremans A, Mudrik L, Seth AK. Consciousness science: where are we, where are we going, and what if we get there? Front Sci (2025) 3:1546279. doi: 10.3389/fsci.2025.1546279 

  2. Owen AM, Coleman MR, Boly M, Davis MH, Laureys S, Pickard JD. Detecting awareness in the vegetative state. Science (2006) 313(5792):1402. doi: 10.1126/science.1130197  

  3. Bodien YG, Allanson J, Cardone P, Bonhomme A, Carmona J, Chatelle C, et al. Cognitive motor dissociation in disorders of consciousness. N Engl J Med (2024) 391(7):598–608. doi: 10.1056/NEJMoa2400645

  4. Kim H, Yi X, Yao J, Lian J, Huang M, Duan S, et al. The road to artificial superintelligence: a comprehensive survey of superalignment. arXiv [preprint] (2024). doi: 10.48550/arXiv.2412.16468  

  5. Floridi L. AI and semantic pareidolia: when we see consciousness where there is none. SSRN [preprint] (2025). Available at: https://dx.doi.org/10.2139/ssrn.5309682  

  6. Chalmers DJ. Facing up to the problem of consciousness. J Conscious Stud (1995) 2(3):200–19. Available at: https://consc.net/papers/facing.pdf

  7. Chalmers DJ. How can we construct a science of consciousness? Ann N Y Acad Sci (2013) 1303(1):25–35. doi: 10.1111/nyas.12166 

  8. Aru J, Larkum ME, Shine JM. The feasibility of artificial consciousness through the lens of neuroscience. Trends Neurosci (2023) 46(12):1008–17. doi: 10.1016/j.tins.2023.09.009 

  9. Butlin P, Long R, Elmoznino E, Bengio Y, Birch J, Constant A, et al. Consciousness in artificial intelligence: insights from the science of consciousness. arXiv [preprint] (2023). doi: 10.48550/arXiv.2308.08708 

  10. Salles A, Evers K, Farisco M. Anthropomorphism in AI. AJOB Neurosci (2020) 11(2):88–95. doi: 10.1080/21507740.2020.1740350 

  11. Magee P, Ienca M, Farahany N. Beyond neural data: cognitive biometrics and mental privacy. Neuron (2024) 112(18):3017–28. doi: 10.1016/j.neuron.2024.09.004 

  12. Ienca M, Fins JJ, Jox RJ, Jotterand F, Voeneky S, Andorno R, et al. Towards a governance framework for brain data. Neuroethics (2022) 15(2):20. doi: 10.1007/s12152-022-09498-8 

  13. Ienca M, Malgieri G. Mental data protection and the GDPR. J Law Biosci (2022) 9(1):lsac006. doi: 10.1093/jlb/lsac006 

  14. Fins JJ, Shulman KS. Neuroethics, covert consciousness, and disability rights: what happens when artificial intelligence meets cognitive motor dissociation? J Cogn Neurosci (2024) 36(8):1667–74. doi: 10.1162/jocn_a_02157 

  15. Lainjo B. The global social dynamics and inequalities of artificial intelligence. Int J Innov Sci Res Rev (2020) 5(8):4966–74. Available at: https://journalijisr.com/sites/default/files/issues-pdf/IJISRR-1306.pdf 

  16. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell (2019) 1(9):389–99. doi: 10.1038/s42256-019-0088-2  

  17. Mathews DJ, Balatbat CA, Dzau VJ. Governance of emerging technologies in health and medicine—creating a new framework. N Engl J Med (2022) 386(23):2239–42. doi: 10.1056/NEJMms2200907 

 