CF Debate

Why the CF debate matters

The rapid improvement in AI capabilities in recent years has renewed interest in the question of artificial consciousness: might there be 'something that it is like to be', say, a Large Language Model (LLM)? (See the FAQ for working definitions of consciousness and computational functionalism.)

Most experts doubt that current digital AI systems can experience sensations, or pleasure and pain, to a morally significant degree (see three surveys from 2020–2025). But unlikely does not mean impossible. Experts typically acknowledge wide uncertainty about consciousness in current systems, and many argue that artificial consciousness becomes increasingly plausible in near-term future systems.

Major reports on the possibility of artificial consciousness emphasise both the importance of computational functionalism (CF) as an assumption and the uncertainty surrounding it:

"We adopt computational functionalism as a working hypothesis primarily for pragmatic reasons. [...] if computational functionalism is false, there is no guarantee that computational features which are correlated with consciousness in humans will be good indicators of consciousness in AI [...] we have different levels of confidence in computational functionalism [...]" (p14)

"This is not to say that current AI systems are likely to be conscious—there is also the issue of whether they combine existing techniques in the right ways, and in any case, there is uncertainty about both computational functionalism and current theories—but it does suggest that conscious AI is not merely a remote possibility in the distant future." (p47)

"As in past work, we do not take computational functionalism to be clearly true, nor do we take any of these alternatives to have been refuted." (p27)

Ideas in the above reports have proved foundational for major AI companies investing in model welfare and for new organisations founded to research digital consciousness. There is significant academic debate on both sides, including a major paper by Anil Seth (at Sussex University) arguing against computational functionalism in 2025.

Some AI users increasingly take seriously the possibility that their chatbot companions may have an internal subjective experience, prompting prominent thinkers such as Jonathan Birch (at the LSE) and Mustafa Suleyman (at Microsoft AI) to highlight the potential risks of "conscious-seeming AI." We should expect these trends to continue as models become more capable and humanlike, increasing the urgency of addressing the core arguments that underlie various philosophical positions on this matter.

The debate also matters because there are credible alternatives to CF, which lead to a different set of theories of consciousness and a different set of implications for which artificial systems might be conscious.

Alternatives to CF

CF is a major position in discussions of consciousness informed by computational neuroscience, but serious challenges to this family of theories have been raised. Theories proposing alternative mechanisms for consciousness also garner evidential and philosophical support, with some claiming to overcome key challenges faced by CF (even if they typically face other challenges or gaps in their current formulations).

Many of these non-CF theories suggest that consciousness in artificial systems is possible, but only if certain key conditions are met. A few examples include:

  • Electromagnetic field theories positing that brain waves and associated fields are causally significant for consciousness, depending on their structural characteristics.
  • Quantum theories arguing that classical physics cannot fundamentally explain core aspects of conscious experiences (such as phenomenal binding).
  • Integrated Information Theory, which rejects the idea that a system could be conscious by merely mimicking relevant algorithms without implementing specific architectural properties.
  • Positions within biological naturalism, which might point towards features like embodiment, homeostasis, autopoiesis, or metabolism as plausibly necessary conditions for consciousness.

Similarly, efforts to develop a philosophically coherent account of consciousness have led to increased interest in views that may be incompatible with CF, such as panpsychism and idealism.

Diversifying research about artificial consciousness to a broad set of theories, not just those assuming CF, reduces the risk of both over-attribution and under-attribution: a robustly good goal to pursue.

So what?

CF is not the only theoretical route to artificial consciousness, but it is currently the primary route considered viable for complex consciousness arising from software-level behaviour in digital computers.

Most practical work today is willing to grant CF as an assumption, in order to explore its implications. But given the stakes involved in either under-attributing or over-attributing consciousness, we think CF itself deserves more attention.

As this website demonstrates, there are plausible arguments on both sides of the CF debate, and different people may weigh these arguments differently. By engaging closely, institutions can improve their in-house views about the probabilities involved and calibrate their responses accordingly. Close engagement can also help channel theoretical and experimental research towards the topics most likely to change our collective assessment of CF, work likely to be high-impact in the coming decade.

Certainty may be too much to hope for, at least in the short term. Foundational questions about consciousness stretch back millennia across intellectual and cultural traditions, notably in debates over mind-body dualism (cf. Democritus' atomistic views vs Plato's dualism in ancient Greece; the Charvaka materialist school vs the Advaita Vedanta nondualist framework in the Hindu tradition; Leibniz opposing Toland's materialistic views). Epistemic humility is further warranted given that every position on consciousness may lead to seemingly absurd conclusions, such as countries being conscious, consciousness being an illusion, or digital computers remaining non-conscious despite being functionally equivalent to biological brains.

There is, however, reason to remain optimistic that we can refine our probability assessments about consciousness: technological advances in neuroscience and quantum biology continue to offer hints as to what the physical correlates of consciousness might be. Deep engagement with the arguments on both sides of the CF debate is necessary for applying insights from this research to artificial systems, and for being explicit about the assumptions involved in moving from correlation to claims of causation or identity.