All Arguments
A comprehensive list of all arguments for and against the theory of computational functionalism (CF), presented in a single, readable view.
Church-Turing Thesis
Overview
All computable functions and physical phenomena can be simulated to arbitrary input/output accuracy on a Turing Machine or any equivalent computational system, including modern digital computers. Given the extremely broad class of such functions, it is highly likely that the functions of consciousness fall within them. If consciousness does not have a function, then it can be hard to understand why we are able to report on it and why consciousness would have been amenable to natural selection (see Natural Selection Argument).
Responses
This argument can be challenged both on its limited definition of computation and on its dismissal of non-computable phenomena. On the former, Turing equivalence ensures input-output equivalence of computations, but it is possible that other aspects of computation matter for consciousness. For instance, if the number of computational steps (or underpinning physical mechanisms) in an algorithm is relevant, then Turing Machine simulation does not guarantee capturing all the necessary details. The same goes for the number or nature of inputs and outputs to individual computational steps. Possible motivations can be drawn from theories of consciousness invoking ignition thresholds and phase transitions, pointing to information-processing density requirements that do not carry over under Turing equivalence. More generally, digital simulations of even fairly simple phenomena break down quite quickly, e.g. the exact evolution of a system with a few hundred strongly entangled particles (see the sketch below for the scale of the problem). In many cases, the only effective long-term simulation of a complex system is the system itself.
BUT: These arguments are only suggestive of cases where Turing equivalence is insufficient. Without fully specified candidate algorithms for conscious experience, they are hard to evaluate. There is also reasonable disagreement about where the burden of proof should lie.
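To make the scale of the simulation difficulty concrete, here is a minimal sketch (ours, not drawn from the source) of the memory required merely to store the state vector of n entangled qubits, which grows as 2^n complex amplitudes:

```python
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to store 2**n complex amplitudes (16 bytes = two float64s each)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 50, 300):
    print(n, format(state_vector_bytes(n), ".3e"), "bytes")
# 30 qubits  -> ~1.7e+10 bytes (tens of gigabytes: feasible)
# 50 qubits  -> ~1.8e+16 bytes (tens of petabytes: around the practical limit)
# 300 qubits -> ~3.3e+91 bytes (far more bytes than atoms in the observable universe)
```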
Not all phenomena are exactly computable, and such phenomena might be relevant for consciousness (or at least, it is premature to rule them out). For instance, the exact values of many real numbers cannot be computed exactly (only to arbitrary precision), and such numbers often have special significance in our theories of mathematics and physics (such as pi, phi, e). Likewise, it is not possible to generate exact digital equivalents of continuous or analogue physical phenomena (such as quantum mechanics, certain field structures, or space-time in general relativity). If those phenomena matter in their own right as physical structures (e.g. quantum entanglement for phenomenal binding, or sources of 'true' randomness in some relevant sense) or if their exact functional outputs matter for consciousness (e.g. the sensitivity of chaotic systems to exact initial conditions for the right criticality levels), then Turing Machine simulation would be inadequate.
BUT: Regarding exact non-computable numbers: If the base layer of physics is discrete (a question which remains open), then non-computable reals cannot be embedded in physical systems either. Even if physics is continuous, the embedding of any real number could not be "read off" in exact terms, given limits such as Bekenstein's bound, placing restrictions on the ways in which such a value could be incorporated into a function.
BUT: Regarding physical structures: There is an open debate in philosophy of science about whether physical structures are exhaustively described by their causal interactions (e.g. ontic structural realism). If they are, it becomes more plausible that all the same properties would inhere in simulations with emergent causal structures that map exactly onto a target causal structure.
It is possible that the same input/output function can be adequately replicated on a Turing Machine in terms of precision, but that there is some other aspect of the function that matters for consciousness. In other words, there might be two functions that broadly (or even precisely) have the same result, but with different causal structures for getting there or with different substrate properties, and that these matter for consciousness. For instance, certain analogue computing methods (such as non-linear optical Ising machines) might be more energy efficient, spatiotemporally concentrated, or amenable to our evolutionary environment than equivalent digital Turing Machine methods. Those functional differences might be directly necessary for consciousness or the physical substrates that enable them might happen to be the only ones capable of consciousness.
More generally, computations do not give rise to consciousness until they are physically implemented. As soon as they are physically implemented, various physical phenomena and constraints become relevant, such as features of the implementation substrate or the thermodynamics of information processing/storage/correction. It is possible that consciousness depends not only on a computable function but also on some aspect of its thermodynamics or physical implementation.
Further reading
- Stanford Encyclopedia of Philosophy (2023). The Church-Turing Thesis
Fading Qualia
Overview
Imagine gradually replacing a person's neurons with functionally identical silicon chips. The person remains behaviourally and cognitively identical throughout. Suppose that with each replacement, the person's qualia start to slowly fade (become weaker, dimmer, or disappear entirely). It seems absurd that someone could lose conscious experience gradually without noticing or reporting it, since their behaviour and self-reports stay constant. Therefore, functional organisation is what matters for consciousness.
Responses
Even if we accept this argument, it is possible that the necessary functions have features that do not carry over under Turing equivalence (i.e. might not work on digital computers) or even involve non-computable functions (see also Church-Turing Thesis and Natural Selection Argument). There are classes of theories of consciousness that draw on functions which are non-computable but are nonetheless still functions. Moreover, in order to maintain consciousness, a replacement that is truly functionally identical to a biological neuron might need to differ from a silicon chip in ways relevant to the version of CF in scope, e.g. it might need to reproduce the ephaptic EM field effects that have been shown to be relevant in human brain function.
Further reading
- Chalmers D (1995). Absent Qualia, Fading Qualia, Dancing Qualia
Dancing Qualia
Overview
Imagine a system where we can rapidly toggle back and forth between neurons and silicon chips that implement the same functional roles. If CF is false and the neuronal substrate does matter for consciousness, then the person's experiences would rapidly flip (or "dance") between different states as the medium is swapped (since the silicon chip does not support qualia, or supports different qualia). However, there would be no change in behaviour, introspective report, or cognitive access (because the causal structure of the neural network would remain unchanged) – which seems absurd.
Responses
Same as Fading Qualia.
Further reading
- Chalmers D (1995). Absent Qualia, Fading Qualia, Dancing Qualia
Cognitive Science & AI Success Argument
Overview
Computational models of cognition (e.g., language processing, problem solving, reasoning, planning) have been highly successful, as have mechanistic AI implementations. These successes suggest that computational descriptions can fully capture mental processes. If cognitive capacities are computationally explainable, perhaps phenomenal consciousness is too.
Responses
Consciousness and cognition are plausibly orthogonal as physical phenomena, even if they are seemingly closely related in the human brain and evolutionary history.
Further reading
- Doerig et al. (2023). The Neuroconnectionist Research Programme
Multiple Realisability Argument
Overview
Pain can plausibly be realised in very different types of brains: people, cats, octopodes, etc. What matters is not the physical medium but the pattern of functional organisation. Since computation abstracts away from physical details and focuses on functional structure, CF naturally accounts for multiple realisability.
Responses
Each physical medium may in fact have key physical features in common that support the experience of pain, such that pattern + substrate is necessary, rather than pattern alone.
Further reading
- Stanford Encyclopedia of Philosophy (2013). Multiple Realizability
Natural Selection Argument
Overview
Non-functionalist accounts of consciousness seem to draw on something mysterious that, by definition, has no actual function. Why would natural selection have latched onto some phenomenon that has no function and therefore no ability to improve our fitness?
Responses
Physical systems have functions in a given setting that can be represented by computation but are not necessarily exhausted by computation. Such functions would be accessible to natural selection, and it is possible that some of them are relevant for consciousness. Examples might include oscillation/resonance in a physical system, quantum entanglement, or electromagnetic field behaviour. The physical function might have relevance in a given context which is not captured in a digital simulation of that function. See also Church-Turing Thesis.
Physical phenomena may also be able to implement activities that would be non-computable if modelled using a particular computational technique. For instance, it might be impossible to draw on the exact value of pi or exactly solve the evolution of a complex system or large-scale quantum entanglement using computational techniques in digital architectures, but physical systems embodying that behaviour naturally encode the exact calculation (even though we may be limited in our ability to read out values from the implied calculation).
There are also diverse theories within panpsychism that would identify consciousness in evolved systems without requiring it to have a specific function.
Further reading
- McFadden J (2023). Consciousness: Matter or EMF?
Anti-Mystery / Pro-Parsimony Debate
Overview
References to qualia that reject CF seem to make qualia some mysterious, first-person phenomenon that is forever beyond the reach of science – a philosophical mystery that we can never make progress on. Occam's Razor and epistemological parsimony suggest that, if a simple physical functionalist explanation seems to suffice, it should be preferred over a mysterious one.
Responses
It is possible to discuss qualia in rigorous ways that do not retreat to 'mystery' or 'ineffability', e.g. affirmative ostension or contrastive ostension definitions. Whether CF is parsimonious depends on whether it truly can account for conscious experience.
See Natural Selection argument.
Further reading
- Chalmers D (2020). Debunking Arguments for Illusionism about Consciousness
Introspection of Functions
Overview
When we examine our own mental lives, we mostly have access to functional aspects: patterns of behaviour, introspective reports, cognitive judgments, etc. There is no direct first-person access to "non-functional" qualia. Therefore, positing non-functional qualia may be a conceptual illusion – the functional description suffices. Only functional states enter into causal relations that drive behavior and cognition.
Responses
Further reading
- Dennett D (1993). Consciousness Explained
Ontic Structural Realism
Overview
Our observations and science only ever identify relations between entities. Yoneda's Lemma suggests that the structure of relationships between one mathematical object and all others is adequate to fully explain that mathematical object, with no intrinsic nature to it. OSR takes these as evidence that there is no intrinsic nature to anything in our universe, including our conscious experiences. Everything is exhaustively explained by its structure of relationships with other entities.
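For reference, a standard statement of the lemma being invoked (notation ours, not the source's): for a locally small category C, an object A, and a functor F from C to Set,

\[ \mathrm{Nat}\big(\mathrm{Hom}_{\mathcal{C}}(A,-),\,F\big) \;\cong\; F(A), \]

and in particular the assignment A ↦ Hom_C(A,−) is fully faithful, so naturally isomorphic hom-functors, Hom_C(A,−) ≅ Hom_C(B,−), entail A ≅ B: an object is determined (up to isomorphism) by its web of relationships to all other objects.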
Responses
This same argument can be interpreted as pointing to a gap – the need for some physical phenomenon that 'breathes fire' into the equations of physics, which various accounts suggest is consciousness (Goff, Russell, Hoffman, etc.).
The relationships identified might still be non-computable in nature or might not carry over under Turing equivalence. See also Church-Turing Thesis.
According to the Standard Model, there seem to exist physical quantities whose value is not contingent on anything else, such as a field's spin or the vacuum expectation value of the Higgs field.
BUT: This might be challenged by future refinements of the Standard Model.
Further reading
- Kleiner J (2024). Towards a Structural Turn in Consciousness Science
- Stanford Encyclopedia of Philosophy (2023). Structural Realism
Informational Ontologies
Overview
Information is the true basis for reality, so it is more likely that consciousness is grounded in informational manipulations than in physical phenomena.
Responses
An information ontology is no more persuasive or parsimonious than a physical ontology, the latter being the basis for modern physics (e.g. quantum field theory or particle interpretations of quantum mechanics). Information (including Shannon information) is better understood as a function of observer knowledge, i.e. an epistemological issue rather than an ontological one. Likewise, Landauer's limit should be seen as a thermodynamic insight about physical state transitions, rather than some ontological grounding for information outside of physical matter.
Further reading
- Chalmers D (1996). The Conscious Mind (Chapter 8: Consciousness and Information)
- Wheeler JA (1990). Information, Physics, Quantum: The Search for Links
Phenomenal Binding Problem
Overview
Conscious experiences seem to contain multiple pieces of integrated information at once (e.g. different colours, shapes, sensations), at least sometimes. Computations build up complex informational constructs (e.g. a matrix multiplication) from simple informational constructs (e.g. binary bits). But how can this exercise generate complex conscious contents with fundamentally integrated information within a single conscious entity, given that every step of the process has access only to its own immediate inputs and outputs? CF algorithms are designed so that each 'step' of the algorithm can be considered in isolation from the whole algorithm, analysing only its immediate inputs and outputs. Likewise, any implementation of an algorithm in a digital computer reduces it to micro-processing steps which only have informational access to their individual inputs and outputs. The view of the 'algorithm as a whole' only exists in the conscious mind of an external observer.
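As a minimal illustration of the locality claim (a sketch of ours, not from the source), here is a matrix product built entirely from local multiply-accumulate steps; each step sees only its own operands, and 'the matrix product' exists only in how an observer groups the steps:

```python
def mac(acc: float, a: float, b: float) -> float:
    """One local step: multiply two scalars and add the result to an accumulator."""
    return acc + a * b

def matmul_2x2(A, B):
    """2x2 matrix product assembled purely from local mac() steps."""
    C = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                # Each call 'knows' only A[i][k], B[k][j] and the running total.
                C[i][j] = mac(C[i][j], A[i][k], B[k][j])
    return C

print(matmul_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]
```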
Responses
A complex entity emerges from the completion of the target CF algorithm.
BUT: If so, such an entity would make no causal contribution to the operation of the system, which is designed to work entirely at the individual algorithm step or logic gate level. Such consciousness would be epiphenomenal within the usual CF framework of local causality and algorithm definition.
If this consequence is accepted, CF can get past the phenomenal binding problem, but it should then comment on whether human consciousness is also epiphenomenal and, if not, acknowledge this key distinction between CF-style and human-style consciousness. If human-style consciousness is epiphenomenal, we need some explanation of why natural selection appears to have promoted consciousness and made it a major part of how we function as competitive, living systems (i.e. this suggests consciousness has a fitness-relevant function in our evolutionary environment, which is impossible under epiphenomenalism). If the consequence is not accepted, some new definition of causality and algorithm needs to be introduced that explicitly moves beyond the CF framework.
Further reading
- Bayne T (2011). The Unity of Consciousness
- Hardcastle VG (2017). The Binding Problem
- Chalmers D (2016). The Combination Problem for Panpsychism
- Gómez-Emilsson A & Percy C (2023). Don't Forget the Boundary Problem! How EM Field Topology Can Address the Overlooked Cousin to the Binding Problem for Consciousness
- Percy C & Gómez-Emilsson A (2025). Integrated Information Theory and the Phenomenal Binding Problem: Challenges and Solutions in a Dynamic Framework
Staccato Consciousness Problem
Overview
If we ask when a moment of experience occurs during the target CF algorithm, the natural answer is at the conclusion of the algorithm's final step. It cannot be midway, as the algorithm might not complete and so key parts of the conditions for consciousness might not be met. The next moment of consciousness must, in turn, wait for the conclusion of a second algorithm, e.g. a second loop/iteration of the target algorithm. In between, there would be some period of time without conscious experience present. As a result, when the next moment comes along, perhaps this is experienced by an entirely separate self, without any of the continuity between temporal moments that characterises complex consciousness in the human sense.
Responses
Introduce some minimum temporal thickness to the duration of experience produced by the CF algorithm, which connects the moments together into a continuous experience.
BUT: How can this be motivated using contemporary physics alone? Such thickness would also disconnect the conscious experience from the causal behaviour of the underlying computation, reinforcing the epiphenomenality of the phenomenal binding problem.
Accept 'staccato consciousness' – perhaps that is just the nature of CF.
BUT: CF-style consciousness would therefore be unable to explain the key continuity feature of human style consciousness, arguably important for our selfhood and ordinary intuitions of moral patienthood.
Require a certain speed of algorithmic looping before successive moments 'blur' together, assuming some phenomenal 'flicker continuity' threshold for conscious entities.
BUT: How to motivate this threshold without begging the question of a conscious observer? More importantly, such a requirement would already be a new type of theory beyond CF alone. CF algorithms are spatiotemporally neutral: it is possible to pause an algorithm for a thousand years in the middle, before continuing it, with no change to the resulting conscious experience compared to ordinary algorithmic processing.
Further reading
- No direct reference currently known. Ideas discussed at Ernst Mach Workshop in Prague (June 2025).
Chinese Room Argument
Overview
A sufficiently large look-up table could replace any interaction with the target algorithm, while remaining input/output identical. However, it is a priori implausible that a look-up table, with its simple mechanics, could be conscious, no matter how vast it is.
The canonical example addresses 'understanding' rather than 'consciousness': A non-Chinese speaker inside a closed room uses a look-up table to identify fluent, convincing responses to Chinese sentences provided by an external Chinese speaker. The external person might think that the person inside 'understands' Chinese, but they do not. A common response is that understanding exists not in the look-up table or the operator's mind but somehow in the 'room as a system', e.g. the look-up table + mechanical sensor/operator. However, this response does not help the 'consciousness' critique, because all those elements remain as simple as the look-up table. Note that the original framing can be read as an argument against the Turing Test as a behavioural test of consciousness rather than an argument directly against CF, hence the change in emphasis in this presentation.
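A toy sketch (ours, with a deliberately tiny, hypothetical table) of the purely mechanical character of the room: no symbol is ever interpreted, only matched and copied:

```python
# Stands in for Searle's vast rulebook; the entries are illustrative only.
LOOKUP_TABLE = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def room_operator(chinese_input: str) -> str:
    """Purely mechanical: match the input string and copy out the stored reply."""
    return LOOKUP_TABLE.get(chinese_input, "对不起，我不明白。")

print(room_operator("你好吗？"))  # a fluent-looking reply with no 'understanding' inside
```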
Responses
CF could be restricted to require a particular causal structure for the algorithm, rather than allowing any variants of the algorithm that merely have an identical input/output mapping.
Deny the intuition that large look-up tables would be 'too simple' to be conscious.
BUT: Need to motivate a threshold (even if gradual in nature) by which small non-conscious look-up tables would transition into large conscious look-up tables.
To accommodate all possible (infinite) language responses, the 'look-up table' must in fact be a different system, likely applying compression, generation, extrapolation, and other functions. Such advanced functions might plausibly compound into generating consciousness even if a look-up table would not.
BUT: This additional function needs motivating, because relatively simple combination rules may already be sufficient alongside a very large table to generate adequate linguistic complexity for any realistic conversation.
The computation did in fact happen and did generate conscious experiences, but only when the look-up table was created (i.e. a significantly more complex act than reading off the look-up table).
BUT: That means that there was only one conscious experience, contradicting the intuition that conscious experiences should be taking place during the conversations in question. Now we can just re-use the look-up table as often as we like but without generating any new moments of experience.
Further reading
- Searle JR (1980). Minds, Brains, and Programs
- Stanford Encyclopedia of Philosophy (2024). The Chinese Room Argument
US Economy Argument
Overview
Existing materialist theories of consciousness identify various general principles for consciousness that can also be found in spatially distributed group entities, such as the US Economy today or alternatives that we might imagine in the future. Such general principles include global broadcast, higher order representation, predictive processing, self-modelling, world-modelling, self-preservation, energy generation, etc.
Responses
Reject existing CF principles derived from materialist theories of consciousness and instead define a sufficiently complex or nuanced algorithm that would differentiate genuinely conscious beings from unconscious spatially distributed group entities.
BUT: Need to motivate the specific algorithm that would never apply to any spatially distributed group entity, which then needs to be assessed on its own merits.
Accept that spatially distributed group entities can be conscious.
BUT: Need to either accept that consciousness of this style is epiphenomenal (see phenomenal binding problem) or identify some specific causality that cannot be explained by the interaction of its parts, where that causality is also relevant for the nature of consciousness in the system. The latter would typically require adopting a different view of causality/existence to the usual CF approach (in which logic gates/algorithms can be fully reduced to their parts), e.g. a mereology which creates (ontological) emergence, perhaps by removing intrinsic existence from lower-level entities, as in Integrated Information Theory.
Further reading
- Schwitzgebel E (2015). If Materialism Is True, the United States Is Probably Conscious
Leibniz's Mill / Chinese Nation
Overview
If you expanded a computer to the size of a galaxy and walked among its parts, you would see each part interacting locally according to its mechanistic rules. Where in this system is the conscious entity?
Responses
The system as a whole is conscious.
BUT: Need to either accept that consciousness of this style is epiphenomenal (see phenomenal binding problem) or identify some specific causality that cannot be explained by the interaction of its parts, where that causality is also relevant for the nature of consciousness in the system. The latter would typically require adopting a different view of causality/existence to the usual CF approach (in which logic gates/algorithms can be fully reduced to their parts), e.g. a mereology which creates (ontological) emergence, perhaps by removing intrinsic existence from lower-level entities, as in Integrated Information Theory.
Further reading
- Duncan S (2012). Leibniz's Mill Arguments Against Materialism
- Block N (1981). Troubles with Functionalism
Problem of Many Minds
Overview
As a target CF algorithm executes on a digital computer, it does so on physical systems with 'large' components that span multiple particles (or, at least, components with spatial extension). If consciousness occurs when the algorithm completes its path through the system ('supervenes' on the system's physical behaviour), then the same algorithm can be traced through multiple 'narrow' physical pathways, each occurring on constituent particles or adjacent space-time points within each component. Each pathway meets the same criteria as the target CF algorithm does for the system as a whole, so each should individually give rise to an additional 'mind' that holds experience.
The implication is that any one conscious mind would be accompanied by many other identical ones, raising issues such as which one is responsible ("has causal agency") over the computation.
Responses
Accept multiple minds.
BUT: Each mind has the same feeling of conscious agency, yet the system is over-determined. Either one mind must be identified with the causality of the system itself, all minds must be seen as epiphenomenal, or a coherent notion of causal redundancy needs to be motivated within the CF framework.
Identify some rule that means only certain aggregate physical structures should be identified as 'units'.
BUT: This rule needs motivating so it can be assessed on its own merits. For instance, a counterfactual definition could be applied, but issues around counterfactual causality need addressing (e.g. counterfactual causal rules can be hard to define in cases such as two stones striking a window at the same time or separated by a fraction of a second).
Further reading
- Monton B & Goldberg S (2006). The problem of the many minds. Minds & Machines
- Roelofs L (2024). No Such Thing as Too Many Minds
Slicing Problem
Overview
Systems of pipes and flows can be used to construct Turing Machine equivalent architectures, so that any target CF algorithm can be implemented on a water computer. A physical gate can be built as a switch-triggered slicing mechanism to causally separate water flows throughout that architecture, such that a physically trivial act of moving a physical gate (trivial relative to the overall volume of information/matter in the system) can be used to multiply the number of conscious entities in that system.
Responses
Reject the intuition that the physical gate (switch) is in fact trivial.
Accept that one can multiply the number of conscious entities by triggering the switch, either because (a) the extra spatial dimension in the computer offers redundancy that we can reasonably harness or (b) because no laws of physics get violated by multiplying consciousness in such a way.
BUT: Regarding (a), spatial dimensions are not part of the ontological picture of CF, so the on/off systems must be equivalent under CF (i.e. there's no redundancy to exploit).
Regarding (b), one must explain how e.g. energy would be conserved in a system with twice the amount of consciousness (when the switch is triggered).
Further reading
- Gómez-Emilsson A & Percy C (2022). The "Slicing Problem" for Computational Theories of Consciousness
Individuation Problem
Overview
Implementing an algorithm on part of our causally-interconnected physical environment requires three choices that are typically considered arbitrary, i.e. no single option is innately privileged without invoking an external observer perspective:
- How to delineate one set of local causal relationships from the environment.
- Within this delineation, which inputs and outputs to designate for attention.
- What meaning to assign to particular states of the designated inputs and outputs.
While the third can be addressed by CF algorithms that focus on causal/syntactic structure rather than meaning/reference, the other two remain challenging. If there is always more than one option available for which algorithm is taking place in a sufficiently complex system, it becomes an arbitrary observer choice whether to deem the system to have a CF algorithm generating consciousness or some alternative. In this sense, algorithms do not 'exist' outside the eye of a beholder. However, my consciousness should not depend on how an observer models me as a system.
Responses
Invoke specific methods for defining computation, such as robust mapping in Anderson & Piccinini (2024).
BUT: Need to address the separate debate around computation definitions.
Allow all possible identifiable computations to be taking place in a given system.
BUT: In any complex system, this results in a combinatorial explosion of possible computations. With nested and overlapping computations in the same system all being equally valid, they cannot all be assigned 'credit' for the causal behaviour of the system. Without reason to choose one, it seems all must be identified as epiphenomenal, with causality identified instead at the physical mechanism level rather than the algorithm that is defined in the eye of the beholder.
Define a CF algorithm such that this situation never occurs in reality.
BUT: Need to define the CF algorithm in order to explore this possibility.
Further reading
- Percy C (2024). Are Algorithms Always Arbitrary? Three Types of Arbitrariness and Ways to Overcome the Computationalist's Trilemma
- Anderson N & Piccinini G (2024). The Physical Signature of Computation: A Robust Mapping Account
Dual Experience Ambiguity
Overview
Because many types of conscious experience are possible, there cannot be a single CF algorithm. At a minimum, there must be one core algorithm that can integrate multiple contents, with minor variations of the whole algorithm generating each given experience. As such, it is conceivable that adding one extra algorithmic step at the end would create a new experience. Taken as a new whole, we now have two moments of experience that are distinct but overlapping in their use of source material (since each requires the full preceding algorithm to qualify). How does the algorithm know which boundary to draw? If it draws both, which one is you?
This dual experience or dual self ambiguity is worse where we imagine it arising at the same terminal step of the CF algorithm, depending on whether the boundary is drawn to include one additional midway input or one additional prior step. This situation is more bizarre, because both distinct 'moments of experience' now come into being at the same time, on the same substrate, following the same algorithmic step. If we deleted the previous steps after completing them, there is nothing in the present that tells you about their distinct nature, yet there are nonetheless two separate, simultaneous, and overlapping experiences due to their overlapping but slightly distinct past lightcones.
Responses
Bite the bullet, perhaps even pointing to evidence in psychology of multiple selves presumed to overlap to some extent in the same brain substrate – tulpas, split personalities, internal family systems, etc.
BUT: This route leads to epiphenomenalism, since we have no disciplined way to attribute causality over multiple systems sharing the same space.
Prevent the possibility by fully specifying CF algorithms in which this issue does not occur.
BUT: Need to define the CF algorithm in order to explore this possibility.
Further reading
- No direct reference currently known. Ideas discussed at Ernst Mach Workshop in Prague (June 2025).
Lightcone Reification Problem
Overview
Thinking about the last step of a target CF algorithm: if the preceding steps had not happened but the same output were provided directly into the last step, then the last step would still proceed the same as with the full CF algorithm in place. The last step and any future calculations using that output would not know any different. In order to permit conscious experience to occur upon that final step concluding only when the full algorithm actually occurred, we need to incorporate the past lightcone of the last step, including information it does not have direct knowledge of.
It is challenging to develop such an account that is consistent with theories of physics. For instance, the conscious entity exists not in the present moment, but distributed over the past. It is somehow constituted 'in the past' prior to the final necessary step of the algorithm in the present that caused it to come into being (since this is complex computational emergence). Moreover, the present moment of experience relies on its past structure in a non-causal manner (because algorithms in CF can ignore the causal specifics of non-proximate steps).
Responses
Accept epiphenomenalism (see phenomenal binding problem).
Introduce new physics that allows for the reification of worldlines in the sense necessary to address this problem.
Further reading
- No direct reference currently known. Ideas discussed at Ernst Mach Workshop in Prague (June 2025).
Simulation Equivalence Argument
Overview
Simulating the weather on a computer does not make anything wet. Simulating consciousness on a computer should likewise not be assumed to generate consciousness.
Responses
Argue that consciousness is the kind of phenomenon that is preserved under simulation (e.g. a simulation of a multiplication or a story is still the same multiplication or story).
Argue that entities within the simulation would experience the simulated weather, assuming they are capable of conscious experiences.
BUT: This assumes that simulated entities within the computation could be conscious. But whether such entities can exist is precisely what CF claims and what we're trying to evaluate. The response therefore begs the question. (The original simulation argument similarly begs the question against CF.) While this line of reasoning helps clarify what each position entails, it doesn't provide independent evidence for or against CF.
Further reading
- Seth A (2021). Being You: A New Science of Consciousness
Free Will Argument
Overview
Computers are wholly determined, so there is no room for free will, which feels like an essential component of human conscious experience.
Responses
Introduce a notion of compatibilist free will in which the human type of free will is similar to what algorithms would have under CF (e.g. humans are also determined, but because we are – as an entity – the algorithm unfolding, it feels like we are causing it, even though there is no other way it could be; or computational irreducibility means we cannot predict our own behaviour perfectly, so our minds model it as if we were causing it).
Note that it is possible for consciousness to be non-epiphenomenal but for there to be no free will outside the compatibilist sense. Consciousness has some function, which natural selection operates on, but this is not 'chosen' any more than any physical object 'chooses' to obey the laws of physics.
Further reading
- Stanford Encyclopedia of Philosophy (2022). Free Will
- Gallagher S (2006). Where's the Action? Epiphenomenalism and the Problem of Free Will
Abstract Objects Problem / Intangibility of Thought
Overview
It seems that we can conceive of and discuss abstract objects (such as 'redness', 'squareness', 'liberty', 'anger') divorced from any specific physical instantiations of those objects. However, in a world where conscious experiences are generated purely by physical instantiations (such as computations running on a physical computer), how can there be any phenomena that are separate from physical instantiations (phenomenal explanandum)? How can we operate with abstract objects so securely, e.g. being confident that we mean the same thing when invoking the number '5' (utility explanandum)?
Responses
Abstract objects can be generated within neural networks (artificial or biological) as informational constructs. They feel 'non-physical' and 'non-tangible' due to the structure of the network's sensory and cognitive systems, not because they actually are 'non-physical' (in practice, they exist in the brain or in the computer).
BUT: We need to forgo the certainty that comes from everyone operating with the same abstract objects by virtue of some non-physical marker to define them, e.g. the 'utility explanandum' is something we can only approximate.
This argument attacks not just CF but also other non-computationalist physicalist views.
BUT: Non-computationalist physicalist views may have specific tools to tackle this question unavailable to CF (e.g. tools based on substrate considerations).
Further reading
- Stanford Encyclopedia of Philosophy (2021). Abstract Objects
- Percy C (2024). Your Red Isn't My Red! Connectionist Structuralism and the Puzzle of Abstract Objects
Intentionality Problem
Overview
Computers simply manipulate symbols; they do not know what those symbols actually mean. Our conscious experience, by contrast, imbues symbols with grounded meaning.
Responses
Symbolic meaning can be grounded in repeated access to sufficiently consistent input data that embeds those symbols in a web of consistent meanings. Potentially this requires real-world interactions, but this could be incorporated by placing the algorithm in a robot and is in any case an imperfect distinction (input data that we provide in bits to an algorithm is not meaningfully distinct from sensory data from our external environment to the brain).
Further reading
- Searle JR (1980). Minds, Brains, and Programs
Possibility of Analogue Computation
Overview
Consciousness might require functions that are not replicated exactly in a digital simulation, even if they can be simulated to arbitrary input/output accuracy, e.g. numbers whose exact value cannot be calculated digitally (pi, e, phi, etc.) and non-computable mathematics (e.g. certain non-constructivist proofs).
In particular, we might look to certain physical phenomena which are invoked in contemporary theories of consciousness, e.g. continuous physical phenomena (e.g. certain interpretations of physical fields, such as electromagnetic fields or space-time in general relativity) or quantum phenomena non-simulable in practice (e.g. large entangled systems may require prohibitively long simulation times), etc.
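To illustrate the 'arbitrary accuracy but never exactness' point, here is a sketch of ours (using Machin's formula, not anything from the source): a digital computation can output as many correct digits of pi as requested, but only ever a finite prefix of them:

```python
from decimal import Decimal, getcontext

def arctan_recip(x: int, digits: int) -> Decimal:
    """Taylor series for arctan(1/x), accurate to roughly `digits` decimal places."""
    getcontext().prec = digits + 10          # extra guard digits
    y = Decimal(1) / x
    power, total, n = y, y, 1
    while abs(power) > Decimal(10) ** -(digits + 5):
        power *= -y * y
        n += 2
        total += power / n
    return total

def pi_to(digits: int) -> Decimal:
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    return 16 * arctan_recip(5, digits) - 4 * arctan_recip(239, digits)

print(pi_to(50))  # 3.14159265358979323846264338327950288419716939937510...
```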
Responses
Adjust CF to require analogue computation.
Reject the claim that human capabilities are beyond Turing Machine equivalence. For instance, human creativity is at risk of adequate duplication by current-generation AI (at least relative to low-percentile human performers, who are presumably conscious in relevantly similar ways to the most brilliant). Interpretations also exist in which non-computable proofs by human mathematicians are not evidence that we infallibly compute non-computable functions, but rather that we deploy specific forms of representational reasoning and heuristic inference that could also be implemented on a machine subject to similar limitations.
Reject the claim that any analogue features are necessary for consciousness (or place the burden of proof on CF critics to demonstrate the analogue claim).
Further reading
Embodiment Requirements
Overview
Consciousness might require various embodied, embedded, enactive and extended aspects of brain-body-world interactions.
Responses
Extend the CF algorithm to require such interactions, e.g. by being placed in a robot.
Reject the claim that any embodiment features are necessary for consciousness (or place the burden of proof on CF critics to demonstrate the claim).
Further reading
- Varela FJ, Thompson E & Rosch E (1991). The Embodied Mind: Cognitive Science and Human Experience
- Stanford Encyclopedia of Philosophy (2021). Embodied Cognition
Knowledge Arguments
Overview
Mary is a brilliant scientist who knows everything about colour vision: all the physical facts, all the neurocomputational facts, everything functionalism (and computationalism) could say about vision. But she has lived in a black-and-white room her whole life. One day she sees red for the first time. It seems she learns something new: what it's like to see red. So there are facts about conscious experience (qualia) that are not captured by physical, functional, or computational descriptions.
Responses
From the inside of an algorithm being implemented, experiences are generated that can only be experienced by the conscious entity supervening on the algorithm (by assumption of CF). But this does not entail that knowledge of the algorithm from the outside necessarily generates the same experience, even if you might reasonably infer the nature of the experience. (Effectively a CF version of the acquaintance response or indexical knowledge response.)
Mary doesn't gain propositional knowledge but gains a new ability (to recognise or imagine red).
Knowledge arguments challenge not only CF but also any physicalist theory of consciousness. If we are unwilling to support dualism out of principle, then an optimistic agnostic response could say: "we're not sure what answer addresses knowledge arguments, but one must exist, and that answer should be presumed to support CF (as well as other physicalist positions) until shown otherwise."
BUT: This is a weak approach compared to committing to an answer and exploring its feasibility. Moreover, other physicalist theories might have different tools or more plausible claims, so this response is better seen as the start of a discussion than its end.
Further reading
- Jackson F (1986). What Mary Didn't Know
Fractional / Borderline Qualia
Overview
Given how computers use error-correcting codes, and how any physical instantiation of a computation carries 'bit flip' risks, any algorithm can be constructed so that it has only a probabilistic chance of correct execution. Under CF, such a system would have fractional consciousness – for example, '0.75' of a conscious entity or '0.8' of an experience of 'fear' (as distinct from being simply slightly less afraid than a full '1.0' experience). Similarly, if consciousness emerges at some point in a spectrum of computational complexity, there may be borderline cases where consciousness is neither clearly present nor clearly absent.
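As a toy illustration of how a 'fractional' execution probability can arise (ours, with made-up numbers): if each of N steps completes correctly with independent probability p, the whole run completes correctly with probability p^N:

```python
def prob_correct_run(p_per_step: float, n_steps: int) -> float:
    """Probability that every one of n_steps independent steps executes correctly."""
    return p_per_step ** n_steps

# e.g. a one-in-a-million error chance per step, over 300,000 steps:
print(prob_correct_run(0.999999, 300_000))  # ~0.74: a '0.74 chance' of the full execution
```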
Responses
Accept that fractional conscious entities exist in the diverse state space of minds, even if humans do not experience it (or argue humans can experience it sometimes, despite it not being our standard awake experience and despite it contradicting certain notions of transitivity).
Reject the description of computation that gives rise to fractional qualia.
Point to precise phase transitions in the instantiation of an algorithm to prevent the possibility of fractional/borderline consciousness.
Further reading
- Bostrom N (2006). Quantity of Experience: Brain-Duplication and Degrees of Consciousness
- Gómez-Emilsson A & Percy C (2022). The "Slicing Problem" for Computational Theories of Consciousness
Absent Qualia / Zombie Argument
Overview
It's conceivable that two systems are functionally identical but one is conscious and the other isn't, e.g. we can imagine an identical replica of a given person that behaves the same but has no 'inner experience' corresponding to the kind of conscious self we normally experience (a 'p-zombie'). If so, there cannot be a functional grounding to consciousness and computational functionalism is false.
The argument is often applied via modal logic. An approximation is as follows: if something can be conceived, it must be possible (even if not actual). As soon as zombies are possible (and might exist in some other universe even if not our own), that is sufficient metaphysical reason to require a theory of consciousness to be robust to them (since an explanation for consciousness should hold in all possible universes).
Responses
If we restrict our interest to this universe, conceivability arguments might be invalid (they reflect our current ignorance about how this universe works, not some deeper truth). For instance, p-zombies might be conceivable in some other universe with different physical laws but not actually true in this universe, where physical laws define CF as the condition for consciousness. Where CF is being implemented, there will always be a relevant conscious experience.
This argument attacks not just CF, but all functionalist theories of consciousness. If someone is already committed to functionalist approaches, then an optimistic agnostic response could say: "we're not sure what answer addresses zombie/conceivability arguments, but one must exist, and that answer should be presumed to support CF (as well as other functionalist positions) until shown otherwise."
BUT: This is a weak approach compared to committing to an answer and exploring its feasibility. Moreover, other physicalist theories might have different tools or more plausible claims, so this response is better seen as the start of a discussion than its end.
Further reading
- Stanford Encyclopedia of Philosophy (2023). Zombies
Inverted Spectrum Arguments
Overview
Imagine two people who are functionally identical, behaving the same, and processing colors identically (functionally), but whose qualia are inverted (e.g., your red is my green). There's a difference in experience without a difference in functional role, which means qualia aren't fully determined by computational states.
Responses
This situation might be conceivable in some other universe with different physical laws but not actually true in this universe, where physical laws define CF as the condition for consciousness. Where CF is being implemented, there will always be a relevant conscious experience of exactly that type. In other words, the pure philosophical trope of inverted spectra can never happen in our universe.
Further reading
- Stanford Encyclopedia of Philosophy (2025). Inverted Qualia
Unfolding Problem
Overview
Any recurrent neural network in a computer can be translated into a feedforward-only neural network with identical input/output to the original (albeit not necessarily identical in terms of speed/energy efficiency – but these are not part of usual CF criteria). It seems implausible that a feedforward-only network could be conscious, given evidence of recurrency in the human brain and our intuitions of self-reference.
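A minimal sketch of the unfolding move (ours, with made-up weights and a fixed input length): a recurrent update applied step by step can be rewritten as a chain of distinct feedforward layers, one per time step, with identical input/output behaviour:

```python
import math

W_REC, W_IN = 0.5, 1.2   # hypothetical shared weights

def recurrent(inputs):
    """Recurrent network: the same update is reapplied to its own previous output."""
    h = 0.0
    for x in inputs:
        h = math.tanh(W_REC * h + W_IN * x)
    return h

def unfolded(inputs):
    """Feedforward 'unrolling': one distinct layer per time step, weights copied."""
    layers = [lambda h, x: math.tanh(W_REC * h + W_IN * x) for _ in inputs]
    h = 0.0
    for layer, x in zip(layers, inputs):
        h = layer(h, x)
    return h

xs = [0.3, -0.7, 1.1]
assert recurrent(xs) == unfolded(xs)   # identical input/output on fixed-length inputs
```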
Responses
Define CF such that internal causal structures matter, not just input/output mapping.
Further reading
Pen & Paper Argument
Overview
The algorithm that is conscious in a computer can, by CF assumption, be replicated in all relevant aspects of its function by writing it out by hand with pen and paper, e.g. conducting the matrix multiplications by hand over as many years as it takes. Specifically, we could use this method to instantiate the feeling of "being you right now in this second". Even if we wrote it down a thousand years from now and it took a thousand years to write it, a moment of experience identical to the one you are having now would materialise – and it would map to some physical spatiotemporal structure somewhere in this system of paper calculations. No matter how long the calculation took to write, the experienced moment would be no longer than the second of your current experience, i.e. there would likely be a temporal disconnect between the algorithm duration and the experience generated. Closely related to the Chinese Room, US Economy, and Leibniz's Mill arguments.
Responses
The bullet can be bitten simply by rejecting the intuition that such a paper system being conscious is 'weird' or by rejecting the claim that 'weirdness' of intuitions is a guide to truthfulness (pointing perhaps to weird intuitions in modern physics, such as quantum mechanics and general relativity, or the diverse ways proposed to resolve certain logical paradoxes).
BUT: Such an approach would need to be applied consistently to alternative accounts of consciousness as well. What makes one intuition about 'weird implications' a credible grounds for rejecting a theory (e.g. the promiscuity of panpsychism) but not credible for another?
Additional constraints could be put on CF to prevent this kind of outcome from occurring. For instance, the thermodynamics of calculation implementation could be drawn on to motivate a need for a spatiotemporal intensity constraint on the algorithm.
BUT: Such constraints could be hard to motivate (although might produce testable conclusions) and would move away from some of the canonical motivations for CF.
Explanatory Gap
Overview
Even if we identify physical brain processes that correlate with conscious experience, we still lack an explanation of why or how those processes give rise to subjective experience. The incompleteness of our explanations points towards the possibility that there may be a fundamental gap between physical descriptions and phenomenal consciousness.
Responses
The Explanatory Gap effectively assumes the answer, i.e. it assumes that CF explanations are not just inadequate today but doomed to be inadequate forever. Even if the conclusion is fair, different arguments need to be used for it, because skepticism can be broadly applied to any aspect of knowledge/experience.
Further reading
- Chalmers D (2006). Phenomenal Concepts and the Explanatory Gap
- Stanford Encyclopedia of Philosophy (2014). 5.2 The Explanatory Gap
The Hard Problem
Overview
The Hard Problem is often framed as a question: why should a given function be accompanied by experience? 'Experience' and 'function' are (conceivably) different kinds of phenomenon, so no explanation for the latter can ever bridge the gap to the former on its own merits.
Whether this question has force is typically motivated by some separate argument contained in this section, such as an explanatory gap argument, a p-zombie argument, or a knowledge argument, and so is best addressed via those motivating arguments. Nonetheless, it is common framing in the literature and worth including in this list. It also helps illustrate the limits of any conceivable explanation for consciousness, as with other forms of knowledge.
Responses
Interpreted as an explanatory regress, there is never a final answer to the Hard Problem, because a deeper 'why' can always be asked of any level of explanation.
To make progress as a community, we instead need different ways of evaluating/contrasting the quality of alternative answers. The subjective impression of whether a given explanation is 'satisfying' is inadequate, noting that some might find dualist/panpsychist/dual-aspect responses to the Hard Problem inherently satisfying while others do not. Moreover, any such explanation can always have 'why' levelled at it, with security only ever eventually granted by tautology in all cases.
Like any causal explanation, at some point an answer needs to be 'assumed' and then we test its utility, coherence, and ability to meet various requirements levelled at it (such assumptions are often called bridging principles or psychophysical laws in the Hard Problem literature).
Further reading
- Chalmers D (1995). Facing Up to the Problem of Consciousness
Functional Definition Circularity
Overview
Functionalists argue that a mental state like pain does not feel 'painful' in virtue of its intrinsic nature but because of the relationships between that mental state and other mental states. But if mental states are defined in terms of other mental states, we have circular definitions, which risks undermining the robustness of the CF foundations.
Responses
This argument has been formally addressed by Ramsey-Lewis functionalism: the mental state terms of a psychological theory are replaced simultaneously by existentially quantified variables, so that each mental state type is defined by its unique place in the overall network of causal relations to inputs, outputs, and the other state types, and the resulting definitions are provably non-circular. In fact, grounding in causal relationships is an argument in favour of CF (see Ontic Structural Realism).
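Schematically (our notation, not the source's): if the psychological theory is written as T(s₁,…,sₙ; i₁,…,iₘ), with mental-state terms s and input/output terms i, the Ramsey-Lewis definition of, say, pain (= s₁) is

\[ \text{pain} \;=_{\mathrm{df}}\; \text{the unique } x_1 \text{ such that } \exists x_2 \ldots \exists x_n \; T(x_1, x_2, \ldots, x_n;\, i_1, \ldots, i_m), \]

so every mental-state term is eliminated from the right-hand side at once, rather than each being defined one by one in terms of the others.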
Further reading
- Wikipedia. Ramsey–Lewis Method
- Block N (1981). Troubles with Functionalism
Counterfactual Computation Critique
Overview
Computation is typically defined in terms of counterfactuals – what a system would do with different inputs is an important part of its causal structure. For instance, a conscious algorithm processing input X generates conscious experience not just because of what it actually does on receipt of X, but because it has the capacity to appropriately handle different inputs Y, Z, etc. Consider a simple playback system that just follows a pre-recorded sequence of exactly what the conscious algorithm did when processing input X. This playback produces identical outputs but cannot handle any other inputs – it would fail or break if given input Y. Since it lacks counterfactual sensitivity, it should not be conscious according to computational functionalism.
However, it is possible to construct a hybrid system with a switch that either plays the pre-recorded sequence (if input = X) or runs the full conscious algorithm (if input ≠ X). This hybrid system is counterfactually sensitive because it could handle different inputs, but when actually given input X, only the recording plays while the full algorithm remains completely inert.
This creates a puzzle: we have two systems that do exactly the same thing when processing input X – the simple playback versus the hybrid system. The only difference is unused machinery that makes no causal contribution to the actual processing. Yet according to computational functionalism, one should be conscious (due to counterfactual sensitivity) while the other should not. This conflicts with intuitions that consciousness should depend on causally active processes rather than inert capabilities.
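A schematic sketch (ours; all names and the 'trace' are hypothetical) of the systems in question, showing that on input X the playback and the hybrid do exactly the same work:

```python
def full_algorithm(x: str) -> list[str]:
    """Stand-in for the putatively conscious algorithm: handles any input."""
    return [f"{x}:{step}" for step in ("encode", "integrate", "report")]

RECORDED_TRACE_FOR_X = full_algorithm("X")   # recorded once, in advance

def playback(x: str) -> list[str]:
    """Replays the stored trace; no counterfactual sensitivity to other inputs."""
    if x != "X":
        raise RuntimeError("playback breaks on any input other than X")
    return list(RECORDED_TRACE_FOR_X)

def hybrid(x: str) -> list[str]:
    """Counterfactually sensitive overall, but on X the full algorithm stays inert."""
    return playback(x) if x == "X" else full_algorithm(x)

assert playback("X") == hybrid("X") == full_algorithm("X")   # identical behaviour on X
```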
Responses
Define computation without using counterfactuals.
Accept that consciousness depends on counterfactuals that never happen (but could in principle).
Dismiss these as unusual edge cases where consciousness would indeed behave in strange manners, but only because of the oddness of the set-up. Potentially point to other areas of physics/logics where edge cases result in odd outcomes, but do not nullify the underlying rationale.
Define the relevant computations at a more fine-grained layer of causal structure, such that certain input/output emulations of that structure would not suffice for consciousness (unless they also have the same inner structure).
Further reading
- Maudlin T (1989). Computation and Consciousness
Neural Replay
Overview
It is plausible that the brain operates based on what does happen rather than what could have happened, unlike computation, which is typically defined via counterfactuals. Imagine a case where I activate your neurons individually and exogenously in the same spatiotemporal pattern as occurred in a given historical experience (but prevent those specific within-pattern neurons from firing each other; the overall pattern can still fire subsequent neurons elsewhere). Superficially, the important parts of the brain are doing the same as before and the subsequent action is the same, so the same conscious experience would likely occur, but we've removed the causal relationships that drove the original pattern, so CF would predict no consciousness.
Responses
Disagree that the brain would produce consciousness under this setting. This disagreement is empirically testable in principle, dependent on challenging but feasible advances in neuroscience.
Define computation without using counterfactuals.
See also Counterfactual Computation Critique.
Further reading
- Gidon A, Aru J & Larkum ME (2025). Does Neural Computation Feel Like Something?