CF Debate

Proper Influences

Overview

Computational functionalism holds that consciousness is a matter of implementing the right algorithm. In other words, it requires reliably having the right patterns of internal computational behavior under both actual and counterfactual circumstances.

The problem of proper influences arises because reliability is context-sensitive, and it can be unclear which contexts are the right ones under which to evaluate an entity. How an entity functions depends on the environment it is in, and environmental factors may influence the behavior of its mechanisms in different ways. Sometimes we are inclined to think that the presence of an influence would change whether an algorithm is implemented; sometimes we are inclined to think it would not. It is not straightforward to systematically distinguish the former cases from the latter.

The problem may be illustrated with the case of a bystander positioned near a bomb that is likely to go off soon. In assessing the person's present conscious state, we should evaluate the algorithm their brain performs on the assumption that it continues to receive oxygenated blood, while setting aside the bomb's disruptive effects. In the absence of the explosion, their brain would count as fulfilling the right functional role; in the absence of fresh oxygenated blood, it would not.

Human brains rely on blood and not on nearby explosions, but other entities might conceivably need the reverse. Functionalism aims to allow very different sorts of entities to all count as conscious by virtue of implementing the same algorithm in different ways. The problem of proper influences threatens to make whether an entity implements the right algorithm too subjective, or too dependent on an arbitrary choice of contexts, to ground an objective fact about consciousness.

Responses

  1. We might identify some feature that distinguishes the environmental factors we consider relevant, and exclude external factors lacking it as improper. For instance, perhaps we should judge implementation on the assumption that all and only those external influences that were a regular part of the candidate entity's history are present.

  2. We might treat consciousness as an ambiguous or parochial concept and accept ambiguity or arbitrariness in what environmental factors are relevant.

  3. We might accept that, for an entity to be conscious, there need only be some admissible way of settling which environmental influences are properly counted, under which the entity would implement the right algorithm and thereby have the corresponding conscious states.

  4. We might apply functionalist theories without a 'reliability' criterion. That is, what matters for consciousness is whether a particular functional role is actually played in full, e.g. whether a particular algorithm does unfold as specified, not whether the algorithm is reliable in general or likely to continue in the near future. If the bomb explodes before the relevant algorithm has progressed far enough to generate the next 'moment' of experience, then there would be no conscious experience at that point, but there would be conscious experiences prior to that point wherever sufficient algorithmic cycles have completed.
