Unfolding Problem
Overview
Any recurrent neural network running on a computer (and therefore running for a bounded number of steps) can be unfolded into a purely feedforward network with identical input/output behaviour. The unfolded network is not necessarily identical in speed or energy efficiency, but these differences fall outside the usual criteria of computational functionalism (CF), which standardly holds that all Turing-equivalent computations share the same relevant properties. Yet it seems implausible that a feedforward-only network could be conscious, given the evidence of recurrence in the human brain and our intuitions about self-reference. If a feedforward network cannot have the property of consciousness, then by standard CF assumptions neither can any Turing-equivalent system, such as an RNN.
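The unfolding construction can be made concrete with a toy example. Below is a minimal sketch (all network sizes and weights are hypothetical, chosen only for illustration): a one-layer recurrent network run for a fixed number of timesteps is replaced by a stack of distinct feedforward layers, each holding a copy of the recurrent weights, so the hidden state only ever flows forward and never loops back.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h = 4, 3, 5  # hypothetical: timesteps, input size, hidden size

# Recurrent network: a single weight set reused at every timestep.
W_in = rng.normal(size=(d_h, d_in))
W_rec = rng.normal(size=(d_h, d_h))

def rnn(xs):
    h = np.zeros(d_h)
    for x in xs:  # recurrence: h is fed back into the same weights
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

# Unfolded network: T separate layers, each a copy of the recurrent
# weights, applied exactly once. The state passes forward layer to
# layer; nothing ever feeds back, so the computation graph is a DAG.
layers = [(W_in.copy(), W_rec.copy()) for _ in range(T)]

def feedforward(xs):
    h = np.zeros(d_h)
    for (Wi, Wr), x in zip(layers, xs):  # strictly forward pass
        h = np.tanh(Wi @ x + Wr @ h)
    return h

xs = rng.normal(size=(T, d_in))
# Identical input/output mapping, as the argument requires.
assert np.allclose(rnn(xs), feedforward(xs))
```

The construction works for any bounded run: each timestep of the recurrent loop becomes one more copied layer, so depth grows with runtime while the input/output mapping is preserved exactly.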
Responses
Define CF so that internal causal structure matters, not just the input/output mapping.
Identify the necessary recurrence at the level of abstract logical operations on data objects, rather than in physically re-intersecting causal mechanisms.
Further reading
Do you find this argument strong or weak?