In article <CI1E7p.6qG at carmen.logica.co.uk> WilsonR at LILHD.Logica.com (Richard Wilson) writes:
>In article <1993Dec13.134805.12597 at clpd.kodak.com> cox at ast.serum.kodak.com (David Cox (15084)) writes:
>>Perhaps someday we will know that certain patterns of neural activity
>>are related to consciousness. In fact, consciousness might then be
>>defined as those patterns of neural activity. We can then say that the
>>hand, leg, or other parts of the anatomy are not conscious because they
>>do not contain those patterns of activity.
>>Interestingly, if we can identify those patterns, I see no reason
>>why they cannot be transferred to other entities such as computers.
>>Would those entities then be conscious?
>Sure, if the patterns are computable! IMO conscious neural activity is
>a non-deterministic form of feedback which FSAs cannot, in principle,
If you mean nondeterministic in the technical sense normally used in
FSA theory, then FSAs can indeed handle it: if the nondeterministic
machine has N states, the deterministic equivalent may need up to 2**N
states. I believe this was first shown by Rabin and Scott. If you mean
nondeterministic in some unspecified sense, then your assertion is too
vague to refute, but you ought to offer something more than IMO.
Like, say, IMHO?
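
For the curious, the construction behind the 2**N claim is the standard
subset construction: each state of the deterministic machine is a *set*
of states of the nondeterministic one, so at most 2**N such states can
arise. A minimal sketch in Python follows; the example NFA (states q0-q2,
accepting strings over {0,1} whose second-to-last symbol is 1) is my own
made-up illustration, not anything from the thread.

```python
from itertools import chain

def nfa_to_dfa(states, alphabet, delta, start, accepting):
    """Subset construction: delta maps (state, symbol) -> set of next states."""
    start_set = frozenset([start])
    dfa_states = {start_set}          # each DFA state is a set of NFA states
    dfa_delta = {}
    worklist = [start_set]
    while worklist:                   # explore only the reachable subsets
        S = worklist.pop()
        for a in alphabet:
            T = frozenset(chain.from_iterable(
                delta.get((q, a), ()) for q in S))
            dfa_delta[(S, a)] = T
            if T not in dfa_states:
                dfa_states.add(T)
                worklist.append(T)
    # a subset is accepting if it contains any accepting NFA state
    dfa_accept = {S for S in dfa_states if S & accepting}
    return dfa_states, dfa_delta, start_set, dfa_accept

# Hypothetical 3-state NFA: q1 is entered by guessing "this 1 is
# second-to-last"; q2 is the accepting state.
delta = {
    ("q0", "0"): {"q0"},
    ("q0", "1"): {"q0", "q1"},
    ("q1", "0"): {"q2"},
    ("q1", "1"): {"q2"},
}
dfa_states, dfa_delta, start, accept = nfa_to_dfa(
    {"q0", "q1", "q2"}, {"0", "1"}, delta, "q0", {"q2"})
print(len(dfa_states))  # at most 2**3 = 8 subsets; only 4 are reachable here
```

Note the worst case really is exponential: for some NFAs all 2**N subsets
are reachable, which is why determinization can blow up in practice.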