From my response to a post attempting to refute Searle’s Chinese Room thought experiment…

“The problem, it seems to me at first glance (and I am familiar with Searle’s work), is a fundamental difference that persists, regardless of infinite regress, between symbolic systems and consciousness.

Humans have, at least since the mechanistic transformation of culture, been vulnerable to something like the opposite of the pathetic fallacy: namely, reifying our own bodies and minds in the image of devices we manufacture. Note that devices are composed, by humans, of parts. Organisms are not composed this way. There is a difference of »orders here, and this difference underlies many ‘problems’ including the hard problem of consciousness.

Be that as it may, there are features of »incompleteness that have strong relevance when we examine the difference between artificial »systems and the »beings that compose them.

Gödel proved, quite conclusively, that there are issues underlying all sufficiently powerful formal systems that lead to an infinite, irresolvable series … where true statements of a specific order cannot be proven at that order. Even if we leave Gödel aside, however, we must acknowledge that minds are not fundamentally computational, though they are capable of computation.
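The incompleteness claim above can be made precise. A standard schematic statement of Gödel’s first incompleteness theorem (given here only for reference; the symbols T and G_T are conventional notation, not from the original post) is:

```latex
% Gödel's first incompleteness theorem, schematically:
% for any consistent, effectively axiomatized theory $T$
% that interprets elementary arithmetic, there is a sentence $G_T$ with
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \neg G_T,
\]
% although $G_T$ is true in the standard model of arithmetic, $\mathbb{N}$.
% Adjoining $G_T$ yields a stronger theory $T' = T + G_T$, which has its
% own undecidable sentence $G_{T'}$, and so on without end: this ascent
% through ever-stronger theories is the infinite series referred to above.
```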

The issue in your conjecture is the term ‘understanding’. I don’t believe that this is or can be ‘merely’ computational; or rather, though it involves computation, it does not end there. There is a transcendental feature of awareness that lies, I suspect, in a direct and immediate relation with Origin (and all other minds in all of time), which no machine (as they presently exist or are likely to exist) shares. There are many reasons why this is necessarily a fact, most of them (but not all) having to do with the inheritance of »relationships at the origin of a being. Machines and objects have a catastrophically more modest aspect of this feature.

In any case, minds are not local to beings as we suppose but are, more likely, an expression of the inheritance and re-instancing of (infinite) relationships; they have many other astonishing features which no system will ever capture, however apparently advanced its behavior.

What does seem true, however, is that the nature of human thought, intelligence, sensing, and systematization will become linked with the devices we mistakenly call artificial intelligences (they are artificial; they are not, in my view, intelligent, because intelligence is a property native to organisms and to hyperstructures of organismal relationships over time), and this will certainly change what it means to be human, to learn, or to have a mind.

I am not certain that it is formally impossible for a machine to become complex enough to appear to possess consciousness or awareness or sentience. But I have grave doubts that these appearances are ‘the same thing’ as what we instruct them to simulate.

Is a good enough simulation ‘the same as’ myself? No. This is due at the very least to non-simultaneity in world-lines (relativity) and their apparently irreconcilable uniqueness, but is also due to the nature of being, which is not merely organism, and can never be simply mechanism (the fallacy I mentioned earlier).

The apparent sophistication of a computational system is not equivalent to understanding or consciousness. A person might possibly ‘learn Chinese’ in the Chinese Room, but a machine has nothing that integrates the content of memory into personally (and relationally) meaningful »gestalts… in simpler terms, there is no way to meaningfully integrate the contents of memory into »identity… and there is something beyond mere identity at play in the organization of actual minds…”

OP:

https://www.facebook.com/sebastian.schepis/posts/10160803870679396

Feb 26, 2023
