Yay! A possible solution of sorts (for now, just working on scaling up the TT). Makes me feel much better. Thoughts as I read through the paper:

Is feeling equal to vital force: no extra force necessary? This is answered, probably correctly, in the paper, but perhaps it will come to be known that there is indeed something that distinguishes feeling beings from living beings in an analogous way. We are already adept at recognizing consciousness in other humans, so perhaps this could become codified in some way. Also, if feeling is not a fifth causal force, then how can we have something we refer to as a force that is not causal? Shouldn’t it then just be a product (an epiphenomenon, woooo) of the other four forces?

Does a scale of feeling support its characterization as a force? No, because a force implies a binary possibility. Rather, this supports the point above: feeling is a product of the other four forces.

If the machine alone will know whether it feels, then why do we bother? I think it will only be an interesting cognitive-science question if we integrate and interact with AI.
Consciousness is feeling. Performance capacity is inseparable from feeling. We would not know we are functioning, clenching our fist, walking, talking, etc., unless we felt it. Without feeling it, we are not conscious of it. Therefore, function only becomes consciousness when it is felt. Robots function but are without consciousness. “The real question, then, for cognitive robotics (i.e., for that branch of robotics that is concerned with explaining how animals and humans can do what they can do, rather than just with creating devices that can do things we'd like to have done for us) is whether feeling is a property that we can and should try to build into our robots. Let us quickly give our answer: We can't, and hence we shouldn't even bother to try.” So: we cannot build robots with the ability to feel. The problem here is: how do we know this? Because of the other-minds problem, if there is a robot that has the same abilities we do (in terms of performance capacity) and tells us it feels, we have no basis for disbelieving it unless we knew it was programmed to say yes. So how do we know whether something in the engineering of this robot has actually provided it with feelings? As we have not answered the hard problem of how and why we feel, how can Harnad so confidently say that we cannot build a robot with the ability to feel?
I see your point. It’s not that we should try to reverse-engineer feeling or include it in our robots’ capacities, but that people should not claim anything about understanding the nature of consciousness unless they can explain the causal origins of how and why we feel. So consciousness as “knowledge/awareness of some (change of?) internal state” doesn’t tell you anything informative or interesting about the nature of consciousness, because it does not address WHY this knowledge/awareness is felt rather than not felt. So rather than publishing books with grandiose titles that exaggerate what we actually know, why not call a spade a spade and admit that we know nothing about consciousness?

Let’s say hypothetically that we built a T3 robot that felt, and we were sure about it (I know it is impossible to be sure, because of the Other-Minds Problem). Would you be comfortable letting the issue rest at that point? If the feeling merely emerged out of the complexity and interdependence of the robot’s various sensorimotor-cognitive modules, without our intending it? Would that be enough for you to join Dennett’s camp and conclude that feeling is a by-product of our performance capacity?