Publication of this symposium, almost twenty years after writing the paper, has encouraged me to dig into my records to determine the article's history. The roots of the article lie in a lengthy e-mail discussion on the topic of "What is Computation", organized by Stevan Harnad in 1992. I was a graduate student in Doug Hofstadter's AI laboratory at Indiana University at that point and vigorously advocated what I took to be a computationalist position against skeptics. Harnad suggested that the various participants in that discussion write up "position papers" to be considered for publication in the journal Minds and Machines. I wrote a first draft of the article in December 1992 and revised it after reviewer comments in March 1993. I decided to send a much shorter article on implementation to Minds and Machines and to submit a further revised version of the full article to Behavioral and Brain Sciences in April 1994. I received encouraging reports from BBS later that year, but for some reason (perhaps because I was finishing a book and then moving from St. Louis to Santa Cruz) I never revised or resubmitted the article. It was the early days of the web, and perhaps I had the idea that web publication was almost as good as journal publication.
5 Computational sufficiency
We now come to issues that connect computation and cognition. The first key thesis here is the thesis of computational sufficiency, which says that there is a class of computations such that implementing those computations suffices to have a mind; and likewise, that for many specific mental states there is a class of computations such that implementing those computations suffices to have those mental states. Among the commentators, Harnad and Shagrir take issue with this thesis.
Harnad makes the familiar analogy with flying, digestion, and gravitation, noting that computer simulations of these do not fly or digest or exert the relevant gravitational attraction. His diagnosis is that what matters to flying (and so on) is causal structure and that what computation gives is just formal structure (one which can be interpreted however one likes). I think this misses the key point of the paper, though: that although abstract computations have formal structure, implementations of computations are constrained to have genuine causal structure, with components pushing other components around.
The causal constraints involved in computation concern what I call causal organization or causal topology, which is a matter of the pattern of causal interactions between components. In this sense, even flying and digestion have a causal organization. It is just that having that causal organization does not suffice for digestion. Rather, what matters for digestion is the specific biological nature of the components. One might allow that there is a sense of "causal structure" (the one that Harnad uses) where this specific nature is part of the causal structure. But there is also the more neutral notion of causal organization where it is not. The key point is that where flying and digestion are concerned, these are not organizational invariants (shared by any system with the same causal organization), so they will also not be shared by relevant computational implementations.
In the target article I argue that cognition (and especially consciousness) differs from flying and digestion precisely in that it is an organizational invariant, one shared by any system with the same (fine-grained) causal organization. Harnad appears to think that I only get away with saying this because cognition is an "invisible" property, undetectable to anyone but the cognizer. Because of this, observers cannot see where it is present or absent, so it is less obvious to us that cognition is absent from simulated systems than that flying is absent. But Harnad nevertheless thinks it is absent and for much the same reasons.
Here I think he does not really come to grips with my fading and dancing qualia arguments, treating these as arguments about what is observable from the third-person perspective, when really these are arguments about what is observable by the cognizer from the first-person perspective. The key point is that if consciousness is not an organizational invariant, there will be cases in which the subject switches from one conscious state to another conscious state (one that is radically different in many cases) without noticing at all. That is, the subject will not form any judgment (where judgments can be construed either third-personally or first-personally) that the states have changed. I do not say that this is logically impossible, but I think that it is much less plausible than the alternative.
Harnad does not address the conscious cognizer's point of view in this case at all. He addresses only the case of switching back and forth between a conscious being and a zombie; but the case of a conscious subject switching back and forth between radically different conscious states without noticing poses a much greater challenge. Perhaps Harnad is willing to bite the bullet that these changes would go unnoticed even by the cognizer in these cases, but making that case requires more than he has said here. In the absence of support for such a claim, I think there remains a prima facie (if not entirely conclusive) case that consciousness is an organizational invariant.