I'm going to start moving these posts to Fridays. This week's post is late because I'm in Uxbridge attending my brother's fringe play and have been especially busy with family things; my posts were originally set on Thursdays to accommodate a long-finished school schedule, and without that constraint Friday feels like a better day. I started researching this post way early, which is why I'm floored that I'm still writing it so late. I'm going to spend this weekend talking a bit more about computational complexity (as mentioned rather ham-handedly in the previous post), and in particular how it offers a response to Searle's popular Chinese Room argument[1. I know that there is hardly a more tired thought experiment in the philosophy of mind].

For those not in the know, the Chinese Room is a thought experiment, proposed by John Searle, that (extremely simply formulated) asks you to consider a man who speaks no Chinese, placed in a room with a series of manuals. The manuals don't give the meanings of Chinese symbols; they only direct him as to which symbols he should output when given particular Chinese symbols as input. If a person who does speak Chinese passes in particular symbols and gets a proper reply from the man in the room, is it appropriate to say that the man, or the system he creates, 'knows Chinese'? Certainly he is able to syntactically recreate Chinese sentences, but strictly speaking this symbol manipulation doesn't afford him any understanding of what he is doing. Searle argues that this is why mere symbol manipulation cannot be sufficient for an intelligent system, and claims that this refutes the prospect of 'Strong AI', genuinely human-like intelligence. A computer only manipulates symbols, so Searle does not believe intelligence can arise from one.
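To make the setup concrete, here is a minimal sketch of what the man in the room is doing: a bare mapping from input symbols to output symbols, with no representation of meaning anywhere in the system. (The phrases are placeholders of my own, not anything from Searle.)

```python
# A toy version of the room's rule book: pure symbol-to-symbol lookup.
# The entries are illustrative placeholders, not Searle's.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def room(symbols: str) -> str:
    """Return whatever the manual dictates; nothing here 'understands'."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # prints: 我很好，谢谢。
```

The point of the sketch is only that the output can look perfectly fluent while the mechanism is nothing but lookup.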

There are a number of refutations of the Chinese Room thought experiment; in particular we will look at the one that presents itself through computational complexity. The general overview of this refutation is simple and is touched upon by a number of philosophers; I first encountered a similar rebuttal used against things like ghost sightings. As with many issues of computational complexity, the problem is one of resources and speed. The resources required for a system to store responses to all of the symbol combinations in the Chinese language (or combinations of words in English) are astronomical: even in short conversations the number of potential responses to any one query is huge, and to look a response up in a book or chart would take longer than the age of the universe. This indicates that the Chinese Room isn't really a good model of the brain, since while we can conceive of a theoretically possible 'table of every answer to every question', we cannot imagine a system that could feasibly traverse all those responses to find the best one.
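To get a feel for the numbers, here is a back-of-the-envelope sketch; the vocabulary size and turn length are assumptions of mine, chosen for round numbers, not figures from any paper.

```python
# Back-of-the-envelope arithmetic for the lookup-table objection.
# Assumptions (mine, for illustration): a reply is at most 20 words
# long, drawn from a modest 10,000-word vocabulary.
VOCAB_SIZE = 10_000
TURN_LENGTH = 20

possible_replies = VOCAB_SIZE ** TURN_LENGTH   # 10**80 distinct replies
atoms_in_universe = 10 ** 80                   # commonly cited rough estimate

print(possible_replies >= atoms_in_universe)   # True
# A table indexed by whole conversation histories is vastly bigger still:
# after k exchanges there are possible_replies ** k histories to cover.
```

A single turn's worth of entries already rivals the commonly cited estimate of atoms in the observable universe, and indexing by entire conversation histories makes things exponentially worse.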

Aaronson, in his paper 'Why Philosophers Should Care About Computational Complexity', points out that while a huge lookup table is computationally intractable, there is perhaps a tricky algorithm that could carry on an intelligent conversation without iterating over every possible response to a question. Searle's algorithm is simplistic, and it is easy to agree that the system he proposes, as he proposes it, would not be intelligent; but that doesn't really matter, since to pass a Turing Test we would need a system that replies in considerably less time than all of the time in the universe. Aaronson points out that this reply to Searle, which I find quite compelling, implies an interesting consequence: that there is a metaphysical difference between polynomial and exponential time. The lookup table, even if it worked, would not be sentient, but the tricky algorithm may in fact be, even if the two always gave the same output given the same input; the difference between them is purely algorithmic. The proof may even be in the pudding, as I know of no person who would consider a chat bot like Cleverbot intelligent, though it sometimes gives surprisingly good responses to statements posed to it. Thus, the key to strong AI is not passing the Turing Test, but rather something else entirely.
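Aaronson's distinction is easy to demonstrate in miniature. Here is a sketch, using an analogy of my own rather than an example from his paper, of two programs with identical input/output behaviour, where one needs space exponential in the input length and the other runs in time polynomial in it:

```python
# Two ways to 'know' n-digit addition with identical input/output
# behaviour but wildly different resource profiles.
N_DIGITS = 2  # the table has 10**(2 * N_DIGITS) entries: exponential in n

# Way 1: precompute every possible question, Chinese Room style.
TABLE = {(a, b): a + b
         for a in range(10 ** N_DIGITS)
         for b in range(10 ** N_DIGITS)}

def add_by_table(a: int, b: int) -> int:
    return TABLE[(a, b)]

# Way 2: compute the answer on demand, in time polynomial in n.
def add_by_algorithm(a: int, b: int) -> int:
    return a + b

assert add_by_table(42, 58) == add_by_algorithm(42, 58) == 100
```

From the outside the two are indistinguishable; the difference, as in Aaronson's reply to Searle, lives entirely in how the answer is produced.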