Today's entry is a brief outline of Giulio Tononi's Information Integration Theory of Consciousness, and an attempt to understand what it means, along with its implications for synthetic intelligences that he briefly touches upon in the article. I'll be keeping this entry generally very high level without going into the mathematical detail that Tononi does, so it should be fairly accessible even if bipartite graphs put you into a cold sweat.[1]

The Information Integration Theory of Consciousness states that what is important to consciousness is not the quantity of information about the external world that we process, but rather how all of the information coming in from different senses and sources interconnects in our brain. This is most clearly indicated by Tononi's own example: a theoretical camera can have as high a resolution as possible, but since two adjacent pixels are in no way related according to the camera, we would not call the camera conscious. It doesn't 'understand' the information stored, which is to say, it doesn't integrate that information to create a conscious world picture.
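Tononi formalizes integration with a measure he calls Φ, which I won't reproduce here. But as a loose, hand-rolled illustration (this is my own toy sketch using mutual information, not Tononi's actual formula), we can see why the camera's pixels fail the test: two elements that vary independently share zero information, no matter how many of them there are.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information (in bits) between two discrete variables,
    estimated from a list of (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), count in joint.items():
        p_xy = count / n
        # Each term compares the joint probability to what independence predicts.
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Camera-like: two adjacent "pixels" whose states vary independently.
independent = [(x, y) for x in (0, 1) for y in (0, 1)]  # uniform over all combos
# Brain-like (toy): the second element's state depends on the first.
integrated = [(x, x) for x in (0, 1)]

print(mutual_information(independent))  # 0.0 bits: no integration
print(mutual_information(integrated))   # 1.0 bits: fully coupled
```

The camera scores zero however many megapixels it has, which is the intuition the example is driving at: sheer quantity of information is not the same thing as integration.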

So, then, in what way do humans integrate information that we do not currently see in machines?[2]

On a rather meta scale, Information Integration Theory is at odds with even the manner in which we approach artificial intelligence. As I touched on a few weeks ago, the most popular approach to the discipline stipulates that we solve small, isolated packets of problems instead of seeking general overarching intelligence. Information Integration would seem to require the whole specification of the brain laid out interconnectedly rather than discretely, since the more information that is interleaved, the closer one gets to a conscious intelligence. Simply having this intelligence, or the ability to perform tasks, is not enough if the information isn't generally understood across all systems.

This seems to open the gates to robotics being a necessary and integral part of artificial intelligence, as embodiment gives systems many more avenues through which to receive information.

The trick, though, is not getting the information -- which we are continuously making great strides in -- but blending that information seamlessly into a full understanding of the state of things. I can even see a place where this may tie into my much earlier post on error theory. Certainly we want a lot of information and we want it integrated, but the beauty of the human mind is the ability to condense this information into manageable chunks. One computer-tested theory holds that schizophrenia may be the result of pounding a system with entirely too much information[3] and no way for it to sort through what is important. Maybe integration has a hand in weeding out the things that we 'need to know' from the information we get.

  1. Rather like most math for me ↩︎

  2. Since I don't think it's even a contentious point to say that few people today believe there exists a conscious machine ↩︎

  3. ↩︎