Let's talk a little about our ad hoc understanding of the world as it relates to machines. This is something that has kept me interested for a few months, and the ideas are only now starting to really distill for me. We certainly don't have fine wine just yet, but I'd rather commit these ideas to internet paper than leave them in my brain until they are completely forgotten, which is always a distinct possibility. This post was originally meant to tie into some thoughts on error in computer science and optical illusions, but I'm now thinking that's too ambitious for a single blog post, so I'll save those for later.
In reading about artificial intelligence there is often, if not always, a push towards a more logically sound and completely reasonable entity, one which gathers a complete and objective picture of the world before calculating its actions. We build a system based on rules for general situations and hope that the system will then be able to apply the rules we give it to a variety of circumstances. Object-oriented programming in particular is directed towards code reuse and easy generalization. You can see this turning up especially in planning as it relates to AI: general rules are formulated and then acted upon.
The thing is, there seems to be a disconnect here. If we are striving for more human-like intelligence, and for building that elusive general intelligence machine, it seems to me this approach is incorrect. As I see it, human intelligence and understanding of the world do not work exclusively on skilled reasoning about the world as it objectively is; instead, the human mind seems very much to cheat, or rather, to use hacks that get it not a real picture of the world but a useful one.
I guess as an aside here, I should state that I do not necessarily believe human intelligence is the only kind of intelligence, or that the only way to reach strong AI is via a computer intelligence most similar to people. I am writing this on the assumption that it is the case, but I believe it wrong to take that assumption for granted. You are welcome to reject the premise, and to plug your ears and hum for the next few paragraphs.[1]
The human equivalent of so-called 'general rules' is a collection of more ad hoc constructions and a series of 'right for right now' interpretations of the world we live in. A more specific example: we now know that, without education or formal training, children think not linearly but logarithmically. That is to say, when given a number line, young children are more likely to space 1 and 2 further apart than 9 and 10 [2]. As we get older and go through some sort of education system, we start spacing 1 through 10 evenly on the line, linearly.
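To make the contrast concrete, here is a minimal sketch (my own illustration, not anything taken from the research) of where the numbers 1 through 10 land on a unit-length line under the two schemes:

```python
# Minimal sketch: positions of 1..10 on a unit-length number line,
# under linear ("educated adult") versus logarithmic ("untrained") spacing.
import math

def linear_position(n, lo=1, hi=10):
    """Evenly spaced placement."""
    return (n - lo) / (hi - lo)

def log_position(n, lo=1, hi=10):
    """Ratio-based placement: small numbers spread out, large numbers bunch up."""
    return (math.log(n) - math.log(lo)) / (math.log(hi) - math.log(lo))

for n in range(1, 11):
    print(f"{n:>2}  linear: {linear_position(n):.2f}  log: {log_position(n):.2f}")
```

Under the logarithmic placement the gap between 1 and 2 takes up about 30% of the line while the gap between 9 and 10 takes up about 5%; under the linear placement every gap is an identical 11%.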
And we can see quite easily why this understanding of numbers could be useful: telling the difference at a glance between one and two is far more important than knowing the difference between 1,000 and 1,001 when you consider things like the harrowing herd of wildebeests thundering across the plains with murder in their hearts. Although in civilized society this distinction is less critical, we tend to forget the more natural state when programming. That is, more specific rules for more specific circumstances, and ideas that are useful rather than ideas that are objectively true.
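As a toy illustration of 'useful rather than true', imagine judging quantities by their ratio instead of their absolute difference. The 10% threshold below is completely made up; it just stands in for whatever our perceptual machinery actually does:

```python
# Toy ratio-based comparison: "useful" rather than "objectively true".
# The 10% threshold is invented for illustration, not a measured value.
def noticeably_different(a, b, threshold=0.10):
    """True if the relative difference between a and b exceeds the threshold."""
    return abs(a - b) / max(abs(a), abs(b)) > threshold

print(noticeably_different(1, 2))        # True  -- one wildebeest vs. two
print(noticeably_different(1000, 1001))  # False -- lost in the herd
```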
It is difficult to imagine how to build a system that could successfully weed usefulness out of objective truth, since we do not completely understand how the human mind decides what knowledge is useful in the first place. There are a number of databases out there dedicated to 'common sense', but I don't think they capture common sense in the way that I mean. They appear to hold high-level, declarative common sense (i.e. 'The sky is blue', or 'Jean Chretien is a former Prime Minister of Canada') rather than low-level, operational accounts of common sense ('Knowing the difference between 1 and 2 is more important than knowing the difference between 100 and 101', or 'It is more important to focus on things in motion than on those that are stationary'). That second example took a long time to come up with, which perhaps speaks to the difficulty of putting rules like these into a concise form.
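To sketch what I mean by the two kinds, here is a small illustration; the names and structures below are invented for this post, not drawn from any actual common-sense database:

```python
# Invented contrast between the two kinds of "common sense" described above.

# High-level, declarative common sense: facts you can look up.
FACTS = {
    "sky_colour": "blue",
    "jean_chretien": "former Prime Minister of Canada",
}

# Low-level, operational common sense: rules about what to pay attention to.
def attention_priority(obj):
    """Moving things matter more than stationary ones; score accordingly."""
    speed = obj.get("speed", 0.0)
    return 1.0 + speed  # stationary -> 1.0, faster -> higher priority

objects = [
    {"name": "rock", "speed": 0.0},
    {"name": "wildebeest", "speed": 8.0},
]
for obj in sorted(objects, key=attention_priority, reverse=True):
    print(obj["name"], attention_priority(obj))
```

The first kind is easy to write down and store; the second kind only shows up as behaviour, which may be part of why the databases lean so heavily towards the first.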
The main thesis this article is stumbling towards is that in order to develop artificial intelligence that thinks more like us (and perhaps, in doing so, thinks wrongly), we may need to build our systems around what would make it easier for them to "survive", and not around a detailed analysis of what there is.
This article may be more to your liking in this case: The Electronic Brain? Your Mind Vs. a Computer. ↩︎
Though I first encountered this idea in Alex Bellos' Here's Looking at Euclid, the general theory is also illustrated in "Why we should love logarithms". ↩︎