Rethinking AI Specifications

Sorry about how bad the last entry was; sometimes the writing bug isn't biting. Or it is, but only in the context of surreal fiction, or bad poetry, or self-satisfied navel gazing. Hopefully this post is a little more substantial, if a bit concise. Whether it's more philosophy or computer science is an exercise left to the reader.

Let's talk about the problems currently faced at the requirements stage[1] of developing artificial intelligence: in particular, why vague requirements seem to be stunting the development of strong AI[2], or, more strongly, whether the impossibility of objective requirements indicates the impossibility of strong AI itself.[3] Which is to say: if we cannot exactly specify what strong artificial intelligence is, maybe that indicates it is impossible to develop.

Let's be clear about where the problem seems to lie:

Intelligence is a statement of our ignorance of how something works[4]

This pithy quote elegantly outlines the main problem we encounter when describing the requirements for artificial intelligence: what constitutes strong artificial intelligence isn't a fixed benchmark, but an ever-sliding scale. Without some sort of fixed, objectively measurable target, the odds of stumbling into an intelligence that is universally believed to be general seem close to nil, as the bar keeps rising as we understand more about the systems we create.

Our belief in an intelligent system is relational. People in the early 20th century probably would have believed a chess-playing robot was truly and wholly intelligent; now that we see how such mechanisms work inside a machine, we are far less likely to count this as intelligence, calling it instead a mechanical process, even in the human mind. We cannot at this point state what sorts of things count as real intelligence, because as soon as we meet the gold-standard requirements for general intelligence we set out, we will be no further ahead: those requirements will no longer look like intelligence. The article in footnote 4[4:1] is the best I've read on this topic; it quite clearly outlines how intelligence is a relation, and on that basis concludes that strong AI is simply impossible.
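To make the chess example concrete, here's a minimal minimax sketch in Python. Nothing in it comes from the articles above; the toy game tree and every name are mine, purely to show how "mechanical" the mechanism underneath a chess machine really is: it isn't contemplating the game, it's doing arithmetic over a tree of possibilities.

```python
# A toy minimax search: the "intelligence" behind early chess programs,
# reduced to its skeleton. A game tree is given as nested lists; leaves
# are scores from the maximizing player's point of view.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: nothing left to decide
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two moves for us, then replies for the opponent. The machine "chooses"
# the branch worth 6 by exhaustive lookahead, not by insight.
tree = [[3, 5], [6, [9, 1]]]
print(minimax(tree))  # -> 6
```

Once you've seen this, it's hard to call the machine intelligent, and the article's argument is that the same demotion awaits every mechanism we come to understand.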

Admittedly, I'm not so convinced. Even if we ever came to fully understand wetware, I instinctively doubt that humankind would humbly concede that we are not actually intelligent but merely meat machines processing input and output. It has been a goal of philosophy since something close to its inception to prove that humans are somehow distinct from the animal kingdom, and a full schematic of the brain doesn't seem as though it would be enough to topple humanity's hubris. Perhaps ideas in this direction would be sufficient to argue that human-like intelligence should be the goal of artificial intelligence, rather than an equally capable but different sort of intelligence, as is often put forward as a potential angle for developing AI. At any rate, it seems to be a starting point: generating clear requirements for a truly intelligent agent.

About a year ago, I read an article about the loss of ambition in artificial intelligence: students focused on entertaining pseudo-intelligence with no interest in actually making machines smarter[5] -- making robots that can dance and make faces, but that are no closer to true intelligence for it -- and asking where the lack of interest in the big problems stems from. It's possible that this comes from a lacklustre crop of students with no ambition, no money, and a desire to go into any field that hands them a few bucks[6]. But I think the problem runs much deeper; I think a lot of it starts from not knowing where to begin.

Combining these two articles paints a very depressing picture of the field: people solving problems that are small and inconsequential (to the big picture of strong AI development -- I wouldn't trivialize the work done by recent AI researchers for all the tea in China, as my mother might say), and once these problems are solved, they no longer look like intelligence to us, but rather like a Mechanical Turk. The illusion is there, but it isn't fooling anyone anymore.

This is why I believe we need to re-examine what we think comprises human intelligence. Maybe alternative intelligences will emerge as we create firm specifications, but for now, to avoid the pitfalls above, it is time to figure out more decisively what makes a human being intelligent: quantify it, and itemize it (a toy sketch of what that might mean follows below). Surely this is a mammoth undertaking, but I believe a necessary one. A catalogue of humanity might be the particular innovation we need to push new discoveries in artificial intelligence, discoveries that won't leave Minsky accusing us of not making anything smarter.
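To be concrete about what "quantify it, and itemize it" might even mean, here's a purely illustrative Python sketch. Every capability, metric, and threshold below is invented for the example; the point is only the shape of the thing: a spec whose bar is fixed in advance and checked mechanically, so it can't slide after the fact.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    capability: str   # what the agent must be able to do
    metric: str       # how we agree to measure it
    threshold: float  # the pass line, fixed before anyone builds anything

# A (hypothetical) fragment of the "catalogue of humanity":
SPEC = [
    Requirement("analogical reasoning", "score on a frozen analogy test set", 0.90),
    Requirement("novel tool use", "success rate on unseen physical tasks", 0.75),
    Requirement("natural conversation", "blind-judge pass rate", 0.50),
]

def satisfies_spec(results: dict[str, float]) -> bool:
    # The whole point of a fixed specification: once these numbers are
    # met, we don't get to retroactively decide it "wasn't intelligence".
    return all(results.get(r.capability, 0.0) >= r.threshold for r in SPEC)
```

Whether any such list could ever be complete is exactly the philosophy half of the question; writing it down at all is the computer science half.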

This is probably why I'm in Philosophy and Computer Science, ladies and gentlemen.


  1. This one goes out to you, Professor Amyot. ↩︎

  2. http://en.wikipedia.org/wiki/Strong_AI ↩︎

  3. If that sentence needed anything, it was me saying "impossible" a few more times. ↩︎

  4. http://www.i-programmer.info/programming/artificial-intelligence/2437-the-paradox-of-artificial-intelligence.html ↩︎ ↩︎

  5. http://www.technologyreview.com/computing/37525/ ↩︎

  6. And please read that sentence with the sarcasm it was written in. ↩︎