Short Intro to Scott Aaronson's "Why Philosophers Should Care About Computational Complexity"
One of the benefits of not being in school and not being employed should have been that I have a lot of time to work on my blog. Indeed, this could have been the case if I had some actual planning skills. I decided today that I wanted to write on computational complexity and philosophy, mostly as a summary of and thoughts on Scott Aaronson's paper about the topic[1. http://www.scottaaronson.com/blog/?p=735 ]. However, I neglected to notice until this morning that said paper rings in at 59 pages (about 9 of those being bibliography, if memory serves). So, having read the whole thing in nearly one sitting, I'm still reeling somewhat from the information overload. I'm going to go over some of my notes on the introduction now; next post, after I've had time to let the information settle, I'll write about how he applies computational complexity to AI and how it fits with some of my own thoughts on the matter.
As I mentioned above, the purpose of the paper is to demonstrate how computational complexity could bring about the same kind of revolution in philosophy that computability theory brought about when it first showed up in the discipline. But first, we should probably illustrate what computational complexity is at a high level: it is the study of how the resources (time, memory, disk space) needed to solve a problem scale with the size of the problem. To put this in perspective with a basic example for those outside of computer science: sorting a list into ascending order by comparing each element with the one next to it and swapping them if they are in the wrong order (bubble sort) takes, in the worst case (a reverse-sorted list, where the smallest elements start at the end), on the order of n^2 comparisons, where n is the number of elements. When the list is very small, so is the maximum number of steps, but we can see that for large lists this algorithm is not very efficient (i.e. it takes a long time to finish). Efficient algorithms generally finish in time that grows polynomially in n (the best general-purpose sorting algorithms take about n log n steps), while really bad ones can take on the order of 2^n or even n! steps.
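To make that concrete, here is a minimal bubble sort sketch in Python (my own illustration, not from the paper). The nested loops are what produce the worst-case n^2 behavior: the outer loop can run up to n-1 times, and each pass makes up to n-1 comparisons.

```python
def bubble_sort(items):
    """Sort a list in ascending order by repeatedly comparing
    adjacent elements and swapping them when out of order.
    Worst case (a reverse-sorted list): roughly n^2 comparisons."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):  # compare each adjacent pair
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps means the list is already sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```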
In general, the value of understanding computational complexity for philosophy is that it eliminates the assumption that once we know a problem can be solved, the rest of the work can be 'left to the engineers'. It is not so: there is a real difference between something computable in 12 seconds and something that would take more time than the universe has existed. Aaronson's example is excellent here, as he illustrates the point with the difference between reading a 400-page book and reading every possible 400-page book, or between writing down a several-thousand-digit number and counting to that number. This, I think, is a good way to grasp the impact that the complexity of a problem can make. This isn't simply an engineering problem, but a problem of "possibility". He talks at length about how P and NP factor into this complexity and into understanding problems. I'm going to try to avoid too much of a deviation into that topic for now; I will try to write some things on P and NP in my intro to philosophy and computer science.
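To put rough numbers on that "possibility" point, here is a back-of-the-envelope sketch (my own, assuming a hypothetical machine performing a billion steps per second). It shows how polynomial and exponential running times pull apart as n grows:

```python
# How long do n^2 steps vs 2^n steps take on a machine
# doing 10^9 steps per second? (Illustrative assumption.)
STEPS_PER_SECOND = 10**9
AGE_OF_UNIVERSE_SECONDS = 4.35e17  # roughly 13.8 billion years

for n in (30, 60, 100):
    poly_seconds = n**2 / STEPS_PER_SECOND
    exp_seconds = 2**n / STEPS_PER_SECOND
    print(f"n={n}: n^2 takes {poly_seconds:.2e} s, "
          f"2^n takes {exp_seconds:.2e} s "
          f"({exp_seconds / AGE_OF_UNIVERSE_SECONDS:.2e} universe ages)")
```

At n=30 the exponential algorithm finishes in about a second; at n=60 it already needs decades; at n=100 it needs thousands of times the age of the universe, while the n^2 algorithm still finishes in a microsecond. That gap is the "possibility" problem, not an engineering one.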
Aaronson then applies complexity theory to philosophy and points out areas where he would like philosophers to delve deeper. Next post, I'm going to talk about the ways he applies complexity to artificial intelligence, and from there outline how it connects to some of the thoughts I've expounded upon here.