"Singularity related articles and worries about "good" AI building makes me a bit uneasy when the performance of machines on many important tasks, e.g. object recognition, is dismal. So we have quite a way to go."
Yudkowsky (2008), "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (http://singinst.org/upload/artificial-intelligence-risk.pdf), addresses this in section 7 (though the whole paper is worth reading): "The first moral is that confusing the speed of AI research with the speed of a real AI once built is like confusing the speed of physics research with the speed of nuclear reactions. It mixes up the map with the territory."
===============
"in (2) how do we define the "upper limits of intelligence", when even defining intelligence is problematic."
Given the huge number of reproducible cognitive biases humans are known to exhibit (http://wiki.lesswrong.com/wiki/Bias), it seems very unlikely that humans are optimally intelligent. One way to define intelligence, from the Omohundro paper below: 'We define "intelligent systems" to be those which choose their own actions to achieve specified goals using limited resources.' Under that definition, "more intelligent" simply means better at achieving those goals using limited resources.
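To make that definition concrete, here is a minimal Python sketch (the toy goal, the two agents, and the budget are all my own invented illustrations, not anything from the Omohundro paper) of comparing two systems by how well they achieve the same goal under the same resource limit, which is the sense in which one counts as "more intelligent" than the other:

    def goal_score(state):
        """Hypothetical goal: get the state as close to 100 as possible."""
        return -abs(100 - state)

    def cautious_agent(state):
        """Chooses small steps toward the goal."""
        return state + (1 if state < 100 else -1)

    def bold_agent(state):
        """Chooses larger steps toward the goal."""
        return state + (10 if state < 100 else -10)

    def evaluate(agent, budget=20, start=0):
        """Run an agent for a fixed number of actions (the 'limited
        resources') and report how well the goal was achieved."""
        state = start
        for _ in range(budget):
            state = agent(state)
        return goal_score(state)

    # Same goal, same budget: the agent with the higher score is the
    # "more intelligent" one under the goal-achievement definition.
    print(evaluate(cautious_agent))  # -80 (only reached state 20)
    print(evaluate(bold_agent))      #   0 (reached the goal state 100)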
===============
"(3), I don't know what sort of "economic arguments" are meant but as we all know becoming "more intelligent" is not a simple hill climbing process, in fact it's not clear how to go about it."
Omohundro (2011), "Rationally Shaped Artificial Intelligence", makes the case for #3: http://selfawaresystems.files.wordpress.com/2011/10/rational... (summary: by "more intelligent" we mean "more capable of achieving goals," so any AI that has goals will act to become more intelligent in order to achieve those goals more effectively).
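Since the parent quote leans on the hill-climbing metaphor, here is a minimal Python sketch (with a toy fitness landscape of my own invention) of why simple hill climbing is a weak model of becoming more intelligent: it only ever accepts locally better neighbors, so it stalls on a local optimum even when a far better point exists elsewhere:

    def fitness(x):
        """Toy 'capability' landscape: a small peak at x=2 (value 4)
        and a much higher peak at x=20 (value 10)."""
        return max(4 - abs(x - 2), 10 - abs(x - 20))

    def hill_climb(x):
        """Move to a strictly better neighbor until none exists."""
        while True:
            best = max((x - 1, x + 1), key=fitness)
            if fitness(best) <= fitness(x):
                return x
            x = best

    # Starting at x=0, the climber stops at the local optimum x=2
    # (fitness 4) and never discovers x=20 (fitness 10).
    x = hill_climb(0)
    print(x, fitness(x))  # 2 4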
Thanks for a great reply! I think my bias (and I don't think I'm alone in this) is that when I think of intelligence, I tend to think of tasks the human brain has evolved to perform, like linguistic and visual analysis. It may be argued that these tasks form a small subspace of the "intelligence space", but, boy, is the human brain good at solving them! That doesn't mean we are optimal; however, for many AI tasks the brain operates on such a vastly different performance level that comparisons with machine AI seem out of place. Even systems like Watson, which took many hundreds of man-years to create, can only stump humans in narrow-domain tasks.
Now, it might be argued that some sort of paradigm shift will occur as machines get more intelligent (e.g., similar to how results from the special theory of relativity and quantum theory were impossible to predict from within Newtonian-constrained thinking), and some effects we cannot predict now will prevail, like the massive increase in human intelligence that caused/was caused by the advent of language.