General Intelligence:
Note: The issues here are better described as "frequently contended issues", and the "answers" represent my own biased views rather than well-established theories or facts.
Right now the explanations are kind of brief. I'll write more when I have time.
GOFAI stands for "Good Old-Fashioned AI". People tend to call my project GOFAI because it is logic-based, but there are many new elements in my theory that make it different from GOFAI, such as:
My approach is not a clean break away from GOFAI, but is more like GOFAI++.
This is only partially true. I think we should not rely entirely on machine learning, because that would take an inordinate amount of time for the system to reach human-level intelligence. The only practical way is to make the learning faster, which means we have to hard-wire the system with structural knowledge, for example by using a logic-based knowledge representation scheme.
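To make this concrete, here is a toy sketch in Python of what hard-wired structural knowledge could look like: a few logic-style facts and rules, plus a naive forward-chaining step that derives new facts. The predicates, facts, and rules are invented for illustration; they are not my actual representation scheme.

    # Toy sketch of a logic-style knowledge base with naive forward chaining.
    # All predicate names, facts, and rules here are illustrative only.

    facts = {
        ("parent", "alice", "bob"),
        ("parent", "bob", "carol"),
    }

    # Each rule is (premises, conclusion); variables start with '?'.
    rules = [
        ([("parent", "?x", "?y"), ("parent", "?y", "?z")],
         ("grandparent", "?x", "?z")),
    ]

    def unify(pattern, fact, bindings):
        """Match one pattern against one fact, extending the variable bindings."""
        if len(pattern) != len(fact):
            return None
        b = dict(bindings)
        for p, f in zip(pattern, fact):
            if p.startswith("?"):
                if p in b and b[p] != f:
                    return None
                b[p] = f
            elif p != f:
                return None
        return b

    def substitute(pattern, bindings):
        """Replace variables in a pattern with their bound values."""
        return tuple(bindings.get(t, t) for t in pattern)

    def match_all(premises, facts, bindings):
        """Yield every binding that satisfies all premises."""
        if not premises:
            yield bindings
            return
        first, rest = premises[0], premises[1:]
        for fact in facts:
            b = unify(first, fact, bindings)
            if b is not None:
                yield from match_all(rest, facts, b)

    def forward_chain(facts, rules):
        """Apply the rules repeatedly until no new facts are derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                for b in list(match_all(premises, derived, {})):
                    new_fact = substitute(conclusion, b)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
        return derived

    print(forward_chain(facts, rules))
    # Derives ("grandparent", "alice", "carol") on top of the hard-wired facts.

The point is only that such structural knowledge is available to the system from day one, instead of having to be learned from raw experience.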
Using a fixed knowledge representation scheme would introduce some rigidity to the system, but I think intelligence can be achieved despite such rigidity.
I don't want to reignite the battle of symbolic AI versus connectionism, because that would be counter-productive. I think the truth is that current NN theory is not advanced enough to build general intelligence. So if you are an NN researcher, you may find it fruitful to tackle the following problems:
Finally, let me add that we do use neural networks for some low-level vision processing.
I have not reached a firm conclusion on this issue.
Let me first describe an "ideal" intelligent system. The system has 2 main characteristics:
Many people think that the above AI is "perfect" and that anything with a fixed language module or a fixed knowledge representation is not "true AI". This is a fallacy. They think that the AI must start as a tabula rasa, and that knowledge not acquired through learning is somehow not good enough. I disagree with such a view.
Having a fixed language module is a good thing because it gives the intelligent system a language facility right from the beginning, bypassing the baby-language stage.
On the other hand, a robust language interface may allow the AI to better acquire knowledge, for example via human interaction or by crawling the web.
Let's say that I build a rigid, fixed language module that can only accept a subset of grammatical sentences and may sometimes accept ungrammatical ones. Many AI builders think that such a system is old-fashioned and brittle (not robust), but they have overlooked the fact that language processing is not essential to general intelligence. When the AI acquires human-level intelligence, it can look at its own source code and modify itself; the language module then becomes just a sub-problem that can be solved automatically. Our mission is to build a minimal intelligent system as soon as possible.
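To make concrete what such a rigid language module might look like, here is a toy sketch in Python that accepts only a tiny subject-verb-object fragment of English and turns it into a logic-style triple. The vocabulary and grammar are invented for illustration and are far smaller than anything a real module would use.

    # Toy sketch of a rigid language module: it accepts only a tiny
    # subject-verb-object fragment of English and emits a logic-style triple.
    # The vocabulary and grammar are illustrative only.

    NOUNS = {"john", "mary", "dog", "ball"}
    VERBS = {"sees", "likes", "chases"}
    DETERMINERS = {"the", "a"}

    def parse(sentence):
        """Parse 'NP verb NP', where NP is an optional determiner plus a noun.
        Returns a (verb, subject, object) triple, or None if rejected."""
        tokens = sentence.lower().strip(".").split()

        def noun_phrase(i):
            if i < len(tokens) and tokens[i] in DETERMINERS:
                i += 1
            if i < len(tokens) and tokens[i] in NOUNS:
                return tokens[i], i + 1
            return None, i

        subj, i = noun_phrase(0)
        if subj is None or i >= len(tokens) or tokens[i] not in VERBS:
            return None
        verb, i = tokens[i], i + 1
        obj, i = noun_phrase(i)
        if obj is None or i != len(tokens):
            return None
        return (verb, subj, obj)

    print(parse("The dog chases a ball."))     # ('chases', 'dog', 'ball')
    print(parse("John likes Mary."))           # ('likes', 'john', 'mary')
    print(parse("Time flies like an arrow."))  # None -- outside the accepted fragment

Such a module is obviously brittle, but it delivers usable input to the reasoning core today and can be replaced or improved later.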
My theory shares some features with Hawkins' theory, namely the compression of sensory experience via a hierarchical pattern-recognizing structure. But the pattern-recognizer I use is logic-based, which is more expressive and flexible than neural networks, and thus can handle higher cognition more easily.
But there are several other modules (such as belief revision and natural language) that are absent in Hawkins' system. One thing that is becoming clear is that the neurally based paradigm is too primitive and ineffectual to deal with some of the higher cognitive functions that logic-based AI can handle with great ease.
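To illustrate the kind of hierarchical, compressive pattern recognition mentioned above, here is a toy sketch in Python in which each level rewrites a symbol stream using simple rule-like patterns. The symbols, patterns, and two-level hierarchy are invented for illustration; my actual design is logic-based and far richer.

    # Toy sketch of hierarchical, rule-based pattern recognition over a symbol
    # stream. The symbols and the two-level "grammar" are illustrative only.

    # Level 1: rewrite raw sensory symbols into low-level patterns.
    LEVEL1 = {
        ("|", "_"): "corner",
        ("_", "_"): "edge",
    }
    # Level 2: rewrite low-level patterns into higher-level concepts.
    LEVEL2 = {
        ("corner", "edge", "corner"): "box_side",
    }

    def recognize(stream, patterns):
        """Greedy left-to-right matching; unmatched symbols pass through unchanged."""
        out, i = [], 0
        while i < len(stream):
            for pat, label in patterns.items():
                if tuple(stream[i:i + len(pat)]) == pat:
                    out.append(label)
                    i += len(pat)
                    break
            else:
                out.append(stream[i])
                i += 1
        return out

    raw = ["|", "_", "_", "_", "|", "_"]
    level1 = recognize(raw, LEVEL1)     # ['corner', 'edge', 'corner']
    level2 = recognize(level1, LEVEL2)  # ['box_side'] -- a compressed description
    print(level1, level2)

Each level produces a shorter, more abstract description of the input, which is the sense in which the hierarchy compresses sensory experience.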
Yes, computing power is an important constraint. Those who think that human-level AI can be built on a current PC are probably deluding themselves. As our project grows, we may rent a cluster to run it, which is one more reason why we should collaborate and share costs. We may also explore distributed computing in the future.
I don't agree that the development of human-level AI must wait until computing power reaches the human brain's level. We can build a complete GI ('the wine cup') without filling it with the entire body of human knowledge ('the wine'). We'll see how it performs with whatever fraction of human knowledge the hardware permits.
See my speculations on the post-AI future.
There is a widespread sentiment that can be summarized as "symbolic bad, numerical good", due especially to the success of statistical learning methods currently in vogue, such as SVMs. This amounts to a kind of "maths envy".
Firstly, remember that logic-based AI is also based on (rigorous) mathematics.
Secondly, probabilistic/fuzzy logic is certainly very "numerical" and can perform statistical classification in vector spaces just like neural networks. Therefore, it is wrong to say that my approach is purely symbolic.
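To make this concrete, here is a toy sketch in Python of fuzzy-logic rules classifying a point in a two-dimensional feature space. The membership functions and rules are invented for illustration.

    # Toy sketch: fuzzy-logic rules classifying a 2-D feature vector.
    # The membership functions and rules are invented for illustration.

    def tall(height_cm):
        """Fuzzy membership: degree to which a height counts as 'tall'."""
        return min(1.0, max(0.0, (height_cm - 160) / 30))

    def heavy(weight_kg):
        """Fuzzy membership: degree to which a weight counts as 'heavy'."""
        return min(1.0, max(0.0, (weight_kg - 60) / 40))

    def classify(height_cm, weight_kg):
        """Rules: IF tall AND heavy THEN large build; IF NOT tall AND NOT heavy
        THEN small build (AND = min, NOT = 1 - x)."""
        large = min(tall(height_cm), heavy(weight_kg))
        small = min(1 - tall(height_cm), 1 - heavy(weight_kg))
        return {"large build": large, "small build": small}

    print(classify(185, 90))  # high degree of 'large build'
    print(classify(162, 55))  # high degree of 'small build'

The rules are symbolic in form, but evaluating them is pure arithmetic on the feature vector.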
Thirdly, logic-based inductive learning is also a form of statistical learning, but people tend to overlook this because they are unfamiliar with it. In fact, inductive learning based on probabilistic logic involves just as much number crunching as any other statistical method.
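Here is a toy sketch in Python of what such "statistical" rule induction can look like: each candidate logical rule is scored by counting how often it holds in the data, which is ordinary number crunching. The data and candidate rules are made up for illustration.

    # Toy sketch of logic-based inductive learning as statistics: score each
    # candidate rule by how often it holds in the examples.
    # The data and candidate rules are made up for illustration.

    examples = [
        {"has_fur": True,  "lays_eggs": False, "mammal": True},
        {"has_fur": True,  "lays_eggs": False, "mammal": True},
        {"has_fur": False, "lays_eggs": True,  "mammal": False},
        {"has_fur": True,  "lays_eggs": True,  "mammal": True},   # platypus-like case
        {"has_fur": False, "lays_eggs": True,  "mammal": False},
    ]

    # Candidate rules of the form: IF <premise> THEN mammal.
    candidates = ["has_fur", "lays_eggs"]

    def rule_accuracy(premise, examples):
        """Estimate P(mammal | premise) by counting -- ordinary statistics."""
        covered = [e for e in examples if e[premise]]
        if not covered:
            return 0.0
        return sum(e["mammal"] for e in covered) / len(covered)

    for premise in candidates:
        print("IF", premise, "THEN mammal : accuracy =",
              round(rule_accuracy(premise, examples), 2))
    # 'has_fur' scores 1.0 and 'lays_eggs' scores 0.33, so the learner
    # prefers the former rule.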
And fourthly (this is the most important point), I have tried to build a GI using only vector-space classifiers (neural networks in my case, but it could just as well be other statistical classifiers) and found the approach to be infeasible, or at least highly ineffectual:
The main reason is that I think we can build an AI much sooner than we can simulate the human brain. This is not to say that building an AI is easy, only that it is the faster route.
Another reason is that a brain simulation would be opaque: you would end up with a human-level intelligence (like ourselves) that cannot figure out how it itself works (just as we, including neuroscientists, cannot yet explain how the brain works), and so it could not reprogram or improve itself.
Mar/2006 (C) GIRG