| Home | Table of Contents |
General Intelligence:
Deduction is the most-studied form of inference, but inference also includes abduction and induction (in the form of inductive learning).
How do goals guide the direction of the stream of inference?
The knowledge base of a general intelligence is huge, so it is essential to select only the relevant facts/rules into Working Memory for processing.
For example, if the fact "Kellie says she is busy" is in Working Memory, then the following facts/rules may be recalled from other memory systems:
| fact / rule | logical form | recalled from |
| I met Kellie at a party last week | Encounter(i,encounter1,last_week), With(encounter1,kellie), ... | EM |
| Kellie is pretty | Pretty(kellie) | GM/EM |
| If a person says p then p is true unless the person is lying / joking / etc | Says(person,p) ⇒ p unless Lying(person), Joking(person), ... | GM |
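One plausible way to implement this kind of recall is to index each memory item by the predicate and constant symbols appearing in its logical form, so that a new Working Memory fact can cue every item sharing a symbol with it. The sketch below assumes exactly that symbol-overlap scheme (the class and item names are made up for illustration):

```python
from collections import defaultdict

# Hypothetical sketch: each memory item is indexed by the symbols in its
# logical form; a Working Memory fact recalls items that share symbols.
class MemorySystem:
    def __init__(self, name):
        self.name = name
        self.index = defaultdict(set)   # symbol -> item ids
        self.items = {}                 # item id -> logical form

    def store(self, item_id, symbols, form):
        self.items[item_id] = form
        for s in symbols:
            self.index[s].add(item_id)

    def recall(self, symbols):
        """Return all stored items sharing at least one symbol with the cue."""
        hits = set()
        for s in symbols:
            hits |= self.index[s]
        return {i: self.items[i] for i in hits}

em = MemorySystem("EM")  # Episodic Memory
gm = MemorySystem("GM")  # Generic Memory
em.store("encounter1", {"kellie", "Encounter"},
         "Encounter(i,encounter1,last_week), With(encounter1,kellie)")
gm.store("pretty_kellie", {"kellie", "Pretty"}, "Pretty(kellie)")
gm.store("says_rule", {"Says"},
         "Says(person,p) => p unless Lying(person), Joking(person)")

# The fact "Kellie says she is busy" cues recall with its symbols:
cue = {"kellie", "Says"}
recalled = {**em.recall(cue), **gm.recall(cue)}
print(sorted(recalled))   # all three items from the table are recalled
```

Note that such a scheme recalls anything sharing a symbol with the cue, which is deliberately over-inclusive; the relevance-weighting factors discussed below would be needed to rank or prune the results.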
How does memory recall work?
Which facts are to be recalled from memory depends on the following factors:
See also Generic Memory.
Note that the Inference Engine (or the Planner) accesses other memory systems via Working Memory. For example, the Inference Engine may ask Generic Memory to find all the rules applicable to (i.e. whose "head" matches) Says(person,something). Alternatively, this happens automatically when the fact Says(kellie,she_is_busy) appears in Working Memory.
The Inference Engine may talk directly to Generic Memory and other memory systems (instead of passing through Working Memory). This issue has not been decided yet.
{ To do: How to ensure that important facts/rules are not missed from recall? }
The rete (pronounced "ree-tee") algorithm was proposed by [Forgy 1982] and is the core algorithm in many expert system shells such as OPS5, CLIPS (written in C), and Jess (written in Java). It is also a core component of Soar, a rule-based cognitive architecture.
The purpose of rete is to match many rules against many facts efficiently. A "dumb" algorithm scans all the rules and tries to match each one against the facts, the so-called "rules finding facts" approach. Rete implements the opposite "facts finding rules" approach, exploiting the observation that most facts in working memory persist from iteration to iteration, so only a few facts change per iteration.
Another function of rete is "predicate hashing". For example, a fact may contain two predicates X and Y, and may potentially match a huge number of rules. Checking each rule even once would be prohibitive. Hashing the predicates lets only those rules that mention X or Y in their LHS participate in the matching.
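The two ideas can be combined in a few lines. The sketch below is not Forgy's full rete network (it omits the alpha/beta memories and join nodes), only an illustration of predicate hashing driving the "facts finding rules" direction; the rule names and predicates are invented for the example:

```python
from collections import defaultdict

# Illustrative rules, each listed with the predicates in its LHS.
rules = {
    "believe_speech": ["Says"],
    "greet_friend":   ["Friend", "Near"],
    "admire":         ["Pretty"],
}

# Predicate hash: predicate -> rules whose LHS mentions it.
by_predicate = defaultdict(list)
for name, lhs in rules.items():
    for pred in lhs:
        by_predicate[pred].append(name)

working_memory = set()
candidates = set()   # rules whose LHS might now be satisfied

def assert_fact(pred, *args):
    """Adding a fact touches only the rules hashed under its predicate,
    instead of re-scanning every rule against every fact each cycle."""
    working_memory.add((pred,) + args)
    candidates.update(by_predicate[pred])

assert_fact("Says", "kellie", "she_is_busy")
print(sorted(candidates))   # only 'believe_speech' is considered
```

A real rete network additionally caches partial matches between cycles, so a rule with several LHS patterns is not re-checked from scratch when one new fact arrives.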
Deduction can be done in many ways; we survey them only briefly because they are not the computational bottleneck of common-sense reasoning. The knowledge base of a general intelligence is huge, so inference should be applied only to the limited set of facts/rules within Working Memory. After the relevant facts/rules have been selected from memory (see the section above), inference can be carried out by traditional algorithms, even though the problem is NP-hard in general.
A model is a particular assignment of truth values to variables. Model checking means enumerating all possible models to see if a formula holds in some model (satisfiability) or in every model (validity).
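For propositional logic this enumeration can be written directly: with n variables there are 2^n models to try. The toy checker below uses an invented example formula to show both queries:

```python
from itertools import product

# A model is one assignment of truth values to the variables;
# we simply enumerate all 2^n of them.
def models(variables):
    for values in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, values))

def satisfiable(formula, variables):
    return any(formula(m) for m in models(variables))

def valid(formula, variables):
    return all(formula(m) for m in models(variables))

# Illustrative formula: (p => q) and p
f = lambda m: (not m["p"] or m["q"]) and m["p"]
print(satisfiable(f, ["p", "q"]))  # True: the model p=True, q=True works
print(valid(f, ["p", "q"]))        # False: e.g. p=False falsifies it
```

The exponential blow-up in the number of models is exactly why the smarter algorithms surveyed next exist.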
There are many model-checking algorithms:
See this page.
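As a concrete instance, the DPLL procedure from [Davis, Logemann & Loveland 1962] (cited below) improves on plain enumeration by simplifying the clause set after each assignment and propagating unit clauses. The following is a deliberately minimal sketch, not a production SAT solver; clauses are frozensets of (variable, polarity) literals:

```python
# Minimal DPLL sketch in the spirit of [Davis, Logemann & Loveland 1962].
def dpll(clauses):
    if not clauses:
        return True                      # every clause satisfied
    if frozenset() in clauses:
        return False                     # an empty clause: contradiction
    # Unit propagation: a one-literal clause forces that literal.
    unit = next((c for c in clauses if len(c) == 1), None)
    lit = next(iter(unit)) if unit else next(iter(next(iter(clauses))))
    var, pol = lit
    def assign(value):
        new = set()
        for c in clauses:
            if (var, value) in c:
                continue                 # clause satisfied, drop it
            new.add(c - {(var, not value)})
        return dpll(new)
    # Branch on both values only when the literal was not forced.
    return assign(pol) or (unit is None and assign(not pol))

# (p or q) and (not p or q) and (not q): unsatisfiable
cnf = {frozenset([("p", True), ("q", True)]),
       frozenset([("p", False), ("q", True)]),
       frozenset([("q", False)])}
print(dpll(cnf))   # False
```

Modern SAT solvers add clause learning, heuristics for choosing the branch variable, and efficient data structures on top of this same skeleton.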
See inductive learning.
[Davis & Putnam 1960] A computing procedure for quantification theory. Journal of the Association for Computing Machinery, 7(3), pp. 201-215.
[Davis, Logemann & Loveland 1962] A machine program for theorem-proving. Communications of the Association for Computing Machinery, 5, pp. 394-397.
[Forgy 1982] Rete: a fast algorithm for the many pattern / many object pattern match problem. Artificial Intelligence, 19, pp. 17-37.
23/Aug/2006 (C) GIRG [ Notice about intellectual property on this page ]