
General Intelligence:

Inference: deduction

Deduction is the most-studied form of inference, but inference also includes abduction and induction (in the form of inductive learning).

  1. Requirements
  2. Recalling relevant facts from memory
  3. Deduction
  4. Uncertain reasoning
  5. Abduction
  6. Induction

How do goals guide the direction of the stream of inference?


Requirements

  1. Deduction
  2. Abduction — finding explanations
  3. Induction
  4. Uncertainty reasoning
  5. Dealing with assumptions
  6. Reasoning about knowledge / beliefs (autoepistemic logic)

Recalling relevant facts from memory

The knowledgebase of general intelligence is huge, so it is extremely important to select only the relevant facts/rules into Working Memory for processing.

For example, if the fact "Kellie says she is busy" is in Working Memory, then the following facts/rules may be recalled from other memory systems:

fact / rule | logical form | recalled from
I met Kellie at a party last week | Encounter(i,encounter1,last_week), With(encounter1,kellie), ... | EM
Kellie is pretty | Pretty(kellie) | GM/EM
If a person says p then p is true unless the person is lying / joking / etc | Says(person,p) ⇒ p unless Lying(person), Joking(person), ... | GM

How does memory recall work?

Which facts are to be recalled from memory depends on the following factors:

  1. goals in Working Memory
    — eg "Is Kellie lying?"
  2. facts (and thus predicates, variables, constants etc) in Working Memory
  3. relative importance (= saliencies) of the facts in Working Memory
  4. specific requests from:

See also Generic Memory.

Note that the Inference Engine (or the Planner) accesses other memory systems via Working Memory. For example, the Inference Engine may ask Generic Memory to find all the rules applicable to (ie having the "head" of) Says(person,something). Alternatively, this recall happens automatically when the fact Says(kellie,she_is_busy) appears in Working Memory.
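
As an illustration, here is a minimal sketch in Python of predicate-indexed recall. All class and function names here are hypothetical, not part of the architecture; the sketch only shows the "automatic" variant, where asserting a fact pulls in the rules whose head uses its predicate:

    # Minimal sketch: Generic Memory indexes rules by the predicate of
    # their head; a fact arriving in Working Memory triggers recall.
    from collections import defaultdict

    class GenericMemory:
        def __init__(self):
            self.rules_by_head = defaultdict(list)  # predicate -> rules

        def add_rule(self, head_predicate, rule):
            self.rules_by_head[head_predicate].append(rule)

        def recall(self, predicate):
            # Return all rules whose head uses this predicate.
            return list(self.rules_by_head[predicate])

    class WorkingMemory:
        def __init__(self, generic_memory):
            self.gm = generic_memory
            self.facts = []
            self.rules = []

        def assert_fact(self, fact):
            # Asserting a fact automatically pulls in the rules
            # applicable to its predicate.
            self.facts.append(fact)
            self.rules.extend(self.gm.recall(fact[0]))

    gm = GenericMemory()
    gm.add_rule("Says", "Says(person,p) => p unless Lying(person), ...")
    wm = WorkingMemory(gm)
    wm.assert_fact(("Says", "kellie", "she_is_busy"))
    print(wm.rules)  # the Says rule has been recalled from GM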

The Inference Engine may talk directly to Generic Memory and other memory systems (instead of passing through Working Memory). This issue has not been decided yet.

{ To do: How to ensure that important facts/rules are not missed from recall? }

The rete algorithm

The rete (pronounced "ree-tee") algorithm was proposed by [Forgy 1982] and is the core algorithm in many expert system shells such as OPS5, CLIPS (written in C), and Jess (written in Java). It is also a core component of Soar, a rule-based cognitive architecture.

The purpose of rete is to match many rules against many facts efficiently. A "dumb" algorithm would scan all the rules and try to match each one against the facts, the so-called "rules finding facts" approach. Rete instead implements a "facts finding rules" approach, exploiting the fact that most facts in working memory persist from iteration to iteration, so only a few facts change per iteration.

Another function of rete is "predicate hashing". For example, a fact may contain 2 predicates X and Y. This fact may be matched against a potentially huge number of rules. Checking each rule even once would be prohibitive. Hashing of the predicates allows only those rules that contain the predicates X or Y in their LHS to participate in the matching.
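
A rough sketch of the predicate-hashing idea in Python (names are hypothetical; a real rete implementation builds a full discrimination network on top of this index):

    # Index rules by the predicates appearing in their LHS, so that a
    # changed fact only touches the rules that could possibly match it.
    from collections import defaultdict

    rule_index = defaultdict(set)  # predicate -> rule ids

    def register_rule(rule_id, lhs_predicates):
        for pred in lhs_predicates:
            rule_index[pred].add(rule_id)

    def candidate_rules(fact_predicates):
        # Rules mentioning none of the fact's predicates are never examined.
        candidates = set()
        for pred in fact_predicates:
            candidates |= rule_index[pred]
        return candidates

    register_rule("r1", ["Says", "Lying"])
    register_rule("r2", ["Pretty"])
    print(candidate_rules(["Says"]))  # {'r1'} -- r2 is skipped entirely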


Deduction

Deduction can be done in many ways; we survey them only briefly because they are not the computational bottleneck of common-sense reasoning. The knowledgebase of general intelligence is huge, so inference should only be applied to a limited number of facts/rules within Working Memory. After the selection of relevant facts/rules from memory (see the section above), inference can be done via traditional algorithms even though the problem is NP-hard in general.

Model checking

A model is a particular assignment of truth values to the variables. Model checking means enumerating all possible models to determine:

  1. in which models the KB is true, and
  2. whether the target sentence is true in all of the models where the KB is true.
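
For the propositional case, a minimal enumeration sketch in Python (the lambda encoding of the KB and query is just for illustration):

    # Enumerate all models; the KB entails the query iff the query is
    # true in every model that makes the KB true.
    from itertools import product

    def entails(kb, query, symbols):
        for values in product([True, False], repeat=len(symbols)):
            model = dict(zip(symbols, values))
            if kb(model) and not query(model):
                return False  # a model of KB where the query fails
        return True

    # KB: (p => q) and p.  Query: q.
    kb = lambda m: ((not m["p"]) or m["q"]) and m["p"]
    query = lambda m: m["q"]
    print(entails(kb, query, ["p", "q"]))  # True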

There are many model-checking algorithms:

  1. DPLL algorithm [Davis, Putnam, Logemann, Loveland 1960, 1962] for propositional entailment
    — implements a backtracking depth-first search with early termination (a minimal sketch follows this list)
  2. local-search algorithms
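
A minimal DPLL sketch in Python, assuming clauses are represented as frozensets of integer literals with -n standing for the negation of n (unit propagation only; no pure-literal rule):

    def simplify(clauses, lit):
        # Assume lit is true: drop satisfied clauses, shrink the rest.
        out = []
        for c in clauses:
            if lit in c:
                continue          # clause already satisfied
            c = c - {-lit}        # the opposite literal is now false
            if not c:
                return None       # empty clause: contradiction
            out.append(c)
        return out

    def dpll(clauses):
        # Unit propagation with early termination on contradiction.
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is not None:
            s = simplify(clauses, unit)
            return s is not None and dpll(s)
        if not clauses:
            return True           # every clause satisfied
        lit = next(iter(clauses[0]))
        # Backtracking depth-first search: branch on lit true, then false.
        for s in (simplify(clauses, lit), simplify(clauses, -lit)):
            if s is not None and dpll(s):
                return True
        return False

    # Entailment via refutation: KB = {p or q, not p or q}; query q.
    # KB ^ not q is unsatisfiable, so KB entails q.
    cnf = [frozenset(c) for c in ({1, 2}, {-1, 2}, {-2})]
    print(dpll(cnf))  # False (unsatisfiable)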

Resolution

  1. Proof by contradiction: to show p, show that (KB ∧ ¬p) is unsatisfiable.
  2. First, the knowledgebase (KB) is converted to CNF (conjunctive normal form, ie a conjunction of disjunctions of literals).
  3. Uses a single inference rule, similar to "from A ∨ B and ¬B, infer A" but generalized to clauses with more literals.
  4. The algorithm repeatedly selects 2 clauses from { KB, ¬p } to apply the resolution rule.
  5. If the 2 clauses resolve to an empty clause (signifying a contradiction), the algorithm returns true; otherwise the new clause is added to the { KB, ¬p } set.
  6. If no new clause can be added to { KB, ¬p }, it returns false.
  7. A third possible outcome is that the algorithm does not terminate, due to the undecidability of first-order logic.
  8. Resolution for propositional logic is co-NP-complete.
  9. Some subsets of propositional logic can be solved in polynomial time.
  10. Resolution for propositional Horn clauses can be solved in time linear in the size of KB, using forward- or backward-chaining (a sketch of forward chaining follows this list).
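
A minimal sketch of the linear-time forward-chaining algorithm for propositional Horn clauses, in Python (the rule encoding is hypothetical):

    def forward_chain(rules, facts, query):
        # rules: list of (premises, conclusion); facts: known symbols.
        # Each premise occurrence is counted down at most once, giving
        # time linear in the total size of the KB.
        count = [len(prem) for prem, _ in rules]  # unsatisfied premises
        agenda = list(facts)
        inferred = set()
        while agenda:
            p = agenda.pop()
            if p == query:
                return True
            if p in inferred:
                continue
            inferred.add(p)
            for i, (prem, concl) in enumerate(rules):
                if p in prem:
                    count[i] -= 1
                    if count[i] == 0:      # all premises proved
                        agenda.append(concl)
        return False

    # KB: a, b, (a ∧ b => c), (c => d).  Query: d.
    rules = [(["a", "b"], "c"), (["c"], "d")]
    print(forward_chain(rules, {"a", "b"}, "d"))  # True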

Method of analytical tableaux


Uncertain reasoning

See this page.


Abduction


Induction

See inductive learning.




Reference

[Davis & Putnam 1960] A computing procedure for quantification theory. Journal of the Association for Computing Machinery, 7(3), p201-215.

[Davis, Logemann & Loveland 1962] A machine program for theorem-proving. Communications of the Association for Computing Machinery, 5, p394-397.

[Forgy 1982] Rete: a fast algorithm for the many pattern / many object pattern match problem. Artificial Intelligence, 19, p17-37.


23/Aug/2006 (C) GIRG