
General Intelligence:

Reason maintenance

Interview question: What are some of the big things that
have been learned over the last 50 years that have
helped shape research in artificial intelligence?

John McCarthy: Well, I suppose one of the big things
was the recognition that computers would have to do
nonmonotonic reasoning.

— CNET news, July 2006

Reason maintenance ("consistency-seeking") is an important form of unsupervised learning.

There are still many unresolved issues concerning nonmonotonic reasoning, so I begin with a survey of what has been done before. This page is VERY messy right now, sorry...

There are a number of research areas:

  1. truth maintenance systems (TMSs)
  2. probabilistic / fuzzy logics
  3. nonmonotonic logics
  4. theory revision (or belief revision)
  5. abduction

and they are all inter-related in some deep ways. A serious challenge is to elucidate their nature and to formulate a comprehensive solution, perhaps by unifying some of these mechanisms.

The difficulty is compounded by the fact that all of the above mechanisms are computationally explosive, so even after formulating the logic we still need some heuristics for a practical and efficient reasoning system.

My intuition is that we should build a belief maintenance system based on a probabilistic / fuzzy logic, ie an uncertain logic. Based on this logic, we can work out an efficient deductive algorithm similar to resolution, and then tackle the problems of nonmonotonic logic, paraconsistency, and belief revision.

  1. Requirements of belief maintenance
  2. Ultra-brief survey of nonmonotonic reasoning
  3. Belief revision / theory revision
  4. Truth maintenance systems

Requirements of belief maintenance

This is a tentative list under construction:

  1. CWA (Closed-World Assumption, aka negation as failure) — facts that cannot be proved are assumed to be false (see the sketch after this list);
    Do we need CWA? Perhaps we should say that unproven facts are simply unknown.
  2. Relaxation (monotonic revision) — derive consequences by propositional constraint propagation, ie make all constraints of a network satisfied;
    Unfortunately this is not a complete proof procedure, unlike resolution.
  3. Hypothetical reasoning — keeping track of assumptions
  4. Default rules (aka nonmonotonic justification) — some facts are assumed to be true if consistent with current knowledgebase
    How to check consistency?
  5. A way to resolve competing and conflicting default rules — how? need a "support theory".
  6. Paraconsistency — should we allow logically inconsistent statements in the KB?
  7. Modal logic — maintains distinction between knowledge and beliefs
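
To make requirements 1 and 4 concrete, here is a minimal Python sketch of negation as failure over a toy Horn-clause KB, together with a naive consistency check for one default rule. All predicates and the forward_chain helper are invented for illustration; a real system would need a proper theorem prover.

    # Toy sketch: negation as failure (requirement 1) and a naive
    # consistency check for a default rule (requirement 4).

    def forward_chain(facts, rules):
        """Derive all provable atoms from Horn rules (body_atoms, head)."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if head not in derived and all(b in derived for b in body):
                    derived.add(head)
                    changed = True
        return derived

    facts = {"bird(tweety)"}
    rules = [(["bird(tweety)"], "has_wings(tweety)")]
    provable = forward_chain(facts, rules)

    # Negation as failure: what cannot be proved is assumed false.
    print("penguin(tweety)?", "penguin(tweety)" in provable)    # False

    # Default rule: conclude flies(tweety) if penguin(tweety) is not
    # provable; here "consistent with the KB" degenerates into a
    # provability check, which sidesteps the hard consistency problem.
    if "penguin(tweety)" not in provable:
        provable.add("flies(tweety)")
    print("flies(tweety)?", "flies(tweety)" in provable)        # True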

Ultra-brief survey of nonmonotonic reasoning

(What follows is partly based on these AI textbooks: [Luger 2005], [Russell & Norvig 2003], [Dean, Allen & Aloimonos 1995])

Traditional logic is monotonic: it begins with a set of premises and infers their consequences. Adding new information to the system can only enlarge the set of true statements; it never causes existing truths to be retracted. Nonmonotonicity means the ability to make assumptions and to retract them later in light of additional information.

In the Kellie example, Kellie tells me she is busy, but she may or may not be lying. If the subjective probability of her lying is, say, 0.6, a good inference engine should derive consequences for both cases.

Thus, nonmonotonic reasoning is essential for GI, but there are many ways of achieving it:

Abduction

From Q and E => Q, infer E

For example, from the road is wet (Q, observation), we infer that it rains (E, explanation). This reasoning is nonmonotonic because additional facts may cause us to give up certain hypotheses.

More formally, logic-based abduction requires that (see the sketch below):

  1. KB U E |= Q
    where KB is the background knowledge, E is the explanation, and Q the observation; and
  2. KB U E is consistent.
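
Both conditions can be tested by brute-force model enumeration over a small propositional vocabulary. The following Python sketch uses an invented rain/sprinkler/wet example; it is a sanity check, not an efficient abduction procedure.

    # Brute-force check of the two abduction conditions.
    from itertools import product

    ATOMS = ["rain", "sprinkler", "wet"]

    def models(formulas):
        """All truth assignments satisfying every formula in the list."""
        out = []
        for values in product([False, True], repeat=len(ATOMS)):
            v = dict(zip(ATOMS, values))
            if all(f(v) for f in formulas):
                out.append(v)
        return out

    KB = [lambda v: (not v["rain"]) or v["wet"],         # rain -> wet
          lambda v: (not v["sprinkler"]) or v["wet"]]    # sprinkler -> wet
    Q = lambda v: v["wet"]      # observation: the road is wet
    E = lambda v: v["rain"]     # candidate explanation: it rains

    entailed   = all(Q(v) for v in models(KB + [E]))   # 1. KB u E |= Q
    consistent = len(models(KB + [E])) > 0             # 2. KB u E is consistent
    # Note: without check 2, an inconsistent E would vacuously "entail" Q.
    print("E is an explanation of Q:", entailed and consistent)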

Set-covering abduction

Set covering is the oldest and simplest approach. We have a set of hypotheses H and a set of manifestations M, related by a function e that takes a set of hypotheses and returns the corresponding set of manifestations, ie M = e(H). A common assumption is that the effects of the hypotheses are independent, ie e(H) is the union of e(h) over all h in H. If this condition is met, abduction is a form of "set covering". [From Wikipedia: abductive reasoning]
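
Under the independence assumption, minimal set covers can be found by enumerating hypothesis sets in order of size. A Python sketch with invented disease/symptom data:

    # Set-covering abduction: find minimal hypothesis sets whose
    # combined effects cover the observed manifestations.
    from itertools import combinations

    effects = {                 # e(h) for each individual hypothesis h
        "flu":     {"fever", "cough"},
        "covid":   {"fever", "cough", "anosmia"},
        "allergy": {"cough"},
    }
    observed = {"fever", "cough"}

    def covers(hyps):
        """e(H) as the union of e(h), assuming independent effects."""
        return set().union(*(effects[h] for h in hyps)) >= observed

    # Enumerating by increasing size guarantees inclusion-minimality:
    # a cover is kept only if no smaller cover is contained in it.
    minimal = []
    for r in range(1, len(effects) + 1):
        for hyps in combinations(effects, r):
            if covers(hyps) and not any(set(m) <= set(hyps) for m in minimal):
                minimal.append(hyps)
    print(minimal)              # [('flu',), ('covid',)]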

Minimum models

An explanation with fewer exceptions (abnormalities) should be preferred. A minimum model, in its narrow sense, is a model with the fewest abnormalities that satisfies a set of expressions. In a more general sense it is a model with the fewest true atoms satisfying a set of expressions.

There are two approaches to minimum models: circumscription and the closed-world assumption.

Circumscription

Bird(x) ^ not Abnormal1(x) => Flies(x)

A circumscriptive reasoner is entitled to assume not Abnormal1(x) unless Abnormal1(x) is known to be true.
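
A brute-force illustration in Python: propositionalize the rule above for a single individual, enumerate all models, and keep those with a minimal set of abnormal atoms. The encoding is mine, purely for illustration.

    # Circumscription by brute force: prefer models that minimize
    # the abnormality atoms.
    from itertools import product

    ATOMS = ["bird", "abnormal1", "flies"]
    theory = [
        lambda v: v["bird"],                                        # Bird(x)
        lambda v: (not v["bird"]) or v["abnormal1"] or v["flies"],
        # Bird(x) ^ not Abnormal1(x) => Flies(x)
    ]

    # All models, represented as sets of true atoms.
    sat = []
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(f(v) for f in theory):
            sat.append({a for a in ATOMS if v[a]})

    # Keep models whose abnormal part is minimal under set inclusion.
    ab = {"abnormal1"}
    minimal = [m for m in sat if not any((n & ab) < (m & ab) for n in sat)]
    print(minimal)          # [{'bird', 'flies'}]: the bird flies by default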

Closed-world assumption

Under the closed-world assumption, the knowledgebase is assumed to specify everything that is true; anything left unstated is taken to be false.

Answer set programming

The closed-world assumption can be implemented by a mechanism known as negation as failure. The idea is that anything not proven to be true is assumed to be false.

Sometimes negation as failure will generate spurious assertions that have no justification from explicit rules. A stable model, also known as an answer set, is one containing only assertions with justifications.
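
Stable models can be computed by brute force with the Gelfond-Lifschitz reduct: a candidate set of atoms is stable iff it equals the least model of its reduct. A Python sketch for an invented two-rule ground program (real answer-set solvers are far more sophisticated):

    # Brute-force stable models via the Gelfond-Lifschitz reduct.
    # A rule is (head, positive_body, negative_body).
    from itertools import chain, combinations

    rules = [("flies", ["bird"], ["penguin"]),  # flies :- bird, not penguin.
             ("bird",  [],       [])]           # bird.
    atoms = sorted({r[0] for r in rules} |
                   {a for r in rules for a in r[1] + r[2]})

    def least_model(positive_rules):
        """Least model of a negation-free program, by forward chaining."""
        m, changed = set(), True
        while changed:
            changed = False
            for head, body in positive_rules:
                if head not in m and all(b in m for b in body):
                    m.add(head)
                    changed = True
        return m

    def is_stable(candidate):
        # Reduct: drop rules whose negative body meets the candidate,
        # then erase the remaining negative literals.
        reduct = [(h, pb) for h, pb, nb in rules
                  if not any(n in candidate for n in nb)]
        return least_model(reduct) == candidate

    candidates = chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1))
    print([set(c) for c in candidates if is_stable(set(c))])
    # [{'bird', 'flies'}]: flies is justified since penguin is not derivable.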

Default logic

Default logic [Reiter 1980] has inference rules of the form:

A(z) : B(z) / C(z)

which is read: if A(z) is true, and if B(z) is consistent with the knowledgebase, then C(z) can be concluded by default.

Default logic uses these special inference rules to generate sets of plausible extensions of the original set of axioms.
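
The classic Nixon-diamond example (not from the text above) shows how conflicting defaults yield multiple extensions. In this naive Python sketch the defaults are normal (A : C / C), and consistency is reduced to checking for a complementary literal, which is only adequate for such toy cases:

    # Naive extension generation for normal defaults: try every
    # application order and collect the distinct fixpoints.
    from itertools import permutations

    facts = {"republican", "quaker"}
    defaults = [("republican", "-pacifist"),  # republican : -pacifist / -pacifist
                ("quaker",     "pacifist")]   # quaker : pacifist / pacifist

    def consistent(beliefs, literal):
        """Toy consistency test: the complementary literal is absent."""
        neg = literal[1:] if literal.startswith("-") else "-" + literal
        return neg not in beliefs

    extensions = set()
    for order in permutations(defaults):
        beliefs = set(facts)
        for prereq, concl in order:
            if prereq in beliefs and consistent(beliefs, concl):
                beliefs.add(concl)
        extensions.add(frozenset(beliefs))

    for e in sorted(extensions, key=sorted):
        print(sorted(e))
    # Two extensions: one containing 'pacifist', one containing '-pacifist'.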

Autoepistemic logic

From [Sowa 2000]: "In a closed world, knowledge and belief are identical: every proposition known to be true is believed, and all others are unbelievable because they are known to be false. In an open world, beliefs and defaults are tentative excursions into the province of what is unknown. [Moore 1985] developed a version of nonmonotonic logic based on beliefs rather than defaults."

Logic-based abduction

{ Survey of logic-based abduction by [Konolige 1996]: Circumscription and default logic can be implemented by logic-based abduction. }

{ What is knowledge-level abduction? [Levesque 1989] }

Probabilistic methods

Certainty factors, fuzzy sets, Dempster-Shafer theory, and Bayesian belief networks.

Conclusion

{ What features are essential, where is the computational bottleneck, etc }


Belief revision / theory revision

Belief revision deals with how to update a knowledgebase when a newly added fact makes the KB internally inconsistent. For example:

  1. (A) The bird caught in the trap is a swan.
  2. (B') The bird caught in the trap comes from Sweden.
  3. (B' -> B) Sweden is part of Europe.
  4. (A ^ B -> C) All European swans are white.

These facts would entail the following fact:

(C) The bird caught in the trap is white.

Now suppose it is observed that the bird caught in the trap is black. This means that the fact "not C" should be added to the KB. But then the KB becomes inconsistent, and we must retract at least one of statements 1-4. Such a choice requires extra-logical considerations; logic alone is insufficient to determine it (see the sketch below).
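
The point can be made concrete by enumerating the maximal subsets of statements 1-4 that remain consistent with "not C" (the remainder sets studied in belief revision). The propositional encoding below is mine, purely for illustration:

    # Enumerate maximal subsets of the KB consistent with "not C".
    from itertools import product, combinations

    ATOMS = ["A", "Bp", "B", "C"]       # Bp stands for B'
    KB = [("A",         lambda v: v["A"]),
          ("B'",        lambda v: v["Bp"]),
          ("B' -> B",   lambda v: (not v["Bp"]) or v["B"]),
          ("A^B -> C",  lambda v: (not (v["A"] and v["B"])) or v["C"])]
    not_C = lambda v: not v["C"]

    def satisfiable(formulas):
        for values in product([False, True], repeat=len(ATOMS)):
            v = dict(zip(ATOMS, values))
            if all(f(v) for f in formulas):
                return True
        return False

    # Work downward by size so that maximality is easy to enforce.
    remainders = []
    for r in range(len(KB), -1, -1):
        for subset in combinations(KB, r):
            names = {name for name, _ in subset}
            if satisfiable([f for _, f in subset] + [not_C]) \
               and not any(names < m for m in remainders):
                remainders.append(names)
    for m in remainders:
        print(sorted(m))
    # Four options, each dropping exactly one of the statements 1-4.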

A standard reference for belief revision is [Gardenfors & Rott 1993]. A special case of belief revision is theory revision, where "theory" is a technical term meaning a deductively-closed set of formulae.

Much of belief revision research is centered around the so-called "AGM postulates" (proposed by Alchourron, Gardenfors, and Makinson in 1982-1985).

My opinion: the study of belief revision is more theoretical and abstract, and seems to be of less practical value. Another drawback is that it ignores the explanatory dependencies / justifications among beliefs.

{ To do: epistemic entrenchment }


Truth maintenance systems

Truth maintenance systems (TMSs) allow the efficient updating of beliefs by keeping track of which sentences support other sentences, so-called dependency relations.

Functions of a TMS may include:

  1. identifying responsibility for conclusions (justifications)
  2. contradiction resolution / recovery from inconsistencies
  3. dependency-directed backtracking (a form of inconsistency resolution)
  4. maintaining a cache of inferences (for more efficient inference)
  5. default reasoning (manage assumptions and retract them when necessary)

JTMS: justification-based truth maintenance system [Doyle 1979]. The system is a network defined by a triple (N, P, J), where N is the set of all nodes, P is a subset of N called the premises (which the system believes without justification), and J is a set of justifications of the form < A | B -> c >, meaning that c can be inferred when every node in A is believed and every node in B is not. Each node contains one logical statement. The JTMS labels each node with either IN or OUT, where IN means the node is believed and OUT means it is not.
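
A much-simplified labeling sketch in Python: starting from the premises, a node becomes IN when some justification < A | B -> c > has all of A IN and all of B OUT. A real JTMS must also retract labels and handle cycles through the OUT-lists, which this naive fixpoint ignores; the node names are invented.

    # Naive JTMS-style labeling by forward propagation.
    premises = {"bird"}
    justifications = [          # (in_list A, out_list B, consequent c)
        (["bird"],  ["penguin"], "flies"),
        (["flies"], [],          "can_migrate"),
    ]
    nodes = {"bird", "penguin", "flies", "can_migrate"}

    IN = set(premises)
    changed = True
    while changed:
        changed = False
        for A, B, c in justifications:
            holds = all(a in IN for a in A) and all(b not in IN for b in B)
            if holds and c not in IN:
                IN.add(c)
                changed = True

    print({n: ("IN" if n in IN else "OUT") for n in nodes})
    # penguin is OUT, so flies and can_migrate are labeled IN.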

Dependency-directed backtracking has been criticized as insufficiently understood and controlled.

ATMS: assumption-based truth maintenance system [de Kleer 1986]. The labels are no longer IN or OUT, but the sets of assumptions underlying each node's derivation. One advantage of ATMS over JTMS is the ability to deal with multiple states of belief (possible worlds).

LTMS: logic-based truth maintenance system [McAllester 1978]. Relationships between propositions are represented by clauses which can be used to deduce the truth values of any of the propositions they describe.

MBR: multiple belief reasoner [Martins & Shapiro 1988].

Relation between TMSs and abduction

Researchers in the 1980s, eg [Reiter & de Kleer 1987], noticed a strong connection between abduction and the ATMS (assumption-based TMS): each node in the ATMS is labeled by a set of explanations.

A new observation Q is assimilated into the KB, not by directly entering the KB, but by the abductive process of finding the explanations for Q. This is known as knowledge assimilation.

In a TMS, inferences are made via justifications of the form p <- p1, ..., pn, expressing that the proposition p can be derived from propositions p1, ..., pn. The TMS may also record "nogoods", written not(p1, ..., pn), meaning that the propositions p1, ..., pn are incompatible and cannot hold together. Given a set of justifications and nogoods, the TMS's task is to determine which propositions can be derived on the basis of the justifications without violating the nogoods.

The correspondence between TMS and abduction can be made by treating the assumptions as abducible hypotheses, the justifications as the background theory, and the nogoods as integrity constraints; the label of a node then collects that node's explanations.

{ More recently [Bochman 2005] proposed a theory of explanatory nonmonotonic reasoning. }

Relation between TMSs and belief revision

One major difference is that TMSs take explicit care of justifications and grounding of beliefs.




References

[Bochman 2005] Explanatory Nonmonotonic Reasoning. Advances in Logic, vol 4. World Scientific Publishing

[Dean, Allen & Aloimonos 1995] Artificial Intelligence: Theory and Practice. Pearson Education

[de Kleer 1986] An assumption-based truth maintenance system. Artificial Intelligence 28:127-162

[Doyle 1979] A Truth Maintenance System. Artificial Intelligence, 12:231-272

[Gardenfors 1992] Belief revision: An introduction. Gardenfors ed, Belief Revision, #29 in Cambridge Tracts in Theoretical Computer Science, p1-28. Cambridge University Press

[Gardenfors & Rott 1993] "Belief revision" in Handbook of Logic in Artificial Intelligence and Logic Programming, vol 4, Epistemic and Temporal Reasoning. Oxford Science Publications

[Konolige 1996] Abductive Theories in Artificial Intelligence. Principles of Knowledge Representation, Brewka ed, CSLI Publications

[Levesque 1989] A Knowledge-Level Account of Abduction. Proceedings of the International Joint Conference on Artificial Intelligence. Detroit, MI

[Luger 2005] Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 5th ed. Pearson Education

[McAllester 1978] A three-valued truth maintenance system. MIT AI Lab, Memo 473

[Moore 1985] Semantical considerations on nonmonotonic logic. Artificial Intelligence 25:1

[Reiter 1980] A logic for default reasoning. Artificial Intelligence, 13:81-132

[Reiter & de Kleer 1987] Foundations of Assumption-Based Truth Maintenance Systems, in Proceedings of AAAI-87, Seattle, WA, p183.

[Russell & Norvig 2003] Artificial Intelligence: A Modern Approach, 2nd ed. Prentice Hall

[Sowa 2000] Knowledge Representation: Logical, Philosophical, and Computational Foundations. Brooks/Cole USA


05/Oct/2006 (C) GIRG