Logic Representation of Natural Language
    for a Resolution Theorem Proving Program

    July 1993


    Plan "B" Project for Master's Degree in Computer Science

    Student:  Jonathan Henckel

    Advisor:  Dr. James Slagle

    Examining Committee:  Dr. Maria Gini, Dr. James Dickey


(c) Copyright by John Henckel 1991, 1993.  All rights reserved.


Abstract
========

The syntactic network knowledge representation, which is similar to
semantic networks, is described.  A parser is described which uses
syntactic networks to translate restricted English to well formed
formulas (wff) of first-order predicate logic.  The parser is
integrated with a theorem proving program based on the connection
graph proof procedure (CGPP).  The data structures and code
organization of the program, written in C, are described.  Algorithms
for parsing wffs, conversion to conjunctive normal form, and CGPP are
described.  An algorithm for unification by reference rather than
substitution is described.  Finally, topics for further study are
suggested, including connection graph search methods.

Introduction
============

Computers can store vast amounts of information.  However, information
alone is not knowledge.  The word knowledgeable applies to animate objects
which possess information and can use it to, at least, answer questions.
Intelligence also refers to possessing information.  However, an
intelligent object must be able to use information creatively to set its
own goals, to solve problems, to learn from experience, and to be artistic
or humorous.  This project attempts to make the computer more knowledgeable,
but it does not attempt to make the computer intelligent.

For a computer to be knowledgeable, it needs (1) a language in which the
user can enter facts and ask questions, (2) a reasoning procedure, (3) a
knowledge representation which stores the facts to facilitate the reasoning
procedure, and (4) a language for displaying knowledge and answers to
questions.  The best choices for these parts depend on the knowledge
domain.  The domain could be procedures, facts, beliefs, probabilities,
quantities, or something else.

For this project I have written a computer program to answer questions
about factual knowledge (things that are true or false) concerning common
things, such as "Mary rides her bicycle to work."  The input language is a
subset of English which is parsed using a recursive transition network
(Winston84).  The reasoning procedure used is the connection graph proof
procedure, CGPP, (Kowalski75) which is based on the resolution rule
(Robinson65).  I chose this procedure because it is "neat" (Sowa83).  It
consists of only a few simple rules, and is based on first-order logic, FOL.
An object-relation paradigm based on semantic networks (Nilsson80) is used
to convert the input language to FOL.  The output of the program is FOL; it
is not converted back to English.

Definitions
===========

Natural Language
----------------

The following list defines some of the terms used in the discussion of
natural language.

    agent, verb, direct object, indirect object are used as they are
                defined in most grammar textbooks.  For example, in "Mary
                gave the book to John", the agent is Mary, the verb is
                gave, the direct object is the book, and the indirect
                object is John.
    predicate   is the verb and the objects and their modifiers.
    clause      is a string of words representing one complete concept, i.e.
                it must have one agent and one predicate.
    question    is a clause in which some part of the clause is being
                queried, usually with words like "what" or "how".
    sentence    is one or more clauses separated by logical connectives.

Predicate Calculus
------------------

The discussion of the theorem prover in the second half of this
paper uses the definitions for predicate calculus given in
GN86.  These are:

  Depending on its context, every symbol is a constant or a variable.
    (The examples in this paper use x, y, and z for variables.)
  A constant with no arguments is called an object constant.
  An n-ary function constant followed by n arguments is called a function
    expression.
  A term is either an object constant, a variable, or a function
    expression.  The arguments to functions are terms.
  An atomic expression (or atom) is a relation constant (or predicate)
    possibly followed by arguments which are terms.
  A literal is an atom or a negated atom.
  A clause is a set of literals.

First Order Logic
-----------------

The well-formed-formulas (wff) used in this paper follow the definition
found in most logic textbooks (such as Thomas77), except that quantifiers
are given a low precedence instead of a high precedence.  The following
table lists the wff operators.  The name or the symbol of each operator may
be used in a wff.

Precedence  Name    Symbol  Usage          Description
----------  ----    ------  ------------   -----------
    1.      not     -       Unary prefix   negation
    2.      and     &       Binary infix   conjunction
    3.      or      |       Binary infix   disjunction
    3.      xor     ^       Binary infix   exclusive or
    4.      if      <       Binary infix   is implied by
    4.      then    >       Binary infix   implies (i.e. only if)
    5.      iff             Binary infix   if and only if
    6.      all x           Unary prefix   universal quantification
    6.      some x          Unary prefix   existential quantification

The associativity of operators with the same precedence is left to right
for binary, and right to left for unary.  Parentheses may be used to
override the natural precedence and associativity.

English Language Parser
=======================

What is information?  Many people would say that the words on this page are
information.  However, if no one can understand the words, they are not
information.  They are meaningless symbols.  The symbols are information if
they stimulate people's thoughts in a predictable way.  The sentence "Mary
rides her bike to work." may be stored in the computer as one symbol:

  Mary_rides_her_bike_to_work

This approach has the advantage that it is easy to write a computer program to
automatically convert from English to the internal representation.  No
common sense knowledge is required, just change all the spaces to
underscores.  In this form, the computer could answer the question

  Is it true that Mary rides her bike to work?

affirmatively by comparing the symbol in its memory with the symbol
in the question.  However, the computer could not answer the questions

  What does Mary ride?
  How does Mary go to work?

The computer does not "understand" the sentence, because it can only answer
trivial questions.  In this case, the reason is that the knowledge
representation is too weak.

Semantic Networks
-----------------

Semantic networks are a representation that has been proposed to
effectively represent the information in English sentences.  Sowa83
describes a kind of semantic network called conceptual graphs.  The
sentence about Mary as a conceptual graph would be

  [RIDE] -
      (AGNT) -> [PERSON:Mary]
      (OBJ)  -> [BIKE] <- (POSS) <- [PERSON:Mary]
      (DEST) -> [WORK]

This can be easily converted to FOL by changing each node and arc to a
predicate (a truth function) as follows:

  some r,b,w : RIDE(r) & PERSON(Mary) & BIKE(b) & WORK(w) &
               AGNT(r,Mary) & OBJ(r,b) & POSS(Mary,b) & DEST(r,w).

where r, b, and w are variables and RIDE(r) means "r is an instance of
riding".

Semantic networks seem to effectively represent most of the information
present in the English sentence.  However, translating from English to a
semantic network is not a simple process.  I had to know that "Mary" is
probably a person (because only people and monkeys can ride bikes, and
monkeys don't go to work usually), and "ride" is a transportation verb so
that "work" is the destination, not the recipient, and so on.  I also had
to know that "to work" is an idiomatic expression meaning "to the place of
one's employment" (except in other expressions such as "put to work").  For
a computer to convert English to semantic networks would require a detailed
lexicon (dictionary) and much common sense knowledge.

Syntactic Networks
------------------

For this project, I used a knowledge representation similar to semantic
networks called syntactic networks.  It does not attempt to encode the
semantics of a sentence; instead, it encodes the syntax of the sentence.
The sentence about Mary is represented as

  some r,b,w : inste(r,ride) & agent(r,Mary) &
               object(r,b) & inst(b,bike) & rela(Mary,owner,b) &
               prep(b,to,w) & inst(w,work).

This is fairly similar to the semantic network.  The POSS predicate is
replaced by a more general ternary relationship predicate.  The DEST
predicate is replaced by a prepositional phrase predicate.  The English
parser does not attempt to determine the semantics of "to work", it only
stores it.  This is the "do what I say, not what I mean" approach to
natural language.

With this representation the computer can answer the question "What does
Mary ride?" using a refutation proof.  The negation of the question would
be

  not some r,x : inste(r,ride) & agent(r,Mary) & object(r,x) & -answer(x).

where answer(x) is a special predicate used for answer extraction as in
Nilsson80.

There are eight different predicates used to represent syntactic networks.
They are

    inst(x,y)     x is an object in the class y
    inste(x,y)    x is an event in the class y
    rela(x,y,z)   x is the y of z,  ex. "Jacob is the father of Dan"
    agent(x,y)    the agent of x is y
    object(x,y)   the object of x is y
    mod(x,y)      x is modified by y,  x may be an event or an object
    prep(x,y,z)   x is y z,  ex.  "John is in Florida", "We ate at noon"
    eq(x,y)       x and y are the same object

  For example, "John gave his sister blue earrings" is represented as

    some g,s,e : agent(g,John) & inste(g,give) &
                 prep(g,to,s) & rela(s,sister,John) &
                 object(g,e) & inst(e,earring) & mod(e,blue).

The "his" is a possessive pronoun and is assumed to be an anaphoric
reference back to John.  The gender implied by the pronoun is ignored.
Because "sister" is a relationship (a subclass of nouns), the phrase "his
sister" is represented rela(s,sister,John) instead of rela(John,owner,s).
The indirect object is represented as a prepositional phrase "to his
sister".  The tense of the verb is ignored.  The plurality of the nouns is
also ignored.  For example, "blue earrings" means at least two earrings
(common sense would say exactly two).  Since "at least two" is cumbersome
to represent in logic, I have represented it as "at least one" (some).

  "John is happy" is represented simply as mod(John,happy).  "John is
Mary" is represented eq(John,Mary).  "Men are mortal" is represented

    some x : inst(x,man) & mod(x,mortal)

In the absence of a quantifier, "men" is assumed to mean "some men".  In
this case, "all men" is probably the meaning.  However, in "men are
present" or "we saw men", the correct representation is "some men".  To be
consistent and conservative, "some" is the default quantifier.  "All men
are mortal" is represented

    all x : inst(x,man) > mod(x,mortal)

where > means "implies" or "only if".

Some predicates have the same meaning.  The predicates inste, inst, and mod
are all used to indicate membership in a class, but inste is used for
verbs, inst is for nouns, and mod is used for adverbs or adjectives.  This
distinction could be useful for translating from logic to English.

The syntactic representation using these predicates is easy to construct
automatically using the computer.  The computer can parse the English
sentence and usually generate the representation directly from the parse
tree.  Constructing a parser for a complete English grammar would be very
difficult.  However, constructing a parser to accept sentences that can be
represented well using this knowledge representation is much easier.  The
grammar is limited to only simple sentences, with one verb, and it can
ignore plurality, gender, and tense.  The following sections describe the
components of the parser.

Lexicon
-------

The lexicon is the dictionary of English words.  Each word in the lexicon
is given a unique number called a token.  The lexicon has 10 word classes.
Each word is in exactly one class.  Some classes have subclasses, but other
than that, the only attribute attached to each word is its class.

    Lexicon Word Classes    Examples
    1. Noun                 ball, man
         Relation           sister, owner, president, head, middle
         Proper             John, Mary
         Pronouns           me, you, it, they, them, itself, there
         Quant. Pronouns    everything, something, anything, nothing
    2. Article              the, all, my, his
    3. Adjective            red, big
    4. Verb                 eat, give, exist
         Linking verb       is, was
         Aux verb           do, does, did
    5. Adverb               fast, merrily
    6. Preposition          in, on, of, by, to, with, for
    7. Negation             no, not
    8. Wh                   what, who, whom, where, when, why
    9. Connectives          and, or, if, then
    10. Extra               very

Synonyms may be put in the lexicon by preceding them with an equal sign.
For example, "it =he =she =they" means that he, she, and they are all
synonyms of the word it.  The first word (it) is the main word to which all
the synonyms are translated.  The grammar ignores the case, gender, and
plurality of pronouns, so all third person pronouns are translated to "it".
Synonyms are also used to ignore the tense of the verb.

The "Extra" word class is for words that the grammar currently cannot
parse.  The scanner ignores these words.

A regular noun describes a class of objects, such as ball, house.  The
"Relation" noun subclass is for nouns that describe a relationship between
two objects.

Restricting each word to only one class is usually not a problem.  However,
some words like "gold" are used equally often as a noun and an adjective,
and "work" is used equally often as a verb and a noun.  I hope to find a
better solution to this without adding more ambiguity than the parser can
handle.

Scanner
-------

The scanner uses the lexicon to convert a character string to a list of
tokens.  Two functions of the scanner have already been mentioned:
converting each synonym to its main word, and removing extra words.

The scanner converts all first person pronouns (I, me, we, us, my, mine,
our, ours) to a special proper noun, User.  It converts the second person
pronouns, you, your, and yours, to the special proper noun, Computer.

The scanner also handles some word endings.  If the word ends with "n't" it
is removed and "not" is added after the word.  If the word ends with "'s"
or "s'" it is removed and "(poss)" is added before the word.  The word
(poss) is a special marker adjective which means possessive.  The scanner
checks for commas following a word; if found, they are removed and (comma)
markers are added after the word.  The (comma) markers are discussed later.
When the scanner encounters a word that is not found in the lexicon, it
tries to do the following

    1. if the first letter of the word is uppercase, change it
        to lowercase and look in the lexicon again.
    2. if the word ends with the letter s, remove the s and look
        in the lexicon again.
    3. ask the user if the word is
        (1) a typing error (retype the word, or ignore the sentence),
        (2) a new word that should be added to the lexicon, or
        (3) a synonym of a word in the lexicon.
    4. if the user selects (2), ask for the class of the word
        and store it.
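Steps 1 and 2 of this fallback can be sketched in C.  This is a minimal
illustration with a tiny stub lexicon; the names lookup and find_token are
hypothetical, not the program's actual functions.

```c
#include <ctype.h>
#include <string.h>

/* Stub lexicon for illustration: returns a token number, or 0 if the
   word is absent.  The real lexicon is much larger. */
static int lookup(const char *w) {
    static const char *lex[] = { "robot", "see", "mary", "telescope", 0 };
    for (int i = 0; lex[i]; i++)
        if (strcmp(w, lex[i]) == 0) return i + 1;
    return 0;
}

/* Steps 1 and 2 of the fallback: try lowercasing the first letter,
   then stripping a trailing 's'.  Step 3 (asking the user) is omitted. */
int find_token(const char *word) {
    char buf[64];
    strncpy(buf, word, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    int t = lookup(buf);
    if (t) return t;
    buf[0] = (char)tolower((unsigned char)buf[0]);   /* step 1 */
    if ((t = lookup(buf)) != 0) return t;
    size_t n = strlen(buf);
    if (n > 1 && buf[n-1] == 's') {                  /* step 2 */
        buf[n-1] = '\0';
        if ((t = lookup(buf)) != 0) return t;
    }
    return 0;   /* step 3: fall back to asking the user */
}
```

With this stub, "Robots" resolves to the token for "robot" by applying
both steps in turn.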

  For example, this string of characters:

    "I didn't see any robots, with Mary's telescope."

is converted to a list of eleven token numbers.  For illustration, the
lexicon word corresponding to each token would form this sentence:

    "User do not see all robot (comma) with (poss) Mary telescope"

The word "any" is replaced by its synonym "all".

One way the scanner could be improved is to allow words to be replaced
by phrases more generally.  For example,

        cannot     ->  can not
        someone    ->  some person
        everything ->  every thing
        none       ->  not some

The scanner could also be improved to replace entire phrases.  This would
handle some idiomatic expressions and simplify the grammar.  For example,

        all of the      ->  the
        is eating       ->  eats
        at least one    ->  some

The rewrite rules could be incorporated into the lexicon.
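Word-to-phrase rewrites of this kind could be table driven.  The following
is a sketch only; the table contents come from the examples above, and the
function name rewrite is hypothetical.

```c
#include <string.h>

/* Hypothetical rewrite table: a word on the left is replaced by the
   phrase on the right before parsing. */
static const char *rules[][2] = {
    { "cannot",     "can not" },
    { "someone",    "some person" },
    { "everything", "every thing" },
    { "none",       "not some" },
    { 0, 0 }
};

/* Return the replacement phrase, or the word itself if no rule applies. */
const char *rewrite(const char *word) {
    for (int i = 0; rules[i][0]; i++)
        if (strcmp(word, rules[i][0]) == 0) return rules[i][1];
    return word;
}
```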

Grammar
-------

The grammar has two purposes, (1) to specify the set of English sentences
accepted by the program, and (2) to specify how each sentence will be
parsed, and thus converted to the knowledge representation.

The syntactic network model consists of objects, modifiers to objects, and
relations between objects.  The objects are nouns or verbs, and are
represented as variables, except for proper nouns.  The variables may be
quantified existentially or universally.

In this model, a sentence containing a clause which acts as an object
generally cannot be represented.  The clause may be a gerund clause (I like
eating fish), an infinitive (I like to eat fish), or the object of a verb
like "think" or "believe".  In order to represent the sentence, "Marcus
tried to assassinate Caesar", it is separated into two sentences, "Marcus
tried to do X" and "X is an assassination of Caesar".  Given these facts,
if the computer is asked "Was Caesar assassinated?" it would incorrectly
answer "Yes".  The computer assumes each clause is a true statement.  In
some sentences this is a safe assumption, but such distinctions require
common sense, so in this project, clauses as objects are not accepted by
the grammar.

Another kind of clause is the modifier clause.  These generally can be
represented correctly.  For example, "He ate the candy I bought" can be
separated into two statements "He ate some candy" and "I bought the candy".
Both of these statements are true, provided the same variable is used to
represent the candy in both.  The grammar for this project does not accept
such clauses, but it could be extended later to do so.

In this model, modifiers to words other than nouns and verbs cannot be
represented.  For example, in "It is right beside a very tall building,"
the word right modifies the preposition beside, and the word very modifies
the adjective tall.  The building itself is a variable, such as x, as in
mod(x,tall), but the tallness is a constant, not a variable, so it cannot
be modified.  The grammar does not allow such modifiers.  The user may get
around this by joining words with underscores.  For example, taller_than
and loyal_to are prepositions, and very_tall is an adjective.

The grammar is based on augmented transition networks, (Winston84), though
the implementation is ad hoc, not a general ATN parser.  The grammar is
presented below as a list of regular expressions which recursively invoke
one another.  Each regular expression is equivalent to a finite automaton
transition network.  I chose the regular expression notation because it is
easier to type.  The notation is

  Key:
    expr*           expr repeated zero or more times
    exp1 + exp2     either exp1 or exp2
    exp1 exp2       exp1 exp2 concatenated
    expr'           expr zero or one time

  Word classes:
    The word classes are described in the lexicon section.  The symbol
    "noun" refers to any kind of noun, "rela" refers only to relationship
    nouns.  The symbol "verb" refers only to regular verbs.

  Special words:
    poss    = a possessive marker, e.g. my = poss me
    of      = the word "of"
    comma   = a comma marker
    how     = the word "how"
    if,then,and,or = logical connectives

  Symbols:                            Examples
    N = a noun phrase                 the red bicycle
    M = a noun or adjective phrase    Mary's
    PP = prepositional phrase         by the table
    V = verb phrase                   does not freely give
    LP = object of a linking verb     blue with red stripes
    PR = predicate                    gives me money
    S = declarative sentence          All men are mortal.
    Q = question                      How are you?
    C = compound statement            I like pizza or I like pop.
    E = grammar start symbol

  N : (art + adj + poss noun)* noun (of N)'          (note 1)

  M : (art + adj + poss noun)* (noun (of N)' )'      (note 1)

  PP: comma* prep N

  V : aux' not* adv* (verb + aux) adv*

  LP: not* M' PP*

  PR: (is LP) + (V PP* N' PP* N' (adv + PP)* )

  S : not* N PP* PR

  Q : is not* N PP* LP +
      wh is not* M (PP + not)* prep' +               (note 2)
      wh PR +                                        (note 3)
      (prep wh + wh)' aux not* S +                   (note 3)
      how aux not* S +
      how is not* N (PP + not)*

  C : S ((and + or) S)*

  E : C + Q + if C then C


Note 1:  The (of N) construction in the N and M grammar rules is allowed
only when the preceding noun is of subclass relation.  For example, "Mary
is a sister of John" becomes rela(Mary,sister,John).

Note 2:  I accidentally omitted questions such as "Who is Ron a father of?"
This could be added later by changing prep' to (prep + N of)' where N is a
relation noun.

Note 3:  The grammar is designed to be deterministic for a bottom-up
left-to-right parser, so that no backtracking is required.  There is one
case in the wh-questions in which a look-ahead of 1 or 2 symbols is required to
determine the correct path.  This is to distinguish "Who did not see John"
and "Who did not John see" when the parser is at the word "did".  In one
question the agent is being queried, and in the other the direct object is
being queried.

A major source of ambiguity in English is the attachment of prepositional
phrases.  For this project, I decided to remove the ambiguity instead of
resolve it.  By default, every prepositional phrase is assumed to be
attached to the preceding object.  An object can be a noun or a verb (not a
linking or auxiliary verb).  By putting a comma before the prepositional
phrase, the user can attach it to the object before the preceding one.  Two
commas would attach it to the object two back, and so on.  An example from
Winston84,

    The robot saw the red pyramid on the hill with the telescope.

has five objects:  robot, saw, pyramid, hill, and telescope.  By default,
on is attached to (modifies) pyramid, and with is attached to hill.  A more
likely interpretation is

    The robot saw the red pyramid on the hill,, with the telescope.

in which "with the telescope" describes the word saw.  If we mean that the
robot is on the hill and the pyramid is with the telescope, (a highly
unlikely interpretation), we would say

    The robot saw the red pyramid,, on the hill, with the telescope.

The mechanism for this is described more in the next section.

Verb tenses involving participles are not allowed by the grammar.  This
simplifies the grammar somewhat.  It eliminates the need for look-ahead to
distinguish "is" as a main verb versus as an auxiliary verb.  Also it
disallows the use of passive voice, which can produce difficult-to-use
representations.  No expressiveness is lost by this because the verb tense
is not part of the knowledge representation anyway.

Auxiliary verbs are allowed by the grammar.  However, the auxiliary verb is
not carried into the knowledge representation.  This would require the
program to reason about possibilities, for example using modal logic.  The
result is that "John will eat" and "John could eat" and "John might eat"
are represented identically to "John does eat".  To avoid misunderstanding,
the words "could", "might", etc.  are omitted from the lexicon.

Verb particles are not allowed by the grammar.  In some cases,
identification of verb particles requires common sense.  For example, in
"He looked over the desk before buying it" and in "He looked over the desk
at the child" the meaning of over is different.  The user may attach the
particle to the verb, look_over, to create a new verb.

The verb "do" when used as a main verb is understood to mean an anonymous
verb.  For example, "John did something" is represented such that John is
the agent of event x, but the class of the event is not specified.  This is
especially useful in questions such as "What did John do?" which queries
all events of which John is the agent.

The sentence "By what is John?" is equivalent to "What is John by?"; the
grammar allows only the latter.  Similarly, "By what does John sit?" is
equivalent to "What does John sit by?"; the grammar allows only the former.
The reason is to simplify the grammar.

Negation in English sentences is a much more complicated problem than I
anticipated.  English allows great variety in the scope of negation using
subtle cues.  The sentence "Every child with a hat sat at a table" can be
negated at least four ways:

    1. "Not every child with a hat sat at a table"
    2. "Every child not with a hat sat at a table"
    3. "Every child with a hat did not sit at a table"
    4. "Every child with a hat sat not at a table"

The first and second may seem awkward, because typically, "no" is used
instead of "not every", and "without" is used instead of "not with".  The
grammar allows negation to occur only at the beginning of the sentence (as
in 1) and in the verb-phrase (as in 3).  Any other form of negation is not
allowed by the grammar.

Determining the meaning of "any" in an English sentence is difficult and
may require common sense.  In the sentence "Mary did not eat any fruit" the
word "any" is synonymous to "some".  In the sentence "Any friend of John is
a fool", the word "any" can be replaced with "every".  However, "any" is
different from "every" because "any" implies that it is not possible for a
friend of John not to be foolish.  In the sentence "Mary danced with any
man", the word "any" implies a possibility.  It means "Mary would have
danced with every man." The logic of possibilities is beyond the scope of
this project, so "any" is treated as synonymous to "every" in all cases.

The grammar allows connectives to be used only between independent
statements.  For example, "Mary ate cake and cookies" must be changed to
"Mary ate cake and Mary ate cookies".  The grammar could be extended later
to allow connectives more generally.  However, I think that it may be
opening Pandora's box in order to resolve the ambiguities.

Variations in word ordering are not permitted.  The word order for
statements must be subject, verb, object; adjectives must precede nouns;
adverbs may appear beside the verb or at the end of the sentence; the
first word cannot be a preposition.

Parser
------

The purpose of the parser is to convert a subset of English sentences
(those accepted by the grammar) to FOL.  The interface to the parser is one
function called ParseWords(input,output).  The input is a string of
characters (null terminated) representing the English sentence.  It may be
a question or a statement.  The output is a string of characters which is
the FOL representation of the sentence.  The output string is prefixed with
's' or 'q' to indicate a statement or question.

The implementation of the parser for the grammar basically converts each
transition network to a function.  The parameters to each function include
the sentence word-list (an array of tokens) and the index of the current
word.  Each function returns the index of the next unparsed word, or zero,
if a syntax error was found.  The functions of the parser are

    noun_phrase:     accepts N or M (depending on a parameter)
    prep_phrases:    accepts PP*, any number of prepositional phrases
    verb_phrase:     accepts V
    link_predicate:  accepts LP
    predicate:       accepts PR
    sentence:        accepts S or Q
    expr:            accepts E
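The shape of these functions can be illustrated with a reduced example
(hypothetical token classes and function names, not the program's actual
code).  It implements the rule PP: comma* prep N, with a simplified noun
phrase:

```c
/* Hypothetical token classes for illustration only. */
enum { END = 0, COMMA, PREP, ART, NOUN };

/* Simplified N: an optional article followed by a noun. */
static int noun_phrase(const int *words, int i) {
    if (words[i] == ART) i++;              /* optional article */
    if (words[i] != NOUN) return 0;        /* a noun is required */
    return i + 1;
}

/* PP: comma* prep N.  Like the program's functions, each takes the
   word list and the index of the current word, and returns the index
   of the next unparsed word, or zero on a syntax error. */
int prep_phrase(const int *words, int i) {
    while (words[i] == COMMA) i++;         /* comma* */
    if (words[i] != PREP) return 0;        /* prep */
    return noun_phrase(words, i + 1);      /* N */
}
```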

The function named "sentence" is a misnomer.  In this paper, I use the term
"clause" to refer to simple statements or questions, S or Q.  I use the
term "sentence" to refer to expressions, E, which also includes clauses.

As each sentence is parsed, the output FOL string is directly constructed
by these functions.  There is no intermediate representation such as a
parse tree.  For example, when the next words are "a boy" and noun_phrase
is called, it appends "inst(x3,boy) & " to the output.

The parser makes use of a global object table (called "thing" in the
program listing).  The fields in the object table are

    name:   an array of object names
    neg:    flag to indicate whether the verb has a negation
    q:      flag to indicate type of question (0=statement)
    ag, vb,
    do, io: indices into the name array for agent, verb, direct
            object, and indirect object.

The object table is cleared and initialized before parsing each clause.

The name array contains the names of each object (noun or verb) as it is
encountered in the clause, left to right.  If the object is a proper noun,
the name (token) is stored.  Otherwise, a quantifier "some" or "all" is
stored in the name array.  The index of the object in the name array is
used to generate a variable, i.e.  x1 is the variable of the first object,
x2 is the second, etc.

The name array is also used to attach prepositional phrases.  When a
prepositional phrase is encountered with no commas before it, it modifies
the last entry in the name array; with one comma, it modifies the
second-to-last entry; etc.  One exception to this rule is with relative
pointers in the name array.  If an entry in the name array is negative, it
is a relative pointer to a previous entry.  This is used in three cases:
for anaphoric references (described later), for possessive nouns, and for
prepositional phrases preceding a linking verb.  An example of the second
kind is

    John pulled the man's dog, by the tail.

The preposition "by" is attached to the verb "pulled", not to "man".
A pointer is put in the name array after the possessive so that it is
skipped over when deciding where to attach "by".  An example of the
third kind is

    The man from the agency is in the house.

The preposition "in" is attached to "man", not to "agency".  A pointer to
the agent is put in after "agency" so that it is not eligible.
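The attachment rule, including the skipping of relative pointers, can be
sketched as follows.  This is a hypothetical simplification in which a
negative entry holds the relative offset back to the pointed-to entry;
the program's actual details may differ.

```c
/* name:   the name array; negative entries are relative pointers
   count:  number of entries so far
   commas: number of comma markers before the prepositional phrase
   Returns the index of the entry the phrase attaches to, or -1. */
int attach_to(const int *name, int count, int commas) {
    int i = count - 1;
    while (i >= 0) {
        if (name[i] < 0) { i += name[i]; continue; }  /* follow pointer */
        if (commas == 0) return i;
        commas--;
        i--;
    }
    return -1;   /* no eligible object */
}
```

For "The man from the agency is in the house", the array might be
{man, agency, -2}: the pointer sends "in" past "agency" back to "man".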

The ag, vb, do, and io are slots which are filled as the corresponding
parts of the clause are encountered.  After the clause is parsed, these
slots are used to generate the corresponding FOL predicates:  agent(vb,ag),
object(vb,do), and prep(vb,to,io).
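Generating these predicates from the filled slots might look like the
following sketch (the function and parameter names are hypothetical; a
NULL slot was not filled):

```c
#include <stdio.h>
#include <string.h>

/* Append the predicates that link the verb to its agent, direct
   object, and indirect object.  Slot values are variable names such
   as "x2" or proper nouns such as "John". */
void emit_links(char *out, const char *ag, const char *vb,
                const char *dobj, const char *io) {
    out[0] = '\0';
    if (ag)   sprintf(out + strlen(out), "agent(%s,%s) & ", vb, ag);
    if (dobj) sprintf(out + strlen(out), "object(%s,%s) & ", vb, dobj);
    if (io)   sprintf(out + strlen(out), "prep(%s,to,%s) & ", vb, io);
}
```

For the clause "John gave Mary a fish", with vb = x2, ag = John,
do = x4, and io = Mary, this yields the same three linking predicates
shown in the examples at the end of this paper.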

As discussed in the previous section, only two forms of negation are
accepted:  negation of the entire clause, and negation of the predicate.  I
will call these types nc and np.  Before processing negation, all
quantifiers in the FOL are moved to the front.  The following rules process
negations

    1. if nc, change the quantifiers in the clause (some <-> all)
    2. if np, change the quantifiers in the predicate
    3. if the agent has universal quantifier, change the conjunction
       (between the subject and predicate) to implication.
    4. if nc xor np, negate the predicate.

As an example, the sentence "a man sat" can be negated nc "no man sat", as
np "a man didn't sit", and as both "no man didn't sit".  In FOL these four
are

        some x1 some x2 inst(x1,man) & inste(x2,sit) & agent(x2,x1)

  nc:   all x1 all x2 inst(x1,man) > not(inste(x2,sit) & agent(x2,x1))

  np:   some x1 all x2 inst(x1,man) & not(inste(x2,sit) & agent(x2,x1))

  both: all x1 some x2 inst(x1,man) > inste(x2,sit) & agent(x2,x1)

An anaphoric reference in English is a reference to a previously mentioned
object in the sentence, or in a different sentence (RK91).  Resolving
such references is an unsolved problem in artificial intelligence, because
it requires common sense.  In this project, I allow some very restricted
anaphoric references.  They are

 1. The same proper name is always assumed to refer to the same object.

 2. The word it and its synonyms (he, she, him, themselves, etc.) refer
    to (1) the nearest previous thing noun (something, anything, etc.),
    or (2) the first noun if only one noun occurs previously in the
    sentence.  It may not refer to an object in a different sentence.

 3. A noun with the article "the" is a reference to the nearest
    previous noun in the same sentence of the same class.

 4. The words x, y, and z are special pronouns which, in the same
    sentence, always refer to the same object.

To illustrate (2), in "if something tastes bitter then it is basic" the
word it refers to the something.  In "A boy threw his ball", the word his
refers to the boy, because boy is the only previous noun.  Note that gender
and plurality are ignored, so "Some girls threw his ball" has a similar
interpretation.

To illustrate (3), the sentence "The man ate fish" is interpreted to mean
"some man ate some fish", and a warning message is printed which says that
the anaphoric reference to "man" cannot be resolved.  However, in "A man
drank wine and the man ate fish", the reference to "the man" can be
resolved, so no message is printed.

Anaphora type (4) allows sentences such as "if x is a parent of y then y is
x's child".  The variables (x,y,z) may be quantified, as in "any x is
before any y or y is before x".  Only the first occurrence of a variable
can be quantified.

English to FOL Examples
-----------------------

Statements

  John gave Mary a fish.
  s some x2 some x4 inste(x2,give) & inst(x4,fish) & agent(x2,John) &
  object(x2,x4) & prep(x2,to,Mary) ;

  A big blue box is very heavy.
  s some x1 inst(x1,box) & mod(x1,blue) & mod(x1,big) & mod(x1,heavy) ;

  All my parent's sibling's children are my cousins.
  s all x1 all x2 all x3 rela(x1,child,x2) & rela(x2,sibling,x3) &
  rela(x3,parent,User) > rela(x1,cousin,User) ;

  My fish is blue with a red tail.
  s some x1 some x5 inst(x1,fish) & rela(User,owner,x1) & mod(x1,blue) &
  inst(x5,tail) & mod(x5,red) & prep(x1,with,x5) ;

Questions

  Are there fish in a pond?
  q some x1 some x2 inst(x1,fish) & inst(x2,pond) & prep(x1,in,x2) ;

  Who is Mary's cousin by?
  q some x1 some x2 rela(x2,cousin,Mary) & prep(x2,by,x1) & -answer(x2,x1) ;

  Who did not eat my blue fish?
  q some x1 all x2 all x3 not( inste(x2,eat) & inst(x3,fish) &
  mod(x3,blue) & rela(User,owner,x3) & agent(x2,x1) & object(x2,x3) ) &
  -answer( x1) ;

  To whom did every man give a fish?
  q some x1 all x2 some x3 some x4 inst(x2,man) > inste(x3,give) &
  inst(x4,fish) & agent(x3,x2) & object(x3,x4) & prep(x3,to,x1) &
  -answer( x4,x1) ;

  What did every dog give to Joe?
  q some x1 all x2 some x3 inst(x2,dog) > inste(x3,give) &
  prep(x3,to,Joe) & agent(x3,x2) & object(x3,x1) & -answer( x1) ;

  Do all men like green eggs?
  q all x1 some x2 some x3 inst(x1,man) > inste(x2,like) & inst(x3,egg)
  & mod(x3,green) & agent(x2,x1) & object(x2,x3) ;

  How do some men eat fish?
  q some x1 some x2 some x3 some x4 inst(x2,man) & inste(x3,eat) &
  inst(x4,fish) & agent(x3,x2) & object(x3,x4) & mod(x3,x1) &
  -answer(x2,x4,x1) ;

  What is by a boat?
  q some x2 some x3 inst(x3,boat) & prep(x2,by,x3) & -answer( x2,x3) ;

Statements not accepted

  The dog that belongs to Mary is rabid.
  Adjective clauses such as "that..." are not accepted.

  Mary passed out.
  Verb particles not accepted.

  A man not from the agency called today.
  Negation can only occur on the verb or the whole sentence.

  John has three cousins.
  Quantities are not represented well.  The verb "has" is not
  understood to mean ownership or relationship.

  At least one of Mary's brothers is married.
  The phrase "at least" is not understood.

  Beth is between Alan and Cathy.
  Ternary relationships, such as between, cannot be represented.
  Compound nouns, verbs, adjectives, etc. are not accepted.

  All people live on a planet.
  This is represented so that each person's planet may be different,
  because the order of the quantifiers is assumed to be left-to-right.

  Mars is the only red planet.
  Jupiter is the largest planet.
  Both of these sentences refer to groups of exactly one object.  The
  program does not correctly represent such sentences.  Instead they
  could be paraphrased:
  if x is a planet and x is red then x is Mars.
  if x is a planet and x is not Jupiter then Jupiter is larger_than x

Questions not accepted

  What did John do to Mary?
  The question refers to an action of which Mary is the direct object.
  However, since the word "to" is used, the program incorrectly
  assumes Mary is the indirect object.  Instead one can ask "What did
  John do Mary?"

  Who is Ron a parent of?
  This form of question is not allowed.

  Who by the boat did Joe greet?
  The prepositional phrase cannot appear in this location.

Theorem Prover Commands
=======================

The theorem proving program commands currently operate on only one set of
clauses, hence one connection graph.  However, the program is designed so
that support for multiple connection graphs could be added.

The following commands are currently implemented.

S wff;
    Convert the wff to clause form and add it to the current graph.

Q wff;
    Convert the negation of wff to clause form, add it to the graph as "set
    of support," and try to find a refutation proof.  The answer, or
    "no proof found" is printed.

Q2
    Recall the wff entered on the last S or Q, add it to the graph as "set
    of support," and try to find a refutation proof.  This is a quick
    way to test for support of a statement or to try to prove the
    negative of a question.

/ sentence
    Convert an English sentence to an S or Q command.

? sentence
    Convert an English sentence to a wff, then issue Q wff.  This is
    similar to / except that it is always a question.

PROOF
    Display the proof for the last question.

LIST
    List all the clauses in the current graph.

SET var value
    Change a program variable.  The current variables are:
    ASK     the number of steps to run the proof before asking if the
            user wants to abort.
    DEBUG   turn debugging print statements on or off.
    TOPIC   set the topic number of clauses added to the graph.
    OCCURS  set the maximum depth of the occurs check during
            unification.

WRITE
    Save the English lexicon to a file.

CONT
    Continue a proof which for some reason was aborted, or look for another
    solution to a question.

* ...
    A comment; it is ignored.

READ filename
    Read commands from a file.

NEW
    Delete all clauses and links in the current graph.

END
    End the program, exit to operating system.

HELP
    Display a list of valid commands.


Code Structure of the Theorem Prover
====================================

The theorem proving program has the following ten components.

English translator
    Converts a sentence in English to a statement or query in first
    order logic.

Parser
    Has functions to parse wff input into a logical expression binary tree
    (LOX).  This is a recursive descent parser.  It calls the scanner to
    get tokens from the input.

Scanner
    Reads the input as a stream of tokens.  White space is ignored, symbols
    such as ",()&" are read as separate tokens, and groups of letters such
    as "and if or" are read as single tokens.

Symbol Table
    Maintains the symbol table as a flat array.  To insert a new symbol,
    the table must be searched to see whether it is already stored.
    The key of a symbol is its index in the array, so retrieving a
    symbol is very fast.

Structure Management
    Supports create, delete, store, and copy of logical expression binary
    trees (LOX).  This is used by the parser.

Convert to CNF
    Converts a logical expression binary tree (LOX) to a binary tree in
    conjunctive normal form.  The "exists" quantifiers are removed and the
    corresponding variables are replaced with skolem functions.  The
    connectives IFF, IF, XOR are replaced by AND, OR, NOT.  The "all"
    quantifiers are distributed over conjunction, so that no quantifier
    spans two clauses.  The AND and OR connectives are made right
    associative.

Graph Management
    Has functions to create a new graph, delete or print a graph, add
    clauses to a graph, build links between clauses, find a refutation
    proof, and display the proof.  This calls the following components.

Clause Management
    Has functions to convert binary trees (LOX) to clause form
    and also to delete and print clauses.

Terms Management
    Has functions to create, copy, delete, and unify term instances (TIN).
    The TIN is the data structure used to represent instances of predicates
    and terms.

Heap Management
    Has functions to manage links in a heap.  These include print,
    evaluate, add, delete, get-max.


Data Structures used in the Theorem Prover
==========================================

The theorem proving program data structures are arranged into five layers.

       Layer           Data Structure
    ----------------------------------------
    |  tokens       |  symbol table
    |  structures   |  LOX binary tree
    |  literals     |  TIN and VARS
    |  clauses      |  CLAUSE linked list
    |  links        |  LINK heap
    ----------------------------------------

The first is the token layer.  All symbols read from the input are stored in
a symbol table, so the other data structures in the program refer to the
symbols by their token ID, which is the index into the symbol table.  The
English language translator does not use the symbol table; it has
its own lexicon.

The next layer, the LOX binary tree, has two uses.  First it is the target
representation for the parser to represent general wff.  This representation
is then used to convert general wff to conjunctive normal form.  For
example, when the input expression is "all x some y(f(x) iff g(x,y))"
the parser creates the following LOX tree:

            all
            / \
           x   some
               / \
              y   iff
                  / \
                 f   g
                /   /
               x   x
                    \
                     y

The left child and right child of each LOX are called a and b, respectively.
When the LOX is a connective, a and b are the operands.  When the LOX is
a quantifier, a is the variable and b is the scope.  When the LOX is a
predicate or function, a is the first argument.  When the LOX is a term,
b is right sibling (in the g predicate, y is the right sibling of x).

When the LOX tree in the example is converted to conjunctive normal form,
it becomes as follows.

                    and
                 /      \
              all         all
             /  \        /  \
            x    or     x    or
                /  \        /  \
             not    g      f   not
             /     /      /    /
            f      x     x    g
           /        \        /
          x          $1      x
                    /         \
                   x           $1
                              /
                             x

The existentially quantified variable, y, has been skolemized.  The universal
quantifiers are left in place so that the scope is apparent during the next
step, which is converting to clause form.

That brings us to the second use for the LOX tree data structure.  After
the LOX is converted to clause form, all quantifiers and connectives in the
LOX tree can be deleted, but the LOX subtrees headed by predicates must be
retained because they are referred to by literal instances in the clause.
The LOX predicate subtrees are stored by the Structure Management functions
into a structure dictionary.

The next data structure layer is the literals layer.  Each clause can have
many literals, some of which may be negated.  Each literal consists of a
record called a TIN (term instance).  A TIN is simply two pointers, one to
a structure (a LOX in the structure dictionary) and one to a list of
variable bindings (a VARS array).  The number of elements in the VARS is
the number of unique variables present in the LOX.  The TIN for a literal
in a clause will always point to the predicate of some LOX structure in the
dictionary.  TINs are used to represent variable bindings.  The VARS array
is simply a list of pointers to TINs.  Initially all the VARS are free, so
by convention a TIN with two NULL pointers is a free variable.  When a
variable becomes bound during unification (discussed later in this paper),
the TIN of the corresponding VARS element is updated.

The next data structure layer is the clause layer.  Clauses are the most
important data structure in the program.  Each clause has the following:

  * set-of-support flag
  * pruned flag, on if the clause has been pruned from the graph
  * proof flag, on if it is part of the refutation proof
  * topic number, this is not used currently, but it could be used to
    control the proof procedure to prefer certain topics.  This is similar
    to weighting described in Wos84.
  * ID number for printing
  * the depth of the clause.  Clauses that are part of the 'given' axioms
    have a depth of zero.  Clauses generated by a resolution step are
    given a depth which is the maximum of the parents' depths plus one.
  * a list of literals, each literal is a pointer to a TIN
  * the use count of each literal, the number of links to the literal.
    This is used to detect purities in the clauses.
  * the sign of each literal, positive or negative
  * a list of all the links to literals in this clause
  * the two immediate 'parents' of this clause.  These pointers are null
    if the clause has depth = 0.  The parent pointers are used to display
    a refutation proof in the event that the empty clause is generated.
  * the next and previous clauses in the linked list

The final data structure used in the program is the link.  A link is
created between two clauses in the graph when it is potentially possible
to resolve them.  Each link has a score which is calculated by a heuristic
evaluation that predicts how useful the resolution represented by the link
will be in finding a proof.  The link also has pointers to two clauses,
called p and n.  The pit and nit fields of the link tell what literals in
the two clauses are being resolved.  The pik and nik fields of the link
tell where each clause has recorded this link in its list of links.

A graph consists of a collection of clauses and links.  The clauses in
the graph are stored in a doubly-linked list.  The links are stored in a
heap.  The heap is a complete binary tree stored as an array of elements.
The heap is sorted such that the link with the largest score is always at
the top (position 0 in the array).

For example, the clause

  all x all y (f(x,y) or not f(y,x))

is represented as follows.  When this wff is parsed, a single LOX tree is
created.  The LOX tree is converted to conjunctive normal form and then
clausified.  Then the literals from the LOX tree are put into the
structure dictionary and the rest of the tree is discarded.

--------------------------------------------
Tokens   1.f
         2.x
         3.y
--------------------------------------------
Structures
          .--> f        .--> f
          |   /         |   /
          |  x          |  y
          |   \         |   \
          |    y        |    x
----------|-------------|----------------
Literals  |      .------|------>(**) FREE TIN
          |      |      |       |
          |      |.-----|-->(**)|
          |      ||     |      ||
         (++)-->|++|   (++)-->|++|

          TIN   VARS   TIN    VARS
------------|-----------|-------------------
Clause      |           |
             <-. .--->---
               | |
     literals |+|+|
     use  (0,0)         neither literal has any links
     sign (1,0)         the sign of the first literal is positive
     flags
     topic
     ID
     .....

--------------------------------------------

The structures in this example store the ID numbers of the tokens.  The
actual symbols are stored in the symbol table.  There are two literals in
this example, each consisting of a TIN and a VARS array.  Two additional
free TINs are pointed to by the variables x and y in each literal.


Algorithms used in the Theorem Prover
=====================================

Parsing
-------

Statements and questions may be given to the theorem prover as well-formed
formulae (wff).  The program parses this input using a recursive descent
parser adapted from ASU86.

   Start -> term5 $
   term5 -> term5 iff term4 | term4
   term4 -> term4 if term3 | term3
   term3 -> term3 or term2 | term2
   term2 -> term2 and term1 | term1
   term1 -> not term1
   term1 -> all atom term1
   term1 -> some atom term1
   term1 -> ( term5 )
   term1 -> term
   term  -> const | const() | const(list)
   list  -> term, list | term

The parser does permit expressions such as "all x (x(a) and 3(x))", but
these errors are detected later.

Conversion to Clause Form
-------------------------

Converting LOX structures representing wffs to clause form took
considerable effort.  The process was adapted from textbooks (GN86,
RK91).  It consists of these five steps:

   1. Remove implication,
      change (if a then b) to (not a or b)
      change (a iff b) to (not a or b) and (a or not b)
      change (a xor b) to (not a or not b) and (a or b).
      The resulting expression has only AND OR NOT and quantifiers.
   2. Reduce the scope of negation,
      change not (a or b) to (not a and not b)
      change not (a and b) to (not a or not b)
      change not all ... to some not ...
      change not some ... to all not ...
      change not not a to a
   3. Skolemize existential variables.
      Recursively traverse the LOX tree, keeping a list of all the
      current quantifiers.  As the scope of a quantifier is entered it is
      added to the list, as it is exited it is removed.  Whenever an
      existential variable is found, it is converted to a skolem function
      containing all the universal variables above the scope of the
      existential variable.  The existential quantifiers are removed.
   4. Increase the scope of conjunction.
      Before this step the AND ALL OR nodes may be in any order.  After
      this step the AND nodes are made right associative and moved to the
      highest scope.
      change ((a and b) and c) to (a and (b and c))
      change ((a and b) or c) to ((a or c) and (b or c))
      change (a or (b and c)) to ((a or b) and (a or c))
      change all x (a and b) to (all x a and all x b)
   5. Clausify.  Convert the CNF LOX tree into a linked list of clauses,
      each containing a list of literals.
      For each universal variable, one free TIN is created so that all
      occurrences of that variable in the clause point to the same free
      TIN.

This method is simpler than the textbook methods because it never needs to
"rename variables apart":  (1) the scope of the universal quantifiers
is reduced in step 4 instead of removing them, and (2) because variables
are converted to pointers, the uniqueness of variable names is not
important in clause form.

Unification
-----------

The data structures used for literals allow the unification algorithm to
be very efficient.  The performance of the algorithm is independent of the
size of the clauses and, except for the occurs check, the worst-case
performance is linear in the size of the two literals.

In the following example, assume w,x,y,z are variables and we want to unify
f(x,t(x),z) and f(a,y,h(w,y)).  First convert the literals to tree
structure as follows


  tree1   tree2
   |        |
   f2       f2
   /        /
  x0       a0
   \        \
    t1       y0
   / \        \
  x0  z1       h2
              /
             w1
               \
               y0

In the trees, each node is assigned a variable count (VC) number.  The VC
number for variables is a unique ID number starting with zero in each tree.
The VC number for constants is always zero.  The VC for functions and
predicates tells the number of variables found up to this symbol (using an
in-order traversal).  These trees are READ ONLY:  they are not changed by
the unification process, and they can be referred to by many term instances
in the database.  This structure sharing saves storage in the theorem
proving program.

The data structure used to represent instances of terms and atoms is a TIN.
An instance of a term (or atom) is different from the term itself because
the instance contains information about the instantiation (binding) of
variables in the term.  A TIN has a pointer to a non-variable node in a LOX
tree, and a pointer to a variable list.  The size of the variable list
depends on the number of variables in the tree.  (In the example both trees
have two variables.)  The variable list is actually an array of pointers to
TINs.  The TINs in the variable list also have two pointers; if both of
these are null, the TIN is free, or unbound.  It is also possible during
unification that two variables become shared, even though neither is bound.
This is accomplished using chain TINs.  A TIN that has a null LOX pointer
and non-null variable list pointer is a chain TIN.  The variable list
pointer points to another TIN which belongs to another variable.  In the
following figures, a null pointer is *, non-null is +. A free variable TIN
is (**), a chain TIN is (*+), and a VARS array is |+|+|.

The two TINs for the example would be:

        tree1                     tree2
          :                         :
          |      (**)               |      (**) FREE TIN
          |       |                 |       |
  tin1:  (++)->|+|+|        tin2:  (++)->|+|+|VARS
                |                  TIN    |
               (**)                      (**)

They both have a head TIN that points to the structure tree and to a VARS
list of size 2.  All the variables are free.

The unification procedure is recursive; it walks the structure trees
pointed to by the two terms.  If the procedure is unable to unify the
two terms, it returns 4 or greater.  If the procedure is able to
unify the two terms without binding any variable in either tin, it
returns 0.  If unification succeeded and only variables in tin1 were
bound, then 1 is returned.  If some variables were bound in both
terms, then 2 or 3 is returned.  A value of 2 indicates that it may
be possible to unify the terms without binding any variable in tin2,
but this is not certain.

As the unify procedure descends the two structure trees, if both nodes are
constants, they are compared.  If either node is a variable, its binding is
looked up in the variable list.  If one of the variables is unbound, it is
bound to the other.  If both variables are unbound, one is chained to the
other. If both variables are bound, or if one tree has a bound variable
where the other has a term, then unify is called recursively.

The two TINs after unification would be:

        tree1     h                     tree2
          :       |                       :
          |      (++)---VARS of tin2      |      (**) FREE TIN
          |       |                       |       |
  tin1:  (++)->|+|+|              tin2:  (++)->|+|+|VARS
                |                        TIN    |
               (+*)                            (++)---VARS of tin1
                |                               |
                a                               t

The first variable in the VARS array of tin1, x, is bound to a constant, a.
The structure pointer points to the a in tree2 and the variable list
pointer is set to null.  The second variable in tin1, z, is bound to h(w,y)
in tree2.  When the value of z is printed, the variables in h(w,y) are
looked up in the variable list (of the z TIN) and printed recursively.
After unification, z is printed as h(w,t(a)).

During unify, each time a variable is bound or shared, a pointer to it is
added to a change-list.  When unify is finished, we can quickly undo the
effect of the unify by going through the change-list and setting all the
TINs it points to to "free" (**).  This un-unify function is useful
during resolution.  To perform a resolution step, first the two parent
clauses are unified on the indicated literals, then the child clause
is produced by copying literals from each parent.  After the copy the
parents are un-unified.

The Four CGPP Rules
-------------------

The four rules of CGPP are resolution, factoring, removal of tautologies,
and removal of subsumed clauses.  (Munch88 lists six rules by including
pruning for purity and variable constraints.)  For convenience, I will refer to them
as R, F, T, and S.  R and F are used to add clauses, and they are required
for completeness.  T and S are used to remove clauses, and they may be used
to prune the search.  All of these rules involve unification between pairs
of literals.  It is instructive to describe these rules in terms of the
following questions.

    1. Is the unification restricted?
       S and T yes, R and F no.
    2. Are the signs of the literals the same?
       S and F yes, R and T no.
    3. Are the literals from different clauses?
       S and R yes, F and T no.

A clause is a tautology if it contains two identical literals with
opposite signs, such as f(x) and not(f(x)).  To check for tautologies, the
program calls Unify for every pair of literals in the clause with opposite
sign.  If Unify returns zero, then it means the literals are identical (no
substitutions were needed).  The clause can be removed.

A clause, P, subsumes another clause, Q, if P implies Q.  P implies Q if
and only if there exists a substitution, s, such that sP is a subset of Q.
That is to say, if there exists a substitution of the variables in P which
simultaneously unifies every literal in P with some literal in Q, then P
subsumes Q.  For example,

    P = { a, -f(x,y) }
    Q = { a, b, -f(z,z) }
    s = { (z/x), (z/y) }.

Also,
    P = { f(a,x), f(y,b) }
    Q = { f(a,b) }
    s = { (b/x), (a/y) }.

However, in the following example, P does not subsume Q, because a
simultaneous unification would require changes to both P and Q.

    P = { f(x), g(x) }
    Q = { f(a), g(y), -g(a) }.

To check for subsumption, the program tests every literal in P by calling
Unify with literals in Q of the same sign until Unify returns 0 or 1.  A
return value of 0 or 1 indicates that only variables in P were changed.  If
every literal in P succeeds with one literal in Q, then Q is removed by
marking it as subsumed.  This matching algorithm is non-backtracking; it
always takes the first match it finds.  This causes it to miss a
subsumption in some cases, for example,

    P = { f(x), -g(x) }
    Q = { f(a), f(z), -g(z) }.

The program unifies f(x) to f(a) by substituting a/x, thus preventing
the unification of -g(x) with -g(z) (without changing z).  The program
does not try to find a different literal to unify with f(x).  This
situation is rare, so it is of little benefit to check for it.

Whenever a new clause is added, the program may test whether it is subsumed
by any existing clause.  If not, the program may test whether it subsumes
any existing clause (which is not already excluded).  Removal of
subsumed clauses is optional because some problems can be solved
more easily without it.

The program does not currently implement the factoring rule.  It could be
added later.  Before each new clause is added to the clause list, it should
be factored.  Factoring a clause can produce new clauses which are subsumed
by the original, (but they are not removed).  To factor a clause the
program unifies one pair of literals in the clause which have the same
sign.  For example,

    P = { f(x,x), f(y,z), f(y,h(y)), f(a,z) }

may be factored five ways, (the first and third literals cannot be
unified).

    { f(x,x), f(x,h(x)), f(a,x) }
    { f(a,a), f(y,a), f(y,h(y)) }
    { f(x,x), f(y,h(y)), f(a,h(y)) }
    { f(x,x), f(a,h(a)), f(a,z) }
    { f(x,x), f(a,h(a)) }

Some of the factors can be factored again; they produce the following
clause.

    { f(a,a), f(a,h(a)) }

This clause cannot be factored any more.  The program would add the
original clause and its six factors to the clause list.  This is an extreme
example; usually clauses do not have so many factors.

Reasoning with Equality
-----------------------

Several approaches have been investigated for extending resolution
theorem proving to reason with equality.  Four methods I considered for
this project are paramodulation (Wos84), RUE (DH86), axiomization (Wos84),
and rewrite rules (Dick91).

The axiomization method is not a change to the reasoning procedure.
Instead, the axioms for equality are added as new clauses to the clause
list.  These are reflexivity, symmetry, transitivity, and for
every predicate and function, substitution.
This method is inefficient because the equality axioms are so
general they generate large numbers of resolvents, mostly redundant.
However, this method is logically complete.

The rewrite rules method (demodulation) is used to convert clauses to
canonical form before they are retained.  This method reduces the amount
of redundant information.  Knuth-Bendix completion can expand the set of
rewrite rules so that they are terminating and confluent.  Converting every
clause to a canonical form facilitates equality reasoning.  However, this
method is not logically complete.

Paramodulation seems to be the most popular method for reasoning with
equality.  In theory, resolution with paramodulation has been proven
logically complete.  However, the completeness requires that every
permutation of paramodulation and resolution is performed (DH86), and
paramodulation cannot be restricted to the set of support (Stickel88).
This would be just as inefficient as axiomization.  Instead, most systems
only apply paramodulation to disagreements in unification during
resolution.

The RUE method merges paramodulation and resolution into a single inference
rule.  During unification, a disagreement set is generated which is
attached to the resolvent.  It would be easier to implement a logically
complete version of this rule than of paramodulation.  However, it would
be neither more nor less efficient.

For this project I have chosen to use axiomization.  It is the simplest to
implement and the other methods do not offer significant advantages.  Also
I prefer axiomization because my impression is that it will be the easiest
to integrate with procedural functions and predicates.

The following is an example of an equality problem solved using
axiomization.

Given  p(g(c));
       eq(f(a),g(b));
       eq(b,c);
Prove  p(f(a));

Equality Axioms
       eq(x,x);
       not eq(x,y) or eq(y,x);
       not eq(y,z) or not eq(x,y) or eq(x,z);
       not eq(x,y) or eq(g(x),g(y));
       not eq(x,y) or eq(f(x),f(y));
       not p(x) or not eq(x,y) or p(y);

Proof
    1. (given) not p(x) or not eq(x,y) or p(y);
    2. (quest) not p(f(a));
    3. ( 1, 2) not p(x) or not eq(x,f(a));
    4. (given) p(g(c));
    5. ( 4, 3) not eq(g(c),f(a));
    6. (given) not eq(x,y) or eq(y,x);
    7. ( 6, 5) not eq(f(a),g(c));
    8. (given) not eq(y,z) or not eq(x,y) or eq(x,z);
    9. ( 8, 7) not eq(y,g(c)) or not eq(f(a),y);
   10. (given) eq(f(a),g(b));
   11. (10, 9) not eq(g(b),g(c));
   12. (given) not eq(x,y) or eq(g(x),g(y));
   13. (12,11) not eq(b,c);
   14. (given) eq(b,c);
   15. (14,13) ;

Proof Algorithm
---------------

A graph is a collection of clauses and links.  The proof process
iteratively selects the link from the graph with the highest score and
performs it.  This usually adds one clause and some links to the graph.
After the link is performed it is removed from the graph.  This removal may
cause clauses to have purities, which causes them to also be removed.  The
proof process continues until

    1. An empty clause is found; display "Yes".
    2. A clause containing only answer predicates is found; display the answer.
    3. There are no more links; display "No answer found."
    4. Some time or space limit is exceeded; display "Search was stopped".

In the typical scenario the user enters many facts as statements (or
axioms).  It is assumed that these do not contain inconsistencies, but
they provide background domain knowledge.  Then the user asks the
theorem prover questions related to the domain knowledge.

Ideally, the graph consisting of the axiom clauses with the links
between them could be a persistent object.  It could grow
incrementally as new facts are added.  It could be used to answer
multiple questions without having to be altered and rebuilt for each
question.  However, it seems that maintaining the links in such a
persistent graph is more trouble than it is worth.  Instead, the
program stores statement clauses without links.  When the user asks a
question, the program creates all the links between all the clauses
and then evaluates them to determine their position in the heap.  For
every question the user asks, all the links in the graph are
reconstructed.

Another issue is clause retention.  The usual description of CGPP
says that when a clause contains a purity, it is deleted and so are all
the links attached to it.  There are two reasons that this is a bad idea.
First, the user probably doesn't want to have many of the clauses deleted
after every question, and second, if the empty clause is found (the proof
is successful) then the user will probably want to see a proof.  However,
the theorem prover will not be able to display a proof, because many of the
clauses generated during the proof will have been deleted.  The solution we
use is to retain ALL clauses generated, and if a clause contains a purity,
it can be marked as pruned (not used any more), but not removed from
memory.  There is no way to know for certain that any generated clause will
not be part of the proof.

Link and Clause Deletion Algorithm
----------------------------------

Every link points to two clauses in the graph.  Each clause keeps a list of
all the links that point to it.

Whenever a link is deleted, it is removed from the heap, and each of the
clauses it points to is updated.  The deleted link is removed from the
link-list of each clause, and the use-count of the literal that the link
referred to within each clause is decremented.  If the use-count in either
clause becomes zero, then the clause has a purity and the clause can also
be deleted.

When a clause is deleted, it is not actually removed from memory.  It is
marked by setting the "pruned" flag to on.  Then all the links that refer
to the clause are also deleted.

To prevent an infinite loop (a link deleting a clause deleting a link
deleting a clause ...), the type field in the link is set to -99 to indicate
that the link is already deleted, and the pruned flag is set to true in the
clause to indicate that the clause is already deleted.


Link Evaluation
---------------

The link evaluation function gives each link in the graph a score.  The
resolution proof finding procedure always resolves the clauses connected by
the link with the highest score.  The score is chosen heuristically to
accomplish two objectives.  The most important objective is to find a proof
by deducing the empty clause.  To do this, links connecting small clauses
are preferred.  The second objective is to keep the size of the graph small.
To do this, links connected to "almost pure" literals (having no other
links) are preferred.  To ensure completeness, the score of the link is
decreased as the depth of the clauses increases, so that no link is ignored
indefinitely.  The following evaluation function is used in the program.

  if neither clause is set-of-support
    return score = 0
  if either clause is marked pruned or subsumed
    return score = 0
  if both clauses are the same clause (a self link)
    return score = 0
  if both clauses are unit clauses
    return score = 999

  let d = maximum depth of the two clauses

  if neither clause is almost pure, add 1 to d
  if neither clause is a unit clause, add 1 to d
  if the sum of the sizes of the clauses is > 2, add 1 to d
  if the sum of the sizes of the clauses is > 3, add 1 to d
  if the sum of the sizes of the clauses is > 7, add 1 to d

  return score = 99900 / (d + 100)

The function returns a score between 0 and 999, inclusive.

Topics for Further Study
========================

Natural Language
----------------

The grammar as presented here could easily be extended in a number of ways,
some of which have already been mentioned.  The one extension I think would
be best is to allow modifier clauses.  These clauses generally start with
"that", for example "The man that Mary likes eats is a butcher." This would
be easy to do and it would greatly improve the expressiveness of the
language.

Another improvement would be to allow anaphoric references to refer to
previous sentences.  This could be accomplished by keeping a table of
skolem constants and their descriptions.  For example, "John bought a dog"
would create a new skolem constant for "dog" and put it in a table along
with the fact that John bought it.  Later, the user can type "The dog that
John bought...." and the skolem constant would be retrieved.

The parser could be extended to understand the verb "has".  For
example, "John has a dog" should be represented "There exists a
dog and John is the owner of it."  Also "John has a sister" should
be represented "There exists x such that x is a sister of John."

Currently, the program represents relations of a sentence using binary
predicates:  agent(V,S), object(V,DO), prep(V,to,IO).  It might be better,
however, to combine them into one predicate, such as sentence(S,V,DO,IO),
and use existential variables when the objects are not given.  The reason
is that using one predicate instead of three makes the search space that
much simpler.

The output of the program is currently FOL.  I think it would not be
difficult to translate the FOL to English sentences.  This could be used to
display the current clause list, display answers, or display a proof.
Displaying proofs may be difficult to do because (1) most people have
trouble understanding proof by contradiction, and (2) parts of the proof
may refer to fragments of sentences.

Connection Graph Search Methods
-------------------------------

The test problems that the theorem prover could not solve were Cousins and
Mikado, (see Appendix).  Inspection of the graphs generated by these
problems revealed a very large branching factor and a large number of
useless clauses.  I suspect that many of the useless clauses could be
avoided by employing consistency checks between links to a clause.  The
paper about CGPP (Kowalski75) recommends using the Waltz algorithm to check
the consistency of the substitutions of connection graph links.  Munch88
proposes a variation of this procedure.  Both of these perform consistency
checking periodically as a procedure separate from resolution.

I think a much better approach is CIG search as proposed in Sickel76.  It
combines consistency checking, factoring, tautology removal, and resolution
into a single procedure.  The CIG search alternates two phases:  graph
searching and graph unrolling.  A partial solution is a set of clauses and
links in the connection graph that form a tree.  The tree may not contain
cycles or merges.  A single substitution must be applied to all clauses in
the tree such that every arc connects complementary literals.  The root of
the tree is the start clause, usually the negation of the theorem.  If the
leaves of the tree are all unit clauses, the solution is a total solution,
and the proof is done.  If the tree has been expanded as much as possible
and it is not a total solution, then either some leaves contain pure
literals (no refutation can be found), or some leaves contain literals that
have links to clauses that are already in the tree.  The latter case may be
a tautology loop, a merge loop (factorization), or a resolution loop.  If
it is a resolution loop, graph unrolling is used to make copies of the
clauses so that the tree can grow.

The CIG search algorithm is especially well suited for the unification data
structures used in this project.  My unification does not build
substitution lists; instead it changes the variables themselves.  The graph
search phase can be one single unification.  There would be no need for
composing substitutions or checking consistency of substitutions as in
Sickel76.  In CS79, a large part of the algorithm is renaming variables
apart.  This is not necessary in my unification, because variables are not
symbolic; they are pointers to memory locations.

CS79 demonstrates a correspondence between Sickel's partial solution trees
and rules of a context-free grammar (CFG).  The CFG can be used to generate
strings which correspond to refutation plans.  Then each plan can be
checked for consistency of substitutions.  My impression is that the
generation of refutation plans is subject to the same combinatorial
explosion as any other proof procedure, but plan generation is missing
some features of CIG search, such as tautology and purity pruning.

Truth Maintenance
-----------------

Another capability I would like to have added to the program is a
simplified form of truth maintenance (Doyle79).  Doyle's TMS has two lists
of beliefs, those which are IN, meaning believed, and those which are OUT,
meaning not believed.  For this project, I would use only the IN list which
would be the clause list.  Each clause could be either an "axiom", which is
an assumption that cannot be proven, or a "theorem", which is a fact that
is provable using certain other "axiom" clauses.  Each theorem stores a
list of axioms, called the justification, which are sufficient to support
it.

When the user makes a statement, the theorem prover should invoke a short
(2 seconds) proof to see if the statement is already known.  If so, it is
asserted with appropriate justification.  If not, the theorem prover should
invoke a short proof to see if the statement is known to be false.  If so,
the user is prompted with a list of contradictory facts and asked to
retract one.  If a statement is not easily proven true or false, it is
added to the clause list as an axiom.  When the user asks a question, if
the answer to the question is yes, then the question should be added to the
clause list with appropriate justification.  This allows the user to ask
leading questions to guide the theorem prover, much like the Socratic style
of teaching.

The justification list has two purposes.  First, if the user wants to see
a proof for one of the theorems, then the theorem prover could invoke a
resolution search using only the negation of the theorem and the axioms in
its justification list.  Second, if the user wants to retract an axiom
(remove it from the clause list), then the theorem prover could search for
every theorem having that axiom in its justification.  If any were found,
then they are also retracted, unless an alternative justification can be
found.

Nonmonotonic Reasoning
----------------------

A reasoning procedure is called nonmonotonic if some theorems that
were provable become not provable when new information is added.
The applications of this are to default logic, temporal reasoning, and
reasoning about beliefs.  The program does not currently support any kind
of nonmonotonic reasoning.  One way to accomplish this could be to
procedurally implement the M (consistent) operator.  Thus, the statements

    1. -bird(x) or -M(canfly(x)) or canfly(x)
    2. bird(Tweety)

can be used to show that canfly(Tweety).  The M operator is a procedure
which invokes a proof search (independent of the main proof) attempting to
refute the assertion:  canfly(Tweety).  The sub-search can use all clauses
except the set of support of the main proof.  If the sub-search fails
within some small time and space constraint, then the M operator returns
true.  -M(...) therefore is false, and can be eliminated from the clause.
If the sub-search succeeds, then there is evidence that Tweety cannot fly,
and -M(...) evaluates to true, which cannot be eliminated.  This is a very
simple example.  It is possible for a sub-search to in turn invoke another
sub-search, and so on indefinitely.  Some sub-searches may be eliminated by
combining results of identical searches.  It would be best, I think, to not
block the main search, but to continue all searches concurrently.


APPENDICES
==========

Test Cases and Results
----------------------

The following logic problems are found in Copi78.

new
/ all acids are chemicals
/ all bases are chemicals
/ all vinegar is an acid
/ is all vinegar a chemical?
The answer is yes.
proof
 1. (quest) inst($1,vinegar);
 2. (given) inst(x1,acid) or not inst(x1,vinegar);
 3. ( 1, 2) inst($1,acid);
 4. (given) inst(x1,chemical) or not inst(x1,acid);
 5. (quest) not inst($1,chemical);
 6. ( 4, 5) not inst($1,acid);
 7. ( 3, 6) ;

new
/ some skillful photographers are not imaginative
* only artists are photographers
/ all photographers are artists
/ not all photographers are skillful
/ any journeyman is skillful
/ isn't every artist a journeyman?
The answer is yes.
proof
 1. (quest) not inst(x1,artist) or inst(x1,journeyman);
 2. (given) mod(x1,skillful) or not inst(x1,journeyman);
 3. ( 1, 2) not inst(x1,artist) or mod(x1,skillful);
 4. (given) not mod($3,skillful);
 5. ( 3, 4) not inst($3,artist);
 6. (given) inst(x1,artist) or not inst(x1,photographer);
 7. ( 6, 5) not inst($3,photographer);
 8. (given) inst($3,photographer);
 9. ( 8, 7) ;

new
/ if an item is cut_rate then it is either shopworn or it is out_of_date
/ No shopworn item is worth_buying
/ some cut_rate items are worth_buying
/ are some cut_rate items out_of_date?
The answer is yes.
proof
 1. (given) mod($4,cut_rate);
 2. (quest) not mod(x1,cut_rate) or not inst(x1,item) or not mod(x1,
            out_of_date);
 3. ( 1, 2) not inst($4,item) or not mod($4,out_of_date);
 4. (given) inst($4,item);
 5. ( 4, 3) not mod($4,out_of_date);
 6. (given) mod(x1,out_of_date) or mod(x1,shopworn) or not inst(x1,item) or
            not mod(x1,cut_rate);
 7. ( 6, 5) mod($4,shopworn) or not inst($4,item) or not mod($4,cut_rate);
 8. (given) mod($4,cut_rate);
 9. ( 8, 7) mod($4,shopworn) or not inst($4,item);
10. (given) inst($4,item);
11. (10, 9) mod($4,shopworn);
12. (given) not mod(x1,worth_buying) or not inst(x1,item) or not mod(x1,
            shopworn);
13. (11,12) not mod($4,worth_buying) or not inst($4,item);
14. (given) inst($4,item);
15. (14,13) not mod($4,worth_buying);
16. (given) mod($4,worth_buying);
17. (16,15) ;

new
/ all gold things are valuable
/ all rings are ornaments
/ are all gold rings valuable ornaments?
The answer is yes.
proof
 1. (quest) inst($5,ring);
 2. (given) inst(x1,ornament) or not inst(x1,ring);
 3. ( 1, 2) inst($5,ornament);
 4. (quest) not inst($5,ornament) or not mod($5,valuable);
 5. ( 3, 4) not mod($5,valuable);
 6. (quest) mod($5,gold);
 7. (given) mod(x1,valuable) or not mod(x1,gold);
 8. ( 6, 7) mod($5,valuable);
 9. ( 8, 5) ;


--------------------------------------------------------------------------
These are some of my wife's family.  This example demonstrates the ability
of the program to represent relationships well.  It also demonstrates the
use of the "continue" command to find multiple solutions to a query.  In
the first query "Who are Ron's children?"  the answer(Lynn) is found twice
(i.e. two ways of proving that Lynn is Ron's child) before answer(Lisa) is
finally found.

The last question, "who are Lynn's cousins?", could not be solved.  If I
had not stopped the proof after 200 steps, the program would have run out
of memory and crashed.

new
/Ron is Lynn's parent
/Marilyn is a parent of Lynn
/Randy is Fran's child
/Fran is Lawrence's spouse
/Lawrence is Marilyn's parent
/Ron is Ruth's child
/Jeanie is Ruth's child
/Jeanie is Andy's parent
/Lisa is Lynn's sibling
/Doug is Randy's child
/Randy is Brian's parent

/if x is a child of y then y is x's parent
/if x is a parent of y then y is x's child
/if x is y's sibling then y is x's sibling
/if x is y's spouse then y is x's spouse
/if x is y's parent and x is z's parent then y is z's sibling
/if x is y's parent and y is z's sibling then x is z's parent
/if x is y's child and x is z's child then y is z's spouse
/if x is y's child and y is z's spouse then x is z's child

/if x is y's child and y is z's child then x is z's grandchild
/if x is y's grandchild and z is y's grandchild then x is z's cousin

/who are Ron's children
answer(Lynn);
proof
 1. (given) rela(x2,child,x1) or not rela(x1,parent,x2);
 2. (quest) not rela(x2,child,Ron) or answer(x2);
 3. ( 1, 2) not rela(Ron,parent,x2) or answer(x2);
 4. (given) rela(Ron,parent,Lynn);
 5. ( 4, 3) answer(Lynn);
cont
answer(Lynn);
cont
steps: 50  links: 202  clauses: 0  (22 2 8 20 11,0 0 )
Continue (y/n)?
steps: 100  links: 303  clauses: 0  (22 2 8 20 24,13 0 0 )
Continue (y/n)?
answer(Lisa);
proof
 1. (given) rela(x2,child,x1) or not rela(x1,parent,x2);
 2. (quest) not rela(x2,child,Ron) or answer(x2);
 3. ( 1, 2) not rela(Ron,parent,x2) or answer(x2);
 4. (given) rela(x1,parent,x5) or not rela(x1,parent,x2) or not rela(x2,
            sibling,x5);
 5. ( 4, 3) not rela(Ron,parent,x2) or not rela(x2,sibling,x5) or answer(x2);
 6. (given) rela(Ron,parent,Lynn);
 7. ( 6, 5) not rela(Lynn,sibling,x5) or answer(x2);
 8. (given) rela(x2,sibling,x1) or not rela(x1,sibling,x2);
 9. ( 8, 7) not rela(x1,sibling,Lynn) or answer(x2);
10. (given) rela(Lisa,sibling,Lynn);
11. (10, 9) answer(Lisa);

/who are Doug's siblings
steps: 50  links: 133  clauses: 0  (22 2 8 8 0,0 )
Continue (y/n)?
steps: 100  links: 215  clauses: 0  (22 2 8 26 7,0 0 )
Continue (y/n)?
answer(Brian);
proof
 1. (given) rela(x2,sibling,x5) or not rela(x1,parent,x2) or not rela(x1,
            parent,x5);
 2. (quest) not rela(x2,sibling,Doug) or answer(x2);
 3. ( 1, 2) not rela(x1,parent,x2) or not rela(x1,parent,Doug) or answer(x2);
 4. (given) rela(Randy,parent,Brian);
 5. ( 4, 3) not rela(Randy,parent,Doug) or answer(Brian);
 6. (given) rela(x2,parent,x1) or not rela(x1,child,x2);
 7. ( 6, 5) not rela(Doug,child,Randy) or answer(Brian);
 8. (given) rela(Doug,child,Randy);
 9. ( 8, 7) answer(Brian);

/who are Fran's grandchildren
answer(Doug);
proof
 1. (given) rela(x1,grandchild,x5) or not rela(x1,child,x2) or not rela(x2,
            child,x5);
 2. (quest) not rela(x2,grandchild,Fran) or answer(x2);
 3. ( 1, 2) not rela(x1,child,x2) or not rela(x2,child,Fran) or answer(x2);
 4. (given) rela(Doug,child,Randy);
 5. ( 4, 3) not rela(Randy,child,Fran) or answer(Doug);
 6. (given) rela(Randy,child,Fran);
 7. ( 6, 5) answer(Doug);
cont
answer(Brian);
proof
 1. (given) rela(x1,grandchild,x5) or not rela(x1,child,x2) or not rela(x2,
            child,x5);
 2. (quest) not rela(x2,grandchild,Fran) or answer(x2);
 3. ( 1, 2) not rela(x1,child,x2) or not rela(x2,child,Fran) or answer(x2);
 4. (given) rela(Randy,child,Fran);
 5. ( 4, 3) not rela(x1,child,Randy) or answer(x2);
 6. (given) rela(x2,child,x1) or not rela(x1,parent,x2);
 7. ( 6, 5) not rela(Randy,parent,x2) or answer(x2);
 8. (given) rela(Randy,parent,Brian);
 9. ( 8, 7) answer(Brian);

/who are parents of Lisa
steps: 50  links: 156  clauses: 0  (22 2 8 10 4,0 0 )
Continue (y/n)?
answer(Ron);
proof
 1. (given) rela(x1,parent,x5) or not rela(x1,parent,x2) or not rela(x2,
            sibling,x5);
 2. (quest) not rela(x2,parent,Lisa) or answer(x2);
 3. ( 1, 2) not rela(x1,parent,x2) or not rela(x2,sibling,Lisa) or answer(x2);
 4. (given) rela(Ron,parent,Lynn);
 5. ( 4, 3) not rela(Lynn,sibling,Lisa) or answer(Ron);
 6. (given) rela(x2,sibling,x1) or not rela(x1,sibling,x2);
 7. ( 6, 5) not rela(Lisa,sibling,Lynn) or answer(Ron);
 8. (given) rela(Lisa,sibling,Lynn);
 9. ( 8, 7) answer(Ron);

/who are Lynn's cousins
steps: 50  links: 319  clauses: 0  (22 1 1 1 14,17 0 0 )
Continue (y/n)?
steps: 100  links: 386  clauses: 0  (22 1 1 1 14,35 1 0 0 )
Continue (y/n)?
steps: 200  links: 659  clauses: 0  (22 1 1 1 14,56 27 0 0 )
Continue (y/n)?
The search was stopped.
proof
No proof.

--------------------------------------------------------------------------

These problems are from Suppes88.  They demonstrate the ability of the
program to represent complex combinations of possessives and relationship
words.

The Ockham example makes use of many of the quantifier forms.

/all horses are animals
/are all heads of horses animals' heads?
The answer is yes.
proof
 1. (quest) inst(x2,horse);
 2. (given) inst(x1,animal) or not inst(x1,horse);
 3. ( 1, 2) inst(x1,animal);
 4. (quest) rela($1,head,x2);
 5. (quest) not inst(x4,animal) or not rela($1,head,x4);
 6. ( 4, 5) not inst(x4,animal);
 7. ( 3, 6) ;

new
/no follower of Ockham likes a realist
/any of Ockham's followers like some of Hobbes' followers
/someone is Ockham's follower.
/are some of Hobbes' followers not realists?
The answer is yes.
proof
 1. (given) rela($3(x1),follower,Hobbes) or not rela(x1,follower,Ockham);
 2. (quest) not rela(x1,follower,Hobbes) or inst(x1,realist);
 3. ( 1, 2) not rela(x1,follower,Ockham) or inst($3(x1),realist);
 4. (given) rela($4,follower,Ockham);
 5. ( 4, 3) inst($3($4),realist);
 6. (given) not object(x3,x4) or not inst(x4,realist) or not inste(x3,like)
            or not agent(x3,x1) or not rela(x1,follower,Ockham);
 7. ( 5, 6) not object(x3,$3($4)) or not inste(x3,like) or not agent(x3,x1)
            or not rela(x1,follower,Ockham);
 8. (given) rela($4,follower,Ockham);
 9. ( 8, 7) not object(x3,$3($4)) or not inste(x3,like) or not agent(x3,$4);
10. (given) inste($2(x1),like) or not rela(x1,follower,Ockham);
11. (10, 9) not rela(x1,follower,Ockham) or not object($2(x1),$3($4)) or not
            agent($2(x1),$4);
12. (given) rela($4,follower,Ockham);
13. (12,11) not object($2($4),$3($4)) or not agent($2($4),$4);
14. (given) agent($2(x1),x1) or not rela(x1,follower,Ockham);
15. (14,13) not rela($4,follower,Ockham) or not object($2($4),$3($4));
16. (given) rela($4,follower,Ockham);
17. (16,15) not object($2($4),$3($4));
18. (given) object($2(x1),$3(x1)) or not rela(x1,follower,Ockham);
19. (18,17) not rela($4,follower,Ockham);
20. (given) rela($4,follower,Ockham);
21. (20,19) ;

--------------------------------------------------------------------------

This is the Mikado problem taken from Slagle65.  The program does not
handle this problem very well.  Because the antecedent and consequent of
these if-then rules are entire sentences, each rule becomes several rules.
The search space grows very quickly.

The technique for asking the question makes use of the ? command to change
a statement to a question.  For example, to ask "is there something Koko
can do to stay alive?" we form a statement "if Koko does something then
Koko stays alive" and change it to a question.

new
/if x marries y then y is married
/if Katisha is married then she doesn't claim Nankipoo
/if Katisha doesn't claim Nankipoo then Katisha doesn't accuse Nankipoo
/if Katisha doesn't accuse Nankipoo then Nankipoo appears
/If Nankipoo appears then Koko produces Nankipoo
/If someone produces Nankipoo, then the Mikado not think_dead Nankipoo
/if the Mikado doesn't think_dead Nankipoo, then Koko is alive

?if the Mikado doesn't think_dead Nankipoo, then Koko is alive
The answer is yes.
proof
 1. (given) mod(Koko,alive) or agent($5,Mikado);
 2. (quest) not mod(Koko,alive);
 3. ( 1, 2) agent($5,Mikado);
 4. (quest) not agent(x2,Mikado) or not inste(x2,think_dead) or not object(
            x2,Nankipoo);
 5. ( 3, 4) not inste($5,think_dead) or not object($5,Nankipoo);
 6. (given) mod(Koko,alive) or inste($5,think_dead);
 7. (quest) not mod(Koko,alive);
 8. ( 6, 7) inste($5,think_dead);
 9. ( 8, 5) not object($5,Nankipoo);
10. (given) mod(Koko,alive) or object($5,Nankipoo);
11. (quest) not mod(Koko,alive);
12. (10,11) object($5,Nankipoo);
13. (12, 9) ;

?If someone produces Nankipoo, then Koko is alive
steps: 50  links: 314  clauses: 0  (22 11 16 3 0,0 )
Continue (y/n)?
steps: 100  links: 442  clauses: 0  (22 11 18 14 3,0 0 )
Continue (y/n)?
The answer is yes.
proof
 1. (given) mod(Koko,alive) or inste($5,think_dead);
 2. (quest) not mod(Koko,alive);
 3. ( 1, 2) inste($5,think_dead);
 4. (quest) inste($7,produce);
 5. (given) not object(x5,Nankipoo) or not inste(x5,think_dead) or not agent(
            x5,Mikado) or not agent(x2,x1) or not inste(x2,produce) or not
            object(x2,Nankipoo);
 6. ( 4, 5) not object(x5,Nankipoo) or not inste(x5,think_dead) or not agent(
            x5,Mikado) or not agent($7,x1) or not object($7,Nankipoo);
 7. ( 3, 6) not object($5,Nankipoo) or not agent($5,Mikado) or not agent($7,
            x1) or not object($7,Nankipoo);
 8. (given) mod(Koko,alive) or agent($5,Mikado);
 9. (quest) not mod(Koko,alive);
10. ( 8, 9) agent($5,Mikado);
11. (10, 7) not object($5,Nankipoo) or not agent($7,x1) or not object($7,
            Nankipoo);
12. (quest) object($7,Nankipoo);
13. (12,11) not object($5,Nankipoo) or not agent($7,x1);
14. (given) mod(Koko,alive) or object($5,Nankipoo);
15. (quest) not mod(Koko,alive);
16. (14,15) object($5,Nankipoo);
17. (16,13) not agent($7,x1);
18. (quest) agent($7,$6);
19. (18,17) ;
?If Nankipoo appears then Koko is alive
steps: 50  links: 370  clauses: 0  (21 11 16 8 1,0 0 )
Continue (y/n)?
steps: 100  links: 694  clauses: 0  (21 11 18 24 3,0 0 )
Continue (y/n)?
steps: 200  links: 866  clauses: 0  (21 11 18 40 7,0 0 )
Continue (y/n)?
The answer is yes.
proof
 1. (given) mod(Koko,alive) or agent($5,Mikado);
 2. (quest) not mod(Koko,alive);
 3. ( 1, 2) agent($5,Mikado);
 4. (given) not object(x5,Nankipoo) or not inste(x5,think_dead) or not agent(
            x5,Mikado) or not agent(x2,x1) or not inste(x2,produce) or not
            object(x2,Nankipoo);
 5. ( 3, 4) not object($5,Nankipoo) or not inste($5,think_dead) or not agent(
            x2,x1) or not inste(x2,produce) or not object(x2,Nankipoo);
 6. (given) mod(Koko,alive) or inste($5,think_dead);
 7. (quest) not mod(Koko,alive);
 8. ( 6, 7) inste($5,think_dead);
 9. ( 8, 5) not object($5,Nankipoo) or not agent(x2,x1) or not inste(x2,
            produce) or not object(x2,Nankipoo);
10. (quest) inste($8,appears);
11. (given) agent($4(x2),Koko) or not inste(x2,appears) or not agent(x2,
            Nankipoo);
12. (10,11) agent($4($8),Koko) or not agent($8,Nankipoo);
13. (quest) agent($8,Nankipoo);
14. (13,12) agent($4($8),Koko);
15. (14, 9) not object($5,Nankipoo) or not inste($4($8),produce) or not
            object($4($8),Nankipoo);
16. (quest) inste($8,appears);
17. (given) object($4(x2),Nankipoo) or not inste(x2,appears) or not agent(x2,
            Nankipoo);
18. (16,17) object($4($8),Nankipoo) or not agent($8,Nankipoo);
19. (quest) agent($8,Nankipoo);
20. (19,18) object($4($8),Nankipoo);
21. (20,15) not object($5,Nankipoo) or not inste($4($8),produce);
22. (given) mod(Koko,alive) or object($5,Nankipoo);
23. (quest) not mod(Koko,alive);
24. (22,23) object($5,Nankipoo);
25. (24,21) not inste($4($8),produce);
26. (quest) inste($8,appears);
27. (given) inste($4(x2),produce) or not inste(x2,appears) or not agent(x2,
            Nankipoo);
28. (26,27) inste($4($8),produce) or not agent($8,Nankipoo);
29. (quest) agent($8,Nankipoo);
30. (29,28) inste($4($8),produce);
31. (30,25) ;
?if Katisha doesn't accuse Nankipoo then Koko is alive
steps: 50  links: 240  clauses: 0  (20 6 16 7 1,0 0 )
Continue (y/n)?
steps: 100  links: 780  clauses: 0  (20 6 16 26 10,0 0 )
Continue (y/n)?
steps: 200  links: 2894  clauses: 0  (20 6 16 26 72,0 0 )
Continue (y/n)?
The search was stopped.

The search could not be completed successfully.

--------------------------------------------------------------------------

These are examples from Thomas77 to demonstrate reasoning with
equality using axiomization.

new

/if x is y then y is x
/Venus is brighter_than Jupiter
/Nothing is brighter_than itself

/Is Venus Jupiter?
No answer found.

/Isn't Venus Jupiter?
The answer is yes.
proof
 1. (quest) eq(Venus,Jupiter);
 2. (given) prep(y,p,z) or not eq(x,y) or not prep(x,p,z);
 3. ( 1, 2) prep(Jupiter,p,z) or not prep(Venus,p,z);
 4. (given) prep(Venus,brighter_than,Jupiter);
 5. ( 4, 3) prep(Jupiter,brighter_than,Jupiter);
 6. (given) not prep(x1,brighter_than,x1);
 7. ( 5, 6) ;

/If x is a planet and x isn't Pluto and x is not Mercury
 then x is not smaller_than Mars
/Some planets smaller_than Mars are magnetic
/What are some magnetic planets
answer($1);
/What is something
answer($1,Mercury) or answer(Pluto,$1);

/if x is a planet and x is not Jupiter then Jupiter is larger_than x
/Jupiter is a planet
/Earth is a planet
/Jupiter is not Earth
/What is a planet larger_than Earth
answer(Jupiter);

--------------------------------------------------------------------------

This is an example from RK91.

new
/Marcus was a man.
/Marcus was a Pompeian.
/all Pompeians were Romans.
/Caesar was a ruler.
/if x is Roman then x is loyal_to Caesar or x hates Caesar.
/Everyone is loyal_to someone.
/If a person try_assassinate a ruler then the person is not loyal_to the ruler.
/Marcus did try_assassinate Caesar.
/All men are people.

/isn't Marcus loyal_to Caesar?
The answer is yes.
proof
 1. (quest) prep(Marcus,loyal_to,Caesar);
 2. (given) not prep(x1,loyal_to,x3) or not agent(x2,x1) or not inste(x2,
            try_assassinate) or not inst(x1,person) or not inst(x3,ruler) or
            not object(x2,x3);
 3. ( 1, 2) not agent(x2,Marcus) or not inste(x2,try_assassinate) or not
            inst(Marcus,person) or not inst(Caesar,ruler) or not object(x2,
            Caesar);
 4. (given) inste($3,try_assassinate);
 5. ( 4, 3) not agent($3,Marcus) or not inst(Marcus,person) or not inst(
            Caesar,ruler) or not object($3,Caesar);
 6. (given) inst(Caesar,ruler);
 7. ( 6, 5) not agent($3,Marcus) or not inst(Marcus,person) or not object($3,
            Caesar);
 8. (given) object($3,Caesar);
 9. ( 8, 7) not agent($3,Marcus) or not inst(Marcus,person);
10. (given) agent($3,Marcus);
11. (10, 9) not inst(Marcus,person);
12. (given) inst(x1,person) or not inst(x1,man);
13. (12,11) not inst(Marcus,man);
14. (given) inst(Marcus,man);
15. (14,13) ;
/who does Marcus hate?
steps: 50  links: 21  clauses: 0  (14 3 2 1 1,1 1 1 1 1,1 1 1 2 3,3 1 1 1 14,)
Continue (y/n)?
answer(Caesar);
proof
 1. (given) agent($1(x1),x1) or prep(x1,loyal_to,Caesar) or not inst(x1,roman);
 2. (quest) not object(x3,x1) or not inste(x3,hate) or not agent(x3,Marcus)
            or answer(x1);
 3. ( 1, 2) prep(Marcus,loyal_to,Caesar) or not inst(Marcus,roman) or not
            object($1(Marcus),x1) or not inste($1(Marcus),hate) or answer(x1);
 4. (given) inst(x1,roman) or not inst(x1,pompeian);
 5. ( 4, 3) not inst(Marcus,pompeian) or prep(Marcus,loyal_to,Caesar) or not
            object($1(Marcus),x1) or not inste($1(Marcus),hate) or answer(x1);
 6. (given) inst(Marcus,pompeian);
 7. ( 6, 5) prep(Marcus,loyal_to,Caesar) or not object($1(Marcus),x1) or not
            inste($1(Marcus),hate) or answer(x1);
 8. (given) inste($1(x1),hate) or prep(x1,loyal_to,Caesar) or not inst(x1,
            roman);
 9. ( 8, 7) prep(Marcus,loyal_to,Caesar) or not inst(Marcus,roman) or prep(
            Marcus,loyal_to,Caesar) or not object($1(Marcus),x1) or answer(x1);
10. (given) inst(x1,roman) or not inst(x1,pompeian);
11. (10, 9) not inst(Marcus,pompeian) or prep(Marcus,loyal_to,Caesar) or
            prep(Marcus,loyal_to,Caesar) or not object($1(Marcus),x1) or
            answer(x1);
12. (given) inst(Marcus,pompeian);
13. (12,11) prep(Marcus,loyal_to,Caesar) or prep(Marcus,loyal_to,Caesar) or
            not object($1(Marcus),x1) or answer(x1);
14. (given) object($1(x1),Caesar) or prep(x1,loyal_to,Caesar) or not inst(x1,
            roman);
15. (14,13) prep(Marcus,loyal_to,Caesar) or not inst(Marcus,roman) or prep(
            Marcus,loyal_to,Caesar) or prep(Marcus,loyal_to,Caesar) or
            answer(Caesar);
16. (given) inst(x1,roman) or not inst(x1,pompeian);
17. (16,15) not inst(Marcus,pompeian) or prep(Marcus,loyal_to,Caesar) or
            prep(Marcus,loyal_to,Caesar) or prep(Marcus,loyal_to,Caesar) or
            answer(Caesar);
18. (given) inst(Marcus,pompeian);
19. (18,17) prep(Marcus,loyal_to,Caesar) or prep(Marcus,loyal_to,Caesar) or
            prep(Marcus,loyal_to,Caesar) or answer(Caesar);
20. (given) not prep(x1,loyal_to,x3) or not agent(x2,x1) or not inste(x2,
            try_assassinate) or not inst(x1,person) or not inst(x3,ruler) or
            not object(x2,x3);
21. (19,20) prep(Marcus,loyal_to,Caesar) or prep(Marcus,loyal_to,Caesar) or
            answer(Caesar) or not agent(x2,Marcus) or not inste(x2,
            try_assassinate) or not inst(Marcus,person) or not inst(Caesar,
            ruler) or not object(x2,Caesar);
22. (given) inste($3,try_assassinate);
23. (22,21) prep(Marcus,loyal_to,Caesar) or prep(Marcus,loyal_to,Caesar) or
            answer(Caesar) or not agent($3,Marcus) or not inst(Marcus,
            person) or not inst(Caesar,ruler) or not object($3,Caesar);
24. (given) inst(Caesar,ruler);
25. (24,23) prep(Marcus,loyal_to,Caesar) or prep(Marcus,loyal_to,Caesar) or
            answer(Caesar) or not agent($3,Marcus) or not inst(Marcus,
            person) or not object($3,Caesar);
26. (given) agent($3,Marcus);
27. (26,25) prep(Marcus,loyal_to,Caesar) or prep(Marcus,loyal_to,Caesar) or
            answer(Caesar) or not inst(Marcus,person) or not object($3,Caesar);
28. (given) object($3,Caesar);
29. (28,27) prep(Marcus,loyal_to,Caesar) or prep(Marcus,loyal_to,Caesar) or
            answer(Caesar) or not inst(Marcus,person);
30. (given) inst(x1,person) or not inst(x1,man);
31. (30,29) not inst(Marcus,man) or prep(Marcus,loyal_to,Caesar) or prep(
            Marcus,loyal_to,Caesar) or answer(Caesar);
32. (given) inst(Marcus,man);
33. (32,31) prep(Marcus,loyal_to,Caesar) or prep(Marcus,loyal_to,Caesar) or
            answer(Caesar);
34. (given) not prep(x1,loyal_to,x3) or not agent(x2,x1) or not inste(x2,
            try_assassinate) or not inst(x1,person) or not inst(x3,ruler) or
            not object(x2,x3);
35. (33,34) prep(Marcus,loyal_to,Caesar) or answer(Caesar) or not agent(x2,
            Marcus) or not inste(x2,try_assassinate) or not inst(Marcus,
            person) or not inst(Caesar,ruler) or not object(x2,Caesar);
36. (given) inste($3,try_assassinate);
37. (36,35) prep(Marcus,loyal_to,Caesar) or answer(Caesar) or not agent($3,
            Marcus) or not inst(Marcus,person) or not inst(Caesar,ruler) or
            not object($3,Caesar);
38. (given) inst(Caesar,ruler);
39. (38,37) prep(Marcus,loyal_to,Caesar) or answer(Caesar) or not agent($3,
            Marcus) or not inst(Marcus,person) or not object($3,Caesar);
40. (given) object($3,Caesar);
41. (40,39) prep(Marcus,loyal_to,Caesar) or answer(Caesar) or not agent($3,
            Marcus) or not inst(Marcus,person);
42. (given) agent($3,Marcus);
43. (42,41) prep(Marcus,loyal_to,Caesar) or answer(Caesar) or not inst(
            Marcus,person);
44. (given) inst(x1,person) or not inst(x1,man);
45. (44,43) not inst(Marcus,man) or prep(Marcus,loyal_to,Caesar) or answer(
            Caesar);
46. (given) inst(Marcus,man);
47. (46,45) prep(Marcus,loyal_to,Caesar) or answer(Caesar);
48. (given) not prep(x1,loyal_to,x3) or not agent(x2,x1) or not inste(x2,
            try_assassinate) or not inst(x1,person) or not inst(x3,ruler) or
            not object(x2,x3);
49. (47,48) answer(Caesar) or not agent(x2,Marcus) or not inste(x2,
            try_assassinate) or not inst(Marcus,person) or not inst(Caesar,
            ruler) or not object(x2,Caesar);
50. (given) inste($3,try_assassinate);
51. (50,49) answer(Caesar) or not agent($3,Marcus) or not inst(Marcus,
            person) or not inst(Caesar,ruler) or not object($3,Caesar);
52. (given) inst(Caesar,ruler);
53. (52,51) answer(Caesar) or not agent($3,Marcus) or not inst(Marcus,
            person) or not object($3,Caesar);
54. (given) object($3,Caesar);
55. (54,53) answer(Caesar) or not agent($3,Marcus) or not inst(Marcus,person);
56. (given) agent($3,Marcus);
57. (56,55) answer(Caesar) or not inst(Marcus,person);
58. (given) inst(x1,person) or not inst(x1,man);
59. (58,57) not inst(Marcus,man) or answer(Caesar);
60. (given) inst(Marcus,man);
61. (60,59) answer(Caesar);
/what is Marcus?
answer(pompeian);
cont
answer(man);
cont
answer(person);
cont
answer(roman);
cont
answer(roman);

--------------------------------------------------------------------------

The following example, the contribution puzzle, is from Quine82.

new
/if Blake did contribute then Edwards did not contribute
/did everyone contribute?
No answer found.
q2
The answer is yes.
/who did not contribute?
answer(Blake) or answer(Edwards) or answer(Blake) or answer(Edwards);
proof
 1. (quest) inste($3(x1),contribute) or answer(x1);
 2. (given) not agent(x4,Edwards) or not inste(x4,contribute) or not
            inste(x2,contribute) or not agent(x2,Blake);
 3. ( 1, 2) answer(x1) or not agent($3(x1),Edwards) or not
            inste(x2,contribute) or not agent(x2,Blake);
 4. (quest) inste($3(x1),contribute) or answer(x1);
 5. ( 4, 3) answer(x1) or answer(x1) or not agent($3(x1),Edwards) or not
            agent($3(x1),Blake);
 6. (quest) agent($3(x1),x1) or answer(x1);
 7. ( 6, 5) answer(Edwards) or answer(x1) or answer(Edwards) or not
            agent($3(x1),Blake);
 8. (quest) agent($3(x1),x1) or answer(x1);
 9. ( 8, 7) answer(Blake) or answer(Edwards) or answer(Blake) or
            answer(Edwards);


Lexicon Data File
-----------------
relation
  owner cousin son father mother sister brother sibling
  daughter child =children parent aunt uncle middle top
  side follower head spouse grandchild =grandchildren
#
noun
  there everything =everyone =anything =anyone something =someone nothing
  thing it =he =she =him =they =them =itself
  me =i =we =us you u v w
  x y z (pronoun) boy book dog cat
  man =men fish table turtle ice-cream John Sue
  Lynn Alice Bob boat dish box water Joe
  Ian User Computer Mary car door tail spot
  pond egg ham pompeian roman Marcus Caesar ruler
  person =people acid chemical base vinegar photographer artist
  journeyman item ring ornament girl hat Ockham realist
  Hobbes horse animal Mike idiot Ron Marilyn Randy
  Fran Lawrence Ruth Jeanie Andy Lisa Doug Brian
  Katisha Nankipoo Koko Mikado Venus Jupiter planet Pluto
  Mercury Mars Earth Blake Edwards
#
article
  the all =every =each =any some =a =an
#
adjective
  (poss) my =our =ours =mine your =yours its
  =her =his =hers =their =theirs big heavy white
  left loyal mortal skillful imaginative cut-rate shopworn out-of-date
  worth-buying gold valuable red blue green happy fat
  plain tall bold brave married alive magnetic cut_rate
  out_of_date worth_buying
#
what
  who when where what whom why how
#
adverb
  fast quickly quietly
#
verb
  eat =ate swim =swam run =ran support like
  buy =bought fall =fell give =gave think exist
  hate try-assassinate try_assassinate marry =marries claim accuse appears
  live produce think_dead contribute
#
link
  is are am was were
#
aux-verb
  do did does
#
negation
  not no
#
extra-word
  very already either
#
preposition
  under on at in with of by after
  loyal-to through (comma) from to loyal_to above brighter_than
  smaller_than larger_than
#
connect
  and or then if xor
#

Source Code Listing
-------------------
(see attached)



References
==========

65  Slagle, J.R., Experiments with a Deductive Question-Answering Program.
    Communications of the ACM, v8 n12, 1965, 792-798.

71  Slagle, J.R., Artificial Intelligence:  The Heuristic Programming
    Approach, McGraw-Hill Book Company, NY, 1971.

75  Kowalski, R., A Proof Procedure Using Connection Graphs.  JACM, v22 n4,
    1975, 572-595.

76  Sickel, S., A Search Technique for Clause Interconnectivity Graphs,
    IEEE Trans. on Comp., v25 n8, 1976, 823-835.

77  Thomas, J.A., Symbolic Logic, Merrill Publishing Company, Columbus, OH,
    1977.

78  Copi, I. M., Introduction to Logic, 5th ed., MacMillan Publishing Co.,
    Inc., New York, NY, 1978.

79  Chang, C.L., Slagle, J.R., Using Rewriting Rules for Connection Graphs to
    Prove Theorems.  Artificial Intelligence, v12, 1979, 159-180.

79  Doyle, J., A Truth Maintenance System.  Artificial Intelligence, v12,
    1979, 231-272.

80  Nilsson, N.J., Principles of Artificial Intelligence, Tioga Publishing
    Company, Palo Alto, CA, 1980.

82  Quine, W.V., Methods of Logic, 4th ed., Harvard University Press,
    Cambridge, MA, 1982.

83  Sowa, J., Conceptual Structures:  Information Processing in Mind and
    Machine, Addison-Wesley Publishing Company, Reading, MA, 1983.

84  Winston, P.H., Artificial Intelligence, 2nd ed., Addison-Wesley
    Publishing Company, Inc., Reading, MA, 1984.

84  Wos, L., R. Overbeek, E. Lusk, J. Boyle, Automated Reasoning:
    Introduction and Applications, Prentice Hall, Englewood Cliffs, NJ, 1984.

86  Digricoli, V.J., M.C. Harrison, Equality-Based Binary Resolution, JACM,
    v33 n2, 1986, 253-289.

86  Genesereth, M., N.J. Nilsson, Logical Foundations of Artificial
    Intelligence, Morgan Kaufmann Publishers, Inc., Los Altos, CA, 1986.

86  Aho, A.V., Sethi, R., Ullman, J.D., Compilers: Principles,
    Techniques, and Tools, Addison-Wesley Publishing Company, MA, 1986.

88  Munch, K.H., A New Reduction Rule for the Connection Graph Proof
    Procedure.  Journal of Automated Reasoning, v4, 1988, 425-444.

88  Stickel, M.E., Resolution Theorem Proving, Annual Review of Computer
    Science, v3, 1988, 285-316.

91  Rich, E., K. Knight, Artificial Intelligence, 2nd ed., McGraw-Hill Book
    Company, Inc., New York, NY, 1991.

91  Dick, A.J.J., An Introduction to Knuth-Bendix Completion.  The Computer
    Journal, v34 n1, 1991, 2-15.

