archives of the CONLANG mailing list
------------------------------------

 
>From robin@extro.ucc.su.OZ.AU Wed Feb  2 12:16:50 1994
Date: Wed, 2 Feb 1994 01:16:50 +1100
From: Robin F Gaskell 
Message-Id: <199402011416.AA14928@extra.ucc.su.OZ.AU>
To: conlang@diku.dk
Subject: Syntax Analysis

#From: robin@extro.ucc.su.oz.au (Robin Gaskell)         5 Dec 93
#To: conlang@diku.dk (Conlang Mail List)
#Subject: Syntax Analysis

Hello Friends,

Just a little something I whipped up.  In his last letter to me,
Haitao said that we would have to decide on the system of grammar we
wanted for Glosa.  So, I thought about it.

Naturally, I had been waiting for the authors to decide that the time
was right for them to deliver a tabulated version of Glosa's grammar.
But it hasn't happened, and Haitao is a programmer, who could probably
produce a parser for the language.  I decided that this must be the
time for a tabulated Glosa grammar to appear: if the authors weren't
going to write it, perhaps I would have to do so.  While casually
looking at a book on meaning representation, I noticed that it was
full of pseudo-algebraic formulations, and that most of the symbols
would not transmit through the network as e-mail.

Pretty silly, I thought; it would seem a lot better if our
codification of syntax was able to network readily ... using ASCII
symbols, obviously.  So, GAS was born - gASCII Analysed Syntax: in my
ignorance, I imagined that each syntactic function could be awarded a
symbol, preferably a non-alphanumeric one, and, I thought, doing such
an analysis should tell me something.

To start with, I wondered how many syntactic functions there were: it
was something like the earlier question about the number of types of
verbs and nouns.  We would need a rational number: not so many it was
unworkable; and not so few it was meaningless.  Then I started to
count the symbols on my 101 key keyboard: there were the normal 26
letters, each with upper and lower case versions; then there were
the 10 numbers; and after that, I counted 31 non-alphanumeric symbols.

I decided it would be nice if the syntactic categories could be
restricted to this last group of 31 symbols, so they wouldn't get
mixed up with the normal text.  But, already, I've got up to 38 symbols
plus another 7, which are two-symbol compounds: this yields a total of
45 syntactic categories.  Maybe this is too many, and I'll have to
slim it down a bit.

Oh yes, I mentioned Haitao.  Some months ago, I posted his `pigeon
post' address, in deepest China, on Conlang - reporting that he was
eager to make contact with researchers in the West, who were looking
into Natural Language Processing as it articulates with the IAL concept.
Did anyone write to him?  Do any of you know of anyone, at all, who
might be doing NLP / IAL research?  In a later blip, I will suggest
that of all the countries in the world, China has most to gain from
adoption of the IAL.

So, here it is:_

            The gASCII Analysed Syntax (GAS) system  (Alpha Test ver.)
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Brief Outline.
      * This is a means of codifying the syntactic function of the
        elements of a sentence.  Using this analysis, we can see the
        functional structure of a sentence, without needing to refer
        to the actual words.
      * The analysis can be done either manually or by machine; if
        done automatically, the code can be used as part of a program
        that parses the language.  It has the advantage that it can be
        scrutinised, if necessary, during the parsing process.

Natural Language Processing
      - This system was invented with both English and the Planned
        Language, Glosa, in mind.  Although linguists have attempted
        to codify the syntax of English, the average educated person
        has learnt his or her use of syntax through practice, and
        there is no easily-read reference book in which the syntax
        rules can be found.
      - Glosa is in a worse position: prior to this, no-one has
        codified its intuitively-used syntax.  This system is
presented as a means of stabilising that syntax.

Rationale.
     i. ASCII code can be read by OCR software.
    ii. ASCII code transmits through the Internet.
   iii. The various syntactic elements can be covered, in general, by
          the non-alphanumeric ASCII symbols.
    iv. Some semantic categories can also be shown.
     v. Small syntactic distances are shown using single spaces.
    vi. Syntactic spaces between phrases are shown as double spaces.
   vii. Larger syntactic distances, such as those between the major
          parts of a sentence (S-V-O) and between clauses, are
          indicated with triple spaces.
  viii. Clauses are marked with brackets - differently for adjectival,
          adverbial and noun clauses.
    ix. Non-literal language is marked: for ease of recognition
          and as an aid to machine translation.
     x. Patterns of syntax can be found by analysis, and used to
          prompt improvements in clarity.
    xi. Preferred patterns of syntax can be readily recognised and
          taught.
   xii. As part of a meaning representation system, the code (bearing
          the linguistic function) would be matched with the word or
          symbol (carrying the semantic content).
  xiii. Languages that have no morphological grammar will use this
          code to hold the information usually found in Part-of-Speech
          markers and grammatical inflections ... for purposes of
          the machine handling of information and translation.
   xiv. Generation of this code will be a function of Artificial
          Intelligence; the code generated can be perused and
          understood by the human operator of a mechanised translation
          system - thus allowing the process to be monitored.
    xv. The code permits the use of unchanging concept-words, ordered
           according to a syntax-based grammar ... in metalanguages,
           Intermediate Languages and concept-based auxiliary
           languages.

gASCII Elements
~~~~~~~~~~~~~~
Basic        .          !          >           @           $

        substantive   action   modifier   space, time   logical
          (noun)      (verb)  (adjective,  preposition   preposition
                                adverb)


Tense        /          \          ~           ^           |

          future      past     continuous  conditional    now



Modifiers    #          %          >           v

          number     quantity   quality    auxiliary
       (countable) (measurable) (property)   verb


Conjunctions            +                    &

              joins words, phrases     structural: joins clauses


Functions     x         t          =           <           <`

          location    time    equals, like   verb is    participle
       X proper noun          as, similar    passive


              0         ?          -           ,           ;

       negative: un-   general   joining    pronoun    pronoun
     no, not, never   question   concepts   personal  impersonal
      nothing                   (compounds)


People         o             s              '              `.

            other          self        possessive        gerund
        O proper noun   S name of
                         1st person


Specific       ?o        ?.        ?!         ?x          ?t
 questions
              who       what       why      where        when


Clauses    (      )       {       }       [       ]      "      "

          adjectival        noun          adverbial     parenthesis
                                                        or quotation

Non-literal          :            *       *         _         _
 language
              metaphor or           idiom          start     end
              other n-l term                        of sentence

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Examples of GAS in application

  The cat sat on the mat.    =     U felis pa sed epi u tape.

        _.   \!   @._


  Plu studenti fu memo: na pa dice de u Tesaurus de Roget plura kron.

        _o   /!  [,   \!   @.@O #t]_



  Three fat boys sat by the river bank, and ate jam sandwiches.

        _#>o   \!   @.-.   &   \!   >._


  While three fat boys sat by the river bank, and ate jam sandwiches,
     their sisters stole their bicycles.

    _[t  #>o   \!   @.-.   &   \!   >.]  ,'o   \!   ,'._

  Tem tri paki ju-an pa sed proxi u ripa, e pa fago plu konfekti pani,
     mu plu fe-sibi pa klepto mu plu bi-rota.

<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Cheers,

Robin


______________________________________________________________________
 
 
>From shoulson@ctr.columbia.edu Tue Feb  1 06:10:07 1994
From: shoulson@ctr.columbia.edu (Mark E. Shoulson)
Received: from localhost (shoulson@localhost) by startide.ctr.columbia.edu (8.6.5/8.6.4.788743) id LAA23057; Tue, 1 Feb 1994 11:10:07 -0500
Date: Tue, 1 Feb 1994 11:10:07 -0500
Message-Id: <199402011610.LAA23057@startide.ctr.columbia.edu>
To: conlang@diku.dk
Subject: Planned languages server

>Date: Tue, 1 Feb 1994 08:19:34 +0100
>From: helz@ecn.purdue.edu (Randall A Helzerman)

>Could someone tell me what is the ftp address of the planned languages server?

The Planned Languages Server is currently down indefinitely.  When it was
up, it never had an ftp site.  There are conlang-related things available
via ftp from world.std.com in /pub/lingua (not Lojban or Esperanto,
chiefly), surplus.demos.su in /esperanto, rand.org (also Esperanto),
casper.cs.yale.edu in /pub/lojban, and HablI.tamu.edu (Klingon, just
starting up), all according to recent messages on this list and the Klingon
list.  I haven't tried most of these places, so I can't tell you how much
is there.

~mark

______________________________________________________________________
 
 
>From shoulson@ctr.columbia.edu Tue Feb  1 06:47:25 1994
From: shoulson@ctr.columbia.edu (Mark E. Shoulson)
Received: from localhost (shoulson@localhost) by startide.ctr.columbia.edu (8.6.5/8.6.4.788743) id LAA23316; Tue, 1 Feb 1994 11:47:25 -0500
Date: Tue, 1 Feb 1994 11:47:25 -0500
Message-Id: <199402011647.LAA23316@startide.ctr.columbia.edu>
To: conlang@diku.dk
Subject: Syntax Analysis

Hmmm.  Interesting method you have there, Robin.  Is it really any better
than actually using the words instead of symbols?  e.g. instead of 
_. /! @._, maybe   ?
OK, it certainly takes much more typing, but it's easier to read.  It's
also language-dependent (whatever language you use for the words), and
could be less useful for marking sentences for analysis (as in _the .cat
/!sat @on the .mat_), though really not too badly: any computer worth its
salt could be perfectly capable of pulling out the information in the <>'s
and treating it separately.  Though similarly, it could use the symbolic
method and translate to the  format (in any of several
conventions/languages) for debugging output by the human operator.

You should realize, of course, that your method is quite limited to
languages that resemble English and Glosa.  Not all languages have
participles, linking verbs, etc., and not all languages can have their
tense system so simply expressed as
past/present/continuous/conditional/now.  Some require perfectives, etc.
Words like "gerund" and "passive" don't have any meaning in many languages,
but instead other constructs which you don't treat do.  This may not
matter, but it will limit which languages can use this system.

Also, I note that you don't mark direct objects.  This will limit your
method's usefulness to coding only those languages which mark case by
word-order, and that in the same order as English/Glosa.  Coding "a fire I
saw" would not help us work out whether this was an OSV language (or more
likely, an SVO language with the O transposed to the front for emphasis), an
SOV language (very common) or a free-order language with case-markings.
This may or may not matter, depending on the purpose this code is intended
for (which I may not understand fully).

All in all, an interesting plan, one which certainly helps focus and narrow
down on some of the basic syntactic (and a few semantic) categories of
English and Glosa and not a few similar languages.  It looks like a nice
thing to run through your mind now and then to see just what the language
is doing, sort of like a computer-codable version of sentence-diagramming
(remember that?).

~mark



______________________________________________________________________
 
 
>From hrick@world.std.com Tue Feb  1 17:16:48 1994
Date: Tue, 1 Feb 1994 22:16:48 -0500
From: hrick@world.std.com (Richard Harrison)
Message-Id: <199402020316.AA04123@world.std.com>
To: conlang@diku.dk
Subject: vocabulary considerations


Here's a rough sketch of an article on vocabulary-creation.  Comments,
suggestions, criticisms invited.

101 things to consider while creating a vocabulary

* Finite or infinite:  Natural languages such as English can continue 
adding new words to their vocabularies endlessly; there is no limit
on the size of their lexicons.  Many conlangs, however, have a
limited-size vocabulary.  In some cases, the limit is imposed by
the phonetic shape of the root-words: a language based on CVC words
will be limited to the number of CVC combinations possible.  In other
cases, such as Basic English, the limit is an arbitrary number
based on the language creator's idea of how many words will be needed.
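The CVC ceiling is simple combinatorics.  A quick illustration (the
12-consonant, 5-vowel inventory below is an assumed example, not any
particular conlang's):

```python
# Illustrative root-word ceiling for a conlang whose roots are all
# CVC.  The 12-consonant, 5-vowel inventory is an assumption made
# for the example.
consonants = 12
vowels = 5
cvc_roots = consonants * vowels * consonants  # C choices x V x C
print(cvc_roots)  # 720 distinct CVC shapes at most
```

Whatever the inventory, the product puts a hard upper bound on the
lexicon before a single word has been coined.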

* Concept mapping:  Most conlang designers say there should be no
synonyms in a conlang's stock of radicals.  (But how far can you
take this notion; for example, should the noun meaning "time," the 
conjunction meaning "when" and the preposition that means "during"
all be derived from the same radical?)  Conlang designers also often
state that each word only represents one idea.  In some cases it
would be more accurate to say that each word represents one narrow
range of closely-related concepts.  After close examination we often
find that a conlang's vocabulary connects concepts to words in a way 
that is nearly identical to the conlang creator's native language.

* Compound words and derivational affixes:  Conlangs that have
finite-size vocabularies usually have a system of derivational
affixes and allow the creation of compound words so that the limited
lexicon can express a wide range of ideas.  But not everyone agrees
that natlang-style compounding is always a good thing; Rick Morneau
has suggested that a conlang suitable for use as an interlingua in
machine translation should explicitly state how the items in a
compound word are related.

It seems desirable that a reader or listener should be able to guess
the meaning of a compound or derivative word, even if very few context
clues are available.  It is not hard to guess the meanings of Esperanto
_preg^ejo_ (place devoted to prayer ~= church) or _piedbati_ (foot + 
hit ~= kick).  But a newcomer to Dutton Speedwords might have a hard time
puzzling out the meaning of _aqe_ (the word for "water" plus an 
augmentative suffix, roughly equivalent to Esperanto _akvego_) -- we 
might assume _aqe_ means "ocean" but it is Dutton's way of saying "steam."

The number of root-words in the vocabulary seems to have a great
influence on the comprehensibility of compound words.  In a conlang
like aUI, which only has 31 radicals, compounds are very ambiguous
and it is unlikely that any two users of the language would 
spontaneously create the same set of compounds to express a given
set of ideas.  (The creator of aUI used these variations to gain
insight on his patients in psychotherapy.)  If we could restrict 
Esperanto or German or Chinese to the 2000 most frequent roots and 
affixes, we would probably find that speakers of these languages could 
create compounds and derivatives that were easy to understand, and we 
would probably find that most speakers were creating the same compounds
to express the same ideas.  Based on this hypothetical observation, we 
might assume that the minimum size for a basic vocabulary is somewhere 
between 31 and 2000 items, eh?

* Hidden irregularities:  Language designers sometimes state that their
languages have no grammatical irregularities because, for example,
nouns always form the plural by adding the same suffix and there are
no irregular verbs.  

But if we dig a little deeper we often find that these claims are not true.
For example, there might be some nouns that have a plural form ("count 
nouns") and some which never take the plural ("mass nouns"), as in English
we never say "three electricities" or "five oatmeals."  Often we find that
the count noun/mass noun distinctions are the same in the conlang as in its
creator's native language.  And often we find that the creator has failed 
to mention which nouns are countable and which are mass.

Even if verbs always form their various tenses in the same way, there
are often huge variations in the arrangement of different verbs' 
"arguments."  (This is one reason why I tried to reduce the number of 
basic verbs in Vorlin as much as possible [got it down to 3], and 
probably one of the reasons why Hogben used a minimal number of verboids
with explicitly stated argument-structures in Interglossa.)  For example,
English has >25 different verb patterns <1>; here are a few of them:

[2A] Intransitive verbs that may be used without a complement:
         "We all breathe, eat and drink."

[2B] Verbs used with an adverbial adjunct of distance, duration, weight etc.
         "The meeting lasted (for) two hours."

 [9] The object of the verb is a that-clause; "that" is often omitted.
         "I suppose (that) you'll be leaving soon."

[14] The verb is followed by a direct object and a preposition and its
     object.  This pattern is not convertible to VP12... `Explain something
     to somebody' cannot be converted to `*Explain somebody something.'
     The preposition is linked to the verb and they must be learnt
     together, e.g. `compare one thing TO/WITH another.'

The important factor here is that any given verb can only be used in a 
limited number of these patterns.  At the moment, I can't think of any 
conlangs that don't have the same complication built into them.  (Loglan and 
its successors have taken this phenomenon to its logical extreme by making 
every content word a verb and giving each one a unique argument structure 
that must be memorized along with the word.  But at least the loglans spell
out in detail what the argument structures are going to be.)  Bilingual 
conlang dictionaries seldom reveal how a verb's arguments are to be arranged,
and many conlang proposals don't contain enough sample text to provide this
information by example.
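A dictionary that did spell this out might simply attach Hornby-style
pattern codes to each verb entry.  A hypothetical sketch (the verbs and
code assignments follow the examples above; the data structure and the
allows() helper are my own invention):

```python
# Hypothetical lexicon fragment: each verb lists the Hornby-style
# patterns it may appear in, so the argument structure is stated
# outright instead of left for the learner to guess.
VERB_PATTERNS = {
    "breathe": {"2A"},        # intransitive, no complement needed
    "last":    {"2A", "2B"},  # "the meeting lasted (for) two hours"
    "suppose": {"9"},         # takes a that-clause object
    "explain": {"14"},        # "explain something TO somebody"
}

def allows(verb, pattern):
    """True if the lexicon licenses this verb in this pattern."""
    return pattern in VERB_PATTERNS.get(verb, set())
```

A parser or a drill program could then reject "*explain somebody
something" mechanically, because pattern 12 is absent from the entry
for "explain".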

<1> A S Hornby, Guide to Patterns and Usage in English, Oxford University
  Press, as quoted in Oxford Advanced Learner's English-Chinese Dictionary.



______________________________________________________________________
 
 
>From j.guy@trl.oz.au Thu Feb  3 02:47:54 1994
From: j.guy@trl.oz.au (Jacques Guy)
Message-Id: <9402020447.AA16554@medici.trl.OZ.AU>
Subject: Re: vocabulary considerations
To: conlang@diku.dk
Date: Wed, 2 Feb 1994 15:47:54 +1100 (EST)


Ah, interesting stuff I will enjoy disagreeing with!
> 
> 
> 101 things to consider while creating a vocabulary
> 
> * Finite or infinite:  Natural languages such as English can continue 
> adding new words to their vocabularies endlessly; there is no limit
> on the size of their lexicons.  
Yes there is! 
First, the memory of the speaker. My brain capacity has a limit. 
I don't know what it is, but I am sure it has. So there is a limit
to what it can hold.

Second, I'll grant you 120 phonemes and such a nimble tongue and
vocal cords that "tQx!zz&%%%p*H" is no problem to you -- and
clearly distinguished from "tQx!zz&%%%%p*H", too! Now even though
I will grant you, too, words up to 12,000 phonemes long (will that
allow you one googolplex different words? No matter if it doesn't:
make them anything right up to 1,200,000 phonemes long!) there is
a limit, even if you manage to stay awake long enough 
to utter that million-phoneme word: your lifetime -- or that
of the Solar system if your name's Super Methuselah.
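Jacques's parenthetical question is easy to settle by arithmetic: even
120 phonemes and a 12,000-phoneme length cap fall absurdly short of a
googolplex (10 to the power 10^100).  A quick check, working in log10
so the numbers stay manageable:

```python
import math

# How many distinct words do 120 phonemes and a 12,000-phoneme
# length cap allow?  The longest length dominates the total, so
# log10(120**12000) is a fair estimate of the whole sum.
phonemes = 120
max_length = 12_000
log10_words = max_length * math.log10(phonemes)
print(round(log10_words))       # about 24950, i.e. a ~24,951-digit count
print(log10_words < 10 ** 100)  # True: nowhere near a googolplex
```

So no, that does not allow one googolplex of different words -- which,
as he says, hardly matters; the lifetime limit bites first.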

> 
> * Concept mapping:  Most conlang designers say there should be no
> synonyms in a conlang's stock of radicals.  (But how far can you
> take this notion; for example, should the noun meaning "time," the 
> conjunction meaning "when" and the preposition that means "during"
> all be derived from the same radical?)  Conlang designers also often
> state that each word only represents one idea.  In some cases it
> would be more accurate to say that each word represents one narrow
> range of closely-related concepts.  

I will disagree there by saying "in all cases each word represents
a set of ranges of related concepts". (The set might have only
one member). But I'm cheating here: it's really agreeing with you
and going one (or two) better, no?

I agree with all the rest of what you wrote, though!

______________________________________________________________________
 
 
>From chalmers@violet.berkeley.edu Tue Feb  1 23:35:25 1994
Date: Wed, 2 Feb 1994 07:35:25 -0800
From: chalmers@violet.berkeley.edu (John H. Chalmers Jr.)
Message-Id: <199402021535.HAA15487@violet.berkeley.edu>
To: conlang@diku.dk
Subject: Laadan

I recently bought a copy of Suzette Haden Elgin's novel
"Earthsong Native Tongue III," the latest in her series
about the linguist clans who communicate with visiting ET's 
in the context of a repressive anti-feminist US.  I haven't 
finished it yet, so I won't attempt a review, save to say that 
it appears to contain little about Laadan (with an acute accent 
and a high pitch on first a), the conlang she created and used 
thematically in these books to _"express the perceptions of
women"_. 

However an address is given for those who wish to obtain a 
grammar, a dictionary, and more information about the language. 
Send a self-addressed stamped envelope to Laadan, P.O. Box 1137,
Huntsville, Arkansas, 72740 USA.

I have an earlier version of the grammar and dictionary back at
my home in Houston (I'm currently in Southern California). The
language is rather interesting as it incorporates a number of ideas
from aboriginal North American languages in a quite non-Indo-European framework.

-- John


______________________________________________________________________
 
 
>From shoulson@ctr.columbia.edu Wed Feb  2 06:32:23 1994
From: shoulson@ctr.columbia.edu (Mark E. Shoulson)
Received: from localhost (shoulson@localhost) by startide.ctr.columbia.edu (8.6.5/8.6.4.788743) id LAA04017; Wed, 2 Feb 1994 11:32:23 -0500
Date: Wed, 2 Feb 1994 11:32:23 -0500
Message-Id: <199402021632.LAA04017@startide.ctr.columbia.edu>
To: conlang@diku.dk
Subject: Laadan

I bought a copy of the "First Grammar and Dictionary of L\'aadan" or
whatever it's called some time ago; is there indeed a newer version?
L'aadan, to me, is somewhat interesting, though I disagree with some of
Elgin's premises.  But that's another story.

~mark

______________________________________________________________________
 
 
>From C.J.Fine@bradford.ac.uk Wed Feb  2 17:18:09 1994
Date: Wed, 2 Feb 1994 17:18:09 GMT
Message-Id: <29370.199402021718@discovery.brad.ac.uk>
Received: from Colin Fine's Macintosh (colin_fine.comp.brad.ac.uk) by discovery.brad.ac.uk; Wed, 2 Feb 1994 17:18:09 GMT
From: Colin Fine 
To: conlang@diku.dk
Subject: Re: Laadan

I found some of the ideas of La'adan very interesting (I bought a 
grammar, but I have no idea what I have done with it), but
being a mere man I do not know what it is about it that is 
especially suited to women.

I read Native Tongue and disliked it so much that I haven't
read the second one (I didn't know there was a third).

From the linguistic point of view, I think that she was doing
just the same as some hard science SF writers used to do in the
forties: give you some interesting nuggets of science (in this
case linguistics) and then retreat into double-talk for the 
magic ray that saves the hero (in this case the magic 
language that saves the women). In particular she devotes
a lot of space to the idea of 'encodings': new concepts for
which there has not been a word before. I am convinced that
insofar as a language may have potentially whorfian effects the 
accident of whether a concept has a word or needs to be
expressed in several words is minor, compared to subtle
distinctions which are expressed not in the vocabulary but
in the grammar: the range and variety of grammatical
categories, how aspects are handled, grammaticalisation 
of animacy or other hierarchies, perhaps features such as
the choice of accusative vs ergative marking.... 

In other words, while I accept that Laadan *might* have 
some powerful and wonderful properties for the mental
capabilities of its speakers, I believe that the arguments
within the book that are apparently supposed to explain
why are weak and mostly irrelevant.

I also found that the depiction of the anti-feminist society
got up my nose: it seemed to me to be serving the needs
of polemic rather than art. This is of course a personal
opinion.

	Colin Fine


______________________________________________________________________
 
 
>From lojbab@access.digex.net Wed Feb  2 09:39:43 1994
From: Logical Language Group 
Message-Id: <199402021939.AA26065@access2.digex.net>
Subject: Re: Laadan
To: conlang@diku.dk
Date: Wed, 2 Feb 1994 14:39:43 -0500 (EST)
Cc: lojbab@access.digex.net (Logical Language Group)

Colin Fine writes:

> insofar as a language may have potentially whorfian effects the 
> accident of whether a concept has a word or needs to be
> expressed in several words is minor, compared to subtle
> distinctions which are expressed not in the vocabulary but
> in the grammar:

[examples deleted]

While basically agreeing with what you say, and not having read a word of
Elgin, I think there may be something to be said for this notion of
"encodings", if deconstructed a bit.  Surely it makes little difference
whether a concept is expressed in one word or many, even given a language
where the notion "word" has vivid meaning (true of English, but false of
German, e.g.).  Nevertheless, two points come to mind:

1) Having a word for something may indicate that it is, to some degree,
culturally backgrounded.  As Larry Niven points out somewhere, being an
"American" indicates a different mental attitude from calling yourself
something that translates as "people, the people".  The first case implicitly
says that there exist other kinds of people, whereas the second does not:
the word for "white man" in various African and Native American languages
is often equal to, or related to, words for "ghost/devil/supernatural entity".

I was reflecting today that Japanese has a short word (I forget it, but it
has about 3-4 morae) for "non-verbal communication", which of course is a
very important concept in Japanese culture.  Then there is English, which
wastes a monosyllable on the concept "short cylinder used in playing
hockey and related games."  :-)

2) If two different words exist for what another language conveys with a
single word (possibly modified as needed), then the two different words
may have distinct polysemy sets.  In the conlang LeGuin uses in >Always
Coming Home<, there are distinct monosyllabic words for "male orgasm" and
"female orgasm", each with its own set of polysemous meanings.  I recall
that the word for "male orgasm" also means "achievement", which seems
plausible; I don't recall (alas) any of the polysemous meanings for
"female orgasm".

So maybe "lexical Whorfianism" isn't quite the crock we enlightened folk
always thought it was.

-- 
John Cowan		sharing account  for now
		e'osai ko sarji la lojban.

______________________________________________________________________
 
 
>From ucleaar@ucl.ac.uk Wed Feb  2 19:10:32 1994
From: ucleaar 
Message-Id: <25303.9402021910@link-1.ts.bcc.ac.uk>
To: conlang@diku.dk
Subject: Re: vocabulary considerations
Date: Wed, 02 Feb 94 19:10:32 +0000


Jacques to Rick H:

> First, the memory of the speaker. My brain capacity has a limit. 
> I don't know what it is, but I am sure it has. So there is a limit
> to what it can hold.

You're talking about idiolect, not language. A language is a collection
of features shared between idiolects. I would hold that the OED can
fairly be called a dictionary of English, even though no idiolect,
I would wager, contains all words listed in the OED.

So I agree with Rick. And I would pose a further question: When
adopting a word for a relatively novel concept, such as Boson,
Kangaroo, Krill or Modem, is there anything to be gained by
creating a word from preexisting morphemes? I prefer to hold
to the principle that words with related meanings have related
sounds, but that they needn't be analysable into component
morphemes. In fact 'boson' is quite a good example of this:
most names of particles end in -on, but the bos- has (to
me) no independent meaning (i.e. it is a cranberry-morph).

----
And


______________________________________________________________________
 
 
>From shoulson@ctr.columbia.edu Wed Feb  2 11:08:18 1994
From: shoulson@ctr.columbia.edu (Mark E. Shoulson)
Received: from localhost (shoulson@localhost) by startide.ctr.columbia.edu (8.6.5/8.6.4.788743) id QAA05589; Wed, 2 Feb 1994 16:08:18 -0500
Date: Wed, 2 Feb 1994 16:08:18 -0500
Message-Id: <199402022108.QAA05589@startide.ctr.columbia.edu>
To: conlang@diku.dk
Subject: Laadan

My wife read Native Tongue and didn't like it either; many of her comments
are similar to Colin's (though less linguistics-oriented).  I see some
interesting features of L'aadan, but very little that makes it so uniquely
"feminine" or even all that different.  Still, it has some fun parts.

~mark

______________________________________________________________________
 
 
>From lojbab@access.digex.net Wed Feb  2 11:45:45 1994
From: Logical Language Group 
Message-Id: <199402022145.AA02282@access2.digex.net>
Subject: Re: vocabulary considerations
To: conlang@diku.dk
Date: Wed, 2 Feb 1994 16:45:45 -0500 (EST)

And Rosta writes:

> In fact 'boson' is quite a good example of this:
> most names of particles end in -on, but the bos- has (to
> me) no independent meaning (i.e. it is a cranberry-morph).

That's because you don't belong to the relevant community of speakers.
Bosons are so named because they obey "Bose-Einstein statistics", whereas
fermions obey "Fermi-Dirac statistics".

For those who care:  Fermi-Dirac statistics apply to particles which are
indistinguishable but cannot be in the same quantum state simultaneously
(cannot be superposed), whereas Bose-Einstein statistics apply to particles
which can be superposed.  The statistics for distinguishable particles
(such as billiard balls) is called "Maxwell-Boltzmann", so presumably
billiard balls are maxwellons.

-- 
John Cowan		sharing account  for now
		e'osai ko sarji la lojban.

______________________________________________________________________
 
 
>From agallagh@stars.sfsu.edu Wed Feb  2 15:03:41 1994
From: agallagh@stars.sfsu.edu (Alexis Gallagher)
Message-Id: <9402030703.AA01623@stars.sfsu.edu>
Subject: Re: Laadan
To: conlang@diku.dk
Date: Wed, 2 Feb 94 23:03:41 PST

> which there has not been a word before. I am convinced that
> insofar as a language may have potentially whorfian effects the 
> accident of whether a concept has a word or needs to be
> expressed in several words is minor. . . .

	I am not a linguist or studying to be a linguist, but let me throw
in my two bits here (though I fear, slightly, that conlang may be the
wrong forum for these two bits).

	Somewhere in _Life of Johnson_ Boswell says how every idea or
thing has a single word associated with it.  Johnson, right away,
points to the phrase "old age," showing how there is no single-word
equivalent for it.  Boswell says that certainly there is: senectus, a
Latin word.  At this point Johnson expatiates on how there needn't be
one word in language for any particular concept, and on how two words
serve just as well. I am in general inclined to agree with Johnson
(and with the above quotation), but I'm not sure that the principle is
absolutely true.  Maybe this has been mentioned before, but it seems
to me that the limits of memory would have an effect.

	I can know what the word 'counterpoints' means.  Three syllables
are associated in my head with a fairly complex concept.  Now, perhaps
in a language with a smaller vocabulary (or in no language, but in
thoughts themselves, if it be true that many thoughts are
fundamentally non-linguistic in their representation in the brain),
this same concept 'counterpoint' could be expressed as a group of
simpler words, perhaps, say, as 'contrasting, balancing comparison'
(though even here I seem to have failed to use noticeably 'simpler'
words).  Perhaps I _could_ express it in simpler words, but how far
can this activity be carried?  Take the phrase "counterpoints the
surrealism of the underlying metaphor."  Each of the words in it is
difficult to explain in itself, and, if each word was reduced to some
compound or collection of simpler words, then I think the entire
phrase would become unmanageably large--mentally unwieldy in its sheer
size.  It would exceed the limits of my short term memory, would thus
be inexpressible and, perhaps, even unthinkable.  (Short term memory
may be the wrong phrase here.)

	I see words sort of as pegs in the mind, onto which we hang ideas.
Maybe a few pegs can do the job of many but if we load them up too
heavily, if we ask a few simple pegs (or even pegs just ill-suited for
a certain job) to bear the weight of an object they aren't equal or
designed to carry, then they snap.  We need the right peg for the
right job.

	This is my thought.  Begging pardon for the uncalled-for poeticism
of that last paragraph, I wonder what the rest of you think of this
notion of mine.

							Alexis Gallagher.




______________________________________________________________________
 
 
>From hrick@world.std.com Thu Feb  3 02:40:14 1994
Date: Thu, 3 Feb 1994 07:40:14 -0500
From: hrick@world.std.com (Richard Harrison)
Message-Id: <199402031240.AA13375@world.std.com>
To: conlang@diku.dk
Subject: Re: Laadan


agallagh@stars.sfsu.edu (Alexis Gallagher) writes:
 
> Take the phrase "counterpoints the surrealism of the underlying 
> metaphor."  Each of the words in it is difficult to explain in 
> itself, and, if each word was reduced to some compound or collection 
> of simpler words, then I think the entire phrase would become 
> unmanagably large
 
Hmmmm, wait a minute, the main words in your example sentence already 
_are_ compounds.  Counter- is a prefix used in other words (counter-
clockwise, counterculture, etc).  Real, real-ism, sur-real-ism.  
Under-lying.  Metaphor begins with meta- (also seen in metacarpal,
metalanguage, metamorphosis).  (Is -phor used in any other English
words?  Gametophore, anaphora??)  So, each of the content words in
your example sentence already is a collection of simpler words; we
just don't notice this immediately because we have grown accustomed
to these compounds.  (Perhaps if we concentrated on their component
parts, rather than thinking of them as single words, it would cause
an overflow in our output buffers as you suggested.)
 
> I see words sort of as pegs in the mind, onto which we hang ideas.
> Maybe a few pegs can do the job of many but if we load them up too 
> heavily, if we ask a few simple pegs (or even pegs just ill-suited 
> for a certain job) to bear the weight of an object they aren't equal 
> or designed to carry, then they snap.  We need the right peg for the 
> right job.
 
You have expressed your thought quite eloquently.  Let me add that
different people in different situations need different sets of pegs.
People in specialized careers need specialized jargon.  Vocabularies
for religious concepts, foodstuffs, articles of clothing and so forth
vary greatly from culture to culture.  
 

______________________________________________________________________
 
 
>From C.J.Fine@bradford.ac.uk Thu Feb  3 13:51:07 1994
Date: Thu, 3 Feb 1994 13:51:07 GMT
Message-Id: <17722.199402031351@discovery.brad.ac.uk>
Received: from Colin Fine's Macintosh (colin_fine.comp.brad.ac.uk) by discovery.brad.ac.uk; Thu, 3 Feb 1994 13:51:07 GMT
From: Colin Fine 
To: conlang@diku.dk
Subject: Re: Laadan AND vocabulary

I agree with John's and Alexis' points on vocabulary.
Certainly the presence or absence of concise expressions for
a concept, and of distinctions which in other languages are
not automatically made, can have a profound impact on
what gets said in the language. But I still believe that this
effect is more superficial than that of the deeper differences
I was mentioning earlier.

An interesting example of what Alexis is saying:

I have on my desk a booklet concerning Nordic Talking Books.
It is in seven languages (Norwegian, Swedish, Danish, Finnish,
Faroese, Icelandic and Greenlandic). The titles include:

Laes Nordiske Lydbo/ger (Danish)

Lesid+ norraenar hljo'd+baekur (Icelandic: d+ is the letter edh)

A"a"nikirjoja pohjoismaissa (Finnish)

Atuakkanik tusarnaagassianngorlugit immiussanik nunanit
avannarlerneersunik atuarit (Greenlandic)

The ones I understand say "Read Nordic sound-books" (I
don't think the imperative 'read' is there in the Finnish).
It is evident that the Greenlandic is much longer than any
of the others. I deduce from the text that the Greenlandic
for 'talking book' is 'atuak- tusarnaagassianngorlug- immiussa-' with 
some suitable collection of endings. (I guess the first word
means 'book' and the third 'recorded'). 

This seems to me to be an example of what Alexis is talking 
about - this is a concept which can be expressed in Greenlandic,
but not succinctly.

Encodings:

As I recall the discussion of encodings in Native Tongue (from
a long time ago) one example given was 'the palm of the hand
and the front of the forearm'. Now it may or may not be 
useful to have a single word for that concept (it reminds me
of a discussion somewhere over whether there was an English
word for the back of the knee). But I refuse to believe that the
presence or absence of a word for that concept in the language
is of such significance that it can have the described effect.
And if Elgin is not claiming that this is a major example, then
she has chosen a bad example to make her point.
(I cannot remember whether it was that or another example 
which was described in the book as a 'significant new encoding'
discovered by the young heroine).

The writer's problem is of course that any concept that she can
get across to the reader in English words is liable to the same
criticism as I have just made. But I think she is sacrificing 
scientific and logical plausibility for didactic purpose.

The kind of 'encoding' I might accept as plausible would be
something described in terms such as 'the difference between
being seen by somebody and being seen in an objective sense'

This example is off the top of my head, as an example of a distinction 
which can be expressed in English but seems to have little or no meaning
to us as English speakers - but I can conceive of a mind-set in which
it is meaningful, even though I do not know what it means. 
Interestingly, when studying or composing Lojban I often get a 
sense of this sort of distinction that I am sure is real but
I can't quite get my mind around.
For example

le ba nu mi klama = the (particular) future event(s) of I go
le nu mi ba klama = the (particular) event(s) of I future go

mi tadni le te djuno = I study the subject (of somebody's knowing about)
mi tadni le te smadi = I study the subject (of somebody's guessing about)
mi tadni le se casnu = I study the subject (of somebody's discussion)
mi tadni le te jijnu = I study the subject (of somebody's intuition)
	

	Colin


______________________________________________________________________
 
 
>From chalmers@violet.berkeley.edu Thu Feb  3 23:38:48 1994
Date: Fri, 4 Feb 1994 07:38:48 -0800
From: chalmers@violet.berkeley.edu (John H. Chalmers Jr.)
Message-Id: <199402041538.HAA01499@violet.berkeley.edu>
To: conlang@diku.dk
Subject: Laadan

As well as I can remember, there is a 2nd grammar and dictionary
of L/aadan as well. I sent a SASE to Huntsville, AR and will
post whatever I learn.

More interesting than the "encodings" perhaps are the inflectional
categories of L/'aadan. They mostly differentiate degrees and
types of duty, obligation, ownership, and attitude.  Whether these 
represent the perceptions of women is arguable -- to me they mostly 
describe relations of power and hence, would be equally applicable 
to men at the bottom of the socio-economic and political ladders.

I recall from a L/'aadan newsletter, received in connection with
publicity for an SF & fantasy convention that had sessions devoted
to the language, that considerable amazement was expressed that men
had devised "encodings" and transmitted them to Elgin.

I tend to agree that the books are a tough read, especially for me, 
but I found the language interesting on its own.

-- John


______________________________________________________________________
 
 
>From WEINBERG@GMUVAX.GMU.EDU Fri Feb  4 09:39:48 1994
Message-Id: <199402042242.AA24897@odin.diku.dk>
Date: Fri, 4 Feb 94 14:39:48 EST
From: STEVEN H. WEINBERGER 
To: conlang@diku.dk
Subject: laadan

Certainly there may be interesting linguistic relativity issues with the
female language Laadan, but I have found some very instructive phonological
alternations in that grammar--so instructive that I use Laadan data for 
a phonological exercise in my introductory phonology class at George Mason
University.  The students love it!

--steven weinberger

______________________________________________________________________
 
 
>From chalmers@violet.berkeley.edu Fri Feb  4 11:42:01 1994
Date: Fri, 4 Feb 1994 19:42:01 -0800
From: chalmers@violet.berkeley.edu (John H. Chalmers Jr.)
Message-Id: <199402050342.TAA24208@violet.berkeley.edu>
To: conlang@diku.dk
Subject: Laadan

Steve: Could you post some examples of the use of Laadan in your
introductory phonology class? I haven't looked at it for so long
that I've forgotten the structure of the words.

-- John

______________________________________________________________________
 
 
>From ram@eskimo.com Fri Feb  4 19:56:44 1994
Date: Sat, 5 Feb 1994 03:56:44 -0800
From: ram@eskimo.com (Rick Morneau)
Message-Id: <199402051156.AA06931@eskimo.com>
To: conlang@diku.dk, ram@eskimo.com
Subject: Re: vocabulary considerations


Don Harlow writes:
>
> If you intend your conlang to be used for literary purposes...
> you are going to have to contend with the insistence of those using
> it that there be several genuine synonyms for each concept in the
> language, so that the same word doesn't have to be repeated twice
> in the same paragraph or on the same page.
>

It's often comforting to have such synonyms, because it allows a writer
(such as myself) to make something that is poorly written look like
something that is well written.

Nowadays, when I see myself groping for a synonym to avoid an
awkward-sounding sentence or paragraph, I try to force myself to
rewrite the thing from scratch, rather than use a synonym.  If you've
got the willpower, this ALWAYS works.  However, it requires discipline
which I don't always have.  I see no reason to design a language to
make it easy for lazy or incompetent writers, such as myself.

An even better reason for avoiding synonyms, though, is that any
synonyms you create are likely to represent corresponding synonyms in
your native language.  Other languages, of course, will not have the
same synonyms.  Thus, by creating such synonyms in your conlang, you'll
simply be cloning your natlang.  For some people, this is acceptable.
For me, it is not.

On a similar note, synonyms also make it easier for poets to write
poetry.  However, making it easier to write poetry simply cheapens the
results.  A conlang designer should not concern himself with poetry.
Leave THAT job to the poet.

In sum, a conlang designer should not coddle, spoil, pamper, baby
or indulge incompetents or poets.  Let them sink or float on their own.

Regards,

Rick


*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
=*   Rick Morneau  ram@eskimo.com   "Be kind to nature -     =*
*=   Idaho Falls, Idaho, USA          brake for dinosaurs."  *=
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=

______________________________________________________________________
 
 
>From ram@eskimo.com Fri Feb  4 19:56:41 1994
Date: Sat, 5 Feb 1994 03:56:41 -0800
From: ram@eskimo.com (Rick Morneau)
Message-Id: <199402051156.AA06927@eskimo.com>
To: conlang@diku.dk, ram@eskimo.com
Subject: Re: vocabulary considerations


Howdy conlangers!

First, I apologize in advance for the length of what follows.  Since
I now pay long-distance phone rates for every byte that I read/write
via Internet, I've become very sensitive to verbosity.  I promise
that it won't happen often.


Rick Harrison again provides some interesting food for thought, this
time in an area that is closely related to one of my favorite topics:
lexical semantics.  Here are some of my thoughts on the points he
raised:


Concerning vocabulary size, compounding and derivation:

First, we must make a distinction between words and root morphemes.  If
compounding and/or derivation is allowed, as is true with every language
I'm familiar with (nat and con), then vocabulary size can be essentially
infinite.  Even in a system with little or no derivation (such as Chinese
and Vietnamese), you can create zillions of words from compounding, even
though the number of root morphemes is limited.  The problem here, though,
as Rick pointed out, is that you often have to metaphorically or
idiomatically stretch the meanings of the component morphemes to achieve
the desired result.  How, for example, should we analyze English compounds
such as "blueprint", "cathouse", "skyscraper" and "billboard"?

Another problem surfaces if you want your compounds to be semantically
precise (assuming, of course, that the basic components are semantically
precise to start with).  This will often mean that additional morphemes
must be added to a word to indicate how the component morphemes relate
to each other.  For example, what is the relationship between "house"
and "boat" in the word "houseboat"?  What is the relationship between
"house" and "maid" in the word "housemaid"?  Obviously, the relationships
are different.

A possible solution to this problem is to create compounds by
juxtaposing complete words, but keeping them separate, as is almost
always done in Indonesian, and often done in English.  Some English
examples are "stock exchange", "money order" and "polar bear".  To
remove ambiguities, you will still need the additional morphemes, only
this time they can be linking morphemes such as English prepositions. 
Swahili uses this approach for all of its compounds, and French uses it
for most (French examples: "salle a manger", "eau de toilette", "film en
couleurs", etc.).  If you wish to use this approach, though, make sure
that you have enough linking morphemes to deal with all possible
semantic distinctions.
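As a toy illustration of this idea (every linker and gloss below is invented, not taken from any actual conlang), a compound can be stored as head + linking morpheme + modifier, so the semantic relation between the parts is always explicit:

```python
# Hypothetical sketch: compounds kept as separate words joined by an
# explicit linking morpheme that names the semantic relation.
# All linkers and glosses here are invented for illustration.

COMPOUNDS = {
    "houseboat": ("boat", "used-as", "house"),   # a boat serving as a house
    "housemaid": ("maid", "works-in", "house"),  # a maid who works in a house
}

def spell_out(word):
    """Render a compound with its linking morpheme made explicit."""
    head, linker, modifier = COMPOUNDS[word]
    return " ".join((head, linker, modifier))

print(spell_out("houseboat"))  # boat used-as house
print(spell_out("housemaid"))  # maid works-in house
```

The same head/modifier pair with a different linker would yield a different compound, which is exactly the ambiguity the linking morphemes are there to remove.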

Unfortunately, if you don't have a very large and expandable set of root
morphemes, you'll definitely run into trouble if your goal is semantic
precision.  Personally, I don't like conlangs that limit the number of
possible root morphemes - you never know what you're going to run into
in the future.  A conlang should not only give itself lots of room for
expansion, but should make that expansion as easy as possible to implement.

Another thing that should be considered is how easy it will be to learn
the vocabulary.  This can be best achieved by limiting the number of
root morphemes.  But if we limit the number of root morphemes, we run
into the problems mentioned above!

Actually, there is a solution to this problem.  You must design your
vocabulary in two steps, as follows:

First, your conlang must have a powerful classificational and
derivational morphology for verbs.  (Other state words, such as
adjectives and adverbs, will be directly derived from these verbs.) This
morphology will be semantically precise.

Second, root morphemes should be RE-USED with unrelated NOUN classifiers
in ways that are mnemonic rather than semantically precise.  I.e., the
noun classifiers themselves will be semantically precise, but the root
morphemes used with them (and which will be borrowed from verbs) will be
mnemonic rather than semantic.

To clarify the first step somewhat, let me re-post something I posted
several months ago (slightly edited):

**********

     1. Design a derivational morphology for your conlang
	that is as productive as you can possibly make it.
	This will almost certainly require that you mark
	words for part-of-speech, mark nouns for class, and
	mark verbs for argument structure (i.e., valency and
	case requirements) and voice.

     2. Start with a common verb (or adjective) and
	decompose it into its component concepts using the
	above system.  For example, the verb "to know" has a
	valency of two, the subject is a semantic patient and
	the object is a semantic theme.  (The theme provides a
	focus for the state "knowledgeable".  Unfocused, the
	state "knowledgeable" would be closer in meaning to
	the English words "intelligent" or "smart".)

     3. The root morpheme meaning "knowledgeable/intelligent"
	can now undergo all the morphological derivations that
	are available for verbs.  Some of these derivations will
	not have counterparts in your natlang. Many others will.
	For example, this SINGLE root morpheme could undergo
	derivation to produce the following English words: "know",
	"intelligent", "teach", "study", "learn", "review",
	"instruct", plus words derived from these words, such as
	"student", "intelligence", "education", etc.  You will also
	be able to derive words to represent concepts for which
	English requires metaphor or periphrasis, such as "to broaden
	one's mind", "to keep up-to-date", etc.  It is important
	to emphasize that ALL of these words can be derived from
	a SINGLE root morpheme.

    In other words, use a back door approach - start with a powerful
    derivational system, and iteratively decompose words from a natlang
    and apply all derivations to the resulting root morphemes.  In doing
    so, many additional useful words will be automatically created,
    making it unnecessary to decompose a large fraction of the remaining
    natlang vocabulary.

**********
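The "apply all derivations to one root" step in the quoted procedure can be sketched in a few lines of Python; the root kna- and the affixes below are purely hypothetical:

```python
# Hypothetical sketch of step 3: one root morpheme run through every
# available verbal derivation.  The root "kna" (knowledgeable) and
# all affixes are invented for illustration.

DERIVATIONS = {
    "-sta": "to be {g} (know, be smart)",
    "-kau": "to cause to become {g} (teach)",
    "-fja": "to become {g} (learn)",
    "-rfx": "to make oneself become {g} (study)",
}

def derive_all(root, gloss):
    """Return every derived form of one root, with an English gloss."""
    return {root + affix: pattern.format(g=gloss)
            for affix, pattern in DERIVATIONS.items()}

# Four affixes already cover know/teach/learn/study from a single root.
for form, meaning in sorted(derive_all("kna", "knowledgeable").items()):
    print(form, "=", meaning)
```

Iterating this over a worklist of natlang words, and crossing off every word a generated form already covers, is the "back door" bootstrap described above.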

Now, let me clarify the second step:

Root morphemes used to create verbs can then be re-used with unrelated
NOUN classificational morphemes in a way that is semantically IMPRECISE,
intentionally, but which is mnemonically useful.  For example, a single
root morpheme would be used to create the verbs "see", "look at", "notice",
etc. by attaching it to appropriate classificational affixes for verbs.
These derivations would be semantically precise.  The SAME root morpheme
can then be used to create nouns such as "diamond" (natural substance
classifier), "glass" (man-made substance classifier), "window" (man-made
artifact classifier), "eye" (body-part classifier), "light" (energy
classifier), and so forth.

Thus, verb derivation will be semantically precise.  Noun derivation,
however, cannot be semantically precise without incredible complication
(try to derive the word for "window" from basic primitives).  So why not
re-use the verb roots (which define states and actions) with noun
classifiers in ways that are mnemonically significant?  Finally, if you
combine these two approaches with the compounding scheme mentioned earlier
(using linking morphemes), you will be able to lexify any concept while
absolutely minimizing the number of root morphemes in the language.
Incidentally, this approach also makes it trivially easy to create a
language with a self-segregating morphology.
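A minimal sketch of the second step, again with everything invented for illustration (the root "luk", glossed roughly as seeing/transparency, and all classifier affixes are hypothetical): the classifier carries the precise meaning, while the root is only a mnemonic hook.

```python
# Hypothetical sketch: one verb root re-used with precise noun
# classifiers; the root/classifier pairing is mnemonic, not
# semantically compositional.

NOUN_CLASSIFIERS = {
    "-na": "natural substance",   # luk-na -> "diamond"
    "-ma": "man-made substance",  # luk-ma -> "glass"
    "-ar": "man-made artifact",   # luk-ar -> "window"
    "-bo": "body part",           # luk-bo -> "eye"
    "-en": "energy",              # luk-en -> "light"
}

def noun(root, affix):
    """Build a noun; only the classifier part carries precise meaning."""
    return root + affix + " [" + NOUN_CLASSIFIERS[affix] + "]"

print(noun("luk", "-ar"))  # luk-ar [man-made artifact]
```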


Concerning concept mapping:

Again, let me save myself some typing by stealing something I wrote in
that earlier post:

**********

    In other words, use a back door approach - start with a powerful
    derivational system, and iteratively decompose words from a natlang
    and apply all derivations to the resulting root morphemes.  In doing
    so, many additional useful words will be automatically created,
    making it unnecessary to decompose a large fraction of the remaining
    natlang vocabulary.

    This approach won't guarantee that concept space will be perfectly
    subdivided, but it will be as close as you can get.  If anyone knows
    of a better system, please tell us about it.

    Another fairly obvious advantage is that your conlang will be easier
    to learn, since you'll be able to create many words from a small
    number of basic morphemes.  Ad hoc borrowings from natlangs will be
    minimized.

    Also, such a rigorous approach to word design has some interesting
    consequences that may not be immediately obvious.  If you use this kind
    of approach, you'll find that many of the words you create have close
    (but not quite exact) counterparts in your native language.  However,
    this lack of precise overlap is exactly what you ALWAYS experience
    whenever you study a different language.

    In fact, it is this aspect of vocabulary design that seems to
    frustrate so many conlangers, who feel that they must capture all
    of the subtleties of their native language.  In doing so, they merely
    end up creating a clone of the vocabulary of their natlang.  The result
    is inherently biased, semantically imprecise, and difficult to learn
    for speakers of other natlangs.  It is extremely important to keep in
    mind that words from different languages that are essentially
    equivalent in meaning RARELY overlap precisely.

    Fortunately, all of this does NOT mean that your conlang will lack
    subtlety.  In fact, with a powerful and semantically precise derivational
    morphology, your conlang can capture a great deal of subtlety, and can
    go considerably beyond any natural language.  The only difference is
    that, unlike a natural language, the subtleties will be predictable
    rather than idiosyncratic, and the results will be eminently neutral.

    So, do you want to create a clone of an existing vocabulary?  Or do you
    want to maximize the neutrality and ease-of-learning of the vocabulary
    of your conlang?  You can't have it both ways.

**********


Concerning hidden irregularities:

A classificational system automatically solves all count/mass/group
problems, since the classification will indicate the basic nature of the
entity represented by the noun.  Other derivational morphemes (let's
call them "class-changing morphemes") can then be used to convert the
basic interpretation into one of the others.  For example, from the
basic substance "glass", we can derive the instance of it: "a glass
item".  From the basic animal "sheep", we can derive its group meaning,
"flock", and its mass meaning, "mutton".  Each basic classifier would
have a default use depending on the nature of the classifier.  Further
derivation would be used to create non-default forms.  With this
approach, it would not even be possible to copy the idiosyncratic
interpretations from a natural language, since the classificational
system would eliminate all such idiosyncrasy.
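The default-plus-derivation scheme can be sketched as follows (all morphemes hypothetical): each noun's classifier fixes a default count/mass/group reading, and class-changing affixes derive the others.

```python
# Hypothetical sketch of class-changing morphemes: each noun has a
# default reading from its classifier; affixes derive the non-default
# readings.  All forms are invented for illustration.

DEFAULT_READING = {"glasa": "mass", "ovisa": "count"}   # "glass", "sheep"
CLASS_CHANGER = {"count": "-it", "mass": "-um", "group": "-ag"}

def reading(noun, wanted):
    """Bare form for the default reading; affixed form otherwise."""
    if DEFAULT_READING[noun] == wanted:
        return noun
    return noun + CLASS_CHANGER[wanted]

print(reading("glasa", "mass"))    # glasa     (the substance "glass")
print(reading("glasa", "count"))   # glasa-it  ("a glass item")
print(reading("ovisa", "group"))   # ovisa-ag  ("flock")
print(reading("ovisa", "mass"))    # ovisa-um  ("mutton")
```

Since every non-default reading must be derived explicitly, there is no bare form whose count/mass/group interpretation could be copied idiosyncratically from a natlang.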

All of the problems of verbal argument structure are solved in a
classificational system.  My essay goes into a lot of detail on this
point, so I won't say much here.  Basically, though, verbs are created
by combining a root morpheme that indicates a state or action with a
classifier which indicates the verb's argument structure.  For example,
the following verbs are formed from the same root morpheme, but with
different verbal classifiers that indicate the verb's argument structure:

	to teach (someone): subject is agent, object is patient
	to teach (something): subject is agent, object is theme
	to learn: subject is patient, object is theme
	to study: subject is both agent and patient, object is theme

As illustration, the semantics of the English verb "to teach someone
something" can be paraphrased as: 'agent' causes 'patient' to undergo
a change of state from less knowledgeable to more knowledgeable about
'theme'.

You will also need to make distinctions between steady state and change
of state.  The above examples all indicate changes of state (i.e., the
'patient' gains in knowledge).  Some steady-state counterparts, formed
from the same root morpheme, would be:

	to know: subject is patient, object is theme
	to be knowledgeable or smart: subject is patient, no object
	to review (in the sense "keep oneself up-to-date"): subject is
		both agent and patient, object is theme

You will also need an action classifier, which would indicate an ATTEMPT
to achieve a change of state, but with no indication of success or
failure.  For example, the root morpheme for the above examples could be
combined with an action classifier to create the verb "to instruct".

Thus, the verb classifier indicates the verb's argument structure, and
allows creation of related verbs from the same root morpheme, verbs that
almost always require separate morphemes in English.
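The classifier-to-argument-structure mapping above can be written as a simple lookup table; the classifier names are invented, but the role assignments follow the lists in this post (root "kna" = knowledgeable):

```python
# Hypothetical sketch of verb classifiers carrying argument structure.
# Classifier names are invented; role assignments follow the post.

VERB_CLASSIFIER = {
    # classifier: (subject role(s), object role, aspect)
    "cause":      (("agent",), "patient", "change"),           # to teach (someone)
    "become":     (("patient",), "theme", "change"),           # to learn
    "self-cause": (("agent", "patient"), "theme", "change"),   # to study
    "state":      (("patient",), "theme", "steady"),           # to know
    "attempt":    (("agent",), "patient", "attempted change"), # to instruct
}

def argument_structure(classifier):
    """Spell out the argument structure a classifier assigns to its verb."""
    subject, obj, aspect = VERB_CLASSIFIER[classifier]
    return "subject=%s object=%s (%s)" % ("+".join(subject), obj, aspect)

print("kna-become:", argument_structure("become"))
# kna-become: subject=patient object=theme (change)   ("to learn")
```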

Finally, if your conlang has a comprehensive system for grammatical
voice, even more words can be derived from the same morpheme.  For
example, if your language has an inverse voice (English does not), you
could derive the verbs "to own" and "to belong to" from the same root
morpheme.  Ditto for pairs such as "parent/child", "doctor/patient",
"employer/employee", "left/right", "above/below", "give/obtain",
"send/receive", etc.  Note that these are not opposites! They are
inverses (also called converses).  Many other words can also be derived
from the same roots if your conlang implements other voice transformations
such as middle, anti-passive, instrumental, etc.  You can save an awful
lot of morphemes if you do it right.  And even though English doesn't do
it this way, there are many other natural languages that do.  So there's
nothing inherently unnatural about this kind of system.  It's almost
certain, though, that no SINGLE natural language has such a comprehensive
and regular system.
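The inverse-voice idea reduces to exchanging the two arguments of one root, so that "X owns Y" and "Y belongs to X" denote the same proposition. A sketch (English glosses only, no actual conlang morphemes):

```python
# Hypothetical sketch of inverse (converse) voice: the same two-place
# root read with its arguments exchanged.

def proposition(root, subject, obj, inverse=False):
    """Reduce a clause to (arg1, root, arg2); inverse voice swaps roles."""
    if inverse:
        subject, obj = obj, subject
    return (subject, root, obj)

# One root, two English verbs: "own" in direct voice, "belong to"
# in inverse voice, and they come out as the same proposition.
direct = proposition("own", "Ann", "the house")
converse = proposition("own", "the house", "Ann", inverse=True)
print(direct == converse)  # True
```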

Finally, for those among you who want a Euroclone, I'm sorry, but I
have nothing to offer you.  Besides, I doubt if any of you even got
this far.  :-)


(BTW, how to do all of the above and much more is the topic of the essay
on Lexical Semantics that I am currently working on, and which was recently
mentioned on this list.  It goes into much, much more detail than what
I've provided here.  I will let the list know when it is done.)

Ciao for nao!

Rick


*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
=*   Rick Morneau  ram@eskimo.com   "Be kind to nature -     =*
*=   Idaho Falls, Idaho, USA          brake for dinosaurs."  *=
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=

______________________________________________________________________
 
 
>From EZ-as-pi@cup.portal.com Fri Feb  4 20:53:41 1994
From: EZ-as-pi@cup.portal.com
To: conlang@diku.dk, ez-as-pi@cup.portal.com
Subject: Re: vocabulary considerations
Lines: 44
Date: Sat,  5 Feb 94 04:53:41 PST
Message-Id: <9402050453.1.11158@cup.portal.com>

Rick Morneau recently repeated some
earlier-posted recipes for vocabulary
formation. I think they are interesting
and I might even be in agreement with
a fair proportion of the ideas he
expressed. I would, however, like to
say that the test of the practicality
of such a scheme is someone's actually
attempting to follow such a plan.

Some months ago, I put forth a scheme
for building a syntactic system, 
essentially a reversal of the system
of Japanese. A few people joined me in
the project, though it ultimately
foundered because the group was small
enough that the departure of just a few
from the project left us with the un-
fortunate situation that there was no
way of achieving consensus when two
people disagreed on an idea, unless the
third was allowed by himself to make
the decision. When there are three
active people, the person who cares the
least about some point at issue gets to
be the decision maker.
Whether it is one person, or a group,
unless somebody (or -bodies!) decides
to follow up Rick's ideas, we don't
really know that his somewhat vague
and abstract proposals can in fact be
followed.
While, after the Voksigid experience,
I am rather gun-shy about offering to
be part of a group to develop Rick's
ideas, I would appreciate it if anyone
who does could keep me informed about
any productive follow up to his note.

                   Bruce R. Gilson


______________________________________________________________________
 
 
>From EZ-as-pi@cup.portal.com Fri Feb  4 21:10:55 1994
From: EZ-as-pi@cup.portal.com
Received: from localhost (pccop@localhost) by hobo.corp.portal.com (8.6.4/1.16) id FAA16557 for conlang@diku.dk; Sat, 5 Feb 1994 05:10:56 -0800
To: conlang@diku.dk, ez-as-pi@cup.portal.com
Subject: Re: vocabulary considerations: verb patterning
Lines: 36
Date: Sat,  5 Feb 94 05:10:55 PST
Message-Id: <9402050510.1.11158@cup.portal.com>

Rick Harrison remarked about the vast
variety of verb patterns in English. I
think that here I'd like to put in a
plug for the approach we followed in
Voksigid: every verb is in fact an
"impersonal" in structure. Explicit
use of prepositional phrases, with
well-defined meanings for the preposi-
tions, related all nouns to the main
verb of the sentence.
Therefore, the word order was rather
flexible in that the individual phrases
could occur in any sequence that 
emphasis could suggest: Voksigid was
not really VSO or VOS, but merely verb-
first, with "subject" and "object" not
being valid concepts of the language;
instead we could consider the active
doer of the verb in question, the
recipient, the undergoer, etc. merely
as being defined by these prepositions
just as, in English, locatives can be
marked by "in" or instrumentals by
"with" or "by means of"; and the dif-
fering ways "subjects" or "objects"
relate to the verb can be expressed by
using different prepositions. I admit
that we got into some difficulties. I
have always felt that in the case of
verbs of sensation, for example, the
logical subject corresponds to the
English object and vice versa, but not
all of us agreed. But this could have
been worked out if we had more time and
people on the project.
                     Bruce R. Gilson

______________________________________________________________________
 
 
>From EZ-as-pi@cup.portal.com Fri Feb  4 21:25:13 1994
From: EZ-as-pi@cup.portal.com
Received: from localhost (pccop@localhost) by hobo.corp.portal.com (8.6.4/1.16) id FAA19490 for conlang@diku.dk; Sat, 5 Feb 1994 05:25:16 -0800
To: conlang@diku.dk, ez-as-pi@cup.portal.com
Subject: Re: vocabulary considerations
Lines: 18
Date: Sat,  5 Feb 94 05:25:13 PST
Message-Id: <9402050525.1.11158@cup.portal.com>

And Rosta wrote:

> ... In fact 'boson' is quite a good example of this:
>most names of particles end in -on, but the bos- has (to
>me) no independent meaning (i.e. it is a cranberry-morph).

To And, this is so. To me the associa-
tion of boson with Bose is there, as
I am familiar with the distinction
between bosons (which "obey Bose-
Einstein statistics") and fermions
(which "obey Fermi-Dirac statistics").
A lot of the associations that one 
makes between related words depend on
_knowing_ the related words.


                    Bruce R. Gilson

______________________________________________________________________
 
 
>From WEINBERG@GMUVAX.GMU.EDU Sun Feb  6 17:03:50 1994
Message-Id: <199402070304.AA02706@odin.diku.dk>
Date: Sun, 6 Feb 94 22:03:50 EST
From: STEVEN H. WEINBERGER 
To: conlang@diku.dk
Subject: Laadan as a phonology problem

Engl 690


This is a subset of the Laadan examples I use in my introductory
phonology class at George Mason University.

Laadan is a language constructed by a woman for women, for the specific
purpose of expressing the perceptions of women.  This exercise will not
address the whorfio-socio-political aspects of Laadan.  It will instead
focus on the phonology of this constructed language.  Describe the
epenthesis and deletion phenomena in the following data:

A.
bII  hal  ra   omId wa		the horse doesn't work.
DECL  work  NEG  horse TRUTH

bII   mEhal  ra   omId wa		the horses don't work.
DECL work(pl.) NEG  horse  TRUTH

bII   aja    mahIna  wa	       	the flower is beautiful.
DECL beautiful flower   TRUTH

bII   mEhaja    mahIna  wa	  the flowers are beautiful.
DECL beautiful(pl.) flower  TRUTH   

(Answer:  we assume that the plural prefix is /mE/.  /h/ is added to
keep vowels apart.)


B.
hohazh	airport	hohazhEdI	airport (goal)
hoth		place		hothEdI	place   (goal)

marI		island		marIdI	island  (goal)
Eba		spouse	EbadI		spouse (goal)

(Answer:  the "goal" suffix is /EdI/.  the initial vowel in the suffix is
deleted if the
root ends in a vowel.)
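The two rules in the answers above can be sketched as a short program.  One
assumption is mine, not stated in the exercise: in this transcription the
capital letters (E, I) count as vowels along with the lower-case ones.

```python
# Sketch of the two Laadan rules described in the answers above.
# Assumption (mine): capital E and I in this transcription are vowels.

VOWELS = set("aeiouAEIOU")

def pluralize(root):
    """Prefix plural /mE/, adding epenthetic /h/ to keep vowels apart."""
    return ("mEh" if root[0] in VOWELS else "mE") + root

def goal(root):
    """Suffix goal /EdI/, deleting its initial vowel after a
    vowel-final root."""
    return root + ("dI" if root[-1] in VOWELS else "EdI")

# pluralize("hal") -> "mEhal";  pluralize("aja") -> "mEhaja"
# goal("hoth") -> "hothEdI";    goal("marI") -> "marIdI"
```

Running these over the data sets A and B reproduces all the attested forms.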





______________________________________________________________________
 
 
>From hrick@world.std.com Mon Feb  7 02:27:23 1994
Date: Mon, 7 Feb 1994 07:27:23 -0500
From: hrick@world.std.com (Rick Harrison)
Message-Id: <199402071227.AA09958@world.std.com>
To: conlang@diku.dk
Subject: Re: vocabulary considerations

 
pa ze Morno kwi Rihk (Rick Morneau) shu:
 
> Root morphemes used to create verbs can then be re-used with unrelated
> NOUN classificational morphemes in a way that is semantically IMPRECISE,
> intentionally, but which is mnemonically useful. 
 
Well, I was cheering for you up to this point.  This suggestion is a bit
disappointing.  Why be so precise and predictable in the rest of your
proposal, and then take such a fuzzy approach to noun creation?  This
seems terribly inconsistent.
 
> For example, a single root morpheme would be used to create the verbs 
> "see", "look at", "notice", etc. by attaching it to appropriate 
> classificational affixes for verbs. These derivations would be semantically
> precise.  The SAME root morpheme can then be used to create nouns such as 
...
> "glass" (man-made substance classifier), "window" (man-made artifact 
 
Why not "eyeglasses" rather than "window"?  Really, this blurry method of 
noun creation opens the door to an endless stream of quibbles and
misinterpretations.
 
> Noun derivation, however, cannot be semantically precise without 
> incredible complication (try to derive the word for "window" from 
> basic primitives).
 
English "window" comes from Old Norse vindr + augu (wind-eye).  In
other words, it already _is_ derived from basic primitives.  In Chinese,
"window" _is_ a basic primitive, chuang1.*  
 
If excruciating precision is our goal, we might create a compound meaning
"opening in wall for:the:purpose:of passage done:by air and/or light."  
But this would be rather clumsy in your proposed system, as "light" is 
already "(energy somehow associated with) seeing" and "air" would presumably 
be "(natural substance somewhow associated with) breathing."  (I shudder
to think how you would express "wall"!  Vertical planar man-made artifact
for dividing interior spaces?)  (By the way, isn't "man-made artifact"
redundant?)
 
I'm looking forward to your essay.  Hopefully it will include a demo
conlang (more than just a few words) so that the audience can test the
viability of your heretical proposals.
 
---
 
* You propose atomizing meanings to the point that they are no longer 
  recognizable, much as a common object such as a hair or a grain of 
  salt becomes unrecognizable when viewed through a powerful microscope.  
  Graphing the semantic domain in a way that is so terribly different 
  from major natlangs would appear to increase the difficulty of learning.
  A conlang built as you propose might have a relatively small stock
  of roots, but this would be offset by having to learn the relatively
  bloated inventory of derivational affixes and the rules that control
  each, plus having to memorize the unpredictable meanings of all those
  nouns.
 

______________________________________________________________________
 
 
>From ram@eskimo.com Wed Feb  9 19:19:43 1994
Date: Thu, 10 Feb 1994 03:19:43 -0800
From: ram@eskimo.com (Rick Morneau)
Message-Id: <199402101119.AA11871@eskimo.com>
To: conlang@diku.dk, ram@eskimo.com
Subject: Re: vocabulary considerations


Howdy conlangers!

Rick Harrison writes:
>
> pa ze Morno kwi Rihk (Rick Morneau) shu:
>
Could you identify the above language(s) for me?

I wrote:
>
> Root morphemes used to create verbs can then be re-used with unrelated
> NOUN classificational morphemes in a way that is semantically IMPRECISE,
> intentionally, but which is mnemonically useful.
>

Rick Harrison responds:
>
> Well, I was cheering for you up to this point.  This suggestion is a bit
> disappointing.  Why be so precise and predictable in the rest of your
> proposal, and then take such a fuzzy approach to noun creation?  This
> seems terribly inconsistent.
>
I'm sorry you're disappointed.  Perhaps you misunderstand what I'm
trying to achieve.  Keep in mind that I'm talking about a CLASSIFICATIONAL
language where classifying morphemes are used in both verb and noun
formation.  Since there is no way to use verbal roots with noun
classifiers, and vice versa, in a way that is semantically precise, you
can either create a completely different set of root morphemes for nouns,
or you can re-use the verb roots for their mnemonic value.

Thus, for nouns, the combination of root+classifier becomes a de facto
new root, even though it has the morphology of root+classifier.  There
is nothing "fuzzy" about it as long as you keep in mind that it's just
a mnemonic aid.  To me, it seems like a great way to re-use roots that
would otherwise be underutilized.

Me again:
>
> "glass" (man-made substance classifier), "window" (man-made artifact 
>

Rick Harrison again:
>
> Why not "eyeglasses" rather than "window"?  Really, this blurry method of 
> noun creation opens the door to an endless stream of quibbles and
> misinterpretations ...
>
> ... English "window" comes from Old Norse vindr + augu (wind-eye).  In
> other words, it already _is_ derived from basic primitives.  In Chinese,
> "window" _is_ a basic primitive, chuang1.*  
>
Sorry, but I don't understand how your Norse and Chinese examples are
relevant.  If anything, they seem to support my point.

Most complex nominals used in natural languages are not semantically
precise - they simply provide clues.  What I'm suggesting is something
akin to "blurry" English words such as "whitefish", "highland",
"seahorse", etc., only the noun classifiers themselves would be more
generic, but would have semantically precise definitions.  Thus, what I
proposed is actually much closer to what is done in Bantu languages such
as Swahili, since it is morphological rather than lexical.

In essence, I am suggesting that you use semantic precision only when
it is practical.  Re-use root morphemes as mnemonic aids when semantic
precision is not practical.  The alternative is to create many hundreds
(perhaps thousands) of additional root morphemes which will have to be
learned by the student.

Rick Harrison again:
>
> (I shudder to think how you would express "wall"! Vertical planar
> man-made artifact for dividing interior spaces?)
>
Of course not.  I hoped that my noun examples would make it obvious that
my goal is to AVOID semantic decomposition of basic nouns, since it
could never succeed without resulting in incredibly long words, as your
"wall" example indicates.  I don't understand how you could have
interpreted my examples in this way.

Rick Harrison again:
>
> * You propose atomizing meanings to the point that they are no longer 
>  recognizable, much as a common object such as a hair or a grain of 
>  salt becomes unrecognizable when viewed through a powerful microscope.  
>
Absolutely not.  Again, I don't understand how you could come to a
conclusion like this from what I wrote.  (Perhaps I should not have
tried to summarize a very long essay in just a few paragraphs.)

Verb design is "atomized" only to the extent that argument structure
and basic verbal nature (steady-state, change-of-state or action) are
specified by the classifier.  I thought my examples made this obvious.
Thus, a single root morpheme plus one classifier would result in the verb
"to teach".  The same root morpheme plus a different classifier would
result in the verb "to study".  And so forth.  Is this "atomizing"?  If
so, I advise you to avoid languages like Arabic, Turkish, Swahili, Tamil,
Hungarian, Quechua, Japanese, Indonesian, Hindi, Korean, ad nauseam.
Even languages closely related to English (such as French and German)
make distinctions in verbal morphology between the agentive "He opened
the door" and the stative, non-agentive "The door opened".  The non-
European languages in the above list make even more distinctions.  In
fact, all of the derivational morphology in my essay has counterparts in
natural languages.  The only difference is that (to my knowledge) no
single language makes ALL of the distinctions I make, nor do they
implement them with total regularity.
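The teach/study example above can be sketched as a toy program.  The
concrete morpheme shapes here ("nel", "na", "ki") are invented for
illustration; Morneau's essay's actual forms are not given in this thread.

```python
# Toy sketch of root + classifier verb derivation as described above.
# All morpheme shapes are invented for this example, not taken from
# the actual essay.

VERB_LEXICON = {
    ("nel", "na"): "to teach",   # root + agentive-action classifier
    ("nel", "ki"): "to study",   # same root + receptive classifier
}

def derive(root, classifier):
    """Derivation is bare, perfectly regular concatenation."""
    return root + classifier

# derive("nel", "na") -> "nelna", glossed "to teach"
# derive("nel", "ki") -> "nelki", glossed "to study"
```

The point of the sketch: one root, several classifiers, each derivation
fully regular in form even where (for nouns) it is only mnemonic in sense.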

And as for nouns, the combination of verbal root plus noun classifier
becomes a de facto noun root.  The classifier has a precisely defined
meaning, while the verbal root simply makes a mnemonic contribution to
the final meaning.  The word "atomizing" simply does not apply, unless
you consider ANY type of classification as "atomizing".

Rick Harrison again:
>
> Graphing the semantic domain in a way that is so terribly different 
>  from major natlangs would appear to increase the difficulty of learning.
>
Both noun and verb classifiers are used, in effect, to create most of
the vocabularies of languages such as Swahili and Arabic.  Also, many
oriental languages make heavy use of lexical classification schemes.  I
admit, though, that my approach has little in common with most European
languages.  (Which I consider a definite advantage.  Euroclones are a
dime a dozen.)

Even so, there is nothing "terribly different" about my scheme.
English creates many complex nominals this way (e.g., "cutworm", "white
water", "red ant", etc.).  My approach, though, uses noun classifiers
that are slightly more generic than "worm", "water" and "ant".  In
effect, it is much more similar to Bantu languages of Africa or
several aboriginal languages of Australia.  These languages, though,
are at the opposite extreme from English, since their classifiers are
even vaguer than what I propose. Thus, my ideas fit in quite snugly
between the opposite poles of classificational possibility.

Difficult???  Adding regularity to word design will make it easier, not
more difficult.  Is Esperanto more difficult because its inflectional
system is perfectly regular?  Of course not.  Just because perfect
regularity in a natlang is extremely rare does not mean that we should
avoid it in the construction of a conlang.  Or are you saying that it's
okay to have regularity in syntax and inflectional morphology, but that
it's NOT okay to have regularity in derivational morphology or lexical
semantics?

I suggest that most conlangs are irregular in derivational morphology
and lexical semantics because their designers are not aware that such
regularity is even possible.

Rick Harrison again:
>
>  A conlang built as you propose might have a relatively small stock
>  of roots, but this would be offset by having to learn the relatively
>  bloated inventory of derivational affixes and the rules that control
>  each, plus having to memorize the unpredictable meanings of all those
>  nouns.
>
Instead of being forced to learn thousands of unique-but-related
verbs, I would rather learn about one-tenth as many, plus a few dozen
classifiers and a few perfectly regular rules that apply without exception.
As for nouns, mnemonic aids make them easier to learn - their meanings are
unpredictable only if you fool yourself into thinking that they SHOULD BE
predictable.

Rick, I think (hope?) that there are two reasons why you have difficulty
with my post.  First, you raised a topic that I've given a lot of thought
to, and I tried to summarize a large quantity of material that I've
written on the topic in just a few paragraphs.  Misunderstanding was
inevitable.  Second, a classificational language may not hold much
appeal for you.  If so, I'm sure you're not alone.

I choose this approach because it has several advantages.  First, and
least important, it makes word design fast and easy.  Second, it makes
learning the language easier.  Third, it is totally neutral - no one
will accuse you of cloning your native language.  Yet nothing in my
approach is unnatural - every aspect of it has counterparts in some
natural languages.  Fourth, and most important, a powerful
classificational and derivational system forces the conlang designer
to be systematic.  If done properly, it will prevent the adoption of
ad hoc solutions to design problems.

Aaaiiieeeyaaah!
That fourth point is SO important, that I want to repeat it.
But I won't. :-)

I also believe that the result will have more esthetic appeal to a
larger number of people of varied backgrounds.  A conlang with a large
contribution from European languages may appeal to Europeans, but it
will probably not be as appealing to non-Europeans.


Regards,

Rick


*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
=*   Rick Morneau  ram@eskimo.com   "All kings is rapscallions"  *=
*=   Denizen of Idaho, USA                   --Mark Twain        =*
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=

______________________________________________________________________
 
 
>From ucleaar@ucl.ac.uk Thu Feb 10 21:20:51 1994
From: ucleaar 
Message-Id: <63661.9402102120@link-1.ts.bcc.ac.uk>
To: conlang@diku.dk
Subject: Re: vocabulary considerations
Date: Thu, 10 Feb 94 21:20:51 +0000


Rick M:
> Rick Harrison writes:
> >
> > pa ze Morno kwi Rihk (Rick Morneau) shu:
> >
> Could you identify the above language(s) for me?

It's Vorlin, I reckon. I seem to remember Rick once explaining
_ze_ and _kwi_ as Vorlin markers of personal and family names.
It's a shame we don't hear much about Vorlin on Conlang these
days.

----
And

______________________________________________________________________
 
 
>From fritz@rodin.wustl.edu Thu Feb 10 20:58:48 1994
Date: Fri, 11 Feb 94 02:58:48 CST
From: fritz@rodin.wustl.edu (Fritz Lehmann)
Message-Id: <9402110858.AA05773@rodin.wustl.edu>
To: conlang@diku.dk
Subject: Intro/AI/compound-nouns/concept-systems


Dear Conlangers:            10 Feb 1994     conlang@diku.dk

     I'm new to this list.  By way of introduction, I'm working in
Artificial Intelligence, particularly semantic networks and
ontological systems (conceptual hierarchies) for the real world.
I see the inventors of a-priori philosophic planned languages (and
pasigraphic languages) as independent thinkers who deal with many
deep conceptual issues needed for pragmatic communication.  Their
ideas may be important sources for artificial intelligence, and
conceptual systems developed in artificial intelligence may be
useful sources for systems of primitives in planned languages.

     Included at the end of this message is a list which I've
compiled of all the potentially useful concept systems I know
about from Aristotle up to the latest computerized taxonomies.  I
sent a copy to Rick Harrison already.  Notice that several
constructed languages  appear on the list.  Maybe there are others
(those with carefully thought out conceptual primitives, etc.,
rather than the volapuk-esperantoids) which I don't know about
which should be on the list --- if so, please let me know.

     I believe concepts form networks (semantic networks) rather
than strings, so planned languages which can be spoken or written
linearly are stuck with the job of converting a network into a
string.  This is usually done with syntax trees (and pronoun co-
reference where cycles occur).

     Rick Morneau replying to Rick Harrison defended fine-grained
conceptual compositionality for some noun combinations.  I support
this, but I too am bothered by noun compounds in which the
relation between the primitive noun elements is "assumed".  A
"mnemonic" composition isn't good enough, in my books.  A
combination of "alligator" and "shoes" should differ from the one
of "horse" and "shoes"  -- and "olive"+"oil" should differ from
"baby"+"oil".  What's missing is usually the relevant case-like-
relation (like USED-FOR or PART-OF or AGENT-OF).  It seems to me
that there are few enough of these basic relations that they could
be very compactly encoded (say with vowel-diphthongs).  If oil is
BOR, olive is BAK, and baby is BIK, I think BORaiBAK and BORiuBIK
are not much more burdensome on the speaker than BORBAK and
BORBIK, but vastly better if "ai" is "DERIVED-FROM" and "iu" is
something like "USED-FOR".  Case-relations are not just separate
entities -- they occur in a hierarchy of such relations.  (See
A.F. Parker-Rhodes "Inferential Semantics", Harvester/Humanities
Press, 1978, Chapter XI.)  Some case-relations are more general
than (i.e. they subsume) other more specific ones, so the case-
mark in compound nouns need not refer to a too-specific case-
relation.
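The proposal can be sketched directly.  BOR, BAK, BIK, "ai", and "iu" are
the illustrative forms from the paragraph above, not a real lexicon.

```python
# Sketch of the vowel-diphthong case-mark proposal above.  The forms
# are the paragraph's own illustrative examples, not a real language.

CASE_MARKS = {
    "DERIVED-FROM": "ai",
    "USED-FOR": "iu",
}

def compound(head, relation, modifier):
    """Join two roots with the diphthong naming their relation."""
    return head + CASE_MARKS[relation] + modifier

# compound("BOR", "DERIVED-FROM", "BAK") -> "BORaiBAK"  ("olive oil")
# compound("BOR", "USED-FOR", "BIK")     -> "BORiuBIK"  ("baby oil")
```

With the relation made explicit, olive oil (oil derived from olives) and
baby oil (oil used for babies) no longer rely on an "assumed" relation.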

     Because I'm so new I don't even know what language they are
talking about -- presumably one being devised by Rick Morneau. 
(It sounds very interesting and I'd like to know where I could see
a description.)  The problem of the "unspecified relation" plagues
the compound predicates and the predicate juxtapositions in
Loglan/Lojban too, as Paul Doudna and others pointed out some
years ago.

                          Yours truly,   Fritz Lehmann
4282 Sandburg, Irvine, CA 92715  714-733-0566  fritz@rodin.wustl.edu
====================================================================


LIST APPENDED:
     [Version 4: Thanks to Piet-Hein Speel, Paul Doudna, Kurt Godden, Nick Youd,
Jim Fulton and especially to Dan Fass for COPIOUS new material. Changes:
RICHENS/MASTERMAN/WILKS is now divided into: RICHENS 100 MINIMALS (he
invented "semantic nets" in 1956), MASTERMAN'S SEMANTIC LATTICES,
PREFERENCE SEMANTICS PRIMITIVES (Wilks), and FASS'S GENUS CLASSIFICATION.
The RELATIONAL LEXICON HIERARCHY becomes EVENS/NUTTER LEXICAL RELATION
HIERARCHY.  New total= 147 concept-systems.       FL 11/12/93]
===================================================================

                    CONCEPT-SYSTEMS CATALOGUE

Fritz Lehmann
4282 Sandburg Way, Irvine, CA 92715 USA (714)733-0566
[occasionally accessed email: fritz@rodin.wustl.edu]

Version 4 of: November 1993

	This is to be an informal catalogue of existing concept 
catalogues, taxonomies and hierarchies (including high level 
"ontologies") for possible use in knowledge representation, 
artificial intelligence, simulation, and database integration.
Anybody can contribute (and be acknowledged).  Each concept system
is to be described (in a page or less) with some references and other

information.  I hope to be inclusive, with emphasis on potentially
machine-readable/usable concept (and relation) hierarchies.

	Some people think there is ONE concept system 
for the true structure of the world.  Others like me 
think pragmatic concerns (subjective, mission-determined, or
socially agreed-upon) may dictate different structures.  Most 
"ontologies" have large areas of near-agreement on concepts 
like time, space, individuals, properties, etc.  Technical 
thesauri deal with more specific subject areas like 
accounting, subfields of medicine, or plumbing fixtures.
Philosophical concepts are necessary but controversial; some 
concepts like "check-stub" are quite uncontroversial. 

	The list is now intentionally a grab-bag.  It ranges from
universal to fairly problem-specific, informal to formal.
Formalized or not, two aspects of every system are: its 
purely mathematical (order) structure, and the meanings of 
its components.  Notation or language is incidental to both.

	The page ordering, for now, is vaguely chronological.  
A concept in one system may differ entirely from a concept
with the same name in another system.  Please let me know
of ANY OTHER concept-systems you know about.
-------------------------------------------------
	[I started this list Nov. 20, 1992 for the "PEIRCE project" 
(a cooperative international implementation of a Conceptual 
Graphs inferential database processing system, initiated by 
Gerard Ellis and Robert Levinson), with a list of 84 
systems beginning with Aristotle's.  CODES: (i+)= I now have
a little information; (i-)= I have almost no information; (I)=
I have enough information; (p+)= Page written; (p-)= Page not
written; (nr)= Need references; (nd)= Need descriptive documents;
(c=)= Contributed information, besides me.]


     ARISTOTLE'S CATEGORIES                          (i+,p-,nr)
     LLULL'S ARS MAGNA                               (I,p+)
     LEIBNIZ'S ARS COMBINATORIA & CHAR. UNIVERSALIS  (I,p+)
     LODWYCK'S COMMON WRITING                        (i+,p-,nd)
     DALGARNO'S ARS SIGNORUM                         (i-,p-,nr,nd)
     WILKINS'S PHILOSOPHICAL LANGUAGE                (I,p-)
     LINNAEUS BIOLOGICAL TAXONOMY                    (i+,p-,nr)
     (ANONYMOUS, c. 1830) UNIVERSAL CHARACTER        (I,p-)
     CAVE BECK                                       (i-,p-,nr,nd)
     KANT'S CATEGORIES                               (i+,p-,nr,nd)
     ROGET'S THESAURUS                               (I,p-)
     PEIRCE'S CATEGORIES                             (I,p-,nr)
     BOLZANO                                         (i+,p-,nr,nd)
     MEINONG                                         (i-,p-,nr,nd)
     BRADLEY                                         (i-,p-,nr,nd)
     DEWEY DECIMAL, BLISS, UDC & LIBRARY OF CONGRESS (I,p-)
     HUSSERL'S ONTOLOGY                              (i+,p-,nr,nd)
     PRINCIPIA MATHEMATICA                           (I,p-,nd)
     WHITEHEAD'S PROCESS THEORY                      (i+,p-,nd)
     LIESNIEWSKI'S MEREOLOGY & SO-CALLED ONTOLOGY    (i+,p-,nr,nd)
     BASIC ENGLISH                           (I,p-,c=Fass,Thorson)
     SEMANTOGRAPHY/BLISSYMBOLICS SYMBOLS             (I,p-)
     RICHENS'S 100 "SEMANTIC NET" MINIMALS    (i+,p-,nr,nd,c=Fass)
     CECCATO'S CORRELATION NET PRIMITIVES            (i-,p-,nr,nd)
     MASTERMAN'S SEMANTIC LATTICES                   (i+,p-,nr,nd)
     LINCOS INTERPLANETARY LANGUAGE                (I,p-,c=Godden)
     R.M. MARTIN'S SEMIOTIC PRIMITIVES               (i+,p-,nr,nd)
     COLON FACETED LIBRARY CLASSIFICATION            (i+,p-,nd)
     THE SYNOPTICON (FOR ENCYC. BRIT. GT BOOKS)   (I,p-,c=Salsman)
     DEEP CASE SYSTEMS                               (i+,p-,nd)
     LOGLAN/LOJBAN SEMANTIC PRIMITIVE WORD ROOTS     (I,p-)
     INGARDEN'S ARISTOTLE REVISION                   (i-,p-,nd)
     LAFFAL'S CONCEPT DICTIONARY                     (I,p-)
     LEECH'S SEMANTICS                        (i-,p-,nr,nd,c=Fass)
     SCHANK'S CONCEPTUAL DEPENDENCY THEORY           (I,p-)
     ACM COMPUTER SCIENCE CLASSIFICATION             (i-,p-,nr,nd)
     SHUM "SPIRITUAL" NETWORKS                       (i-,p-,nr,nd)
     ANTHROPOLOGICAL CLASSIFICATIONS                 (i-,p-,nr,nd)
     PROPAEDIA OF ENCYCLOPAEDIA BRITANNICA        (I,p-,c=Van Roy)
     WEBER RUSSELL'S CATEGORIES OF NOMINALS      (i-,p-,nd,c=Fass)
     PARKER-RHODES' INFERENTIAL SEMANTICS LATTICES   (I,p-)
     WIERZBICKA'S LINGUA MENTALIS                    (i+,p-,nd)
     SCHEELE'S ORDNUNG DES WISSENS            (i-,p-,nr,nd,c=ISKO)
     PATENT CLASSIFICATION SYSTEMS            (i-,p-,nr,nd,c=ISKO)
     NTIS/DOD/COSATI CLASSIFICATION SCHEME    (i-,p-,nr,nd,c=ISKO)
     UNESCO THESAURUS                         (i-,p-,nr,nd,c=ISKO)
     BROAD SYSTEM OF ORDERING                 (i-,p-,nr,nd,c=ISKO)
     RUSSIAN MISON "RUBRICATOR" CLASS. CODES  (i-,p-,nr,nd,c=ISKO)
     BHATTACHARYA'S CLASSAURUS                       (i-,p-,nr,nd)
     AUSTIN'S PRECIS CONCEPT ANALYSIS & INDEXING     (i-,p-,nr,nd)
     KAMP'S DISCOURSE REPRESENTATION STRUCTURES      (i-,p-,nr,nd)
     PREFERENCE SEMANTICS PRIMITIVES           (I,p-,c=Wilks,Fass)
     MILLER/JOHNSON-LAIRD PRIMITIVES          (i+,p-,nr,nd,c=Fass)
     HAYES'S NAIVE PHYSICS                           (i+,p-,nr,nd)
     LEHNERT'S OBJECT PRIMITIVES                 (i+,p-,nd,c=Fass)
     SCHANK/CARBONELL SOCIAL/POLITICAL ACTS      (i+,p-,nd,c=Fass)
     EXPLANATORY-COMBINATORY DICTIONARY (MEANING-TEXT)  (i+,p-,nd)
     MeSH - MEDICAL SUBJECT HEADINGS THESAURUS       (i-,p-,nd)
     JOLLEY'S HOLOTHEME                              (I,p-)
     ZARRI'S RESEDA ONTOLOGY                         (I,p-)
     LENAT'S AM/EURISKO MATH CATEGORIES              (I,p-)
     SOWA'S CONCEPTUAL GRAPHS PRIMITIVES             (I,p-)
     BARWISE/PERRY/DEVLIN SITUATION SEMANTICS        (I,p-)
     SCHUBERTIAN ("ECO") SUBHIERARCHIES              (I,p-)
     FICTION CLASSIFICATION SCHEMES           (i+,p-,nr,nd,c=ISKO)
     WAHLIN'S T.I.M, MANUFACTURING FACETS        (i+,p+,nd,c=ISKO)
     CITIZENS ADVICE BUREAU CLASSIFICATION UK (i-,p-,nr,nd,c=ISKO)
     QUALITATIVE PHYSICS PRIMITIVES                  (i-,p-,nr,nd)
     SMITH-MULLIGAN ONTOLOGY                         (i+,p-,nr,nd)
     SIMONS' PART SYSTEM                             (i-,p-,nr,nd)
     SMALLTALK DATA TYPE TREE                        (i-,p-,nr,nd)
     OBJECTIVE-C (NeXTSTEP) DATA TYPE TREE           (i-,p-,nr,nd)
     BOOCH/RATIONAL OBJECTS HIERARCHIES              (i-,p-,nr,nd)
     WUESTER'S GENERAL THEORY OF TERMINOLOGY     (i+,p+,nd,c=ISKO)
     KAB GERMAN LIBRARY CLASSES (K.LEHMANN)      (i-,p-,nd,c=ISKO)
     GRAESSER'S MULTIPLE CONCEPT HIERARCHIES         (i+,p-,nd)
     DAHLGREN/McDOWELL NAIVE SEMANTICS              (i+,p-,c=Youd)
     LONGMAN DICTIONARY CODINGS (INCL. SLATOR)       (i+,p-,nr,nd)
     LONGMAN LEXICON (THESAURUS)                     (i-,p-,nr,nd)
     BURGER'S THE WORDTREE                           (I,p-)
     FASS'S GENUS CLASSIFICATION                  (I,p-,nd,c=Fass)
     PENMAN UPPER MODEL                              (I,p-,c=Hovy)
     ICONCLASS ART SUBJECT CLASSIFICATION            (i+,p-)
     SPARCK JONES/BOGURAEV DEEP CASE LIST            (I,p-)
     COOK ONTOLOGY                                   (i-,p-,nr,nd)
     DIXON ONTOLOGY                                  (i-,p-,nr,nd)
     VARIOUS WILLE-STYLE FORMAL CONCEPT LATTICES     (i+,p-,nr,nd)
     RUSSIAN MERONOMY/TAXONOMY (SHREIDER ET AL.)     (i-,p-,nr,nd)
     SOMERS'S CASE GRID                              (i+,p-,nd)
     CHAFFIN'S RELATION HIERARCHY                    (i+,p-,nd)
     LENAT/GUHA CYC PROJECT                          (i+,p-,nd)
     EPSTEIN AM-BASED GRAPH THEORY HIERARCHY         (i-,p-,nd)
     MARTY'S SEMIOTIC LATTICES                       (i-,p-,nr,nd)
     GEOGRAPHIC DATABASE CONCEPTS                    (i-,p-,nr,nd)
     MACKWORTH/REITER MAPSEE GEOGRAPHIC MAP AXIOMS   (i+,p-,nd)
     IRDS DATABASE CATEGORIES                        (i-,p-,nr,nd)
     HOBBS' COMMONSENSE ONTOLOGY                     (I,p-)
     GORANSON SYMMETRIES                             (i-,p-,nr,nd)
     PANSYSTEMS PHILOSOPHICAL LOGIC (CHINESE)        (i+,p-,nd)
     GIUNCHILIA'S ITALIAN PREPOSITIONS               (i-,p-,nr,nd)
     LILOG ONTOLOGY                          (i-,p-,nr,nd,c=Speel)
     VELARDI'S SEMANTIC LEXICON FOR ITALIAN          (i-,p-,nr,nd)
     ONTEK, INC.'S ONTOLOGY                          (i-,p-,nd)
     LAKOFF'S CATEGORIES                             (i+,p-,nd)
     HUHNS/STEPHENS RELATION FEATURES                (i+,p-,nd)
     WORDNET                                   (I,p-,c=Consortium)
     EDR CONCEPT DICTIONARY (JAPAN)                  (i-,p-,nr,nd)
     ONTOS CONCEPT HIERARCHY                         (i-,p-,nr,nd)
     DOUDNA QUANTIFIER RHOMBIDODECAHEDRON          (I,p-,c=Doudna)
     SCRAMBLED ROGET (5th Ed.)                       (i-,p-,nr,nd)
     NIRENBURG'S DIONYSUS ONTOLOGY                   (i-,p-,nr,nd)
     SCHUBERT/HWANG EPISODIC LOGIC CATEGORIES        (i+,p-,nd)
     UNITRAN-LCS                                     (i-,p-,nr,nd)
     EVENS/NUTTER LEXICAL RELATION HIERARCHY         (i+,p-,nd)
     PUSTEJOVSKY'S QUALIA/EVENTS              (i-,p-,nr,nd,c=Fass)
     FULTON'S SEMANTIC UNIFICATION METAMODEL      (i-,p-,c=Fulton)
     RANDOM HOUSE WORD MENU CATEGORIES               (i-,p-)
     PANGLOSS ONTOLOGY BASE                          (i-,p-,nr,nd)
     YALE'S ESPERANTO THESAURUS                      (i-,p-,nr,nd)
     DARPA/ROME PLANNING ONTOLOGY                    (i-,p-,nr,nd)
     CIMOSA BUSINESS ENTERPRISE ARCHITECTURE MODEL   (I,p-)
     ICAM MANUFACTURING REFERENCE MODEL              (i-,p-,nr,nd)
     IWI (GERMAN) MANUFACTURING REFERENCE MODEL      (i-,p-,nr,nd)
     EDI (ELEC. DATA INTERCH.) BUSINESS COMM STANDARD(i-,p-,nr,nd)
     GUARINO CONCEPT/RELATION ONTOLOGY               (i+,p-,nd)
     SKUCE ONTOLOGY                                (I,p-,c=Sarris)
     PETRIE ONTOLOGIES                               (i-,p-,nr,nd)
     TEPFENHART ONTOLOGY                             (i-,p-,nd)
     G.E. SEMANTIC HIERARCHY & LEXICON (RAU ET AL.)  (i-,p-,nr,nd)
     PLINIUS CERAMICS ONTOLOGY                      (I,p-,c=Speel)
     MARS' KELVIN MEASUREMENT HIERARCHY      (i-,p-,nr,nd,c=Speel)
     RANDELL & COHN'S SPATIOTEMPORAL LATTICES        (I,p-)
     PDES/STEP PRODUCT DESCRIPTION STANDARDS         (i+,p-,nr,nd)
     HARTLEY'S TIME AND SPACE WORLD                  (i+,p-,nd)
     ONTOLINGUA-KIF                            (i+,p-,nd,c=Gruber)
     GRUBER'S QUANTITIES/UNITS ONTOLOGY        (i-,p-,nd,c=Gruber)
     GENERIC BIBLIOGRAPHY CONCEPT SYSTEM       (i-,p-,nd,c=Gruber)
     DICK'S CASE-RELATION SYSTEM FOR LEGAL ANALYSIS  (I,p-,c=Dick)
     HORN'S SUBJECT MATTER CATEGORIES             (I,p-,c=Horn,nd)
     ARNOPOULOS' SYSTEM UNIFICATION MODEL (SUM)      (I,p-,c=ISKO)
     TENENBAUM/EIT MANUFACTURING ONTOLOGY            (i-,p-,nr,nd)
     INTELL. TEXT PROCESSING INC.SEMANTIC LEXICON    (i-,p-,nr,nd)

[Hundreds of special-area thesauri are listed in Int. Classification
 and Indexing Bibliography (ICIB) published by ISKO, the Int. Soc. for
 Knowledge Organization, Woogstr. 36a, D-6000 Frankfurt 50, Germany]

SUGGESTED FORMAT: Brief description, Examples, Formalized?,
Abstract hierarchy structure, Maximum depth, Necessary/sufficient?,
Current authorities/enthusiasts, Machine-readable text?, Machine-
usable structure?, Source, FTP site?, Implemented? References. 

	I won't release the catalogue until at least half the
entries are written up.
========================================================================

______________________________________________________________________
 
 
>From chalmers@violet.berkeley.edu Sat Feb 12 00:07:21 1994
Received: from violet.Berkeley.EDU by odin.diku.dk with SMTP id AA06708
  (5.65c8/IDA-1.4.4 for ); Sat, 12 Feb 1994 17:07:24 +0100
Received: from localhost by violet.berkeley.edu (8.6.4/1.33r)
	id IAA21770; Sat, 12 Feb 1994 08:07:21 -0800
Date: Sat, 12 Feb 1994 08:07:21 -0800
From: chalmers@violet.berkeley.edu (John H. Chalmers Jr.)
Message-Id: <199402121607.IAA21770@violet.berkeley.edu>
To: conlang@diku.dk
Subject: Laadan

Steve: Thanks for posting the La'adan class material.
If you don't have the latest grammar, you might find the
following of interest.
	I just received an answer from Suzette Haden Elgin re La'adan.
The La'adan grammar and dictionary are available for $10.00 + 
$2.00 shipping and handling. For $6.00, there is a 60-minute
cassette tape to accompany the grammar. She is the speaker.
She has also made a 60-minute VHS videotape about La'adan and
her books. It is $22.00 + $3.00 s/h. Order from the 
Ozark Center for Language Studies, PO Box 1137, Huntsville, AR
72740-1137 USA. (501) 559-2273.
	SHE says that the La'adan Network has not communicated with her
for 2 years and she does not know its current status.
	Apparently all the money from the La'adan grammar goes to SF3,
that is, the Society for the Furtherance and Study of Fantasy
and Science Fiction, Inc., Box 1624, Madison, WI 53701-1624. From the 
xeroxes enclosed, "A First Dictionary and Grammar of La'adan," 2nd 
edition, appears to be a spiral-bound book of 157+ pages. This format
is quite different from the first book that was available.
	SHE also mentions that there are two La'adan Network Bulletins 
and that she will make them available if she can get permission. 
	It appears that the slow pace of La'adan and poor communications 
have disappointed SHE, since she contrasts her struggling women's project 
to the success of the  "hypermasculine combat-focussed Klingon language."
Now that all three of the Native Tongue books are back in print, there
may be more interest in the language. Having the first two go out of print
cannot have helped matters.

	I've uploaded this message from a local gateway in the San Diego
area. I have not cancelled or moved my main address from Berkeley;
reply to the List or to chalmers@violet.berkeley.edu.

-- John
	 

______________________________________________________________________
 
 
>From ram@eskimo.com Sat Feb 12 19:08:44 1994
Date: Sun, 13 Feb 1994 03:08:44 -0800
From: ram@eskimo.com (Rick Morneau)
Message-Id: <199402131108.AA12632@eskimo.com>
To: conlang@diku.dk, ram@eskimo.com
Subject: Re: Intro/AI/compound-nouns/concept-systems


Fritz Lehmann writes:
>
>     Because I'm so new I don't even know what language they are
> talking about -- presumably one being devised by Rick Morneau. 
> (It sounds very interesting and I'd like to know where I could see
> a description.)
>

There is no language (yet).  I'm just studying how languages work,
organizing what I'm learning, and trying to put it all in writing.
If I ever DO design a language, my goals will be ease-of-learning
(regardless of the background of the student), genetic neutrality,
computer tractability, and linguistic realism, simplicity and
naturalness.  My ultimate goal is and always has been to develop a
source/target-independent interlingua for use in machine translation.

Regards,

Rick


*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
=*   Rick Morneau  ram@eskimo.com   "All kings is rapscallions"  *=
*=   Denizen of Idaho, USA                   --Mark Twain        =*
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=

______________________________________________________________________
 
 
>From lojbab@access.digex.net Mon Feb 14 05:47:42 1994
From: Logical Language Group 
Message-Id: <199402141547.AA05701@access2.digex.net>
Subject: Re: Intro/AI/compound-nouns/concept-systems
To: conlang@diku.dk
Date: Mon, 14 Feb 1994 10:47:42 -0500 (EST)

Fritz Lehmann puts in his list:

>      LOGLAN/LOJBAN SEMANTIC PRIMITIVE WORD ROOTS     (I,p-)

I think some comment, or even protest, is needed here.  Despite early
claims by James Cooke Brown, it has long been clear (even to him) that
the Loglan and Lojban gismu lists aren't "semantic primitives" in any sense.
Rather, they are an attempt to blanket semantic space with a sufficient
number of words, balancing various priorities such as terseness, primitiveness,
and ease of use in constructing complexes (>tanru< in Lojban).

To take an obvious example:  Lojban has words for "female", "parent", and
"mother".  If "female" and "parent" are taken as semantic primitives, then
"mother" cannot be semantically primitive: but in Lojban all three words are
considered roots.  (Furthermore, the words >rirni< 'parent' and >mamta< 'mother'
refer to psychological-social parental relationships: there are separate forms
for biological relations.)

Within the Lojban community (I don't speak for the Loglan Institute), the
primary emphasis has been for some time the usefulness of a new root in
the construction of >tanru<, and a vocal minority has proposed the deletion
of various existing roots on the grounds that their usefulness for that
purpose is minimal.  (These efforts have been resisted to date.)

-- 
John Cowan		sharing account  for now
		e'osai ko sarji la lojban.


______________________________________________________________________
 
 
>From shoulson@ctr.columbia.edu Mon Feb 14 06:00:53 1994
From: shoulson@ctr.columbia.edu (Mark E. Shoulson)
Received: from localhost (shoulson@localhost) by startide.ctr.columbia.edu (8.6.5/8.6.4.788743) id LAA04391; Mon, 14 Feb 1994 11:00:53 -0500
Date: Mon, 14 Feb 1994 11:00:53 -0500
Message-Id: <199402141600.LAA04391@startide.ctr.columbia.edu>
To: conlang@diku.dk
Subject: Laadan

>Date: Sat, 12 Feb 1994 17:21:32 +0100
>From: chalmers@violet.berkeley.edu (John H. Chalmers Jr.)

>She has also made a VHS 60 minute videotape about La'adan and
>her books. It is $22.00 + $3.00 s/h. Order from the 
>Ozark Center for Language Studies, PO Box 1137, Huntsville, AR
>72740-1137 USA. (501) 559-2273.

Tapes?  Videos?  Cool!!  I must write to them.

>   Apparently all the money from the La'adan grammar goes to SF3,
>that is, the Society for the Furtherance and Study of Fantasy
>and Science Fiction, Inc., Box 1624, Madison, WI 53701-1624. From the 
>xeroxes enclosed, "A First Dictionary and Grammar of La'adan," 2nd 
>edition, appears to be a spiral-bound book of 157+ pages. This format
>is quite different from the first book that was available.

I'll check which one I have; I recall it's also a spiral-bound puppy.

>   It appears that the slow pace of La'adan and poor communications 
>has disappointed SHE since she contrasts her struggling women's project 
>to the success of the  "hypermasculine combat-focussed Klingon language."

Oh, man, does that bug me.  Klingon's success probably has little to
nothing to do with the sexual equipment of its inventor; I suspect the fact
that Star Trek has a *vast* fan network in place already and enormous
name-recognition among non-aficionados *might* have had something to do
with the press and attention it gets, eh?  The slightest bit larger than
SHE's fan club, I fear.  I'm struggling hard to retain the respect I have
for SHE; she's nobody to sneeze at, and so far I'm winning.  But this "new
sexism" as I'm coming to call it really gets up my nose.

>-- John


~mark

______________________________________________________________________
 
 
>From lojbab@access.digex.net Mon Feb 14 06:21:18 1994
From: Logical Language Group 
Message-Id: <199402141621.AA07028@access2.digex.net>
Subject: Re: Intro/AI/compound-nouns/concept-systems
To: conlang@diku.dk
Date: Mon, 14 Feb 1994 11:21:18 -0500 (EST)
Cc: fritz@rodin.wustl.edu, lojbab@access.digex.net (Logical Language Group)

Fritz Lehmann writes:

> I too am bothered by noun compounds in which the
> relation between the primitive noun elements is "assumed".  A
> "mnemonic" composition isn't good enough, in my books.  A
> combination of "alligator" and "shoes" should differ from the one
> of "horse" and "shoes"  -- and "olive"+"oil" should differ from
> "baby"+"oil".  What's missing is usually the relevant case-like-
> relation (like USED-FOR or PART-OF or AGENT-OF).  It seems to me
> that there are few enough of these basic relations that they could
> be very compactly encoded (say with vowel-diphthongs).

Your theory falls apart in the face of my facts :-)

Ivan Derzhanski, Lojbanist and semanticist, has identified (using natural
languages) some 34 such "case-like relations", with no claim of exhaustiveness.
His languages (all of AN type) include:

Abazin, Chinese, English, Ewe, Finnish, Georgian, Guarani, Hopi, Hungarian,
Imbabura Quechua, Karaitic, Kazakh, Korean, Mongolian, Qabardian, Quechua,
Russian, Sanskrit, Swedish, Turkish, Udmurt.

Using your style, his "case-like relations" are roughly:

OBJECT-OF-ACTION, TYPE-OF-SET-ELEMENTS, SET-OF-ELEMENTS-OF-TYPE, COMPONENT-OF,
SPECIFIED-BY-DETAIL, SPECIES-OF, POSSESSED-BY, INHABITED-BY, CAUSE-OF, EFFECT-OF,
INSTRUMENT-WITH-PURPOSE, OBJECT-OF-INSTRUMENT-OF-PURPOSE, PRODUCT-OF, SOURCE-OF,
MADE-FROM, TYPICALLY-MEASURED-BY, ANALOGOUS-TO, ANALOGOUS-PART-OF, PRODUCER-OF,
WITH-PROPERTIES-OF, RESEMBLING, CHARACTERISTICALLY-LOCATED-AT, SOLD-AT,
APPLIED-TO, USED-AS-IMPLEMENT-DURING, PROTECTING-AGAINST,
CHARACTERISTICALLY-CONTAINING, CHARACTERISTICALLY-CO-OCCURRING-WITH,
SUPPLYING-ENERGY-TO, SPECIFYING-TEMPORAL-FRAME-OF, AND, OR,
AND-AS-ALSO-TYPIFIED-BY, AND-ALSO-IS-IMPORTANT-PART-OF.

Here are some examples corresponding to the relations above respectively
(English examples have been used where possible; otherwise, see the Notes):

pencil sharpener, row house, cell block, chicken feather,
pendulum clock, pine tree, lion['s] mane, family land, tear gas, water mark,
lamp shade, pepper stone (1), bear meat, coal mine,
stone lion, land piece (2), space ship, tooth root, silk worm,
soldier ant, cherry bomb, field mouse, book bar (3),
tooth paste, Ping-Pong ball, rain hat, 
milk bottle, morning fog,
electric lamp, milk tooth, home town, day night (4),
worm beetle (5), land air (6).

Notes:

(1) Sanskrit, 'stone for grinding pepper'
(2) Turkish, 'piece of land'
(3) Chinese, 'bookstore/library'
(4) Sanskrit, '24-hour day'
(5) Mongolian, 'insect'
(6) Finnish, 'world'

> If oil is
> BOR, OLIVE is BAK, and baby is BIK, I think BORaiBAK and BORiuBIK
> are not much more burdensome on the speaker than BORBAK and
> BORBIK, but vastly better if "ai" is "DERIVED-FROM" and "iu" is
> something like "USED-FOR".

I think the above list clearly demonstrates that the possible relationships
between compounds approach, if not equal, in complexity the possibilities
to be compounded.  So we need to use our general list of roots to indicate
the types of compounds -- which clearly leads to an infinite regress, as
there are now a large number of second-order ways in which the linkage
particles apply to the roots!  And so on, and so on, >in saecula saeculorum<.

Lojban cuts this process off at the start by simply saying that the
relationships between compounds are not defined, and if one wishes to be
more precise, one must be more explicit.

	lo cimni ka satci cu se jdima lo cimni ni valsi
	The price of infinite precision is infinite verbosity.


> Case-relations are not just separate
> entities -- they occur in a hierarchy of such relations.  (See
> A.F. Parker-Rhodes "Inferential Semantics", Harvester/Humanities
> Pres, 1978, Chapter XI.)  Some case-relationss are more general
> than (i.e. they subsume) other more specific ones, so the case-
> mark in compound nouns ned not refer to a too-specific case-
> relation.

But by saying this, you kick the ball through my goal-posts!  There are
compounds, as Ivan's treatment shows, which are discriminated from each
other by just such precision.  So either you admit polysemy in compounds,
or you are forced into more and more "specific" case-relation markers
to express increasing subtleties of concept.

Disclaimer:  Ivan is not responsible for the (mis)use I have made of his
data, collected for an unpublished paper and used by permission.

-- 
John Cowan		sharing account  for now
		e'osai ko sarji la lojban.

______________________________________________________________________
 
 
>From ucleaar@ucl.ac.uk Mon Feb 14 19:31:42 1994
From: ucleaar 
Message-Id: <116751.9402141931@link-1.ts.bcc.ac.uk>
To: conlang@diku.dk
Subject: Re: Laadan
Date: Mon, 14 Feb 94 19:31:42 +0000


> >   It appears that the slow pace of La'adan and poor communications 
> >has disappointed SHE since she contrasts her struggling women's project 
> >to the success of the  "hypermasculine combat-focussed Klingon language."
> 
> Oh, man, does that bug me.  Klingon's success probably has little to
> nothing to do with the sexual equipment of its inventor; I suspect the fact
> that Star Trek has a *vast* fan network in place already and enormous
> name-recognition among non-aficionados *might* have had something to do
> with the press and attention it gets, eh?  The slightest bit larger than
> SHE's fan club, I fear.  I'm struggling hard to retain the respect I have
> for SHE; she's nobody to sneeze at, and so far I'm winning.  But this "new
> sexism" as I'm coming to call it really gets up my nose.

Surely part of the appeal of Klingon is its iconic & ironic brutalism.
Just as stories about peace & love tend to be less exciting than
stories about struggle & strife, so perhaps the discordance &
rudeness of Klingon leads to it being preferred over a more
civil language.

John doesn't quote much context from Elgin's letter, but she
seems to be making a point feminists often, with reason, make,
e.g. "Why do men prefer watching films about war rather than
about babies" etc. etc. Fair enough, no?

----
And

______________________________________________________________________
 
 
>From chalmers@violet.berkeley.edu Mon Feb 14 12:48:56 1994
Date: Mon, 14 Feb 1994 20:48:56 -0800
From: chalmers@violet.berkeley.edu (John H. Chalmers Jr.)
Message-Id: <199402150448.UAA11171@violet.berkeley.edu>
To: conlang@diku.dk
Subject: L'aadan

If anything SHE's comments are more bitter in context.
Let me quote the entire paragraph:

	" I do know there are small groups working with L'aadan
	in this country and in Canada; I hear about them from
	time to time. I know that the grammar is used in a
	number of university courses, for various purposes. I
	still sell small quantities of L'aadan products, and get
	an occasional query. However, there is nothing like the
	swell of support that was available for the
	hypermasculine combat-focussed Klingon language, from
	Star Trek  -- no big commercial publisher for the book
	and the tape, no media push, no best seller status, no
	summer camp, no university affiliation with a scholarly
	journal. I'm sorry about that, but there you are; it's
	not the first time (she says understatedly) that a
	women's project has struggled just to survive while a
	comparable men's project has thrived. I tell you this
	not to complain -- I have always said that L'aadan had
	to stand or fall on its own, like any other language--
	but you deserve the most honest answer I'm able to
	provide."

	So, I would urge Stephen and others to let her know that somebody
out there is still interested in her creation, even if they are men.
Unfortunately, I don't think Native Tongue III is going to bring many
new recruits into the fold as there is not enough about L'aadan in it.
	To address And's point, I think it is not just the attraction
and excitement of violence, but rather that in Western culture,
more men than women are interested in intellectual matters. I speak
from some experience; I've seldom, if ever, found a woman very interested
in intellectual fields other than her own profession. I've given
up trying to get dates for contemporary music concerts, for example,
though I know a number of women composers and performers of such music
(all happily married, alas). I don't know the reason, but it must be
due to early socialization and schooling.

--John

______________________________________________________________________
 
 
>From jrk@sys.uea.ac.uk Tue Feb 15 09:44:35 1994
Date: Tue, 15 Feb 94 09:44:35 GMT
Message-Id: <17295.9402150944@s5.sys.uea.ac.uk>
To: conlang@diku.dk
From: jrk@sys.uea.ac.uk (Richard Kennaway)
Subject: Re: Intro/AI/compound-nouns/concept-systems

John Cowan writes:
>I think some comment, or even protest, is needed here.  Despite early
>claims by James Cooke Brown, it has long been clear (even to him) that
>the Loglan and Lojban gismu lists aren't "semantic primitives" in any sense.
>Rather, they are an attempt to blanket semantic space with a sufficient
>number of words, balancing various priorities such as terseness, primitiveness,
>and ease of use in constructing complexes (>tanru< in Lojban).

I think some comment, or even protest, is needed here.  I don't speak for
the Loglan Institute, but the sly put-down of JCB above is quite
unwarranted.  The "early claims" must be early indeed, for I have never
seen them in my contact with Loglan.  It is quite clear from, e.g. the 1975
edition of Loglan 1, that the "primitive" predicates are so called only
because they are syntactically primitive, and that their choice was driven
primarily by the need "to blanket semantic space with a sufficient number
of words", primitive and complex.

--                       ____
Richard Kennaway       __\_ /    School of Information Systems
jrk@sys.uea.ac.uk      \  X/     University of East Anglia
                        \/       Norwich NR4 7TJ, U.K.



______________________________________________________________________
 
 
>From lojbab@access.digex.net Tue Feb 15 05:39:59 1994
From: Logical Language Group 
Message-Id: <199402151539.AA20127@access2.digex.net>
Subject: Re: Intro/AI/compound-nouns/concept-systems
To: conlang@diku.dk
Date: Tue, 15 Feb 1994 10:39:59 -0500 (EST)

Richard Kennaway writes:

> I think some comment, or even protest, is needed here.  I don't speak for
> the Loglan Institute, but the sly put-down of JCB above is quite
> unwarranted.

No put-down of JCB was intended.  However, he is (to say the least) a man
well-known for not changing his mind easily: although Loglan '60 is quite
different from Loglan '91, a great many things persist absolutely unchanged
despite well-established difficulties discovered by the rest of the Loglan
community (and I mean things found before the Loglan/Lojban split).

As the joke has it, "I am firm, you are stubborn, he is a pigheaded fool."

> The "early claims" must be early indeed, for I have never
> seen them in my contact with Loglan.  It is quite clear from, e.g. the 1975
> edition of Loglan 1, that the "primitive" predicates are so called only
> because they are syntactically primitive, and that their choice was driven
> primarily by the need "to blanket semantic space with a sufficient number
> of words", primitive and complex.

Correct. I did say "has long been clear".  However, in >Loglan 2< (1970)
we are told that the aboriginal source for the prim/gismu list was
"all those concepts which had been found to be universal in human languages.
There are some 200 of these: words like 'man', 'woman', 'eat', 'vomit', and
other rather earthy words."  The impersonal passive doesn't tell us where
these words came from, but (despite the large figure of 200) they seem
to have been "semantic primitives" in JCB's mind: he also calls them
"anthropologically universal" and "universal in human languages".
(L2 ch. 3, repr. TL1:5/305-06).

Of course, although L2 was first published in 1970, and TL1:5 in 1977,
JCB is speaking here of work done before the Scientific American publication
of 1960, so "early indeed" is a suitable characterization.

-- 
John Cowan		sharing account  for now
		e'osai ko sarji la lojban.

______________________________________________________________________
 
 
>From lojbab@access.digex.net Tue Feb 15 13:03:24 1994
Date: Tue, 15 Feb 1994 18:03:24 -0500
From: Logical Language Group 
Message-Id: <199402152303.AA20695@access3.digex.net>
To: conlang@diku.dk
Subject: Re: Intro/AI/compound-nouns/concept-systems
Cc: lojbab@access.digex.net
X-Charset: LATIN1
X-Char-Esc: 29

>I think some comment, or even protest, is needed here.  I don't speak
>for the Loglan Institute, but the sly put-down of JCB above is quite
>unwarranted.

I'm sorry this came across as a put-down of JCB.  Maybe those of us at
LLG are suspect in this area, but I am fairly sure John did not intend a
put-down.  It was a simple statement of fact.  JCB has made such claims,
both in print, and to me personally, in debating issues regarding what
words should be in the prim/gismu list.  And it is equally clear that
his thinking has evolved, though contrary to John, I think that JCB
still DOES consider semantic primitiveness a quality of his set of
'composite primitives'.

I actually think the evolution in JCB's thinking has gone the other way
- he originally was more open to the idea that the primitives were more
syntactic than semantic, but opposition, including ours in the community
that eventually became LLG (pc and myself being major advocates against
interpreting primitives as semantic primitives), has hardened his
position and closed his mind. 

Recent Documentation:

Loglan I 4th edition (1989), p411
"The composite primitives of Loglan are the semantically universal
predicates of human experience."

I can't imagine a much clearer statement than that one.  He then goes on
to state that adding new C-prims is rare indeed, using the late-added
distinction in Loglan between "setci" (set) and "klesi" (class) as an
example:

"The distinction between sets and classes is semantically fundamental."

There is thus no doubt that JCB considers his composite-primitive set to
be a set of semantic primitives.

The *only* contention that counters this general observation is that TLI
Loglan no longer has clear boundaries between the composite-primitives -
the ones made from 8 languages, and the borrowings and invented words
that have the same morphology, but for which JCB would make no claim of
semantic primitiveness.

As for more complete discussion than you, Richard, may have available, I
will cite a longish essay on the subject in The Loglanist 3/3, pg
219-21.  I will give only some extracted quotes - JCB's thought
processes are made extremely clear in that essay.  I will try to outline
his reasoning as I go.

He was responding to a proposal to add a lot of primitives for
zoological hierarchies, which proposal also argued that some primitives
should be compounds based on those primitives; e.g. man =
male-adult-human; mother = female-parent.  JCB was vociferously opposed.
He accepts the usefulness of the zoological terminology "not as a
replacement for our vocabulary of 'people' primitives, but as a
scientific adjunct to it, perhaps with its own set of primitives".

"...  For there reappears in RLW's argument ... a pernicious idea ...
one that apparently dies very hard among loglanists:  namely, that
'primitiveness' is whatever produces definitional efficiency, i.e.
shared parsimony.  There are many criteria of 'primitiveness' and this is
one of them, the logician's.  Another, perhaps more appropriate to an
empirical science, is 'whatever carves reality at the joints' ...
[comment that this phrase is a quote from some unrecalled academic
source].  Of course, to carve reality at the joints one must first know
where the joints lie, that is, do science ...  And it should, therefore
be our guiding principle in incorporating their [the sciences']
vocabularies into Loglan.  ..."  It seems clear that these two
categories of primitiveness are those associated with 'dividing up of
semantic space', and JCB is saying they are important but are NOT the
proper basis for the language.  "But there is a third notion of
primitiveness that is even more directly related to our concerns as
language-builders, and that is what I've called 'ethological
primitiveness'..."  He then goes on to argue that by logical or
scientific primitiveness, "girl" is not primitive, but is
"female-child".

But this kind of breaking into "components ... are not built into the
neurological equipment of the species ...  On the other hand, *whole*
boys, girls, men, women, fathers, mothers, snakes and horses *are* built
into that neurological substrate of human perception...or so I have
hypothesized, else it would not be the case that nearly all human
tongues, despite their other diversity, hail objects like these, and a
few hundred others, with etymologically simple and mutually unrelated
terms."

He then goes on to compare this hypothesis of his with the
fundamentalness of the 'deep grammar' hypothesis of Chomsky, both
"supported by the weight of the contemporary evolutionary evidence" -
like Chomsky did, he argues not from direct research on the subject, but
"there hardly seems to be any other way in which these massive
uniformities could have developed".

[lojbab:  I didn't buy this argument then, and still don't.  There
are plausible explanations that do not require any particular set of
primitive concepts to be built into our genes.  But JCB clearly believes
that there ARE universal semantic primitives, and designed his language
on that basis.]

The major reason why people may come to believe that JCB feels
otherwise is the way he came to select his set of primitives, which is
to some extent fundamentally inconsistent with his theories and
pronouncements.  I won't quote the wordy and not-very-to-the-point
passages from Loglan II, which were reprinted in TL volume 1, on the
selection of the set of primitives, but it involved several steps.

JCB first used sets of semantic primitives universal to 'all' languages
based on research that had been done in that area (he has never given
citations for such research).  He then used Ogden's Basic English as an
additional source of primitives - perhaps not realizing that Ogden WAS
looking for syntactic rather than semantic primitives.  But then he
justifies using Basic English because of Ogden's demonstrated success in
"rendering ... almost any English text", which sounds like a
semantic-space coverage argument to me.  Thirdly, he used Eaton's
semantic frequency list, making sure that the first 600 or so words were
prims, even if they were not in the list for other reasons.  Then, for
the words up to frequency 1000, he required that they be expressible as
either primitive or at most 2-parts, and from 1000-5000 rank, at most 3
parts.  He then added some prims because they were "metaphorically
productive", in making a lot of other compounds shorter, even if they
were not particularly primitive by any other rationale.

Now in the 30-odd years since then, he has added perhaps 100-200 words
to his primitive list.  Many of these were not composite primitives,
made from 8 languages, but rather borrowings and scientific words.  The
largest sets added were 1) culture words 2) miscellaneous concrete terms
for everyday use like foods, games (football and billiards), added as
"International prims" in 1972-5, 3) a group of words of the
'ethnological primitive' variety - specifically words pertaining to
human bodily function that are commonly taboo and hence were not listed
in research works earlier in this century (this in response to the
criticisms of Arnold Zwicky in his review of Loglan I in 1969 in the
journal 'Language'), 4) metric words, and 5) a bunch of words added as a
result of Faith Rich's completing the analysis of Eaton's semantic
frequency work - these are a combination of words of the 'semantically
productive' variety mentioned above, and words that Faith couldn't come
up with good metaphors for.

In reinventing the language as Lojban, we started with JCB's list as a
given, but then used the metaphor of 'covering semantic space' as a
rationale for our own extensions to the list.  As I recall, that
metaphor first was presented by a local-area Loglanist who has been a
long time lurker following the project.  The idea of blanketing semantic
space with small contiguous 'tiles' representing words was his personal
concept of how Loglan semantics, with unitary word meanings, should be.
We took up his metaphor and talked it up a lot in the early years of
redeveloping Lojban, but JCB never recognized, adopted, or otherwise
used words like 'covering semantic space' in any of his writings that I
can recall, and that usage at the first DC LogFest was the first time I
ever heard that phrase used.  (I've since seen it used a lot here on
conlang, of course).

lojbab
----
lojbab                                              lojbab@access.digex.net
Bob LeChevalier, President, The Logical Language Group, Inc.
2904 Beau Lane, Fairfax VA 22031-1303 USA                        703-385-0273


______________________________________________________________________
 

Date: Wed, 16 Feb 94 10:41:04 CST
From: fritz@rodin.wustl.edu (Fritz Lehmann)
Message-Id: <9402161641.AA09516@rodin.wustl.edu>
To: conlang@diku.dk
Subject: combining nouns via case; primitives
X-Charset: LATIN1
X-Char-Esc: 29


Dear ConLangers,

     I got two criticisms replying to my introductory note.

     John Cowan said:

>Ivan Derzhanski, Lojbanist and semanticist, has identified (using natural
>languages) some 34 such "case-like relations", with no claim of exhaustiveness.
>His languages (all of AN type) include:
[languages omitted]
>Using your style, his "case-like relations" are roughly:
>OBJECT-OF-ACTION, TYPE-OF-SET-ELEMENTS, SET-OF-ELEMENTS-OF-TYPE, COMPONENT-OF,
>SPECIFIED-BY-DETAIL, SPECIES-OF, POSSESSED-BY, INHABITED-BY, CAUSE-OF, EFFECT-OF,
>INSTRUMENT-WITH-PURPOSE, OBJECT-OF-INSTRUMENT-OF-PURPOSE, PRODUCT-OF, SOURCE-OF,
>MADE-FROM, TYPICALLY-MEASURED-BY, ANALOGOUS-TO, ANALOGOUS-PART-OF, PRODUCER-OF,
>WITH-PROPERTIES-OF, RESEMBLING, CHARACTERISTICALLY-LOCATED-AT, SOLD-AT,
>APPLIED-TO, USED-AS-IMPLEMENT-DURING, PROTECTING-AGAINST,
>CHARACTERISTICALLY-CONTAINING, CHARACTERISTICALLY-CO-OCCURRING-WITH,
>SUPPLYING-ENERGY-TO, SPECIFYING-TEMPORAL-FRAME-OF, AND, OR,
>AND-AS-ALSO-TYPIFIED-BY, AND-ALSO-IS-IMPORTANT-PART-OF.
[examples omitted]

     This is a very interesting list and I would like to get Ivan
Derzhanski's original research on this.  Can anyone send this to me?  Or
his address?  I'll probably add him to the master list of concept-
systems.

     I believe that careful development of deep-case-like relations
should be a first priority of a language creator.  My list of concept-
systems included several case systems.  These include Fillmore's deep
cases (and derivatives), the list of about 28 cases of Karen Sparck
Jones and Branimir Boguraev, Somers' "case grid", the lattice of cases
in Parker-Rhodes' Inferential Semantics, Martha Evens & Terry Nutter's
hierarchy of relations, and Judy Dick's dissertation on case-relations
in legal reasoning.  No doubt there are others.  Is there a bibliography
or comparative work discussing semantic case systems?

>> If oil is
>> BOR, OLIVE is BAK, and baby is BIK, I think BORaiBAK and BORiuBIK
>> are not much more burdensome on the speaker than BORBAK and
>> BORBIK, but vastly better if "ai" is "DERIVED-FROM" and "iu" is
>> something like "USED-FOR".

>   I think the above list clearly demonstrates that the possible relationships
>between compounds approach, if not equal, in complexity the possibilities
>to be compounded.  So we need to use our general list of roots to indicate
>the types of compounds -- which clearly leads to an infinite regress, as
>there are now a large number of second-order ways in which the linkage
>particles apply to the roots!  And so on, and so on, >in saecula saeculorum<.

     I mentioned two-letter combinations (maybe just vowels).  These
would cover the necessary relations: 34 or 28 is not too many, and
seven vowels yield 49 ordered pairs.  Also, rarer relations might have
longer combinations, so there could be more.
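[Editor's sketch of that arithmetic, in Python.  Everything here is
illustrative, taken from the thread's own hypothetical examples: the
seven-symbol vowel inventory, the roots BOR/BAK/BIK, and the infixes
"ai" (DERIVED-FROM) and "iu" (USED-FOR) are not a real lexicon.]

```python
from itertools import product

# A hypothetical seven-symbol vowel inventory gives 7 * 7 = 49
# ordered two-vowel infixes -- enough to mark 34 (or 28) relations.
VOWELS = ["a", "e", "i", "o", "u", "w", "y"]
INFIXES = ["".join(p) for p in product(VOWELS, repeat=2)]
assert len(INFIXES) == 49

# Toy relation table using the illustrative infixes from the thread.
RELATIONS = {"ai": "DERIVED-FROM", "iu": "USED-FOR"}

def compound(head, infix, modifier):
    """Join two roots with a relation-marking vowel infix."""
    return head + infix + modifier

print(compound("BOR", "ai", "BAK"))  # BORaiBAK: oil DERIVED-FROM olive
print(compound("BOR", "iu", "BIK"))  # BORiuBIK: oil USED-FOR baby
```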

>> Case-relations are not just separate
>> entities -- they occur in a hierarchy of such relations.  (See
>> A.F. Parker-Rhodes "Inferential Semantics", Harvester/Humanities
>> Press, 1978, Chapter XI.)  Some case-relations are more general
>> than (i.e. they subsume) other more specific ones, so the case-
>> mark in compound nouns need not refer to a too-specific case-
>> relation.
>But by saying this, you kick the ball through my goal-posts!  There are
>compounds, as Ivan's treatment shows, which are discriminated from each
>other by just such precision.  So either you admit polysemy in compounds,
>or you are forced into more and more "specific" case-relation markers
>to express increasing subtleties of concept.

     No.  You can legitimately choose any level of generality
within the poset of all relations -- you're not forced to admit
anything.  Loglan/Lojban chose the very top of the poset:
juxtapositions mean SOME-RELATION which is the least informative
choice possible.  A diligent scholar could discriminate more than
50 case-like relations (although they will cease to be case-like at
about that point, e.g. relations like ALONG-THE-FORMER-PATH-OF-THE-
SHADOW-OF will begin to occur).  I recommend using a more specific
level than SOME-RELATION so as to eliminate most of the guesswork
in recognizing the meaning of a compound.  Generality is not
"polysemy"; outside of logic and mathematics every word is
approximative in that a more specific description is possible.
Learning 10-40 relation-morphemes would be a burden to start with,
but thereafter the payoff would be big.
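
     Fritz's subsumption point can be sketched concretely (a toy
hierarchy with illustrative relation names only, not any actual
conlang's inventory; SOME-RELATION sits at the top, as in the
Loglan/Lojban convention he describes):

```python
# Toy subsumption poset of case-like relations.  Relation names are
# illustrative (drawn from examples in this thread), not a fixed list.
subsumes = {
    "DERIVED-FROM": "SOME-RELATION",
    "USED-FOR": "SOME-RELATION",
    "COMPONENT-OF": "SOME-RELATION",
    "PRODUCED-BY": "DERIVED-FROM",
}

def generalize(rel: str) -> str:
    """One step up the poset: the immediately more general relation."""
    return subsumes.get(rel, rel)   # the top element subsumes itself

print(generalize("PRODUCED-BY"))   # DERIVED-FROM
print(generalize("DERIVED-FROM"))  # SOME-RELATION
```

The point being illustrated: a compound's case-mark can legitimately
sit at any level of this hierarchy, so marking need not commit to the
most specific relation available.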

     The other criticism was of my including LOGLAN/LOJBAN SEMANTIC
PRIMITIVE WORD ROOTS on my concept-systems list.  The subsequent
email discussion tells me that no change is needed.  When I say
"semantic primitive" I don't necessarily mean ultimate
philosophical primitives.  I happen to share Leibniz's desire for
ultimate primitives, but although maybe half of the concept systems
have "primitives" from which all other concepts are defined, very
few (maybe Leibniz, Burger, Wierzbicka,  Schank, Skuce and
Tepfenhart) aim for ultimate primitives.  Almost all concept-
systems with definitions have the notion of "more primitive"
though.  The notion of "covering the semantic space" involves a set
of primitive concepts _sufficient_ to say anything.  This is rarely
claimed to be a _necessary_minimal_ set.

                          Yours truly,   Fritz Lehmann
4282 Sandburg, Irvine, CA 92715  714-733-0566  fritz@rodin.wustl.edu
====================================================================



______________________________________________________________________
 
 
>From doug@netcom.com Wed Feb 16 01:24:35 1994
Message-Id: <199402161724.JAA13601@mail.netcom.com>
From: doug@netcom.com (Doug Merritt)
Date: Wed, 16 Feb 1994 09:24:35 PST
In-Reply-To: "Re: Intro/AI/compound-nouns/concept-systems" (Feb 16, 12:19am)
To: conlang@diku.dk
Subject: Re: Intro/AI/compound-nouns/concept-systems
X-Charset: LATIN1
X-Char-Esc: 29

Lojbab quoted:
>Loglan I 4th edition (1989), p411
>"The composite primitives of Loglan are the semantically universal
>predicates of human experience."

That's an unfortunate attitude. If there are semantic universals, we
have an exceedingly ill-defined idea of what they are with the current
state of the art.

One of the best studies on the subject I ever saw was the book from about
10 to 15 years ago "Cross-Cultural Universals of Affective Meaning", in
which both empirical studies and the following statistical analysis were
*very* carefully designed. But even this gave fairly slippery results,
certainly very little to support a claim like JCB appears to be making.

Eventually further studies could be done using e.g. newer NMR brain imaging,
and there's some interesting work being done in neurophysiology and such
that indirectly helps out with the topic. But we're still quite some ways
away from having "semantically universal predicates of human experience."

>shared parsimony.  There are many criteria of 'primitiveness' and this is
>one of them, the logician's.  Another, perhaps more appropriate to an
>empirical science, is 'whatever carves reality at the joints' ...

And in *this* sense there will never be a universal set of primitives,
because in addition to whatever is hardwired by biology, there is a further
set that is based on both culture and experience, and those things are
primitives only in a subset of the human race.
	Doug

______________________________________________________________________
 
 
>From lojbab@access.digex.net Wed Feb 16 08:17:42 1994
From: Logical Language Group 
Message-Id: <199402161817.AA07372@access2.digex.net>
Subject: Re: combining nouns via case; primitives
To: conlang@diku.dk
Date: Wed, 16 Feb 1994 13:17:42 -0500 (EST)
Cc: lojbab@access.digex.net (Logical Language Group), iad@cogsci.ed.ac.uk
X-Charset: LATIN1
X-Char-Esc: 29

Fritz Lehmann writes:

>      This is a very interesting list and I would like to get Ivan
> Derzhanski's original research on this.  Can anyone send this to me?  Or
> his address?  I'll probably add him to the master list of concept-
> systems.

Ivan is .  I can't give you his paper, because it is
unpublished and I have no right to distribute it.  (But I did send him a copy
of my reply, and will cc this message to him as well.  Hi, Ivan.)

>      I believe that careful development of deep-case-like relations
> should be a first priority of a language creator.

I think so too, provided it can be done.  My intention in displaying Ivan's
list was to claim that there is no algorithm for developing such a list;
that it is as arbitrary as the list of roots to be compounded.

>      I mentioned two-letter combinations (maybe just vowels).  This
> would cover the necessary combinations. 34 or 28 is not too many.
> Seven vowels might do it.  Also, rare relations might have longer
> combinations so there could be more.

But Ivan explicitly disclaimed completeness.  Another example:  in Li &
Thompson's >Mandarin Chinese: A Functional Reference Grammar< (1981, 1989:
Univ. of California Press, ISBN 0-520-04286-7 cloth, 0-520-06610-3 pbk),
Section 3.2.2 consists of a substantial list of Chinese nominal compounds
sorted by relation.
(Looking this over, I see that some of the relationships I attributed to Ivan
are really L & T's.  Apologies to all involved.)

I think the summary of this section is worth quoting:

	The above list of 21 types of nominal compounds by no means
	constitutes an exhaustive categorization; one can still
	think of nominal compounds that are not accounted for in
	the above listing.  The important thing to note, though, is
	that the compounding process of linking noun and noun
	together ...  is a productive and creative one.  As we said
	before, the only constraint is a pragmatic one, and that is
	that the context must be appropriate for naming a certain
	object....

>      You can legitimately choose any level of generality
> within the poset of all relations -- you're not forced to admit
> anything.  Loglan/Lojban chose the very top of the poset:
> juxtapositions mean SOME-RELATION which is the least informative
> choice possible.  A diligent scholar could discriminate more than
> 50 case-like relations (although they will cease to be case-like at
> about that point e.g. relations like ALONG-THE-FORMER-PATH-OF-THE
> SHADOW-OF will begin to occur).

So you have a definite criterion for "case-like-ness"?  I don't believe
there is such a criterion, but I would love to see someone exhibit one.
Most people who believe in fixed case lists seem to blow off that question
with Justice Stewart's comment on obscenity:  "I know it when I see it."
In other words, the definition belongs to the informal level of the
(case-)linguistic subculture.  (See Hall.)  Can this definition be "surfaced"
into a formal- or technical-level one?

> I recommend using a more specific
> level than SOME-RELATION so as to eliminate most of the guesswork
> in recognizing the meaning of a compound.

See comment below.

> Generality is not
> "polysemy"; outside of logic and mathematics every word is
> approximative in that a more specific description is possible.

I agree with this statement (at last, he thinks :-)).  However, I suspect that
any short list of cases will contain ones that are themselves polysemous,
and therefore lead to compounds polysemous by any standard.  Without a list
to talk about, however, this suspicion remains empty.

> Learning 10-40 relation-morphemes would be a burden to start with,
> but thereafter the payoff would be big.

Lojban does have related machinery, although not in the specification of
compounds.  Instead, we have an extensible set of case tags which may be
attached to arguments.  As is well known (:-)), every Lojban predicate has
a definite set of arguments whose meaningfulness is asserted by every use
of the predicate:  predicating >klama< 'come,go' of something entails the
existence of a goer, a destination, an origin, a route, and a means.

In addition, however, there is an indefinite (not infinite) set of additional
arguments which can be made explicit by attaching an argument-expression
using a case tag.  There are 60-odd premade case tags, plus 20-odd more that
express spatio-temporal relationships.  However, there is also syntactic
machinery for making any argument-slot of any statable relationship (using
the full grammar of relationships, not just individual words) into a case tag.
(In Lojban jargon: every selbri can be made into a tag by prefixing "fi'o".)

>      The other criticism was of my including LOGLAN/LOJBAN SEMANTIC
> PRIMITIVE WORD ROOTS on my concept-systems list.  The subsequent
> email discussion tells me that no change is needed.  When I say
> "semantic primitive" I don't necessarily mean ultimate
> philosophical primitives.

Okay, given your clarification [which is omitted], I suppose you are
correct.  The Loglan Project tends to be allergic to the phrase "semantic
primitives": our primitives are meant to be syntactically primitive only,
in the sense that all compounds whether open or closed are built up from
them.

> Almost all concept-
> systems with definitions have the notion of "more primitive"
> though.

Insofar as the Lojban gismu list is a set of primitives at all, all members
of it are considered equal in "degree of primitiveness", though not perhaps
in pragmatic usefulness.

> The notion of "covering the semantic space" involves a set
> of primitive concepts _sufficient_ to say anything.  This is rarely
> claimed to be a _necessary_minimal_ set.

Just so.

Lojbanists' view of case theory is that, for our purposes, no sufficiently
compelling version of it exists.  The other wing of the Loglan Project
has a fixed list of 13 case tags, and a supposed assignment of each standard
argument-slot of each (syntactically) primitive predicate to one of the 13,
but the quality of this work has been questioned by people on both sides of
the divide.

-- 
John Cowan		sharing account  for now
		e'osai ko sarji la lojban.

______________________________________________________________________
 
 
>From lojbab@access.digex.net Wed Feb 16 08:33:30 1994
From: Logical Language Group 
Message-Id: <199402161833.AA07823@access2.digex.net>
Subject: Re: Intro/AI/compound-nouns/concept-systems
To: conlang@diku.dk
Date: Wed, 16 Feb 1994 13:33:30 -0500 (EST)
X-Charset: LATIN1
X-Char-Esc: 29

James Cooke Brown wrote:

> >There are many criteria of 'primitiveness' and this is
> >one of them, the logician's.  Another, perhaps more appropriate to an
> >empirical science, is 'whatever carves reality at the joints' ...

Doug Merritt writes:

> And in *this* sense there will never be a universal set of primitives,
> because in addition to whatever is hardwired by biology, there is a further
> set that is based on both culture and experience, and those things are
> primitives only in a subset of the human race.

Ironically, I think this phrase is Whorf's, and if so, JCB is using it in a
non-Whorfian and perhaps anti-Whorfian sense.  The "joints" of which Whorf
spoke were culturally based, but JCB talks of them as if they were hardwired
by The Nature Of Things; i.e. physics and such, not hominoid biology, still less
hominoid culture!

Sidenote:  I will no longer talk of "human rights", but of "hominoid rights".
Hominoid are they born to Hominoidal Mitochondrial Eve.

-- 
John Cowan		sharing account  for now
		e'osai ko sarji la lojban.

______________________________________________________________________
 
 
>From hrick@world.std.com Wed Feb 16 14:51:14 1994
Date: Wed, 16 Feb 1994 19:51:14 -0500
From: hrick@world.std.com (Rick Harrison)
Message-Id: <199402170051.AA24239@world.std.com>
To: conlang@diku.dk
Subject: in defense of natlang-style compounding
X-Charset: LATIN1
X-Char-Esc: 29

 
While "it" might be interesting in an experimental conlang, with regard
to international auxiliary languages I oppose explicitly marking the 
relationships between radicals in a compound word for three reasons: 
"it" is difficult, inefficient and unnecessary.  (Through the rest of 
this message, the pronoun "it" will refer to "the practice of marking 
the relationships between radicals in a compound word.")
 
1. difficult
 
Looking at Derzhanski's list of case-like relationships, I felt that
some of them could be merged into fewer items, while others could be
divided into finer-grained divisions.  Other readers probably had the
same thought but would come up with a different list from mine.  I doubt
that two or more people could ever reach complete agreement on a list of 
relationships, or that any single conlanger could sustain whole-hearted 
support for a given list for any length of time.
 
One person might argue that the relationship in "chicken-feather" is 
`component of,' while someone else might (equally reasonably) say the 
relationship is `inhabited by' or `produced by.'  "It" opens the door 
to endless quibbles and variations. 
 
2. inefficient
 
"It" requires the insertion of extra phonemes into a compound in order
to indicate the relationship.  Thus "it" lengthens words and decreases
efficiency, while providing little or no benefit in return; akin to 
flexion of adjectives to indicate case, or flexion of verbs to indicate 
person and number.
 
3. unnecessary
 
Natlangs that engage in compounding don't do "it."*  In practice, people
of normal and even subnormal intelligence have no trouble understanding
the meaning of "snowman" or "graveyard" or "rowboat."  Compounding is a 
powerful tool for expanding the usefulness of a limited stock of radicals.
I see no reason to cripple this tool by embracing "it."
 
*Rick Morneau posted some French phrases which he called compounds,
 and which he offered as evidence of a natlang doing "it."  I opine that
 they are not such evidence, as a) they are set phrases and not compounds,
 and b) natlang prepositions are so idiomatic and polysemous that they
 could just as well be replaced by grunts or silent hyphens or unmarked
 juxtaposition.
 

______________________________________________________________________
 
 
>From fritz@rodin.wustl.edu Thu Feb 17 00:30:46 1994
Date: Thu, 17 Feb 94 06:30:46 CST
From: fritz@rodin.wustl.edu (Fritz Lehmann)
Message-Id: <9402171230.AA10483@rodin.wustl.edu>
To: conlang@diku.dk
Subject: RE: in defense of natlang-style compounding
X-Charset: LATIN1
X-Char-Esc: 29


hrick@world.std.com (Rick Harrison) said:

>While "it" might be interesting in an experimental conlang, with regard
>to international auxiliary languages I oppose explicitly marking the
>relationships between radicals in a compound word for three reasons:
>"it" is difficult, inefficient and unnecessary.  (Through the rest of
>this message, the pronoun "it" will refer to "the practice of marking
>the relationships between radicals in a compound word.")
> 1. difficult
>Looking at Derzhanski's list of case-like relationships, I felt that
>some of them could be merged into fewer items, while others could be
>divided into finer-grained divisions.  Other readers probably had the
>same thought but would come up with a different list from mine.  I doubt
>that two or more people could ever reach complete agreement on a list of
>relationships, or that any single conlanger could sustain whole-hearted
>support for a given list for any length of time.

     Yes, I specifically said that you can choose the level of generality.  I
suggest doing so.  You have to do the same in choosing your basic vocabulary
and in everything else.  The fact that interested people may disagree on
where the line should be drawn doesn't mean it shouldn't be done at all.

>2. inefficient
>"It" requires the insertion of extra phonemes into a compound in order
>to indicate the relationship.  Thus "it" lengthens words and decreases
>efficiency, while providing little or no benefit in return; akin to
>flexion of adjectives to indicate case, or flexion of verbs to indicate
>person and number.

     All true, except the "no benefit".  If the otherwise semantically opaque
becomes clear, that's benefit.  I'd be interested to know the real cost.

>3. unnecessary
>Natlangs that engage in compounding don't do "it."*  In practice, people
>of normal and even subnormal intelligence have no trouble understanding
>the meaning of "snowman" or "graveyard" or "rowboat."  Compounding is a
>powerful tool for expanding the usefulness of a limited stock of radicals.
>I see no reason to cripple this tool by embracing "it."
>*Rick Morneau posted some French phrases which he called compounds,
>and which he offered as evidence of a natlang doing "it."  I opine that
>they are not such evidence, as a) they are set phrases and not compounds,
>and b) natlang prepositions are so idiomatic and polysemous that they
>could just as well be replaced by grunts or silent hyphens or unmarked
>juxtaposition.

     I feel that b) almost cinches my case.  Maybe the very worst thing about
learning a natural language is learning the idiosyncrasies of nonsensical
case-relations.  Pretty good non-native English-speakers have told me how
hard it is to figure out the correct English preposition.  They want to say
"look to" but it's "look for".  This is no merit.  It is a strong argument
for using a planned language instead of a natural one.

     You say "people have no trouble understanding the meaning."  That's true
for _natives_, steeped in the culture, but not for strangers.  If I visit the
jungle and hear about spirit bananas, square jerks, river lines, thatch
endeavors, crocodile rings, earth mothers, mother earths, snow men, monkey
beer, etc., I will be at a loss.  And, although you don't aim to address the
computer and artificial intelligence aspect, any processes of logical
elimination "obvious" to some native people ("alligator shoes must mean
something other than shoes they wear because alligators don't wear shoes")
will be extremely hard for computers even when they have enormous built-in
"ontologies" and factual knowledge.

                          Yours truly,   Fritz Lehmann
4282 Sandburg Way, Irvine, California 92715, U.S.A.
Tel.: (714)-733-0566  Fax: (714)-733-0506  fritz@rodin.wustl.edu
====================================================================


______________________________________________________________________
 
 
>From lojbab@access.digex.net Thu Feb 17 04:45:07 1994
Date: Thu, 17 Feb 1994 09:45:07 -0500
From: Logical Language Group 
Message-Id: <199402171445.AA03873@access1.digex.net>
To: lojbab@access.digex.net
Subject: Re: combining nouns via case; primitives
Cc: conlang@diku.dk
X-Charset: LATIN1
X-Char-Esc: 29

LL> Lojbanists' view of case theory is that, for our purposes, no sufficiently
LL> compelling version of it exists.  The other wing of the Loglan Project
LL> has a fixed list of 13 case tags, and a supposed assignment of each
LL> standard argument-slot of each (syntactically) primitive predicate to one
LL> of the 13, but the quality of this work has been questioned by people on
LL> both sides of the divide.
LL> 
LL> -- 
LL> John Cowan              sharing account  for now
LL>                 e'osai ko sarji la lojban.  

Amending that slightly.  Our resident logician, John Parks-Clifford, specifi-
cally researched case theory as it stood in 1988 or so when we were doing 
that part of the redesign.  His conclusion was that the leading theorists
were of the opinion that, while there was a possibility for a deep structure
theory of cases, that theory allowed for an effectively open set of cases.

Again, somewhere I should have a biblio entry on his major source.

lojbab
lojbab@access.digex.net

______________________________________________________________________
 
 
>From lojbab@access.digex.net Thu Feb 17 04:39:21 1994
Date: Thu, 17 Feb 1994 09:39:21 -0500
From: Logical Language Group 
Message-Id: <199402171439.AA03634@access1.digex.net>
To: conlang@diku.dk
Subject: Re:  combining nouns via case; primitives
Cc: lojbab@access.digex.net
X-Charset: LATIN1
X-Char-Esc: 29

Re: Fritz's list of concept systems:  if you are interested in the
literature on relations between compounds, of the sort that Ivan did,
I believe I have a couple of files of bibliographic entries on the
subject.  It is a heavily researched field, and I'm sure several of
the other papers and books in the bibliography have variations on such
lists as Ivan's.

I can send it to you personally, or post it to all of conlang if there is 
sufficient interest.

lojbab
lojbab@access.digex.net

______________________________________________________________________
 
 
>From lojbab@access.digex.net Thu Feb 17 05:24:42 1994
From: Logical Language Group 
Message-Id: <199402171524.AA06863@access2.digex.net>
Subject: Re: in defense of natlang-style compounding
To: conlang@diku.dk
Date: Thu, 17 Feb 1994 10:24:42 -0500 (EST)
Cc: lojbab@access.digex.net (Logical Language Group)
X-Charset: LATIN1
X-Char-Esc: 29

Rick Harrison writes:

> In practice, people
> of normal and even subnormal intelligence have no trouble understanding
> the meaning of "snowman" or "graveyard" or "rowboat."

While I agree with most of Rick's post, I must take issue with this sentence.
Your theory, like Fritz Lehmann's, falls apart in the face of my facts.  :-)

While it is true that compounds like "mouse trap", "rain drop", and "milk
tooth" are understood in many languages, there are other compounds that
are given very different interpretations in different languages.
For example, "tea cup" means the same in English and Chinese, but in
Korean and Abazin it means "cup of tea"; it specifies a quantity, not a
physical object.

Similarly, in English and Hungarian, "glass eye" means an eye which is made
from glass, but in Quechua the same compound signifies spectacles, a glass
adjunct to a (flesh) eye.  Remember that all these languages use adjective-
noun order, unlike (say) French or Spanish.

I think English-speakers would need a lot more than "normal [or] even
subnormal intelligence" to figure out Mongolian's compound "worm beetle" =
"insect", or Kazakh's "cup plate" = "crockery".  These compounds are of
the type in which the two words being compounded provide typical examples
of a more inclusive class which is the meaning of the compound itself.

I point, in addition, to the Lojban word for what in English is called a
"river mouth", which is "river anus".  Why?  Because the anus is the
exit point, and a river exits into the sea through its so-called "mouth".
To a (hypothetical) native Lojbani, "river mouth" would probably be
very confusing.

-- 
John Cowan		sharing account  for now
		e'osai ko sarji la lojban.




______________________________________________________________________
 
 
>From ucleaar@ucl.ac.uk Fri Feb 18 19:21:13 1994
From: ucleaar 
Message-Id: <66062.9402181921@link-1.ts.bcc.ac.uk>
To: conlang@diku.dk
Subject: Re: in defense of natlang-style compounding
Date: Fri, 18 Feb 94 19:21:13 +0000
X-Charset: LATIN1
X-Char-Esc: 29


Rick H:
> 2. inefficient
> "It" requires the insertion of extra phonemes into a compound in order
> to indicate the relationship.  Thus "it" lengthens words and decreases
> efficiency, while providing little or no benefit in return; akin to 
> flexion of adjectives to indicate case, or flexion of verbs to indicate 
> person and number.

Such flexion of adjectives & verbs has its uses. With adjectives, it
allows you to identify the adjective's head. With verbs, it helps to
identify the subject (and object if you also have object agreement).
This takes some load off other areas, e.g. inflectional morphology
of nouns, word order, etc. But I agree that the cost outweighs the
gain usually.

I can, however, imagine a language with, say, 30 genders, which
inflects (or adds clitics to) the verb according to the genders 
of its complements. This wd make it easier to recognize which
complement was which (& in fact complements cd often be omitted
if, once you knew the gender, they were recoverable pragmatically).
With 3 types of complement & 30 genders you'd need 90 clitics,
but this wd buy you free word order among the elements of a clause
with a 1 in 15 chance (I think) of ambiguity (assuming each
gender has equal likelihood of occurring).
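
A back-of-envelope check of that estimate (a sketch only, assuming
"ambiguity" means at least two of the three complements drawing the
same gender, uniformly and independently):

```python
from math import perm

genders, slots = 30, 3
# P(all three complements draw distinct genders) under uniform,
# independent choice; ambiguity = at least one gender collision.
p_distinct = perm(genders, slots) / genders**slots
p_ambiguous = 1 - p_distinct
print(f"collision chance: {p_ambiguous:.4f} (about 1 in {1/p_ambiguous:.1f})")
# -> collision chance: 0.0978 (about 1 in 10.2)
```

Under that reading the figure comes out nearer 1 in 10 than 1 in 15,
though the estimate depends entirely on how "ambiguity" is defined.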

----
And

______________________________________________________________________
 
 
>From ucleaar@ucl.ac.uk Fri Feb 18 19:54:38 1994
From: ucleaar 
Message-Id: <48954.9402181954@link-1.ts.bcc.ac.uk>
To: conlang@diku.dk
Subject: Re: combining nouns via case; primitives
Date: Fri, 18 Feb 94 19:54:38 +0000
X-Charset: LATIN1
X-Char-Esc: 29


Lojbab writes:
> Amending that slightly.  Our resident logician, John Parks-Clifford, specifi-
> cally researched case theory as it stood in 1988 or so when we were doing 
> that part of the redesign.  His conclusion was that the leading theorists
> were of the opinion that, while there was a possibility for a deep structure
> theory of cases, that theory allowed for an effectively open set of cases.

If predicates have a limited number of arguments, a limited number of
cases will suffice to disambiguate these arguments. In fact the cases
cd be semantically vacuous - just case-1, case-2, case-3. My conlang 
has 3 cases, but an indefinitely large and open class of predicate words.

My cases are like 'nominative' and 'accusative', i.e. with some but
not much semantic content. But the recent discussion on this list
has also included cases like 'partitive', based on the PART-OF
relation. This, I suggest, is a predicate. So for 'finger nail'
instead of having 'finger' marked for partitive case, it shd be
marked as 'accusative' (or whatever) argument of 'part-of'.
    fingernail: part-of(nail,finger)

I feel that this answers the debate on whether there is (or should be)
an open or closed number of 'cases' to link the components of
compounds. The answer is that c.3 cases and an open class of
predicates may be used to link the parts of compounds. A compound
like 'foxhunting', where 'fox' is an argument of 'hunting'
wd be permissible - as hunting(_,fox) - but not one like 
fingernail, where 'finger' and 'nail' aren't predicates.
Fingernail would be something like 'nail-that-is-part-of-finger'.
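
And's scheme - purely positional cases plus an open class of relational
predicates - can be sketched as a small data structure (the names here
are illustrative, not from any real implementation):

```python
from dataclasses import dataclass

@dataclass
class Term:
    """A predicate with arguments marked only by position (case-1,
    case-2, ...); relational notions like PART-OF are just predicates."""
    predicate: str
    args: tuple          # positional 'cases'; '_' marks an unfilled slot

def pretty(t: Term) -> str:
    return f"{t.predicate}({', '.join(t.args)})"

fingernail = Term("part-of", ("nail", "finger"))  # nail that is part of a finger
foxhunting = Term("hunting", ("_", "fox"))        # hunting whose object is a fox

print(pretty(fingernail))  # part-of(nail, finger)
print(pretty(foxhunting))  # hunting(_, fox)
```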

Just to make myself less unclear than I've probably been so far,
I intend the above to be a resolution to the current debate. I
hope that the suggestion would satisfy opponents of imprecise
compounds, given the argument of the advocates of imprecision
that the class of possible relations between components must
be open.

-----
And





______________________________________________________________________
 
 
>From robin@extro.ucc.su.OZ.AU Wed Feb 23 10:40:27 1994
From: Robin F Gaskell 
Received: from localhost (robin@localhost) by extro.ucc.su.OZ.AU (8.6.5/8.6.5) id XAA12858 for conlang@diku.dk; Tue, 22 Feb 1994 23:40:27 +1100
Date: Tue, 22 Feb 1994 23:40:27 +1100
Message-Id: <199402221240.XAA12858@extro.ucc.su.OZ.AU>
To: conlang@diku.dk
Subject: re: Syntax Analysis
X-Charset: LATIN1
X-Char-Esc: 29


#From: robin@extro.ucc.su.oz.au (Robin Gaskell)          17 Feb 94
#To: conlang@diku.dk (Conlang Mail List)
#Subject: re: Syntax Analysis

Hi Conlangers,

        Well, on last count, the response to my piece on GAS - a means
of clarifying the syntax of an uninflected language - was one.
Perhaps the idea did not refer to recent work done by top researchers;
or possibly, it duplicates work already done by AI workers, who are
hot on the trail of the `meaning representation system.'
        As far as I know, it is an original ... and a Conlang first.

         >>  Mark Shoulson comments

> Hmmm.  Interesting method you have there, Robin.  Is it really any better
> than actually using the words instead of symbols?  e.g. instead of
> _. \! @._, maybe   ?
> OK, it certainly takes much more typing, but it's easier to read.  It's
> also language-dependent (whatever language you use for the words), and
> could be less useful for marking sentences for analysis (as in _the .cat
> \!sat @on the .mat_), though really not too badly: any computer worth its
> salt could be perfectly capable of pulling out the information in the <>'s
> and treating it separately.  Though similarly, it could use the symbolic
> method and translate to the  format (in any of several
> conventions/languages) for debugging output by the human operator.

        Thank you, Mark, for your feedback.  Agreed, words are the
readily readable medium.  However, I wished to reach one level of
abstraction beyond words, for at least two reasons: firstly, I
had in mind the Chinese-speaker - learning Glosa - who wanted neither
to learn English, nor to use a specially coined set of Chinese
characters, to follow the mechanics of a piece of Glosa prose; and,
secondly, I preferred the elegance and economy of one symbol for one
syntactic function.
        The reason for wishing to keep the text separate from the
syntactic coding was more related to IT.  While languages with
inflectional grammar do embody some of their grammatical functions
in the body of their texts, their word order still presents some
problems to the programmer/parser-writer, and needs its own analysis.
Why not, I thought, make a clean break with this mixed system, and ...
for the new `syntax-based' Glosa ... keep the text and grammatical
analysis apart.  I imagined that the text, unchanged, would hold the
semantic content of a document, while the syntactic analysis,
extracted from it, would act as a guide to the functional
relationships between the elements of the text.
        As you point out, the  information of your preferred mode
of syntactic analysis can be extracted by computer.  For the
same reason, the  _.   \!   @._  code can also be generated by
computer.  This GAS code would be much more economical of computer
storage; and, for purposes of visual monitoring, it would place
less strain on the eyes of the person doing the monitoring - once
they had learnt the forty-odd codes.
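
The separation Robin describes can be sketched mechanically, using the
sample sentence from Mark's message (an illustration only, not Robin's
actual GAS tooling; it assumes each GAS marker is a run of symbol
characters prefixed to the word it tags):

```python
import re

# Split GAS-annotated text into a plain-text line and a parallel code
# line; '-' stands for an unmarked word.
annotated = "the .cat \\!sat @on the .mat"

codes, words = [], []
for tok in annotated.split():
    m = re.match(r"([^\w]*)(\w+)$", tok)
    codes.append(m.group(1) or "-")
    words.append(m.group(2))

print(" ".join(words))   # the cat sat on the mat
print(" ".join(codes))   # - . \! @ - .
```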

> You should realize, of course, that your method is quite limited to
> languages that resemble English and Glosa.  Not all languages have
> participles, linking verbs, etc., and not all languages can have their
> tense system so simply expressed as
> past/present/continuous/conditional/now.  Some require perfectives, etc.
> Words like "gerund" and "passive" don't have any meaning in many languages,
> but instead other constructs which you don't treat do.  This may not
> matter, but it will limit which languages can use this system.

        What you say is true.  There may be some way of modifying this
`syntax analysis' method to universalise it; however, that was not my
point in intuiting GAS.  The quite legitimate `syntax-based' system of
grammar did not seem to have a suitable method for its analysis: I
simply presented us with one.
        The historical grammars of the "unplanned languages" have been
the subject of endless hours of enjoyable work by scholars, and the
headache of parser-writers.  And now, my philosophy is showing: in
some ``Brave New World'' people will wish to communicate honestly;
quite frankly, I see the Babel Syndrome as having reached the
proportions of a social disease in need of treatment (or is it
`intervention'); if this diagnosis is correct, then the IAL hypothesis
ought to be the subject of serious research.
        At present, the best the world can offer is institutionalised
linguists saying the study of "Interlanguage" reveals Esperanto to be
the only legitimate object of their scholarship ... and we also have
Conlang.  Things are slow, on the Language Reform front.
        Despite this apparent inertia at the organisational level, I
predict that there is a confrontation brewing.  It will be between
establishment figures, who, for reasons of national pride and personal
position, support the present obfuscation, which gives them their
status - and the undercurrent of humanity dissatisfied with the
apparent lack of results in `the world language situation' area.
        A brief explanation of my philosophical outburst, and an
analogy, might be in order.  We Antipodeans still retain some of the
intolerance of the establishment that led our forefathers to seek a "fair
go" in the new colony.  The analogy:   p 132, THE SUN-HERALD (Sydney),
Feb. 20 1994.
               "In April, the World Congress on Cancer in Sydney
        will draw together nearly 40 scientists and clinicians
        from around the world to discuss their research into non-
        traditional cancer treatments.
                It is the first gathering of its kind, but the
        congress has had little support from the medical
        establishment, according to organiser Jennie Burke,
        director of the pathology centre Australian Biologics."

        When a friend of Jennie's went to Mexico for the successful
cure of a cancer that Australian doctors said was inoperable, Jennie
decided it was time for a change.  There is an air of dissatisfaction
with the establishment about.  Maybe it will gust through linguistics
soon, too.

> Also, I note that you don't mark direct objects.  This will limit your
> method's usefulness to coding only those languages which mark case by
> word-order, and that in the same order as English/Glosa.  Coding "a fire I
> saw" would not help us work out whether this was an OSV language (or more
> likely, a SVO language with the O transposed to the front for emphasis), an
> SOV language (very common) or a free-order language with case-markings.
> This may or may not matter, depending on the purpose this code is intended
> for (which I may not understand fully).

        Perhaps I have answered all this, above.  Maybe I did not
clarify the purpose of the GAS code in my earlier post.  The code is
designed to cover the area of languages with syntax-based grammar.  It
could be necessary to build in the marking of direct objects; also,
for different languages, it would probably prove helpful to declare -
in a preface - the dominant order of the parts.
        Glosa being SVO (with optional Passive), and the one syntax-
based language I am working on at present, the GAS system was fairly
dedicated research, applying directly to it.
        English, with half of its grammar syntax-based, and the other
half only minimally inflected, does submit fairly well to the GAS system.
        To apply syntax analysis to Esperanto, on the other hand,
would certainly require some introductory declarative statements; and,
owing to Esperanto's syntactic flexibility, would probably call for
the addition of one or two further syntactic categories; e.g. `:'
might be used for the direct object.

> It looks like a nice thing to run through your mind now and then to
> see just what the language is doing, sort of like a computer-codable
> version of sentence-diagramming (remember that?)

        Although I have not seen "sentence diagramming," I think you
must have the idea.  The GAS system is specifically designed to be
computer-codable, with the twist that the code, being ASCII, is
directly accessible via the computer keyboard, and is also readily
networked.

  ----------------------------------------------------------------
As an appendix, I will repeat the first draft of the code:-

            The gASCII Analysed Syntax (GAS) system  (Alpha Test ver.)
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Basic        .          !          >           @           $

        substantive   action   modifier   space, time   logical
          (noun)      (verb)  (adjective,  preposition   preposition
                                adverb)


Tense        /          \          ~           ^           |

          future      past     continuous  conditional    now



Modifiers    #          %          >           v

          number     quantity   quality    auxiliary
       (countable) (measurable) (property)   verb


Conjunctions            +                    &

              joins words, phrases     structural: joins clauses


Functions     x         t          =           <           <`

          location    time    equals, like   verb is    participle
       X proper noun          as , similar   passive


              0         ?          -           ,           ;

       negative: un-   general   joining    pronoun    pronoun
     no, not, never   question   concepts   personal  impersonal
      nothing                   (compounds)


People         o             s              '              `.

            other          self        possessive        gerund
        O proper noun   S name of
                         1st person


Specific       ?o        ?.        ?!         ?x          ?t
 questions
              who       what       why      where        when


Clauses    (      )       {       }       [       ]      "      "

          adjectival        noun          adverbial     parenthesis
                                                        or quotation

Non-literal          :            *       *         _         _
 language
              metaphor or           idiom          start     end
              other n-l term                        of sentence

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Examples of GAS in application

1.  The cat sat on the mat.    =     U felis pa sed epi u tape.

        _.   \!   @._


2.  While three fat boys sat by the river bank, and ate jam sandwiches,
     their sisters stole their bicycles.

    _[t  #>o   \!   @.-.   &   \!   >.]  ,'o   \!   ,'._

  Tem tri paki ju-an pa sed proxi u ripa, e pa fago plu konfekti pani,
     mu plu fe-sibi pa klepto mu plu bi-rota.

<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Cheers,

Robin


______________________________________________________________________
 
 
>From Edmund.Grimley-Evans@cl.cam.ac.uk Wed Feb 23 17:00:04 1994
Date: Wed, 23 Feb 1994 17:00:04 GMT
From: Edmund.Grimley-Evans@cl.cam.ac.uk
Message-Id: <9402231700.AA22665@nene.cl.cam.ac.uk>
To: conlang@diku.dk
Cc: "Edmund.Grimley-Evans" 
Subject: Glosa/Esperanto

   KOSMO GLOSA                     KOSMA GLOSO

Panto-pe panto-lo               ^Ciu homo ^cie ajn
Pote uti Glosa;                 povas uzi Gloson,
Plu panto-landa civi            de ^ciu lando anoj,
De panto nati-lingua.           de ^ciu naski^glingvo.
In panto domi, panto-lo         En ^ciu domo ie ajn,
Ofici e fabrika.                en oficej', fabriko,
Panto-pe dic; Glosa es          ^ciu diras: Estas Glos'
pro Gaia komunika.              por Tera komuniko.

Id dona interese                ^Gi donas intereson
a panto bon konversa            al konversacio.
Poli civi dice id               ^Ciu ^statano parolas ^gin
In Afrika in Asia.              en Afriko kaj Azio.
In trena, navi, aeroplan        En trajno, ^sipo, aviadil',
Panto-speci vaga.               dum ^cia voja^gado,
Pe audi Glosa panto-lo          Gloson oni a^udas ^cie:
Id eko peri Gaia.               ^Gi hejmas ^cirka^u Tero.

Gru ad ali puta-me              Ta^uga por iu komputil',
poesi, e musika.                poemoj, kaj muziko.
Panto tema, panto-lo            ^Ciun temon, ^cie ajn,
Glosa don service.              Gloso povas helpi.
Dice, lekto, audi id            Parolu, legu, a^udu ^gin
tem sporta e relaxa,            dum sporto kaj distri^go.
Glosa es u maxi bon             Gloso estas pleja bon'
A fluvi, bun e saxa.            ^ce rok', river' kaj monto.

Glosa es u nece-ra,             Gloso estas necesa^j',
Un universa lingua.             la lingv' universala.
Fu doci sani panto-lo           Instru' estonta pri sanec'
E paci e eduka.                 kaj paco kaj eduko.
Glosa pote proba                Gloso povas provi
ultra pan limita                preter ^ciu limo,
stop u lingua frustra           la lingvan frustron ^san^gi
Sti kosmo komunika.             al Kosma Komuniko.

   Wendy Ashby                     [translation by Edmund GRIMLEY EVANS]

Would any proponent of Glosa like to reciprocate with a decent
translation into Glosa of "La Espero"?

I hope no one objects to this message. I joined the list only a few
days ago, so I don't yet know precisely what is acceptable here.

I joined after seeing conlang in the list of publicly available
distribution lists in the news group news.lists; conlang was a new
addition to that list, though I realise that the list itself is not
new. It was a good idea to get it listed, in my opinion.

Edmund

______________________________________________________________________
 
 
>From lojbab@access.digex.net Wed Feb 23 12:10:49 1994
From: Logical Language Group 
Message-Id: <199402232210.AA03039@access2.digex.net>
Subject: Re: combining nouns via case; primitives
To: conlang@diku.dk
Date: Wed, 23 Feb 1994 17:10:49 -0500 (EST)
Cc: lojbab@access.digex.net (Logical Language Group)

And Rosta writes:

> But the recent discussion on this list
> has also included cases like 'partitive', based on the PART-OF
> relation. This, I suggest, is a predicate. So for 'finger nail'
> instead of having 'finger' marked for partitive case, it shd be
> marked as 'accusative' (or whatever) argument of 'part-of'.
>     fingernail: part-of(nail,finger)

This is fine for simple compounds, where the relationship between the
parts is clearly a two-place predicate.  But consider compounds like "tooth root"
(or Hungarian "coat finger" (meaning "coat sleeve")).  Here the relationship
is something like:

	analogy(part-of(root, TREE), part-of(tooth_root, tooth))

This is not a simple predicate, and involves the implicit
thing-which-really-has-a-root, which I have marked "TREE" here.  Given
that there is a predicate
"A is a part of B which is analogous to the C'th part of D", how do we
mark which of "tooth" and "root" is being assigned to A, B, C, or D?

This is what I meant by the notion that using the general predicate vocabulary
to mark compounding relationships was recursive.  We have two predicates to
relate, pred1 and pred2.  We mark the relationship between them as pred3.
We then must mark the relationships between pred1-pred3 and pred3-pred2 as
pred4 and pred5, respectively.....
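The two analyses from the thread can be written down as plain nested tuples, which makes the problem concrete: the four-place "analogy" predicate has slots that the compound itself never says how to fill.  This is only a data-structure sketch; the predicate names come from the messages above, and `arity' is my own illustrative helper.

```python
# And Rosta's simple case: 'finger' is the non-accusative argument of part-of.
fingernail = ("part-of", "nail", "finger")

# John Cowan's hard case: the relation is an analogy between two part-of
# facts, with the implicit TREE left unexpressed by the compound itself.
tooth_root = ("analogy",
              ("part-of", "root", "TREE"),        # TREE is implicit
              ("part-of", "tooth_root", "tooth"))

def arity(pred):
    """Number of argument slots in a predicate expression."""
    return len(pred) - 1

# Both expressions are binary at the top level, but tooth_root's arguments
# are themselves predicates -- which is where the pred3/pred4/pred5 regress
# starts if those inner relationships must also be marked.
```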

-- 
John Cowan		sharing account  for now
		e'osai ko sarji la lojban.


 

______________________________________________________________________
 
 
 
 
>From hrick@world.std.com Wed Feb 23 18:36:38 1994
Date: Wed, 23 Feb 1994 23:36:38 -0500
From: hrick@world.std.com (Rick Harrison)
Message-Id: <199402240436.AA03499@world.std.com>
To: conlang@diku.dk
Subject: Vorlin update
X-Charset: LATIN1
X-Char-Esc: 29

 
A couple of weeks ago, And Rosta said it's a shame we don't hear
much about Vorlin anymore.  Well, that was a very nice thing to
say, and it inspired me to dig up the old computer files (untouched
for nearly a year) and ponder the possibilities.
 
Vorlin has some serious handicaps.  Some compound words are hard
to pronounce (/s/ followed by /sh/, /p/ followed by /b/, /sh/
followed by /zh/ etc).  The initial premise, that a sufficient
vocabulary can be built from CVC roots, might be erroneous; in
the latter days I was experimenting with adding some CVCVC roots.
(An a priori battery of CVC roots might be sufficient, but coming up 
with sufficiently distinct and memorable a posteriori CVC radicals is 
difficult.)  And I no longer believe in mandatory tense indication for
verbs or mandatory flexion of nouns to indicate number.  
 
Regrettably, I should have learned more about linguistics and 
conlangery before I published Vorlin.  On the other hand, it was 
publishing Vorlin that got me connected to the quasi-community of 
people interested in a variety of constructed languages.
 
The good news is that no-one has memorized the last version of Vorlin 
(as far as I know), so I can make changes without inconveniencing anyone.
And indeed I have begun making changes.  Expect something to be
published around March 21st (Vorlin's 3rd birthday).
 


______________________________________________________________________
 
 
>From lojbab@access.digex.net Fri Feb 25 06:25:42 1994
From: Logical Language Group 
Message-Id: <199402251625.AA12718@access2.digex.net>
Subject: Re: Vorlin update
To: conlang@diku.dk
Date: Fri, 25 Feb 1994 11:25:42 -0500 (EST)
Cc: lojbab@access.digex.net (Logical Language Group)
X-Charset: LATIN1
X-Char-Esc: 29

Rick Harrison writes:

> Vorlin has some serious handicaps.  Some compound words are hard
> to pronounce (/s/ followed by /sh/, /p/ followed by /b/, /sh/
> followed by /zh/ etc).

You could adopt the Lojban mechanism for dealing with difficult clusters:
introduce the schwa, written "y", for separating them.  This requires
a definition of "difficult", so that words can have a single canonical
form.

The Lojban definition of a forbidden cluster is fourfold:

	1) duplicates are forbidden:  *kk, *rr, *ss.
	2) voiced-unvoiced pairs are forbidden unless one consonant is
		l, m, n, r:  *bf, br, *zs, zn.
	3) clusters where both members belong to {/sh/, /zh/, /s/, /z/}
		are forbidden.
	4) the special cases cx, kx, xc, xk (which are hard to articulate)
		and mz (which is easily confused with nz).

Others might want to add restrictions based on homorganicity or the lack of it.
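The four rules above are mechanical enough to sketch as a checker, together with a `y'-insertion helper in the spirit of the Lojban mechanism described.  This uses Lojban orthography (c = /sh/, j = /zh/, x = unvoiced velar fricative); the function names and the exact consonant sets are my paraphrase of John Cowan's list, not an official implementation.

```python
# Lojban consonant classes, in Lojban orthography.
VOICED       = set("bdgvjz")
UNVOICED     = set("ptkfcsx")
LIQUID_NASAL = set("lmnr")
SIBILANT     = set("cjsz")          # /sh/, /zh/, /s/, /z/
SPECIAL      = {"cx", "kx", "xc", "xk", "mz"}

def forbidden(a, b):
    """True if the two-consonant cluster a+b is forbidden."""
    if a == b:                                   # rule 1: no duplicates
        return True
    if a + b in SPECIAL:                         # rule 4: special cases
        return True
    if a in SIBILANT and b in SIBILANT:          # rule 3: sibilant pairs
        return True
    mixed = (a in VOICED and b in UNVOICED) or \
            (a in UNVOICED and b in VOICED)
    if mixed and not (a in LIQUID_NASAL or b in LIQUID_NASAL):
        return True                              # rule 2: voicing clash
    return False

def buffer_word(word):
    """Insert the schwa 'y' between any forbidden consonant pair,
    giving each word a single canonical buffered form."""
    out = word[0]
    for prev, cur in zip(word, word[1:]):
        if forbidden(prev, cur):
            out += "y"
        out += cur
    return out
```

With this definition, *bf is repaired to "byf" while br passes untouched, matching the examples in rule 2.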

> The initial premise, that a sufficient
> vocabulary can be built from CVC roots, might be erroneous; in
> the latter days I was experimenting with adding some CVCVC roots.

This sounds useful.  In particular, you will need to be able to borrow
long forms for representing the multiplicity of living things, cultures,
specialized foods, and other such referents.

> (An a priori battery of CVC roots might be sufficient, but coming up 
> with sufficiently distinct and memorable a posteriori CVC radicals is 
> difficult.)

An excellent point.  The short radicals of -gua!spi (C^nV^n, where V includes
[aeioumnlr]) are technically a posteriori, being derived from English, Latin,
and Chinese, but they are not very mnemonic.

> And I no longer believe in mandatory tense indication for
> verbs or mandatory flexion of nouns to indicate number.  

Good!

-- 
John Cowan		sharing account  for now
		e'osai ko sarji la lojban.



______________________________________________________________________
 
 
>From riddle@is.rice.edu Sat Feb 26 05:26:34 1994
From: riddle@is.rice.edu (Prentiss Riddle)
Message-Id: <9402261726.AA14116@is.rice.edu>
Subject: Volapuk Renaissance
To: CONLANG@diku.dk (ConLang list)
Date: Sat, 26 Feb 1994 11:26:34 -0600 (CST)
X-Www-Page: http://is.rice.edu/~riddle/index.html
X-Charset: LATIN1
X-Char-Esc: 29

Writing in Linguica #22, Michael Helsem said the following:

> Books on, in, or about Volapuk are extremely hard to find.
> Basically, nothing has been reprinted since de Jong's German-Volapuk
> Volapuk-German dictionary (1931), & for English you have to go back
> to Seret (1887).  Even the Volapuk Center in England (Brian Bishop)
> can only xerox the copies they have (@ 10p a sheet).  I only just
> received, from a college in Virginia via Interlibrary loan, the
> 106-year-old book, which i plan to copy (very gingerly) in the days
> to come...  It's been checked out 5 times: in 1932, 1959, 1968, 1986,
> 1987 & 1989 (is that a renaissance or what?).
> 
> Ralph Midgley, who does the newsletter (24 Staniwell Rise,
> Scunthorpe, DN 17 1TF United Kingdom) has put together a condensed
> grammar & English-Volapuk only wordlist (but thoughtfully includes
> the Volapuk hymn...), & Joseph Biddulph sells the grammar portion of
> Seret (the first 59 pages) for 6.5 pounds: 32 Stryd Ebeneser,
> Pontypridd, CF37 5PB, Wales.  Biddulph's is the kind of small press i
> like: not many books, & they're not very big, but where else can you
> find out about Frisian, Visigothic, Mandingo, Bushman or Twi?  (Also
> Basque & Gujarati.)  Midgley sent me the wordlist for free (since i
> wrote what is probably the first original poetry in Vp. since before
> WWI) but a donation is suggested & it has a list of the affixes,
> something oddly missing from the other books i have.  In answer to my
> inquiry, Joseph Biddulph says he might consider a reprint of the rest
> if there were a definite demand for it -- i don't know if that means
> one, ten, or a hundred or more letters -- if i had access to the
> Conlang BBS [sic] on Internet, i would do some agitating, but...

There you have it.  If you would like to see a reprinting of a primary
source on Volapuk, drop a line to Joseph Biddulph.  If you want to reach
Michael Helsem, drop him a line at 1031 Dewitt, Dallas, TX 75224-2651.

-- Prentiss Riddle ("aprendiz de todo, maestro de nada") riddle@rice.edu
-- Opinions expressed are not necessarily those of my employer.



______________________________________________________________________
 
 
>From ucleaar@ucl.ac.uk Sun Feb 27 12:55:48 1994
From: ucleaar 
Message-Id: <24097.9402271255@link-1.ts.bcc.ac.uk>
To: conlang@diku.dk
Subject: Re: Volapuk Renaissance
Date: Sun, 27 Feb 94 12:55:48 +0000
X-Charset: LATIN1
X-Char-Esc: 29


Prentiss Riddle quoting Michael Helsem about two British Volapuekists
reminds me of a feature on British TV a few months ago about 
Volapuek. The programme was called Eurotrash, celebrating features
of European culture that are kitsch, lewd or risible, all introduced
by Jean-Paul Gaultier & Antoine de Caunes bickering in franglais. One 
item presented two English men who, it was claimed, were the only 
Volapuekists in Britain but had never met before being introduced by 
the programme.  They were filmed conversing in V. & singing the V. hymn. 
I found them rather charming & dignified, and a rather peculiar inclusion
in a magazine programme most of whose features were about avantgarde
pomo Luxembourgeois pornographers.

-----
And



______________________________________________________________________
 
 
>From dasher@netcom.com Sun Feb 27 15:17:33 1994
Date: Sun, 27 Feb 1994 23:17:33 -0800
From: dasher@netcom.com (Anton Sherwood)
Message-Id: <199402280717.XAA15865@mail.netcom.com>
To: conlang@diku.dk
Subject: Biddulph's publications
X-Charset: LATIN1
X-Char-Esc: 29

Announcement found in the back of a booklet on Visigothic:

	Two Serial Publications--
	Edited by Joseph Biddulph:
	
	_Xenododo_:
	_A Miscellany of Ancient and Exotic Tongues_.
	Irregular. 24-28 concise pages.
	ISSN 0957-5375. UKL2.00 per part.
	There are absolutely no boundaries in the
	studies proposed by _Xenododo_ - living and
	extinct languages and dialects, European,
	Asiatic, Oceanic, American, African linguistics,
	Scots, Creole, English and Celtic dialect, a
	little anthropology - with useful addresses,
	and an invitation to readers to participate
	- popular and academic items from a very
	wide field. Published in basic format in
	English and any other European language,
	and obtainable as a series or as separate
	parts. Write for details today!
	
	_Hrafnhoh_  ISSN 0952-3294
	Irregular. 24-28 concise pages.
	
	A stimulating mixture of heraldry and philosophy,
	genealogy and poetry, literature and antiquity,
	placenames and reviews, etc., etc.
	Write for details to:
	
	Languages Information Centre,
	32 Stry^d Ebeneser,
	Pontypridd  CF37 5PB,
	Wales/Cymru.

*\\* Anton 						Ubi scriptum?





