< ^ > < v >  < < >  [ > ]  [ Top ]  [ Home ]


Linguistics Theory, Foundations, and Modern Development

An Overview of Linguistics and Linguistic Applications

Prepared for Gary Blahnik, The Union Institute

Austin Ziegler, 15 March 1995

Last Modified: 31 March 1996

 < info > Navigation around Linguistics.

This file is the entire Linguistics paper put into one file for ease of printing. It is fully hyperlinked internally.
This paper is organized in three levels:
1. Chapters
2. Sections
3. Subsections
The following buttons appear at the top of every page associated with the Linguistics paper. The left version is the inactive version, the right version is the active version. These example buttons are not active.
 < ^ >  [ ^ ] Up one level. This will take you to the current section for the subsection or the current chapter for the section.
< v >  [ v ] Down one level. This will take you to the first section of the chapter or the first subsection of the section. At the end of a section or subsection, it will take you to the beginning of the next chapter or section, if it exists.
 < < >  [ < ] Previous section. This will take you to the previous chapter, section, or subsection.
 < > >  [ > ] Next section. This will take you to the next chapter, section, or subsection.
 < Top >  [ Top ] School page. This will take you to the school index.
 < Home >  [ Home ] Home page. This will take you to Fantôme's homepage.


Table of Contents

 * Introduction

 * On the Origins of Linguistics

 - Ancient Linguistics: Babylon and India
 - Early Western Linguistics: The Greeks, the Church, and Medieval Philosophy
 - Realism, Nominalism, Humanism, and the Renaissance
 - Rationalism, Sciental and Practical
 - Spiritualism and Materialism
 - Interlude: Grammarians

 - Linguistics as a Discipline: Nineteenth Century and the Early Twentieth Century

 * Linguistic Assumptions and Principles

 - Fundamentals

 - Language as Knowledge

 - Grammars

 . Internal Grammars

 . Descriptive Grammars

 . Prescriptive Grammars

 . Teaching Grammars

 - Parts of Grammars

 . Morphology

 . Syntax

 . Semantics

 . Phonetics and Phonology

 - Dialects and Language in Society

 * Language Change and the History of the English Language

 - Written Language and Change

 * Language Acquisition

 - Computers, Formal Language, Natural Language, and Language Acquisition




Introduction

This course is an inquiry into the nature of human language and the methods of modern linguistics. The inquiry is pursued through the available literature to determine the origins and the current state of linguistics as a field of scientific study. Particular emphasis falls on the English language and on computational aspects of linguistics (including formal vs. natural languages and computer aids to linguistic applications and research).

Linguistics is a relatively new social science concerned with the nature of language, the acquisition of language, and language change. This course examines these issues and some of their finer points. The following results are expected from this course:

  1. An understanding of the origin of the field of linguistics.
  2. An understanding of the assumptions and principles of linguistics, including:
    1. Universals and structure;
    2. Dialects and language change;
    3. Language acquisition; and
    4. Applications of linguistics to other fields.
  3. A general familiarity with the types of research conducted in the field of linguistics.
  4. A general familiarity with using linguistics to conduct literary analysis.
  5. An understanding of the history of the development of the English language; and
  6. A general familiarity with the cutting edge of linguistics, including:
    1. Computer technology used in linguistics; and
    2. Formal and natural languages.

To demonstrate knowledge of linguistics, this paper summarizes the results of the readings along with personal insights. The paper is organized thematically: each of the above results is addressed in its own section where appropriate, and woven into other sections otherwise.

In particular, the use of linguistics in literary analysis and the research conducted in the field of linguistics are independent courses in and of themselves, and should be inferred by the discussion of linguistics in general here. The history of the development of the English language, of course, is part of the discussion on language change, and the applications of linguistics to other fields is partially apparent in the discussion on computer technology.

The discussion on computer technology (and formal and natural languages) will be part and parcel with the acquisition of language, as linguists are trying to emulate that process as much as is possible in both the recognition of language and the translation of language.

< info > Hyperlinks within the text of this document generally go to graphic images of the word highlighted. This is done so that the word can be displayed in plain text and with the proper accents and diacriticals that cannot be displayed with HTML. Use the "back" button or function of your browser to return to the main text. < info >




On the Origins of Linguistics

Not long after language evolved in our species, people began to study its nature. For many, the interest was merely practical - different tribes spoke different languages. Others wondered why this difference existed. For the ancient Hebrews, the variety of languages was explained by the myth of the Tower of Babel, recorded in Genesis. Throughout the history of linguistics as a philosophical pursuit, an anthropological tool, and a science in its own right, there have been those who believed that there was at one point a single, unified language.

Some have even believed that if this original language were discovered, it would establish the dominance of the race which had "created" it, in a sense. Of course, many of these same researchers firmly believed that their own language, and therefore their own race, was superior. Modern linguists recognize that there are no underdeveloped languages. All languages are infinitely extensible, and there is no "proper" way to use any language, as the form itself constantly changes, much like a living organism.




Ancient Linguistics: Babylon and India

The earliest known studies of language as a structure are commonly dated to the fifth century B.C.E., with Panini's grammar of Sanskrit, or to the third or second century B.C.E., with Krates of Mallos's and Dionysios Thrax's grammars of Greek. Jacobsen points out that the ancient Babylonians, circa 1600 B.C.E., made the first recorded attempt, with revisions appearing through about 600 or 500 B.C.E. The Babylonians were, according to Jacobsen, attempting to preserve a large body of literature that was written in Sumerian, a dying language in the process of being replaced by Akkadian [Thorkild Jacobsen, "Very Ancient Texts: Babylonian Grammatical Texts," in Dell Hymes, Studies in the History of Linguistics: Traditions and Paradigms, 1974, 41-62]. Salient points of Jacobsen's analysis of the Babylonian grammars include a note that the form of Sumerian was kept (words, et al.) but that the analysis broke through the form for greater understanding, much as current analysis of Latin does in schools. In effect, it kept the language alive even though it was out of use.

The early Indians faced a situation similar to that of the Babylonians, several centuries later (circa 1000 B.C.E.). The Indian linguistic drive is important because it was propelled by the same force that drove most linguistic studies until the eighteenth century: religion. Specifically, the rituals of early Hinduism called for the recitation of words in the original Vedic. Therefore, as the language changed, the original form (the samhitapatha, "continuous recitation") was divided into the padapatha (the "word for word recitation"), producing a full analysis, on the phonemic level, of a fixed body of text. Later linguistic efforts, notably Panini's, expanded the Pratisakhya linguistic analyses from Vedic utterances (chandas) toward the spoken language (bhasa). Panini, Katyayana, Patañjali, and others realized that language was infinite and could not be described by enumeration, but only with the help of "rules and exceptions" (samanyavitesaval laksanam). Throughout the centuries, Indian linguistics has been refined and simplified, and has inspired similarly rigorous studies of other languages [J. F. Staal, "The Origin and Development of Linguistics in India," in Hymes, 63-74]. In many ways, the early Indians, even before Panini, are the forefathers of modern linguistics.




Early Western Linguistics: The Greeks, the Church, and Medieval Philosophy

Western linguistic theories have followed two main trends, originating from a philosophic opposition between Heraclitus (ca. 540 - ca. 480 B.C.E.) and the Eleatic school (Parmenides, ca. 504 B.C.E.), and consist of "genetic dynamism" ("functionalism") arguments and "static elementarism" ("entitarism") arguments. These two separate views are firmly embedded in common language: for example, the "universe" is "Wirklichkeit" ("what works") in German, but "reality" ("things") in English. The first Western writing on the study of language is Plato's (427-347 B.C.E.) Kratylos which, in dealing with the orthotès (rightness) and the alètheia (truth) of words, continues the controversy and wavers constantly between the two sides, eventually locating truth not in individual words but in the proposition as a whole (sentences and paragraphs). While Plato sought the truth of language, the rhetoricians sought its practical effectiveness, based not so much on knowledge as on one's skill in speech [Pieter A. Verburg, "Vicissitudes of Paradigms," in Hymes, 191-195].

Aristotle (384-322 B.C.E.) believed that there was but one world view, one inner language (one mode of thought, as it were), with languages distinguished merely by differences in their sound systems. While he had direct influence on linguistics, it was likely Aristotle's taxonomies of disciplines which fostered the philology of Alexandria, including Dionysios Thrax's (first century B.C.E.) identification and elaboration of word classes and Apollonios Dyscolos's (second century C.E.) study of syntax. Alexandria started its own studies of philology and grammar at this time. While Alexandria kept grammar as its own study, isolated from logic and philosophy, throughout the existence of the Roman Empire, in the Medieval period we find a resurgence of philosophical study and an acceptance of Aristotelian logic because, as Verburg notes, the Church could accommodate Aristotle in naturalibus because he generally avoided the supernatural [Verburg in Hymes, 195-197].

Medieval grammatical studies were, prior to approximately 1100 C.E., limited to the study of Latin, for the same reason that the Indians strove to preserve the ancient Vedic: religion. Unlike the Indians, however, departure into the "modern" languages of the time was all but prohibited; it should be noted that this position was not in the least unique to grammar. The Roman Catholic Church (and its political counterpart, the Holy Roman Empire) actively discouraged any knowledge-seeking outside of holy writ [G. L. Bursill-Hall, "Toward a History of Linguistics in the Middle Ages, 1100-1450," in Hymes, 77-92]. Following this period, there was a steadily increasing interest in grammars of native (vulgar) languages and dialects. It should be noted that this increasing interest in grammar (linguistics) coincided with the resurgence of the Aristotelian dialectic method. According to Bursill-Hall, the study of languages during the Medieval period was predominantly data-oriented, or specific to the problem at hand. Verburg notes that the Church's responsibility in the Middle Ages was to "teach culture to barbarians as well as to preach faith to unbelievers" [in Hymes, 197], which resulted in a scholastic study of Latin. Verburg also notes that "the Crusades resulted in a widening of the cultural horizon and a renewed interest in knowledge" [in Hymes, 198].




Realism, Nominalism, Humanism, and the Renaissance

With the revival of "belles lettres" circa 1100 C.E. in Chartres and Orléans, critical thought and analysis of language once again became not only desirable but necessary. The Modistae wrote the Grammatica Speculativa following a realistic model (a "mirroring," experiential grammar). The name of this particular work implies that it is a reflection of what is real, and could be considered descriptive in nature; but as Medieval philosophers did not merely describe, but proscribe and prescribe, it should be considered a prescriptive grammar. Specifically, this grammar denies the flexibility of language in its three modi: modus essendi (reality/things) <==> modus intelligendi (intellect/thoughts) <==> modus significandi (language/signs). As reality is "static" (according to the Modistae), so are the intellectual image of reality and the linguistic elements. This theory relies heavily upon Aristotle's categories of language [Verburg in Hymes, 198].

A counterpart to realistic grammar is the nominalistic theory. Like realistic grammar, nominalism holds that language is an expression of thoughts, but it holds that the intellect is spontaneous in nature and chooses whether or not to express its thoughts lingually. Both theories are flawed in that they ignore the human agent (an active intellect is not identical with the human self) and treat the world as a mere set of entities [Verburg in Hymes, 199].

Humanism is responsible for several shifts in thought in Western language studies. First, Humanism brought about an emphasis on speaking well, a revival of rhetoric. Erasmus (1466-1536 C.E.) went further, indicating that speakers should not only use the language well but use it ethically. Secondly, vernaculars became acceptable as literary (learned) languages. This particular avenue broadened the goal and the accessibility of education. Humanism views language (spoken or written) as a normative function; language is a dynamic revelation of what a person means to say. The primary Renaissance linguist is Francis Bacon (1561-1626 C.E.), who reverted toward Nominalism in his views on language [Verburg in Hymes, 199-202].




Rationalism, Sciental and Practical

As philosophy was overrun by rationalism (from the late sixteenth century to the early eighteenth century), and physical laws were discerned, everything became calculable and thus predictable. Working from Galileo's (1564-1642 C.E.) rediscovery of the mathematics behind mechanics, Hobbes (1588-1679 C.E.), in Ratio, determined that the purpose of words was to perform the calculus for reasoning about reality. Therefore, metaphors should be avoided because of their inconsistency. Hobbes is guilty of subsuming the natural words of natural languages to an artificial symbolism - mathematics. In truth, mathematical symbolism is a subset of natural language, and not the inverse, as Hobbes assumed.

Other mathematicians, such as Descartes (1596-1650 C.E.) and John Wilkins, decided that pure mathematical symbols were the language necessary for scientific cognition, while an artificial language was necessary for social discourse. Wilkins, like many others, developed his own artificial language. Locke (1632-1704 C.E.), like other rationalists, believed that language had two forms: one to find reality (mathematics) and the other for discourse. Locke resigned himself to the fact that language was often used inaccurately (the "civil" use, as opposed to the strict "philosophical" use for scientia).

Leibniz (1646-1716 C.E.) worked out a mathematics of language using prime numbers and multiplication rather than addition and subtraction, but soon determined that this, although easy to develop, was nearly impossible to actually use. From this difficulty, Leibniz developed the idea of "one universal grammar," which was strongly based on (but significantly different from) Grammatica Speculativa. Leibniz's theory ultimately developed along the lines of:

Universe and other monads <==> Representation <==> Human Monad

(A monad is a unit.) Leibniz notes that although some languages are representationally poor (they are not good for scientific inquiry), they may still represent the truth, but in a "distorted" way (much as something viewed at a 45° angle is a distorted image of the same object viewed at a 90° angle). Leibniz's theory was picked up by Wolff (1679-1754 C.E.), who taught it in a much-diluted manner to Bopp (1791-1867 C.E.). Bopp is considered "the founder of linguistics proper" [Verburg in Hymes, 202-209].

In the early years of the eighteenth century, the practical disciplines (such as ethics, economics, sociology, and medicine) resisted the domination of mathematics over every field of study. Many of the adherents to these practical philosophies did not see the world as a static place which was ultimately calculable. Rather, one could be enlightened through empirical experience and evidence. It was no longer necessary to have everything precisely predictable; the relative certainty of highest probability - brought about through trial and error and analogy - became the rule of the day.

This spirit culminated in a twenty-year debate at the Berlin Academy on the origin of human language [Hans Aarsleff, "The Tradition of Condillac: The Problem of the Origin of Language in the Eighteenth Century and the Debate in the Berlin Academy before Herder," in Hymes, 93-156]. Condillac (1715-1780 C.E.) believed that language was a function of the soul, and that calculus and algebra were merely other languages (although much more precise) [Verburg in Hymes, 209-210]. The greatest impact of the revolution against theoretical mathematics was the introduction of practicality into the study of language.




Spiritualism and Materialism

Rousseau (1712-1778 C.E.) rejected the functional-logical point of view in studying language in Essai sur l'origine des langues, and embraced a functional-spiritual point of view. To Rousseau's ear, language corruption "began when precision took the place of expression" [Verburg in Hymes, 211]. Language quality, to Rousseau, is a function of its musical quality, and is measured by its social effects - a language promotes or does not promote liberty, pity, equality, etc. Unique in linguistic studies to that date was Rousseau's introduction of the listener as an active part of the equation (the speaker operates his or her language on the listener). Further, Rousseau viewed language as a multi-level function: language could carry information on a lower level, while a higher level carried the social "operation" on the listener - indicating respect, contempt, etc. In perhaps an oblique way, a contemporary of Rousseau, Diderot (1713-1784 C.E.), is responsible for the rise of prescriptive grammars, "by teaching that optimum speech is the exclusive personal competence of a genius" [Verburg in Hymes, 211-212].

Contemporaries of Rousseau did not necessarily view man as a spiritual being, but as a machine or a plant (in particular, de La Mettrie in his L'Homme plante and L'Homme machine). Materialism is the necessary counterpart of Rousseau's spiritualism, and it generated the groundwork for positivism in linguistics and for mechanical linguistics. As in Rousseau's inquiries into language, functionalism was the key: the rapid establishment of new disciplines, and of new specializations within disciplines, demanded closer scrutiny of the purpose of what is [Verburg in Hymes, 212-213].




Interlude: Grammarians

Throughout the history of linguistics as a philosophical pursuit, there have been people telling others how their language should be spoken. However, there were no published rules until the Enlightenment in the eighteenth century, beginning with Bishop Robert Lowth's A Short Introduction to English Grammar (1762). Mark Twain wrote, "I cannot imagine that God speaks anything but the Queen's own English." Although this is a tongue-in-cheek view, Twain pinpointed the flaw of all prescriptivists: the assumptions that the language is perfect in the form in which it is taught (i.e., that it is static), and that change is not good but corruption.

It is important to note that prescriptivist grammarians became important only because there was a large new class, the nouveau riche, which found it important to read, write, and "speak properly." This was, to their way of thinking, the only way of being accepted into the upper class by the upper class (at that time, the nobility). As schooling became somewhat more standardized over time, these prescriptivist grammarians took on almost Biblical authority, even to the point that during the Colonial period aboriginal peoples were discouraged from speaking their own languages because they were deemed uncouth, uncivilized, imperfect, and, perhaps most importantly, non-Christian.

Prescriptivist grammarians were not interested in studying the language but in legislating it (at least as nearly as they could do so). This was found not only in English but in most languages of the time, and even in some languages today (such as French, which is to some degree regulated by a language commission).




Linguistics as a Discipline: Nineteenth Century and the Early Twentieth Century

By the early nineteenth century, linguistics as the study of language had become a discipline in its own right. It was still strongly related to philology and anthropology, because there was still a strong interest in origin-speculations. Linguistics had, however, become a specialization where it had heretofore been one among many general-interest studies.

The earliest linguists were Bopp and Grimm (1785-1863 C.E.). Bopp was a student of Leibniz's grammaire rationelle as interpreted by Wolff in the German schools. Bopp's method "was a system of material elements and formal relations, parallel to the system of ideas" [Verburg in Hymes, 214]. Unique to Bopp's analysis was his separation of stems and roots from affixes, analyzing languages as wholes into parts, elements, and segments. A flaw in Bopp's view of language is that he saw it as a static collection of written words.

Grimm's effect on the study of language was to transform comparative morphology (as developed by Bopp) into a historical lexicology, which more closely analyzed sound changes and phonetics. Bopp viewed the sound changes as a seesaw, with the emphasis being offset from heavy to light and light to heavy, in a mechanical fashion. Grimm, however, believed these sound changes to be inherently formative and grammatical in nature. Humboldt (1767-1835 C.E.) also had a real but limited impact on linguistics with his marriage of the mathematico-theoretical and moral-practical thoughts on language, drawing from Leibniz, Wolff, Harris, Condillac, Rousseau, and even Kant (who never discussed language in particular). Humboldt's effect is reflected in Sapir's (1884-1939 C.E.) student Whorf (1897-1941 C.E.), through Croce (1866-1952 C.E.) [Verburg in Hymes, 214-216].

Throughout most of the nineteenth century, until the 1870s, linguistics further developed the theories of Bopp, where language is a static representation. Late in the century, dynamism returned to the forefront. Schmidt (1843-1901 C.E.) released his wave theory to correct and amend the rigid tree theory. Schuchardt (1842-1927 C.E.) introduced his views on the complex relationships of language, and rejected dogmatic views. Steinthal (1823-1899 C.E.) and Paul (1846-1921 C.E.) ended the one-sided interest in Indo-European languages. Whitney (1827-1894 C.E.) introduced linguistics to America and emphasized the life-cycle and the growth of languages as a social phenomenon. The dynamic nature of language, however, was mostly limited to the laws of sound-shifts (concerned with the natural conditions of speech). According to Verburg, "the positivistic conception remained, only changing the static substratum for the dynamic subsistence of sounds" [in Hymes, 216]. Brugmann (1849-1919 C.E.) summarized the century's results, and Delbrück added a syntax, which had been forgotten or ignored for a very long time.

Saussure (1857-1913 C.E.) posited that languages are composed of two essential and autonomous systems, speech and signs, which must be investigated simultaneously for true understanding. Phonemics (originally phonology) was born in 1928 in the Prague Linguistic Circle, representing an effort to embody Saussure's analysis of the sounds of a language in terms of a functional system. This differed from the traditional view in that the phoneme was defined by its semantic qualities and by the distribution of phonemes within words. Bloomfield (1887-1949 C.E.) published Language in 1933, establishing a theory of lingual intersubjectivity, in which language is a means of cooperation between two nervous systems [Verburg in Hymes, 217-218].

Linguistic study has changed rapidly from about 1940 to present day, accelerated by the development of the computer. While the volume of changes is too much to chronicle in this paper, most of the assumptions and principles in use today (as shown in the following chapters) are developed from interdisciplinary paradigms, and the application of computing technology to the study and classification of language.




Linguistic Assumptions and Principles

Modern linguistics holds several basic assumptions. These assumptions range anywhere from the structure of language to how languages change over time and space. These assumptions are sometimes referred to as principles of linguistics. While not necessarily proven, they are generally accepted as truth. They are, at the very least, accepted and accounted for in linguistic research.




Fundamentals

The most fundamental assumption of linguistics is that all languages have universal features. This concept does not mean that all languages have the same structure, nor does it state that the sound systems of the languages are the same. Simply by observation, one can hear that languages differ in how they are structured and, most especially, in how they sound. Instead, each language has a natural structure and a limited set of acceptable sounds. English, for example, does not have the harsh gutturals that Dutch has; Japanese has five basic vowel sounds (a, i, u, e, o), whereas English has ten basic vowel sounds (a, ā, e, ē, i, ī, o, ō, u, ū). (In Japanese, the length of the vowel itself may vary, and the total sound may appear different through rapid pronunciation, but each vowel is pronounced independently.) English sentences are constructed in Subject-Verb-Object order, whereas Japanese uses Subject-Object-Verb order.

But there are universals in every language, nonetheless. One of these universals is that every language has a grammar. All users of a language know this grammar: for native speakers the knowledge is usually unconscious; for non-native speakers it is usually conscious. A language's grammar includes its sound system, its lexicon, and its structure. The sound system comprises the sounds that can occur in the language, including their proper positions in words. The lexicon consists of sounds combined to form meaningful units, or words. The structure governs how words are strung together to form meaningful statements, or phrases and sentences. Every language has a specific sound system, a lexicon, and a structure. Some languages share portions of their sound system, lexicon, or structure with other languages; but each language is unique.

All languages are human. Some animals may seem to have languages (in particular, the cries of birds, dolphins, and monkeys), but studies have shown that the sounds and patterns used are, beyond a certain vocabulary, invariant. Through the study of language, twelve facts have become part of the operational knowledge of linguistics.

  1. Wherever humans exist, language exists.
  2. There are no "primitive" languages: all languages are equally complex and equally capable of expressing any idea in the universe. The vocabulary of any language can be expanded to include new words for new concepts.
  3. All languages change through time.
  4. The relationships between the sounds and meanings of spoken languages and between the gestures (signs) and meanings of sign languages are for the most part arbitrary.
  5. All human languages utilize a finite set of discrete sounds (or gestures) that are combined to form meaningful elements or words, which themselves form an infinite set of possible sentences.
  6. All grammars contain rules for the formation of words and sentences of a similar kind.
  7. Every spoken language includes discrete sound segments like p, n, or a, which can be defined by a finite set of sound properties or features. Every spoken language has a class of vowels and a class of consonants.
  8. Similar grammatical categories (for example, noun, verb) are found in all languages.
  9. There are semantic universals, such as "male" or "female," "animate" or "human," found in every language in the world.
  10. Every language has a way of referring to past time, negating, forming questions, issuing commands, and so on.
  11. Speakers of all languages are capable of producing and comprehending an infinite set of sentences.
  12. Any normal child, born anywhere in the world, of any racial, geographical, social, or economic heritage, is capable of learning any language to which he or she is exposed. The differences we find among languages cannot be due to biological reasons.
[Fromkin and Rodman, 25]
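Facts 5 and 11 above - a finite inventory of discrete sounds, an unbounded set of sentences - can be shown in miniature with a small combinatorial sketch. The sound inventory and word shapes here are invented for illustration, not drawn from any real language.

```python
# A toy illustration (not a real language): universal facts 5 and 11 in
# miniature. A small, finite inventory of discrete sounds yields a finite
# word list, while one repeatable phrase rule yields sentences of any length.
from itertools import product

# Hypothetical sound inventory: three consonants and two vowels.
consonants = ["p", "t", "k"]
vowels = ["a", "i"]

# Every CVCV combination counts as a "word": 3 * 2 * 3 * 2 = 36 forms.
words = ["".join(w) for w in product(consonants, vowels, consonants, vowels)]

# A recursive rule like "S -> word S | word" licenses arbitrarily long
# sentences, so the sentence set is infinite though the lexicon is not.
def sentence(length):
    return " ".join(words[i % len(words)] for i in range(length))

print(len(words))     # -> 36
print(sentence(4))    # -> "papa papi pata pati"
```

Even with this trivially small inventory, nothing bounds the length of a sentence; only the lexicon is finite.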

To properly study linguistics, we must appropriately define certain concepts. Following the pattern of An Introduction to Language, 5th Edition (Fromkin and Rodman, 1993), I will cover the nature of human language and grammatical aspects of language. This comprises parts one and two of An Introduction to Language, over 250 pages of text, diagrams, and charts. As this exceeds the scope of this document, I will be summarizing the main points. Because Introduction builds its discussion of language both thematically and logically, I will follow the same format for the remainder of this chapter.




Language as Knowledge


When we say that we know a language, we imply that we are familiar with its sound system, have a reasonable command of its lexicon, and properly use its structure. This does not mean that we are necessarily conscious of our knowledge of the language's structures, but it means that we have a usable command of the language, allowing us to communicate with others who know the language. It is important to note that languages are not necessarily spoken (American Sign Language is a fully developed language in its own right).

Part of what makes a language is its infinite extensibility. New words can be, and constantly are, added to the lexicon to describe new concepts. Words may be combined to form phrases, which may be combined to form sentences. So long as the words conform to the internal grammar of the language, any sentence or phrase can be created. The phrase need not make sense or be contextually appropriate, but it can be created. "Talking" birds (such as mynah birds and parrots) and other communicating animals are not capable of spontaneous creation of ideas or phrases.

Even though languages are infinitely extensible, they are fundamentally understandable as well. It might, in view of the "lofty" and lengthy form of English that lawyers and lawmakers use, be more appropriate to call a language fundamentally "parseable." This means that any user of the language, even one encountering words outside his or her personal lexicon, may decipher the meaning of phrases by isolating them into their most basic parts (words). Specifically, it should be noted that all sentences must be parsed by the reader or listener; it is impossible to memorize all possible sentences in a language, as all languages are infinitely extensible.

Sentences should also be considered infinitely extensible; object phrases may be added to existing sentences, making a longer sentence. One example of this is the children's game "Telegraph." The game begins with a short, simple phrase, and every person in the circle adds another word or phrase when telling it to the next person. Another good example is the children's rhyme, a telegraph-like sentence, "This Is the House That Jack Built":

This is the farmer sowing the corn,
that kept that cock that crowed in the morn,
that waked the priest all shaven and shorn,
that married the man all tattered and torn,
that kissed the maiden all forlorn,
that milked the cow with the crumpled horn
that tossed the dog,
that worried the cat,
that killed the rat,
that ate the malt,
that lay in the house that Jack built.
[Fromkin and Rodman, 85]

Every user of a language can parse these sentences as valid because he or she, consciously or subconsciously, follows the rules of the language, or the grammar. The next several sections will cover grammar both in general and in particular.


 [ ^ ]  [ v ]  [ < ]  [ > ]  [ Top ]  [ Home ]


Grammars


Until now, we have indicated that the rules for forming sentences (and, to some degree, creating words) in a language are called the grammar. However, there are four types of grammar: internal grammars, descriptive grammars, prescriptive grammars, and teaching grammars.


 [ ^ ]  < v >  < < >  [ > ]  [ Top ]  [ Home ]


Internal Grammars

Internal grammars constitute the whole of our knowledge about a language. This knowledge is not conscious but innate (as much as language can be innate; see the section on language acquisition for more information). Every time one speaks or writes, one calls upon this knowledge; however, most users of a language cannot describe the process by which they use this knowledge. To provide an understanding of the linguistic processes involved, we must use a descriptive grammar.


 [ ^ ]  < v >  [ < ]  [ > ]  [ Top ]  [ Home ]


Descriptive Grammars

Descriptive grammars are precisely that: descriptions of the linguistic processes involved in accessing the knowledge that a user has about a language. A descriptive grammar does not tell the user how one should speak or write, but describes the basic linguistic knowledge, explaining how it is possible to speak and understand, and filling in what is known about the sounds, words, phrases, and sentences of the language [Fromkin and Rodman, 13]. When Fromkin and Rodman refer to grammars, and more specifically to rules in grammars, they mean the natural rules of the language and the rules found in the model of the language (the internal and the descriptive grammars). In descriptive grammars, there is no right or wrong way to speak or to write except as the nature of the language allows. Essentially, all dialects and languages are considered equal in complexity and structure.


 [ ^ ]  < v >  [ < ]  [ > ]  [ Top ]  [ Home ]


Prescriptive Grammars

A prescriptive grammar is precisely the opposite of a descriptive grammar in application. Like a descriptive grammar, it proposes rules that describe the language. In contrast to a descriptive grammar, it prescribes these rules as the "proper" way to speak or write, and argues that change in a language is corruption of the language and should be avoided at all costs. Prescriptive grammar initially arose from the desire of the nouveau riche middle class to have their children speak like those of the upper classes. (Bishop Robert Lowth is responsible for the first English prescriptive grammar, A Short Introduction to English Grammar with Critical Notes, which combines English and Latin grammars with his personal preferences. Prior to the publication of his work in 1762, nearly everyone spoke in the same style [vulgate]; only with the publication of this work did both the rising middle class and the upper classes begin diverging from what was then standard grammar to the new prescribed grammars [Fromkin and Rodman, 15].)


 [ ^ ]  [ v ]  [ < ]  < > >  [ Top ]  [ Home ]


Teaching Grammars

Teaching grammars are those grammars used by learners of a new language to associate it with their native language. Teaching grammars generally use descriptive grammars to describe the new language's rules for the learner, and assist the learner with the lexicon of the new language by providing glosses, which are parallel words in both the new and the old languages (such as the French word maison, which corresponds to English house). Linguistic rules are generally provided in reference to the user's knowledge of their native language. Teaching grammars do, strictly speaking, prescribe how a new language should be spoken, but they do so for a non-native speaker, not for one who already knows the language. Once the language is learned, the teaching grammar can be abandoned for one's internal knowledge of language structure [Fromkin and Rodman, 16].


 [ ^ ]  [ v ]  [ < ]  [ > ]  [ Top ]  [ Home ]


Parts of Grammars

Grammars are composed of several different parts, each of which will be approached separately throughout this paper. Fromkin and Rodman point out that most people consider a language's grammar to be solely the syntactic rules. In comparison, linguistic grammars consist of phonology (the sound system), semantics (the system of meanings), morphology (the rules of word formation), syntax (the rules of sentence formation), and lexicons (the vocabulary of words). Every language has these parts, and each follows similar rules. Each part of the linguistic grammar is essential; without any one, linguistic knowledge is incomplete.


 [ ^ ]  < v >  < < >  [ > ]  [ Top ]  [ Home ]


Morphology

Morphology is the arbitrary assignment of meaning to valid strings of phonemes. Phonemes are the smallest units of sound available in a language; these will be covered more thoroughly in the section on phonology. "[Knowledge of] a word means knowing both how to pronounce it and its meaning" [Fromkin and Rodman, 35]. While the assignment of a string of sounds to a meaning and vice versa makes a word, for it to be meaningful, the string of sounds still has to follow the linguistic rules of the language (one reason that most speakers of one tongue "massacre" other tongues with their native accent). It must also be generally accepted. I cannot reason that the word "dinosaur" means "the lowest color on the visible spectrum," because the word "dinosaur" already has a meaning and the meaning "the lowest color on the visible spectrum" is represented by several words, most notably "red." Certainly, if enough people used "dinosaur" to mean the color we know as "red," then "dinosaur" would take on the meaning of "red" in addition to "thunder lizard."

A perfect example is the word "googol," which means "1 followed by 100 zeros (10^100)." This word was coined by the nine-year-old nephew of Dr. Edward Kasner, an American mathematician. While the concept was there to be named, there was no single word to refer to the numerical concept before then. Not all speakers of English know the word, but it is generally accepted and used by an entire class of English speakers [Fromkin and Rodman, 36]. If only one person used the word, however, it would be without valid meaning. One's internal lexicon associates many things with a word: the spelling ("orthography"), the sound ("phonology"), the syntactic class ("grammatical category," i.e., verb, noun, preposition, etc.), as well as multiple meanings (denotative, connotative, and personal).

Within the syntactic classes of words, we will find that some words are added, removed, or even changed in usage or meaning. In English, these are the nouns, verbs, adverbs, and adjectives, which make up the largest part of the vocabulary. These words are considered "content" or "open class" words, because they provide meaning to the language and change often. Other words, such as personal pronouns, prepositions, articles, and conjunctions, are considered "function" or "closed class" words because they provide not meaning but reference, and they rarely change [Fromkin and Rodman, 38-39].

As stated earlier, morphology is the knowledge of the rules by which new words are added to a language. Most of the time they are not coined, as "googol" was, but are extensions or combinations of morphemes (the most elemental units of grammatical form) [Fromkin and Rodman, 41]. Morphemes are often, but not always, represented by a single syllable (such as a + mor + al and un + lady + like). (Syllables will be covered more thoroughly under phonology.) While some of them are "root" meanings (such as mor, from mores, and lady), others are prefix (a, meaning "without," or un, meaning "not") or suffix (al, meaning "characterized by," or like, meaning "characteristic of") modifiers. Although not present in English, some languages have "infix" morphemes (which are inserted into other "whole" morphemes) and "circumfix" morphemes (in which both prefix and suffix are required to complete the morpheme) [Fromkin and Rodman, 41-45].

Root (or "stem") morphemes may be bound or free. A free morpheme might be "boy," "aardvark," or any other word. However, some root morphemes are bound to other root morphemes, such as "huckle + berry." While you will find "berry" by itself, you will never find "huckle" used as a word, because it only has meaning when attached to the morpheme "berry" [Fromkin and Rodman, 45-47].

Sometimes there is a "lexical gap," where a meaning is needed but no word exists. Sometimes, as with "googol," new words (or morphemes!) are coined, but there are several other ways to "create" new words within a language. The first is "derivational morphology," which is the combination of specific ("derivational") morphemes with other morphemes to create, or derive, new words. Many times this will involve a change in grammatical class (such as the noun "boy" combined with the derivational morpheme "ish" to form the adjective "boyish"), but this is not necessarily the case (such as "semi" + "annual" or "pun" + "ster"). Not all derivational morphemes are applicable to all words (such as "Commun" + "ist" and not "ite" or "ian," which are other morphemes used to identify members of a group). Thus, derivational morphology is the application of the knowledge of which affix morphemes can be added to which root morphemes and words. (Our knowledge of all of the rules is not complete by any means, however, as is shown by Amsel Greene's book Pullet Surprises, published in 1969.)
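The selectivity of derivational morphology can be sketched as a small lookup procedure. This is an illustrative model only, not anything from Fromkin and Rodman; the class labels, the tiny lexicon, and the derive function are all invented for the example.

```python
# A toy model of derivational morphology: each suffix maps an input
# class to an output class, and each root lists which suffixes it
# actually accepts (not every combination is licensed).

SUFFIXES = {
    "ish":  ("noun", "adjective"),   # boy -> boyish (class changes)
    "ster": ("noun", "noun"),        # pun -> punster (class unchanged)
    "ist":  ("bound root", "noun"),  # Commun -> Communist
}

LEXICON = {
    "boy":    ("noun", {"ish"}),
    "pun":    ("noun", {"ster"}),
    "Commun": ("bound root", {"ist"}),
}

def derive(root, suffix):
    """Return (derived word, class), or None if the affix is not
    applicable to this root."""
    if root not in LEXICON or suffix not in SUFFIXES:
        return None
    root_class, accepted = LEXICON[root]
    in_class, out_class = SUFFIXES[suffix]
    if suffix not in accepted or root_class != in_class:
        return None
    return (root + suffix, out_class)

print(derive("boy", "ish"))     # ('boyish', 'adjective')
print(derive("Commun", "ist"))  # ('Communist', 'noun')
print(derive("Commun", "ite"))  # None: *"Communite" is not licensed
```

The accepted-suffix set encodes exactly the observation above: "ist" attaches to "Commun," while "ite" and "ian," though real morphemes elsewhere, do not.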

Words may be coined in a variety of ways. Sometimes they are created for a specific purpose, such as Kodak, Jell-O, and other "brand names." At other times, two words (not morphemes) may be combined to form other words, such as "bittersweet," "pickpocket," and other compound words. To further confuse things, "the meaning of a compound is not always the sum of the meanings of its parts; a blackboard may be green or white" [Fromkin and Rodman, 54]. All languages have rules for forming compounds by joining words. Acronyms, formed by taking the initials of several words (such as "NASA" from National Aeronautics and Space Administration), are another form of word coinage (although not all of them are pronounceable as words, such as "UCLA" from University of California, Los Angeles). Sometimes words are conjoined in such a way that parts of one or more of the words are elided, or blended, such as "smog" from "smoke + fog" [Fromkin and Rodman, 55-57]. Other common words are "back-formed" by the removal of common affixes from words that did not originally have them, such as "peddle" from "peddler" [Fromkin and Rodman, 57-58]. Sometimes abbreviations are lexicalized ("phone," "bus," and others) in a process called "clipping" [Fromkin and Rodman, 58]. At still other times, proper nouns become lexicalized, such as "sandwich," "jumbo," and other words (1500 of which are compiled in Willard R. Espy's book O Thou Improper, Thou Uncommon Noun: An Etymology of Words That Once Were Names, published in 1978) [Fromkin and Rodman, 58].
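Several of these coinage processes are mechanical enough to sketch in code. The functions below are illustrative only; the split points for blends and clippings, for example, are chosen by hand here, just as speakers choose them in the language.

```python
def acronym(phrase):
    """Form an acronym from the initials of the capitalized words,
    skipping function words like "and" ("NASA"-style coinage)."""
    return "".join(w[0] for w in phrase.split() if w[0].isupper())

def blend(a, b, keep_a, drop_b):
    """Blend two words by eliding material from each: keep the first
    keep_a letters of a, drop the first drop_b letters of b."""
    return a[:keep_a] + b[drop_b:]

def clip(word, start, end=None):
    """Clipping: lexicalize only part of a longer word."""
    return word[start:end]

print(acronym("National Aeronautics and Space Administration"))  # NASA
print(blend("smoke", "fog", 2, 1))                               # smog
print(clip("telephone", 4))                                      # phone
print(clip("omnibus", 4))                                        # bus
```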

Finally, it is important to note that all morphemes have grammatical structure attached to them. Sometimes the grammatical meaning is only apparent when the morpheme is combined with other morphemes (as affixes may modify the grammatical meaning). Inflectional morphemes are those morphemes which have no meaning outside of the grammatical meaning, such as the pluralizing "s" in English. Other morphemes have exceptions, or suppletive forms, such as hit/hit (present/past) or sheep/sheep (singular/plural) [Fromkin and Rodman, 59-63].


 [ ^ ]  < v >  [ < ]  [ > ]  [ Top ]  [ Home ]


Syntax

When examining syntax, or more specifically sentence structure and the rules for forming sentences (much as morphology covered the rules for forming words), linguists use "Phrase Structure Trees" containing "Syntactic Categories." This structure is designed to show the linear order of words within syntactic categories, which are either "phrasal categories" or "lexical categories." Phrasal categories are, of course, made of phrases, such as noun phrases ("the cat" or "the cat in the hat") or verb phrases ("sat" or "sat in the chair") [Fromkin and Rodman, 79-84].

Grammatical knowledge is inherent to the knowledge of the language. This includes not only using proper words, but knowledge of the proper order of the words. Some sentences may be syntactically correct but not be parseable because the words do not exist in the vocabulary of the recipient (such as Lewis Carroll's poem "Jabberwocky") [Fromkin and Rodman, 73-76]. Because the users of a language know the syntactic ordering of the language, they are able to parse an infinite number of sentences. Even if the reader has never seen the sentences:

Jack sat.
The cat sat.
The cat in the hat sat.
The cat sat in the chair.
The cat in the hat sat in the chair.

The reader can parse them because they are properly formed sentences. A linguist might use a phrase structure tree to parse the sentence into multiple phrasal and lexical categories, and the phrasal categories would themselves be parsed into multiple lexical categories. In English, the sentence is always formed of a Noun Phrase and a Verb Phrase. A Verb Phrase may have a Noun Phrase following the verb, making it the object of the sentence. Phrases are always one or more words, possibly with grammatical morphemes attached. If an object is not present, our innate knowledge of the language tells us that the object of the sentence is the same as the subject of the sentence.
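To make the idea of a phrase structure tree concrete, the sketch below writes the tree for "the cat in the hat sat in the chair" as nested tuples of (category, children). The bracketing follows the NP/VP/PP categories used in this chapter, but the specific encoding is my own illustration, not Fromkin and Rodman's notation.

```python
# Each node is (category, child, child, ...); a leaf pairs a lexical
# category with a word.
tree = (
    "S",
    ("NP",
        ("Art", "the"), ("N", "cat"),
        ("PP", ("P", "in"), ("NP", ("Art", "the"), ("N", "hat")))),
    ("VP",
        ("V", "sat"),
        ("PP", ("P", "in"), ("NP", ("Art", "the"), ("N", "chair")))),
)

def leaves(node):
    """Read the sentence back off the tree, left to right."""
    category, *children = node
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]               # lexical category: a word
    return [word for child in children for word in leaves(child)]

print(" ".join(leaves(tree)))  # the cat in the hat sat in the chair
```

Note how the same NP shape ("the" + noun) recurs at three places in the tree, once embedded inside another NP via a prepositional phrase.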

The sentence "Jack sat" and the sentence "the cat in the hat sat in the chair" each have only one noun phrase and one verb phrase. The second sentence has an object phrase ("in the chair," indicating the location of the action), but the sentences are essentially the same in structure. (English and other languages have prepositional phrases, which are prepositions followed by noun phrases.) The limitless aspect of language (continuing to add phrasal categories), as in the "This Is the House That Jack Built" example earlier in this paper, is called recursion. Because phrasal categories (particularly noun phrases and prepositional phrases) can be added to provide greater precision, the ultimate sentence length is infinite, although sentences become unwieldy after a time and the listener or reader tends to dissociate the meaning from the action [Fromkin and Rodman, 78-87]. (I have been told a story about High German, where the verb, which always comes at the end of the sentence, is the capstone which brings meaning. A professor was giving a lecture and continued in one long sentence. He had a heart attack and died before he could state the verb, which meant that his entire meaning was lost. This may be anecdotal, but it illustrates the problem of exceedingly long sentences.)

Phrase Structure Rules are the rules which a language follows, or more accurately, the linguist's interpretation of those rules. For example, in English, the Sentence (S) is always a Noun Phrase (NP) followed by a Verb Phrase (VP), represented as S => NP VP. An NP, then, is an optional article (Art) followed by a noun (N), or NP => (Art) N. A VP is a verb (V) followed by an NP and/or a prepositional phrase (PP), or VP => V (NP) (PP). And of course, a PP is a preposition (P) followed by an NP, or PP => P NP [Fromkin and Rodman, 87-89]. (It should be noted that this differs from Fromkin and Rodman in that I have indicated that the Art of an NP is optional, while Fromkin and Rodman indicate that it is not optional. The Art is, in fact, optional only when the noun involved is a proper noun or a pronoun that does not need a qualifier of number or location.) Of course, there may be additional modifiers to phrases, such as NP => (Art) (Adj)* N (PP), or VP => V (Adv)* (NP) (PP), where (Adj)* means zero, one, or more adjectives, and (Adv)* means zero, one, or more adverbs [Fromkin and Rodman, 96-97]. In theory, perhaps, (PP)* would be the proper representation of (PP) in VP and NP. Other languages have different phrase structure rules, which, in turn, create different sentence (tree) structures. In Swedish, the NP rule would be stated as NP => N Art. In Japanese, the PP rule would be PP => NP P [Fromkin and Rodman, 94-95].
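These phrase structure rules can be run as a toy generator. The grammar below encodes S => NP VP, NP => (Art) N (PP), VP => V (NP) (PP), and PP => P NP by listing the alternative expansions that the optional elements produce; the miniature lexicon is invented for the example, and the depth cap exists only to stop the NP-inside-PP-inside-NP recursion from running forever.

```python
import random

RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["N"], ["Art", "N"], ["Art", "N", "PP"]],
    "VP": [["V"], ["V", "NP"], ["V", "PP"]],
    "PP": [["P", "NP"]],
}
LEXICON = {
    "Art": ["the"],
    "N":   ["cat", "hat", "chair", "Jack"],
    "V":   ["sat", "saw"],
    "P":   ["in", "on"],
}

def expand(symbol, depth=0):
    """Recursively rewrite a symbol until only words remain."""
    if symbol in LEXICON:
        return [random.choice(LEXICON[symbol])]
    options = RULES[symbol]
    if depth > 4:                 # cap recursion: use shortest expansion
        options = options[:1]
    words = []
    for part in random.choice(options):
        words.extend(expand(part, depth + 1))
    return words

for _ in range(3):
    print(" ".join(expand("S")))
```

Every string the generator emits is grammatical under the rules, which is precisely the sense in which a finite rule set yields an infinite language.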

The lexicon is a vital part of the syntax as well, for each word in the lexicon has attached to it the necessary syntactic rules. "Put," for example, might be represented as "put, V; put, NP PP," meaning that it requires both an NP and a PP within the VP [Fromkin and Rodman, 98-99]. Other words might require different combinations depending on the meaning and intent of the word (such as belief, which might be "belief, N; belief, (PP); belief, (S)") [Fromkin and Rodman, 99].

There are six possible types of basic word order: subject-verb-object, subject-object-verb, object-subject-verb, object-verb-subject, verb-subject-object, and verb-object-subject (abbreviated SVO, SOV, OSV, OVS, VSO, and VOS), with the most frequent being SVO, VSO, and SOV. Examples of each are English (SVO), Irish (VSO), Japanese (SOV), Panare from Venezuela (OVS), Xavante from Brazil (OSV), and Huave from Mexico (VOS). In VO languages (SVO, VSO, or VOS), auxiliary verbs ("to be," etc.) tend to precede the verb, adverbs tend to follow the verb, and prepositions tend to precede the noun. In OV languages (SOV, OSV, and OVS), most of which are SOV, like Japanese, auxiliary verbs tend to follow the verb, adverbs tend to precede the verb, and there are postpositions instead of prepositions. The VP phrase structure rule for an SVO language might be VP => V NP, whereas in Japanese, the VP phrase structure rule would be VP => NP V [Fromkin and Rodman, 110-111].
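The six orders are easy to see side by side if we simply linearize the same subject, verb, and object under each pattern. The example words are my own; real languages of course also differ in morphology, not just order.

```python
ORDERS = ["SVO", "SOV", "OSV", "OVS", "VSO", "VOS"]

def linearize(order, subject, verb, obj):
    """Place the same three constituents according to a basic word order."""
    slots = {"S": subject, "V": verb, "O": obj}
    return " ".join(slots[letter] for letter in order)

for order in ORDERS:
    print(order, "->", linearize(order, "Jack", "ate", "rice"))
# SVO -> Jack ate rice   (English-like)
# SOV -> Jack rice ate   (Japanese-like)
# and so on for the remaining four orders
```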


 [ ^ ]  < v >  [ < ]  [ > ]  [ Top ]  [ Home ]


Semantics

Semantics is "the study of the linguistic meaning of words, phrases, and sentences" [Fromkin and Rodman, 124]. As discussed under syntax and morphology, all parts of the language have some meaning (even if they are only "markers" like "wa" and "o" in Japanese). Words and morphemes have meanings as defined in our internal lexicon. If we know the word "assassin," we know that it refers to a human, a murderer, and a killer of prominent people [Fromkin and Rodman, 125]. All words carry semantic properties such as these, sometimes more, sometimes fewer. A "tempter," for example, is a man who tempts (someone, usually a woman), while a "temptress" is a woman who tempts (someone, usually a man). Both words imply humans.

Homonyms (including homophones, words with similar pronunciations but different spellings, and homographs, words spelled the same way but with completely different meanings) add to the ambiguity of the language and often require additional semantic context to determine the proper meaning of a given word. ("He grabbed a bat" would be a good example, because we do not know if the "he" we are talking about is a baseball player or a veterinarian.) Ambiguity can also be caused by the use of synonyms (words that have similar meanings); minor differences can be found and identified with synonyms. Antonyms are words that have properties that are mutually exclusive (such as fast/slow). Fromkin and Rodman note that some antonyms are complementary pairs (alive/dead), while others are gradable (they need specification, like hot/cold). In gradable pairs, one member is marked and the other is unmarked; the unmarked one is used to ask about the degree of either (such as "how tall is it?" being answered with "three hundred feet"). Other antonyms are relational opposites (employer/employee). (There will, of course, be times when otherwise antonymous words mean the same thing, such as good/bad in reference to the "quality" of a scare.) [Fromkin and Rodman, 124-134]

Names, the final class of words in semantics, always refer to a unique object. In English, names rarely have articles preceding them unless there is a need to clarify, such as "the Mississippi River." Other languages, such as Greek, require articles before names. Proper names, unlike other words, cannot be pluralized and remain definite (such as "the Smiths" implying the Smith family). Proper names, may, however, be plural in their "natural" state, such as "the Pleiades." Rarely will adjectives be used with a proper name, but they can be used to further specify or to emphasize a quality [Fromkin and Rodman, 135-137].

Linguists indicate the semantic properties of words with semantic features and semantic redundancy rules. For example, if a word has the feature [+human], it is automatically [-abstract]. This means that antonyms almost always have reciprocal features, such that:

     fast     [+fast]     [-slow]
     slow     [+slow]     [-fast]

There are literally hundreds of semantic redundancy rules for every word, most of which need not be stated. When we look at a dictionary, we are looking at the most common usage of the word (the properties are refined to the most common present and not-present properties) [Fromkin and Rodman, 124-137].
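The idea that one stated feature automatically supplies others can be sketched as a small inference loop. The particular rules below ([+human] entailing [-abstract], and fast/slow as reciprocal features) follow the discussion above, but the encoding, including the [+animate] rule, is my own illustration.

```python
# Redundancy rules: a (feature, value) pair implies further features.
REDUNDANCY = {
    ("human", True):   [("abstract", False), ("animate", True)],
    ("animate", True): [("abstract", False)],
    ("fast", True):    [("slow", False)],
    ("slow", True):    [("fast", False)],
}

def complete(features):
    """Apply redundancy rules until no new feature can be inferred."""
    features = dict(features)
    changed = True
    while changed:
        changed = False
        for pair in list(features.items()):
            for feat, val in REDUNDANCY.get(pair, []):
                if feat not in features:
                    features[feat] = val
                    changed = True
    return features

print(complete({"human": True}))  # adds abstract=False, animate=True
print(complete({"fast": True}))   # adds slow=False
```

Stating [+human] alone thus yields the fuller feature bundle, which is why a dictionary entry need only record the features that are not predictable.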

But as we have seen, knowledge of words does not constitute knowledge of the language. Just as we must know the syntax (how to put words together into sentences) of a language, we must know the semantic meaning that the syntax enforces. We must know how adjectives affect nouns, to what degree, and in what order. Although "large" in English implies "large for the modified object," such as "large balloon" or "large house," in another language, there might be two words for the concept "large" - one for smaller items, like a balloon, and one for larger items, like a horse. The order is important as well: "brick red" is not the same thing as "red brick." As Fromkin and Rodman say, "meanings build on meanings" [138].

Noun phrases take on different roles within sentences depending upon location. For example, in one sentence, there may be an agent (a "doer"), a theme (a "recipient"), a location ("where"), a goal ("where the action is directed"), a source ("where the action originated"), and an instrument ("the object used to accomplish the action") [Fromkin and Rodman, 139]. In all cases, the verb indicates the action. While some languages, such as English, allow various themes - even non-themes - to be the subject of the sentence, others, such as German, are much stricter. Languages which are stricter in what theme a subject may represent generally have a strong "case" system, or what morphological shape the noun takes according to its thematic role in the sentence. English has the genitive/possessive case (the possessive form of a noun). The universal principle of "theta-criterion" has been proposed, which states in part that a particular thematic role may occur only once in a sentence [Fromkin and Rodman, 140-142].
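The theta-criterion's "one role per sentence" clause is simple enough to state as a check. The role assignments below are invented for illustration.

```python
def satisfies_theta_criterion(assignments):
    """True if no thematic role is assigned to more than one phrase."""
    roles = [role for _phrase, role in assignments]
    return len(roles) == len(set(roles))

ok = [("the boy", "agent"), ("the ball", "theme"),
      ("with a bat", "instrument")]
bad = ok + [("the girl", "agent")]       # a second agent

print(satisfies_theta_criterion(ok))     # True
print(satisfies_theta_criterion(bad))    # False
```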

Semantics and syntax closely interact in that something must generally be syntactically correct to be semantically correct (in turn, the words must also be morphologically correct). The semantics of the language tell us when the syntax is incorrect because meaning is not present. However, one will often find syntactically and semantically correct sentences that are still "false" in actuality. But we have the knowledge in principle of how to discover the truth of a sentence, even if we do not have the direct means to do so, because we know the language. If the language is unknown to the reader, he or she has no way to determine its truth. Often, however, the truth of one sentence entails the truth of another (in much the same way that semantic redundancy rules imply properties of words). As we know how to determine the truth of sentences, so do we know how to find the referents of noun phrases [Fromkin and Rodman, 142-148].

Rules of language, however, can often be broken. Although a sentence may be syntactically correct, it may be devoid of proper semantic meaning (such as Noam Chomsky's famous phrase, "Colorless green ideas sleep furiously") [Noam Chomsky, Syntactic Structures, The Hague: Mouton, 1957, quoted in Fromkin and Rodman, 149]. Though the result is anomalous in nature, semantic rules are often broken for poetic imagery. More often, anomalous phrases are used metaphorically, where the reader is required to stretch his or her imagination to derive the proper meaning (such as Cervantes' statement, "Walls have ears"). Idioms, phrases which have specific meaning only in fixed forms, cannot be combined with other phrases using "normal" semantic rules. Specifically, they cannot be reformed ("bite your tongue" cannot be reformed into "bite the tongue which is yours" without losing meaning) - they are frozen in meaning [Fromkin and Rodman, 150-153].

While it is all well and good to be able to create meaningful sentences, it is vital that they be meaningful in the context of a given discourse. Spoken communication is most often "telegraphic" in nature, with verb phrases not being specifically mentioned, pronouns abounding, clauses dropped, and other "breaking" of semantic rules. In nearly all cases, there is contextual knowledge to fill in the missing gaps in communication and make the discourse cohere. Articles such as the and a determine whether a specific instance or just some instance of the referenced noun is meant ("a contract" or "the contract"). Yet there are also rules of spoken conversation to ensure that the elisions of language do not interfere with the meaning of the words. The "maxims of conversation" (first discussed by H. Paul Grice in 1967), which fall under his cooperative principle, include the maxim of quantity (a speaker's contribution to the discourse should be as informative as required, neither more nor less) and the maxim of relevance (a speaker's contribution should be relevant to the conversation) [Fromkin and Rodman, 154-158].


 [ ^ ]  [ v ]  [ < ]  < > >  [ Top ]  [ Home ]


Phonetics and Phonology

There is a finite set of meaningful sounds which appear in human languages. Not all of these sounds appear in any given language, i.e., each language has its own finite subset of meaningful sounds. For example, clicks have no meaning in English, but they are part of Xhosa (used in South Africa; in fact "xh" represents the tsk clicking sound). Linguistic sounds (those that are meaningful, or phonemes) are, like language, deliberate in nature. Throat-clearing and sneezing, therefore, are not phonemes in any language. It is, in part, the difference in the subset of phonemes which makes it difficult for native speakers of English to understand French-born speakers of English. Even two dialects of the same language can have different phonemes for the same representational spelling (this difference is found often between American English, British English, and Scottish English) [Fromkin and Rodman, 176-179].

Alphabetic spelling (rather than pictographic representation of words) represents the pronunciation of words, although the sounds of the words in a language are unsystematically represented [Fromkin and Rodman, 181]. Because of the discrepancy between spelling and sounds, and even the variety of sound sets in languages around the world, in 1888 the International Phonetic Association developed an alphabet based on the Roman alphabet to spell words phonetically. The phonetic symbols in the IPA alphabet "have a consistent value unlike ordinary letters which may or may not represent the same sound in the same or different languages" [Fromkin and Rodman, 184]. While some differences in pronunciation are important for meaning, others are not important and are merely ordinary variations of the same utterance.

The study of speech sounds, particularly articulatory phonetics, emphasizes how sounds are made by speakers. Speech sounds are distinguished by a variety of factors: the state of the vocal cords, the volume of air used in voicing a single sound (aspiration), the manner of articulation, and even the place of articulation within the mouth, the head (nasal or oral), or the throat [Fromkin and Rodman, 210].

A speaker of a language innately knows the acceptable range of sounds ("phones") within his or her language. Phones are the variants of phonemes ([p^h] and [p] are phones represented by /p/). Phonetic features exist, similar to semantic features for words, and include voicing, nasality, labiality, continuance, and aspiration. Those which are binary-valued features (such as [±nasal] and [±voicing]) are considered distinctive features of phonemes. A linguist may use minimal pairs (words distinguished by a single phone occurring in the same position, such as /bat/ and /pat/) to discover the phonemes of any given language; because the phones in such a pair may be distinguished by a single phonemic feature, this may also identify distinctive features within the phonetic set. Features may vary in distinctiveness from language to language. The innate knowledge of a language informs the speaker as to what phones and what sequences of phones are legal in the language. Also, stress may be introduced to identify different words and different meanings of words based on sentence position. The emphasis on a particular word (or syllable) helps carry the context of the conversation [Fromkin and Rodman, 261-263].
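Minimal-pair discovery of the kind just described is mechanical: compare same-length transcriptions and keep the pairs differing in exactly one position. The toy word list below uses ordinary spelling in place of real phonemic transcription, purely for illustration.

```python
from itertools import combinations

WORDS = ["bat", "pat", "bit", "pin", "bin"]

def minimal_pairs(words):
    """Return (word, word, (segment, segment)) for every pair of
    same-length words differing in exactly one position."""
    found = []
    for a, b in combinations(words, 2):
        if len(a) != len(b):
            continue
        diffs = [(x, y) for x, y in zip(a, b) if x != y]
        if len(diffs) == 1:
            found.append((a, b, diffs[0]))
    return found

for a, b, (x, y) in minimal_pairs(WORDS):
    print(f"{a} / {b}: '{x}' contrasts with '{y}'")
# bat / pat shows 'b' contrasting with 'p' -- evidence that /b/ and
# /p/ are distinct phonemes in this (toy) language
```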


 [ ^ ]  [ v ]  [ < ]  < > >  [ Top ]  [ Home ]


Dialects and Language in Society

There are as many idiolects of a language as there are speakers. When two varieties of a language differ systematically because of geographical region, social class, or political boundaries, yet share the same basic grammar and remain mutually intelligible, they are considered dialects. "Dialect differences tend to increase proportionately to the degree of communicative isolation between groups" [Fromkin and Rodman, 277]. Accents develop because of regional phonological or phonetic differences, and sometimes from nonnative speakers' use of their native pronunciation for native words. There are also some differences among regional dialects in syntax. Fromkin and Rodman point out that most American English speakers will conjoin the sentence "John will eat and Mary will eat" as "John and Mary will eat," but the Ozark dialect permits "John will eat and Mary." Some people feel that the dialectization of language is the corruption of language; these purists, however, rarely succeed in changing the language of the people. In fact, in France, recognition of local dialects has been granted because of political pressure, after having long been denied by law [Fromkin and Rodman, 275-294].

Certain dialects are actually minimal combinations of two languages, often containing just enough to permit interaction between two societies. These are generally called pidgins; they use just enough of each language's vocabulary and syntax to produce something that all sides can understand with little work. A pidgin such as Tok Pisin might be learned in six months and provide the basic level of knowledge needed to begin semiprofessional training, where Standard English might take sixty months (five years) [Fromkin and Rodman, 297]. Pidgins that are adopted by a community as a native language are creolized and are from that point called creoles. Unlike pidgins, creoles are full languages, albeit used by a limited number of people [Fromkin and Rodman, 298-299].

Dialects may also change depending on situation: formal, informal, jargon, and slang are good examples of different situational dialects. While the "formal" and "informal" styles are basically "standard" within a language and are acceptable in most circles, "slang" is highly informal and generally not acceptable except in the most casual of circumstances. However, as slang terms become more common in general use, they shift from slang to informal use (and sometimes rise as high as formal use). Jargon, on the other hand, is professional or occupational slang, such as "phoneme" in linguistics or "byte" in computing [Fromkin and Rodman, 299-302]. Again, jargon will often pass into the "standard" language for informal or even formal use.

Taboos ("forbidden practices") develop with societal restriction (and different dialects, even situational dialects, may have different taboos). In particular, some words are not used in polite company because they refer to private acts, or perhaps to religious ceremonies, and outside of those specific contexts they are considered forbidden. There is no linguistic basis for taboos, but "pointing this fact out does not imply advocating the use or nonuse of any such words" [Fromkin and Rodman, 304]. Because of these taboos, euphemisms are created to replace taboo words or to avoid unpleasant subjects: beyond George Carlin's seven "dirty" words, one does not die but passes on, and the one who cares for the deceased is not a mortician but a funeral director. However, those who originally placed the taboo on words may have a point about the use of epithets: such words tell us something about their users, especially those who use the epithets of race, nationality, religion, or sex [Fromkin and Rodman, 302-306].

Language can be used as a weapon; however, language itself is neither intrinsically good nor bad. It merely reflects the general society in question. Individual users of a language may add yet another level of reflection, because one person may use a word in a sexist way while another uses the same word in an inclusive manner; more importantly, each will hear the other's statement as they themselves would use the word. Changing the language is not the answer; the answer is changing society's acceptance of exclusive language (most commonly seen now in sexist usage, such as "Dr. Fromkin and Mrs. Fromkin," even though both hold Ph.D.s).




Language Change and the History of the English Language

Languages change over time. Slowly, to be sure, but they do change. The history of English is usually divided by three "cataclysmic" changes that generally coincide with historical events that had a profound effect on the language. English first appeared as such when the Saxons invaded Britain; this form, called Old English, dates from approximately 449 to 1066, when the Normans conquered England, beginning the period of Middle English. It was during this period (1066-1500) that many of the Latinate words used in English today were introduced into the language, along with Latinate spellings. Around 1500, a great shift in the vowels brought the language into Modern English, where it remains today. Based on this measure (approximately 500 years per shift), we might expect another major change in the language soon. The Great Vowel Shift changed the seven long (tense) vowels of Middle English, moving them "up" on the tongue, and Fromkin and Rodman posit that it is responsible for many of the spelling "inconsistencies" of today [320-327]. Language change, however, is a highly regular process.
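The shift of the seven Middle English long vowels can be summarized as a small mapping. The sketch below records the standard textbook reconstruction; ASCII stand-ins are used for the IPA symbols ("E:" for the open e, "O:" for the open o, "aj"/"aw" for the resulting diphthongs), and the example words are illustrative.

```python
# Standard reconstruction of the Great Vowel Shift: each Middle English long
# vowel raised, and the two highest vowels, with nowhere left to raise,
# became diphthongs.
great_vowel_shift = {
    "i:": ("aj", "mice"),   # high front vowel diphthongized
    "e:": ("i:", "geese"),
    "E:": ("i:", "beak"),   # open e; eventually merged with the e: reflex
    "a:": ("e:", "name"),   # modern pronunciation [ej]
    "u:": ("aw", "mouse"),  # high back vowel diphthongized
    "o:": ("u:", "goose"),
    "O:": ("o:", "broke"),  # open o; modern pronunciation [ow]
}

for middle, (modern, word) in great_vowel_shift.items():
    print(f"ME /{middle}/ -> ModE /{modern}/  (as in '{word}')")
```

Because the spelling of many words was fixed before the shift, the written vowels still reflect the Middle English values, which is one source of the spelling "inconsistencies" the paper mentions.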

Any of the linguistic rules identified in Linguistic Assumptions and Principles may change: phonemes may be changed, added, or removed; morphological rules may be added, changed, or lost; and even syntactic rules might be modified. Semantic rules and the lexicon change much more rapidly than the other rule systems. Lexical changes (the addition, modification, or removal of words from the general lexicon) are perhaps the quickest changes in language, and words may broaden, narrow, or even shift in meaning.

It has been demonstrated that all languages are derived from original tongues now long dead. There is enough similarity between English and German that they can be considered distant cousins at this point. It is supposed that proto-Latin and proto-Germanic were once "sister" languages, making all of their descendants "cousins." Often, but not always, this relationship of languages follows geographic areas. As Latin speakers moved north and west, they successfully integrated themselves into what were to become the Spanish and French cultures, warping the existing languages into forms of Latin. There was more resistance to Latin across the Channel, so English did not develop from Latin but rather from the languages of the Britons, the Angles, and the Saxons [Fromkin and Rodman, 338-347].

Sufficient research has been done to indicate that the "parent" tongue for all of these languages was one now called "Indo-European," which in turn gave rise to a host of other languages, as demonstrated in Figure 3-1 [from ORCHIS software package, 1994].

It is important to note that these parent languages were reconstructed by comparing the differences among their "daughter" languages. There are linguistic universals as well as differences.




Written Language and Change



Written language has been a preserver - and a changer in its own right - of language. It has preserved human history and science from the ravages of time, but it has also introduced changes to the language. As a language changes, its writing may or may not change with it. In Hong Kong, one will often see two people speaking and drawing Chinese characters in the air at the same time, because the characters do not change as often as the pronunciations of the language do. Even the spellings of some English words reflect pronunciations from before the Great Vowel Shift, causing great problems in ensuring accurate spelling of words [Fromkin and Rodman, 327, 372-373].

There are generally three systems of writing in use in the world today: word writing (pictographs), syllabic, and alphabetic. Cuneiform and hieroglyphic writing systems are no longer in use because of the difficulty of adding new concepts to them and the amount of time it takes to write even the simplest of concepts (although later writing systems, including the alphabetic, were developed from hieroglyphic and cuneiform writing). Word writing is used primarily in China, where the written language remains the same while the dialects and spoken languages change. Japan uses a combination of word writing (kanji) and syllabic writing (kana); the kana are simplified characters assigned to specific syllables of the Japanese language. Alphabetic writing tends to be phonemic rather than phonetic in nature, and is only an approximation of the "sound" of a word. All systems of writing, however, are arbitrary in the way they assign signs to words [Fromkin and Rodman, 363-385].




Language Acquisition

Language acquisition is perhaps one of the most frustrating and confusing aspects of linguistics. For the first several months of a baby's life, all of the sounds and signs it makes are stimulus responses. Before babies produce words, however, they go through a sound-formation period called "babbling." It has been shown that deaf children introduced to sign languages will babble in sign.

There is a definite gradation to children's learning of language, but how language is acquired is still unclear. The stages are single-word ("holophrastic," usually from about one year to about two years), two-word (usually from two years until about three years), and telegraphic (three words or more without grammatical morphemes, from about three years to four or five years). Beyond that, the language has been acquired and the child uses it as an adult would, albeit with a much smaller vocabulary [Fromkin and Rodman, 394-402].

Although a number of theories have been introduced to explain language acquisition, none of them is fully supported by observational or experimental data. The two most prominent are the imitation theory (the child imitates his or her parents) and the reinforcement theory (the child is corrected or praised for using proper forms). A further hypothesis proposes that there is a critical age during which a child may acquire a language without overt teaching. Such a critical period is also exhibited in songbirds and in deaf children exposed to sign languages.



Computers, Formal Language, Natural Language, and Language Acquisition


Computers, by definition, translate one language into another. Since we cannot speak the 0s and 1s that computers manipulate, and computers cannot directly manipulate the languages we understand, many formal languages have been developed to aid communication with computers (sometimes with a formal definition in Backus-Naur Form). These languages are often called "computer" languages and are part mathematical and part linguistic in nature; they are designed to give an English-like interface to the computer. However, even a statement such as:

PRINT "Use this statement to print a message."

is not always as clear as we would like. Further, as languages grow more complex, both the computer and the programmer must know more to translate from the desired result (the English project proposal, for example) into the computer's binary code.
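As an illustration of the Backus-Naur Form definitions mentioned above, here is a hypothetical one-rule grammar for the PRINT statement, together with a minimal recognizer; both the grammar and the recognizer are invented for this sketch and are not drawn from any real BASIC standard.

```python
import re

# Hypothetical BNF for the PRINT statement shown above:
#   <print-stmt> ::= "PRINT" <string>
#   <string>     ::= '"' <characters> '"'
# Because the language has only one rule, a regular expression suffices
# to recognize it.
PRINT_STMT = re.compile(r'^PRINT\s+"[^"]*"$')

def is_print_stmt(line):
    """True if the line conforms to the one-rule PRINT grammar."""
    return bool(PRINT_STMT.match(line))

print(is_print_stmt('PRINT "Use this statement to print a message."'))  # True
print(is_print_stmt('PRINT unquoted text'))                             # False
```

Real computer languages have hundreds of such rules, which is exactly the growth in complexity that requires both computer and programmer to know more.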

Ideally, a computer would understand the spoken or written word, and if we were to tell it "give me the sales summary for 1989, 1990, and 1991," it would automatically gather the information that we need. Further, we would not even need to use English, or any specific syntactic structure, to be understood. Granted, this goal is a long way from realization, but computers are getting better at understanding one-word commands and responding to us in clear speech.

What is needed is not to formalize a natural language (which would, by definition, freeze it), but to make it possible for a formal-mathematical machine to understand a natural language. The mystery behind this may be tied closely to the mystery of language acquisition; if we could understand that process better, we might be able to emulate it in Intelligent Agents on the computer, permitting us to interact better with the information that is there.




Copyright © 1995-1996 Austin Ziegler
[ Mail ] fantome@usa.net