Algebra provides a generalization of arithmetic by using symbols,
usually letters, to represent numbers. For example, it is obviously true
that
2 + 3 = 3 + 2
This arithmetic statement can be generalized using algebra to
x + y = y + x
where x and y can be any number. Algebra has been studied for many
centuries. Babylonian, ancient Chinese, and Egyptian mathematicians
proposed and solved problems in words, that is, using "rhetorical
algebra". However, it was not until the 3rd century that algebraic
problems began to be considered in a form similar to those studied today.
In the 3rd century, the Greek mathematician Diophantus of Alexandria wrote
his book Arithmetica. Of the 13 parts originally written, only six still
survive, but they provide the earliest record of an attempt to use symbols
to represent unknown quantities. Diophantus did not consider general
methods in Arithmetica, but instead solved a large number of practical
problems.
Several Indian mathematicians carried out important work in the field of
algebra in the 6th and 7th centuries. These include Aryabhata, whose book
the Aryabhatiya included work on linear and quadratic equations, and
Brahmagupta, who presented a general solution for a quadratic equation.
The next major development in the history of algebra was the book al-Kitab
al-mukhtasar fi hisab al-jabr wa'l-muqabala ("Compendium on
calculation by completion and balancing"), written by the Arabic
mathematician Al-Khwarizmi in the 9th century. The word algebra is derived
from al-jabr, or "completion". This book developed methods for
solving six different types of quadratic equations, and contained the
first systematic consideration of the subject separately from number
theory.
In about 1100, the Persian mathematician Omar Khayyam wrote a treatise on
algebra based on Euclid's methods. In it he identified 25 types of
equations and made the first formal distinction between arithmetic and
algebra. Some time later, during the 12th century, Al-Khwarizmi's works
were translated into Latin and became available to Western scholars. In the 13th
century Leonardo Fibonacci wrote some important and influential books on
algebra. Other highly influential works were those of the Italian
mathematician Luca Pacioli (1445-1517), and of the English mathematician
Robert Recorde (1510-1558).
Rules for solving cubic equations were discovered about 1515 by Scipione
del Ferro (c. 1465-1526), and for the quartic equation by Ludovico Ferrari
(1522-1565) about 1545. In 1824 Niels Henrik Abel (1802-1829) finally
proved that no general rule of this kind can be given for solving
equations of the fifth degree or higher.
Further contributions to the symbolic notation of algebra were made in the late
16th century and the 17th century by François Viète (1540-1603) and
René Descartes, among others.
Complex and negative roots were a later discovery, and took some time to
become accepted. In 1799, Carl Friedrich Gauss proved the fundamental
theorem of algebra, which had been proposed as early as 1629.
In the 19th and 20th centuries algebra became much more abstract and
grew to include much more than the theory of equations. Modern
developments in algebra include group theory and the study of matrices.
Boolean algebra is the algebra of sets and of logic. It uses symbols to
represent logical statements instead of words. Boolean algebra was
formulated by the English mathematician George Boole in 1847. Logic had
previously been largely the province of philosophers, but in his book, The
Mathematical Analysis of Logic, Boole reduced the whole of classical,
Aristotelian logic to a set of algebraic equations. Boole's original
notation is no longer used, and modern Boolean algebra uses the symbols
of either set theory or propositional calculus.
Boolean algebra is an uninterpreted system - it consists of rules for
manipulating symbols, but does not specify how the symbols should be
interpreted. The symbols can be taken to represent sets and their
relationships, in which case we obtain a Boolean algebra of sets.
Alternatively, the symbols can be interpreted in terms of logical
propositions, or statements, their connectives, and their truth
values.
This means that Boolean algebra has exactly the same structure as
propositional calculus.
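By way of illustration, one Boolean law, De Morgan's law, can be checked under both interpretations. The sketch below uses Python, which is my choice of convenience; the article itself ties Boolean algebra to no particular programming language.

    from itertools import product

    # De Morgan's law: not (p and q) == (not p) or (not q)

    # Interpretation 1: the symbols stand for truth values.
    for p, q in product([False, True], repeat=2):
        assert (not (p and q)) == ((not p) or (not q))

    # Interpretation 2: the symbols stand for sets, with complement
    # (relative to a universe U), intersection, and union playing the
    # roles of not, and, or.
    U = {1, 2, 3, 4}
    A, B = {1, 2}, {2, 3}
    assert U - (A & B) == (U - A) | (U - B)

The same law holds in both readings, which is exactly the sense in which the two interpretations share one structure.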
The most important application of Boolean algebra is in digital computing.
Computer chips are made up of transistors arranged in logic gates. Each
gate performs a simple logical operation. For example, an AND gate
produces a high-voltage pulse at its output r if and only if high-voltage
pulses are received at both of its inputs p and q. The computer processes
the logical propositions in its program by processing electrical pulses -
in the case of the AND gate, the proposition represented is p ∧ q ≡ r. A
high pulse is equivalent to a truth value of "true" or binary
digit 1, while a low pulse is equivalent to a truth value of
"false", or binary digit 0.
The design of a particular circuit or microchip is based on a set of
logical statements. These statements can be translated into the symbols of
Boolean algebra. The algebraic statements can then be simplified according
to the rules of the algebra, and translated into a simpler circuit design.
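As a hypothetical instance of this process: the statement (p ∧ q) ∨ (p ∧ ¬q) simplifies to plain p by the distributive and complement laws, so a sub-circuit of two AND gates, a NOT gate, and an OR gate can be replaced by a single wire. A quick exhaustive check, sketched in Python:

    from itertools import product

    def original(p, q):
        # (p AND q) OR (p AND NOT q): four gates.
        return (p and q) or (p and not q)

    def simplified(p, q):
        # The algebra reduces the expression to p: a single wire.
        return p

    assert all(bool(original(p, q)) == bool(simplified(p, q))
               for p, q in product([False, True], repeat=2))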
An algebraic equation shows the relationship between two or more variables. The equation below states that the area (a) of a circle equals π (pi, a constant) multiplied by the radius squared (r²):

a = πr²

Given a particular value for a or r, the equation can be solved (a value can be found) for the other variable. Given another equation that is simultaneously true, for example c = 2πr, we can substitute c/2π for r in the first equation. This gives a new equation, a = c²/4π.
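A quick numerical check of this substitution (a Python sketch; the radius value is an arbitrary choice of mine):

    import math

    r = 3.0                       # example radius, chosen arbitrarily
    a = math.pi * r ** 2          # a = pi * r^2
    c = 2 * math.pi * r           # c = 2 * pi * r

    # Substituting r = c / (2 * pi) into the first equation
    # gives a = c^2 / (4 * pi), as claimed.
    assert math.isclose(a, c ** 2 / (4 * math.pi))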
An operation is any procedure carried out on one or more original
values (the operands) to generate a new value. The idea of an operation is
fundamental to mathematics. For example, addition is one of the four basic
operations of arithmetic, the other three being subtraction,
multiplication, and division. The operation of addition, when carried out
on the operands 3 and 4, generates a sum of 7. Even quite simple algebraic
techniques, such as factorization, depend on a thorough understanding of
basic operations. There is always a well-defined rule for calculating the
result of a particular operation.
For many operations, the result is a single value, regardless of the
number of input values. (One exception is the operation of taking square
roots: a positive number has two square roots, one positive and one
negative.) Such operations may be described as one-to-one or many-to-one
mappings, or functions.
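To make this concrete, here is a minimal Python sketch of my own: an operation is a well-defined rule mapping operands to a result, and taking square roots is the noted exception because a positive number has two of them.

    def add(x, y):
        # A binary operation: the two operands map to exactly one result.
        return x + y

    def square_roots(x):
        # The exception: a positive x has two square roots, +r and -r.
        r = x ** 0.5
        return (r, -r)

    print(add(3, 4))          # 7, as in the example above
    print(square_roots(9.0))  # (3.0, -3.0)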
The symbol used to indicate an operation is called an operator. For
example, the operator for addition is the plus sign (+), and the operator
for integration is the integral sign. In some cases, different symbols are
used to represent the same operation. In computing, the operator * is used
to mean exactly what the operator ×, "times", means in
arithmetic. Different operators are used in different areas of
mathematics. For example, in logic, there are several sets of operators
that are used to express logical relationships.
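For instance (a short Python sketch, since the article names no particular language), the operator * plays exactly the role of the arithmetic sign ×, and logic has its own operator sets:

    print(3 * 4)            # * means "times", as the sign x does in arithmetic
    print(True and False)   # one set of logical operators: and, or, not
    print(True & False)     # another notation for the same relationships: &, |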
The manipulation of operators together with other mathematical symbols
constitutes an algebra of operations. Discovering rules in such an algebra
helps to simplify calculations. To give an elementary example, the
expression -(-(-3)) can be simplified to -3. Mathematicians working in the
more abstract reaches of algebra investigate general properties of
operations. For example, group theory is concerned with sets that are
closed under associative operations - that is, sets that contain the
results of the operation when carried out on the original elements.
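As a sketch of what such an investigation looks like in practice (Python again; the example set, the integers modulo 4 under addition, is my choice rather than the article's), the code below checks closure and associativity directly:

    elements = {0, 1, 2, 3}

    def op(x, y):
        # Addition modulo 4, an associative operation on this set.
        return (x + y) % 4

    # Closure: op applied to any two elements yields another element.
    assert all(op(x, y) in elements for x in elements for y in elements)

    # Associativity: (x op y) op z equals x op (y op z) for all triples.
    assert all(op(op(x, y), z) == op(x, op(y, z))
               for x in elements for y in elements for z in elements)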